Jan 14, 2021 09:01 AM | sophieb
RE: HPC - SLURM - Parallelisation with conn
Dear Alfonso,
As I was trying to perform my second-level analyses, my sources were "unknown". I then realised that the node results had not been merged automatically. While running node_merge.m I found out that one job had ended with an error, which is why it was not merged.
"parallelization settings loaded from /home/betka/Script_SCITAS/conn/conn_jobmanager.mat
Warning: pending jobs in /scratch/betka/12012021Analysis_5_1015.*.dmat not finished yet. Until then, any modifications to this project may be overwritten once the pending jobs finish and they are merged back into this project"
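The warning above means CONN still sees per-job status files and will not treat the project as final until they are gone. A quick way to check whether anything is still pending (a sketch, using the path pattern copied from the warning; adjust if your project files live elsewhere):

```shell
# Count pending .dmat job-status files for this project; zero means all
# node jobs have finished.  The path pattern is copied verbatim from the
# warning message above.
PENDING=$(ls /scratch/betka/12012021Analysis_5_1015.*.dmat 2>/dev/null | wc -l)
echo "pending node jobs: $PENDING"
```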
in the node output:
"Step 5/7: Importing ROI data
/scratch/betka/12012021Analysis_5_1015.qlog/210113153328320/node.0033210113153328320.sh error
_NODE END_"
The error status is empty.
Something is still not clear to me, even after looking at the different scripts:
Will node_merge.m re-run the pending job and then merge everything, or should I first run node.0033210113153328320.sh and then run node_merge.m? => Edit: 3 volumes were corrupted due to a bad segmentation. I am now re-running node.0033210113153328320.m and will merge everything afterward using node_merge.m.
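For reference, the recovery sequence described in the edit above can be sketched as a shell script. This is a dry-run sketch, not CONN's own procedure: the node-script path is taken from the error log, while SUBMIT and RUN_MATLAB default to `echo` so nothing is actually submitted; on the cluster you would set SUBMIT=sbatch and point RUN_MATLAB at your MATLAB launcher.

```shell
# Hedged sketch: re-submit the single failed node job, then merge once
# nothing is pending.  SUBMIT/RUN_MATLAB default to 'echo' (dry run);
# on the cluster use SUBMIT=sbatch and RUN_MATLAB='matlab -nodisplay -r'.
NODE_SCRIPT=/scratch/betka/12012021Analysis_5_1015.qlog/210113153328320/node.0033210113153328320.sh
SUBMIT=${SUBMIT:-echo}

# 1. Re-submit only the failed node job.
$SUBMIT "$NODE_SCRIPT"

# 2. Merge only when no pending .dmat status files remain (the warning
#    above says merging while jobs are pending can overwrite the project).
if ls /scratch/betka/12012021Analysis_5_1015.*.dmat >/dev/null 2>&1; then
    echo "pending jobs remain; not merging yet"
else
    ${RUN_MATLAB:-echo} "node_merge; exit"
fi
```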
Thanks a lot,
Sophie
Threaded View

| Author | Date |
|---|---|
| sophieb | Dec 18, 2020 |
| sophieb | Jan 15, 2021 |
| sophieb | Jan 14, 2021 |
| sophieb | Jan 13, 2021 |
| sophieb | Jan 11, 2021 |
| Alfonso Nieto-Castanon | Jan 11, 2021 |
| Alfonso Nieto-Castanon | Jan 11, 2021 |
| sophieb | Jan 12, 2021 |
| Alfonso Nieto-Castanon | Jan 12, 2021 |
| sophieb | Jan 12, 2021 |
| Alfonso Nieto-Castanon | Jan 26, 2021 |
| sophieb | Jan 12, 2021 |
| sophieb | Jan 11, 2021 |
| sat2020 | Dec 18, 2020 |
| Alfonso Nieto-Castanon | Dec 18, 2020 |
| sophieb | Dec 18, 2020 |
| Alfonso Nieto-Castanon | Dec 18, 2020 |
| sophieb | Dec 18, 2020 |
| Alfonso Nieto-Castanon | Dec 18, 2020 |
| sophieb | Dec 19, 2020 |
| Alfonso Nieto-Castanon | Dec 19, 2020 |
| sophieb | Dec 21, 2020 |
| Alfonso Nieto-Castanon | Dec 21, 2020 |
| sophieb | Jan 2, 2021 |
| sophieb | Jan 8, 2021 |
| sophieb | Dec 22, 2020 |