Oct 31, 2025  06:10 PM | Jose Maximo
job manager (slurm computer cluster) error

Hi,


I had already run some analyses in CONN and then decided to add a new ROI. I know that once I add it I need to rerun the pipeline, but that I can skip the already-processed subjects/ROIs. After doing that, I keep getting errors on my jobs. This is the error:


ERROR DESCRIPTION:


Error using save


Can not write file /path/to/conn/project


Error in conn_process (line 617)
save(filename,'data','names');


Error in conn_process (line 55)


case 'setup', conn_disp(['CONN: RUNNING SETUP STEP']); conn_process...


Error in conn_jobmanager (line 892)


conn_process(job(n).fcn,job(n).args{2:end});


CONN v. 22.2407


SPM25 + DAiSS DEM FieldMap MEEGtools
Matlab v.2018a
storage: 186279.8Gb available

spm @ /toolboxes/spm
conn @ /toolboxes/conn


This started happening after our cluster went through a data migration. I'm wondering whether some of the files got corrupted, but it is hard to tell. I will try the other option, "overwrite existing results", and see if that works.
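In case it helps, this is roughly how I was planning to check whether any files got damaged in the migration (the folder below is just the redacted placeholder for our project's directory):

    % list every .mat file under the project folder and flag empty ones
    projdir = '/path/to/conn/project';                % placeholder for our actual project folder
    files = dir(fullfile(projdir, '**', '*.mat'));    % recursive listing (R2016b or newer)
    for i = 1:numel(files)
        if files(i).bytes == 0
            fprintf('EMPTY FILE: %s\n', fullfile(files(i).folder, files(i).name));
        end
    end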


Thanks.

Nov 4, 2025  12:11 PM | Alfonso Nieto-Castanon - Boston University
RE: job manager (slurm computer cluster) error

Hi Jose,


I am not sure, but it looks like either a permissions or a path problem (the folder where CONN expects to save your new ROI data either no longer exists or is not accessible). Since there was a data migration on your cluster, I would suggest re-loading your CONN project and trying again (when re-loading a project, CONN automatically checks that all of the project files and folders are still there and asks you to locate any that are not found, so I am thinking this may fix the issue...)
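If you want to rule out a permissions issue explicitly, a quick test (run from Matlab on one of the cluster nodes; the path below is just a placeholder for your project folder) is to try saving a small file into the same folder where CONN is failing:

    % write-permission test in the folder where the job's save command fails
    projdir = '/path/to/conn/project';                   % replace with your actual project/data folder
    x = 1;
    save(fullfile(projdir,'conn_writetest.mat'),'x');    % will error like the job does if the folder is missing or read-only
    delete(fullfile(projdir,'conn_writetest.mat'));      % clean up the test file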


Hope this helps


Alfonso



Nov 4, 2025  08:11 PM | Jose Maximo
RE: job manager (slurm computer cluster) error

Hi,


I made sure the permissions and paths were correct, but I still get the same error. The file that cannot be written is COV_Subject001_Session01.mat. The console output tells me that the process stops at Step 3/7: Importing conditions/covariates.


When I move to the Denoising tab, where I already have preprocessed data, I get the following error:


ERROR DESCRIPTION:
 
Error using load
Unable to read MAT-file /data/project/COV_Subject007_Session001.mat. Not a binary MAT-file. Try load -ASCII to read as text.


Error in conn_loadmatfile (line 64)
 data=load(conn_server('util_localfile',filename),varargin{cellfun(@(x)~isequal(x,'-cache'),varargin)});


Error in conn (line 6892)
 CONN_h.menus.m_preproc.X2=conn_loadmatfile(filename,DOREM{3}{:});
CONN22.v2407
SPM12 + DEM FieldMap MEEGtools TFCE cat12
Matlab v.2018a
project: CONN22.v2407
storage: 185567.4Gb available
 
spm @ /data/project/lahtilab/lutherl/toolboxes/spm12
conn @ /data/project/lahtilab/lutherl/toolboxes/conn22b
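In case it is useful, this is how I am checking which of the COV files are actually unreadable before trying the "overwrite existing results" option (the folder is taken from the error message above and may be shortened):

    % try to read every COV_*.mat file and report the ones Matlab cannot parse
    datadir = '/data/project';            % folder from the error message above
    files = dir(fullfile(datadir,'COV_Subject*_Session*.mat'));
    for i = 1:numel(files)
        fname = fullfile(files(i).folder, files(i).name);
        try
            info = whos('-file', fname);  %#ok<NASGU>  % errors with "Not a binary MAT-file" on corrupted files
        catch
            fprintf('CORRUPTED: %s\n', fname);
        end
    end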


Thank you.