Feb 28, 2020 12:02 PM | Alfonso Nieto-Castanon - Boston University
RE: Denoised result in different space dimensions
Hi Daniel,
Since you are commenting out the "batch.Setup.voxelresolution = 2;" line, the default setting applies (batch.Setup.voxelresolution = 1), which means that CONN will use the resolution of your analysis mask (which you are specifying in the line "batch.Setup.voxelmaskfile = subject.brainmask;") for further analyses. I imagine the issue here is simply that this subject.brainmask file has 1mm voxel resolution. In any case, the simplest solution would be to use "batch.Setup.voxelresolution = 3;", which tells CONN to use the resolution of your functional .nii data for further analyses.
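As a concrete sketch, this fix is a one-line change in the Setup section of the batch script (the voxelmask/voxelmaskfile lines are taken from the original script; only the voxelresolution line changes):

```matlab
batch.Setup.voxelmask     = 2;                  % use an explicit analysis mask file
batch.Setup.voxelmaskfile = subject.brainmask;  % 1mm mask; its grid is no longer inherited
batch.Setup.voxelresolution = 3;                % voxel-level analyses at the functional data's resolution
```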
Another alternative is to resample your "subject.brainmask" file to the desired resolution. For example, if this "subject.brainmask" file is derived from CONN's functional segmentation step, then changing the line "batch.Setup.preprocessing.voxelsize_func = 1" to "batch.Setup.preprocessing.voxelsize_func = 2" would automatically do that.
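As a hedged sketch of this second option (assuming, as in the original script, that subject.brainmask is produced by the 'functional_segment' preprocessing step):

```matlab
% Writing the functional segmentation outputs at 2mm resamples the
% derived brainmask to 2mm as well, so the analysis grid inherited
% from that mask is no longer 1mm
batch.Setup.preprocessing.voxelsize_func = 2;   % was 1 in the original script
```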
Hope this helps
Alfonso
Originally posted by Daniel van de Velden:
Dear conn-community,
I have been using the CONN toolbox for some time now and have implemented a very robust analysis pipeline.
However, the last few of my subjects' functional datasets end up, after the denoising step, with 256x256x256 space dimensions at a 1x1x1 mm voxel size. This increases the size of my .nii files from ~30 GB to over 145 GB and makes them hardly readable/loadable.
The input dimensions of my structural and functional data are the same as before.
I use an implicit brain mask correctly.
% preprocessing steps and configurations
preproc_STEPS = {'functional_art' ...
'functional_coregister_affine' ...
'structural_segment' ...
'functional_segment' ...
'functional_smooth' ...
};
batch.Setup.preprocessing.steps = preproc_STEPS;
batch.Setup.preprocessing.fwhm = 3;
batch.Setup.preprocessing.coregtomean = 1;
batch.Setup.preprocessing.art_thresholds = [3 0.5];
batch.Setup.preprocessing.removescans = params.FMRI.early_drop;
batch.Setup.preprocessing.voxelsize_anat = 1;
batch.Setup.preprocessing.voxelsize_func = 1;
% denoising steps and configurations
batch.Setup.outputfiles(2) = 1;
batch.Setup.analysisunits = 1;
batch.Setup.voxelmask = 2;
batch.Setup.voxelmaskfile = subject.brainmask;
% batch.Setup.voxelresolution = 2;
batch.Setup.analysisunits = 1;
batch.Denoising.done = 1;
batch.Denoising.filter = [0.08 0.9];
batch.Denoising.confounds.names = {'Grey Matter', 'CSF', 'White Matter', 'realignment'};
batch.Denoising.detrending = 1;
batch.Denoising.overwrite = 'Yes';
I get a correctly dimensioned ("preprocessed") smoothed functional dataset out of my preprocessing steps; this resulting .nii file is then of course used for denoising. After this step the dimensions are wrong, as explained above.
Does anyone know why, and/or how to fix it?
Greetings and all the best
Daniel van de Velden
Threaded View
| Title | Author | Date |
|---|---|---|
| | Daniel van de Velden | Feb 28, 2020 |
| | Alfonso Nieto-Castanon | Feb 28, 2020 |
| | Daniel van de Velden | Mar 3, 2020 |
| | Alfonso Nieto-Castanon | Mar 3, 2020 |
