RE: Denoising prior to anatomical ROI-to-ROI rsFC analyses in native space in separate project?
May 25, 2024  02:05 PM | Alfonso Nieto-Castanon - Boston University

Dear Georgios,


That's an interesting question. First, you are totally right that denoising is still necessary regardless of whether you plan to perform the analyses in subject space or in MNI space. Second, it should be perfectly fine to have both analyses in the same project.

The WM and CSF regions are derived from the segmentation step, which defines these masks separately for each subject. Whether you use WM and CSF masks defined in subject space (e.g. from the c2Anatomical.nii and c3Anatomical.nii files) and extract the BOLD signal from the corresponding subject-space functional data (e.g. from an auFunctional.nii file), or you use WM and CSF masks that have been transformed to MNI space (e.g. from the wc2Anatomical.nii and wc3Anatomical.nii files) and extract the BOLD signal from the equally-transformed-to-MNI-space functional data (e.g. a wauFunctional.nii file), the resulting WM and CSF BOLD timeseries that will later be used during denoising will be exactly the same (up to very minor differences due to different resampling/interpolation at the borders of the masks in the two scenarios). So having the two types of analyses implemented in the same CONN project should not be a problem at all: you can simply run denoising normally and it will properly denoise the ROI-level subject-space data as well as the voxel-level MNI-space functional data.
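Just for illustration (this is not CONN code; the use of nibabel/nilearn, the 0.5 threshold, and the exact filenames are simplifications on my part), here is a minimal sketch of the equivalence described above, extracting the mean WM timeseries in each space and comparing them:

```python
# Minimal sketch (illustration only, not CONN code): extract the mean WM
# timeseries in subject space and in MNI space and check that they agree.
# Filenames follow the SPM/CONN conventions mentioned above but are
# placeholders here; the 0.5 threshold is an arbitrary choice.
import numpy as np
import nibabel as nib
from nilearn.image import resample_to_img

def mean_tissue_timeseries(func_file, tpm_file, threshold=0.5):
    """Average the 4D BOLD signal over voxels where the tissue probability
    map (resampled to the functional grid) exceeds the threshold."""
    func_img = nib.load(func_file)
    tpm_on_func = resample_to_img(nib.load(tpm_file), func_img,
                                  interpolation='nearest')
    mask = tpm_on_func.get_fdata() > threshold
    return func_img.get_fdata()[mask].mean(axis=0)   # one value per volume

# Subject space: WM segmentation + realigned functional data
wm_native = mean_tissue_timeseries('auFunctional.nii', 'c2Anatomical.nii')

# MNI space: normalized WM segmentation + normalized functional data
wm_mni = mean_tissue_timeseries('wauFunctional.nii', 'wc2Anatomical.nii')

# Up to resampling/interpolation differences at the mask borders,
# the two estimates should be essentially identical
r = np.corrcoef(wm_native, wm_mni)[0, 1]
print(f'native- vs MNI-space WM timeseries correlation: r = {r:.3f}')
```

The same holds for the CSF masks (the c3/wc3 files); within CONN itself this extraction happens automatically during the Denoising step (by default using an aCompCor-style decomposition of the WM/CSF signals rather than just their means), so no manual extraction is needed.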


Hope this helps


Alfonso


Originally posted by georgios argyropoulos:



Dear Alfonso/colleagues,


Hope you're well.


I suspect I'm right in thinking that ROI-to-ROI (resting-state functional connectivity) analyses conducted using anatomical seed and target ROIs in native space would still require denoising, just like the typical seed-to-voxel connectivity analyses in MNI space - i.e., using the au4D.nii files as input, one would still need to regress out the potential confounding effects characterized by the CSF and WM timeseries (using the CSF and WM segmentations in NATIVE SPACE), plus the motion regressors and their first-order derivatives, etc.
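For concreteness, this is roughly the kind of regression I mean (a minimal sketch with simulated placeholder data, not CONN's actual implementation; the TR and the filter band are arbitrary choices here):

```python
# Minimal sketch of the confound regression described above (not CONN's
# implementation; simulated placeholder data, arbitrary TR and filter band).
import numpy as np
from nilearn.signal import clean

rng = np.random.default_rng(0)
n_t, n_rois = 200, 10                             # placeholder dimensions
roi_ts = rng.standard_normal((n_t, n_rois))       # native-space ROI timeseries
wm_ts  = rng.standard_normal((n_t, 1))            # WM noise timeseries
csf_ts = rng.standard_normal((n_t, 1))            # CSF noise timeseries
motion = rng.standard_normal((n_t, 6))            # 6 realignment parameters

# First-order derivatives of the motion parameters (backward differences,
# padded with zeros for the first volume)
motion_deriv = np.vstack([np.zeros((1, motion.shape[1])),
                          np.diff(motion, axis=0)])

# Nuisance regressors: WM + CSF + motion + motion derivatives
confounds = np.column_stack([wm_ts, csf_ts, motion, motion_deriv])

# Regress the confounds out of the ROI timeseries, with linear detrending
# and band-pass filtering as is typical for resting-state denoising
roi_ts_clean = clean(roi_ts, confounds=confounds, detrend=True,
                     low_pass=0.1, high_pass=0.008, t_r=2.0)
```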


If this is so, then I suspect it's difficult to do this within the same project as the seed (native-space ROIs)-to-voxel (whole-brain, MNI-space) analyses, and one would need to set up a separate CONN project altogether? Otherwise, if both analyses are in the same project, I can't seem to find a way to regress out the CSF and WM timeseries in MNI space for the seed (native-space ROI)-to-voxel (whole-brain, MNI-space) analyses separately from the CSF and WM timeseries in native space for the (native-space) ROI-to-ROI analyses.


Many thanks for your help


Georgios



 
