Hi everyone,
I’m preprocessing three separate rs-fMRI datasets in CONN with one unified pipeline, but the native voxel sizes are quite different: [1×1×2 mm], [3.5×3.5×3.5 mm], and [3×3×3.75 mm].
I’m unsure what to do for spatial smoothing. If I choose a single FWHM for all three, what kernel size would you recommend that’s reasonable for all data, without excessively smoothing the high-resolution dataset?
My worry is that a standard kernel (7–8 mm) might oversmooth the 1×1×2 mm dataset, but using different smoothing kernels per dataset could introduce artificial between-dataset differences, since these will be analyzed together in the same project.
In your experience, which of these approaches is best?
use one fixed kernel for all datasets (and if so, what range)?
resample/normalize everything to a common voxel size first and then smooth?
or use different kernels per dataset but account for it somehow at the group level? (if so, how?)
Thanks a lot!
Shiva
Hi Shiva,
If the main objective is to combine these datasets in second-level (group) analyses, I would still recommend using the same spatial smoothing kernel for all of them (e.g. 8 mm). The rationale is that the size of that filter is determined not so much by SNR considerations as by an attempt to compensate for heterogeneity in functional localization across subjects: the functional location of many cognitive or mental processes will differ across individuals even after intersubject coregistration to a common reference space such as MNI, mainly because those procedures are mostly concerned with coregistering anatomical features across subjects, while functional-anatomical relationships simply vary a lot across individuals.
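If it helps, this can be set once in CONN's batch interface so that every dataset goes through an identical smoothing step. A minimal sketch is below (the project filename is just a placeholder, and the exact field names are best double-checked against the conn_batch documentation for your CONN release):

clear batch;
batch.filename = fullfile(pwd,'conn_project.mat');        % existing CONN project (placeholder name)
batch.Setup.preprocessing.steps = {'functional_smooth'};  % run only the spatial smoothing step
batch.Setup.preprocessing.fwhm = 8;                       % same 8 mm FWHM kernel for every dataset
conn_batch(batch);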
Hope this helps
Alfonso