Jul 28, 2016 01:07 PM | Athena Demertzi
RE: ROIs MNI-->native space
Hello Alfonso,
thank you for your reply.
The aim of the preprocessing was to avoid normalization and segmentation of structural as well as normalization of functional data. So, I did the following:
functional Realignment & unwarp
functional Slice-time correction
functional Coregistration to structural
functional Segmentation (to get the WM, GM, and CSF masks)
functional Outlier detection (ART-based scrubbing)
Interestingly, I had an iy_*.nii output that I did not understand at first. Where did it come from? I assumed it was an inverse transformation field, but since I did not normalize, I do not know where it came from.
Then, with this iy_*.nii I normalised the MNI-ROI.
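For reference, this MNI-to-native warping of an ROI can be expressed with SPM12's Deformations utility; a minimal sketch, with placeholder file names (the actual `iy_*` and ROI file names will differ):

```matlab
% Warp an MNI-space ROI into the subject's native space using the
% inverse deformation field (iy_*.nii) produced by segmentation.
matlabbatch{1}.spm.util.defs.comp{1}.def = {'iy_structural.nii'};      % inverse field: MNI -> native
matlabbatch{1}.spm.util.defs.out{1}.pull.fnames = {'ROI_MNI.nii'};     % ROI defined in MNI space
matlabbatch{1}.spm.util.defs.out{1}.pull.savedir.savepwd = 1;          % write result to current directory
matlabbatch{1}.spm.util.defs.out{1}.pull.interp = 0;                   % nearest neighbour keeps the mask binary
matlabbatch{1}.spm.util.defs.out{1}.pull.mask = 0;
matlabbatch{1}.spm.util.defs.out{1}.pull.fwhm = [0 0 0];               % no smoothing of the mask
spm_jobman('run', matlabbatch);
```

Nearest-neighbour interpolation (`interp = 0`) matters here: trilinear interpolation would turn a binary ROI into fractional values at the edges.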
However, when I checked how the new ROI was coregistered with the niftiDATA*.nii file, the images had different dimensions.
This is why I created another ROI based on the dimensions of the nifti_SUBJECT*.nii (with Marsbar).
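If the only problem is a grid mismatch, an alternative to redrawing the ROI is to reslice the warped ROI onto the functional image's grid with SPM's Coregister: Reslice (reslice only, no estimation). A hedged sketch, with hypothetical file names:

```matlab
% Reslice the native-space ROI onto the grid of the functional data so
% both share the same dimensions and voxel size (reslice only).
matlabbatch{1}.spm.spatial.coreg.write.ref = {'niftiDATA_Subject001.nii,1'};  % functional reference volume
matlabbatch{1}.spm.spatial.coreg.write.source = {'wROI_native.nii'};          % ROI already in native space
matlabbatch{1}.spm.spatial.coreg.write.roptions.interp = 0;   % nearest neighbour for a binary mask
matlabbatch{1}.spm.spatial.coreg.write.roptions.wrap = [0 0 0];
matlabbatch{1}.spm.spatial.coreg.write.roptions.mask = 0;
matlabbatch{1}.spm.spatial.coreg.write.roptions.prefix = 'r';
spm_jobman('run', matlabbatch);
```

This only changes the sampling grid, not the anatomy the ROI covers, so it avoids introducing a second, hand-drawn definition of the region.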
Does that make more sense?
The procedure you propose suggests that the timecourses are extracted from the normalized functional images, right? But since I need to avoid that step (some subjects have badly deformed brains), do you think it's still valid?
Thanks so much again,
Athena
Threaded View

| Title | Author | Date |
|---|---|---|
| | Athena Demertzi | Jul 25, 2016 |
| | Alfonso Nieto-Castanon | Jul 27, 2016 |
| | Athena Demertzi | Jul 28, 2016 |