Hello everyone,
We are encountering an issue where the same functional images, analyzed with identical preprocessing and setup parameters, yield drastically different results across two CONN projects. The project that appears corrupted was processed with CONN 22.a, while the other was processed with CONN 22.v. However, when we re-ran a control analysis using the same subjects (selecting five participants from each group) in CONN 22.a, the issues described below did not occur.
Here is what we observe (you can see the screenshots in the attachment):
- In the Functional Data tab, the images appear striped or corrupted.
- The filenames and paths are correct — they point to the same swauf*.nii images.
- In the Other imaging data section, several datasets (e.g., mni-space functional data and smoothed functional data) also appear corrupted in the same striped way.
- However, there is an additional “smoothed data” entry under Other imaging data that displays the correct smoothed image.
- In other words, the corrupted project contains this extra “smoothed data” entry, which looks correct, while the actual Functional data and smoothed functional data entries do not.
So far:
- File paths and filenames are correct and consistent across subjects and sessions.
- The preprocessing pipeline and parameters (realignment, normalization, smoothing, etc.) were identical between projects.
- The image dimensions differ between the corrupted and non-corrupted versions (e.g., a [61 73 61] vs. a [91 109 91] voxel grid).
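
For reference, this is roughly how I checked the header of one of the referenced swauf*.nii files on disk, so I could compare its voxel grid against what the CONN GUI displays (a minimal MATLAB/SPM sketch; the path is a placeholder for one of our actual files):

% Minimal sketch: read the NIfTI header of one of the swauf*.nii files
% that both projects point to. The path below is a placeholder.
fname = '/path/to/data/sub01/func/swauf_sub01_run1.nii';
V = spm_vol(fname);                 % one header struct per volume in the 4D file
fprintf('number of volumes: %d\n', numel(V));
fprintf('voxel grid: [%d %d %d]\n', V(1).dim);
disp(V(1).mat);                     % voxel-to-world affine (resolution/orientation)
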
Could this be related to a mismatch between “functional data” and “other imaging data” entries in the project structure? Or might CONN be reading the wrong intermediate file type (e.g., pre-MNI vs. post-MNI normalization) under the “functional” field?
Questions:
- Has anyone seen this “striped” display issue when filenames and parameters are correct?
- Could the duplicate “smoothed functional” / “smoothed data” entries mean that CONN linked to an unintended intermediate file?
- What is the best way to confirm which dataset CONN is actually using at the first-level analysis? (I have pasted the rough check I tried after these questions.)
- And, importantly, what might cause this issue in the first place, and how can I avoid it in future analyses?
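
In case it helps with the last two points, this is the rough check I tried, loading the project .mat directly and printing what each dataset entry references. It is only a sketch: the CONN_x field names below reflect my understanding of the internal project structure and may not be exact across CONN releases, and the path is a placeholder.

% Rough sketch only: CONN_x field names are my best guess at the internal
% project structure and may differ between CONN releases.
load('/path/to/conn_project.mat','CONN_x');   % placeholder path to the project .mat file
nsub = 1; nses = 1;                           % example subject / session
% Primary "functional data" entry for this subject/session:
disp(CONN_x.Setup.functional{nsub}{nses}{1});
% Secondary "other imaging data" entries, if the field is present:
if isfield(CONN_x.Setup,'secondarydataset')
    disp(CONN_x.Setup.secondarydataset);
end
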
Thank you very much in advance!
Best regards,
Simay
