Jan 31, 2023  12:01 AM | Stacy Hudgins
Edit First and Last Functional volumes (without totally messing up the pipeline)
Dear Alfonso et al.:

In a project of 285 subjects, we have varying numbers of resting-state volumes per subject (i.e., 120, 192, 240, 360, and 480). I read a couple of posts explaining that, statistically, the heteroscedasticity should not be an issue in my case (quoted below). Although invalid scans are linearly regressed out as confounding effects in further analyses, the lab prefers to truncate processed subjects to 120 volumes. The a priori challenge is that reviewers (and I personally haven't walked through this fire yet) may take issue with functional volumes varying across subjects, despite this control in our analysis pipeline.

My ask: can you (and if so, how do you) effectively edit post hoc the "First" and "Last" indexing from, say, [192 files] x [size 91 109 91] to, say, [120 files] x [91 109 91]? I noticed in multiple locations that CONN_x.Setup.Functional{1,1} contains a 1x3 cell with the indexing of the swausub files for each subject. However, I hesitate to edit the 1x2 struct found adjacent.
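
In case it helps to be concrete, below is a minimal sketch of what I have in mind as an alternative to hand-editing CONN_x: list the preprocessed 3D files, keep only the first 120, and re-assign them through conn_batch so that CONN rebuilds its own internal descriptors. The folder, filename pattern, project path, and subject/session indices are placeholders, and the batch field names follow my reading of the conn_batch documentation, so please correct me if this would still mess up the pipeline:

funcdir = '/data/sub-001/func';                  % placeholder functional folder
d       = dir(fullfile(funcdir,'swausub*.nii')); % preprocessed 3D volumes (assumed pattern)
files   = sort(fullfile(funcdir,{d.name}));      % full paths, assumed to sort in volume order
files   = files(1:120);                          % keep only the first 120 volumes

clear batch
batch.filename = '/path/to/conn_project.mat';    % placeholder project file
batch.Setup.functionals{1}{1} = char(files);     % e.g. subject 1, session 1
batch.Setup.done      = 1;                       % re-run the Setup step so CONN refreshes
batch.Setup.overwrite = 'Yes';                   %   its internal file info
conn_batch(batch);

My understanding is that re-running the Setup step this way would also refresh the adjacent struct mentioned above, but I would appreciate confirmation before touching the real project.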

Your guidance on how to proceed before I completely mess up my project would be greatly appreciated.

Best Regards,
- Stacy Hudgins 

~~~~~ 

"It is perfectly fine to analyze data with different number of volumes per subject. In that case CONN will simply compute the corresponding connectivity measures using all available data for each subject. It is true that technically this means that the connectivity measures for each subject will have different standard errors across subjects (this is know as heteroscedasticity, in this case it means that for subjects with longer scanning sessions you expect to end up with "cleaner" estimates of their first-level connectivity values compared to subjects with shorter scanning sessions), but as long as the minimum duration is not exceedingly short (and I would say 5mins is perfectly fine) I do not believe that should pose a significant problem to your second-level analyses. There are several reasons for this: 1) the main reason is that the between-subjects variability in "true" connectivity values across subjects is typically considerably larger than the standard errors of the first-level connectivity estimates for each subject, so differences across subjects in the latter introduce only relatively minor heteroscedasticity variations; and 2) typically second-level GLM statistics are considered fairly robust to violations of the heteroscedasticity assumption to begin with. In any way, if you are concerned that differences in scanning length might be affecting your results a couple of potential additional venues worth considering would be: a) using mixed-linear models (e.g. glmFlex); and/or b) introducing scanning length as an additional covariate in your second-level analyses to rule-out potential confounding effects." - Alfonso
