Apr 2, 2015 01:04 AM | Alfonso Nieto-Castanon - Boston University
RE: Movement
Hi Xiaozhen and Fred,
Just to further clarify, scrubbing (throwing out bad timepoints) is slightly different from "repairing" bad timepoints (e.g., interpolating bad scans from the previous/next scans).

In addition, scrubbing can be implemented either by actually removing the identified outlier scans from the functional volumes, or by effectively removing them using dummy-coded regressors during Denoising (the latter is the implementation in CONN). The advantage of the latter approach is that actually removing the outlier scans disrupts the continuity of your timeseries, which affects processes like band-pass filtering that assume continuous data, while effectively removing the outlier scans through additional dummy-coded regressors maintains that continuity.

Both scrubbing methods reduce the effective degrees of freedom of your timeseries in the same manner (either by actually removing timepoints or by adding new covariates of no interest). However, the inter-subject variability in the first-level connectivity estimator variance introduced by the varying degrees of freedom across subjects after scrubbing is expected to be considerably less than the inter-subject variability in that same estimator variance due to the varying incidence of outliers across subjects in the absence of scrubbing. So, if anything, scrubbing can be expected to improve, not worsen, the validity of the homogeneity-of-variance assumption of standard second-level analyses.
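As an illustrative sketch of the dummy-coded approach (not CONN's actual code; all function and variable names here are hypothetical), each identified outlier scan gets its own one-hot regressor in the denoising model, which zeroes out that scan's contribution while the timeseries keeps its full length:

```python
import numpy as np

def scrub_regressors(n_scans, outlier_idx):
    """One dummy-coded (one-hot) regressor per identified outlier scan."""
    R = np.zeros((n_scans, len(outlier_idx)))
    for col, t in enumerate(outlier_idx):
        R[t, col] = 1.0
    return R

def denoise(ts, confounds):
    """Regress confounds out of an (n_scans, n_voxels) timeseries via OLS."""
    X = np.column_stack([np.ones(len(ts)), confounds])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 1))
ts[[10, 40], 0] += 20.0                 # simulated motion spikes
R = scrub_regressors(100, [10, 40])
clean = denoise(ts, R)
# The dummy regressors absorb the outlier scans entirely (their
# residuals become exactly zero), yet the series still has 100
# uniformly spaced scans, so band-pass filtering remains valid.
```

Note that each one-hot column costs one degree of freedom, exactly as deleting the corresponding scan would, which is the point made above about the two methods being equivalent in that respect.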
Hope this helps clarify and let me know your thoughts (and looking forward to hearing more about your comparisons of all these methods!)
Best
Alfonso
Originally posted by Fred Uquillas:
Hi Xiaozhen,
The Conn toolbox uses the ART Artifact Detection Toolbox, which computes regressor files for outliers and movement. To my knowledge, the scrubbing you're referring to (i.e., throwing out bad time points altogether), is implemented in a different toolbox with a similar name, that is, the ArtRepair toolbox from Stanford (http://cibsr.stanford.edu/tools/human-br...).
The reason regression is advised rather than removing outlier time points is that you want to maintain the temporal resolution of the data. By removing time points, you're altering the length of scans, making some longer than others, for better or worse (though I'm under the impression it is for worse).
It would be nice if you could run both analyses and tell us what you find. We did something similar: we looked at analysis results using the ART outlier regressors and no despiking, using ART regressors and despiking, using neither outlier regressors nor despiking, and using despiking without outlier regressors. We're still in the process of finishing the comparisons.
All the best,
Fred
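To illustrate the point above about temporal resolution (a toy sketch, not any toolbox's code; the TR value and indices are made up): a band-pass filter assumes uniformly sampled data, so deleting scans silently changes the time axis the filter sees:

```python
import numpy as np

TR = 2.0                                 # repetition time in seconds (assumed)
t = np.arange(200) * TR                  # uniform acquisition grid
sig = np.sin(2 * np.pi * 0.05 * t)       # a 0.05 Hz oscillation

keep = np.ones(200, dtype=bool)
keep[[50, 51, 120]] = False              # scrub three "bad" scans by deletion
sig_scrubbed = sig[keep]

# A filter applied to sig_scrubbed still treats samples as TR apart,
# but the true gaps at the deletions are 2*TR or more, so every
# sample after a gap is misplaced on the implied uniform grid.
implied_t = np.arange(sig_scrubbed.size) * TR
true_t = t[keep]
max_shift = np.max(np.abs(implied_t - true_t))
# max_shift is 6.0 s here: three deleted scans shift everything
# after the last gap by three TRs.
```

This timing mismatch is exactly why the dummy-regressor implementation, which keeps all scans in place, is preferred when band-pass filtering is part of the pipeline.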
Originally posted by Xiaozhen You:
Thank you Fred for pointing me to that thread!
So it's definitely only regressing out these outliers. My question, then: is the effect statistically the same as scrubbing (removing time points when calculating functional connectivity), since a big spike may still have a residual motion effect? I'm wondering whether the default CONN option is definitely superior to scrubbing, and whether CONN can provide that real scrubbing option in case a subject moves during quite a few volumes (especially in pediatric data). For some reason I thought a scrubbed, shorter time series would be cleaner for functional connectivity analysis.
Thanks!
Xiaozhen