Aug 7, 2019  09:08 AM | Jeff Browndyke
Single Subject Pre/Post Analysis Options?
Hello fellow CONNers,

I was hoping to get some feedback on possible analysis and/or visualization avenues for a particularly interesting pre/post case. Ideally, I would love to show post-intervention changes in static and dynamic connectivity using CONN, but I'm not certain what would be possible given the inability to use an SPM model .mat file. Any thoughts?

Regards,
Jeff
Aug 7, 2019  02:08 PM | Stephen L. - Coma Science Group, GIGA-Consciousness, Hospital & University of Liege
RE: Single Subject Pre/Post Analysis Options?
Dear Jeff,

For such single-case studies, I have some scripts that can be helpful for static analyses. If you run CONN up to the 1st-level analyses, you will get seed-to-voxel results for each seed in your setup. You can then use the following script to perform a t-test contrast with FDR correction:

https://github.com/lrq3000/csg_mri_pipel...

With this you can, for example, test post - pre ([1, -1]) to highlight the post-operative changes in the subject's connectivity for the seeds you select.
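For illustration only (this is not the linked script, just a minimal sketch of what the [1, -1] contrast amounts to, using SPM's I/O routines; the filenames are placeholders following CONN's usual BETA_Subject*_Condition*_Source* naming in results/firstlevel, so adapt them to your project):

% Minimal sketch: post-minus-pre difference map from two CONN 1st-level
% beta (Fisher-z) maps for one seed. Filenames are placeholders.
Vpre  = spm_vol('BETA_Subject001_Condition001_Source001.nii');   % pre session
Vpost = spm_vol('BETA_Subject001_Condition002_Source001.nii');   % post session
Ypre  = spm_read_vols(Vpre);
Ypost = spm_read_vols(Vpost);
Ydiff = Ypost - Ypre;                         % the [1, -1] (post - pre) contrast, voxel-wise
Vout  = Vpre;                                 % reuse the header/geometry of the pre map
Vout.fname = 'post_minus_pre_Source001.nii';
spm_write_vol(Vout, Ydiff);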

After generating the script's results, you can visualize them with CONN's surface or volumetric renderers, whichever you prefer, using this script:

https://github.com/lrq3000/csg_mri_pipel...

This will let you select any NIfTI map of your choice and visualize it in the CONN visualization GUI we all love.
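If you prefer to skip the GUI script, the rendering can in principle also be called directly. A sketch, assuming conn_mesh_display accepts a (surface file, volume file) pair so the volume gets projected onto CONN's surface template; check help conn_mesh_display in your CONN version, as the exact signature may differ:

% Sketch only: render a result map on CONN's surface template.
% 'post_minus_pre_Source001.nii' is the hypothetical map from the sketch above.
conn_mesh_display('', 'post_minus_pre_Source001.nii');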

As for dynamic connectivity analyses, I use them too rarely at the moment to be able to provide any guidance, unfortunately.

Hope this helps,
Best regards,
Stephen Karl Larroque
GIGA-Consciousness, University of Liège & FRS-FNRS
Aug 7, 2019  03:08 PM | Jeff Browndyke
RE: Single Subject Pre/Post Analysis Options?
Thank you so much, Stephen.  These scripts should certainly come in handy!  

I may circle back with questions if I run into trouble, but, that said, the scripts are beautifully commented, so I don't expect any issues.

Warm regards,
Jeff
Aug 8, 2019  09:08 AM | Jeff Browndyke
RE: Single Subject Pre/Post Analysis Options?
Stephen,

When you were setting up your single-subject pre/post in CONN, did you set it up as one subject with two sessions or as two subjects with one session each?

Thanks,
Jeff
Aug 8, 2019  11:08 AM | Stephen L. - Coma Science Group, GIGA-Consciousness, Hospital & University of Liege
RE: Single Subject Pre/Post Analysis Options?
Dear Jeff,

I set it up as one subject with two sessions, as this feels the most natural to me and also ensures that both sessions are fully comparable statistically (same denoising, etc.).
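For reference, a minimal conn_batch sketch of that setup (one subject, two sessions, with one condition spanning each session) might look like the following; the filenames and TR are placeholders and the field names follow CONN's batch documentation, so double-check them against your CONN version:

% Sketch: one subject, two sessions (pre/post) defined via CONN's batch interface.
clear batch;
batch.filename            = 'conn_prepost.mat';   % new CONN project file (placeholder)
batch.Setup.isnew         = 1;
batch.Setup.nsubjects     = 1;
batch.Setup.RT            = 2;                    % TR in seconds (placeholder)
batch.Setup.functionals{1}{1} = 'func_pre.nii';   % session 1 = pre
batch.Setup.functionals{1}{2} = 'func_post.nii';  % session 2 = post
batch.Setup.structurals{1}    = 'anat.nii';
% One condition per session, spanning the entire session:
batch.Setup.conditions.names = {'pre','post'};
batch.Setup.conditions.onsets{1}{1}{1}    = 0;    % 'pre' covers all of session 1
batch.Setup.conditions.durations{1}{1}{1} = inf;
batch.Setup.conditions.onsets{1}{1}{2}    = [];   % 'pre' absent in session 2
batch.Setup.conditions.durations{1}{1}{2} = [];
batch.Setup.conditions.onsets{2}{1}{1}    = [];   % 'post' absent in session 1
batch.Setup.conditions.durations{2}{1}{1} = [];
batch.Setup.conditions.onsets{2}{1}{2}    = 0;    % 'post' covers all of session 2
batch.Setup.conditions.durations{2}{1}{2} = inf;
batch.Setup.done = 1;                             % run the Setup step
conn_batch(batch);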

Hope this helps,
Best regards,
Stephen
Aug 8, 2019  01:08 PM | Jeff Browndyke
RE: Single Subject Pre/Post Analysis Options?
Thanks again, Stephen.  The script worked flawlessly, but I forgot to address the most important question.  What exactly is being analyzed to generate the p-corrected source images?  For instance, are the [-1 1] condition difference x source results generated through parametric or nonparametric analyses?

Warm regards,
Jeff
Aug 9, 2019  07:08 AM | Stephen L. - Coma Science Group, GIGA-Consciousness, Hospital & University of Liege
RE: Single Subject Pre/Post Analysis Options?
Dear Jeff,

The script uses the BETA maps and computes the contrasts and thresholds from them. The significance thresholding uses a parametric voxel-wise FDR approach as supplied by CONN (which differs slightly from how SPM computes FDR values); unfortunately, there is no other way to compute either cluster-wise or non-parametric thresholding at the 1st level, as both are technically ill-defined at the moment.
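For what it's worth, the voxel-wise FDR step itself is conceptually simple. Here is a generic Benjamini-Hochberg sketch in plain MATLAB (illustration only: CONN ships its own conn_fdr routine, whose exact computation may differ slightly from this):

% Generic Benjamini-Hochberg FDR adjustment over a vector of p-values.
p = rand(1000,1).^3;          % placeholder vector of voxel-wise uncorrected p-values
q = 0.05;                     % desired FDR level
[ps, order] = sort(p(:));     % ascending p-values
m    = numel(ps);
adj  = ps .* m ./ (1:m)';                     % raw Benjamini-Hochberg adjustment
adj  = min(1, flipud(cummin(flipud(adj))));   % enforce monotonicity, cap at 1
padj = zeros(m,1);  padj(order) = adj;        % map back to the original voxel order
significant = padj < q;                       % voxels surviving FDR-corrected p < .05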

Here is an excerpt on why thresholding at the single-subject (1st) level is particularly tricky:
«Thresholding techniques for single subject fMRI are more complicated than for multi subject fMRI, as the fMRI time series contain auto correlation (Woolrich et al., 2001). To be able to perform a permutation test on single subject fMRI data, the auto correlations have to be removed prior to the resampling (Locascio et al., 1997; Bullmore et al., 2001; Friman and Westin, 2005), in order to not violate the exchangeability criterion. Single subject fMRI is further complicated by the fact that the spatial smoothing changes the auto correlation structure of the data. This problem is more obvious for CCA based fMRI analysis, where several filters are applied to the fMRI volumes (Friman et al., 2003). The only solution to always have null data with the same properties, is to perform the spatial smoothing in each permutation, which significantly increases the processing time. This problem was recently solved, by doing random permutation tests on the GPU (Eklund et al., 2011a, 2012).»
From: Anders Eklund, Mats Andersson, Camilla Josephson, Magnus Johannesson and Hans Knutsson, Does Parametric fMRI Analysis with SPM Yield Valid Results? - An Empirical Study of 1484 Rest Datasets, 2012, NeuroImage.

Adolf et al., however, devised a simple algorithmic scheme to run non-parametric permutation tests on 1st-level fMRI while accounting for temporal autocorrelation. The strategy is a blockwise permutation of temporal volumes: split the resting-state scan into blocks of consecutive volumes and permute the blocks, rather than individual volumes, to derive the null distribution. This strategy is implemented in the SPM toolbox StabMultip (and maybe also in FSL PALM?). For more info, see: Adolf, D., Weston, S., Baecke, S., Luchtmann, M., Bernarding, J., & Kropf, S. (2014). Increasing the reliability of data analysis of functional magnetic resonance imaging by applying a new blockwise permutation method. Frontiers in Neuroinformatics, 8.
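As a toy illustration of the blockwise idea (this is not StabMultip's code, just the bare principle, run here on random placeholder time series):

% Blockwise permutation: permute blocks of consecutive volumes, preserving
% within-block autocorrelation, to build a null distribution for a statistic
% (here, a seed-voxel correlation).
nvol  = 200;  blocksize = 20;  nperm = 1000;
seed  = randn(nvol, 1);                        % placeholder denoised seed time series
voxel = randn(nvol, 1);                        % placeholder denoised voxel time series
nblock = floor(nvol / blocksize);
nkeep  = nblock * blocksize;                   % drop trailing volumes if any
blocks = reshape(1:nkeep, blocksize, nblock);  % volume indices, one column per block
c   = corrcoef(seed(1:nkeep), voxel(1:nkeep));
obs = c(1,2);                                  % observed seed-voxel correlation
null = zeros(nperm, 1);
for k = 1:nperm
    idx  = blocks(:, randperm(nblock));        % permute whole blocks of the seed
    c    = corrcoef(seed(idx(:)), voxel(1:nkeep));
    null(k) = c(1,2);
end
pval = mean(abs(null) >= abs(obs));            % two-sided permutation p-value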

That said, I have never tried this approach myself, and I am not sure how it would work in practice. To try it, you would need to enable saving the denoised BOLD time series in CONN and then use them as the input for StabMultip. If you do try it, I would be very interested in hearing how it went :-)

Hope this helps,
Best regards,
Stephen Karl Larroque
GIGA-Consciousness, University of Liège, F.R.S.-F.N.R.S.
Aug 10, 2019  08:08 AM | Jeff Browndyke
RE: Single Subject Pre/Post Analysis Options?
Thanks, Stephen.  

Your response anticipated my next question, which concerns the possibility of some sort of cluster threshold.  As you have already figured out, the results from the script let even the tiniest clusters through.  I wonder if there's a way to have the results thresholded at an arbitrary cluster extent (k)?  While not empirically principled, one could reasonably set the cluster extent to a level that would preclude most of the likely noise effects.

Also, did you or your group figure out how to run single-subject ROI-to-ROI analyses, rather than the seed-to-voxel approach?  It seems like this might also help diminish the influence of those very tiny clusters.

Warm regards,
Jeff
Aug 14, 2019  11:08 AM | Stephen L. - Coma Science Group, GIGA-Consciousness, Hospital & University of Liege
RE: Single Subject Pre/Post Analysis Options?
Dear Jeff,

For a manual cluster-extent threshold, you can try one of these tools (though I have never used them myself): https://www.nitrc.org/projects/cluster_c... (Rubes_cluster_correct.m), https://www.nitrc.org/projects/peak_nii, FIVE by Aaron Schultz (http://mrtools.mgh.harvard.edu/index.php...), or https://www.nitrc.org/projects/cluster_report/ (an extension of cluster_correct).
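Independently of those tools, applying a purely manual extent cutoff to an already voxel-thresholded map only takes a few lines with SPM's spm_bwlabel; a sketch, with placeholder filename and k:

% Keep only suprathreshold clusters with at least k voxels.
k    = 50;                                        % arbitrary minimum cluster size (placeholder)
V    = spm_vol('post_minus_pre_thresholded.nii'); % already voxel-thresholded map (placeholder)
Y    = spm_read_vols(V);
mask = double(Y ~= 0 & isfinite(Y));              % suprathreshold voxels
[L, nclust] = spm_bwlabel(mask, 18);              % label clusters (18-connectivity)
for c = 1:nclust
    if nnz(L == c) < k
        Y(L == c) = 0;                            % erase clusters smaller than k voxels
    end
end
Vout = V;  Vout.fname = 'extent_thresholded.nii';
spm_write_vol(Vout, Y);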

Be warned, however, that Eklund et al. advise against such an approach, as the extent thus set is arbitrary: as long as a cluster passes the significance threshold, it should be reported.

However, let me backtrack on what I said earlier: Scott D. Slotnick, who is also the author of the paper "Cluster Success" (a reply to Eklund's "Cluster Failure" paper), developed an alternative cluster-correction approach which, he claims, can estimate smoothness directly without needing an SPM.mat file, so it might work at the 1st level too if you set the right parameters (the degrees of freedom being the number of BOLD volumes). You can find the scripts and more information here: https://www2.bc.edu/sd-slotnick/scripts....

Hope this helps,
Best regards,
Stephen