Oct 17, 2011 06:10 PM | Johnson H
Question reg second level results
Hello Experts
I just finished my CONN toolbox second-level analysis, but I am not quite sure about the whole-brain results. My goal was to do a paired-sample t-test on 16 subjects, comparing pre- versus post-drug datasets. Since I could not figure out a way to enter data for a paired t-test, I entered the pre- and post-drug data as different subjects (32 subjects) but separated them as covariates.
Now, will my second-level results be a two-sample or a paired-sample test? If two-sample, is there a way to do a paired-sample test using the CONN toolbox? Is this acceptable?
Appreciate any help, Thanks a lot for your time.
Johnson
Oct 17, 2011 07:10 PM | Alfonso Nieto-Castanon - Boston University
RE: Question reg second level results
Hello Johnson,
The way your analyses were defined, this will be a two-sample t-test. The simpler way to define the experiment info would have been to define two sessions per subject and define 'pre' and 'post' as two conditions (instead of treating the two sessions as new subjects and defining 'pre' and 'post' as second-level covariates). That way, at the second-level analysis tab you could have simply selected the 'pre' and 'post' conditions, entered a [1,-1] between-conditions contrast, and that would have implemented the desired paired t-test.
In any case, to avoid having to re-define the experiment info and re-run the analyses, you can still perform a paired t-test on your results. For that you only need to add the appropriate 'subject effects' as additional covariates (Henson and Penny 2005 describe how to perform this sort of ANOVA in more detail, if you are interested). Basically, you need to add another 16 second-level covariates (one for each subject) and then control for these covariates in your second-level results. Since defining this many covariates is a bit tedious to do manually, you can simply run the commands below from the MATLAB command line instead (edit the first two lines to indicate which ones among your 32 'pseudo' subjects correspond to the 'pre' condition and which ones correspond to the 'post' condition, for the same order of 'real' subjects):
group_pre=1:16;    % indices (among the 32 entered subjects) of the 'pre' scans
group_post=17:32;  % indices of the 'post' scans, in the same real-subject order
clear batch;
% indicator covariates for the 'pre' and 'post' samples
batch.Setup.subjects.effect_names{1}='Pre';
batch.Setup.subjects.effects{1}=full(sparse(group_pre,1,1,numel(group_pre)+numel(group_post),1));
batch.Setup.subjects.effect_names{2}='Post';
batch.Setup.subjects.effects{2}=full(sparse(group_post,1,1,numel(group_pre)+numel(group_post),1));
% one additional indicator covariate per real subject
% (ones at that subject's 'pre' and 'post' entries)
for n1=1:numel(group_pre)
  batch.Setup.subjects.effect_names{2+n1}=['Subject',num2str(n1,'%02d')];
  batch.Setup.subjects.effects{2+n1}=full(sparse([group_pre(n1),group_post(n1)],1,1,numel(group_pre)+numel(group_post),1));
end
conn_batch(batch);
After running this, go to the CONN GUI and look at the Setup -> Second-level covariates tab to check the new covariates (you should have 18 covariates there: one for the 'pre' sample, one for the 'post' sample, and then one for each subject). If everything looks fine, simply save the CONN project to store these new definitions.
To define the paired t-test, now simply go to the second-level results GUI, select all 18 effects in the 'Subject effects' list, and enter [1,-1,zeros(1,16)] in the 'between-subjects contrast' field. Just to double-check: when exploring the results you should now see that the degrees of freedom are 15 (the appropriate number for a paired t-test on a sample of 16 subjects) instead of 30 (the appropriate number for a two-sample t-test on two samples of 16 subjects each).
Hope this helps
Alfonso
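As a quick sanity check on the degrees-of-freedom claim above, here is a sketch in Python (rather than MATLAB) using a toy design matrix: the 18 covariates (pre, post, and 16 subject indicators) contain one linear dependency, so the matrix has rank 17 and the residual degrees of freedom are 32 - 17 = 15.

```python
import numpy as np

n = 16  # number of real subjects

# Design matrix mirroring the 18 second-level covariates defined above:
# column 0 = 'Pre' indicator, column 1 = 'Post' indicator,
# columns 2..17 = one indicator per real subject.
pre = np.r_[np.ones(n), np.zeros(n)]           # rows 1..16 are the 'pre' scans
post = np.r_[np.zeros(n), np.ones(n)]          # rows 17..32 are the 'post' scans
subjects = np.vstack([np.eye(n), np.eye(n)])   # each subject appears twice
X = np.column_stack([pre, post, subjects])     # 32 x 18

# Residual degrees of freedom = observations - rank(X).
# pre + post equals the sum of the subject columns, so rank(X) = 17, not 18.
dof = X.shape[0] - np.linalg.matrix_rank(X)
print(dof)  # 15: the paired-test df, rather than 30 for two independent samples
```

With the subject indicators removed, the same calculation gives rank 2 and 30 residual degrees of freedom, i.e. the two-sample test Johnson's original setup implemented.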
Oct 17, 2011 08:10 PM | Johnson H
RE: Question reg second level results
Hi Alfonso
Thanks a lot for the detailed and quick response. And the script will be a great help too; you saved me a lot of time.
In the meantime I did something and just want to make sure it's right...
From the FC second-level results I got the individual subjects' conn images and loaded them into an SPM paired-sample t-test, comparing pre>post (1 -1) and post>pre (-1 1). Is this right?
Will it give the same results as what you suggested?
Thanks again for your help!!!!
Johnson
Oct 17, 2011 08:10 PM | Alfonso Nieto-Castanon - Boston University
RE: Question reg second level results
Hi Johnson,
Yes, that is a good point. Your analyses are perfectly correct and they should give you exactly the same results as any of the two alternative approaches within the conn toolbox.
Best
Alfonso
Oct 17, 2011 09:10 PM | Johnson H
RE: Question reg second level results
Hi Alfonso
OK, I am going to bother you with one more question; please bear with me...
What is the normal acceptable viewing threshold in FC? By default it is set to 0.05; is this right, or is it supposed to be 0.001?
My clusters are huge at 0.05, around 10,700 voxels (spanning multiple regions), but significant after p correction. What do you think about this? Is there an acceptable cluster size?
Can you point me to an FC paper that addresses these points?
Johnson
Dec 9, 2015 09:12 AM | Macià Buades-Rotger
RE: Question reg second level results
Hi Johnson,
I'm only a CONN user, but I think I can answer your question. The usual acceptable whole-brain thresholds are p<0.005 or p<0.001 uncorrected at the peak level, with a cluster-wise FDR or FWE correction of p<0.05. If you use a peak-level p<0.05 threshold, as you have seen, you get huge clusters which, of course, survive cluster-level correction, because it is statistically unlikely that such big clusters appear by chance. However, these massive clusters actually arise because you used an excessively liberal peak-level threshold, and they are furthermore not informative, because they are very widespread and don't actually take advantage of the spatial resolution of fMRI.
Bottom line: you want to obtain small but reliable clusters, so use more stringent thresholds.
I hope that helped!
All the best,
Macià
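To make the cluster-wise FDR correction mentioned above concrete, here is a minimal Python sketch of the Benjamini-Hochberg step-up procedure applied to a hypothetical set of cluster-level p-values (CONN and SPM implement this internally; the numbers here are invented for illustration).

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: boolean mask of p-values surviving FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m       # step-up critical values q*k/m
    below = p[order] <= thresh
    passed = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()         # largest rank meeting criterion
        passed[order[:k + 1]] = True           # all smaller p-values also pass
    return passed

# Hypothetical cluster-level p-values
pv = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
print(fdr_bh(pv))   # only the first two clusters survive FDR at q = 0.05
```

Note how 0.039 and 0.041 would each pass an uncorrected 0.05 threshold but fail the FDR criterion once six clusters are tested at once.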
Dec 11, 2015 01:12 AM | Alfonso Nieto-Castanon - Boston University
RE: Question reg second level results
Hi Macia and Johnson,
Just to add a related comment: the issue of the choice of voxel-level height threshold has recently regained some attention in Eklund et al.'s "Can parametric statistical methods be trusted for fMRI based group studies" manuscript (this is still a preprint, so take it with a grain of salt). The point of this manuscript is that cluster-level statistics may be biased/invalid when used in the context of "relatively high" voxel-level thresholds (uncorrected height thresholds above p=.001). In general, some of the random field theory assumptions and approximations are asymptotic and only exact when the chosen thresholds are very low. Some original investigations of this issue (e.g. Hayasaka S, Nichols TE (2003). Validating cluster size inference: random field and permutation methods. Neuroimage 20(4):2343-56) seemed to suggest that using relatively liberal height thresholds (e.g. uncorrected p=.01) would result in conservative cluster-level statistics (not liberal/invalid), while the new manuscript (and Nichols is also an author on this manuscript) suggests the opposite (too liberal/invalid cluster-level stats when used in combination with p=.01 height thresholds). The bottom line of all this may be that we should probably use caution when using height thresholds above p=.001 uncorrected, and if you do wish to use higher thresholds (e.g. lower thresholds are most sensitive to strong activations, while higher thresholds should be more sensitive to weaker activation clusters that may extend over larger areas) it is probably a good recommendation to use instead (or at least validate your results using) non-parametric statistics.
In the next release of CONN, which should be out in just a few days, we are addressing this concern by adding the ability to use non-parametric statistics for all of your second-level analyses (this was already available in CONN for surface-based and ROI-to-ROI analyses; we have now added non-parametric statistics for voxel-based analyses as well). You will be able to choose between parametric (i.e. random field theory) and non-parametric (residual permutation/randomization tests) statistics, to further investigate this issue on your own and see how it applies to connectivity analyses, and/or to use your preferred method in your specific second-level analyses. We will keep an eye on this issue as well, just to see whether a community consensus is reached and a change in the default settings in CONN is also warranted (the current default settings for voxel-based analyses use a p=.001 uncorrected height threshold combined with an FDR-corrected cluster-level p<.05 threshold, with parametric statistics, and currently all indications seem to suggest that this is a safe/valid approach).
Hope this helps
Alfonso
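As an illustration of the permutation/randomization idea mentioned above (a Python sketch on simulated data, not CONN's actual implementation): in a paired design, the null hypothesis of no pre/post difference permits flipping the sign of each subject's difference, which yields a null distribution for the t statistic without relying on parametric assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: simulated within-subject (post minus pre) differences, 16 subjects
n = 16
diff = rng.normal(loc=0.5, scale=1.0, size=n)

t_obs = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))  # observed paired t

# Sign-flipping permutation test: under the null, each subject's difference
# is equally likely to have either sign, so random sign flips sample the null.
n_perm = 2000
t_null = np.empty(n_perm)
for k in range(n_perm):
    flipped = diff * rng.choice([-1, 1], size=n)
    t_null[k] = flipped.mean() / (flipped.std(ddof=1) / np.sqrt(n))

# Two-sided p-value, with the +1 correction to keep it strictly positive
p = (np.sum(np.abs(t_null) >= abs(t_obs)) + 1) / (n_perm + 1)
print(f"observed t = {t_obs:.2f}, permutation p = {p:.3f}")
```

The same logic extends to the voxel level by recording the maximum statistic (or largest supra-threshold cluster size) over the brain on each permutation, which is what makes permutation-based cluster inference exact rather than approximate.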
Aug 15, 2018 10:08 PM | Jake Flinthoff
RE: Question reg second level analysis
Hello Alfonso -
I based a MATLAB script on what you wrote; however, it is giving me an error. Could you please help me?
clear batch;
batch.Setup.subjects.effect_names{1}='Active';
batch.Setup.subjects.effects{1}=full(sparse(group_Active,1,1,numel(group_Active)+numel(group_SHAM),1));
batch.Setup.subjects.effect_names{2}='SHAM';
batch.Setup.subjects.effects{2}=full(sparse(group_SHAM,1,1,numel(group_Active)+numel(group_SHAM),1));
for n1=1:numel(group_Active)
batch.Setup.subjects.effect_names{2+n1}=['Subject',num2str(n1,'%02d')];
batch.Setup.subjects.effects{2+n1}=full(sparse([group_Active(n1),group_SHAM(n1)],1,1,numel(group_Active)+numel(group_SHAM),1));
end
conn_batch(batch);
I have 2 groups, Active vs Sham.
The error is : Error using sparse
Index exceeds matrix dimensions.
Can you please help me.
thank you,
Jake
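A likely cause of this error (an assumption, since the group_Active/group_SHAM definitions are not shown in the post): MATLAB's sparse(i,j,v,m,n) fails with "Index exceeds matrix dimensions" when any index in i is larger than m, i.e. here when a subject index in one of the group vectors exceeds numel(group_Active)+numel(group_SHAM). The covariate each sparse(...) call builds is just a length-N indicator vector; a pure-numpy Python sketch of the same construction, with hypothetical group indices where one entry (7) exceeds N = 6, reproduces the same failure mode:

```python
import numpy as np

# Hypothetical group definitions: subject index 7 exceeds
# N = len(group_active) + len(group_sham) = 6
group_active = np.array([1, 2, 3])
group_sham = np.array([4, 5, 7])
N = len(group_active) + len(group_sham)

def indicator(group, n):
    """Length-n indicator vector with ones at the (1-based) positions in group,
    analogous to full(sparse(group,1,1,n,1)) in the MATLAB script."""
    v = np.zeros(n)
    v[group - 1] = 1   # converts to 0-based indexing; errors if any index > n
    return v

indicator(group_active, N)       # fine: all indices within range
try:
    indicator(group_sham, N)
except IndexError as err:
    print("index exceeds matrix dimensions:", err)
```

So the thing to check would be that the values in group_Active and group_SHAM run over 1..N with no gaps or out-of-range entries, and that the two vectors have the same length (one Active and one Sham entry per real subject), since the loop also indexes group_SHAM(n1) for every n1 up to numel(group_Active).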