NITRC CONN : functional connectivity toolbox Forum: help
http://www.nitrc.org/forum/forum.php?forum_id=1144
Copyright 2000-2019 NITRC OSI. Mon, 25 Mar 2019 14:18:51 GMT. NITRC RSS generator.

problem voxel to voxel analysis
http://www.nitrc.org/forum/forum.php?thread_id=10098&forum_id=1144
Dear Conn users,<br />
<br />
I am using conn version 18b. When I am at the first level analysis (voxel-to-voxel) and choose to perform intrinsic connectivity analysis, conn starts to run and processes all the different options in the voxel-to-voxel analysis (ICA, MVPA, ALFF...).<br />
Besides unnecessarily increasing the computation time, this has some other consequences:<br />
1) In the Setup domain, I get all ICA components into the 2nd level covariates. I am not sure what to do with these and if it would be good or bad to have them for intrinsic connectivity analysis.<br />
2) In the Results domain, Intrinsic Connectivity is displayed twice in the list of voxel-to-voxel measures, and each entry gives different results. Here too, I don't know why this happens or how the two measures differ from each other.<br />
<br />
I hope someone could help out with this.<br />
Thanks.<br />
<br />
Best,<br />
Steven

Steven Jillings, Mon, 25 Mar 2019 14:08:51 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10098&forum_id=1144

RE: subject's specific ROIs from freesurfer
http://www.nitrc.org/forum/forum.php?thread_id=5878&forum_id=1144
Hello!<br />
<br />
I have a similar question. I am trying to perform surface-based (rather than volume-based) analysis for the first time. I have preprocessed my data using the default surface-based parameters. Now, I am trying to enter each subject's Desikan-Killiany atlas from FreeSurfer as an ROI. I am not sure if this is possible, because each subject's atlas is named "lh.aparc.annot", which doesn't have a file extension CONN seems to expect (e.g., .nii). Is there a straightforward way of doing this?<br />
<br />
Thanks!<br />
Kaitlin

Kaitlin Cassady, Sat, 23 Mar 2019 19:11:31 GMT
http://www.nitrc.org/forum/forum.php?thread_id=5878&forum_id=1144

RE: Differences in group size--compare within-group variance?
http://www.nitrc.org/forum/forum.php?thread_id=10084&forum_id=1144
Hi Alfonso,<br />
Thank you very much for your response. I have a few follow-up questions as I want to be sure I understand and execute everything correctly if I move forward with this method.<br />
<br />
1) To run the conjunction analysis in CONN, would I simply select Groups 1, 3 and 4 (in that order) and enter [-1 1 0]; [-1 -1 2] in the between-subjects contrast? <br />
<br />
2) As you said, the first contrast of the conjunction analysis will identify connections that differ between Groups 1 and 3 (i.e., Group1 - Group3). I'm a little less clear on the second part ([-1 -1 2]) -- will it identify connections that differ between Group4 and the combination of Group1 & Group3 (i.e., Group4 - (Group1+Group3))?<br />
<br />
3) If I'm on track with part 2, above, then the conjunction analysis should return those connections that are different between Groups1 and 3 AND that are also different between Group 4 and (Group1+Group3) -- is that correct? If so, as you said in your initial response, we would interpret those results as connections where "the strength of the Group1vsGroup4 difference in connectivity differs from the strength of the Group3vsGroup4 difference in connectivity."<br />
<br />
4) In your response, you said to "display those between-group differences in connectivity across these same connections in order to evaluate whether in fact the Group3vsGroup4 differences appear to be larger/stronger than the Group1vsGroup4 differences" -- would I just do that by calling up the results in the Results Explorer, selecting all of my ROIs, and using an analysis-level FDR correction?<br />
<br />
5) Conceptually, I'm not clear how the conjunction analysis overcomes the potential bias due to different sample sizes. Can you clarify that at all?<br />
<br />
6) Would it be reasonable to add an additional covariate to this analysis, such as average framewise displacement? And if so, I assume I would simply select it in the subject effects menu and then add it to both parts of the conjunction analysis, as in [-1 1 0 0]; [-1 -1 2 0]. Does that seem correct?<br />
<br />
Sorry about all of the questions. I really appreciate your help!<br />
<br />
Best,<br />
Jeff

Jeffrey Johnson, Fri, 22 Mar 2019 22:02:01 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10084&forum_id=1144

missing value in CONN Quality Assurance: # of voxels in GreyMatter
http://www.nitrc.org/forum/forum.php?thread_id=10093&forum_id=1144
Dear Alfonso,<br />
After completing the denoising step in our analysis, the new QA generated the second-level covariate QC_GreyMatter_vol_session1, and this and the corresponding QC covariates for white matter and CSF show the values of 6 subjects as NaN. Why could this be? The files look normal.<br />
What do you suggest?<br />
Thanks<br />
Hale

HALE YAPICI ESER, Fri, 22 Mar 2019 16:41:26 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10093&forum_id=1144

RE: Differences in group size--compare within-group variance?
http://www.nitrc.org/forum/forum.php?thread_id=10084&forum_id=1144
[color=#000000]Hi Jeffrey,[/color]<br />
<br />
[color=#000000]This is an interesting question, and the reviewer is correct to point out that comparing the number of significant/supra-threshold connections between two different analyses does not allow you to properly infer whether the differences between the two pairs of groups involved are similar or not. In addition, the expectation of decreased power in this sort of unbalanced design is somewhat independent of the homoscedasticity assumption of the GLM or the homogeneity-of-variances assumption in a two-sample t-test; it arises simply because the smaller group's average connectivity estimate has a larger standard error due to its smaller sample size (this difference in standard errors is expected even if the within-group standard deviations are exactly the same across the two groups). Because of this, I believe a more direct way to evaluate whether the difference in the number of supra-threshold connections between your two analyses is due to differences in power/sensitivity vs. differences in effect size would be to compare the effect sizes of the between-group differences/comparisons in both analyses. [/color]<br />
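The standard-error point can be made concrete with a small Python sketch (illustrative only, not CONN code; the within-group SD value is made up): even with identical within-group standard deviations, the smaller group inflates the standard error of the between-group difference.

```python
import math

# Illustration (not CONN code): with equal within-group standard deviations,
# the standard error of a group-mean connectivity estimate still grows as the
# group shrinks, so an n=9 vs n=17 comparison is noisier than n=17 vs n=17.
sd = 0.2  # assumed common within-group SD of the connectivity values (made up)

def se_mean(sd, n):
    """Standard error of a group mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

def se_diff(sd, n1, n2):
    """Standard error of the difference between two independent group means."""
    return math.sqrt(se_mean(sd, n1) ** 2 + se_mean(sd, n2) ** 2)

print(se_diff(sd, 9, 17))   # unbalanced comparison (e.g., Group1 vs Group4)
print(se_diff(sd, 17, 17))  # balanced comparison (e.g., Group3 vs Group4)
# The unbalanced SE is larger, i.e. less power, even with identical SDs.
```

With these numbers the unbalanced (9 vs 17) comparison has a roughly 20% larger standard error than the balanced (17 vs 17) one, which translates directly into lower power.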
<br />
[color=#000000]There are, of course, many different ways to explore these effect sizes. Since the two between-group comparisons in your case involve three groups (Group1vsGroup4 and Group3vsGroup4), one relatively straightforward approach would be to perform a conjunction of a [-1 1 0] contrast and a [-1 -1 2] contrast (when selecting Group1, Group3, and Group4, in that order). The first between-subjects contrast identifies those connections where Group1 and Group3 have different connectivity, and the second contrast identifies those connections where Group4's connectivity differs from the average of Group1's and Group3's connectivity. Those connections that appear as significant (using two-sided tests) in both results will be the ones where you can confidently say that the strength of the Group1vsGroup4 difference in connectivity differs from the strength of the Group3vsGroup4 difference in connectivity. Then simply display those between-group differences in connectivity across these same connections in order to evaluate whether the Group3vsGroup4 differences do in fact appear to be larger/stronger than the Group1vsGroup4 differences, which would support your original observation regarding the number of supra-threshold connections in each individual analysis but without the potential bias due to the difference in power/sensitivity between those individual analyses. [/color]<br />
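As a sanity check on what the two contrasts compute, here is a small Python sketch (illustrative only; the group-mean values are made up, and this is plain arithmetic on group means, not CONN's actual GLM machinery):

```python
import numpy as np

# Illustration of the two between-subjects contrasts when Group1, Group3,
# and Group4 are selected in that order. Hypothetical group-mean
# connectivity values for a single connection:
means = np.array([0.10, 0.25, 0.40])  # [Group1, Group3, Group4] (made up)

c1 = np.array([-1, 1, 0])   # tests Group3 - Group1
c2 = np.array([-1, -1, 2])  # tests 2*Group4 - (Group1 + Group3)

print(c1 @ means)  # 0.15 -> Group1 vs Group3 difference
print(c2 @ means)  # 0.45 -> Group4 vs the average of Group1 & Group3
```

In CONN the conjunction then retains only those connections where both contrasts are significant under two-sided tests.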
<br />
[color=#000000]Hope this helps[/color]<br />
[color=#000000]Alfonso[/color]<br />
<br />
<br />
[i]Originally posted by Jeffrey Johnson:[/i][quote]Hello Alfonso and others,<br />
I've run a few second-level ROI-to-ROI analyses (mostly 2-sample t-tests) involving comparisons between different combinations of four participant groups with different sizes (n = 9, 10, 17, and 17). A reviewer has asked us to address the possibility that some results might be due to differences in sample size, rather than (or in addition to) true differences in connectivity. <br />
<br />
For example, when we compare groups 1 and 4 (n=9 vs. n=17), there are about 15 significantly different connections, but when we compare groups 3 and 4 (n=17 vs. n=17), there are about 35 significant differences. We would like to gain some insight into whether there are fewer differences between groups 1 and 4 than 3 and 4 because there is less power in the first analysis due to group 1 (n=9) being smaller than group 3 (n=17). I was thinking it might be helpful to compare the variance in the correlation coefficients for each group (since those are the data in each of the t-tests in my analyses), as this would at least let me see if we're in violation of the assumption of homogeneity of variance, but my network has more than 700 connections so it's not really feasible to try to compute those values manually. Is there some way to efficiently obtain the variances (maybe a distribution of variances across each group's network) from the CONN results just to get an idea of how things compare between groups? Or do you have any suggestions for a better/alternative way to account for sample size differences? <br />
<br />
Any suggestions or guidance would be very much appreciated.<br />
<br />
Thank you,<br />
Jeff[/quote]

Alfonso Nieto-Castanon, Fri, 22 Mar 2019 0:50:50 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10084&forum_id=1144

RE: Problems batch importing l2covariates
http://www.nitrc.org/forum/forum.php?thread_id=10090&forum_id=1144
[quote]Hi Alfonso,<br />
<br />
Yes, now it works. Thanks!<br />
- Harris[/quote]

Harrison Fisher, Thu, 21 Mar 2019 18:43:33 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10090&forum_id=1144

RE: Calculation of FDR-corrected p-values
http://www.nitrc.org/forum/forum.php?thread_id=10085&forum_id=1144
[color=#000000]Hi Alex,[/color]<br />
<br />
[color=#000000]When entering a matrix of uncorrected p-values conn_fdr will compute p-FDR values separately for each column (values along the first-dimension) of your matrix. So, in this case, that means that p_FDR, as you are computing it now, will compute seed-level FDR-corrected p-values (the multiple-comparison correction is correcting for multiple target ROIs, but separately for each seed ROI), which is not a symmetric measure, instead of analysis-level FDR-corrected p-values (the multiple-comparison correction is correcting for all pairs of seed&target ROIs), which is a symmetric measure. If you want to compute analysis-level p-fdr values, simply use the syntax:[/color]<br />
<br />
[color=#000000]p_FDR = reshape(conn_fdr(p_uncorrected(:)),size(p_uncorrected));[/color]<br />
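To see why the column-wise and flattened corrections disagree, the following Python sketch applies the standard Benjamini-Hochberg procedure (used here as a stand-in for conn_fdr, whose exact implementation may differ) to a made-up symmetric p-matrix:

```python
import numpy as np

def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values for a 1-D array (illustrative
    stand-in for conn_fdr, not CONN's actual code)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity of adjusted p-values, from the largest p down
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0, 1)
    return out

# A symmetric matrix of uncorrected p-values (hypothetical values):
p = np.array([[1.00, 0.01, 0.30],
              [0.01, 1.00, 0.04],
              [0.30, 0.04, 1.00]])

col_wise = np.apply_along_axis(bh_fdr, 0, p)      # seed-level correction
flat = bh_fdr(p.ravel()).reshape(p.shape)         # analysis-level correction

print(np.allclose(col_wise, col_wise.T))  # False: column-wise breaks symmetry
print(np.allclose(flat, flat.T))          # True: flattened preserves symmetry
```

Column-wise correction ranks each seed's p-values independently, so the two copies of a symmetric entry can land at different ranks within their respective columns; flattening ranks all pairs together, so tied p-values always receive the same corrected value.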
<br />
[color=#000000]Hope this helps[/color]<br />
[color=#000000]Alfonso[/color]<br />
[i]Originally posted by Alexandra Cross:[/i][quote]Hi there,<br />
<br />
I'm trying to calculate FDR-corrected p-values from the p field in my ROI.mat file. The p field in this file contains p-values which are equivalent on each side of the diagonal, as expected. However, when I compute the FDR-corrected values, for some reason the corrected p-values differ on each side of the diagonal (I've attached a screenshot). <br />
<br />
I'm using p_FDR=conn_fdr(p_uncorrected) to compute the FDR-corrected p-values.<br />
<br />
If anybody has any thoughts on why the values might be different on each side of the diagonal, I would appreciate it!<br />
<br />
Alex[/quote]

Alfonso Nieto-Castanon, Thu, 21 Mar 2019 18:00:27 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10085&forum_id=1144

RE: Many repeated scans per subject, design question
http://www.nitrc.org/forum/forum.php?thread_id=10087&forum_id=1144
Thanks!<br />
C

Clas Linnman, Thu, 21 Mar 2019 17:58:50 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10087&forum_id=1144

RE: Lesion Masks
http://www.nitrc.org/forum/forum.php?thread_id=6821&forum_id=1144
Hi,<br />
<br />
I'd like to know the answers to the questions above. I have very similar doubts.<br />
<br />
Thanks!<br />
<br />
Marcela Takahashi

Marcela Takahashi, Thu, 21 Mar 2019 17:56:47 GMT
http://www.nitrc.org/forum/forum.php?thread_id=6821&forum_id=1144

RE: Export t-values from seed-to-voxel results
http://www.nitrc.org/forum/forum.php?thread_id=10086&forum_id=1144
[color=#000000]Hi Camilla,[/color]<br />
<br />
[color=#000000]If you are referring to voxel-level t-statistics, you can click on [i]'export mask'[/i] within the [i]results explorer[/i] window and that will create a "results.nii" file containing the t-statistics at each voxel (assuming that your analysis involves only individual-contrast vectors across conditions/effects/seeds). For exploration/visualization purposes you can also simply click on the [i]'slice display'[/i] button, which will bring up a window showing individual slices; clicking in the image will report the statistics (including t-stats) associated with the selected voxel.[/color]<br />
<br />
[color=#000000]Hope this helps[/color]<br />
[color=#000000]Alfonso[/color]<br />
[i]Originally posted by Camilla Caprioglio:[/i][quote]Hi everyone,<br />
<br />
I've run a seed-to-voxel analysis and now I'm trying to export t-values. Is there a way to export them directly from conn results explorer without opening SPM for each result? <br />
<br />
Thank you immensely for the help,<br />
Camilla[/quote]

Alfonso Nieto-Castanon, Thu, 21 Mar 2019 17:51:43 GMT
http://www.nitrc.org/forum/forum.php?thread_id=10086&forum_id=1144