help > Small voxel-2-voxel results, help!
May 21, 2017  01:05 AM | Jin Li - IDG/McGovern Institute for Brain Research, Beijing Normal University
Small voxel-2-voxel results, help!
Dear All, dear Dr Nieto-Castanon,
   
I was trying to use CONN to calculate voxel-wise global connectivity with 'GlobalCorrelation'. The batch script runs fine, but the results image is strange: the value at each voxel is extremely small. As a double check, I performed a very similar analysis using in-house Python code, and the two result maps are very similar except that they are on different scales. The result from our own code ranged from -0.x to 0.x, which is a normal scale for an average correlation value, while the conn_batch result ranged from -0.00x to 0.00x.

Here are my questions:

1) Doesn't 'GlobalCorrelation' compute 'the strength and sign of the connectivity pattern between each voxel and the rest of the brain (average of the correlation coefficient values)'? Why are the resulting values so small?

2) I collect raw correlation values by adding 'batch.Analysis.measures.norm={0}' in my script, so the result map is not normalized. (I have noticed that Dr Nieto-Castanon explained the meaning of normalization in CONN: the values are converted to z-scores by subtracting the mean and dividing by the sd.) My question is: if I want to use these global connectivity values to perform a brain-behavior correlation across subjects, should I perform the Fisher's Z transformation first? Which values should I use, the normalized or the un-normalized global connectivity values?
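
For reference, a minimal sketch of the kind of call in the attached batch script (the 'norm' line is the one quoted above; the other field names and the project path are assumptions and may differ across CONN releases):

    clear batch;
    batch.filename = '/path/to/conn_project.mat';           % placeholder: existing CONN project
    batch.Analysis.measures.names = {'GlobalCorrelation'};   % voxel-to-voxel measure (assumed field name)
    batch.Analysis.measures.norm  = {0};                     % 0: keep raw (un-normalized) values
    batch.Analysis.done = 1;                                 % run the first-level analysis step
    conn_batch(batch);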

I have attached the batch script. Any help would be greatly appreciated!
Thanks !
Jin Li
Attachment: v2v_batch_code.PNG
May 22, 2017  05:05 PM | Alfonso Nieto-Castanon - Boston University
RE: Small voxel-2-voxel results, help!
Dear Jin Li,

If you are using a release prior to 17a, please update to a newer release. I am copying below the relevant change log entry from the 17a release notes that discusses a scaling issue in these measures which may be related to what you are describing. Let me know if that seems to be it.

(*) Fixed global scaling parameter in GlobalCorrelation/IntrinsicConnectivity voxel-to-voxel measures (prior versions computed average correlation coefficient across all voxels in the volume -with voxels outside of analysis mask set to 0-; new version computes average correlation coefficients across analysis-voxels only -disregarding voxels outside of analysis mask-; difference between old and new behavior is a constant scaling factor for all measures/subjects/conditions equal to SizeOfMask/SizeOfVolume = 0.2913 for the default analysis mask; normalized GC/ICC measures not affected by this change)
note: if updating mid-analysis and still processing new subjects: GlobalCorrelation/IntrinsicConnectivity voxel-to-voxel analyses that used raw measures (ie. normalization setting is unchecked) require rerunning all subjects for consistency
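
To illustrate the magnitude of that change: for the default analysis mask, an old-version raw map differs from the new-version one only by the constant factor SizeOfMask/SizeOfVolume = 0.2913, so dividing the old map by that factor approximately reproduces the new scaling. A minimal MATLAB/SPM sketch (filenames are hypothetical placeholders; as noted above, re-running the analysis is the recommended fix):

    scale = 0.2913;                                    % SizeOfMask/SizeOfVolume for the default mask
    V     = spm_vol('old_raw_GlobalCorrelation.nii');  % hypothetical filename of an old raw GC map
    gc    = spm_read_vols(V);                          % load the old-version map
    gc_new  = gc / scale;                              % approximate new-version values
    V.fname = 'rescaled_GlobalCorrelation.nii';        % hypothetical output filename
    spm_write_vol(V, gc_new);                          % write the rescaled map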


Regarding whether to use normalized GC measures or not, there is no clear consensus on this. In your case, your batch script seems to be skipping all of the default denoising steps (which is never a good idea unless you are already applying some form of denoising externally to the data entered into CONN) so in this case I would definitely recommend using normalized measures in order to avoid what otherwise would be very strong biases introduced by noise-induced GCOR differences between subjects. In general, if your data denoising is conservative enough, using raw/un-normalized measures should be perfectly fine. 
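
For reference, a minimal sketch (with synthetic data) of what the normalized measure described above corresponds to: each subject's raw GC values are converted to z-scores across analysis-mask voxels by subtracting the mean and dividing by the standard deviation.

    gc   = 0.2*randn(91,109,91);   % synthetic stand-in for a raw GC map
    mask = true(size(gc));         % synthetic stand-in for the analysis mask
    mu   = mean(gc(mask));         % mean over in-mask voxels
    sd   = std(gc(mask));          % standard deviation over in-mask voxels
    gc_norm = (gc - mu)/sd;        % normalized (z-scored) GC map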

Hope this helps
Alfonso

May 23, 2017  03:05 AM | Jin Li - IDG/McGovern Institute for Brain Research, Beijing Normal University
RE: Small voxel-2-voxel results, help!
Dear Dr Nieto-Castanon,
 
Thanks a lot! I updated to 17e and reran the batch script, and the resulting values are in a normal range this time (-0.x to 0.x); the absolute values as well as the whole-brain pattern are very similar to the results from our in-house Python code (only slight differences at the third decimal place).

It seems that 17e no longer previews the resulting voxel-to-voxel image in the CONN GUI? I used to be able to visualize the result image in 16b.

As for denoising, I have already applied some form of denoising externally to the data entered into CONN, so I have decided to use the raw values and perform the Fisher's Z transformation before correlating the GC values with the behavioral data across subjects at the voxel level.
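
For concreteness, a minimal sketch (with synthetic data) of that plan: apply the Fisher z-transform (atanh) to each subject's raw GC values, then correlate the transformed values with the behavioral score across subjects at each voxel. Variable names and sizes are illustrative, and corr requires the Statistics Toolbox.

    nsub  = 20;                          % number of subjects
    nvox  = 1000;                        % number of in-mask voxels (flattened)
    gc    = 0.4*rand(nsub,nvox) - 0.2;   % synthetic raw GC values, subjects x voxels
    beh   = randn(nsub,1);               % synthetic behavioral scores
    gc_z  = atanh(gc);                   % Fisher's z-transform of the raw GC values
    r_map = corr(beh, gc_z);             % 1 x nvox brain-behavior correlation map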

Your feedback is really helpful, since I had been stuck on the scaling problem for several days.
Thanks again!

Jin Li