Sep 14, 2019  12:09 PM | Carolin Lucas
VLSM-output negative values
Dear all!

We are currently working on a VLSM analysis in n=100 subjects, but with rather small lesion volumes. As a result, there are only a few voxels in which a comparatively high percentage of subjects have a lesion.

We used normalized data (t-scores) as the behavioural input and computed the VLSM for binary maps and continuous data using the Brunner-Munzel test.

The *.txt outputs (for different sets of behavioural data) show:

- rarely, a significant result for "+FDR" (FDR-corrected); in these cases the value is >0 and the thresholded statistical map seems correct
- far more often, no result for "+FDR" but a negative value for "-FDR" (FWE-corrected).

First of all, I am surprised to find significant results with the FWE correction but not with the FDR correction. I assume this might be due to our data distribution, with only a small percentage of affected (lesioned) subjects per voxel(?).

Second, we are uncertain how to interpret the negative value (i.e. the output of "-FDR"). Using the negative value as a lower bound when thresholding the statistical map does not seem appropriate and gives odd results.

Thanks a lot for your help!

Best,

Carol
Oct 9, 2019  12:10 PM | Chris Rorden
RE: VLSM-output negative values
Your results make perfect sense. FDR has a lot of power when a large proportion of the samples shows an effect, but approaches Bonferroni when only a small proportion does. In this way, FDR works in the reverse of the way most of us would want a statistical threshold to work: in cases where a large proportion of the brain is involved, we want specificity to identify the peaks; when only a tiny region is involved, we want sensitivity to find it. While FDR is a principled approach to the multiple-comparison problem, this characteristic is less than ideal. For your data I would suggest you consider regions of interest and permutation thresholding. The regions of interest will greatly reduce the number of tests performed, and hence the multiple-comparison penalty. The permutation thresholding will estimate the actual variability in the data, which matters when many regions have intrinsically low power (e.g. they are virtually never injured, so they generate only weak test statistics in virtually every permutation).
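
To make that concrete, below is a minimal Python sketch. It is not NiiStat code: the simulated lesion maps, variable names, and numbers are illustrative assumptions, and it uses a plain Welch t-test per voxel in place of the Brunner-Munzel test for brevity. It shows how the Benjamini-Hochberg FDR cut-off collapses towards the Bonferroni cut-off when only a handful of voxels carry an effect, and how a max-statistic permutation threshold is built by shuffling the behavioural scores.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects, n_voxels, n_signal = 100, 500, 10   # only a few voxels carry an effect
behaviour = rng.normal(size=n_subjects)

# Binary lesion maps: lesions are rare everywhere and linked to behaviour
# only in the first n_signal voxels.
lesions = (rng.random((n_subjects, n_voxels)) < 0.05).astype(float)
lesions[:, :n_signal] = (behaviour[:, None] + rng.normal(size=(n_subjects, n_signal)) < -0.5)

def voxelwise_test(y, X):
    """Welch t statistic and p-value (lesioned vs. spared subjects) per voxel."""
    t, p = np.zeros(X.shape[1]), np.ones(X.shape[1])
    for v in range(X.shape[1]):
        hit, spared = y[X[:, v] > 0], y[X[:, v] == 0]
        if hit.size > 1 and spared.size > 1:
            res = stats.ttest_ind(hit, spared, equal_var=False)
            t[v], p[v] = res.statistic, res.pvalue
    return t, p

t_obs, p_obs = voxelwise_test(behaviour, lesions)

# Bonferroni: alpha / m, regardless of how many voxels actually carry an effect.
bonferroni = 0.05 / n_voxels

# Benjamini-Hochberg FDR: the cut-off adapts to the p-value distribution;
# with very few true effects it collapses towards the Bonferroni value.
p_sorted = np.sort(p_obs)
passed = p_sorted <= 0.05 * np.arange(1, n_voxels + 1) / n_voxels
fdr_cutoff = p_sorted[passed].max() if passed.any() else 0.0

# Max-statistic permutation threshold: shuffle behaviour, keep the largest |t|
# each time; the 95th percentile controls family-wise error while reflecting
# the power actually available in this lesion sample. 100 permutations here
# for speed; a real analysis would use thousands.
max_t = np.array([np.abs(voxelwise_test(rng.permutation(behaviour), lesions)[0]).max()
                  for _ in range(100)])
perm_cutoff = np.percentile(max_t, 95)

print(f"Bonferroni p cut-off          : {bonferroni:.2e}")
print(f"BH-FDR p cut-off              : {fdr_cutoff:.2e}")
print(f"Permutation |t| cut-off (FWE) : {perm_cutoff:.2f}")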

As regards positive and negative correlations, I assume you are using NiiStat rather than NPM or Stephen Wilson's VLSM. With NiiStat, a positive statistic means that brighter voxels are correlated with higher behavioural scores, and a negative statistic means that brighter voxels are associated with lower behavioural scores. The convention is to draw lesions as bright regions (lesioned voxels = 1) on a dark background (unlesioned voxels = 0). The inference therefore depends on your behavioural measure: on many standardized tests, such as the WAB-AQ or the MoCA, lower scores mean poorer performance, whereas for reaction-time tasks, higher response times mean poorer performance. With language tasks we often find that injury to the posterior cerebral artery (PCA) territory seems to protect people from aphasia. This seems paradoxical: why would brain injury lead to better performance? However, it makes sense within the population: the inclusion criteria only sampled people with brain injury, those with damage to the MCA territory have language impairments, and therefore damage to the PCA territory is associated with relatively spared performance. This does not mean that people with PCA damage have better language skills than people without brain injury, only that they do within this sample. These paradoxical effects can actually be leveraged by machine learning to make better prognostic and diagnostic classifications.
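
As a toy illustration of that sign convention (simulated numbers, not NiiStat output; the score names and effect sizes are made up), the correlation between a 0/1 lesion indicator and behaviour comes out negative for accuracy-style scores where lower means worse, and positive for reaction times where higher means worse:

import numpy as np

rng = np.random.default_rng(1)
n = 100

lesioned = (rng.random(n) < 0.3).astype(float)   # 1 = lesion in this voxel, 0 = spared
accuracy_score = 80 - 20 * lesioned + rng.normal(scale=10, size=n)   # lower = poorer
reaction_time = 900 + 300 * lesioned + rng.normal(scale=80, size=n)  # higher = poorer

r_acc = np.corrcoef(lesioned, accuracy_score)[0, 1]
r_rt = np.corrcoef(lesioned, reaction_time)[0, 1]

print(f"lesion vs. accuracy-style score: r = {r_acc:+.2f}  (negative: lesion -> poorer)")
print(f"lesion vs. reaction time       : r = {r_rt:+.2f}  (positive: lesion -> poorer)")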
Nov 2, 2019  09:11 AM | Carolin Lucas
RE: VLSM-output negative values
Dear Chris Rorden,

Thank you very much for your comprehensive reply, which I found extremely helpful!

Best regards,

Carol