Dear all,
I am currently struggling with setting up the appropriate design and contrast matrices for some statistics I want to run on resting-state connectivity data using the NBS toolbox. In the following, find my experimental design, the tests I want to run, and my questions:
1) Experimental design:
I have 12 subjects (rats) that were scanned under three different conditions (anesthesia paradigms) at different time points. We randomized the order of the paradigms to control for carryover effects (e.g., paradigm A might influence how paradigm B affects connectivity at a later time point).
2) Statistical design and tests:
I want to run two types of tests: first, I want to test for overall group differences (as in a one-way ANOVA); second, I want to investigate specific contrasts (e.g., paradigm A > paradigm B; basically a t-test). One subject was excluded because of excessive motion, leaving me with 35 scans in total. Furthermore, I include the sequence of paradigms as a covariate in my model to account for the potential carryover effects.
I then set up the design matrix with 9 columns: 3 columns for the three different paradigms and 6 columns for the six different orders in which they were applied. Exchange blocks are defined as a vector with the numbers 1 to 12, so that the scans of one individual subject are grouped together.
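For illustration, this is roughly how the design matrix and the exchange blocks are built (a simplified sketch; the vectors paradigm, order and subjectID are placeholders for my actual bookkeeping, with one entry per scan):

% One row per scan. paradigm(i) is 1/2/3 (anesthesia paradigm of scan i),
% order(i) is 1..6 (which of the six sequences that rat received),
% subjectID(i) is the rat the scan belongs to.
nScans = numel(paradigm);
X = zeros(nScans, 9);
for i = 1:nScans
    X(i, paradigm(i))  = 1;    % columns 1-3: paradigm indicators
    X(i, 3 + order(i)) = 1;    % columns 4-9: sequence indicators
end
exchange = subjectID(:);       % same block number for all scans of one rat,
                               % so permutations stay within subject
dlmwrite('designMatrix.txt', X, 'delimiter', ' ');
dlmwrite('exchangeBlocks.txt', exchange, 'delimiter', ' ');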
For the first test, I run an F-test (thresh = 3.1; alpha = 0.01) with the contrast [1,1,1,0,0,0,0,0,0] and get an enormous component of altered connectivity. So far, so good; that is what we expect. However, the code outputs a warning that the model is rank deficient.
For the post hoc t-tests, I use the same design matrix and run t-tests (thresh = 2.5; alpha = 0.05) using contrasts like [1,-1,0,0,0,0,0,0,0], [-1,1,0,0,0,0,0,0,0] or [1,0,-1,0,0,0,0,0,0] to test for specific group differences. Now all of these contrasts yield exactly the same results, both in terms of component size (= 0) and the matrix of test statistics (nbs.NBS.test_stat; note that some of the t-values in there exceed the 2.5 threshold in absolute value, but they are negative). This suggests that the different contrasts might actually be testing the same question, which could be Paradigm A = Paradigm B = Paradigm C. This confuses me, however, because the way I set up the design and contrast matrices is exactly how I would do it in SPM, and there it works with these contrasts. Furthermore, the previous F-test already rejected Paradigm A = Paradigm B = Paradigm C. This apparent discrepancy might have something to do with the highly negative t-values in the stats matrix, though...
3) This results in the following questions:
a) What is up with the rank deficiency? Where does it come from, and is it problematic? At least for the F-test, the results are plausible and align with mass-univariate testing.
b) Is it okay to leave out the intercept? What would that even describe, exactly?
c) What is the problem with my t-contrasts? What do the contrasts I am currently running actually test, and how can I properly get my group differences from this model?
d) I've read about an alternative way of setting up GLMs, known as reference coding. Here you would leave out the first predictor for each class of predictors and model it via an intercept. In my case this would result in 1 + 2 + 5 columns in the design matrix, where the first column is all ones and somehow denotes the first anesthesia paradigm in the first sequence? I'd be delighted if someone could elaborate on this variant versus the one that I use now. Do they test the same thing? Are there advantages/disadvantages to either method?
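To make concrete what I mean, here is a toy comparison of the two coding schemes, restricted to the paradigm factor only (the sequence columns are left out for brevity; two example scans per paradigm):

% Cell-means coding (what I currently use): one indicator per paradigm.
X_cell = [1 0 0;     % paradigm A
          1 0 0;
          0 1 0;     % paradigm B
          0 1 0;
          0 0 1;     % paradigm C
          0 0 1];
c_cell = [1 -1 0];   % contrast for A > B

% Reference coding: intercept plus indicators for B and C;
% paradigm A is the reference level absorbed by the intercept.
X_ref = [1 0 0;      % A: intercept only
         1 0 0;
         1 1 0;      % B
         1 1 0;
         1 0 1;      % C
         1 0 1];
% mean(A) = b0, mean(B) = b0 + b1, mean(C) = b0 + b2,
% so A > B corresponds to
c_ref = [0 -1 0];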
I would really appreciate help on this. Sorry for the novel I've produced here.
Thanks in advance and best regards,
murb
This is my current design matrix:
This would be one of the contrasts:
This is how I iterate over the contrasts:
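In essence, the loop looks roughly like this (a simplified sketch; the UI field names follow the command-line example in the NBS manual as I understand it, and the file paths and permutation count are placeholders):

contrasts = {'[1,-1,0,0,0,0,0,0,0]', ...
             '[-1,1,0,0,0,0,0,0,0]', ...
             '[1,0,-1,0,0,0,0,0,0]'};
for k = 1:numel(contrasts)
    UI.method.ui     = 'Run NBS';
    UI.test.ui       = 't-test';
    UI.size.ui       = 'Extent';
    UI.thresh.ui     = '2.5';
    UI.alpha.ui      = '0.05';
    UI.perms.ui      = '5000';                      % placeholder
    UI.contrast.ui   = contrasts{k};
    UI.design.ui     = 'designMatrix.txt';          % placeholder path
    UI.exchange.ui   = 'exchangeBlocks.txt';        % placeholder path
    UI.matrices.ui   = 'connectivityMatrices.mat';  % placeholder path
    UI.node_coor.ui  = '';                          % only needed for the viewer
    UI.node_label.ui = '';
    NBSrun(UI, []);
    global nbs
    results{k} = nbs.NBS;   % store component and test statistics for this contrast
end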
Hi Murban,
Your query may be too extensive for this forum and relates to general statistical design. You may want to consult with a statistician to assist with the design.
You may also want to check the NBS manual for examples and tips.
Rank deficiency means that your design matrix is not formulated correctly.
Kind regards,
Andrew
Dear Andrew,
thanks for taking the time to reply.
We were able to resolve our confusion.
Best,
murb
Maybe this can help somebody else:
It turns out that when the global variable nbs is loaded in l. 146 of NBSrun.m, it loads our previous test statistics. Because we did not specify new contrast paths for our post hoc testing, but rather overwrote the previous files, NBSrun assumed that the input hadn't changed (it checks for this in l. 271 ff.) and did not re-compute the edge statistics. Thus, all t-contrasts yielded exactly the same results, even though it appeared as if NBSrun was recomputing, because the permutations were re-done.
Adding 'clear global nbs' before NBSrun solves this.
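Applied to the contrast loop from my first post, the fix is simply (illustrative):

% UI set up as before (design, exchange blocks, matrices, test, thresh, alpha, perms, ...)
for k = 1:numel(contrasts)
    clear global nbs                  % force NBSrun to recompute the edge statistics
    UI.contrast.ui = contrasts{k};
    NBSrun(UI, []);
    global nbs
    results{k} = nbs.NBS;
end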
