
**RE: p-value confusion and p-FDR**

Nov 28, 2015 08:11 AM | Alfonso Nieto-Castanon - *Boston University*

Dear Pravesh,

Regarding question (1): the 0.3491 value in the ROI(i).p(j) variable is the one-sided p-value of the one-sample t-test on the connectivity values between ROIs i and j. Matlab's ttest function returns the two-sided p-value (0.6982) for the same one-sample t-test, which in this case is simply twice the one-sided p-value (in general, p_twosided = 2*min(p_onesided, 1-p_onesided)).
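To make the one-sided/two-sided relationship concrete, the numbers above can be reproduced with the seven per-subject correlation values quoted later in this thread. The post's own environment is MATLAB; the following is just an illustrative Python/SciPy sketch of the same computation:

```python
import numpy as np
from scipy import stats

# The seven per-subject correlation values quoted in the question below
y = np.array([-0.1660, -0.2309, 0.6556, -0.1602, 0.2618, -0.0870, 0.0671])

res = stats.ttest_1samp(y, 0.0)          # two-sided test, like MATLAB's ttest
t, p_two = res.statistic, res.pvalue     # t ~ 0.407, p_two ~ 0.6982
p_one = stats.t.sf(t, df=len(y) - 1)     # right-tailed p-value, ~ 0.3491

# p_twosided = 2 * min(p_onesided, 1 - p_onesided)
print(p_two, 2 * min(p_one, 1 - p_one))
```

Here p_one matches the ROI(i).p(j) value (0.3491) and p_two matches the value returned by MATLAB's ttest (0.6982).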

And regarding question (2): the p-FDR value, also called the FDR-corrected p-value or FDR q-value, represents the minimum alpha level at which this test would reach significance when controlling the false discovery rate at that level. In other words, if the FDR-corrected p-value of this test is 0.8510, the test would reach significance only when controlling FDR at a 0.8510 level or higher; it would not reach significance when controlling FDR at a 0.8509 level or lower. In general, if you want to know which tests reach significance when controlling FDR at a 0.05 level, you would simply select those results with p-FDR values equal to or lower than 0.05 (so you can treat p-FDR like any other p-value and determine significance by checking whether it falls below your desired threshold). In CONN, if you have a vector containing all of the uncorrected p-values in your set, you can compute the corresponding FDR-corrected p-values using the function call

*p_FDR = conn_fdr(p_uncorrected)*

Hope this helps

Alfonso
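For readers curious what such an FDR correction does under the hood: the standard Benjamini-Hochberg adjusted p-values can be sketched in a few lines. This is an illustration of the generic BH procedure in Python/NumPy (assumed to correspond to what conn_fdr computes; it is not CONN's actual code):

```python
import numpy as np

def fdr_bh(p):
    """Benjamini-Hochberg adjusted p-values (generic sketch, not CONN's code)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    order = np.argsort(p)
    # BH scaling: p_(k) * n / k for the k-th smallest p-value
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(n)
    out[order] = adj                      # restore original ordering
    return out

print(fdr_bh([0.01, 0.02, 0.03, 0.50]))  # 0.04, 0.04, 0.04, 0.50
```

Each adjusted value is the smallest FDR level at which that test would survive, which is exactly the interpretation of p-FDR described above: thresholding the adjusted values at 0.05 selects the tests that survive FDR control at the 0.05 level.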

*Originally posted by Pravesh Parekh:*

Dear Dr. Alfonso,

I have two questions regarding the p values in Conn. My apologies if these questions are too trivial or the answers obvious.

1. I am exploring the ROI.mat file (conn_*/results/secondlevel/ANALYSIS_01/AllSubjects/Condition_Name/ROI.mat) and I observed the following (ROI-ROI connectivity):

- xX has the details of the subjects selected

- y has the correlation values of all subjects (correlation is the measure of functional connectivity selected at first level)

- names has the names of the source ROIs

- xyz most likely has the centroid of the source ROI

- names2 has the name of the target ROIs

- xyz2 most likely has the centroid location of the target ROIs

- h is the average of the y values for the particular pair of ROIs (essentially, the beta displayed in the results window)

- F has the appropriate statistical value (T value for example)

(I am assuming that my interpretation of these names is correct)

However, when I look at the p variable, I expect to find the uncorrected p value, yet the displayed value is nowhere close to either the uncorrected or the FDR-corrected p value.

To illustrate the above,

ROI(1).y(:,2) gives me the correlation coefficients of all subjects between ROI1 and ROI2, which are:

-0.1660

-0.2309

0.6556

-0.1602

0.2618

-0.0870

0.0671

Now, I use [h, p, ci, stats] = ttest(ROI(1).y(:,2)) to get my statistics,

which returns p = 0.6982 (the uncorrected p value being displayed in the results window) and T = 0.4069, which corresponds to the T statistic. I find the same values in the h and F variables. However, ROI(1).p(:,2) has a value of 0.3491. What exactly is this p value indicating (the p-FDR in the results window reads 0.8510)?

2. Another question is regarding the calculation of the p-FDR value. I understand that the FDR procedure controls the proportion of false positives. The procedure involves sorting the p values and then finding the rank for which the FDR equation is satisfied; all values lower than (or equal to) the p value at that rank are considered significant. Therefore, we are essentially just changing the alpha value. However, the p value obtained from the test itself should remain the same, right? How should I go about interpreting the p-FDR value? In the above example, if I run the same set of correlation coefficients through a t test (with a different alpha value), my T and p values remain the same (with only a change in the confidence interval).


## Threaded View

| Title | Author | Date |
|---|---|---|
| p-value confusion and p-FDR | Pravesh Parekh | Nov 28, 2015 |
| RE: p-value confusion and p-FDR | Alfonso Nieto-Castanon | Nov 28, 2015 |