<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="https://www.nitrc.org/themes/nitrc3.0/css/rss.xsl.php?feed=https://www.nitrc.org/export/rss20_forum.php?forum_id=7907" ?>
<?xml-stylesheet type="text/css" href="https://www.nitrc.org/themes/nitrc3.0/css/rss.css" ?>
<rss version="2.0"> <channel>
  <title>NITRC News Group Forum: Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.</title>
  <link>http://www.nitrc.org/forum/forum.php?forum_id=7907</link>
  <description>
        &lt;p&gt;&lt;b&gt;Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.&lt;/b&gt;&lt;/p&gt;          
        &lt;p&gt;Hum Brain Mapp. 2017 Oct 11.&lt;/p&gt;
        &lt;p&gt;Authors:  Chen X, Lu B, Yan CG&lt;/p&gt;
        &lt;p&gt;Abstract&lt;br/&gt;
        Concerns have been raised regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We found that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, achieved the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, and 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly across distinct datasets (replicability &amp;lt; 0.3 for between-subject sex differences, &amp;lt; 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity, and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., &amp;lt; 80 [40 per group]) not only yielded low power (sensitivity &amp;lt; 2%) but also decreased the likelihood that significant results reflect &quot;true&quot; effects (PPV &amp;lt; 0.26) for sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 00:000-000, 2017. © 2017 Wiley Periodicals, Inc.&lt;br/&gt;
        &lt;/p&gt;&lt;p&gt;PMID: 29024299 [PubMed - as supplied by publisher]&lt;/p&gt;
    </description>
  <language>en-us</language>
  <copyright>Copyright 2000-2026 NITRC OSI</copyright>
  <webMaster></webMaster>
  <lastBuildDate>Sat, 02 May 2026 07:41:51 GMT</lastBuildDate>
  <docs>http://blogs.law.harvard.edu/tech/rss</docs>
  <generator>NITRC RSS generator</generator>
 </channel>
</rss>
