<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="https://www.nitrc.org/themes/nitrc3.0/css/rss.xsl.php?feed=https://www.nitrc.org/export/rss20_forum.php?forum_id=7136" ?>
<?xml-stylesheet type="text/css" href="https://www.nitrc.org/themes/nitrc3.0/css/rss.css" ?>
<rss version="2.0"> <channel>
  <title>NITRC News Group Forum: multi-view-ensemble-classification-of-brain-connectivity-images-for-neurodegeneration-type-discrimination</title>
  <link>http://www.nitrc.org/forum/forum.php?forum_id=7136</link>
  <description>
                  &lt;h3 class=&quot;a-plus-plus&quot;&gt;Abstract&lt;/h3&gt;
                  &lt;p class=&quot;a-plus-plus&quot;&gt;Brain connectivity analyses using voxels as features are not robust enough for single-patient classification because of inter-subject anatomical and functional variability. To construct more robust features, voxels can be aggregated into clusters that are maximally coherent across subjects. Moreover, combining multi-modal neuroimaging with multi-view data integration techniques allows multiple independent connectivity features to be generated for the same patient. Structural and functional connectivity features were extracted from multi-modal MRI images with a clustering technique and used for the multi-view classification of different phenotypes of neurodegeneration by an ensemble learning method (random forest). Two different multi-view models (intermediate and late data integration) were trained on, and tested for the classification of, individual whole-brain default-mode network (DMN) and fractional anisotropy (FA) maps from 41 amyotrophic lateral sclerosis (ALS) patients, 37 Parkinson’s disease (PD) patients and 43 healthy control (HC) subjects. Both multi-view data models exhibited ensemble classification accuracies significantly above chance. In ALS patients, the multi-view models showed the best performances (intermediate: 82.9%, late: 80.5% correct classification) and were more discriminative than any single-view model. In PD patients and controls, the multi-view models’ performances were lower (PD: 59.5%, 62.2%; HC: 56.8%, 59.1%) but still higher than those of at least one single-view model. Training the models only on patients yielded more than 85% of patients correctly discriminated as ALS or PD and maximal performances for the multi-view models. These results highlight the potential of mining complementary information through the integration of multiple data views in the classification of connectivity patterns from multi-modal brain images in the study of neurodegenerative diseases.&lt;/p&gt;
                </description>
  <language>en-us</language>
  <copyright>Copyright 2000-2026 NITRC OSI</copyright>
  <webMaster></webMaster>
  <lastBuildDate>Sun, 19 Apr 2026 21:48:56 GMT</lastBuildDate>
  <docs>http://blogs.law.harvard.edu/tech/rss</docs>
  <generator>NITRC RSS generator</generator>
 </channel>
</rss>
