<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="https://www.nitrc.org/themes/nitrc3.0/css/rss.xsl.php?feed=https://www.nitrc.org/export/rss20_forum.php?forum_id=1144" ?>
<?xml-stylesheet type="text/css" href="https://www.nitrc.org/themes/nitrc3.0/css/rss.css" ?>
<rss version="2.0"> <channel>
  <title>NITRC CONN : functional connectivity toolbox Forum: help</title>
  <link>http://www.nitrc.org/forum/forum.php?forum_id=1144</link>
  <description>Get Public Help</description>
  <language>en-us</language>
  <copyright>Copyright 2000-2026 NITRC OSI</copyright>
  <webMaster></webMaster>
  <lastBuildDate>Fri, 06 Mar 2026 21:06:31 GMT</lastBuildDate>
  <docs>http://blogs.law.harvard.edu/tech/rss</docs>
  <generator>NITRC RSS generator</generator>
  <item>
   <title>Multi-dataset pipeline</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15967&amp;forum_id=1144</link>
   <description>&lt;p&gt;Hi Alfonso,&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;I&amp;rsquo;m building a resting-state functional connectivity mega-analysis that combines multiple independent rs-fMRI datasets, and I&amp;rsquo;m now starting the preprocessing stage. I would really appreciate your guidance as I move forward.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Across datasets, there is substantial heterogeneity in both (i) study-level variables (number of subjects; number of visits per subject [1&amp;ndash;4]; condition availability; demographics; study design [parallel vs crossover]; and stimulation parameters including active vs sham, stimulation site, intensity, duration, electrode shape/size, and potentially a &amp;ldquo;number of prior sessions&amp;rdquo; variable to capture consolidation/time effects for studies with repeated active visits) and (ii) acquisition-level variables (TR, slice timing information/slice order, and native voxel size).&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;I&amp;rsquo;m preprocessing using CONN in MATLAB (batch mode), aiming to follow CONN&amp;rsquo;s default MNI preprocessing pipeline as closely as possible.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;Data organization&lt;/strong&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;I keep a consistent structure across datasets, e.g.:&lt;br&gt;Study (01_Name) / Subject (S01) / Visit (V1) / anat and func (PRE, DURING, POST)&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Each subject can have 1&amp;ndash;4 visits (V1&amp;hellip;V4). Each visit typically has one T1 and one 4D resting-state NIfTI per condition: PRE and POST, and in some datasets also DURING.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;What I am doing in my batch script (so far tested on one dataset)&lt;/strong&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;1. Loop over subjects within a dataset&lt;/strong&gt;&lt;br&gt;I run the batch one dataset at a time, looping over all subjects in that dataset.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;2. Auto-detect available visits for each subject&lt;/strong&gt;&lt;br&gt;For each subject, the script scans the subject metadata and automatically identifies the available visit labels (V1&amp;hellip;V4), then sorts them in visit order.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;3. Build CONN sessions as visit &amp;times; condition&lt;/strong&gt;&lt;br&gt;For each detected visit, look for functional runs corresponding to each condition in a fixed order:&lt;br&gt;PRE -&amp;gt; DURING (if available) -&amp;gt; POST&lt;br&gt;Each available run becomes one CONN &amp;ldquo;session&amp;rdquo;, so the final session list is:&lt;br&gt;V1_PRE, V1_DURING, V1_POST, V2_PRE, &amp;hellip; (depending on availability).&lt;/p&gt;&lt;br /&gt;
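To make the intended session ordering concrete, here is a minimal sketch of the visit &#215; condition enumeration described above (Python is used purely for illustration; my actual batch script is MATLAB, and the availability structure shown is a hypothetical example):

```python
# Fixed condition order from the post: PRE -> DURING (if available) -> POST
CONDITION_ORDER = ["PRE", "DURING", "POST"]

def build_session_list(available):
    """available: dict mapping visit label (e.g. 'V1') -> set of conditions
    found on disk. Returns CONN session names in visit order, with
    conditions in the fixed PRE -> DURING -> POST order."""
    sessions = []
    for visit in sorted(available):       # V1, V2, ... in visit order
        for cond in CONDITION_ORDER:      # skip conditions not acquired
            if cond in available[visit]:
                sessions.append(f"{visit}_{cond}")
    return sessions

# Example: a subject with two visits, DURING only acquired at V1
print(build_session_list({"V1": {"PRE", "DURING", "POST"},
                          "V2": {"PRE", "POST"}}))
# -> ['V1_PRE', 'V1_DURING', 'V1_POST', 'V2_PRE', 'V2_POST']
```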
&lt;p&gt;&lt;em&gt;My intent was to keep a transparent mapping between acquisition blocks and CONN sessions. In your opinion, for a multi-visit setup, is it better to:&lt;/em&gt;&lt;/p&gt;&lt;br /&gt;
&lt;ul&gt;&lt;br /&gt;
&lt;li&gt;&lt;em&gt;keep conditions identical across visits (PRE/DURING/POST), &lt;/em&gt;&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;&lt;em&gt;or define visit-specific conditions (V1_PRE, V2_PRE, etc.)?&lt;/em&gt;&lt;/li&gt;&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;4. One CONN project per subject (multi-session)&lt;/strong&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;For each subject, I create one CONN project containing all that subject&amp;rsquo;s sessions (sessions = visit &amp;times; condition).&lt;/p&gt;&lt;br /&gt;
&lt;p class=&quot;isSelectedEnd&quot;&gt;&lt;em&gt;This was mainly to simplify handling variable session counts and reduce failures when subjects are &amp;ldquo;incomplete&amp;rdquo; relative to others. However, I&amp;rsquo;m unsure if this is a good design for later group-level analyses and contrasts. Would you recommend instead:&lt;/em&gt;&lt;/p&gt;&lt;br /&gt;
&lt;ul&gt;&lt;br /&gt;
&lt;li&gt;&lt;em&gt;one project per dataset (all subjects in that dataset), or&lt;/em&gt;&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;&lt;em&gt;a single project for the full mega-analysis?&lt;/em&gt;&lt;/li&gt;&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;5. Structural assignment&lt;/strong&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;At the moment, I provide one T1 per subject (selected from available visits, usually the earliest valid T1).&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;em&gt;I initially tried using one T1 per visit/session, but it produced repeated segmentation outputs across visits (multiple c0/c1/c2 generations), and in the CONN GUI the structural preview looked distorted/tilted compared to the original T1.&lt;br&gt;From a best-practice perspective for multi-visit data: should I insist on using the visit-matched T1 (one per visit), or is one T1 per subject acceptable/preferable in CONN?&lt;br&gt;&lt;/em&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;6. TR handling&lt;/strong&gt;&lt;br&gt;The script reads TR from the JSON (RepetitionTime) when available; otherwise it falls back to a TR value from an external table. It also checks within-subject consistency across sessions.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;em&gt;In one case, a single session had a different TR in JSON (e.g., 2.5 s vs 3.0 s in other sessions). The JSONs were generated from DICOM headers and slice timing info was missing for that Philips scanner.&lt;br&gt;&lt;/em&gt;&lt;em&gt;I&amp;rsquo;m not sure if this reflects conversion/metadata issues or true acquisition differences that should be modeled explicitly.&lt;/em&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;7. STC strategy&lt;/strong&gt;&lt;br&gt;If SliceTiming is available in the JSON, STC is run using the BIDS SliceTiming.&lt;br&gt;If SliceTiming is missing, the script falls back to a user-defined slice order (from an Excel file).&lt;br&gt;If neither is available, STC is skipped for that subject (and recorded in the log).&lt;/p&gt;&lt;br /&gt;
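For clarity, the fallback logic of points 6-7 can be sketched as follows (Python for illustration only; my actual script is MATLAB. RepetitionTime and SliceTiming are the standard BIDS sidecar keys, while the external-table fallbacks are my own lookup tables, shown here as plain dicts):

```python
import json
import os

def get_tr(json_path, tr_table, key):
    """TR from the BIDS JSON sidecar if present, else from an external table."""
    if os.path.exists(json_path):
        with open(json_path) as f:
            meta = json.load(f)
        if "RepetitionTime" in meta:
            return float(meta["RepetitionTime"])
    return tr_table.get(key)  # may be None -> flag for manual review

def get_slice_timing(json_path, slice_order_table, key):
    """Returns ('bids', timings), ('order', slice_order), or (None, None)
    meaning STC is skipped for this subject and logged."""
    if os.path.exists(json_path):
        with open(json_path) as f:
            meta = json.load(f)
        if meta.get("SliceTiming"):
            return "bids", meta["SliceTiming"]
    if key in slice_order_table:
        return "order", slice_order_table[key]
    return None, None
```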
&lt;p&gt;&lt;strong&gt;8. Preprocessing&lt;/strong&gt;&lt;br&gt;I run an explicit steps list that matches CONN&amp;rsquo;s default MNI pipeline order, with functional_removescans explicitly placed first.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;In practice:&lt;/p&gt;&lt;br /&gt;
&lt;ul&gt;&lt;br /&gt;
&lt;li&gt;If STC is available: removescans -&amp;gt; functional centering -&amp;gt; realign/unwarp -&amp;gt; slice-timing -&amp;gt; ART -&amp;gt; direct functional segmentation+normalization -&amp;gt; smoothing -&amp;gt; structural centering -&amp;gt; structural segmentation+normalization&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;If STC is not available: same order, but without the slice-timing step.&lt;/li&gt;&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;p&gt;------------------------------&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;A) &lt;/strong&gt;Main issue I&amp;rsquo;m currently debugging: intermittent &amp;ldquo;mean functional file not found&amp;rdquo;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;When running this multi-session setup through the default-MNI-style pipeline with mean-based coregistration enabled, I intermittently get:&lt;br&gt;&amp;ldquo;Mean functional file not found (associated with functional data &amp;hellip; au*.nii)&amp;rdquo;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;It generates a mean image for the first session (V1_PRE), but then fails to find/generate the expected mean for later sessions (e.g., V1_DURING/POST, V2 sessions).&lt;br&gt;Have you seen this behavior before, and do you have any idea what in the multi-session setup could trigger it (e.g., project structure, session ordering, conditions, or something else)?&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;I would like to keep mean-based coregistration (i.e., not switch to coregtomean=0). As a temporary workaround, I handle this specific failure in the script as follows:&lt;/p&gt;&lt;br /&gt;
&lt;ul&gt;&lt;br /&gt;
&lt;li&gt;I run conn_batch in a try/catch.&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;If the error message contains &amp;ldquo;Mean functional file not found (associated with functional data &amp;hellip; au*.nii)&amp;rdquo;, I parse the missing au*.nii path from the message.&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;I generate the mean directly from that 4D file using SPM (mean across volumes, saved in the same folder).&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;I retry conn_batch once.&lt;/li&gt;&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
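The retry logic in these steps looks roughly like this (a sketch in Python for readability; the real script calls conn_batch in MATLAB. The error-message format is the one quoted above, and run_conn_batch / make_mean_image are hypothetical stand-ins for the MATLAB calls):

```python
import re

# Matches the quoted CONN error and captures the missing au*.nii path
MEAN_ERR = re.compile(
    r"Mean functional file not found \(associated with functional data "
    r"(.*?au[^)\s]*\.nii)\)")

def run_with_mean_fallback(run_conn_batch, make_mean_image):
    try:
        run_conn_batch()
    except RuntimeError as err:        # exception type is illustrative
        m = MEAN_ERR.search(str(err))
        if m is None:
            raise                      # unrelated error: do not mask it
        make_mean_image(m.group(1))    # mean across volumes, same folder
        run_conn_batch()               # retry exactly once
```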
&lt;p&gt;This resolves the failure. From the output files, it seems that CONN (with direct normalization) uses the V1_PRE &amp;ldquo;reference mean functional&amp;rdquo; to estimate the normalization and then applies that transform to all sessions. In that case, the later sessions do not have session-specific ART-related outputs (e.g., y_art_mean_au&amp;hellip; and related files); I am not sure whether this is correct.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Related questions:&lt;/p&gt;&lt;br /&gt;
&lt;ul&gt;&lt;br /&gt;
&lt;li&gt;Is direct functional segmentation+normalization the recommended choice here, or would indirect normalization be better practice in a multi-visit context (and why)?&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;Do you recommend one T1 per subject or one T1 per visit when each visit has its own T1?&lt;/li&gt;&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;p&gt;------------------------------&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;B) &lt;/strong&gt;Smoothing question&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Most datasets have native voxel sizes around ~3&amp;ndash;3.75 mm, but I also have one higher-resolution dataset (1&amp;times;1&amp;times;2 mm). We already discussed this in a previous post; however, the study author suggested that an 8 mm kernel may over-smooth their data. In your opinion, should I:&lt;/p&gt;&lt;br /&gt;
&lt;ul&gt;&lt;br /&gt;
&lt;li&gt;keep 8 mm for all datasets for maximal consistency, and later model site/dataset effects (e.g., covariates/ComBat), or&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;use a smaller kernel (e.g., 6 mm) for all datasets, or&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;use different kernels by dataset (e.g., 6 mm for the high-resolution dataset and 8 mm for the others), and then handle this difference via harmonization/covariates, or&lt;/li&gt;&lt;br /&gt;
&lt;li&gt;run two pipelines (6 mm and 8 mm) and compare the stability of results?&lt;/li&gt;&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;------------------------------&lt;/strong&gt;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;C)&amp;nbsp;&lt;/strong&gt;If you think my project organization (one project per subject vs per dataset) or the way I define sessions/conditions needs revision, I would be very grateful for your recommendations on how to structure CONN batch preprocessing for this multi-dataset mega-analysis.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;------------------------------&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;D) &lt;/strong&gt;More broadly, I&amp;rsquo;d appreciate any critique of my current strategy (design choices, weak points, ...) and how you would recommend I proceed.&amp;nbsp;&lt;br&gt;This is my first attempt working with fMRI and CONN, so I may be missing best practices for handling study-level parameters (active/sham, site, montage, dose, etc.). My assumption is that most of those variables will be handled after preprocessing, unless you think they should influence preprocessing decisions.&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;------------------------------&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Thank you very much,&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Best regards,&lt;br&gt;Shiva&lt;/p&gt;</description>
   <author>shiinsaad</author>
   <pubDate>Thu, 05 Mar 2026 15:13:37 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15967&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>The first 10 volumes</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15966&amp;forum_id=1144</link>
   <description>&lt;p&gt;Hi Alfonso,&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;In the default preprocessing pipeline, is the step of removing the first 10 volumes of resting-state data automatically included (but not seen in the list of the pipeline)? Or, is this removal step not needed for the preprocessing and ART methods in CONN?&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;Thanks for your comment on this,&lt;/p&gt;&lt;br /&gt;
&lt;p&gt;meikei&lt;/p&gt;</description>
   <author>meikei leung</author>
   <pubDate>Thu, 05 Mar 2026 3:14:48 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15966&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>Should I re-run motion/WM/CSF regression in CONN if already done in FSL?</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15965&amp;forum_id=1144</link>
   <description>&amp;lt;p data-start=&amp;quot;214&amp;quot; data-end=&amp;quot;219&amp;quot;&amp;gt;Hi,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p data-start=&amp;quot;221&amp;quot; data-end=&amp;quot;530&amp;quot;&amp;gt;I am trying to import preprocessed BOLD data (already in MNI space) into CONN. The data have been slice-time, motion, and distortion corrected. In my preprocessing pipeline (using FSL commands), I also regressed out motion parameters and nuisance signals (CSF and WM), producing residualized BOLD time series.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p data-start=&amp;quot;532&amp;quot; data-end=&amp;quot;740&amp;quot;&amp;gt;My question is:&amp;lt;br data-start=&amp;quot;547&amp;quot; data-end=&amp;quot;550&amp;quot;&amp;gt;When importing into CONN (Setup step), should I input the data before&amp;amp;nbsp;nuisance regression and allow CONN to handle denoising, or is it appropriate to input the already residualized data?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p data-start=&amp;quot;742&amp;quot; data-end=&amp;quot;832&amp;quot;&amp;gt;I want to avoid double regression or otherwise interfering with CONN&amp;amp;rsquo;s denoising pipeline.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p data-start=&amp;quot;834&amp;quot; data-end=&amp;quot;862&amp;quot;&amp;gt;Thank you for the hep!&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p data-start=&amp;quot;864&amp;quot; data-end=&amp;quot;879&amp;quot;&amp;gt;Best,&amp;lt;br data-start=&amp;quot;869&amp;quot; data-end=&amp;quot;872&amp;quot;&amp;gt;Gabby&amp;lt;/p&amp;gt;</description>
   <author>Gabby Guadalupe</author>
   <pubDate>Wed, 04 Mar 2026 16:37:25 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15965&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>RE: Multiple Conn remote connection issues</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15949&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;HI Alfonso,&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Regarding the Ubuntu cluster, I was able to talk to my PI about the configuration. For the cluster, there is a first password for the cluster itself, which is the one I enter in the CONN and Matlab interface. His explanation is that the second password is needed for a Kerberos system to connect to the accompanying NAS storage. This is the process that requires a second, department-level password. I am able to enter that second password in a terminal SSH session, but cannot do so through the Matlab and CONN interfaces to date.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Regarding the Windows PC connection, I tested the scp command you gave me and received the following:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;(base) ➜&amp;lt;span class=&amp;quot;Apple-converted-space&amp;quot;&amp;gt;&amp;amp;nbsp; &amp;lt;/span&amp;gt;~ scp jnnea@100.74.53.80: ~/connserverinfo.json&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;jnnea@100.74.53.80's password:&amp;lt;span class=&amp;quot;Apple-converted-space&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;scp: download ./: not a regular file&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Let me know what thoughts you might have to troubleshoot these connections further.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thanks again,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Josh&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Originally posted by Alfonso Nieto-Castanon:&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hi Josh,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Regarding the Ubuntu cluster connection, from your description it seems likely that the system is getting stuck in that &amp;quot;second password&amp;quot; procedure. Can you please provide more details (or point me to your cluster documentation) about how this double-password system works? (e.g. at BU, for example, we have a two-factor authentication procedure in place, but that seems to work fine when using CONN to connect remotely -you would simply get that same prompt requesting to choose a 2FA method in the Matlab command-window when connecting remotely, and after answering that prompt everything continues normally).&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;And regarding connecting to your Windows PC, the &amp;quot;reading configuration information&amp;quot; step is trying (and failing) to use scp to download that configuration file from your Windows machine. Can you check from a terminal window on your Mac computer the command:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;scp jnnea@100.74.53.80:~/connserverinfo.json .&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;and see whether that works fine? (one possibility would be that the Windows PC might have a ssh but not scp server active?)&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hope this helps&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Alfonso&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Originally posted by Joshua Neal:&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hello,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;I'm reaching out because I have a M1 Macbook Air running Matlab R2024b and CONN toolbox 22.v2407. I have been attempting to use it as the client computer for either of two different computers - an Ubuntu Cluster, and a personal Windows PC.&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;For the Ubuntu cluster, the remote connection works up to the &amp;quot;This may take a few minutes, please be patient as your job currently sits in a queue. CONN will resume automatically when the new Matlab session becomes available&amp;quot; prompt. When I manually ssh into this cluster, I have to enter two separate passwords - first a general password, then a university account specific password. For the Conn remote connection I have been entering the first password, but am not given a prompt to enter a second. Is there a configuration option on my end that I could utilize to get both passwords entered and fulfill the connection?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;For the Windows PC, I simply get the following error when attempting the remote login:&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;quot;Reading configuration information from jnnea@100.74.53.80:~/connserverinfo.json to /Users/joshneal/.conn_cache/conncache_27de0c9542e2f7f30a24833d879248a1&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;Unable to find CONN distribution in jnnea@100.74.53.80.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;If this is your first time connecting to 100.74.53.80, please use the following steps to confirm that CONN is available there and then try connecting again:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;Apple-converted-space&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/span&amp;gt;1. Log in to 100.74.53.80 as user jnnea&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;Apple-converted-space&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/span&amp;gt;2. Launch Matlab&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;Apple-converted-space&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/span&amp;gt;3. From Matlab command-window type &amp;quot;conn remotely setup&amp;quot; (without quotes) and confirm that the file ~/connserverinfo.json is created&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;Apple-converted-space&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/span&amp;gt;4. Log out from 100.74.53.80&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;(see &amp;quot;in the server computer&amp;quot; section at https://web.conn-toolbox.org/resources/remote-configuration for details)&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p2&amp;quot;&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;Try connecting again to jnnea@100.74.53.80? (yes|no) :&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;This has occurred multiple times, in spite of generating the connserverinfo.json file repeatedly, including through an ssh connection from the Mac to the PC. Is there a compatibility issue with Windows as the CONN server, or is there another troubleshooting process I can try here?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Appreciate any help with either or both connections.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;Thanks,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p class=&amp;quot;p1&amp;quot;&amp;gt;Josh&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;</description>
   <author>Joshua Neal</author>
   <pubDate>Mon, 02 Mar 2026 15:29:56 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15949&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>Deleting imported files</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15963&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;Hello,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;I imported data for all subjects with four sessions each. For one subject, however, I need to exclude two sessions from the analysis.&amp;amp;nbsp;I initially tried removing the corresponding files from the file paths, but this now causes errors in CONN.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Is there a proper way to remove specific sessions from a subject within the project so that these two sessions are no longer included in the analysis?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thank you.&amp;lt;/p&amp;gt;</description>
   <author>Mara Britt Neumann</author>
   <pubDate>Mon, 02 Mar 2026 14:30:45 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15963&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>RE: degrees of freedom</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=9675&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;Hi Alfonso,&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Is there an easy way to output the residual DOF after denoising at the subject level? Or to get the average DOF across subjects after denoising? I used the 2019b version of CONN. Thank you!&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Best,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Amy&amp;lt;/p&amp;gt;</description>
   <author>Amy Wang</author>
   <pubDate>Sun, 01 Mar 2026 21:47:02 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=9675&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>RE: Session specific re-alignment</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=12426&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;Hi Jeff,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;In some of the latest releases we added to the default pipeline a &amp;quot;session-specific centering&amp;quot; step before realignment (which applies a simple translation to all functional runs before running realignment), which in our experience helped make realignment more robust, avoiding or minimizing this sort of issue. Have you tried whether that helps in your case as well? (alternatively, of course, you can run preprocessing separately for each session, as Herberto was doing and similarly to what one would do in the context of longitudinal data, but that is often reserved for cases where the anatomy itself may be expected to have changed between sessions, as in long developmental datasets, pre/post-surgery designs, etc.; and by the way, the way to run preprocessing only within a single session in batch mode is to set &amp;lt;em&amp;gt;BATCH.Setup.preprocessing.sessions=session_number&amp;lt;/em&amp;gt;)&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hope this helps&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Alfonso&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Originally posted by Jeff Browndyke:&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hi,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Did you ever find a solution to this problem and how did you import the separately realigned and processed files back into CONN?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;I've been running into this problem for quite a few people in our pipeline.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thanks,&amp;lt;br&amp;gt;Jeff&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;</description>
   <author>Alfonso Nieto-Castanon</author>
   <pubDate>Sat, 28 Feb 2026 12:48:21 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=12426&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>RE: Session specific re-alignment</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=12426&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;Hi,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Did you ever find a solution to this problem and how did you import the separately realigned and processed files back into CONN?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;I've been running into this problem for quite a few people in our pipeline.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thanks,&amp;lt;br&amp;gt;Jeff&amp;lt;/p&amp;gt;</description>
   <author>Jeff Browndyke</author>
   <pubDate>Fri, 27 Feb 2026 18:29:09 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=12426&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>RE: Error with conn_jobmanager / parallel processing fails</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15956&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thanks again, Alfonso. I updated my CONN with the patch and tried running in parallel again and it worked perfectly without any errors. To answer your question: I didn't use a batch script to run the parallel process -- I simply selected &amp;quot;distributed processing (run on Background process (Windows))&amp;quot; in the GUI, specified the number of parallel processes (I tried it several times with the number ranging from 2-10 and got the error every time), and which outlier detection thresholds to use (I always chose default intermediate). That was it. I hope this is helpful.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;&amp;lt;br&amp;gt;Thanks again for your help!&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Jeff&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Originally posted by Alfonso Nieto-Castanon:&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;&amp;quot;&amp;gt;for some reason I am not able to attach the patch here, in any case the current development version of CONN in github (github.com/alfnie/conn) now includes this fix (and this specific patch is in the file conn.m)&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Best&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Alfonso&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Originally posted by Alfonso Nieto-Castanon:&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hi Jeff,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thanks for the additional info, this was indeed quite a strange error to debug! if I am interpreting correctly the issue was that in your project info (somehow, I am not sure yet how this happened) the project filename is stored as conn_n21_parallel (the folder) instead of conn_n21_parallel.mat (the file) and that is creating some problems down the stream when running things in parallel. The attached patch should be able to fix this issue, but to make sure I am not missing anything (and to avoid others from running into the same issue) if you could please let me know some specific details on how the parallel process was run (e.g. did you use a batch script) that would be very helpful.&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;(note: this patch is for release 25b, to install it simply copy the attached file to your CONN distribution folder overwriting the file with the same name there)&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Best&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Alfonso&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Originally posted by johnsojp13:&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hi Alfonso,&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thanks for your response. To keep things moving forward, I ran outlier detection overnight using the standard, non-parallelized method. But I also tried running a version in parallel just now and the logs are still showing the same errors, so I'm attaching the qlog subdirectory you requested.&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Thank you,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Jeff&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;amp;nbsp;&amp;lt;/p&amp;gt;</description>
   <author>johnsojp13</author>
   <pubDate>Thu, 26 Feb 2026 16:38:20 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15956&amp;forum_id=1144</guid>
  </item>
  <item>
   <title>RE: Importing custom ROIs in MNI space</title>
   <link>http://www.nitrc.org/forum/forum.php?thread_id=15961&amp;forum_id=1144</link>
   <description>&amp;lt;p&amp;gt;Hi Fabian,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Yes, you can import your ROI in MNI space in the Setup step of CONN.&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;If you plan a group analysis, you need to normalize your subject images to MNI space (with fMRIPrep before importing into CONN, or with the 'preprocessing' step inside CONN). Then your images will be in MNI space, and so in the same space as your ROI.&amp;amp;nbsp;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;If you plan to run only first-level analyses (meaning at the individual level), you don't need to normalize the subject images, but your ROI should then be imported in subject space. I don't know if you can do that inside CONN.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Hope it helps,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Isabelle&amp;amp;nbsp;&amp;lt;/p&amp;gt;</description>
   <author>Isabelle Faillenot</author>
   <pubDate>Thu, 26 Feb 2026 13:39:37 GMT</pubDate>
   <guid>http://www.nitrc.org/forum/forum.php?thread_id=15961&amp;forum_id=1144</guid>
  </item>
 </channel>
</rss>
