Suggestions for HCP Data
Jan 19, 2021  08:01 PM | Steven Meisler - Harvard University / MIT
Suggestions for HCP Data
Hi,

Do you have any suggestions for using DTIPrep on HCP datasets? A typical diffusion dataset for a subject includes multiple pairs of AP-PA full sequences, along with sbrefs and a fieldmap acquisition. The HCP processing pipeline usually averages the AP-PA pairs together while denoising, and I imagine this would be thrown off if one removes slices with DTIPrep beforehand. Happy to provide more info as needed. Any guidance on this?

Thanks,
Steven
Jan 19, 2021  08:01 PM | Martin Styner
RE: Suggestions for HCP Data
Hi Steven

Good question. Most of our new studies have HCP-like sequences, and thus DTIPrep is insufficient for the full preprocessing pipeline (as it does not provide any susceptibility correction). So a new version of DTIPrep is needed, one that interfaces with FSL (or other toolboxes) to achieve susceptibility correction.

And I am really not a fan of the old-FSL-style processing that averages the AP-PA sequences. The "new"-style processing instead corrects susceptibility artifacts in the individual AP & PA volumes but does not average them. This allows you, for example, to reject a bad volume in AP while keeping the corresponding good volume in PA. It also allows the processing of incompletely acquired data (where the subject had to leave the scanner early).
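
For illustration, in FSL terms keeping AP and PA as separate, unaveraged series largely comes down to how the acqparams.txt and index.txt files handed to topup/eddy are written. A minimal sketch (the 0.0586 s readout time is just a placeholder; use the value from your protocol):

    acqparams.txt -- one row per phase-encode scheme (x y z total_readout_time);
    here row 1 is AP, row 2 is PA:
        0 -1 0 0.0586
        0  1 0 0.0586

    index.txt -- one entry per volume in the 4D input to eddy, giving the
    acqparams.txt row that applies to it, e.g. for 99 AP volumes followed by
    99 PA volumes:
        1 1 1 ... 2 2 2 ...

Rejecting a bad AP volume then just means dropping that volume and its index entry, without touching the PA series.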

Furthermore, the newer FSL versions (6.0.1+) include powerful correction methods in eddy (FSL's eddy-current and motion correction tool), in particular the ability to incorporate the computed susceptibility-undistortion field into the eddy-current/motion correction, as well as to interpolate bad data. But I think it would be bad practice to use a volume that has significant artifacts throughout the image for the purpose of interpolation (i.e. it's better to just reject that whole bad image).
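
For illustration, a minimal sketch of such an eddy call (names like dwi_all, nodif_brain_mask, and topup_results are placeholders, not the exact names from our scripts; use eddy_openmp or eddy_cuda depending on your build):

    eddy --imain=dwi_all --mask=nodif_brain_mask \
         --acqp=acqparams.txt --index=index.txt \
         --bvecs=bvecs --bvals=bvals \
         --topup=topup_results --repol \
         --out=eddy_corrected

Here --topup pulls in the susceptibility field estimated by topup, and --repol turns on the outlier replacement (interpolation of bad slices).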

So our most up-to-date (script-based) processing starts with a conservative DTIPrep, which removes the really bad volumes but does not do any motion/eddy-current correction. The data are then converted to NIfTI and passed to FSL 6.0.3 for a topup/eddy-based correction of susceptibility, motion, and eddy currents, plus interpolation of the remaining bad data.
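
Roughly, the topup stage that precedes the eddy call sketched above looks like this (again a sketch with placeholder file names, not our actual scripts):

    # merge one (or more) b0 volumes from each phase-encode direction
    fslmerge -t AP_PA_b0 b0_AP b0_PA
    # estimate the susceptibility field
    topup --imain=AP_PA_b0 --datain=acqparams.txt --config=b02b0.cnf \
          --out=topup_results --iout=hifi_b0
    # brain mask from the corrected b0, for use as eddy's --mask
    fslmaths hifi_b0 -Tmean hifi_b0_mean
    bet hifi_b0_mean nodif_brain -m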

Happy to share my scripts, if interested.

Best
Martin
Jan 19, 2021  09:01 PM | Steven Meisler - Harvard University / MIT
RE: Suggestions for HCP Data
Hi Martin,

Thanks for the quick reply! I also use a conservative DTIPrep implementation, only running slice-wise and interlace checks with pretty strict thresholds to throw out bad volumes. If you are willing to share your scripts, I'd like to take a look at them. Ideally, I would like to use the QCed files as inputs to QSIPrep. To this end, since I do not want to concatenate the AP-PA series together (for computational efficiency), I would probably need the AP-PA sequences to have the same amount of data, so I would be interested in applying an interpolation technique. In your opinion, would interpolation be better applied after eddy/topup have been run?
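
As a quick sanity check, something like the following (with placeholder file names) would show whether the AP and PA series still have matching volume counts after QC:

    fslnvols dwi_AP.nii.gz
    fslnvols dwi_PA.nii.gz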

Best,
Steven
May 19, 2021  10:05 AM | Gustavo Sudre
RE: Suggestions for HCP Data
Martin,

Just jumping into the discussion here! Could you make those scripts available? I'm working with data from a couple of different studies that have similar sequences, so it would be great to take a look at those scripts and see how they prepare the data for eddy!

Thanks a bunch,

Gustavo
Oct 23, 2021  11:10 AM | neda mohammadi
RE: Suggestions for HCP Data
Hi Martin,

Sorry, I have a question about installing DTIPrep on Ubuntu 20.04 alongside the MRtrix software; would you please help me? I posted a question in this forum...

Thanks in advance
Neda
Nov 9, 2021  02:11 PM | Martin Styner
RE: Suggestions for HCP Data
Sorry this took a bit; I had not checked the forum in a while.

Attached is the tcsh script that I ran to combine DTIPrep, topup, and eddy (FSL 6.0.3) in one of our studies.
Best
Martin
