RE: Potential bugs in preprocessing pipeline
Oct 10, 2018  08:10 PM | Alfonso Nieto-Castanon - Boston University
Dear Pravesh,

My apologies for the late reply (and sorry that you needed some persistence to get my attention! I really appreciate your helpful contributions to the forum, not only with your own interesting questions but also helping others out, so if I haven't said so yet thanks a ton for that!)

Regarding the "slice-timing correction" question: first, you are exactly right, a few of those issues are related to SPM having a somewhat dual way of specifying slice-timing information for this procedure. In SPM you can either:

a) specify the slice-order vector (a sequence of indices indicating which slice is acquired at each point in time, in the order they were acquired; this syntax does not work for multi-band sequences, since there is no way to indicate that two slices were acquired simultaneously) and then specify the reference slice (the slice you want to use as reference; implicitly this forces the reference to be an actual slice); or

b) specify the slice-timing vector (a sequence of times indicating the acquisition time of each slice; this syntax works perfectly well for multi-band sequences) and then specify the reference time (which does not need to be the acquisition time of an actual slice).

This somewhat convoluted dual syntax is due to the latter option being a recent addition to SPM, introduced to support slice-timing correction for multi-band sequences (the former option was kept working as well, mainly for backwards compatibility with older scripts and software packages). CONN maintains compatibility with this dual syntax and allows you to specify either slice-order or slice-timing vectors. In the former case CONN selects the slice closest to the average acquisition time as the reference slice, and in the latter case CONN selects the average of the slice acquisition times as the reference time (not necessarily an actual slice acquisition time). You are right that this creates the somewhat unfortunate inconsistency that the same procedure on the same data will lead to slight differences (due to different reference times being used) depending on whether you use the "slice-order" or the "slice-timing" syntax to characterize your data.
For example, when using a BIDS dataset that includes a "SliceTiming" field in the sidecar .json files (or when importing DICOMs that have a "MosaicRefAcqTimes" field) describing the acquisition times, CONN will pass this information using SPM's "slice-timing" syntax, whereas when you use any of the standard "ascending/descending/interleaved/etc." descriptors to specify the slice acquisition order, CONN will convert these to slice-order sequences and use SPM's "slice-order" syntax instead. Depending on which case applies, the reference will be selected as either the average acquisition time within a TR or the slice acquired closest to mid-TR.
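As a concrete check, here is an illustrative Python sketch (not CONN's actual code) using the 42-slice sidecar timings from the example discussed later in this thread:

```python
# Slice acquisition times (s) from the example BIDS sidecar quoted later
# in this thread (42 slices).
times = [2.225, 2.17, 2.1175, 2.0625, 2.0075, 1.9525, 1.9, 1.845,
         1.79, 1.7375, 1.6825, 1.6275, 1.5725, 1.52, 1.465, 1.41,
         1.3575, 1.3025, 1.2475, 1.1925, 1.14, 1.085, 1.03, 0.9775,
         0.9225, 0.8675, 0.8125, 0.76, 0.705, 0.65, 0.5975, 0.5425,
         0.4875, 0.4325, 0.38, 0.325, 0.27, 0.2175, 0.1625, 0.1075,
         0.0525, 0.0]

# "slice-timing" syntax: reference = mean acquisition time (~1.1121 s),
# not necessarily the acquisition time of any actual slice
ref_time = sum(times) / len(times)

# "slice-order" syntax: reference = an actual slice (here slice #21, 1.1400 s)
ref_slice_time = times[20]
```

Here ref_time (about 1.1121 s) and ref_slice_time (1.1400 s) differ by under 30 ms, which is exactly the small inconsistency described above.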

In any case, thanks very much for pointing this issue out. I believe the best compromise would be to have CONN always use SPM's "slice-timing" syntax internally (e.g. if a user specifies the slice-order vector, CONN can translate that to slice acquisition times and feed that information to SPM), which would then allow us to explicitly choose a reference time that is consistent across all use cases. My choice would be to use the average acquisition time in all cases (irrespective of whether that time coincides with the acquisition of an actual slice), rather than trying to match the reference time to the actual acquisition time of one slice. This seems a reasonable quantitative choice, since it makes the distribution of shift/correction times that SPM needs to apply centered and of minimal variance, and it also yields a unique solution without having to round or choose between equally distant slices (e.g. when you have an even number of slices). But please let me know your thoughts/comments/suggestions.
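The proposed compromise could be sketched as follows (hypothetical illustrative code, assuming uniform slice spacing; the function names are mine, not CONN's):

```python
def order_to_times(slice_order, TR):
    # Translate a slice-order vector into per-slice acquisition times,
    # assuming uniform spacing: the k-th acquired slice starts at k*TR/n.
    n = len(slice_order)
    dt = TR / n
    times = [0.0] * n
    for k, s in enumerate(slice_order):  # the k-th acquisition is slice s
        times[s - 1] = k * dt
    return times

def reference_time(times):
    # Proposed rule: always use the mean acquisition time as the reference,
    # regardless of whether the user supplied slice orders or slice timings.
    return sum(times) / len(times)
```

For a 42-slice acquisition with a hypothetical TR of 2.28 s, reference_time(order_to_times(slice_order, 2.28)) equals (TR - TR/42)/2 whether slice_order is ascending or descending, which is the consistency this change is meant to achieve.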

Also, regarding the 21 vs. 22 selected slice depending on ascending vs. descending order: when using the "slice-order" syntax, CONN will select as reference the floor(nslices/2)-th acquired slice. This translates to slice #21 if the slice order is 1,2,...,42, or slice #22 if the slice order is 42,41,...,1. Again, with the proposed change, in both cases the reference time would be selected as (TR-TR/nslices)/2 (which falls right between the 21st and 22nd slices).
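The current rule can be illustrated with a small sketch (again just illustrative Python, not CONN's implementation):

```python
def current_reference_slice(slice_order):
    # Current rule under the "slice-order" syntax: pick the
    # floor(n/2)-th acquired slice (counting from 1) as reference.
    n = len(slice_order)
    return slice_order[n // 2 - 1]

ascending = list(range(1, 43))       # 1, 2, ..., 42
descending = list(range(42, 0, -1))  # 42, 41, ..., 1
```

current_reference_slice(ascending) returns 21 while current_reference_slice(descending) returns 22, reproducing the asymmetry; the proposed change would replace both with the single reference time (TR-TR/nslices)/2.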

Regarding the "indirect normalization" question: that is rather unusual. I have tried to replicate that behavior but have not been able to do so. Since in the "indirect normalization" pipeline the structural is normalized/segmented directly (without any further modification), the resulting normalized structural images should be very similar, if not identical, to those resulting from the "direct normalization" pipeline (disregarding perhaps minor differences due to different initial states related to the additional centering step in the "direct normalization" pipeline), so I am not really sure what could possibly cause this cropping/masking effect. If you don't mind, perhaps you could send me your structural and mean-functional volumes (the ones reported by CONN in Matlab's command line as being selected for the indirect-normalization step) so I can try to replicate this behavior more directly?

And regarding the "CSF segmentation mask" question: the wc0* structural image for each subject is, at least in CONN, only used for display purposes. Its main purpose is to show a skull-stripped version of the normalized anatomical image, so the sum of the gray/white/CSF masks should basically leave out those areas identified as bone, soft tissue, or air. Of course you are right that, depending on what you plan to use that structural image for, other masks may be better suited to the task (even just for display purposes, having the eyeballs removed by not including wc3 in the mask formation would be a good idea, although I would worry about the removal of pial matter and ventricles, since those are closer to functionally relevant areas). In any case, if you want to modify this behavior in CONN you can do so by editing conn_setup_preproc.m in the lines that read "...expression='(i2+i3+i4).*i1'". That expression specifies an imcalc command to compute the wc0 mask: i1 corresponds to the structural (wc0*), and i2 to i4 correspond to gray/white/CSF, respectively (wc1*, wc2*, and wc3*). So, for example, changing the expression to "(i2+i3).*i1" would have the effect of masking out CSF from the resulting wc0 structural image.
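On toy one-dimensional arrays, the effect of changing that expression looks like the following (NumPy used purely for illustration; the actual computation is SPM's imcalc on the wc* volumes, and the voxel values here are made up):

```python
import numpy as np

# Four toy voxels: structural intensity plus GM/WM/CSF tissue probability maps.
i1 = np.array([100.0, 100.0, 100.0, 100.0])  # structural intensities
i2 = np.array([0.9, 0.0, 0.0, 0.0])          # gray matter (wc1*)
i3 = np.array([0.0, 0.8, 0.0, 0.0])          # white matter (wc2*)
i4 = np.array([0.0, 0.0, 0.7, 0.0])          # CSF (wc3*)

wc0_default = (i2 + i3 + i4) * i1  # default expression: keeps GM, WM, and CSF
wc0_no_csf  = (i2 + i3) * i1       # modified expression: CSF masked out too
```

The last voxel (bone/soft-tissue/air, where all tissue maps are zero) is removed in both cases; the CSF voxel survives only under the default expression.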

Hope this helps
Alfonso

Originally posted by Pravesh Parekh:
Dear Dr. Alfonso,

Thank you for your reply. It is always a learning experience when listening to your thoughts. Thank you for that (and of course, many thanks for Conn!).

Thank you for the link to work in progress. I have downloaded the experimental release and will try it out (look forward to Conn 18b!). Apart from the ongoing discussion about slice timing correction, I have also seen a strange behaviour when using the indirect normalization pipeline. I have appended details of that below the slice timing correction discussion.

Regarding slice timing correction:
Say that my data has 42 slices (data was acquired in an ascending manner). The JSON file associated with the data has the following slice timing information:

[2.225; 2.17; 2.1175; 2.0625; 2.0075; 1.9525; 1.9; 1.845; 1.79; 1.7375; 1.6825; 1.6275; 1.5725; 1.52; 1.465; 1.41; 1.3575; 1.3025; 1.2475; 1.1925; 1.14; 1.085; 1.03; 0.9775; 0.9225; 0.8675; 0.8125; 0.76; 0.705; 0.65; 0.5975; 0.5425; 0.4875; 0.4325; 0.38; 0.325; 0.27; 0.2175; 0.1625; 0.1075; 0.0525; 0]

Timings for a few important slices:
first slice = 0;
last slice = 2.225
middle slice (i.e. slice number 21) = 1.1400
mean timing = 1.1121

When I specify the order as ascending in the GUI and check the SPM batch, the reference slice number is 21. However, when I specify BIDS and check the batch, the reference slice (actually a timing) is 1.1121. This corresponds to the mean timing but does not correspond to the timing of any specific slice (it falls near the middle of slice 21). To get the same result as when specifying slices (rather than timings), shouldn't the reference timing be the timing of slice number 21 (the middle slice), i.e. 1.1400? Of course, the difference in timing is quite minor.

On a related note, I noticed that for the same number of slices (=42), the batch shows the reference slice as 21 for ascending acquisition and 22 for descending acquisition. I guess this must be because Conn is actually calculating the mean of the number of slices (just as in the timing case) and then rounding it up in the descending case and down in the ascending case (so that the resulting slice number is a valid slice). In the same vein, I was wondering if it would be better to specify the timing of a slice that actually exists.


Regarding indirect normalization:
I was trying out different preprocessing options and see a strange behaviour when running the indirect pipeline (I used the realignment and unwarping option without a phase map). For the test case, the functional data had a couple of cerebellar slices missing, while the structural data had a larger field of view and consequently covered the full brain. After normalization, the resulting structural image gets cropped from the bottom, resulting in an image similar to the functional image. This behaviour does not happen when using the direct normalization pipeline, and I was unable to replicate it when manually preprocessing the same image. I assumed that this was perhaps because of the ART mask being calculated from the functional volume; however, it persists even if I skip the ART outlier detection step during preprocessing. I have seen a similar problem with other subjects too. What could be causing this?

I have attached a snapshot showing the native-space structural and functional images, the normalized structural and functional images using the direct pipeline, and the normalized structural image using the indirect normalization pipeline.


Regarding inclusion of CSF segmentation file for calculating wc0 file:
On a different note, when calculating the wc0 (normalized, skull-stripped) image, Conn includes the GM, WM, and CSF files. It adds them up (resulting in a value of 1 in all brain-tissue areas) and then multiplies the sum with the structural image to get the wc0 file. However, this means that areas like the eyeballs will be included in the skull-stripped image. Would it be better to add only the c1 and c2 files, threshold (to get binary values), and then multiply with the structural image to get a skull-stripped image without the eyeballs and the "ring" of CSF around the brain? Of course, this does not matter too much, as all processing happens using either the c* images or the functional images.


Look forward to your thoughts

Best Regards
Pravesh

Originally posted by Alfonso Nieto-Castanon:
Dear Pravesh,

As always, thanks for your comments and feedback. The rationale for selecting the mid-time slice as reference when performing slice-timing correction in CONN is to minimize the average temporal displacement that needs to be corrected across all slices (each slice is corrected, i.e. time-shifted, by an amount equal to its actual acquisition time minus the acquisition time of the reference slice). In any case, perhaps I am missing something here, so please feel free to clarify why you believe that in this case selecting the mid-slice as reference could be more appropriate or preferable.

Also, regarding your previous message, sorry the patch I sent you had too many dependencies with other version-18b changes. If you do not mind, please feel free to download the code in https://www.conn-toolbox.org/resources/s... to get the current development version -that already includes the patch that I sent you- (the final 18b version should be released in the next few weeks and that will become available as always here at nitrc.org)
 
Thanks
Alfonso
Originally posted by Pravesh Parekh:
Dear Dr. Alfonso,

I think there may be another bug in the preprocessing pipeline. When performing slice timing correction and selecting BIDS to pick up the slice order (actually timing), the reference slice is specified as half the last slice time, i.e. if the last slice was acquired at 2250 ms, then the reference slice is specified as 1125 ms. I am assuming that, instead of this, the timing of the middle slice is what should be picked up.


Regards
Pravesh
