[Mrtrix-discussion] Crossing-fibers gray matter CSD

Donald Tournier d.tournier at brain.org.au
Tue Apr 7 22:20:25 PDT 2009


Hi Vinod,

Yes, the 'reference/template' image can be any image, any voxel size,
any orientation. There's actually no reason why the tracks can't be
mapped onto a different image: the coordinates of each point on the
tracks are stored in real space, with no reference to the original
data set. Obviously, the volume of space that the template image
occupies must encompass the tracts you are interested in (this volume
is determined by the combination of dimensions, voxel sizes, and
transformation matrix).
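
As a rough sanity check against the reference.nii header quoted further
down in this thread: 64 x 25 x 64 voxels at 5 mm isotropic gives a field
of view of roughly 320 x 125 x 320 mm, positioned in real space by the
translation column of the transform. Any tracks falling outside that
volume simply won't appear in the output map.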

The tracks2prob command only uses the header information from the
template image to create the output image, so as long as it's in a
supported format, it can be used. You can supply the MNI template if
you wish, but it will probably not work as expected unless it's
already been coregistered with your DWI data. In other words,
tracks2prob won't magically 'transform' the tracks into MNI space - it
will simply map the tracks onto an image with the same dimensions,
voxel sizes and transformation matrix as that MNI template image.
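
So a typical call would look something like this (filenames purely for
illustration, with template.nii already coregistered to the DWI data):

  tracks2prob CST.tck template.nii CST_map.nii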

I hope I'm making sense...
Cheers,

Donald.


On Wed, Apr 8, 2009 at 11:33 AM, vinod kumar <mail.vinod at yahoo.com> wrote:
> Hi donald,
>
> Greetings :)
>
> just a question on the same command, 'tracks2prob'
>
> tracks2prob [ options ] tracks reference/template output
>
> template / reference?   Can it be any kind of image? For example, if I put in
> the MNI template, will it transform the tracks into that space?
>
> or should I use dwi.mif or the b0 as the reference, which is in a similar
> space to the tracks, and then use a calculated transformation matrix to
> transform the tracks into the other space?
>
> thank you ,
> vinod
>
>
>
> ________________________________
> From: Donald Tournier <d.tournier at brain.org.au>
> To: Wim Otte <wim at invivonmr.uu.nl>
> Cc: mrtrix mailinglist <Mrtrix-discussion at www.nitrc.org>
> Sent: Wednesday, April 8, 2009 3:11:06 AM
> Subject: Re: [Mrtrix-discussion] Crossing-fibers gray matter CSD
>
> Hi Wim,
>
> Yes, in theory, tracks2prob should just copy the layout from the
> reference image. But it so happens that the NIfTI image handling
> routine overrides the layout and sets it to +0,+1,+2. There is no
> point in trying to use mrconvert to change the layout of a NIfTI
> image; it will always end up with the same result. What you could do
> is mrconvert the reference.nii image, and then the layouts will both
> be +0,+1,+2. Does that sound like a workable solution?
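>
> Something along these lines (the output names are just placeholders):
>
>   mrconvert reference.nii reference_reordered.nii
>   tracks2prob M1.tck reference_reordered.nii output.nii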
>
> I might try to make the NIfTI handler honour the layout specification,
> but that won't be ready for some time...
>
> Regards,
>
> Donald.
>
>
> On Tue, Apr 7, 2009 at 5:47 PM, Wim Otte <wim at invivonmr.uu.nl> wrote:
>> Hi Donald,
>>
>> Thank you very much for your clear tracking explanations!
>>
>> The layouts of the reference image and output are indeed different (see
>> output below); but shouldn't tracks2prob 'copy' the data layout from
>> the reference?
>> I'll try mrconvert with the layout option; it's not a real problem,
>> it's just that I'm scared of flipping left and right halfway through the
>> post-processing...
>> Thanks again!
>>
>> Wim Otte
>>
>>
>> (Data layout: [ -0 -1 +2 ] becomes Data layout: [ +0 +1 +2 ]).
>>
>>
>> tracks2prob  M1.tck  reference.nii  output.nii
>>
>> mrinfo reference.nii results in:
>> ************************************************
>> Image:               "reference.nii"
>> ************************************************
>>  Format:            NIfTI-1.1
>>  Dimensions:        64 x 25 x 64
>>  Voxel size:        5 x 5 x 5
>>  Dimension labels:  0. left->right (mm)
>>                     1. posterior->anterior (mm)
>>                     2. inferior->superior (mm)
>>  Data type:         32 bit float (little endian)
>>  Data layout:       [ -0 -1 +2 ]
>>  Data scaling:      offset = 0, multiplier = 1
>>  Comments:          (none)
>>  Transform:                    1          -0           0        -299
>>                               -0           1           0      -116.1
>>                               -0          -0           1         -16
>>                                0           0           0           1
>>
>> mrinfo output.nii results in:
>> ************************************************
>> Image:               "output.nii"
>> ************************************************
>>  Format:            NIfTI-1.1
>>  Dimensions:        64 x 25 x 64
>>  Voxel size:        5 x 5 x 5
>>  Dimension labels:  0. left->right (mm)
>>                     1. posterior->anterior (mm)
>>                     2. inferior->superior (mm)
>>  Data type:         32 bit float (little endian)
>>  Data layout:       [ +0 +1 +2 ]
>>  Data scaling:      offset = 0, multiplier = 1
>>  Comments:          track fraction map
>>                     count: 5000; init_threshold: 0.2; lmax: 8;
>>                     max_dist: 200; max_num_tracks: 5000
>>  Transform:                    1          -0           0        -299
>>                               -0           1           0      -116.2
>>                               -0          -0           1         -16
>>                                0           0           0           1
>>
>>
>>
>>
>> On 4/7/09, Donald Tournier <d.tournier at brain.org.au> wrote:
>>> Hi Wim,
>>>
>>>
>>>  > - In which voxels do I have to estimate the fiber response function?
>>>  > In all brain-voxels (resulting in a less 'flat' response function) or
>>>  > in the white matter voxels? As we need it to track in both white
>>>  > matter and gray matter voxels.
>>>
>>>
>>> This is an interesting question, and not one I have a satisfactory
>>>  answer to. Mind you, I don't think anyone has a good answer to this.
>>>  The few people who have suggested the possibility of tracking in grey
>>>  matter (I can only think of Van Wedeen, and maybe indirectly Tim
>>>  Behrens for deep grey matter structures) have both used methods that
>>>  don't need an explicit response function (diffusion spectrum imaging
>>>  and the diffusion tensor, respectively).
>>>
>>>  Thankfully, CSD is not overly sensitive to inaccuracies in the
>>>  response function. I'd recommend you go for the 'flattest' response
>>>  function you can get, since this will always produce sharper
>>>  directions. This means that you should opt for the white matter
>>>  response function. Besides, that is your only real option anyway,
>>>  since it won't be possible to isolate even a single voxel within the
>>>  grey matter that contains a single coherent fibre direction, from
>>>  which you might have been able to estimate a response function...
>>>
>>>
>>>
>>>  > - How does streamtrack determine the principal tracking direction(s)
>>>  > from the spherical decomposition data? Is it using a 'find_SH_peaks'
>>>  > internally? Does it take crossing-fibers into account, or only the
>>>  > first direction?
>>>
>>>
>>> There are various algorithms within streamtrack, but all the SD based
>>>  ones will take crossing fibres into account. The SD_STREAM option will
>>>  find the closest peak to the current direction of tracking, and use
>>>  its direction for the next step. The SD_PROB option, on the other hand,
>>>  randomly samples a direction from the current distribution of fibre
>>>  orientations (the FOD), within a 'cone' about the current direction of
>>>  tracking. The angle of the cone is determined from the curvature
>>>  constraint, and is given by phi = 2 asin (s/2R), where s is the step
>>>  size and R is the radius of curvature. This means both algorithms will
>>>  preferentially track through crossing fibre regions, provided of
>>>  course that there is a fibre orientation in that direction.
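>>>
>>>  To put some numbers on that (values picked purely for illustration):
>>>  with a step size of s = 0.2 mm and a radius of curvature of R = 1 mm,
>>>  phi = 2 asin(0.2 / (2 x 1)) = 2 asin(0.1), which is roughly 11.5
>>>  degrees.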
>>>
>>>
>>>
>>>  > - tracks2prob flips my tracks in the x and y plane ( I have to
>>>  > 'correct' with fslswapdim -x -y z <input> <output> to get it right
>>>  > with the 'reference image'). The fibers are oriented correctly in
>>>  > mrview. Am I doing something wrong? (q-form, s-form thing?).
>>>
>>>
>>> I'm surprised by this. Can you provide more details on what you mean?
>>>  Is the resulting image flipped within MRView, or within FSL? As far as
>>>  I can tell, FSLView is not very flexible when it comes to data
>>>  ordering, and will for example display images acquired in the sagittal
>>>  plane 'as if' they had been acquired axial, which will of course look
>>>  wrong (although the L-R, A-P, and I-S orientation labels are in the
>>>  correct places). I'd imagine that this would also mean that overlaying
>>>  two images where the data are ordered differently will produce
>>>  artefacts like the ones you mention.
>>>
>>>  If the problem is with FSL's display, then you may have no other
>>>  option than what you've already done (although mrconvert does have a
>>>  "-layout" option that could be used to the same effect, and probably
>>>  more robustly). If the problem is with MRView, let me know and I will
>>>  investigate further.
>>>
>>>  Hope this helps.
>>>  Cheers!
>>>
>>>  Donald.
>>>
>>>
>>>
>>>  --
>>>  Jacques-Donald Tournier (PhD)
>>>  Brain Research Institute, Melbourne, Australia
>>>  Tel: +61 (0)3 9496 4078
>>>
>>
>
>
>
> --
> Jacques-Donald Tournier (PhD)
> Brain Research Institute, Melbourne, Australia
> Tel: +61 (0)3 9496 4078
> _______________________________________________
> Mrtrix-discussion mailing list
> Mrtrix-discussion at www.nitrc.org
> http://www.nitrc.org/mailman/listinfo/mrtrix-discussion
>
>



-- 
Jacques-Donald Tournier (PhD)
Brain Research Institute, Melbourne, Australia
Tel: +61 (0)3 9496 4078

