help > W2MHS
Mar 10, 2014  06:03 PM | Nicolas Vinuesa
W2MHS
Dear All,

I have been testing the automatic segmentation method described in your
paper using the W2MHS toolkit as proposed.
As I am not obtaining the expected results (the number of WMH regions is
much larger than it should be, even for a threshold probability, gamma,
of 0.5), would you agree to me sending you one subject from our
database, acquired with our protocol, so that you could run the same
analysis (in case we have made any mistakes)?
Thank you very much for your time.

Best regards,
Nicolas Vinuesa
Mar 11, 2014  07:03 PM | Christopher Lindner
RE: W2MHS
You should note that the output of W2MHS is not a binary segmentation; it is a probability map (p-map). The output p-map will not change if you tune the threshold (gamma); only the quantification value will (only voxels above the threshold are counted). To properly visualize the segmentation, you must either cut out voxels below the threshold and generate a binary segmentation, or change the threshold on the p-map overlay in your visualization software to match your chosen gamma.
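The thresholding and quantification step Chris describes can be sketched as follows. This is a minimal illustration on a synthetic NumPy array (a real p-map would be loaded from the W2MHS output volume, e.g. with nibabel), not actual W2MHS code:

```python
import numpy as np

# Synthetic stand-in for a W2MHS probability map (values in [0, 1)).
rng = np.random.default_rng(0)
pmap = rng.random((16, 16, 8))

gamma = 0.5  # chosen threshold probability

# Cut out voxels below gamma to get a binary segmentation.
binary_seg = (pmap >= gamma).astype(np.uint8)

# Quantification counts only the supra-threshold voxels; the p-map
# itself never changes when gamma changes.
n_wmh_voxels = int(binary_seg.sum())
print(n_wmh_voxels)
```

Raising gamma shrinks the counted region without altering the underlying p-map, which is why the overlay looks unchanged until the viewer threshold is adjusted to match.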

We chose this approach because it makes it easier to select (or adjust) a p-map cut value after the first run through W2MHS.

If this still isn't clear let me know.
Mar 12, 2014  09:03 AM | Nicolas Vinuesa
RE: W2MHS
Dear Christopher,

I am aware that the output of W2MHS is a probability map. What I tried to point out was that when we run W2MHS on our acquisitions, we do not obtain satisfactory results.
To clarify this I attached some screenshots that attempt to highlight the problem (I uploaded them in a .zip file as I cannot upload more than one file separately).
In the first image (cuttingP06) you'll find the registered FLAIR in the background with the output p-map in the foreground in hot colors; in this case the threshold in the viewer was set to 0.6, meaning that all voxels (in the p-map) below 0.6 are discarded. Clearly, the problem here is that there are huge areas where the probability is quite high (greater than 0.6, and therefore counted in the quantification step) but there are NO HYPERINTENSITIES.
To work around this we set the threshold higher, say 0.7 (see the second image, cuttingP07). Now these big areas have disappeared (meaning that their probabilities were below 0.7), and it looks better. But looking a little closer, we see that the areas where we can really SEE HYPERINTENSITIES in the FLAIR image have become much bigger than the actual output in the p-map over that area. To illustrate this, the third image (cuttingP07localEffect) highlights this area in green.
These results are not satisfying and differ greatly from what I found in the paper "Extracting and summarizing white matter hyperintensities using supervised segmentation methods in Alzheimer's disease risk and aging studies."

I would guess that the problem lies in the training step: because our acquisitions may differ slightly from the ones you used for training (e.g. a different scanner, although the same protocol), the results at the testing stage are not satisfying. If you agree, could you please describe exactly how you trained your model and provide the code necessary for it?

Thanks a lot for your time.
Best regards,
Nicolas VINUESA
Attachment: screenshots.zip
Mar 13, 2014  09:03 PM | Christopher Lindner
RE: W2MHS
Nicolas,

I can now see that you are not getting an accurate segmentation, and I apologize for trivializing your issue; the screenshots helped. At first glance I would attribute the poor segmentation to the brightness of your FLAIR image, which could be due to a different scanner and post-processing of the imaging data. It is relatively difficult to tell the difference between a hyperintense voxel and a normal one in your image. The error IS stemming from the training step, and if rectified, W2MHS should give you better results; however, training your own RF-regression model will require a large set (30-40) of hand-segmented training examples. Creating your own training data would be a tedious process, but it is possible.

We can provide you with the code we used to extract features from our images, as well as the code necessary to actually train the model. This code is currently not well documented or intuitive to use; if there is enough demand I will try to clean it up and make an easy-to-use interface for it. We would love to have your lab and others make good use of our code. I do have other projects going on, but if this is not time-sensitive I can put together good documentation of our training methodology in the next couple of weeks.

In the meantime, you could try some different pre-processing on your images (normalization/darkening of your FLAIR images) and see if that gives you better results. Let me know if making your own training set is still appealing, or if your results improve; I will try to work on a training interface when I find the time.

Thank you for your interest in our paper and software.
Best,
Chris Lindner
Mar 14, 2014  12:03 PM | Nicolas Vinuesa
RE: W2MHS
Dear Chris,

First of all, thanks a lot for your time and your valuable answers.
The fact that the quality of our FLAIR images was part of the problem is something we had been considering. In this sense, it would be extremely helpful if you could send me one or two FLAIR sessions acquired with the same protocol described in the paper. They could serve as a reference and help us obtain better results as well as better-quality acquisitions.

Also, on the training side, I agree with you: training a model on 30 or 40 hand-segmented subjects would be very tedious, and since we expect to acquire 3D FLAIR using a protocol very similar to yours, it shouldn't be necessary.

I look forward to your answer as to whether it is possible to send us the FLAIR images.

Best,
Nicolas
Mar 25, 2014  08:03 PM | Christopher Lindner
RE: W2MHS
Yes we should be able to send you a couple of images. Sorry for the delay. I will upload them to Google Drive or Dropbox and send you a link.
Mar 26, 2014  09:03 AM | Nicolas Vinuesa
RE: W2MHS
Thanks Chris! Great news for us.
I'm looking forward to the link.
Also, if it's not too much to ask, could you please detail any post-processing you performed on the scans? That would also be very helpful, and we could start applying it to our scans.
Again, thank you very much for your time.
Cheers
Mar 26, 2014  05:03 PM | BIBA_Lab
RE: W2MHS
Chris,

I wanted to chime in on this conversation and post a picture depicting the issues that we are having as well.
The image shows the T2 registered to the T1, overlaid with the p-map displayed at a 0.5 to 1 threshold. This test subject was pre-processed through the W2MHS program with the default cleaning threshold of 2.5. You can see from the attached image that the program is identifying the major hyperintensities (although not some of the smaller ones in this particular slice) but, more importantly, is missing the centers of the hyperintensities. The probabilities for the voxels along the edge of that hyperintensity are slightly above 0.5, while the voxels in the center are around 0.2.
Therefore, we would be interested in the feature-extraction and training code as well. Any thoughts on this issue are greatly appreciated.

Megan
Mar 27, 2014  08:03 PM | Christopher Lindner
RE: W2MHS
Megan,

I would first try setting the cleaning threshold to zero. This may fix your issue, but it may also introduce some noise around the edges of the white matter. If that helps, you can try slowly increasing Clean-Th (in steps of 0.1 or so) until the noise is gone. If you are still not detecting all WMHs, training your own model might be helpful.
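The sweep suggested above could be scripted roughly as below. Note that `quantify_wmh` is a hypothetical stand-in for a W2MHS re-quantification run (mocked here on a synthetic p-map), not the toolkit's actual API; the point is only the loop structure of stepping Clean-Th up from zero:

```python
import numpy as np

def quantify_wmh(pmap, gamma=0.5, clean_th=0.0):
    """Mock quantifier: count voxels above gamma after a simulated
    cleaning pass. Real W2MHS cleaning removes noisy detections; here
    we crudely model it by shaving probability mass as clean_th grows."""
    cleaned = np.clip(pmap - 0.1 * clean_th, 0.0, 1.0)
    return int((cleaned >= gamma).sum())

rng = np.random.default_rng(1)
pmap = rng.random((8, 8, 8))  # synthetic stand-in for a p-map

# Start from Clean-Th = 0, then raise it in small steps and watch the
# detected voxel count shrink as noise is cleaned away.
counts = []
for clean_th in np.arange(0.0, 2.6, 0.5):
    n = quantify_wmh(pmap, clean_th=clean_th)
    counts.append(n)
    print(f"Clean-Th = {clean_th:.1f}: {n} voxels counted")
```

The practical procedure is the same with the real toolkit: re-run quantification at each Clean-Th value and stop at the smallest value that suppresses the edge noise without losing true detections.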

And Nicolas,

Another member of our lab is still selecting which examples we are going to upload for users to test and compare. If just comparing the images isn't enough to help, we can give you our acquisition parameters as well. Also, we have done no preprocessing on W2MHS inputs.

Chris
Apr 2, 2014  03:04 PM | Christopher Lindner
RE: W2MHS
Nicolas,

Three sample subjects are now available on the downloads page.

Best,

Chris
Apr 3, 2014  08:04 AM | Nicolas Vinuesa
RE: W2MHS
Chris,

Thanks a lot for uploading these samples, they are very useful.
One last question: could you please tell me the exact acquisition parameters? We are currently tuning the parameters for our new scanner (and a new project as well), and since we really need to use your toolkit to obtain and quantify WMH, it would be very useful to match your acquisitions as closely as possible.

Best,
N
Apr 4, 2014  02:04 AM | Christopher Lindner
RE: W2MHS
Acquisition parameters are described in our paper:

http://pages.cs.wisc.edu/~vamsi/files/hb...
Apr 7, 2014  03:04 AM | Vikas Singh
RE: W2MHS
Nicolas,

If you provide us with 1--2 of the image files where there is noticeable wmh but the probability maps are not picking them up, we may be able to give you more specific feedback. As Chris mentioned, the acquisition parameters are provided in the paper. Happy to answer more questions.

--Vikas
Apr 10, 2014  12:04 PM | Nicolas Vinuesa
RE: W2MHS
Dear Vikas,

Thanks for your time, as well as Chris'.
Here is the Dropbox link where you can find two of our scans (T1 + FLAIR). For the first one (S1) the toolbox works fine, while for the second one (S2) the probability maps include a huge region of relatively low p (<0.6). We are still trying to understand what these results mean, so your feedback is very important.

https://www.dropbox.com/sh/07w8sowqinkmj...

Best,
Apr 10, 2014  08:04 PM | Vikas Singh
RE: W2MHS
OK, thanks. We will look into it and let you know if we find something useful. If you adjust the probabilities manually for the image where the results are unsatisfactory, does the segmentation get better?
Apr 15, 2014  08:04 PM | Christopher Lindner
RE: W2MHS
I have noticed the large number of detections in the second subject. This is again likely due to the brightness of your image. It may help to apply more bias correction in the preprocessing step, or you can simply raise the quantification threshold high enough to ignore those false positives. It isn't a huge issue. Alternatively, you could manually mask out noisy or unwanted detections such as these; we recently created a module for masking out unwanted detections and re-quantifying. You can create masks for your subjects in something like FreeSurfer. Hopefully this helps.
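Masking out unwanted detections and re-quantifying could look like this in outline. The mask here is synthetic (in practice it would be drawn in something like FreeSurfer and loaded from disk), and none of this is the actual W2MHS masking module:

```python
import numpy as np

rng = np.random.default_rng(2)
pmap = rng.random((6, 6, 6))  # synthetic stand-in for the p-map
gamma = 0.6

# Binary exclusion mask: 1 = keep the voxel, 0 = masked-out region.
mask = np.ones_like(pmap, dtype=np.uint8)
mask[:2, :, :] = 0  # pretend the first two slices hold a known artifact

# Zero out the masked region, then re-quantify.
masked_pmap = pmap * mask
count_before = int((pmap >= gamma).sum())
count_after = int((masked_pmap >= gamma).sum())
print(count_before, count_after)
```

Because masked voxels are set to zero probability, they fall below any sensible gamma and drop out of the count, while detections outside the mask are left untouched.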

Chris
Apr 16, 2014  11:04 AM | Nicolas Vinuesa
RE: W2MHS
That is exactly what we found: quite good results on the first image, but a lot of "false positives" in the second one.
Our big question is: if this is only due to the quality of the acquisition, why don't these artifacts appear in the first image?
Secondly, these so-called "false positives" can be seen with the naked eye in both the FLAIR scan and the T2, while you won't find them for the first subject. Could these be many small WMHs? If something appears in both the FLAIR and T2 of one subject but not the other, it could be biological rather than just noise.
Your opinion is very important to me, since I am struggling with these issues, and it is crucial to detect methodological errors (at acquisition) at this stage of the project.
Thank you very much
Apr 16, 2014  05:04 PM | Christopher Lindner
RE: W2MHS
WMHs will appear slightly darker on a T1 image. I don't see darkening near the detections in the T1, but I do notice that it is noisy, which may account for the noisy result. The scans can be affected even by room temperature, humidity, or a number of other factors, which may or may not account for the noise that is present. On your T2 images I don't see too many WMHs: there are a few by the ventricles and a few deep in the white matter. I have never seen a subject with that many lesions, if it is the case that the entire region is hyperintense. My best judgment says that you are seeing false positives with p-values around 0.5. If you still believe they may be hyperintense, I would consult an expert.

Chris
Sep 22, 2014  09:09 AM | rogier K
RE: W2MHS
Dear all,
We are having very similar problems to the above. We ran the toolbox on 260 participants aged 18 to 88, using the same FLAIR parameters across individuals, but found no relation between WMH and age. Inspection of individual maps shows that WMH estimation is inconsistent across individuals, with the largest problems in the youngest people (although looking only at older individuals gives similar problems). Some 20-year-olds have half their brain classified as WMH (probably because they have no WMH at all). As above, raising the threshold manually sometimes helps, but on other occasions it leads to false negatives for true WMH in older adults.

I think it may be because the training was performed only on individuals aged 45 and up? We could train on a subsample spanning the entire age range, but I am not sure whether that would help, given that the priors would then differ so greatly between young (expected volume = 0) and old (expected volume > 0) that it might again lead to misclassifications. Does anyone have experience with including healthy young adults? Given the size of our sample, manual classification is not feasible, but at this point it looks as if the automated solution may not work either...
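The sanity check Rogier describes (WMH volume against age across a cohort) could be scripted as below. The data here are synthetic and deliberately constructed to show a positive relation, purely to illustrate the check one would expect a consistent segmentation to pass:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 260  # cohort size, as in the post above

age = rng.uniform(18, 88, n)
# Synthetic WMH volumes that grow with age, plus noise; real volumes
# would come from the per-subject W2MHS quantification output.
wmh_volume = 0.05 * (age - 18) ** 2 + rng.normal(0, 20, n)

# A segmentation that is consistent across the age range should show
# a clearly positive age-WMH correlation in a sample this large.
r = np.corrcoef(age, wmh_volume)[0, 1]
print(f"Pearson r(age, WMH volume) = {r:.2f}")
```

A near-zero r on real data, as Rogier reports, points to inconsistent per-subject estimates (e.g. young subjects with massively over-segmented maps) rather than a true absence of the well-established age effect.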
Best,
Rogier
Sep 24, 2014  03:09 PM | Vikas Singh
RE: W2MHS
Rogier, 

Yes, because the training was performed on older individuals, the classifier will likely be biased for your experiments. We have some code to help with generating training data, based on a random-walker segmentation implementation. It's not in clean/documented form just yet, but we're hoping to have it available here in a month or so. However, if you're willing to tinker on your own with what we have right now, we can share it. Let us know.

--Vikas