I am new to neuroimaging analysis, and to the whole world of research in general, but I already have experience with AI: I employed a deep learning model (a ViT) to analyze static gestures of American Sign Language.
Now, I am working on a research project that involves using multimodal MRI data from different datasets found online.
I’m currently in the data collection and planning phase, and I’m exploring the best pipelines to process and analyze MRI data for this kind of research. I’d appreciate guidance on the following:
Preprocessing:
- What are the best tools and frameworks for preprocessing multimodal MRI data (e.g., SPM, FSL, ANTs)?
- Any tips on aligning data from different modalities to create a unified dataset?
Feature Extraction:
- What methods are commonly used to extract relevant features from each MRI modality?
Pipeline Suggestions:
- Are there established pipelines for combining multimodal MRI data? For instance, would a sequential approach (processing each modality separately before fusion) or a simultaneous fusion approach (multimodal embeddings) work better for this application? I have found a couple of online software tools that could help with the pipeline process, but it seems most of them only work with single-modality, static sMRI images, some of which are T1-weighted while others are T2-weighted.
Challenges:
- Are there any pitfalls I should watch out for when working with multimodal neuroimaging data, or would it be better to switch to a single-modality approach?
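To make the sequential-vs-simultaneous distinction concrete, here is a minimal, purely illustrative sketch: hypothetical toy feature extractors (random projections over NumPy arrays) stand in for real per-modality models, and the array shapes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two MRI modalities (e.g., T1w and T2w volumes),
# flattened to one vector per subject. Shapes are hypothetical.
n_subjects = 8
t1 = rng.normal(size=(n_subjects, 100))
t2 = rng.normal(size=(n_subjects, 100))

def extract_features(x, dim=16):
    # Hypothetical per-modality feature extractor: a fixed random projection.
    proj = np.random.default_rng(42).normal(size=(x.shape[1], dim))
    return x @ proj

# Sequential (late) fusion: process each modality separately, then concatenate.
late = np.concatenate([extract_features(t1), extract_features(t2)], axis=1)

# Simultaneous (early) fusion: stack raw modalities first, then embed jointly,
# so cross-modal interactions can be learned in one model.
stacked = np.concatenate([t1, t2], axis=1)
early = extract_features(stacked, dim=32)

print(late.shape, early.shape)  # (8, 32) (8, 32)
```

The same contrast carries over to deep models: late fusion trains one encoder per modality and merges their embeddings, while early/joint fusion feeds stacked modalities into a single encoder.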
Hi, I just finished my dissertation on the same topic. Do you mind if I contribute to your project?
Hi, I'm relatively new to neuroimaging analysis too.
I have experience in research and development, but I don't have a ready template of my skills, so allow me to be brief: I'm familiar with data analysis, including preprocessing, cleaning, encoding, standardizing, and all that Jupyter/Python stuff.
I recently worked with EEG data, visualized it with matplotlib, and more.
So right now I'm looking to contribute to or work with anyone running similar projects, mainly to build my experience, learn, and contribute at the same time.
Feel free to get back in touch.
Thank you 😊
Hey Aaron,
I noticed that you didn't include compute resources in your list.
A few questions you might ask on the topic:
- Are you planning on doing everything on your local machine?
- Do you have access to a University-based compute cluster?
- Are you familiar with any job running / orchestration tools?
Let me know if you would like to talk offline about the topic.
Following, as I am interested in this as well. My interest lies in the fusion approach and the explainability aspect, be it through attention mechanisms or post hoc methods.
@tobiramah - Congratulations on completing your dissertation! I’d be very interested in reading it.
For a structural T1 pipeline, you can try NiChart (https://cloud.neuroimagingchart.com/) to quickly segment and plot the ROI variables. The tool will add multi-modality support, so stay tuned.
It runs in the cloud, completely free of charge!
If you don't have an available computing resource, you can try NiChart. It includes things like DICOM-to-NIfTI conversion & download, brain segmentation, machine learning score generation, and visualization of the generated data points. It currently supports structural T1 only, but the DICOM-to-NIfTI conversion should work for multimodal data.
Originally posted by tobiramah:
Hi, I just finished my dissertation on the same topic. Do you mind if I contribute to your project?
I'd be glad to! Can you give me your contact information so we can discuss this further? I'll talk with my project assessor about it, too.
Originally posted by codecrafted:
Hey Aaron,
I noticed that you didn't include compute resources in your list.
A few questions you might ask on the topic:
- Are you planning on doing everything on your local machine?
- Do you have access to a University-based compute cluster?
- Are you familiar with any job running / orchestration tools?
Let me know if you would like to talk offline about the topic.
Hello!
- I really haven't thought about it, honestly. I guess I will do it on my computer, as it has some decent compute power (AMD Ryzen 7 7840HS and RTX 4060). However, it remains to be seen whether it can process all the data correctly.
- No, unfortunately not, and I don't know whether my university has one. I am studying at a Latin American university, so I don't know if that much compute power can be accessed anywhere here.
- Not really.
Sure, I will be always open for learning more!
Originally posted by Kyle Baik:
If you don't have an available computing resource, you can try NiChart. It includes things like DICOM-to-NIfTI conversion & download, brain segmentation, machine learning score generation, and visualization of the generated data points. It currently supports structural T1 only, but the DICOM-to-NIfTI conversion should work for multimodal data.
Are there any tutorials online for using this? Also, is T1 the only mode of structural MRI scanning? I am re-analyzing my dataset, and it seems the data is mostly T1-weighted, alongside some others. I guess some Python packages can help me visualize and identify the type of MRI scans, can't they?
Thanks for the information! I'll consider using NiChart to handle the DICOM-to-NIfTI conversion, especially given its applicability to multimodal data.

As for tutorials, there are some resources online for using NiChart, particularly on its official website and GitHub, where you may find documentation and examples.

Regarding MRI scans, T1-weighted is not the only structural scan mode: there are also T2-weighted, FLAIR, and inversion-recovery sequences, each with distinct imaging characteristics.

As for Python packages, several tools can help you visualize and analyze MRI scans. For example, nibabel can read NIfTI files, nilearn and matplotlib can help you visualize brain images, pydeface can remove facial regions, and deep learning libraries like MONAI can be used for more complex analyses.