  • Feb 29, 2024  06:02 PM | zhenous hadi jafari
    segmenting on 3 color channel

    Hello everyone,


    I'm working on segmenting a NIfTI image using MRIcron, and I've encountered a challenge. My image is an RGB image, consisting of three color channels. My goal is to perform the segmentation specifically on the green channel. However, it's crucial for my analysis to observe and compare the same region across the other two channels (red and blue) simultaneously.


    To achieve this, I open all three channels of my image in MRIcron. Then, I begin the segmentation process on the green channel. My question is: how can I ensure that the regions I've marked for segmentation on the green channel are visible or highlighted when I switch to view the red and blue channels? I need to ensure that the exact areas segmented in the green channel are accurately reflected and can be compared in the other two channels.


    Any guidance or tips on how to manage this within MRIcron or through another method would be greatly appreciated.


    (I attached a PDF showing what I described: the three color channels of my NIfTI image and the segmented red marks on the green-channel image.)

    Attachment: doc1.pdf

    • Feb 29, 2024  06:02 PM | Chris Rorden
      RE: segmenting on 3 color channel

      I would use MRIcroGL for this. Use File/Open to open your background channel, and File/AddOverlay to add your additional layers. In the Layers panel, you will see all your layers listed. You can click on a layer name to set its properties (e.g. in the screenshot I have selected the `green` image and set its color scheme to `2green`). Each layer has a checkbox that toggles whether it is visible (or drag the opacity slider to make it translucent), so you can switch layers on and off to inspect the same region in each channel. The Draw/DrawColor menu lets you set your drawing color.
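
      The same layering can also be scripted; a minimal sketch, assuming the three channels have been saved as separate NIfTI volumes with these hypothetical names:

      ```
      import gl
      gl.resetdefaults()
      gl.loadimage('red_channel.nii')      # background layer
      gl.overlayload('green_channel.nii')  # overlay layer 1
      gl.overlayload('blue_channel.nii')   # overlay layer 2
      gl.colorname(1, '2green')            # color scheme for the green channel
      gl.colorname(2, '3blue')             # color scheme for the blue channel
      gl.opacity(1, 60)                    # make the overlays translucent
      gl.opacity(2, 60)                    # so the same region is visible in every channel
      ```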


       


       

  • Dec 6, 2018  01:12 PM | hannasofia - Gothenburg university
    Error message "error in reaging NIFTI header.0"
    To whom it may concern, 

    I am trying to open a file named "BETA_Subject_001:condition001_source.nii" in MRIcron and receive this message: "error in reaging NIFTI header.0". It is still possible to open NIfTI anatomical pictures named T1_3d_TRA.nii. Does somebody know what could be wrong? I also cannot open files named "corr_subject001_condition001_Source001.nii".

    Thank you in advance,
    Hanna

    • Dec 6, 2018  02:12 PM | Chris Rorden
      RE: Error message "error in reaging NIFTI header.0"
      1. What version of MRIcron and what operating system are you using (e.g. MRIcron v1.0.20181114 on MacOS)?
      2. If you are not using the latest version, does upgrading fix your problem?
      3. Be aware that some file systems do not allow the ':' character in filenames.
      4. Does fslhd report any problems with these images?
      5. If the above does not fix your problem, you can share a copy with me for more feedback.

      • Dec 6, 2018  03:12 PM | hannasofia - Gothenburg university
        RE: Error message "error in reaging NIFTI header.0"
        Hi Chris, 

        Thank you for your reply. I'm using the latest version of MRIcron on Mac. There are no ":" characters in the file names. I'm not sure I understand point 4, but I haven't seen FSL or fslhd report anything. The only information given can be seen in the screenshot I've attached.

        Best Regards, 
        Hanna

        • Dec 6, 2018  03:12 PM | Chris Rorden
          RE: Error message "error in reaging NIFTI header.0"
          I can not provide much more advice without seeing the image, but this error usually suggests a file permission error. I would suggest you run 'ls -l "filename.nii"' from the command line. You will want to make sure that you have read access permission for the file. Often you can fix this with a "chown" or "chmod" command.

          Again, running 'fslhd "filename.nii"' from the command line is likely to provide more feedback (assuming you have installed fsl).
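
          If FSL is not installed, a minimal nibabel sketch can print a similar header summary (assuming 'filename.nii' is the problem file):

          ```
          import nibabel as nib

          img = nib.load('filename.nii')   # raises an error if the header cannot be parsed
          print(img.header)                # prints dim, pixdim, datatype, etc., much like fslhd
          ```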

          • Dec 7, 2018  02:12 PM | hannasofia - Gothenburg university
            RE: Error message "error in reaging NIFTI header.0"
            Hello again, thank you for your reply. 

            I'm a beginner at using MRIcron and working with fMRI images. Could you please explain this in a simpler way? I'm not sure that I have the fslhd program, if it's not installed by default together with MRIcron, and I cannot find where to download it.

            Thank you in advance, 
            Hanna

            • Dec 7, 2018  03:12 PM | Chris Rorden
              RE: Error message "error in reaging NIFTI header.0"
              Hanna-
               Unix files have permissions associated with them, so a user may not have permission to open a particular file.
               FSL is a popular and free tool for brain imaging. One of the many tools it includes is 'fslhd', which reads a brain imaging header and describes its contents. Check Google to find out how to install and use FSL.
               I strongly recommend the FSL training course - they not only describe their own tools, but also provide an introduction to using Unix computers. It has a lot of useful information for beginners as well as experts (I went this year, even though I have been in the field for decades):
                 https://fsl.fmrib.ox.ac.uk/fslcourse/
               I also suggest you look at Martin Lindquist's book and course:
                http://www.biostat.jhsph.edu/~mlindqui/main.html
               Finally, all the material and slides for my own course are here:
                 https://www.mccauslandcenter.sc.edu/crnl/psyc589888

            Dec 12, 2018  02:12 PM | hannasofia - Gothenburg university
            RE: Error message "error in reaging NIFTI header.0"
            Hi, 

            Now I've installed FSL to figure out why I cannot open files with MRIcron. I'm using a Mac, so in Terminal I receive this output:

            "ml-181128-006:~ Xerhad$ fslhd corr_Subject001_Condition001_Source001.nii
            Image Exception : #63 :: No image files match: corr_Subject001_Condition001_Source001
            No image files match: corr_Subject001_Condition001_Source001
            ml-181128-006:~ Xerhad$"

            Seems to me the problem isn't about permission, right? Could you help me further? I've never used Terminal before. 

            Thanks in advance, 
            Hanna

            • Dec 12, 2018  03:12 PM | Chris Rorden
              RE: Error message "error in reaging NIFTI header.0"
              Can I suggest you contact a local friend who is familiar with Linux? The fact that FSL cannot open the file suggests the problem is not with MRIcron, but I am not even sure you are in the correct folder. You will want to run "ls -l" from the command line to make sure your terminal is in the correct folder and that you have permissions. Someone familiar with computers can help you out. The Unix handout here might be a good starting point:
              https://fsl.fmrib.ox.ac.uk/fslcourse/

              • Dec 17, 2018  10:12 AM | hannasofia - Gothenburg university
                RE: Error message "error in reaging NIFTI header.0"
                Hi Chris, 

                A friend helped me out in Terminal (Mac) and made sure that permission was given to all files in the folder where my files are saved. Unfortunately, it doesn't seem to solve the problem.

                When I open a file named "corr_subject002_condition001_source001.nii" in Mricron it says "Error reading NIFTI header-0" followed by "File not open, press ok to ignore risk and data corruption. Press Abort to kill the program". 

                What could be the problem then?

                Thanks in advance, 
                Hanna Eriksson

                • Dec 17, 2018  03:12 PM | Chris Rorden
                  RE: Error message "error in reaging NIFTI header.0"
                  Did you install FSL, and what is the output of 'fslhd corr_subject002_condition001_source001.nii'? If you want to send me a copy of your image by email I can take a look. However, you may also want to chat with some of the experienced neuroimagers at your university.

                  • Dec 20, 2018  12:12 PM | hannasofia - Gothenburg university
                    RE: Error message "error in reaging NIFTI header.0"
                    Hi, 


                    Please see the attached file for a Terminal screenshot when typing 'fslhd corr_subject002_condition001_source001.nii'.

                    Thank you in advance, 
                    Hanna

                    • Dec 20, 2018  02:12 PM | Chris Rorden
                      RE: Error message "error in reaging NIFTI header.0"
                      Your screenshot only shows part of the output. Why don't you see if you can upload the .nii file or send it to me directly.

                      • Dec 28, 2018  06:12 PM | hannasofia - Gothenburg university
                        RE: Error message "error in reaging NIFTI header.0"
                        Hi Chris, 

                        Please see attached corrupt file. 

                        This is the file permissions of the file:

                        -rw-r--r-- 1 Xerhad staff 3610868 Dec 28 18:56 corr_Subject001_Condition001_Source001.nii
                        And this is the result from fslhd:
                        filename corr_Subject001_Condition001_Source001.nii
                        size of header 348
                        data_type FLOAT32
                        dim0 3
                        dim1 91
                        dim2 109
                        dim3 91
                        dim4 1
                        dim5 1
                        dim6 1
                        dim7 1
                        vox_units mm
                        time_units s
                        datatype 16
                        nbyper 4
                        bitpix 32
                        pixdim0 -1.000000
                        pixdim1 2.000000
                        pixdim2 2.000000
                        pixdim3 2.000000
                        pixdim4 0.000000
                        pixdim5 0.000000
                        pixdim6 0.000000
                        pixdim7 0.000000
                        vox_offset 352
                        cal_max 0.000000
                        cal_min 0.000000
                        scl_slope 1.000000
                        scl_inter 0.000000
                        phase_dim 0
                        freq_dim 0
                        slice_dim 0
                        slice_name Unknown
                        slice_code 0
                        slice_start 0
                        slice_end 0
                        slice_duration 0.000000
                        toffset 0.000000
                        intent Unknown
                        intent_code 0
                        intent_name
                        intent_p1 0.000000
                        intent_p2 0.000000
                        intent_p3 0.000000
                        qform_name Aligned Anat
                        qform_code 2
                        qto_xyz:1 -2.000000 0.000000 -0.000000 90.000000
                        qto_xyz:2 0.000000 2.000000 -0.000000 -126.000000
                        qto_xyz:3 0.000000 0.000000 2.000000 -72.000000
                        qto_xyz:4 0.000000 0.000000 0.000000 1.000000
                        qform_xorient Right-to-Left
                        qform_yorient Posterior-to-Anterior
                        qform_zorient Inferior-to-Superior
                        sform_name Aligned Anat
                        sform_code 2
                        sto_xyz:1 -2.000000 0.000000 0.000000 90.000000
                        sto_xyz:2 0.000000 2.000000 0.000000 -126.000000
                        sto_xyz:3 0.000000 0.000000 2.000000 -72.000000
                        sto_xyz:4 0.000000 0.000000 0.000000 1.000000
                        sform_xorient Right-to-Left
                        sform_yorient Posterior-to-Anterior
                        sform_zorient Inferior-to-Superior
                        file_type NIFTI-1+
                        file_code 1
                        descrip
                        aux_file

                        Could you help further? 

                        Best Regard, 
                        Hanna Eriksson

                        • Dec 28, 2018  08:12 PM | Chris Rorden
                          RE: Error message "error in reaging NIFTI header.0"
                          I just downloaded MRIcron (v1.0.20181114) on my MacBookPro11,1 using MacOS 10.13.6 and your sample image loaded just fine. By the way, if you press the shift key down while starting MRIcron it will ask you if you want to reset all of its default values - this can help if something unusual happens. Unfortunately, I can not replicate your issue.

                          • Jan 26, 2024  07:01 PM | Fabien Hauw
                            RE: Error message "error in reaging NIFTI header.0"

                            Hello Chris,


                            I also get this error when trying to add an overlay.


                            Exploring the headers, I can't see any specific difference between an overlay that fails and one that doesn't...


                            Have you found the reason and a potential fix?


                            Thanks,
                            Best


                            Fabien

  • Jan 4, 2024  11:01 PM | Cornelia Wang
    Different voxel values shown in MRIcron and MRIcro or MRIcroGL

    Hello, 


    I have a 3-class mask that was saved in NIfTI format. When I clicked on the images to view the voxel values, I noticed that those shown in MRIcron were not the same as those shown in MRIcro or MRIcroGL; the latter two showed the same result. I attached an example where the same mask was opened in MRIcro and MRIcron. In MRIcro, the value shown is 2, which is what I expected; in MRIcron, the value is 0. They are not perfectly aligned, but the area around the pinpoint shares the same value, so it is not an issue of misalignment. I am confused by this. Is it a viewing tool issue, or did I compute my mask incorrectly? Can someone help me with it?


    Thanks, 


    Cornelia


     



    • Jan 5, 2024  03:01 PM | Chris Rorden
      RE: Different voxel values shown in MRIcron and MRIcro or MRIcroGL

      Hello,


        I think you will find that the MRIcron value has simply not yet updated; move the crosshair a little and you will see that it is merely reporting the previous location.

      • Jan 5, 2024  09:01 PM | Cornelia Wang
        RE: Different voxel values shown in MRIcron and MRIcro or MRIcroGL

        Hello Chris, 


         


        Thanks for the reply. Yes, I tried another image with different intensities everywhere and noticed that MRIcron sometimes reported the previous location.


         


        Thanks, 


        Cornelia 

  • Jan 4, 2024  05:01 PM | Christian Schranz
    Get the volume size of template

    Hello,


    I am trying to get the relative size of a brain lesion in a template.


    For this I created lesion masks that are normalized to the MNI brain.


    I then opened a template, added my lesion mask as an overlay, and used Batch Descriptive to get the size of my lesion in cc.


    I was hoping to also get the total brain volume from Batch Descriptive of the template, but the size I am getting for different templates is in the 1600-1700 cc range, which seems higher than average and makes me worried that I am not getting the right numbers.


    Can I get the volume of a template this way, and if not, how can I get either the volume of a template or the relative volume of a lesion mask in the template compared to the whole brain?


    Thank you!


    Christian Schranz

    • Jan 4, 2024  07:01 PM | Chris Rorden
      RE: Get the volume size of template

      You could use FSL:


      fslstats input.nii.gz -V


      I would derive these details using Python, which is very scriptable. Note that you may want to consider partial volume effects when computing volumes for non-binary images, e.g. if you have a gray matter probability mask, you may only want to consider voxels that are over 50% GM. To be even more accurate, you could modulate volume by the probability, such that a voxel which is 50% GM only contributes half a voxel's volume to the sum.

      import nibabel as nb
      import numpy as np
      from nibabel.imagestats import mask_volume
      fnm = 'input.nii.gz'
      img = nb.load(fnm)
      computed_volume = mask_volume(img)
      print(f"{computed_volume}mm3")


       


      Do remember that the MNI template is larger than the average brain, explaining the difference between SPM and MNI templates when normalizing data, e.g. Figure 1 here:


        https://www.ncbi.nlm.nih.gov/pmc/article...


       

  • Nov 15, 2023  06:11 PM | Cornelia Wang
    Show Axial Only when open MRIcron

    Hi, 


    I want to display the images in the axial-only model by default whenever I open the MRIcron. Can anyone help me with it? 


     


    Thank you so much!


    Tongyao

    • Nov 15, 2023  06:11 PM | Chris Rorden
      RE: Show Axial Only when open MRIcron

      While I am glad that MRIcron remains popular, it is a mature tool that is not actively developed. Can I suggest you try out MRIcroGL - you can set the Axial view as your default in the Preferences - see the attached screenshot.

  • Dec 8, 2023  08:12 PM | Emily Narinsky
    Saving Flipped Brain Images

    Hi,


    For the purpose of my analysis, I would like to save flipped versions of my fMRI NIfTI files. With the 2019 version of MRIcron, I have tried View, then flip L/R, and then I save that image as a new file. However, I do not think it saves the flipped version. Is there a way to do this in MRIcron?


     


    Thank you. 

    • Dec 10, 2023  07:12 PM | Chris Rorden
      RE: Saving Flipped Brain Images

      Hello,


        MRIcron is a visualization tool, not an image manipulation tool. The View/FlipLR is designed to change between neurological and radiological orientation.


       


      You could try MRIcroGL, which has the Import/Tools/Rotation menu item.


       


      Alternately, you could write a Python script. Here is what ChatGPT suggests:


      --------------


      import nibabel as nib
      import numpy as np

      def flip_first_dimension(input_path, output_path):
          # Load the NIfTI file
          img = nib.load(input_path)
          data = img.get_fdata()

          # Flip the order of voxels in the first dimension
          flipped_data = np.flip(data, axis=0)

          # Create a new NIfTI image with the flipped data
          flipped_img = nib.Nifti1Image(flipped_data, img.affine, img.header)

          # Save the flipped NIfTI file
          nib.save(flipped_img, output_path)
          print(f"Flipped NIfTI file saved to {output_path}")

      if __name__ == "__main__":
          # Replace 'input_file.nii.gz' and 'output_file.nii.gz' with your file paths
          input_file_path = 'input_file.nii.gz'
          output_file_path = 'output_file.nii.gz'

          flip_first_dimension(input_file_path, output_file_path)

  • Nov 13, 2023  12:11 PM | tammartru
    VOI is not visible on the render view

    Dear Chris,


    I'm currently in the process of loading an overlay and a VOI on a standard template in MRICroGL. I've chosen to do this to more effectively manage their visibility, as having both files as overlays didn't produce the desired visual result. I'm pleased with the outcome so far, but I do have two questions:


    1. Is it possible to change the color of the VOI? I couldn't locate this option in the menu, aside from the ability to draw a VOI with RGB colors.


    2. Why is the VOI not visible on the rendered brain view, although it appears in all other views (see attached)? Additionally, in the left tool window, I can only see the loaded overlay but not the VOI.


    I appreciate your help!


    Thank you very much,
    Tammar

    • Nov 13, 2023  12:11 PM | Chris Rorden
      RE: VOI is not visible on the render view

      Drawings are special objects for MRIcroGL.


       1. The color reflects the voxel intensity (e.g. 1=red, 2=blue, 3=green, etc.).


       2. The drawings are not shown in the renderings. The renderings require gradient information for volumes, which is computationally expensive and therefore incompatible with interactively modifying drawings. Once the dynamic drawing stage is completed, you can always load it as a static overlay (File/AddOverlay) to view it on the rendering.


      Our nascent NiiVue does allow you to customize drawing colors, and does show them in the rendering (albeit the rendering view is blank while the drawing pen is being dragged).


      You can see this by changing the palette in the draw menu here:


       https://niivue.github.io/niivue/features/draw.ui.html


      and you can make your own custom palette by adjusting the script at the bottom of this web page:


        https://niivue.github.io/niivue/features...

      Nov 13, 2023  04:11 PM | tammartru
      RE: VOI is not visible on the render view

      Thank you, Chris, for your super quick response!


      While I can use both of my ROIs as overlays in MRICroGL, my challenge lies in adjusting the transparency of one of them while keeping the other opaque in the render view. Specifically, I'm using these overlays to visualize connectivity results, where one ROI serves as the seed and the other as the target (activation area).


      I aim to make the seed ROI appear slightly transparent (like in "image1", in light red, with the opaque green on top) while keeping the target ROI fully opaque. However, when I load them as overlays, I find it challenging to adjust the transparency of only one of them without affecting the transparency of the other or of the underlying template ("image2").


      I tried also "draw->advance->create overlap image" option to combine the rendered brain with the seed ROI thinking then to overlay the other ROI (target) on top of them. Unfortunately, this approach distorts the template template, and the colors are not preserved as desired ("image3").


      I would appreciate any further advice or alternative methods you might suggest for achieving this visualization goal, either within MRICro or using any other tools you have in mind.


      Thank you very much for your assistance!


      Tammar

      Attachment: chris_MRICro.png

      • Nov 13, 2023  04:11 PM | Chris Rorden
        RE: VOI is not visible on the render view

        You will want to use the File/AddOverlay function to independently blend data. The drawing features make a lot of compromises to ensure fluid interaction for large dynamic datasets.
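
        The same blend can be scripted; a minimal sketch, assuming hypothetical files seed.nii and target.nii that are already in template space:

        ```
        import gl
        gl.resetdefaults()
        gl.loadimage('mni152')
        gl.overlayload('seed.nii')     # layer 1: seed region
        gl.overlayload('target.nii')   # layer 2: target region
        gl.colorname(1, '1red')
        gl.colorname(2, '2green')
        gl.opacity(1, 40)              # seed slightly transparent
        gl.opacity(2, 100)             # target fully opaque
        ```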

  • Oct 9, 2023  08:10 AM | Joyce Oerlemans
    Error NPM ("unable to load explicit mask")

    Dear Mr. Rorden,


    I am trying to run my first VLSM analysis using SPM and this error occurs:


    Error: unable to load explicit mask named E:\stroke_studie\VOI\Normalized\VOI\Test.val
    Voxels also non-zero in mask E:\stroke_studie\VOI\Normalized\VOI\Test.val = 0
    Error: no remaing voxels also non-zero in mask E:\stroke_studie\VOI\Normalized\VOI\Test.val
    Unable to complete analysis.


    I saw that this question had already been posted on the forum, but I have failed to find the answer (https://www.nitrc.org/forum/reply.php?qu...).


    Could you help me with this?


    Many thanks!


    Joyce

    • Oct 28, 2023  01:10 AM | 立欣 马 - the second hosptial of hebei medical university
      RE: Error NPM ("unable to load explicit mask")

      Hi Joyce, have you solved your problem? I have a similar error about data type. If you don't mind, could we talk about it together?

      Attachment: data type error.doc

  • Oct 2, 2023  03:10 PM | Charly Dominic
    Mricron brain image faulty.

    Hi,
    I am using MRIcron for my thesis. Recently I shifted to a new PC and the brain images are not displaying properly. I can only see the landmark placed; the rest is dark. Does anyone know how to solve this?

    Attachment: mmm.jpeg

  • Sep 19, 2023  04:09 PM | Kate Fissell
    dcm2nii BIDS field for CoilCombineMethod

    Hello,


    I see dcm2niix (I am using v1.0.20220720) when used to convert DWI dicoms extracts a field into the BIDS json called "CoilCombinationMethod" and sets it to a string such as "Adaptive Combine".


    Before beginning to explore using BIDS, for Siemens DICOM files I had found this information in the Siemens private CSA header, using the string "ucCoilCombineMode", which was set to an integer code such as 2. In the newer Siemens scanner platforms such as XA20 the DICOM CSA information appears unavailable and I could not get this information. I see dcm2niix does retrieve this for XA20 - that is super!


    Where does dcm2niix get the CoilCombineMethod for Siemens XA20 platforms? I see that the BIDS specification does recommend including "CoilCombinationMethod" for fMRI datasets, but it does not suggest any DICOM tags to find it. Does the BIDS specification define what the "CoilCombinationMethod" field should contain (e.g. a string vs a code, or what DICOM tags it is derived from)?


    thank you,


    Kate


     

    • Sep 19, 2023  04:09 PM | Chris Rorden
      RE: dcm2nii BIDS field for CoilCombineMethod

      For Siemens V* the CSA header is tag 0029,1020. For Siemens XA the CSA header is tag 0021,1019.
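
      A minimal pydicom sketch to check whether either private tag is present (assuming a hypothetical example.dcm):

      ```
      import pydicom

      ds = pydicom.dcmread('example.dcm')               # hypothetical Siemens DICOM file
      for tag in [(0x0029, 0x1020), (0x0021, 0x1019)]:  # CSA header locations for V* and XA
          if tag in ds:
              print(ds[tag])                            # show the private data element if present
      ```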

      Sep 20, 2023  07:09 PM | Kate Fissell
      RE: dcm2nii BIDS field for CoilCombineMethod

      Thanks very much Chris !

  • Aug 30, 2023  09:08 PM | mengdizhu
    When converting Dicom-Nifti, how to export the converted nifti files to another folder?

    Hi, 


     


    I am new to this software. I wonder, when converting DICOM to NIfTI, how do I export the converted NIfTI files to another folder? Right now, the converted NIfTI file is exported to the same folder as the original DICOM files. Thank you so much!


    MZ

    • Aug 30, 2023  10:08 PM | Chris Rorden
      RE: When converting Dicom-Nifti, how to export the converted nifti files to another folder?

      You can use the "Output Directory" button in the graphical interface to select an output folder.


      Alternatively, you can run the core dcm2niix conversion tool from the command line and specify the output directory:


       dcm2niix -o /output/path /path/to/DICOMs


      It is often useful to specify your output filename:


        https://github.com/rordenlab/dcm2niix/bl...


      so to specify the series number (%s) and protocol name (%p) you could call


       dcm2niix -f %s_%p -o /output/path /path/to/DICOMs

  • Jul 31, 2023  10:07 PM | Matt Amandola
    jhu189 template and ch2bet

    Hi - 


    I'm wondering where I can find more information about the jhu189.nii.gz template, as well as the ch2bet.  Mostly, which MNI template are these two images normalized to? They seem to fit relatively similarly to the MNI152Lin6Asym, but just a tad off.


    Thanks!

    • Aug 1, 2023  11:08 AM | Chris Rorden
      RE: jhu189 template and ch2bet

      We describe the JHU 189 atlas here


        https://www.ncbi.nlm.nih.gov/pmc/article...


      by citing this paper


        https://pubmed.ncbi.nlm.nih.gov/22498656...


       


      The ch2bet is based on a single individual and is described here:


        https://www.mcgill.ca/bic/software/tools...


      While I am happy that my classic MRIcro and MRIcron tools have proved mature and popular, please understand that my development efforts now focus on modern graphics hardware, resulting in MRIcroGL and NiiVue. Personally, I prefer the mni152 and spm152 templates provided with MRIcroGL. These are based on the 2009 atlases, with MNI size (larger than average) and SPM size (average) to match the most popular normalization atlases:


      https://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009


      For details on these two sizes, see:


      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6050588/

  • Jun 12, 2023  04:06 PM | shirgalin
    comparison figure from cordinates

    hi,


    I'm trying to compare different brain areas in mode A versus mode B using  X, Y, Z coordinates and Z values provided in various research articles. Specifically, I would like to create a diagram with two layers: one representing brain areas associated with depression and another layer to compare and identify similarities between depression and cardiac activation.


    I've gone through the manual, but I'm still unsure about the process of creating this diagram. I was wondering if someone understands how to achieve this using MRIcron. Any guidance, specific steps, or tips would be greatly appreciated.

    • Jun 13, 2023  06:06 PM | Chris Rorden
      RE: comparison figure from cordinates

      You will want to save your contrast image in NIfTI format, and then you can load the image just like a statistical map. For example here


      http://brainmap.org/software.html#Sleuth


       


      Once you create the NIfTI image "MyMap.nii" you could visualize it with, for example, this MRIcroGL script:


       


      import gl
      gl.resetdefaults()
      gl.loadimage('spm152')
      gl.overlayload('MyMap')

      • Jun 14, 2023  08:06 AM | shirgalin
        RE: comparison figure from cordinates

        thank you!!


         


         

        • Jun 18, 2023  01:06 PM | shirgalin
          RE: comparison figure from cordinates

          I am trying to create 2 different VOIs in different colors overlaid on the TLRC template (see attached template file). I'm able to create different VOIs and see them on the template, but not with different colors. Also, it looks like only 2 points can be seen on a single VOI each time. Saving them separately as NIfTI files and trying to open them doesn't work either (they disappear or change color).


           


          Do you know this problem and how to fix it?


           


          thank you!

  • Nov 10, 2016  12:11 PM | Sabrina Golde - Charité - Universitätsmedizin Berlin
    DCM2NII with Mac OS Sierra
    Hi,

    in order to use BET of the DPARSFA-Toolbox, I need to install the latest version of dcm2nii on an iMac with OS Sierra. However, it does not seem to create the .ini file when I double click on the dcm2nii-file in the MRIcron folder as described in the documentation. At least I cannot find it anywhere in my home directory.

    Thanks in advance!

    • Nov 11, 2016  04:11 PM | Chris Rorden
      RE: DCM2NII with Mac OS Sierra
      Duplicate of https://www.nitrc.org/forum/message.php?msg_id=19270

      Apr 3, 2023  04:04 AM | hanlai Zhang
      RE: DCM2NII with Mac OS Sierra
      Originally posted by Sabrina Golde:
      Hi,

      in order to use BET of the DPARSFA-Toolbox, I need to install the latest version of dcm2nii on an iMac with OS Sierra. However, it does not seem to create the .ini file when I double click on the dcm2nii-file in the MRIcron folder as described in the documentation. At least I cannot find it anywhere in my home directory.

      Thanks in advance!

      • Apr 3, 2023  11:04 AM | Chris Rorden
        RE: DCM2NII with Mac OS Sierra
        Modern versions of dcm2niix will not generate .ini files by default, and some recent versions of MacOS restrict tools from reading and writing files in the user's home directory, as this is considered a security issue. In theory, you can generate a .ini file with the `-g y` argument, just be aware that your OS may not allow you to read this file.

        ```
        %dcm2niix -g y
        Chris Rorden's dcm2niiX version v1.0.20230320 Clang14.0.0 ARM (64-bit MacOS)
        Saving defaults file /Users/chris/.dcm2nii.ini
        Example output filename: 'myFolder_MPRAGE_19770703150928_1.nii'
        % ls ~/.dcm2nii.ini
        ```
        In general, the consensus is that one should control dcm2niix's behavior via command line arguments, not the .ini file, to ensure consistent and reliable behavior across users and machines. Perhaps the DPARSFA-Toolbox developers want to ensure that the desired commands are generated. For more details, see:

        https://github.com/rordenlab/dcm2niix/is...
        https://github.com/rordenlab/dcm2niix/is...

  • Dec 2, 2022  03:12 PM | Molly Rowlands
    reducing multivolume nifti to single volume nifti
    Hi 

    I have nii and nii.gz files that contain two volumes (MRIcro asks which one to view when I try to open them). I need to input .nii files into a software package that does not read multi-volume files; the files need to be single volumes. Is there a way to do this with MRIcron, MRIcroGL or another software?

    I've tried converting the multivolume nii to hdr/img, and then running it through matlab code that converts it back to nii, but it still has multiple volumes when I do this. 

    (Funnily/frustratingly, I have done this before, but I didn't write down how I did it and now cannot recall. There is a relatively simple way of doing it, in that I didn't need to download any extra software/packages to do so.) 

    Any guidance would be appreciated! 

    Thanks,
    Molly

    • Dec 2, 2022  06:12 PM | Chris Rorden
      RE: reducing multivolume nifti to single volume nifti
      I would use fslsplit and fslmerge, but if you like Python you could use nibabel
        https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils
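
      For example, a minimal nibabel sketch that keeps only the first volume (assuming a hypothetical input_4d.nii.gz):

      ```
      import nibabel as nb

      img = nb.load('input_4d.nii.gz')        # hypothetical multi-volume image
      vols = nb.four_to_three(img)            # split the 4D image into a list of 3D volumes
      nb.save(vols[0], 'output_vol1.nii.gz')  # save only the first volume
      ```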

      Jan 26, 2023  03:01 AM | liu yingqian
      RE: reducing multivolume nifti to single volume nifti
      I have the same question. Have you solved the problem?


      Originally posted by Molly Rowlands:
      Hi 

      I have nii and nii.gz files that contain two volume (MRIcro asks to pick which to view when I try to open it). I need to input .nii files into a software that does not read multivolumes. The files need to be in a single volume. Is there a way to do this with MRIcron MRIcroGL or another software? 

      I've tried converting the multivolume nii to hdr/img, and then running it through matlab code that converts it back to nii, but it still has multiple volumes when I do this. 

      (Funnily/frustratingly, I have done this before, but I didn't write down how I did it and now cannot recall. There is a relatively simple way of doing it, in that I didn't need to download any extra software/packages to do so.) 

      Any guidance would be appreciated! 

      Thanks,
      Molly

  • Sep 14, 2022  03:09 PM | Ying Tian - Peking Union Medical College Hospital
    Can voi in descriptive statistics used for measuring the volume change of hippocampus?
    Hello everyone,
    I'm new to MRIcron. I am conducting research on whether a certain drug influences the volume of the rat hippocampus and thereby changes cognitive function. I used the "pen tool" to draw a region of the hippocampus and the "fill tool" to cover the whole hippocampus. Then, the descriptive statistics showed the following:

    VOI notes
    Center of mass XYZ 87.41x76.83x13.00
    VOI nvox[cc]=min/mean/max=SD: 669 0.02 = 12.1168 19.4356 23.3717 = 1.6913
    VOI <>0 nvox[cc]=min/mean/max=SD: 669 0.02 = 12.1168 19.4356 23.3717 = 1.6913
    VOI >0 nvox[cc]=min/mean/max=SD: 669 0.02 = 12.1168 19.4356 23.3717 = 1.6913
    Mode: 18.7517070770264

    My question is: can the value "VOI mean=19.4356" be used directly for volume comparison? That is, can I obtain the VOI mean for each of my rats and use it for statistical analysis?

    Thanks for your time, and expecting your reply.
    Ying Tian, PUMCH

    • Sep 14, 2022  05:09 PM | Chris Rorden
      RE: Can voi in descriptive statistics used for measuring the volume change of hippocampus?
      The size of your drawing covers 669 voxels, or 0.02 cubic centimeters.

      The brightness of the background image voxels ranges from a minimum of 12.1168 to a maximum of 23.3717, with a mean of 19.4356 and a standard deviation of 1.6913. Since image intensity in MRI scans is relative, these values are typically not of huge interest, though they are informative if you are drawing on a calibrated image (e.g. CT scan, CBF, etc.).

      • Sep 15, 2022  05:09 AM | Ying Tian - Peking Union Medical College Hospital
        RE: Can voi in descriptive statistics used for measuring the volume change of hippocampus?
        Thanks for your reply. It was so timely.
        If my research only compares the volume change of the hippocampus and there are no other lesions, can I just take the covered voxels of one matching slice into the statistical analysis? That is, I choose the slice with the largest hippocampus cross-section, and use the pen tool and draw tool to get the voxels of that slice. If the voxel count in the same slice changes significantly, I would say a significant difference exists between groups. I'm not sure whether this approach is OK.
        Thanks again for your professional support and wish you a good day.

        • Sep 15, 2022  07:09 PM | Chris Rorden
          RE: Can voi in descriptive statistics used for measuring the volume change of hippocampus?
          Taking data from a single 2D slice will always be inherently noisier than sampling the entire region. From first principles, SNR follows the square root of the number of averaged samples:
            https://en.wikipedia.org/wiki/Signal-to-noise_ratio
          A further concern is that if there are differences in angulation and position between your images, you will likely have large variability caused by how oblique the acquired slice is relative to the region of interest. This is why chefs cut carrots on a bias, to increase the surface area. Likewise, the volume of a non-spherical structure on a 2D slice will vary tremendously if the imaging plane is perpendicular to versus oblique with the structure.

  • Aug 4, 2022  04:08 PM | Gamal Osman
    Imaging background lost when adding atlas based images
    I have a problem with MRIcron. I have tried to add structures from atlases to MRIs, and as soon as I add those structures, the MRI background turns black. Any clues on why this could be happening and possible solutions?

  • Apr 11, 2022  05:04 AM | nairulislam
    No overlap between overlay and background - these images do not appear coregistered.
    I am opening MRI scans in MRIcron and then adding other nii.gz files over them as overlays. But it gives me the error message mentioned in the title. I'd like to know how I can add the overlay without getting this error.

    • Apr 11, 2022  11:04 AM | Chris Rorden
      RE: No overlap between overlay and background - these images do not appear coregistered.
      Sounds like you have to register the two images to a common space. You can use SPM's coregister, FSL's FLIRT or AFNI's 3dWarpDrive

      • Apr 12, 2022  06:04 AM | nairulislam
        RE: No overlap between overlay and background - these images do not appear coregistered.
        Thank You very much for the suggestions. I hope that these tools will fix the problem for me.

        Jul 23, 2022  02:07 PM | nairulislam
        RE: No overlap between overlay and background - these images do not appear coregistered.
        One more question.
        Is it possible for T2 MR images to be converted to MNI space? I have been using FSL FLIRT for it, but I think it converts only T1 images to MNI space.

        • Jul 23, 2022  04:07 PM | Chris Rorden
          RE: No overlap between overlay and background - these images do not appear coregistered.
          This question is out of scope for this forum. Consider posting on the fsl jiscmail support list. You should consider using a modality specific reference image or a cross-modal cost function.

  • Jul 21, 2022  10:07 AM | nairulislam
    How can I register all the MRI files in my folder to MNI space in a single bach?
    I am currently working with the GUI version of FSL FLIRT, and I have a lot of files to work with, so it is taking a lot of time.
    I would like to know if there is a way for me to register all the files to MNI space in one go.
    If there is any other software, like SPM, that may be of use to me, please suggest it.

    • Jul 21, 2022  12:07 PM | Chris Rorden
      RE: How can I register all the MRI files in my folder to MNI space in a single bach?
      This question is out of scope for this forum.

      If you want to automate FSL, contact the FSL jiscmail forum. If you are comfortable with shell scripting, look at the FSL scripting tutorial
       https://open.win.ox.ac.uk/pages/fslcourse/website/#CourseMaterials

      If you want to automate using Matlab and SPM, contact the SPM jiscmail forum.

      If you want to know how to automate tasks with Python, consider the neurostars forum.

      • Jul 23, 2022  02:07 PM | nairulislam
        RE: How can I register all the MRI files in my folder to MNI space in a single bach?
        Thank You.

  • Apr 8, 2021  01:04 PM | alexpcg10
    VOI in mm cube?
    Hi all!

    I'm new to this software and wondering how I can convert the value shown for a VOI to cubic mm, or increase the number of decimal places of the cc value.

    So what I did is highlight the area of interest > Descriptive. But the brain that I used is from an animal, which has a small brain.

    The result that I got is 0.03 cc, but I would like more decimal places (or to convert it into cubic mm) to get a more accurate result. How can I do that?
     
    Center of mass XYZ 73.23x28.50x5.63
    VOI nvox(cc)=min/mean/max=SD: 1125 0.03 =11.1916 37.9018 59.0193 = 7.2466
    VOI <>0 nvox(cc)=min/mean/max=SD: 1125 0.03 =11.1916 37.9018 59.0193 = 7.2466
    VOI >0 nvox(cc)=min/mean/max=SD: 1125 0.03 =11.1916 37.9018 59.0193 = 7.2466


    Many thanks in advance!

    • Apr 8, 2021  02:04 PM | Chris Rorden
      RE: VOI in mm cube?
      If you are just starting out and have a modern computer, I would suggest you consider MRIcroGL not MRIcron. While I hope my legacy software is mature and useful, it does not leverage modern hardware and is not actively developed.

      Note that you are given the volume as the discrete number of voxels (1125) as well as the cubic centimeters (0.03):

      VOI nvox(cc)=min/mean/max=SD: 1125 0.03 =11.1916 37.9018 59.0193 = 7.2466

      Therefore, you can get the precise volume knowing the spatial resolution of your image. This is reported in the header of your image, which you can see with the Window/Information menu item. Specifically, it will report the spacing (typically in millimeters) for the first three (spatial) dimensions I, J, K. This gives you the precise volume of each voxel, which you can multiply by the number of voxels.
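
      A minimal Python sketch of that multiplication (assuming the drawing was saved, or renamed, as a hypothetical lesion.nii.gz):

      ```
      import nibabel as nb
      import numpy as np

      voi = nb.load('lesion.nii.gz')                  # hypothetical drawing saved as NIfTI
      vox_mm3 = np.prod(voi.header.get_zooms()[:3])   # spacing of dimensions I, J, K in mm
      nvox = np.count_nonzero(voi.get_fdata())        # number of drawn (non-zero) voxels
      print(f"{nvox} voxels x {vox_mm3:.4f} mm3 = {nvox * vox_mm3:.4f} mm3")
      ```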

      • Apr 9, 2021  02:04 AM | alexpcg10
        RE: VOI in mm cube?
        Hi Chris,

        Thanks for the response. I'm actually calculating the infarct volume of the animal after inducing stroke.

        Is there any video available showing how to calculate the precise volume?

        I am a bit confused by what you mentioned in the previous post; could you please show me the working for how I can get the precise volume?

        So I used MRIcroGL for my analysis, and these are the value shown in the header of each images:
        I1:-0.480432x-0.218863x4.35898mm (53x20x21)= 0.348061
        I2:-0.429688x-0.192176x3.86228mm (53x20x20)= 0.6656362
        I3:-0.378944x-0.165489x3.36558mm (53x20x19) = 0.250943
        I4:-0.328199x -0.138802x 2.86888mm (53x20x18) = 0.320685
        I5:-0.277455x -0.112115x 2.37218mm (53x20x17)= 0.194236
        I6: -0.226711x-0.0854282x1.87548mm (53x20x16)=0.81149


        And here is the descriptive of all the highlighted area from the images above:
        Background image: 7_6_T2_20210324105011_70001
        Drawing Center of mass XYZvox 102.17x38.28x14.58
        Drawing Center of mass XYZmm 4.33x2.61x1.77
        Background image: 7_6_T2_20210324105011_70001
        Drawing VOI nvox(cc)=min/mean/max=SD: 6733 0.03 = 0.3754 10.6360 16.2363 = 2.0323
        Drawing VOI <>0 nvox(cc)=min/mean/max=SD: 6733 0.03 = 0.3754 10.6360 16.2363 = 2.0323
        Drawing VOI >0 nvox(cc)=min/mean/max=SD: 6733 0.03 = 0.3754 10.6360 16.2363 = 2.0323

        Does this mean I should take -0.480432*-0.218863*4.35898mm * 6733 to get the VOI for image 1, -0.429688*-0.192176*3.86228mm * 6733 to get the VOI for image 2, and so on, and then add the values together to get the VOI for the whole infarct volume?

        Thanks in advance!

        • Apr 9, 2021  12:04 PM | Chris Rorden
          RE: VOI in mm cube?
          The distance between voxels centers in the row, column and slice directions is stored as pixdim[1], pixdim[2], pixdim[3] in the NIfTI header. Just make sure the image specifies NIFTI_UNITS_MM
            https://nifti.nimh.nih.gov/pub/dist/src/niftilib/nifti1.h
          I am surprised that you list negative values, as usually these are absolute values. Not sure which tool described these. You can also determine the voxel size from the s-form or q-form, where negative values would create a negative determinant (e.g. an image that is mirrored with respect to canonical space).

          Your formula looks fine, though I would use absolute values for volume. 

          If you have a lot of images, you would be better off writing a Matlab or Python script. MRIcroGL can save images as either .nii, .nii.gz or .voi format - in reality the .voi format is just a .nii.gz (with the extension telling the software to treat it as a discrete rather than continuous image). A script like this one 
          https://github.com/rordenlab/spmScripts/...
          could easily be modified to report volumes for hundreds of images with whatever precision you prefer.

          • May 29, 2022  07:05 PM | tammartru
            RE: VOI in mm cube?
            Dear MriCron expert!
            Following the suggested script for calculating image volumes (https://github.com/rordenlab/spmScripts/...),
            I am not sure it is the right script, because it seems to only calculate the number of NaN voxels.
            Could you please direct me to a script for calculating ROI volumes?

            Thank you very much!
            Tammar

            Originally posted by Chris Rorden:
            The distance between voxels centers in the row, column and slice directions is stored as pixdim[1], pixdim[2], pixdim[3] in the NIfTI header. Just make sure the image specifies NIFTI_UNITS_MM
              https://nifti.nimh.nih.gov/pub/dist/src/niftilib/nifti1.h
            I am surprised that you list negative values, as usually these are absolute values. Not sure which tool described these. You can also determine the voxel size from the s-form or q-form, where negative values would create a negative determinant (e.g. an image that is mirrored with respect to canonical space).

            Your formula looks fine, though I would use absolute values for volume. 

            If you have a lot of images, you would be better off writing a Matlab or Python script. MRIcroGL can save images as either .nii, .nii.gz or .voi format - in reality the .voi format is just a .nii.gz (with the extension telling the software to treat it as a discrete rather than continuous image). A script like this one 
            https://github.com/rordenlab/spmScripts/...
            could easily be modified to report volumes for hundreds of images with whatever precision you prefer.

            • May 30, 2022  10:05 AM | Chris Rorden
              RE: VOI in mm cube?
              The script reports the number of voxels; just multiply by the volume of each voxel (in cubic millimeters) to get the volume in that unit:
                hdr = spm_vol(filename);
                mm3 = prod(abs(hdr.mat(1:3, 1:3)*[1;1;1]));
              You may find other scripts in that repository useful, for example to find mean and standard deviation:
                https://github.com/rordenlab/spmScripts/...

              • May 31, 2022  06:05 AM | tammartru
                RE: VOI in mm cube?
                Thank you very much Mr. Rorden

                With your permission, some following questions:
                1. Is the space between voxels included when measuring volume, or does the given volume of each voxel already include the spacing?
                I have seen in previous posts that to get the volume in cubic mm, the spacing of the three dimensions should be multiplied by the number of voxels.

                2. My VOIs (lesion maps) and the background image (MNI) have different resolutions (voxel sizes and dimensions).
                Trying to overlay the same VOI on two backgrounds with different resolutions, I see that it affects nvox but not the volume in cc.
                Does that make sense? Should I first coreg:reslice my VOIs to the background image dimensions?


                Cheers!
                Tammar

      Apr 11, 2021  10:04 AM | alexpcg10
      RE: VOI in mm cube?
      Hi Chris,

      Sorry to be bothersome.

      I really can't figure out how to get the precise volume. Is there an easy way for you to show me how to do that? I'm not good at using Python scripts, so I'm thinking of doing the calculation manually. I have both MRIcron and MRIcroGL downloaded, so either is fine.

      Thanks in advance and really appreciate if you can guide me a simple calculation for that?

      • Apr 11, 2021  01:04 PM | Chris Rorden
        RE: VOI in mm cube?
        The volume is the product of the number of voxels and the voxel spacing.
         
        For example, in the screenshot below the voxel spacing is 1.875*1.875*9mm, so each voxel is 31.640625mm^3
        Attachment: spacing.png

  • May 24, 2022  08:05 AM | Krystal WANG
    Help with the file bar
    Hi,

    The bar for "files" is missing when I open MRIcron. How can I fix this? Please help.

    Best,
    Krystal

    • May 24, 2022  01:05 PM | Chris Rorden
      RE: Help with the file bar
      The toolbar in your screen shot looks identical to the one shown in the manual:
        https://www.nitrc.org/plugins/mwiki/index.php/mricron:MainPage
      When you load overlays the drop-down bar that currently shows only "background" will show the additional layers.

      While I am glad that MRIcron has proved popular, be aware that I am no longer actively working on this project, having shifted my efforts to MRIcroGL
       https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage
      and NiiVue
       https://github.com/niivue/niivue

  • May 24, 2022  11:05 AM | Yusuke Sudo
    Merging two projects
    I am new to CONN and I am using CONN 17f. I am attempting to merge two projects for the first time. However, when I try to merge them, it fails, and in MATLAB it says "Mismatched first-level covariate names with project ***" and "Mismatched second-level covariate names with project ***". The covariate names are the same between the two projects. I would appreciate it if you could give me some advice.

    • May 24, 2022  01:05 PM | Chris Rorden
      RE: Merging two projects
      This forum is for MRIcron. It sounds like you should post to the CONN forum:
        https://www.nitrc.org/forum/forum.php?forum_id=1144

      • May 24, 2022  01:05 PM | Yusuke Sudo
        RE: Merging two projects
        I'm sorry, I mistook the place to ask about CONN. Thank you for telling me the correct place.

  • Apr 19, 2022  11:04 AM | Lars Eirik Bø
    Brunner-Munzel test in NiiStat
    Hi,

    I see in this forum that the NPM has now been replaced with NiiStat. As far as I understand, NPM supported the use of the Brunner-Munzel test for creating VLSMs, but in the NiiStat introduction it states that "[r]esults are identical to a Student's pooled-variance t-test (if one of the dimensions is binomial)". Does that mean that NiiStat does not support the Brunner-Munzel test?

    TIA

    Lars Eirik

    (PS. I tried asking about this in the NiiStat help forum, but didn't get an answer, so I thought I'd try this forum, as it seems more active.)

    • May 24, 2022  01:05 PM | Chris Rorden
      RE: Brunner-Munzel test in NiiStat
      True, NiiStat no longer includes the Brunner-Munzel test. The results are therefore GLM (e.g. where the mean is the best measure of central difference). In general, the Brunner-Munzel test is a bit more sensitive if the assumptions of the GLM are violated (I like to think the BM test generally detects differences in median). However, both provide robust permutation based thresholding for family wise error, and NiiStat includes the ability to model nuisance regressors. I do think that NPM is robust, I am simply spread too thin to support a lot of these older projects. All the source code is open source, so any one can extend or maintain these legacy tools.

  • Apr 30, 2022  07:04 AM | Sachin Patalasingh - NIMHANS
    Discrepancy in results by MRIcron: NPM and VLSM Toolbox
    Hello VLSM experts,

    I am trying to run VLSM on some glioma patients (N=85). I am using both MRIcron's NPM and the VLSM Toolbox by Stephen M. Wilson. However, I am shocked to see different results from the two. I am using 8000 permutations in both processes, yet the regions in the results contradict each other.
    So, I was wondering if I am making any mistakes or whether there is a problem with the software itself.
    I would appreciate any kind of help on this. Thank you in advance.
    Best,
    Sachin

    • May 24, 2022  01:05 PM | Chris Rorden
      RE: Discrepancy in results by MRIcron: NPM and VLSM Toolbox
      First of all, are you using a GLM-based statistic (e.g. t-test) or the non-parametric Brunner-Munzel test? There are differences in the fundamental assumptions of NPM and VLSM. Each is a valid way of testing for differences based on these assumptions. While I am happy that NPM has proved robust and popular, I no longer maintain it. You may want to try my NiiStat. Alternatively, I am a huge advocate for Anderson Winkler's PALM. I only created NPM/NiiStat because PALM did not exist at that time.

  • Apr 25, 2022  05:04 AM | nairulislam
    How do I extract the masks out of a nii file?
    I am opening an MRI image, and adding overlays on it. I want to extract those areas within the MRIs where the base images and the overlays overlap.

    • Apr 25, 2022  12:04 PM | Chris Rorden
      RE: How do I extract the masks out of a nii file?
      With MRIcron
       Draw/MaskImageWithVOI
      With MRIcroGL
       Draw/Advanced/MaskImage
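
      Outside the GUI, a minimal nibabel/numpy sketch of the same masking (assuming hypothetical files background.nii.gz and voi.nii.gz that share the same voxel grid):

      ```
      import nibabel as nb
      import numpy as np

      base = nb.load('background.nii.gz')                 # hypothetical base image
      voi = nb.load('voi.nii.gz')                         # hypothetical VOI on the same grid
      masked = base.get_fdata() * (voi.get_fdata() > 0)   # zero out everything outside the VOI
      nb.save(nb.Nifti1Image(masked, base.affine, base.header), 'masked.nii.gz')
      ```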

      • Apr 27, 2022  05:04 PM | nairulislam
        RE: How do I extract the masks out of a nii file?
        For MRIcroGL the steps are Draw/Advanced/Mask Image/Preserve regions with VOI, and after that you save the mask with File/Save Volume. Right?

  • Mar 12, 2022  08:03 AM | schn
    NIfTI to VOI
    Hello everyone,

    I am a beginner using MRIcron. For a research project I manually drew the lesion areas of several stroke patients in MRIcron in axial view. I aimed to create .voi files - one for each patient - in order to create overlays afterwards.

    Instead of saving the VOIs via Draw->Save VOI, I saved the files via File->Save as NIfTI and chose the .voi format. When trying to open the images again, the drawn lesions were no longer in the files. Now I have found out that I probably saved new templates instead of saving VOIs.

    Do you know of any way to convert the NIfTI files to VOI to make the drawn lesions visible again?

    Thanks in advance for your help and kind regards
    Nadine

    • Mar 12, 2022  02:03 PM | Chris Rorden
      RE: NIfTI to VOI
      The File/SaveAsNIfTI option saves the MRI scan you are viewing. This allows you to convert various image formats (MGH, NRRD, ECAT) to NIfTI. The Draw/SaveVOI option saves your drawing. This is described in the manual:
        https://people.cas.sc.edu/rorden/mricron/stats.html
      If you are just getting started, and are using a computer built in the last decade, I would suggest you use the modern MRIcroGL instead of the legacy MRIcron
        https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage#Drawing_Regions_of_Interest
      While I am glad that my classic tools have remained popular and robust, I have a heavy teaching, administration, service and research load, so have little time to support these older tools.

      • Mar 15, 2022  07:03 AM | schn
        RE: NIfTI to VOI
        Dear Mr. Rorden, 

        thank you very much for your reply! 

        Kind regards!

  • Mar 9, 2022  05:03 PM | Qiushuo Cheng
    Can't download dcm2nii.exe
    Hi,
    I downloaded win.zip, but when I unzipped it, there was no dcm2nii.exe.
    The same happened on my Linux system: there is no dcm2nii.

    What should I do?

    Yours,
    Qiushuo

    • Mar 9, 2022  06:03 PM | Chris Rorden
      RE: Can't download dcm2nii.exe
      When you download a recent version of MRIcron (e.g. v1.0.20190902)
        https://www.nitrc.org/frs/?group_id=152
      you will notice that it includes an executable dcm2niix in its 'resources' folder. The MRIcron "Import" menu item provides a graphical wrapper for dcm2niix, but you can also run it from the command prompt.

      dcm2niix is the modern version of dcm2nii that includes support for BIDS output and enhanced DICOM.
        https://github.com/rordenlab/dcm2niix

      • Mar 10, 2022  09:03 AM | Qiushuo Cheng
        RE: Can't download dcm2nii.exe
        Thank you for your timely response!

        But I think I need to use dcm2nii rather than dcm2niix.
        When I use dcm2nii, the script reports a fault: "Warning: Unable to determine manufacturer (0008,0070), so conversion is not tuned for vendor."
        I am trying to reproduce a GitHub project on my computer, and they used dcm2nii.




        Qiushuo

        • Mar 10, 2022  01:03 PM | Chris Rorden
          RE: Can't download dcm2nii.exe
          You can always download an older version of MRIcron that uses dcm2nii, e.g. the version from 2016:
            https://www.nitrc.org/frs/?group_id=152
          Regardless of whether you use dcm2niix or dcm2nii, knowing the manufacturer is vital for interpreting the private tags (e.g. 0021,105b refers to Sequence Variant for Siemens data but TaggingFlipAngle for GE data). While both tools MIGHT be able to extract an image, they will have impoverished metadata. You may want to use a tool like gdcmdump to identify your manufacturer and dcmodify to reinstate this crucial information.
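          If dcmodify is not handy, a similar repair can be sketched with the pydicom Python module; the folder name and vendor string below are placeholders that you would replace with the manufacturer you identified (e.g. with gdcmdump):

          import pydicom
          from pathlib import Path

          folder = Path('dicom_in')              # placeholder folder of DICOM files
          for fname in folder.glob('*.dcm'):
              ds = pydicom.dcmread(fname)
              if not getattr(ds, 'Manufacturer', ''):
                  ds.Manufacturer = 'SIEMENS'    # placeholder: use the vendor you identified
                  ds.save_as(fname)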

          • Mar 10, 2022  02:03 PM | Qiushuo Cheng
            RE: Can't download dcm2nii.exe
            Thank you for your response.

            I have downloaded and unzipped every version on the website; however, there is no dcm2nii.exe in any of the packages.

            Mar 10, 2022  02:03 PM | Qiushuo Cheng
            RE: Can't download dcm2nii.exe
            There is no 2016 version of MRIcron, either.

            • Mar 11, 2022  01:03 PM | Chris Rorden
              RE: Can't download dcm2nii.exe
              Good point, the release was hidden. If you check again you will find
                MRIcron/NPM/dcm2nii 2MAY2016
              The project is open source so you can build it yourself.
                https://github.com/neurolabusc/MRIcron/tree/master/dcm2nii

              Please bear in mind that I have a heavy teaching, service and administration load, and while I am happy my legacy tools have proved popular and robust, I am not in a position to support them. That software was developed before enhanced DICOM was popular, and before the BIDS standard.

  • Feb 10, 2022  04:02 AM | Krystal Yau
    Classification of DICOM images
    Dear expert,

    I have been searching up online on how to classify the DICOM images into different categories/ folders, i.e. T1, T2, FLAIR, M0 and PCASL.  

    Currently, the files are named with a Z at the beginning, e.g. "Z01", "Z02" ... "Z8568". After converting the dcm images, the nii files are basically named "myFolder_MPRAGE_19770703150928_1.nii". I understand that I can modify the file name by modifying the annotation under "Output Filename". I'm stuck at the step of categorizing them...

    Could you please advise how to classify the images, ideally in batch?

    Your attention and help will be highly appreciated!

    Thanks a lot.

    Best regards,
    Krystal
    Attachment: Screenshot_1.jpg

    • Feb 10, 2022  12:02 PM | Chris Rorden
      RE: Classification of DICOM images
      dcm2niix does not know the intention of your sequences. A T2* scan might be a task fMRI in one series and a resting state acquisition in another. 

      You may want to look at one of the many tools that wrap dcm2niix with your study-specific heuristics. Examples include ezBIDS, HeuDiConv, and Dcm2Bids
        https://github.com/rordenlab/dcm2niix#li...
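      If you only need a quick study-specific sort rather than a full BIDS layout, a small Python sketch along these lines may help; the folder name and keyword rules are hypothetical and should be adapted to your own protocol names (dcm2niix embeds the protocol/series name in the output filename):

      import shutil
      from pathlib import Path

      # hypothetical keyword rules; list more specific keywords (FLAIR) before generic ones (T2)
      rules = {'MPRAGE': 'T1', 'FLAIR': 'FLAIR', 'T2': 'T2', 'M0': 'M0', 'PCASL': 'PCASL'}
      src = Path('converted')  # placeholder folder holding the dcm2niix output
      for nii in sorted(src.glob('*.nii*')):
          for key, category in rules.items():
              if key.lower() in nii.name.lower():
                  dest = src / category
                  dest.mkdir(exist_ok=True)
                  shutil.move(str(nii), str(dest / nii.name))
                  break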

      For future acquisitions, you can set the series name on your console to match your intention, e.g. "T1", "T2", "FLAIR". You can even do this to help automate conversion to BIDS data:
       https://github.com/ReproNim/reproin

  • Jan 27, 2022  02:01 PM | Bradley Delman - Mount Sinai Medical Center
    Rotation of angled subject
    Hi,

    Some subjects just cannot lie still in a neutral position. I am looking for a way in MRIcron to rotate a volume to visualize subjects in anatomic/orthogonal alignment. This would only require a few degrees of correction of pitch, roll and yaw, while retaining coordinates of the unrotated dataset.

    Thanks in advance!

    • Jan 27, 2022  04:01 PM | Chris Rorden
      RE: Rotation of angled subject
      Interpolating and reslicing data is a lossy operation. You will really want to use a dedicated tool like FSL's mcflirt or SPM's motion correction algorithm to deal with these effects. If you use AFNI, you can use its motion censoring functions to detect and eliminate noisy artifacts.

      • Jan 27, 2022  04:01 PM | Bradley Delman - Mount Sinai Medical Center
        RE: Rotation of angled subject
        Hi and thanks for the prompt feedback.

        Because of our application I'm less concerned about lossiness. We need to identify landmarks on a dataset, and we will then go back and use the initial (unrotated) data. I just need to be sure the landmarks are consistently and evenly placed. Does that change the options for rotating within MRIcron?

  • Jan 26, 2022  04:01 AM | Krystal Yau
    Image display is off
    Dear experts,

    I am new to mricron and trying to view the images. As I opened the nifti files that were just converted from dicom, the images look odd.  The brain images were not displayed in their proper size/ dimension. I have no idea how to resize it.  Please see the attached file for your reference.

    The images also look dull. How can I make them clearer?

    Thanks.

    Regards,
    Krystal
    Attachment: Screenshot_1.jpg

    • Jan 26, 2022  03:01 PM | Chris Rorden
      RE: Image display is off
      Your data is anisotropic. You can check the "Reorient images when loading" checkbox in preferences to interpolate the images, though note there will no longer be a one-to-one correspondence between screen pixels and image voxels. For future sequences, you may want to consider using isotropic sequences.

      You may also want to try MRIcroGL - I am glad that MRIcron has proved robust and popular, but I no longer have time to actively develop MRIcron.

  • Jan 25, 2022  01:01 PM | Krystal Yau
    DICOM converted into Nifti
    Hi experts,

    I am new to SPM 12. I have converted the MR images from dcm to nii by using mricron. But when I followed through the tutorial (https://andysbrainbook.readthedocs.io/en...), it showed the directory where the files are stored. However, when I relocated to my directory, which was on my desktop, it was not found (even though I copied the path).

    Can you please advise?

    Thanks,
    Krystal

    • Jan 25, 2022  02:01 PM | Chris Rorden
      RE: DICOM converted into Nifti
      Sounds like a question for the SPM forum:
        https://www.jiscmail.ac.uk/cgi-bin/wa-jisc.exe?A1=ind2201&L=SPM&X=6A30DF6DC73478E1E1&Y=rorden%40sc.edu

      You may also want to watch some of Andy's outstanding YouTube videos.

  • Jul 7, 2020  06:07 PM | sbr
    How to adjust preferences to load all volumes?
    When attempting to load the NIfTI file images, a pop-up message indicates that a "large image downsample" has occurred and that an adjustment in the preferences must be completed to load all volumes. Currently, this makes the sagittal and coronal views of the images look blurry (attached image). Could anyone give me directions on how to alter this in the preferences (which setting to alter) to be able to load all volumes?

    Thank you in advance.

    • Jul 7, 2020  06:07 PM | Chris Rorden
      RE: How to adjust preferences to load all volumes?
      Hello,
       What version are you using? If you are using the pre-release (v1.2.20200707) https://github.com/rordenlab/MRIcroGL12/releases/tag/v1.2.20200707 you can change this by going to the Preferences window and setting "Reduces volumes larger than ...". If you are using an older version of v1.2, you can select Preferences and press the "Advanced" button to edit the text file; in this case the value you want to edit is "MaxVox=560". The default setting is 560, which will work with all modern graphics cards. If you have a discrete graphics card with 8 GB of RAM, you can set this to 1024. This will stop the downsampling of large data.

      Unfortunately, while changing this will hide the warning, in your case it will not make the resulting image look much better. In your example, the image has an exceptionally high in-plane resolution, but very thick slices. These thick slices make the image look blurry in the slice (head-foot) direction. This is an inherent property of your anisotropic image: any tool must interpolate this low-resolution dimension. In future, you may want to consider acquiring isotropic images. For Siemens the classic 3D T1 sequence is referred to as "MP-RAGE" and the 3D T2 sequence is "SPACE" - both can provide outstanding isotropic images.

      • Jul 7, 2020  06:07 PM | sbr
        RE: How to adjust preferences to load all volumes?
        Thank you for your help and the advice, luckily the data is from a pilot and things can be altered.

        Jan 12, 2022  11:01 AM | mehrnaz kamyab
        RE: How to adjust preferences to load all volumes?
        Hello,
        I have the same problem loading an fMRI image. MRIcroGL only loads 409 of the total 600 volumes. I am using "MRIcroGL 1.2.20211006 GTK2 x86-64 LLVM" on Ubuntu 20.04. In the Help menu, selecting Preferences, I cannot find "Reduces volumes larger than ...", and in advanced mode there is no "MaxVox=560" element either. Why is that? How can I solve this issue?

  • Jan 8, 2022  03:01 AM | where
    problem about coregister
    Hello, when I use SPM12 to coregister an MRI T1 image with fMRI, why does it fail? After the coregistration finishes, I open the T1 image and the fMRI image at the same time in mricron (the View menu has a Yoke function), and they are not at the same coordinate origin; sometimes, when viewing one of the images, the other image does not even appear in mricron's display area.
    Thank you very much for your help, looking forward to your reply~

    • Jan 8, 2022  03:01 AM | where
      RE: problem about coregister
      Hello,
      Why does the registration of the MRI T1 image and the fMRI fail when I use SPM12? After the registration, I use mricron to open the T1 image and the fMRI image at the same time (there is a Yoke function in the View menu); they are not at the same coordinate origin, and sometimes, when viewing one image, the other image does not even appear in mricron's display area.
      It will prompt: No overlap between overlay and background - these images do not appear coregistered.
      Thank you very much for your help, looking forward to your reply~

  • Dec 20, 2021  08:12 AM | Joyce Oerlemans
    View settings
    Hello everyone,

    as I am very new to VLSM and using MRIcron, I was wondering if you could help me with the following (probably basic) problem: when I open images in MRIcron this is the standard view setting I see (see attachment). As you can see, the sagittal and coronal views are suppressed for some reason. Could somebody tell me how to reset this view?

    Many thanks!
    Attachment: View settings.JPG

    • Dec 20, 2021  02:12 PM | Chris Rorden
      RE: View settings
      These scans are anisotropic (slice resolution is higher in plane than between slices). Choosing the "Reorient images when loading" preference will reslice this data to be isotropic. Since you are just starting out, you may want to try MRIcroGL which leverages modern hardware. While I am happy that MRIcron is popular and has proved robust, it was designed for computer hardware from 2005.

  • Dec 10, 2021  02:12 AM | yali123
    problem about NPM
    Hello expert:
    I am trying to run a VLSM analysis with NPM (version: Chris Rorden's NPM :: 6 June 2013, CacheMB = 512, Threads used = 4). It runs successfully on the tutorial datasets, but on my datasets it always shows "unsupported compressed data type 64". How can I solve this problem?
    thank you
    Attachment: 20211209201024.png

    • Dec 10, 2021  06:12 PM | Chris Rorden
      RE: problem about NPM
      Hello,

      I suspect this is related to:
        https://github.com/nipy/nibabel/issues/1046
      if you create these images with Python, please choose an appropriate precision for your data (e.g. dtype=np.uint8 for binary data)
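      For example, a minimal nibabel sketch (the filename is a placeholder) that re-saves a binary lesion map with an 8-bit datatype:

      import numpy as np
      import nibabel as nib

      img = nib.load('lesion.nii')                    # placeholder filename
      data = (img.get_fdata() > 0).astype(np.uint8)   # binarize: 0 = intact, 1 = lesioned
      out = nib.Nifti1Image(data, img.affine)
      out.set_data_dtype(np.uint8)                    # store as UINT8 rather than FLOAT64
      nib.save(out, 'lesion_uint8.nii')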


      Assuming your data is binary lesion maps (where each voxel is either lesioned [1], or unlesioned [0]), the 64-bit precision is not required, and you can dramatically reduce image size by storing them as UINT8, e.g.
        fslmaths ~/T1 -add 0 ~/T1c -odt char
      or to save as 32-bit float:
       fslmaths ~/T1 -add 0 ~/T1f -odt float


       I am glad that my legacy software remains popular. I have moved my own development on to NiiStat.
        https://www.nitrc.org/plugins/mwiki/index.php/niistat:MainPage
       Feel free to maintain and extend the NPM source code and project.

  • Dec 5, 2021  12:12 PM | jonathanattwood
    Open/convert image/vector files to generate VOI
    Hi, 

    Is it possible to convert an image file (jpeg/png) or vector file into a file that will open as a 2D region of interest in MRIcron? I need to generate 3D VOIs by interpolating between 2D lesion drawings like the one attached, and would rather recreate the lesion exactly rather than trace it if possible. 

    Thanks for your help.
    Attachment: Picture 1.png

    • Dec 6, 2021  04:12 PM | Chris Rorden
      RE: Open/convert image/vector files to generate VOI
      This is beyond the scope of my viewing software.
       1. Convert 2D PNG to NIfTI 2D
            https://stackoverflow.com/questions/52535729/convert-png-files-to-nii-nifti-files
       2. Convert 2D NIfTI to 3D NIfTI using Matlab, Python or fslmerge (a Python sketch of steps 1-2 follows below).
       3. Set the origin, orientation and scale. Perhaps SPM's Display function can help here. Be aware that CT scans of the brain were often acquired with gantry tilt, and the slices will be pitched relative to MNI space.
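      A rough Python sketch of steps 1 and 2 (assuming Pillow and nibabel are installed; the filenames and the identity affine are placeholders, so step 3 is still required):

      import numpy as np
      import nibabel as nib
      from PIL import Image

      # step 1: read each 2D drawing as a grayscale slice (placeholder filenames)
      slice_files = ['lesion_slice01.png', 'lesion_slice02.png', 'lesion_slice03.png']
      slices = [np.asarray(Image.open(f).convert('L')) for f in slice_files]

      # step 2: stack the 2D slices into a 3D volume (slice order = file order)
      vol = np.stack(slices, axis=-1).astype(np.uint8)

      # placeholder affine: replace with the true origin, orientation and scale (step 3)
      nib.save(nib.Nifti1Image(vol, np.eye(4)), 'lesion_3d.nii')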

  • Oct 15, 2021  08:10 AM | alialam
    Change slice sequence
    Hi 

    My nii.gz files for my MRI head scans are for some reason in a peculiar sequence, i.e. they do not follow on from one another.
    Is there any way to change the slice sequence so that it follows the MRI head sequence?

    Thanks

    • Oct 15, 2021  12:10 PM | Chris Rorden
      RE: Change slice sequence
      This question is underspecified. Are you trying to reorder the 2D slices in a 3D volume, or the 3D volumes in a 4D time series? Why is the image order scrambled - what is the source of these images and how were they converted (e.g. did you convert them from DICOM to NIfTI using dcm2niix)? I suspect a combination of fslroi and fslmerge can solve your issue, but it might be easier to fix it at the source.

      • Oct 15, 2021  12:10 PM | alialam
        RE: Change slice sequence
         Hi Chris

         So I am trying to reorder the 2D slices in a 3D volume.
         The source was DICOM files converted to NIfTI.
         I can't quite understand why the image order is scrambled - I've tried to re-convert them to no avail.
         Are there any other bits of info that may help?
         Thanks


        Nov 25, 2021  01:11 PM | alialam
        RE: Change slice sequence
        Hi Chris

         They are corrupt at the source, so I fear I will have to rearrange the 2D images to create a 3D volume.
         I've tried to reconvert the DICOM to NIfTI but no luck.

         Any ideas on how this can be done?

        • Nov 25, 2021  03:11 PM | Chris Rorden
          RE: Change slice sequence
          You could use fslsplit and fslmerge.

          Alternatively, you could write a Python script using nibabel.
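           For example, a minimal nibabel sketch for re-ordering the 2D slices of a 3D volume; the filename and the slice permutation are placeholders for the order you work out for your data:

           import numpy as np
           import nibabel as nib

           img = nib.load('scrambled.nii.gz')                  # placeholder filename
           data = img.get_fdata()
           # hypothetical permutation: here simply reverse the slice order along the 3rd axis
           order = np.arange(data.shape[2])[::-1]
           nib.save(nib.Nifti1Image(data[:, :, order], img.affine), 'reordered.nii.gz')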

  • Nov 5, 2021  02:11 PM | Nathalie Rieser
    Linux Installation Problem
    Hi everyone,

    When I want to install MRIcron on CentOS 7, I receive the following error message. Could someone help me with this?

    mricron/MRIcron
    [FORMS.PP] ExceptionOccurred
    Sender=EAccessViolation
    Exception=Access violation
    Stack trace:
    $00000000004242D5
    $0000000000431E00
    $000000000043CA04
    $000000000043A1F7
    $00000000004B7ADA
    $00000000005CB075
    $000000000056279D
    $00007F40315E0B4D

    Best wishes & thanks in advance,
    Nathalie

    • Nov 5, 2021  04:11 PM | Chris Rorden
      RE: Linux Installation Problem
      Hello,
       I am not very familiar with CentOS, but Wikipedia says that CentOS 7 was released in 2014, with full updates ending in 2020. I wonder if it does not have the modern widget sets. Do some of the older releases work? 

      https://www.nitrc.org/frs/?group_id=152

      You could also try the pre-release for the upcoming version
        https://github.com/neurolabusc/MRIcron/releases
      as noted in the release notes, the latest version expects libqt5pas, which you may be able to install from an rpm:
       https://github.com/davidbannon/libqt5pas/releases/tag/v1.2.9

  • Oct 18, 2021  11:10 AM | jonathanattwood
    Lesion drawing with Damasio 1989 template
    Hi, 

    Is it possible to adjust the orientation of the axial plane relative to the brain in MRIcron, or other similar programmes?

    I assume most MNI brain volumes used now, including Ch2 in MRIcron, place the axial plane on the AC-PC line, approx 9 degrees steeper than the orbitomeatal line used on early CT scans.

    I am trying to create a VOI by copying the outline of a brain lesion which was originally drawn onto 9 axial slices from a template created by Damasio & Damasio 1989 ('Lesion analysis in Neuropsychology', Appendix Figure A.2, attached). The axial plane in the template is somewhere between 0-15 degrees to the OM line. When I try to identify the axial slices on MRI I can either match anterior or posterior regions but not both suggesting the angle needs adjusting. Is this possible? Or is there another way round this problem?

    Thanks for your help.

    • Oct 18, 2021  12:10 PM | Chris Rorden
      RE: Lesion drawing with Damasio 1989 template
      Are these modern CT scans or is this an archival study? 

      Once upon a time, CT scans were acquired with irregularly spaced slices (more near the brain stem, fewer slices for the cortex) and with substantial gantry tilt. The wide spacing between slices typically renders these slices a poor choice for normalization. You may want to see if you can take a popular slice and angulate the volume to match the Damasio slides. My legacy MRIcro software has a "free rotate" button that helps, but you could also do this with SPM:
        https://people.cas.sc.edu/rorden/mricro/mricro.html


      For modern CT scans, these images typically have good spatial accuracy in all planes and can be warped to standard space using my clinical toolbox for SPM. I would suggest using a modern MNI-oriented atlas rather than the legacy Damasio atlas, as this will aid translation to studies using other modalities.

      • Oct 18, 2021  04:10 PM | jonathanattwood
        RE: Lesion drawing with Damasio 1989 template
        Thanks for your reply Chris. 

        It is an archive project. The scan was acquired in 1985 and drawn onto the template shortly afterwards. I have 80 or so similar scans which I'd like to be able to use to identify lesions.

        Using MRIcro to rotate the pitch by -8 makes the volume equate to the template. Very satisfying! I have been able to save the rotated volume and re-open it in MRIcron where it remains rotated, which is great. 

        I have tried rotating the Brodmann atlas in MRIcro and exporting this but when I re-opened in MRIcron it did not appear to be rotated any more. Is there a reason for this, and is there another way to make and use rotated overlays?

        I'm afraid I am not a programmer (yet) so am trying to avoid SPM and Matlab for now as far as possible.

        Thanks again.

  • Sep 12, 2021  02:09 PM | niklasl - Umeå university
    MRIcron does not find libqt5pas1
    Hi

    I am using Red Hat 7.9 and tried to install MRIcron (both versions) but I get the following message (see below) when trying to install libqt5pas1. I have an NVIDIA M5000 card and I have CUDA 9.1 installed and working. What might be the issue?

    best

    nick


    [root@localhost MRIcroGL]# yum install libqt5pas1
    Loaded plugins: langpacks, nvidia, product-id, search-disabled-repos, subscription-manager
    file:///var/cuda-repo-9-1-local/repodata... [Errno 14] curl#37 - "Couldn't open file /var/cuda-repo-9-1-local/repodata/repomd.xml"
    Trying other mirror.
    file:///var/cuda-repo-9-1-local-compiler... [Errno 14] curl#37 - "Couldn't open file /var/cuda-repo-9-1-local-compiler-update-1/repodata/repomd.xml"
    Trying other mirror.
    file:///var/cuda-repo-9-1-local-cublas-p... [Errno 14] curl#37 - "Couldn't open file /var/cuda-repo-9-1-local-cublas-performance-update-1/repodata/repomd.xml"
    Trying other mirror.
    file:///var/cuda-repo-9-1-local-cublas-p... [Errno 14] curl#37 - "Couldn't open file /var/cuda-repo-9-1-local-cublas-performance-update-3/repodata/repomd.xml"
    Trying other mirror.
    No package libqt5pas1 available.
    Error: Nothing to do

    • Sep 13, 2021  12:09 PM | Chris Rorden
      RE: MRIcron does not find libqt5pas1
      I use Debian rather than RedHat Linux distributions. My best guess is that you can install the .rpm file from here:
        https://github.com/davidbannon/libqt5pas/releases
      Given your hardware, I would suggest trying out MRIcroGL instead of MRIcron:
        https://github.com/rordenlab/MRIcroGL/re...
      MRIcroGL uses OpenGL 2.1, which was released in 2006. As long as your computer supports OpenGL 2.1, I strongly suggest using MRIcroGL instead of MRIcron. MRIcroGL will leverage the dedicated hardware and 2048 cores of your graphics card.

      I provide MRIcroGL for both GTK2 and QT5. Again, QT5 will require the libqt5pas library.

  • Aug 19, 2021  10:08 AM | pzg20
    MRIcron plotting height threshold not peak-level T
    Hello

    I have a t-contrast in SPM with a peak-level statistic of 7.99 which I am saving as an nii (thresholded SPM) and plotting in MRIcron.

    For some reason the activation is plotting the height threshold from SPM (3.21). I am not having this problem with any other contrasts but it appears to be doing it for all clusters in this one - any ideas? (I have tried without cluster correction)

    Thank you

    • Aug 19, 2021  12:08 PM | Chris Rorden
      RE: MRIcron plotting height threshold not peak-level T
      I am happy that my old software is mature and remains popular. However, MRIcron has not been updated in the last decade. You may want to try out MRIcroGL that leverages recent advances in computers:
        https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage
      I suspect with your overlay, you simply have to set the minimum and maximum values for the layer for the desired contrast. The algorithm that guesses the default contrast when loading an image does not know your intention, and may not always be ideal. I would suggest reading the notes on Layers in the manual:
        https://people.cas.sc.edu/rorden/mricron/main.html

  • Jul 6, 2021  11:07 AM | Lisa Schmidt - Philipps University Marburg / Clinic for Psychaitry and Psychotherapy
    Multislice view
    Hello :)
    I am using MRIcroGL for my DTI results.
    How can I use the multi-slice view?
    I tried to figure it out with the MRIcro tutorial, but that only works for that version...

    Any ideas?
    It would be helpful because I want to depict multiple results in one slice picture 

    Best, Lisa

    • Jul 6, 2021  12:07 PM | Chris Rorden
      RE: Multislice view
      Choose the Display/Multi-Planar menu item to see the axial, coronal and sagittal images simultaneously. You can click on the image to navigate to different coordinates, or use the controls in the "2D Slice Selection" panel.

      You can also choose Display/Mosaic to create custom multi-slice views. You can use the widgets in the "Mosaic" panel to crudely set these values, or input custom text for fine control. The Scripting/Templates/Mosaic and Scripting/Templates/Mosaic2 menu items show how you can control these views with a Python script. 
        https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage#Mosaic_Views
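      For instance, a minimal script along those lines (the background image name and the slice positions, in millimeters, are just examples; the bundled templates show richer options):

      import gl
      gl.resetdefaults()
      gl.loadimage('spm152')
      # first row: four axial slices at the given mm positions; second row: a sagittal rendering
      gl.mosaic("A 0 16 32 48; S R 0")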

  • Jun 14, 2021  01:06 PM | Laura Mendes
    Cannot find MRIcron directory
    Hello, I installed the MRIcron software for Mac; there were no problems while installing, and it opens perfectly. However, I cannot find the MRIcron home directory, and I need to copy some files there.

    Thank you

    • Jun 14, 2021  02:06 PM | Chris Rorden
      RE: Cannot find MRIcron directory
      On MacOS, all applications are folders. From the Finder, you can right-click (or Control-click if you are using a trackpad) on the program icon and select "Show Package Contents". Be aware that Apple has started to require notarized applications, and depending on your version of MacOS, modified applications may not work after you change them, as Apple fears that the program has been tampered with. This is an issue with recent versions of MacOS, and is not specific to my software.

  • May 19, 2021  07:05 AM | Wiebke Trost
    Effect sizes
    Hello

    We were wondering if there is a possibility to obtain effect sizes for NPM analyses?
    Thank you very much in advance!

    Kind regards
    Wiebke

    • May 19, 2021  12:05 PM | Chris Rorden
      RE: Effect sizes
      NPM has been replaced by NiiStat.

      You may want to look at the Power and PowerMap scripts
        https://github.com/neurolabusc/NiiStat

  • Apr 26, 2021  08:04 AM | rosenkohl
    MRIcron: NIfTI to .hdr
    Hi all,

    I am using the latest MRIcron version on a Linux machine. I would like to convert niftis that I got from an SPM first-level analysis into .hdr format. I already know (from Google) that this should be possible via "import -> convert nifti to .hdr", BUT my MRIcron does not have this option or even this button in the GUI. The only option available is "convert DICOM to NIfTI". How can I achieve a conversion to .hdr, and what's the reason for the missing button? Is it just that I am using the latest MRIcron version?

    Thank you so much in advance and all the best,

    Mareike

    • Apr 26, 2021  11:04 AM | Chris Rorden
      RE: MRIcron: NIfTI to .hdr
      Modern SPM works well with single file NIfTI (filename.nii), so I am not sure why you need the dual file version (.hdr/.img). I have moved active development to MRIcroGL, so the MRIcron documentation might be getting a bit out of date.

      I would use the FSL tool fslchfiletype to do this, but you could also use a Matlab script with SPM:



      function nii_nii2hdr (fnms)
      %Convert file.nii to file.hdr/file.img
      % https://github.com/rordenlab/dcm2niix/is...
      %n.b. FSL does not like file.nii and file.hdr co-existing
      % fnms : (optional) images to convert
      %Examples
      % nii_nii2hdr
      % nii_nii2hdr('T1_LM1003.nii');
      % nii_nii2hdr(strvcat('T1_LM1003.nii','T2_LM1003.nii'));

      if ~exist('fnms','var') %file not specified
          [A,Apth] = uigetfile({'*.nii';'*.*'},'Select .nii file(s)', 'MultiSelect', 'on');
          fnms = strcat(Apth,char(A));
      end
      for i=1:size(fnms,1)
          fnm = fnms(i,:);
          [pth, nm] = spm_fileparts(fnm);
          hdr = spm_vol(fnm);
          img = spm_read_vols(hdr);
          hdr.fname = fullfile(pth, [nm, '.img']);
          spm_write_vol(hdr,img);
      end

      • Apr 26, 2021  12:04 PM | rosenkohl
        RE: MRIcron: NIfTI to .hdr
        Hi Chris, thank you so much for your response, both alternatives worked perfectly fine for me (inside SPM & FSL)!

        Did I understand that correctly, that there is a possibility in MRIcroGL to do the conversion (by using the GUI)?

  • Apr 9, 2021  01:04 PM | KOUSTAV CHATTERJEE - INSTITUTE OF NEUROSCIENCES KOLKATA
    error message in converting DICOM DWI data through dcm2niigui
    Dear Sir,
    I was trying to convert DICOM images of DWI data through dcm2niigui. My output format was "Compressed FSL (4D NIfTI nii)", and I also checked "Protocol name", "Acquisition series", "Collapse folders", and "Recursive folder search depth = 1".

    I got the following message as I dragged and dropped all 308 images into the GUI. Could you please help me resolve the error messages?

    Converting 217/308 1
    11145403->diffAPMPoptMB350b100050b200s012a1001.nii
    151424 16
    GZip...diffAPMPoptMB350b100050b200s012a1001.nii.gz
    *Warning: Number of images in series (91) not divisible by number of volumes (14)
    * Perhaps the selected folder only has some of the images
    * Potential partial acquisition or improper segmentation of files
    * Possible solution: check 'Collapse folders' in Help/Preferences and select directory that contains all images in subfolders
    Converting 308/308 14
    11161603->diffAPMPoptMB350b100050b200s013a1001.nii
    1968512 16
    GZip...diffAPMPoptMB350b100050b200s013a1001.nii.gz
    Conversion completed in 206561 ms

    • Apr 9, 2021  01:04 PM | Chris Rorden
      RE: error message in converting DICOM DWI data through dcm2niigui
      Development of that tool ended 6 years ago. Even at that time, it recommended that you upgrade to dcm2niix. The DICOM standard has evolved a lot since that time, with vendors now supporting enhanced DICOM. I strongly suggest you use dcm2niix:
        https://www.nitrc.org/plugins/mwiki/index.php/dcm2nii:MainPage
      If you prefer a graphical interface, get MRIcroGL (Import/ConvertDICOMtoNIfTI menu item):
        https://www.nitrc.org/plugins/mwiki/index.php/mricrogl:MainPage

      While I hope my legacy software is mature and robust, I am a full-time scientist and instructor. I cannot support old tools. It is an open source project, so you can always maintain and extend it as you wish.
        https://github.com/neurolabusc/MRIcron

      Apr 14, 2021  10:04 AM | KOUSTAV CHATTERJEE - INSTITUTE OF NEUROSCIENCES KOLKATA
      RE: error message in converting DICOM DWI data through dcm2niigui
      Could you please tell me what the following message means when converting with dcm2niigui?

      "Warning: for compatibility, converting UINT16->FLOAT32, range: 63064
      If you prefer file size over compatibility, edit your preference named UINT 16 to FLOAT32"

      • Apr 14, 2021  12:04 PM | Chris Rorden
        RE: error message in converting DICOM DWI data through dcm2niigui
        DICOM files can store images as 16-bit unsigned integers (range 0...65535). However, several NIfTI tools only support 16-bit signed integers (-32768..32767) or 32-bit floating point data (e.g. AFNI). Therefore, while my tools attempt to convert DICOM data losslessly, they face a dilemma with UINT16 data. There are two options:
         - Retain UINT16 datatype, and be aware that some tools may fail.
         - Promote UINT16 to FLOAT32, which requires twice the disk space and may be slower to process.

        You can edit the preferences to choose between these two options.
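        A tiny numpy illustration of the dilemma (63064 is the range reported in your warning):

        import numpy as np

        raw = np.array([0, 40000, 63064], dtype=np.uint16)  # valid UINT16 intensities
        as_int16 = raw.astype(np.int16)                     # overflows: 40000 wraps to -25536
        as_float32 = raw.astype(np.float32)                 # lossless, but doubles the file size
        print(as_int16, as_float32)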

  • Mar 10, 2021  12:03 AM | abdou - Stanford University School of Medicine
    Postdoctoral Fellow Position in Simultaneous Spinal Cord/Brain fMRI - Stanford University School of Medicine
    DESCRIPTION

    The Systems Neuroscience & Pain Laboratory at Stanford University (SNAPL) is actively recruiting a postdoctoral fellow who will join our research project on chronic pain & opioid addiction. Funded by the National Institute of Health and directed by neuroscience professors Sean Mackey and Gary Glover, our goal is to develop safe and effective chronic pain treatments & therapies by investigating corticospinal function via emerging technology known as simultaneous spinal cord/brain fMRI.
    Over 100 million Americans have been diagnosed with chronic pain, effectively setting the stage for nation-wide opioid addiction. This chronic pain epidemic, combined with opioid abuse, has been a major healthcare crisis costing over a trillion dollars annually and leading to thousands of American deaths. Ramifications of this public health crisis have been well documented by the Institute of Medicine's 2011 report Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research (co-authored by co-PI Sean Mackey) and more recently by the 2016 NIH/Health and Human Services National Pain Strategy (co-chaired by Dr Mackey).
    Our plan is to utilize pain biomarkers in order to characterize neurobiological mechanisms underlying chronic pain & opioid addiction and, ultimately, to derive new personalized interventions. Given abundant findings that fMRI of the brain can act as biomarker for neuropathic disorders, we propose utilization of our simultaneous spinal cord/brain fMRI to develop biomarkers of pain severity. Our research can be categorized as follows:
    1. CNS mechanisms of chronic pain and pain modulation,
    2. central sensitization and descending modulation,
    3. corticospinal biomarkers of chronic pain conditions,
    4. predictive models of pain resilience.
    When you join, not only will you have opportunity to collaborate with top neuroscientists and pain specialists at Stanford, but you will also have access to cutting-edge fMRI technology; to invent and develop your own related research. Multidisciplinary aspects of this project will allow you to explore diverse chronic pain topics ranging from biomarker design for degenerative conditions to heart rate variability as a simple biomarker for psychological disorders.
    We have a strong track record for successfully transitioning postdoctoral fellows to independent grant funding and faculty positions. We can offer NIH T32 training to select candidates. The ideal candidate is a motivated problem solver and innovator who has a neuroscience background, enjoys challenging the paradigms of contemporary research and experimentation, is proficient with computer-aided analysis, and is enthusiastic and passionate about fMRI acquisition.

    RESEARCH AREA

    - fMRI and behavioral data from Healthy Control and Fibromyalgia,
    - Novel analysis techniques for multiecho cardiac-gated fMRI, spinal cord/brain task/resting-state fMRI,
    - Experiment design for:
    1. temporal summation & central sensitization of pain,
    2. conditioned pain modulation,
    3. descending pain modulation & emotional reappraisal of pain,
    - Multivariate pattern analysis of spinal cord/brain data for corticospinal biomarker,
    - Simultaneous spinal cord/brain fMRI for multiple sclerosis, spinal cord injury & trauma, motor neuron disease.

    QUALIFICATION

    Applicants hold a PhD in a field of Science, Technology, Engineering, Mathematics, or Psychophysics. Experience in Neuroscience study design and data analysis is a must. Additional experience in any of these is a plus:
    1. Cognitive / Affective Neuroscience
    2. Conditioned Pain Modulation
    3. computational modeling / Machine Learning / Neural Networks / ICA
    4. chronic pain / opioid addiction / fibromyalgia
    5. MATLAB / Python / R / Linux / C
    6. SPM / AFNI / FSL / fMRIPrep

    APPLICATION MATERIALS

    Submit (1) CV, (2) NIH Biosketch, (3) Letter of Research Intent
    Find instructions, blank format pages, and sample biosketches here: https://grants.nih.gov/grants/forms/bios...
    Applicants follow non-fellowship templates. Letter of Intent template is available here: http://med.stanford.edu/content/dam/sm/p...
    For more information, visit http://snapl.stanford.edu
    Please email all materials to Dr. Christine Sze Wan Law: cslaw@stanford.edu

  • Jan 22, 2021  12:01 PM | Anyi Liu - UCL
    Display fNIRS optode locations on the rendered brain in MRIcron
    Hi neuroimaging experts,

    I ran into a question while using MRIcron to display my fNIRS data: how can I show the fNIRS optode locations (as bulbs) on the rendered brain in MRIcron? I want to have a 3D representation of where my optodes have been on the brain.

    I have in hand: 1) the optode coordinates in MNI and voxel space, and 2) a nifti file of a brain that is very similar in size to the one the optodes were placed on. 

    Has anyone done something like this on MRIcron before? Or is there any Github resources that can do this?


    Thanks a lot for your help in advance,

    Anyi

    • Jan 22, 2021  04:01 PM | Chris Rorden
      RE: Display fNIRS optode locations on the rendered brain in MRIcron
      I would suggest you use Surfice for this task. See the Loading Nodes script

      • Jan 23, 2021  01:01 PM | Anyi Liu - UCL
        RE: Display fNIRS optode locations on the rendered brain in MRIcron
        Hi Chris,

        Thanks a lot for your reply!
        I am on Surf ice now, although to show my MNI channels as nodes, I need to convert my txt file into a node file. 
        I am not too sure how to do the conversion - is it done with a few lines of code?

        Many thanks,
        Anyi

        • Jan 23, 2021  09:01 PM | Chris Rorden
          RE: Display fNIRS optode locations on the rendered brain in MRIcron
           You can load a text file saved as BrainNet Node format. You can do this with the Nodes/AddNodesOrEdges menu item or with a Python script (see the Scripting/Python/node script for an example). The node file format is a simple text file where each line reports the X,Y,Z coordinates of a node, its color/intensity, its size and a name. Each node is a separate line, and each line has these 6 properties:


          -9.631 28.620 33.320 1 1 L.superior.frontal.gyrus
          10.801 28.312 33.032 1 1 R.superior.frontal.gyrus
          -30.468 35.927 26.576 1 1 L.middle.frontal.gyrus
          30.734 37.800 25.642 1 1 R.middle.frontal.gyrus
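           If your channel positions live in a plain text file, a few lines of Python can rewrite them in this format; the filenames are placeholders, and every node here gets color 1 and size 1:

           with open('optodes.txt') as src, open('optodes.node', 'w') as dst:
               for line in src:
                   x, y, z, name = line.split()[:4]   # expects "x y z label" per line
                   # columns: X Y Z color size name
                   dst.write(f"{x} {y} {z} 1 1 {name}\n")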

          • Jan 24, 2021  04:01 PM | Anyi Liu - UCL
            RE: Display fNIRS optode locations on the rendered brain in MRIcron
             Thanks, I managed to plot the channels on the brain!

             However, I ran into another problem: the channels are not perfectly placed on the cortex, especially on the left-hand side, as shown in the picture. It might have been caused by the interpolation, which leaves the two sets of coordinates no longer registered one-to-one as before?

            Is there a way to plot the brain as 80% transparent so that the channels can be seen without adjusting the depth? 


            Many thanks,
            Anyi
            Attachment: Picture 1.png

            • Jan 24, 2021  07:01 PM | Chris Rorden
              RE: Display fNIRS optode locations on the rendered brain in MRIcron
               Looks like your normalization with the mesh is not great. You may want to create a custom mesh for each individual. Regardless, you will want to improve your normalization. This will depend on the tools you use (SPM, FSL, ANTS, AFNI), and is beyond the scope of this help list. You can adjust the transparency of a mesh with respect to the nodes by adjusting the overlay sliders. If you prefer Python scripts, the command gl.shaderxray will adjust this property:

              import gl
              gl.resetdefaults()
              gl.meshload('BrainMesh_ICBM152.lh.mz3')
              gl.edgeload('LPBA40.edge')
              gl.clipazimuthelevation(0.3, 0, 130)
              gl.nodesize(6, 1)
              gl.edgesize(3,1)
              gl.nodehemisphere(-1)
              gl.azimuthelevation(250, 35)
              gl.edgecolor('actc',1)
              gl.nodecolor('blue',1)
              gl.nodethresh(1.0,1.0)
              gl.edgethresh(0.5,1.0)
              gl.meshcurv()
              gl.overlayminmax(1,-1,1)
              gl.overlaycolorname(1,'surface')
              gl.overlayinvert(1,1)
              gl.overlaytranslucent(1, 1)
              gl.meshhemisphere(-1)
              gl.shaderxray(0.5, 0.5)

              • Feb 3, 2021  01:02 PM | Anyi Liu - UCL
                RE: Display fNIRS optode locations on the rendered brain in MRIcron
                Hi Chris,

                Thanks a lot for your help on Surf ice! I really like the software, it is really useful for what I am doing.

                 However, I came across another problem: how do I display two sets of nodes at the same time on the same mesh? I am trying to compare two optode registration methods.

                Many thanks,
                Anyi

                • Feb 3, 2021  01:02 PM | Chris Rorden
                  RE: Display fNIRS optode locations on the rendered brain in MRIcron
                  For questions regarding Surfice, please use the dedicated NITRC forums or create an issue on Github

                  The .node files use the simple BrainNet text format. You can use your favorite text editor or scripting language to concatenate two node files. The format lists six columns for each node:
                    Xmm Ymm Zmm Color Radius Name
                   You could also provide a different "color" for your two sets of nodes, for example:

                  -9.631 28.620 33.320 1 1 L.superior.frontal.gyrus
                  10.801 28.312 33.032 1 1 R.superior.frontal.gyrus
                  -30.468 35.927 26.576 1 1 L.middle.frontal.gyrus
                  30.734 37.800 25.642 1 1 R.middle.frontal.gyrus
                  ....
                  -10 28.620 33.320 2 1 L.superior.frontal.gyrus
                  11 28.312 33.032 2 1 R.superior.frontal.gyrus
                  -32 35.927 26.576 2 1 L.middle.frontal.gyrus
                  31 37.800 25.642 2 1 R.middle.frontal.gyrus
                  ....

                   When you then display the nodes in Surfice, you can differentiate the two sets by their color, as shown in the screenshot.
                  Attachment: nodes_by_color2.png

                  • Feb 3, 2021  03:02 PM | Anyi Liu - UCL
                    RE: Display fNIRS optode locations on the rendered brain in MRIcron
                    Amazing! I should have thought about this, thanks a lot! :)

                    Best wishes,
                    Anyi

  • Jan 6, 2021  11:01 AM | loutions
    Qform matrix in MRIcron and MRIcroGL
    Dear MRIcron experts,

    I have a set of 3D coordinates in 'lab-space' (obtained using a neuronavigation system) which I want to convert to voxel indices of the respective nifti.
    I tried opening the nifti file in MRIcron and check if the coordinates the neuronavigation system gave me made sense, which they do.
    To do the conversion, I just thought I'd load the nifti header into MATLAB, extract the qform, and get the voxel indices doing
    V = round(inv(Q)*C');
    where V are the voxel indices, Q is the qform matrix (which I got from the MRIcron), and C are the coordinates.
    (This didn't work, i.e. the returned indices do not match those in MRIcron, and I'm convinced the problem lies in the algebra - if a kind soul could help me, I'd be highly appreciative!)

    However, I opened the same image on MRIcroGL to check the header and the qform is different to that displayed in MRIcron (screenshots are in attachment). 
    The difference is not just the orientation of the axes (trivial rotations and permutations from the different conventions), but also in the qoffset, which is what puzzles me.
    I then displayed the qform matrix of the same nifti using different software packages (FSL's fslhd and FreeSurfer's mri_info - the results of which are also attached), and these are congruent with the matrix displayed in MRIcroGL.
    I thought it might have something to do with the center of the referential being poorly defined, but subtracting the c_ras from the mri_info output does not correct for the different qoffsets seen between MRIcron and everything else I tested.

    I was wondering if someone could explain to me this difference, or re-direct me to a place where I can read about it.
    The versions I'm using are:
    MRIcron v1.0.20190902
    MRIcroGL v1.2.20201102

    Thank you very much in advance!
    Ricardo
    Attachment: screenshots.png

    • Jan 6, 2021  12:01 PM | Chris Rorden
      RE: Qform matrix in MRIcron and MRIcroGL
      Your data is anisotropic and not aligned with canonical space, hence the 'resliced' seen in the MRIcroGL status bar. Note that MRIcron will also manipulate the header depending on your Preferences (e.g. you can choose whether MRIcron loads the raw data as stored on disk, orthogonally losslessly rotates the data to match the canonical NIfTI space, or resamples the data).

      Be aware that NIfTI stores two spatial transforms: the quaternion-based QForm (9 DoF) and the matrix-based SForm (12 DoF). When both are set, both MRIcron and MRIcroGL give precedence to the latter as described here:
        https://github.com/neurolabusc/blog/blob...
      For consistency across tools you might want to make sure the SForm and QForm are identical, unless a shear is required to represent your transform.

      In future, you may want to consider converting images from DICOM to NIfTI using dcm2niix, which will losslessly reorient 3D acquisitions to match canonical space (e.g. reslice your sagittal data to axial). Likewise, choosing an isotropic image resolution during acquisition may make life easier (in your case increasing the scan time by 20%).

      When using different systems that have different conventions for how they store data internally, I would suggest always thinking about the coordinates in world space mm, rather than voxel order as stored on disk.
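      To illustrate the algebra in world space, a minimal nibabel sketch mapping an mm coordinate to zero-based voxel indices (the filename and coordinate are placeholders; note the homogeneous 1, and that MATLAB/SPM voxel indices are one-based):

      import numpy as np
      import nibabel as nib

      img = nib.load('anat.nii')               # placeholder filename
      M = img.affine                           # nibabel returns its preferred (sform/qform) affine
      mm = np.array([10.0, -20.0, 30.0, 1.0])  # world-space coordinate in homogeneous form
      ijk = np.linalg.inv(M) @ mm              # zero-based voxel indices
      print(np.round(ijk[:3]).astype(int))     # add 1 for MATLAB/SPM's one-based convention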

  • Dec 10, 2020  12:12 AM | Jamal Williams - Princeton University
    Show all four sagittal images at once (medial LH, RH, and lateral LH, RH)
    I'm trying to get something like the plot below. Unfortunately in MRIcroGL I can't get a view like this. Furthermore, how does one remove substructures (e.g. cerebellum, brainstem, etc.) as in the plots below? Any help would be greatly appreciated!

    Image

    • Dec 10, 2020  12:12 PM | Chris Rorden
      RE: Show all four sagittal images at once (medial LH, RH, and lateral LH, RH)
      With MRIcroGL you could create two renderings. Choose a template without a cerebellum if you do not want it...

      import gl
      gl.resetdefaults()
      gl.backcolor(255, 255, 255)
      #open background image
      gl.loadimage('spm152')
      #open overlay: show positive regions
      gl.overlayload('spmMotor')
      gl.minmax(1, 4, 4)
      gl.opacity(1,50)
      #open overlay: show negative regions
      gl.overlayload('spmMotor')
      gl.minmax(2, -4, -4)
      gl.colorname (2,"3blue")
      gl.colorbarposition(0)
      #"a"xial, "c"oronal and "s"agittal "r"enderings
      gl.clipazimuthelevation(0.5, 90, 0)
      gl.mosaic("S R 0; S R -0");
      gl.savebmp("LH.png")
      gl.clipazimuthelevation(0.5, 270, 0)
      gl.mosaic("S R 0; S R -0");
      gl.savebmp("RH.png")
      Attachment: MRIcroGL.png

      Dec 11, 2020  01:12 PM | Chris Rorden
      RE: Show all four sagittal images at once (medial LH, RH, and lateral LH, RH)
      It appears like the image you want to mimic is based on a triangle-based mesh-renderer like Surfice, not a voxel-based volume renderer like MRIcroGL. Assuming your statistical maps are aligned to one of the popular mesh templates, you can create a similar image with surfice. This example script should work with the sample images that come with Surfice



      import gl
      gl.resetdefaults()
      gl.meshloadbilateral(0)
      gl.meshload('BrainMesh_ICBM152.rh.mz3')
      gl.overlayload('motor_4t95vol.nii.gz')
      gl.overlayminmax(1,2,12)
      gl.overlayload('motor_4t95vol.nii.gz')
      gl.overlayminmax(2,-1,-2)
      gl.colorbarvisible(1)
      gl.overlaytransparencyonbackground(25)
      gl.meshcurv()
      gl.azimuthelevation(90, 0)
      gl.savebmp('RH.png')
      gl.azimuthelevation(270, 0)
      gl.savebmp('RH2.png')
      #show left hemisphere
      gl.meshload('BrainMesh_ICBM152.lh.mz3')
      gl.overlayload('motor_4t95vol.nii.gz')
      gl.overlayminmax(1,2,12)
      gl.overlayload('motor_4t95vol.nii.gz')
      gl.overlayminmax(2,-1,-2)
      gl.colorbarvisible(1)
      gl.overlaytransparencyonbackground(25)
      gl.meshcurv()
      gl.azimuthelevation(270, 0)
      gl.savebmp('LH.png')
      gl.azimuthelevation(90, 0)
      gl.savebmp('LH2.png')

      Attachment: surfice.jpg

  • Nov 25, 2020  12:11 PM | golivec95 - Universitat de Barcelona
    Errors doing VLSM with Npm
    I'm unable to perform a VLSM using NPM between lesion masks and a continuous variable.
    After establishing the design with 18 binary masks and one continuous behavioral variable, I launch the analysis and I get the error:

    "Number of Lesion maps = 18
    Dimension 1 of C:\VLSM\lesion_masks\wtS08_mask.nii does not match C:\VLSM\lesion_masks\wTS02_mask.nii
    Problem with "C:\Users\UB\Desktop\GUILLEM\VLSM\lesion_masks\wtS08_mask.nii
    Error: File dimensions differ from mask."

    How can I fix this?

    Thank you!

    • Nov 25, 2020  01:11 PM | Chris Rorden
      RE: Errors doing VLSM with Npm
      I am happy that my legacy tools are robust and popular. I would encourage you to consider using NiiStat or PALM instead. All my old tools are open source, so you are free to extend, explore and maintain them. In general, all neuroimaging tools (SPM, FSL, NiiStat, PALM) expect images to be in spatial register with each other. Assuming your images are all normalized to the same template, you can reslice them to a common space using a simple script, nii_reslice_target.m
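      If you prefer Python to Matlab, nibabel offers a comparable resampling routine; a minimal sketch using the filenames from your error message (requires scipy; order=0 keeps a binary mask binary):

      import nibabel as nib
      from nibabel.processing import resample_from_to

      mask = nib.load('wtS08_mask.nii')       # lesion mask with mismatched dimensions
      target = nib.load('wTS02_mask.nii')     # image defining the desired grid
      resliced = resample_from_to(mask, target, order=0)  # nearest-neighbour interpolation
      nib.save(resliced, 'rwtS08_mask.nii')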

  • Dec 11, 2019  04:12 PM | Daria Porter
    MRIcron Access denied
    I am having issues with both the 2019 and the 2016 versions of MRIcron. I downloaded the versions for Linux. I cannot load any templates or SPM results, or change the color schemes. No matter what I try, I get the following error message:

    Access denied.
    Press OK to ignore and risk data corruption.
    Press Abort to kill the program.

    • Dec 11, 2019  04:12 PM | Chris Rorden
      RE: MRIcron Access denied
      Any details about your system? E.g. if you type 'uname -a' it should reveal something about your computer. It sounds like you do not have permission to read the files you are trying to open. To test this, do things work correctly if you run MRIcron as a super-user (e.g. launching it with su or sudo)? If this is your problem, you may want to download the folder as a single user and try running it as the user who downloaded the software. 

      Assuming your computer has a graphics card with driver installed, you may also want to try out the more modern MRIcroGL.

      • Dec 11, 2019  04:12 PM | Daria Porter
        RE: MRIcron Access denied
        Amazing! That was the issue --  if I open it as su everything works. Thanks tons!

        Oct 6, 2020  04:10 PM | Kristin Sandness - University of MN
        RE: MRIcron Access denied
        Hi there,

        I'm having the same issue with MRIcroGL but on a Windows 10 education (64-bit) machine. It's a shared device, but I downloaded under my profile to my personal desktop to run (double edit: this is happening with both the fall 2019 and March 2020 version of the software).

        The program had previously been working without this error message, I'm told, so I'm wondering if the network folks at the University pushed some sort of permissions change for apps.

        Edit: When I log-in as an administrator (and download and run the program as an admin), I get the error 'unable to load openGL 3.3 core', and the program doesn't load.

        So this may be a permissions issue, but doesn't seem to be an admin issue. Thoughts/do you know of a possible workaround here?

        Thanks!

        Kristin

        • Oct 7, 2020  12:10 PM | Chris Rorden
          RE: MRIcron Access denied
          Try the pre-release that only requires OpenGL 2.1 (from 2006) https://github.com/rordenlab/MRIcroGL12/releases/tag/v1.2.20200707

          You may also want to see if your graphics drivers can be updated, OpenGL 3.3 was released in 2009. Very old hardware may have issues with modern, high resolution images. A high-end desktop graphics card from 2009 is less powerful than the graphics card in a modern smart phone.

          • Oct 7, 2020  02:10 PM | Kristin Sandness - University of MN
            RE: MRIcron Access denied
            Thanks, Chris - will do! And I'll see what options we have re the graphics card.

            The odd thing is, the newer release was working on this device up until August 2020, so presumably the graphics card could handle the program prior.

            Thanks again for the quick response!

            • Oct 7, 2020  06:10 PM | Kristin Sandness - University of MN
              RE: MRIcron Access denied
              Update - with the older software we get the error 'unable to load openGL 2.1'. I'm guessing there's something crazy going on with our device, but if you have any insights from the software side, Chris, let me know! Thanks again -

              Kristin

  • Sep 17, 2020  01:09 PM | antoninro
    Display issue with MRICron: image stretched z-direction
    Hello!

    I am reading a nifti file obtained using dcm2niix from a set of 3DT1 images from a GE scanner.
    Dcm2niix doesn't seem to produce any error or warning (even in verbose mode).
    However, the image seems stretched in the z-direction, which is also the slice direction (i.e. the 3DT1 was acquired in axial mode).

    Note that opening the exact same nifti file with another viewer such as FSLEyes goes perfectly fine... by clicking "Info" I get the same slice count (about 300) and dim3 size (0.5 mm) as in MRIcron.

    Tested with MRIcron, latest version on MacOS and Linux (Ubuntu). Conversion done on Ubuntu with latest version of dcm2niix.

    Please find a screenshot attached to this message.

    Best

    Antonin Rovai

    • Sep 17, 2020  01:09 PM | Chris Rorden
      RE: Display issue with MRICron: image stretched z-direction
      Sounds like a variation of issue 386 https://github.com/rordenlab/dcm2niix/issues/386, e.g. GE's interpolation of 3D datasets. Note that both MRIcron and FSLeyes use dcm2niix to convert DICOM images, so this reflects which version of dcm2niix you are using. The latest developmental release (and upcoming stable release) should fix this. You can always put the developmental release of dcm2niix in your MRIcron Resources folder to use it instead of the version that ships with your software.

      If this does not resolve your issue,  please post an issue on Github.

      For future acquisitions, I strongly encourage disabling the interpolation on the console. Interpolation requires more disk space and will disrupt some post processing (mrdegibbs).

      Sep 17, 2020  01:09 PM | antoninro
      RE: Display issue with MRICron: image stretched z-direction
      Hello,

      I used the same version of dcm2niix for both images because I directly opened the nifti in FSLEyes and MRIcron. Dcm2niix was used before that, in command line mode, not using any GUI.

      At any rate:
      - I'll wait for the next release
      - I'll disable interpolation on the console

      Thanks
      Antonin

      • Sep 17, 2020  02:09 PM | Chris Rorden
        RE: Display issue with MRICron: image stretched z-direction
        Oh, sounds like it is just an anisotropic image. Did you try going to MRIcron's Help/Preferences window and choosing "Reorient images when loading" and reloading the image? You may also want to try MRIcroGL which is able to leverage modern hardware.

  • Sep 2, 2020  09:09 AM | Ardalan Aarabi
    discrepancy between the results of MRIcron V2012 and V2019
    Hi everyone,

    In MRIcron, I tried to compute the overlap between two 3D images: one is natbrainlab.nii (an atlas) and the other is a binary mask. When I use the older version of MRIcron (2012), I find the same results as those I obtained using the Matlab code that I wrote; in other words, I find the exact number of voxels falling into each region of the atlas. But when I use the latest version of MRIcron (2019), the number of overlapping voxels per region is larger than what I get using the older version, as shown in the attached file. Any ideas?


    Need your kind help
    Ardalan
    Attachment: Doc6.docx

    • Sep 2, 2020  10:09 AM | Chris Rorden
      RE: discrepancy between the results of MRIcron V2012 and V2019
      1. There is not a one-to-one correspondence between the voxels in one image and the other. Therefore, the choice of which image to warp and the choice of interpolation method (nearest neighbor, linear, sinc, etc.) will impact your results. This explains why tools like FSL and SPM require image masks to have a perfect correspondence to the image they are applied to. You can see this difference by choosing different "interp" reslicing options with SPM
        https://github.com/rordenlab/spmScripts/blob/master/nii_reslice_target.m

      2. Since your image is not orthogonal to the image plane, it will load a little differently as a background image depending on whether you set the Preferences to reorient images when loading, which will reslice the data.


      fslhd threshZ4V_5VD-qzhoActiv.nii
      ...
      dim1 157
      dim2 189
      dim3 136
      sform_code 2
      sto_xyz:1 -1.000000 -0.000159 0.000962 77.945053
      sto_xyz:2 -0.000160 0.999999 -0.001155 -111.876495
      sto_xyz:3 0.000962 0.001155 0.999999 -50.143166
      sto_xyz:4 0.000000 0.000000 0.000000 1.000000

      fslhd natbrainlab.nii
      ...
      dim1 182
      dim2 218
      dim3 182
      sto_xyz:1 -1.000000 0.000000 0.000000 90.000000
      sto_xyz:2 0.000000 1.000000 0.000000 -126.000000
      sto_xyz:3 0.000000 0.000000 1.000000 -72.000000
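
      To see how much the interpolation choice alone changes voxel counts, here is a minimal nibabel/numpy sketch (filenames as above; an illustration only, not the exact code MRIcron uses, and it requires scipy for the resampling) that reslices the mask onto the atlas grid with nearest-neighbour versus trilinear interpolation:

      import numpy as np
      import nibabel as nib
      from nibabel.processing import resample_from_to

      atlas = nib.load('natbrainlab.nii')
      mask = nib.load('threshZ4V_5VD-qzhoActiv.nii')
      # reslice the mask onto the atlas grid with two interpolation schemes
      nn = resample_from_to(mask, atlas, order=0)   # nearest neighbour
      tri = resample_from_to(mask, atlas, order=1)  # trilinear
      print('non-zero voxels, nearest neighbour:', np.count_nonzero(nn.get_fdata()))
      print('non-zero voxels, trilinear:        ', np.count_nonzero(tri.get_fdata()))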

      • Sep 2, 2020  01:09 PM | Ardalan Aarabi
        RE: discrepancy between the results of MRIcron V2012 and V2019
        Thanks. I resliced it with different interpolation methods, and the results obtained with the latest version are still different. Moreover, when I compute the total number of voxels listed in the descriptive statistics, with the 2012 version I get 1025 voxels (the same number I get with Matlab), but this differs from V2019 (1149, please see the attachment); there are some additional voxels which do not exist in the binary file (threshZ4V_5VD-qzhoActiv.nii).

        Regards,
        Ardalan
        Attachment: 2012.docx

        • Sep 2, 2020  02:09 PM | Chris Rorden
          RE: discrepancy between the results of MRIcron V2012 and V2019
          Your atlas is not orthogonal to the image plane, as can be seen by the angulation:

          fslhd natbrainlab.nii
            sto_xyz:1 -1.000000 -0.000159 0.000962 77.945053
            sto_xyz:2 -0.000160 0.999999 -0.001155 -111.876495
            sto_xyz:3 0.000962 0.001155 0.999999 -50.143166

          In MRIcron go to Preferences and turn off "Reorient images when loading" to load in native space rather than reslicing. Open your continuous image that has been resliced to match the template
            https://github.com/rordenlab/spmScripts/blob/b0d8001ae97e28e662f2052751669bdb7e4bc68e/nii_reslice_target.m
          Use MRIcron to compute volumes:

          Index Name numVox numVoxNotZero fracNotZero
          0 0 3628098 624 0.000
          1 1 4875 23 0.005
          6 6 46399 2 0.000
          7 7 6672 2 0.000
          8 8 27876 412 0.015
          12 12 11032 55 0.005
          13 13 10078 92 0.009
          16 16 7310 36 0.005
          101 101 5298 89 0.017
          102 102 8372 24 0.003
          103 103 391 14 0.036
          107 107 5071 30 0.006
          108 108 22061 488 0.022
          111 111 13929 1 0.000
          112 112 12566 514 0.041
          113 113 17470 144 0.008
          116 116 7525 338 0.045

          Do the same with Matlab (see attached script)

          116 regions in /Users/chris/Downloads/tst/natbrainlab.nii,1
          region nZero nPos nNeg nNotFinite mn
          0 3627474 0 624 0 -0.00024788
          1 4852 0 23 0 -0.00107998
          6 46397 0 2 0 -2.99442e-07
          7 6670 0 2 0 -3.63876e-08
          8 27464 0 412 0 -0.0208179
          12 10977 0 55 0 -0.00367871
          13 9986 0 92 0 -0.00782235
          16 7274 0 36 0 -0.00378585
          101 5209 0 89 0 -0.0358616
          102 8348 0 24 0 -0.001553
          103 377 0 14 0 -0.0221525
          107 5041 0 30 0 -0.0110846
          108 21573 0 488 0 -0.0314722
          111 13928 0 1 0 -5.84004e-07
          112 12052 0 514 0 -0.0757793
          113 17326 0 144 0 -0.0103095
          116 7187 0 338 0 -0.102179

          Note the results are the same.
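
          The same kind of per-region count can also be done in Python with numpy/nibabel (a minimal sketch, not the attached Matlab script, and assuming both images have already been resliced onto the same voxel grid):

          import nibabel as nib
          import numpy as np

          atlas = nib.load('natbrainlab.nii').get_fdata().round().astype(int)
          mask = nib.load('threshZ4V_5VD-qzhoActiv.nii').get_fdata()
          assert atlas.shape == mask.shape, 'images must share the same voxel grid'
          for region in np.unique(atlas):
              in_region = (atlas == region)
              # region index, region size, and number of non-zero mask voxels in the region
              print(region, in_region.sum(), np.count_nonzero(mask[in_region]))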
          Attachment: roi_count.zip

  • Aug 17, 2020  04:08 AM | kchat
    Unable to use MRIcron in Linux machine
    Hi everyone,
    I have installed Ubuntu 16.04 LTS on my Linux machine, which has no dedicated graphics card. I have downloaded different versions of MRIcron for Linux through the NITRC webpage; however, dcm2niigui is non-functional.
    Interestingly, the Windows version of MRIcron can be downloaded and its dcm2niigui works.
    Need your kind guidance.

    • Aug 17, 2020  10:08 AM | Chris Rorden
      RE: Unable to use MRIcron in Linux machine
      Can I suggest you install the latest version of MRIcron (currently 1.0.20190902) and use the included dcm2niix. You can use dcm2niix from the command line or from MRIcron's graphical user interface (the Import/ConvertDicomToNifti menu item). dcm2niix has replaced the older dcm2nii/dcm2niiGui. While I hope my legacy tools are stable and mature, the DICOM standard has changed a lot since development switched, so I would be cautious about using the older tool for newer images (e.g. enhanced DICOM). 

      You can always update to the latest version of dcm2niix by getting a copy from Github. For Linux, this one-liner should work:
        curl -fLO https://github.com/rordenlab/dcm2niix/re...

      All my code is open source, so if you prefer a legacy tool, you can always download the code and extend or improve it in any way you want.
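
      If you prefer to script the conversion, a minimal Python sketch (assuming dcm2niix is on your PATH; the folder names are placeholders) is:

      import subprocess
      # -z y : write compressed .nii.gz, -o : output folder
      subprocess.run(['dcm2niix', '-z', 'y', '-o', '/path/to/output', '/path/to/dicom_folder'],
                     check=True)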

  • Aug 13, 2020  03:08 PM | rossisl - NIH/NIA
    datasets generated with AFNI
    Hello, I have second level analysis output from AFNI (2 sub-brick nifti files, 1 = mean, 2 = z-score) that I cannot figure out how to properly threshold in MRIcron. AFNI displays both positive and negative Fc in the same file so I am thinking this is where I am running into a problem? Attached is a screen shot where I import the overlay and the threshold is determined automatically. I also tried masking the data to the template to remove any confounding "zero voxels" outside the brain. Here is the output from 3dinfo for the file I opened: 

    Number of values stored at each pixel = 2
    -- At sub-brick #0 'SetA_mean' datum type is byte: 0 to 255 [internal]
    [* 0.003159] 0 to 0.805545 [scaled]
    -- At sub-brick #1 'SetA_Zscr' datum type is byte: 0 to 255 [internal]
    [* 0.0509804] 0 to 13 [scaled]
    statcode = fizt 

    Any help/suggestions would be much appreciated.

    Thanks, Sharyn

    • Aug 13, 2020  03:08 PM | Chris Rorden
      RE: datasets generated with AFNI
      If you want to independently control the negative values and the positive values for a single overlay, you should just load the image twice. You can then set the positive and negative intensities independently. You can use the same principle for MRIcron or MRIcroGL. Since MRIcroGL allows full scripting, I prefer it for exposition. Here is the sample script you see if you choose the Scripting/Templates/Basic. Note it loads the same overlay "spmMotor" twice.
       1. For the first loading, it hides values darker than +4 and clips values greater than +4 to +4; it also sets this layer to be 50% translucent.
       2. For the second loading, it hides values brighter than -4, and clips values darker than -4 to -4. It sets this overlay to have the blue color scheme:



      import gl
      gl.resetdefaults()
      #open background image
      gl.loadimage('spm152')
      #open overlay: show positive regions
      gl.overlayload('spmMotor')
      gl.minmax(1, 4, 4)
      gl.opacity(1,50)
      #open overlay: show negative regions
      gl.overlayload('spmMotor')
      gl.minmax(2, -4, -4)
      gl.colorname (2,"3blue")
      #gl.orthoviewmm(37,-14,47)

  • Aug 6, 2020  09:08 AM | Jose Angel Pineda Pardo
    Save multislice/mosaic representation from commandline
    Hello,
    is there any way to take advantage of the batch commands, save the created representation from the command line, and afterwards close MRIcron, so I can run it across a large dataset?

    Many thanks!
    José

    • Aug 6, 2020  01:08 PM | Chris Rorden
      RE: Save multislice/mosaic representation from commandline
      You should upgrade to MRIcroGL. It has full Python scripting that you can run from the user interface or call from Matlab/bash scripts/Python, etc. The Scripting/Templates/mosaic menu item will run the following script and generate the attached image:


      import gl
      gl.resetdefaults()
      #open background image
      gl.loadimage('spm152')
      #open overlay: show positive regions
      gl.overlayload('spmMotor')
      gl.minmax(1, 4, 4)
      gl.opacity(1,50)
      #open overlay: show negative regions
      gl.overlayload('spmMotor')
      gl.minmax(2, -4, -4)
      gl.colorname (2,"3blue")
      gl.mosaic("A L+ H -0.2 -24 -16 16 40; 48 56 S X R 0");
      Attachment: gl.png
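
      To batch this across a dataset, one approach (a sketch only: it assumes your MRIcroGL build accepts a script filename as a command-line argument, and that gl.savebmp/gl.quit are available in your scripting version; check the Scripting help to confirm) is to write one small script per subject from Python and launch MRIcroGL on each:

      import subprocess, pathlib

      script = pathlib.Path('mosaic_subj01.py')             # placeholder script name
      script.write_text(
          "import gl\n"
          "gl.resetdefaults()\n"
          "gl.loadimage('/path/to/subj01_T1.nii.gz')\n"     # placeholder image
          "gl.mosaic('A L+ H -0.2 -24 -16 16 40; 48 56 S X R 0')\n"
          "gl.savebmp('/path/to/subj01_mosaic.png')\n"      # assumed: save the rendered mosaic
          "gl.quit()\n"                                     # assumed: close MRIcroGL when done
      )
      subprocess.run(['MRIcroGL', str(script)], check=True)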

  • Jun 30, 2020  01:06 PM | helma
    problem with installing and opening MrIcron
    Hi,

    I am trying to install the software to convert my DICOM files to NIfTI. This message popped up: "MRIcron" can't be opened because Apple cannot check it for malicious software. I would appreciate it if you could help me with that.

    • Jun 30, 2020  02:06 PM | Chris Rorden
      RE: problem with installing and opening MrIcron
      1.) First of all, can you confirm that you installed using the mricron_macOS.dmg (v1.0.20190902) available at one of these sites:
        https://www.nitrc.org/projects/mricron
        https://github.com/neurolabusc/MRIcron/releases
      these should be notarized with Apple and therefore should work if you directly download and install them.

      2.) If those files do not work, have you tried installing (v1.2.20200331)?
        https://github.com/rordenlab/MRIcroGL12/releases

      3.) If both options fail, does going to AppleMenu/SystemsPreferences/Security&Privacy provide an option for you to run this.

      4.) If all of the above fail, can you tell me the version of MacOS you are using.

  • Jun 9, 2020  08:06 AM | Raffaele Cacciaglia
    LUT colormap
    Dear Chris and MRIcron users, 

    I'd like to know whether it is possible to import a .clut color scheme from SurfIce into MRIcron.
    Specifically, I need to apply the color-scheme cwp_videen.clut (in SurfIce under /Resources/lut/unused) to MRIcron overlays.
    I copied the file to MRIcron /lut sub-directory and changed it from *.clut to *.lut.
    Even though it appears as an available color scheme in MRIcron's drop-down menu, it doesn't work as expected. 
    Is there a way to fix this? 

    Many thanks!

    Best regards, 
    Raffaele

    • Jun 9, 2020  01:06 PM | Chris Rorden
      RE: LUT colormap
      The following Matlab script can convert any Surfice/MRIcroGL clut file (which interpolates Red/Green/Blue/Alpha values between nodes) to a MRIcron/ImageJ .lut format (which stores 256 explicit Red/Green/Blue values).
      • You can find more ImageJ/MRIcron color lookup tables here.
      • Here is a simple LUT maker for Windows.
      • See here for more Matlab code. 
      • Here are tools that help make color maps with sensible luminance gradients.
      • I would suggest Viridis and Plasma which help those with the most common form of color blindness.

      function clut2lut(clut)
      %Convert Surfice/MRIcroGL .clut color table to MRIcron/ImageJ .lut format
      % clut : filename to convert
      %Examples
      % clut2lut('cwp_videen.clut')

      if ~exist('clut','var')
          [A,Apth] = uigetfile({'*.clut;';'*.*'},'Select MRIcroGL/Surfice color table');
          clut = [Apth, A];
      end
      if ~exist(clut, 'file'), error('Unable to find %s', clut); end
      txt = fileread(clut);
      txt = splitlines(txt);
      %fid = fopen(clut);
      %txt = textscan(fid,'%s');
      %fclose(fid);
      numnodes = parseKeySub(txt, 'numnodes=');
      inten = zeros(numnodes,1);
      rgba = zeros(numnodes,4);
      %read node intensities (0..255) and RGBA values
      for i = 1: numnodes
          inten(i) = parseKeySub(txt, sprintf('nodeintensity%d=',i-1));
          rgba(i,:) = parseKeySub(txt, sprintf('nodergba%d=',i-1));
      end
      lut = zeros(256,3);
      %linearly interpolate RGB between consecutive nodes
      for i = 1: numnodes-1
          intenLo = inten(i);
          intenHi = inten(i+1);
          rgbLo = rgba(i, 1:3);
          rgbHi = rgba(i+1, 1:3);
          for j = intenLo : intenHi
              frac = (j - intenLo)/(intenHi-intenLo);
              lut(j+1,:) = round((1.0-frac)*rgbLo + frac * rgbHi);
          end
      end
      %save lut format
      [p,n] = fileparts(clut);
      fnm = fullfile(p,[n,'.lut']);
      fid = fopen(fnm,'wb');
      fwrite(fid,lut,'uchar');
      fclose(fid);

      function vals = parseKeySub(txt, key)
      idx = find(contains(txt,key));
      if isempty(idx), error('Unable to find %s', key); end
      str = txt{idx(1)};
      str = str(length(key)+1:end);
      str = strsplit(str,'|');
      vals=str2double(str);
      %end parseKeySub()
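
      The resulting .lut is just 768 raw bytes (256 reds, then 256 greens, then 256 blues), so you can inspect the converted table with a few lines of Python (a sketch assuming numpy; the filename refers to the attached table):

      import numpy as np
      raw = np.fromfile('cwp_videen.lut', dtype=np.uint8)
      assert raw.size == 768, 'expected 256 red + 256 green + 256 blue bytes'
      rgb = raw.reshape(3, 256).T   # rows 0..255, columns R, G, B
      print(rgb[:5])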
      Attachment: cwp_videen.lut

      • Jun 9, 2020  03:06 PM | Raffaele Cacciaglia
        RE: LUT colormap
        Dear Chris,

        Thank you very much for the thorough response and all the mentioned resources! 
        I've just downloaded the color scheme you posted, it works perfectly.

        All the best, 
        Raffaele

  • Jun 5, 2020  10:06 PM | Alexander Cohen - Boston Children's Hospital
    Going the other way: Converting from nii.gz to DICOM
    Hi Chris and anyone else on here:
    Are there any efforts in place (or should I go ahead and create one) for a converter that can take xxx.nii.gz + xxx.json files and produce DICOM files that would be readable by clinical software/PACS systems?

    I know there is this: https://github.com/biolab-unige/nifti2dicom (but it has not been updated in years) and Slicer will take nii.gz files to DICOM, but I'm interested in whether there is a tool that could also take BIDS json data and/or additional metadata to repopulate a DICOM header.

    Thanks!

    -Alex Cohen

  • May 26, 2020  06:05 AM | Mavis Zhang
    Could I use Draw/Descriptive to measure ROI?
    Hi,
      I am working with several NIfTI images. I was told that I could measure them with MRIcron. However, after I drew one ROI region, the output consisted of three lines.
      Could I use Draw/Descriptive to measure an ROI? Thanks for your answer!
    Attachment: MRicron.docx

    • May 26, 2020  03:05 PM | Chris Rorden
      RE: Could I use Draw/Descriptive to measure ROI?
      There are three descriptive lines:
       All voxels ("VOI")
       All non-zero voxels ("VOI <>0")
       All voxels with positive intensity ("VOI >0")

      For each, you get the number of voxels (nvox), cubic centimeters (cc), minimum (min), mean (mean) and maximum (max) and standard deviation (SD) for the voxel intensities under the drawing...

      Overlay /Users/chris/src/NiiStat/roi/AICHA.nii
      Center of mass XYZ 90.35x108.06x87.24
      VOI nvox(cc)=min/mean/max=SD: 1153456 1153.46 =8.0000 88.3904 184.0000 = 19.5849
      VOI <>0 nvox(cc)=min/mean/max=SD: 1153456 1153.46 =8.0000 88.3904 184.0000 = 19.5849
      VOI >0 nvox(cc)=min/mean/max=SD: 1153456 1153.46 =8.0000 88.3904 184.0000 = 19.5849
      Mode: 86.00146484375
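
      The same numbers can be reproduced outside MRIcron with nibabel/numpy (a minimal sketch, assuming the drawing has been saved as a NIfTI image on the same grid as the image it was drawn on; 'drawing.nii' is a placeholder name):

      import nibabel as nib
      import numpy as np

      img = nib.load('AICHA.nii')                     # image sampled under the drawing
      voi = nib.load('drawing.nii').get_fdata() > 0   # placeholder name for the saved drawing
      vals = img.get_fdata()[voi]
      cc_per_voxel = np.prod(img.header.get_zooms()[:3]) / 1000.0   # mm^3 to cc
      for label, v in (('VOI', vals), ('VOI <>0', vals[vals != 0]), ('VOI >0', vals[vals > 0])):
          print(label, v.size, v.size * cc_per_voxel, v.min(), v.mean(), v.max(), v.std())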

      • May 28, 2020  01:05 AM | Mavis Zhang
        RE: Could I use Draw/Descriptive to measure ROI?
        Thanks a lot! I get your point!
        Originally posted by Chris Rorden:
        There are three descriptive lines:
         All voxels ("VOI")
         All non-zero voxels ("VOI <>0")
         All voxels with positive intensity ("VOI >0")

        For each, you get the number of voxels (nvox), cubic centimeters (cc), minimum (min), mean (mean) and maximum (max) and standard deviation (SD) for the voxel intensities under the drawing...

        Overlay /Users/chris/src/NiiStat/roi/AICHA.nii
        Center of mass XYZ 90.35x108.06x87.24
        VOI nvox(cc)=min/mean/max=SD: 1153456 1153.46 =8.0000 88.3904 184.0000 = 19.5849
        VOI <>0 nvox(cc)=min/mean/max=SD: 1153456 1153.46 =8.0000 88.3904 184.0000 = 19.5849
        VOI >0 nvox(cc)=min/mean/max=SD: 1153456 1153.46 =8.0000 88.3904 184.0000 = 19.5849
        Mode: 86.00146484375

  • Mar 11, 2020  03:03 PM | Luca Cuffaro - University of East London, NeuroRehabiliation Unit
    Measurement of entire volume brain and Roi
    Hi!

    I am working retrospectively on MRI scans of stroke patients for a neurorehabilitation project. These images come from the clinical admission/discharge of these patients, so I don't have nice series (e.g. only the axial and sagittal are T1, and none are MP-RAGE) to merge all planes and perform an efficient normalization/segmentation. I have mapped the lesion and created VOIs using just the best axial series I have for each of them.

    1. How can I calculate the entire brain volume using MRIcron? (I cannot use MRIcroGL because this PC has an old graphics card.)
    2. Following Draw -> Descriptive for VOIs in MRIcron, the results shown are already given in cc, aren't they?

    Thanks for every reply!
    Luca

    • May 22, 2020  06:05 PM | Chris Rorden
      RE: Measurement of entire volume brain and Roi
      1. I would use a tool like SPM's segmentation to separate the brain from other tissue and to segment the different tissue types within the brain. This generates, for each voxel, probability maps from 0..1 (0%..100%) for gray matter, white matter and CSF. Summing each probability map (and multiplying by the voxel volume) reveals the volume of each tissue type. This is better than a binary classification of each voxel, as it solves the partial volume problem (e.g. voxels that are partly gray and partly white matter).

      2. For a drawing in MRIcron, choose Draw/Descriptive to see the volume of your region. For example

      Center of mass XYZ 106.00x126.00x106.00
      VOI nvox(cc)=min/mean/max=SD: 33401 33.40 =25.0000 96.8260 120.0000 = 23.1273
      suggests your drawing subsumes 33401 voxels, or 33.4 cubic centimeters
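
      For point 1, the tissue volume can be computed from an SPM probability map with a few lines of Python (a sketch assuming nibabel; 'c1T1.nii' follows SPM's usual gray-matter output naming, but check your own filenames):

      import nibabel as nib
      import numpy as np

      gm = nib.load('c1T1.nii')                  # SPM gray matter probability map
      voxel_mm3 = np.prod(gm.header.get_zooms()[:3])
      # sum of probabilities x voxel volume = tissue volume (handles partial volume)
      vol_cc = gm.get_fdata().sum() * voxel_mm3 / 1000.0
      print('gray matter volume: %.1f cc' % vol_cc)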

  • May 20, 2020  02:05 PM | Lukas Van Oudenhove
    JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates
    Dear Chris,

    I found these files, referring to an atlas that looks interesting to me, under my MRIcron folder above; but unlike the JHU white matter atlas, it is not available as a template from the MRIcron menu.

    Could you please let me know which atlas this is (I presume it is another Johns Hopkins Uni atlas, but can't seem to find the details) and provide me with a reference for it so I can potentially use it?

    Thanks very much in advance.

    Best wishes,

    Lukas

    • May 20, 2020  05:05 PM | Chris Rorden
      RE: JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates
      JHU_MNI_SS_BPM_TypeII_ver2.1
       https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3358461/

      I would contact the first author for more details. The rough notes I have for this are here:
        https://github.com/neurolabusc/NiiStat/blob/master/roi/read.me

      The atlas has a single voxel that is clearly wrong: it is labelled as 'corticospinal tract right' but is located in the left hemisphere (at the boundary of PLIC_L and GP_L). When I next release the software I will relabel this voxel.

      102|CST_R|corticospinal tract right|2
      133|PLIC_L|posterior limb of internal capsule left|2
      81|GP_L|globus pallidus left|1

      • May 22, 2020  07:05 AM | Lukas Van Oudenhove
        RE: JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates
        Thanks a lot for the prompt and clear response Chris!

        Best wishes,

        Lukas

        May 22, 2020  02:05 PM | Lukas Van Oudenhove
        RE: JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates
        Hi again Chris,

        Sorry having to bother you again!

        When overlaying on the ch2better template in MRIcron, I noticed that it does not seem to be well-aligned with the MNI template (see attached screenshot).

        This may be due to what I found in the README you sent me the GitHub link for: "Data warped to match stroke control dataset (ventricles larger and some cortical atrophy relative to young adults)".

        Hence, I guess it would not be recommended to use this particular version of the atlas in studies on healthy adults?!

        However, based on the readme I get the impression that there may also be an MNI space version of the atlas which is not warped to a stroke control dataset? Any idea how we could retrieve it?

        I was also a little confused by the paper you sent, as it compares different denoising pipelines but does not seem to specifically mention the atlas?
        Do you have any other references or contact info?

        Thanks a lot in advance again!

        Best wishes,

        Lukas

      May 22, 2020  05:05 PM | Chris Rorden
      RE: JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates
      As noted from the article where this image originated, the authors used SPM's segmentation-normalization
        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3358461/
      As has been noted, when the MNI template was created, the algorithm used led to the template being larger than average. In contrast, SPM normalizes to an average-sized brain. This can be seen nicely in Figure 1 of Horn et al. (2017)
        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6050588/
      Therefore, if you used an MNI template for your normalization (FSL, ANTS) you would probably want to take this into account when using this template, as it is based on SPM normalization.

      While I am happy that my MRIcron software has proved popular and robust, if your hardware can support it I would encourage you to try out MRIcroGL.
      https://www.nitrc.org/projects/mricrogl
      With MRIcroGL, if you choose the File/OpenStandard menu, you will see two templates: mni152 and spm152. The difference between these images is whether they use the larger MNI size or the more accurate SPM size.

      You really want to choose your atlas to match your normalization procedures. In my own work with older stroke patients, I use templates that have larger ventricles to match the typical normalization seen for these people who are older than the sample used for the MNI dataset.

      • May 22, 2020  06:05 PM | Lukas Van Oudenhove
        RE: JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates
        Thanks a ton again Chris, very informative!

        I am already using MRIcroGL but had never noticed the choice between spm152 and mni152, nor did I know about this difference!

        If I understand the last part of your message correctly, the JHU.nii file provided with MRIcron is simply normalized based on the spm instead of MNI template (thanks for that paper), but not adapted/normalized by your group for use in an elderly/stroke population, and can hence be used in any study where SPM normalization is done?

        Best wishes,

        Lukas

  • May 4, 2020  07:05 PM | Tal Geffen
    second level covariates: ANCOVA with 2 covariates/ binaric covariate/ statistical test
    Hello,

    I want to ask three questions, relates to second level covariates.


    1. In case I would like to perform ANCOVA, with 2 covariates (that I would like to control, both are continuous like age and IQ ), with 2 groups (control/ Parkinson), and want to compare the connectivity between the groups, for example:

    control: 11110000
    Parkinson: 00001111
    age: 25 36 37 38 38 39 30 50
    IQ: 98 100 32 93 89 120 90 130

    To see whether the connectivity is higher in the experimental group, I will use this contrast: [-1 1 0 0]
    Is this right? Is it the same when adding more continuous covariates, i.e. [-1 1 0 0 0] and so on?

    2.Binary covariate:


    In case I would like to use a covariate (in order to control it) that is binary (such as the groups), do I use the covariates in the same way? E.g., if I want to see the effect of IQ and control for my two groups:

    control: 00001111
    Parkinson: 11110000
    IQ: 98 100 32 93 89 120 90 130

    contrast: [0 0 1]
    Is this right, or is it different in the case of binary covariates?

    3. Statistical test:


    In the case I define those contrast:

    control: 11110000
    Parkinson: 00001111
    age: 25 36 37 38 38 39 30 50

    and test this contrast: [-1 1 0] (Parkinson > control),
    I get a t-test value as a result (T(53) > x). Why don't I get an F value? Isn't this actually an ANCOVA test?

    Thank you very much for the answers!
    Tal

    • May 4, 2020  08:05 PM | Chris Rorden
      RE: second level covariates: ANCOVA with 2 covariates/ binaric covariate/ statistical test
      Tal-

      MRIcron just helps you draw the lesion. You will want to use a different tool for statistics. For ANCOVA, you would want to use VLSM, and should direct your questions to that group https://aphasialab.org/vlsm/

      Another option would be to conduct a permutation-thresholded Freedman-Lane analysis using NiiStat
        https://www.nitrc.org/plugins/mwiki/index.php/niistat:MainPage
      For this, your Excel file would look like this

      ID PD AGE IQ
      C1 0 25 98
      C2 0 36 92
      C3 0 37 99
      C4 0 38 102
      P1 1 39 104
      P2 1 30 100
      P3 1 50 98
      P4 1 33 96

      And your analysis would be a [1 0 0] analysis. Since NiiStat is designed for lesion work, we tend to assume that most brain injury impairs behavior, so we conduct a one-tailed test. Based on the distribution of the data, the two tails may not be symmetrical, so you can have different scores for the positive and the negative ends. I am a huge fan of permutation thresholding; it solves a lot of ills in neuroimaging.

      By the way, NiiStat should report Z-scores not F or T scores. With F and T scores, you need to know the degrees of freedom to interpret them. Computers are extremely efficient at transforming F/T scores to z-scores. Humans are much less good at this. So I let the computers do the hard work of converting the statistical values to this intuitive metric.
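
      For illustration, converting a t score to a z score with the same tail probability takes one line with scipy (a sketch only, not NiiStat's internal code; the t value and degrees of freedom below are placeholders):

      from scipy import stats
      t_value, df = 3.2, 53
      # z score whose one-tailed p matches the t distribution's tail probability
      z = stats.norm.isf(stats.t.sf(t_value, df))
      print(z)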

      • May 5, 2020  10:05 AM | Tal Geffen
        RE: second level covariates: ANCOVA with 2 covariates/ binaric covariate/ statistical test
        Dear Chris, 

        Thank you very much for the answer. 

        I am not familiar with NiiStat, but it sounds very helpful and I will check it out. You mention here that it was initially meant for lesions. I am working in the field of neuropsychiatry (schizophrenia data), so there are no clear lesions. Is it also suitable for this? 

        Lastly, is there any way to do those kinds of calculations via CONN? 

        Best, 
        Tal

        • May 5, 2020  11:05 AM | Chris Rorden
          RE: second level covariates: ANCOVA with 2 covariates/ binaric covariate/ statistical test
          I was simply pointing out that with lesions we have a strong one-tailed hypothesis, so we traditionally put 0.05 as the threshold and only look at one tail. If you have a two-tailed hypothesis (e.g. SZ may result in some regions showing increased activity and others decreased), you would use 0.025 as the threshold for each tail. NiiStat is a general linear model tool, and it can analyze voxels, regions of interest and connectomes. It is agnostic regarding the source of the data.

          You may also want to look at PALM
            https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/PALM
          which is very similar to NiiStat, but does allow different models.

          • May 5, 2020  12:05 PM | Tal Geffen
            RE: second level covariates: ANCOVA with 2 covariates/ binaric covariate/ statistical test
            Thank you very much, 

            Best, 
            Tal

  • May 4, 2020  05:05 PM | Jeremy Purcell - University of Maryland
    Mricron workshops
    I thought that some in this community might be interested in a few MRIcron based workshops I developed. They were designed to help educators and/or researchers. See here: https://sites.google.com/view/cogneuroworkshops/home
    Cheers!
    Jeremy

  • Apr 27, 2020  04:04 AM | ashwativ
    MriCron issues installing on MacOS Catalina
    Hi,

    After downloading the dmg file for the latest (2019) version of MRIcron, my computer gives the following error when I try to open the application:

    ""MRIcron" can't be opened because Apple cannot check it for malicious software."

    Does anyone else have this issue? Any help would be greatly appreciated. I have been using MRIcron for years now and have never faced an issue installing it.

    Thank you so much.

    • Apr 27, 2020  07:04 AM | Isabelle Faillenot - university of Saint-Etienne (UJM)
      RE: MriCron issues installing on MacOS Catalina
      Hi !
      This is not a problem with MRIcron but with your Mac when downloading software from the web.
      When you get this error, go to the Finder, open Applications (or Programs...), then open MRIcron and accept the warning.
      Otherwise, have a look on the web and at Apple support.
      Isabelle

  • Mar 27, 2020  08:03 AM | Adam Zabicki
    unable to create file "*.\mricron.ini"
    Dear Chris,
    this probably isn't a bug, I guess, but I am wondering why, every time I close MRIcron, an error message appears saying 
    ******************
    Unable to create file
    "...\mricron.ini"
    Press OK...
    Press Abort...
    ********************
    ?

    And despite this message, the *.ini file is indeed created and updated... 

    I guess it is rather a Windows 10-related problem?!

    best, adam

    • Mar 29, 2020  02:03 PM | Chris Rorden
      RE: unable to create file "*.\mricron.ini"
      1. If you select the file properties for the ini file, is the "Read-only" checkbox selected? You want a file you can both read and write, in a folder you can read and write. 

      2.You may want to try upgrading to MRIcroGL (go here for a pre-release of the upcoming version). It will save your preferences in the user home folder, which avoids file permission issues. I will consider this for the next release of MRIcron. However, most of my efforts focus on my more recent tools (e.g. MRIcroGL) over legacy tools (MRIcron).
      Attachment: readonly.png

      • Mar 30, 2020  06:03 AM | Adam Zabicki
        RE: unable to create file "*.\mricron.ini"
        dear chris, 

        thank you for the reply. No, it wasn't the read-only attribute.
        After observing some other weird behaviour (when I repeatedly double-clicked mricron.exe and closed it immediately, the error message sometimes appeared and sometimes did not), I moved the MRIcron folder out of my Dropbox and now it seems OK. I guess the Dropbox client was "blocking" the ini file.

        best,
        adam

  • Mar 10, 2020  08:03 PM | ck_ - Institute of Neurosciences Kolkata
    Getting error message and unable to use dcm2niigui.exe application after installing 'MRIcron' in Windows 7 (32bit)
    I want to convert DICOM to NIfTI using dcm2niigui, where I am supposed to drag and drop all the DICOM images that need to be converted. 

    I followed these steps:

    1. Downloaded MRIcron (2-September-2019, v1.0.20190902), MRIcron_windows.zip.
    2. After a few minutes, clicked on the downloaded .zip file to extract the items.
    3. After extracting, I can see a folder named 'MRIcron'.
    4. Double-clicked on it.
    5. Inside I can see a folder named 'Resources' and an .exe file named 'mricron.exe'.
    6. If I click on 'mricron.exe', I get an error saying: ! Cannot execute "C:\Users\.......\AppData\Local\Temp\Rar$EXa4796.38558\MRIcron\mricron.exe"
    7. I cannot see a 'dcm2niigui.exe' application inside the 'Resources' folder; instead I can see 'dcm2nii.exe'.
    8. If I click on this application, I again get the same type of error message as above.

    Please help me resolve it.

    • Mar 10, 2020  09:03 PM | Chris Rorden
      RE: Getting error message and unable to use dcm2niigui.exe application after installing 'MRIcron' in Windows 7 (32bit)
      If your version of Windows does not allow you to drag and drop folders, press the "Select Folder To Convert" button at the bottom left of the window.

  • Feb 6, 2020  04:02 PM | Nele De Bruyn
    change colour of lesion heatmap
    Dear MRICon users, 

    I would like to create a heatmap of my lesions.
    I have already created the overlay image but have some trouble with its colour settings. 
    I am able to change the colour of the heatmap overlay via the dropdown menu from red to, for instance, violet or blue. 
    However, I would like to create a heatmap with more than one colour, e.g. red for high overlap and blue or green for low overlap of lesions. 
    I know a colleague of mine can choose this in the dropdown menu, but that is not the case for me. 
    I have already downloaded and installed the latest version of MRIcron, but this does not change the colour options. 
    Is there a place where I can download or enable this option? 

    Thank you very much for your help, 
    Nele

    • Feb 6, 2020  04:02 PM | Chris Rorden
      RE: change colour of lesion heatmap
      The first few color schemes (Grayscale, Red, Blue... Cyan) are built into the software. The additional color schemes are loaded from files in the "Resources" folder. When you download MRIcron, it should include this folder, which holds color schemes, image templates, etc. It should be in the same location as your MRIcron executable. You can change the files in this folder, e.g. adding and removing your preferred color schemes. However, if the Resources folder is not found, you will only see the built-in color schemes. You did not mention the operating system you are using, but on MacOS you need to click on the executable icon in the Finder and choose "Show Package Contents" to see the Resources folder. So the simple thing is to re-download MRIcron and keep it in the same location as its Resources folder.

      If you have a modern computer, you might want to try my newer MRIcroGL. While I am happy my old software has proved popular and robust, my more recent tools leverage advances in computer hardware.
      Attachment: clut.png

  • Feb 3, 2020  04:02 PM | Ana Pereira
    Superimposition of binary masks
    Hi all,

    I'm new to MRIcron and I am just looking for some help!


    I am working with some binary masks for intracranial volume and white matter lesions and I wanted to superimpose/subtract the test masks with some reference standard masks that were provided to me. Looking at the software instructions online the only way I could find to do this was by creating an overlay between a background and overlay image. The result, however, comes out as a colour overlay rather than a binary set? (instead of only allowing me to adjust intensities from 0 to 1 it allows me to vary colour and transparency % on the background and overlay) - Any thoughts?


    Thank you in advance,
    Ana

    • Feb 3, 2020  05:02 PM | Chris Rorden
      RE: Superimposition of binary masks
      The Draw/OverlayComparisons will allow you to subtract/add other images to your drawing. I would load the MRI scan as your background image, load the standard masks as overlays, and then create your drawing. The OverlayComparisons will allow you to choose an intersection, union or mask between your drawing and the masks.
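
      If you prefer to do the comparison outside the GUI, the same intersection/union/subtraction operations on two binary masks take only a few lines of Python (a sketch assuming nibabel/numpy and that both masks share the same voxel grid; filenames are placeholders):

      import nibabel as nib
      import numpy as np

      test = nib.load('test_mask.nii.gz')        # placeholder filenames
      ref = nib.load('reference_mask.nii.gz')
      a = test.get_fdata() > 0
      b = ref.get_fdata() > 0
      intersection = a & b       # voxels present in both masks
      union = a | b              # voxels present in either mask
      test_only = a & ~b         # voxels in the test mask but not the reference
      dice = 2.0 * intersection.sum() / (a.sum() + b.sum())
      print('Dice overlap:', dice)
      nib.save(nib.Nifti1Image(intersection.astype(np.uint8), test.affine), 'intersection.nii.gz')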

  • Jan 3, 2020  11:01 AM | hannasofia - Gothenburg university
    Flip T1 images
    Hi, 

    I'm working with brains with lesions. I would like to have all the lesions from all subjects on the same side of the brain. I wonder if it is possible to flip some of the brain scans in MRIcron, or whether other software would be suggested?

    Thank you in advance!

    /Hanna

    • Jan 3, 2020  02:01 PM | Chris Rorden
      RE: Flip T1 images
      Draw/Advanced/LRflip - you may want to do this prior to spatial normalization if you have an asymmetric template.
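
      Outside MRIcron, the flip can also be scripted (a minimal nibabel sketch, assuming the first voxel axis is the left-right axis, which is typical for dcm2niix output but worth checking; always inspect the result visually):

      import numpy as np
      import nibabel as nib

      img = nib.load('subject_T1.nii.gz')          # placeholder filename
      flipped = np.flip(img.get_fdata(), axis=0)   # mirror the first voxel axis
      # keep the original affine so the content is mirrored about the volume's midline
      nib.save(nib.Nifti1Image(flipped, img.affine, img.header), 'subject_T1_LRflip.nii.gz')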