help > RE: large filesize after matc2nii conversion
May 24, 2017 03:05 AM | Alfonso Nieto-Castanon - Boston University
RE: large filesize after matc2nii conversion
Dear Matti,
The larger filesize is most likely due to a combination of: a) by default CONN will use 'float32' type for functional data (your original data may use a different/smaller datatype); and b) by default (unless otherwise specified in the Setup.Options tab 'analysis space' field) CONN will resample your functional data to a 2mm voxel resolution (your original data may use a different/larger voxel-size).
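For intuition, a rough back-of-the-envelope estimate, assuming the output is stored as float32 on a 91x109x91 grid at 2mm resolution (that grid size is an assumption about your particular setup; check V(1).dim on your own file):

```matlab
% Sketch (not part of CONN): estimate the expected filesize of the
% confound-corrected 4-D file, assuming float32 data on a 2mm grid.
% The 91x109x91 grid is an assumption -- check your own file's dimensions
% with V(1).dim after V = spm_vol('niftiDATA_Subject004_Condition000.nii').
dim             = [91 109 91];    % assumed voxel grid at 2mm resolution
nvolumes        = 410;            % 205 images x 2 conditions
bytes_per_voxel = 4;              % float32
estimated_bytes = prod(dim) * nvolumes * bytes_per_voxel;
fprintf('expected size: ~%.2f GB\n', estimated_bytes/1e9);   % ~1.48 GB
```

Under those assumptions the expected size comes out at roughly 1.5 GB, which matches what you are seeing.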
Note: the vol(n).private.dat values are memory-mapped; they all point to the same source data. This does not mean that the data is actually replicated 410 times (that would lead to a filesize much larger than 1.5 GB).
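If you want to verify this yourself, a minimal sketch using standard SPM functions (spm_vol / spm_read_vols), on the example file from your post:

```matlab
% Sketch (requires SPM on the path): each element of the spm_vol structure
% array is just a header pointing into the same memory-mapped 4-D file,
% not a separate copy of the data.
fname = 'niftiDATA_Subject004_Condition000.nii';   % your example file
V = spm_vol(fname);                                % 410x1 struct array
disp(numel(V));                                    % number of volumes
disp(isequal(V(1).fname, V(end).fname));           % true: same source file
disp(size(V(1).private.dat));                      % full 4-D dimensions (memory-mapped, nothing loaded yet)
y = spm_read_vols(V(5));                           % loads only volume 5 into memory
disp(size(y));                                     % a single 3-D volume
```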
Hope this helps
Alfonso
Originally posted by Matti Gärtner:
Dear Conn Experts,
I want to extract confound-corrected time course data and checked the box in Setup-->Options-->"Create confound-corrected time-series". I have datasets of 205 images x 2 conditions, with a filesize of 350 MB per condition. The confound-corrected nifti images (niftiDATA_Subject004_Condition000.nii) have a large filesize of 1.5 GB. Since this filesize seemed a bit large to me, I used spm_vol to look at the data and found a structure array of length "number-of-images" (Z = 410x1 struct). When I looked at size(Z(1).private.dat), I saw that it contained 4-D data of 410 images. It looks like Z(2), Z(3) ... Z(n) all contain the same 4-D data, which I think explains the large file size. My question is whether I made a mistake somewhere, or whether there is a reason why the same 4-D data is saved 410 times.
Thanks a lot for your help in advance
Matti
Threaded View
| Title | Author | Date |
|---|---|---|
| large filesize after matc2nii conversion | Matti Gärtner | May 10, 2017 |
| RE: large filesize after matc2nii conversion | Alfonso Nieto-Castanon | May 24, 2017 |
