I am trying to load 'GFM_Metals' to calculate the abundance of hydrogen for halo_ID = 0, snap = 91 in TNG50-1. However, doing so takes more than 10 GB of memory and therefore kills the kernel. I have tried saving the array in chunks, but that resulted in the same issue.
Is there a way to load the hydrogen abundance without encountering this problem?
Dylan Nelson
8 May
Depending on what you are trying to do with this data, you may want to load it in smaller pieces, one at a time.
(If you only ever load it in 1 GB pieces at a time, you can never run out of memory.)
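The pattern can be sketched with a plain NumPy reduction over fixed-size pieces (the chunk size and the toy reduction are illustrative assumptions; with real snapshot files the same loop would read one slice of the HDF5 dataset per iteration instead of slicing an in-memory array):

```python
import numpy as np

def chunked_mean(data, chunk_rows=1024):
    """Accumulate a mean over `data` one chunk at a time, so peak
    memory stays bounded by `chunk_rows` elements per iteration."""
    total, count = 0.0, 0
    for start in range(0, data.shape[0], chunk_rows):
        piece = np.asarray(data[start:start + chunk_rows])  # only this piece in RAM
        total += piece.sum(dtype=np.float64)
        count += piece.size
    return total / count
```

Any reduction (sum, histogram, min/max) can be accumulated the same way; only operations that truly need all particles at once require the full array.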
For the particular case of two-dimensional datasets such as GFM_Metals where you only want a slice, you can use the (undocumented) argument mdi of il.snapshot.loadSubset(). This stands for "multi-dimensional index" and will allow you to load just the H slice.
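Putting the two together, a minimal sketch for this case might look like the following. It assumes the standard illustris_python helper scripts and a local copy of TNG50-1 (the base path is a placeholder); in GFM_Metals, index 0 is the hydrogen mass fraction:

```python
def load_hydrogen_fractions(basePath, snapNum=91, haloID=0):
    """Load only the hydrogen column of GFM_Metals for one FoF halo."""
    import illustris_python as il  # deferred import: only needed at call time

    # Restrict the read to the particles belonging to this halo.
    subset = il.snapshot.getSnapOffsets(basePath, snapNum, haloID, "Group")

    # mdi=[0] slices the 2D dataset on disk: only the hydrogen column
    # (element index 0) is read into memory, never the full (N, 10) array.
    return il.snapshot.loadSubset(basePath, snapNum, 'gas',
                                  fields=['GFM_Metals'],
                                  subset=subset, mdi=[0])
```

Calling `load_hydrogen_fractions('/path/to/TNG50-1/output')` would then return a one-dimensional array with one hydrogen mass fraction per gas cell of halo 0, at a fraction of the memory cost of the full dataset.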