I would like to extract the dark matter filaments within 10 virial radii of the center of a specific group halo, using DisPerSE.
To do this, the full dark matter particle data would need to be loaded, but it is too large to fit in memory.
As a workaround I used il.snapshot.loadHalo , but it seems to return data only out to about three times the virial radius.
Is there any way to load only the dark matter snapshot data within a given spatial range? e.g. (100 kpc, 100 kpc, 100 kpc) < (x, y, z) < (200 kpc, 200 kpc, 200 kpc)
I know there is already a DisPerSE catalog for IllustrisTNG , but I wonder whether something like this is possible.
Best Regards,
Yeongyeong Lee
Dylan Nelson
14 Jan '23
Hello Yeongyeong,
It is somewhat unusual to run a cosmic web classifier like Disperse on the raw particle data, instead of just on the halo catalogs. Are you sure you don't want to run it on the halos? (A much smaller dataset, tracing the same structures). This is what the available Cosmic Web Distances (Disperse) catalog does.
Otherwise, the short answer is that to load all (e.g. DM) particles out to a large distance from a halo, you need to load the entire snapshot. This requires a large-memory machine. Alternatively, you can load the snapshot in a number of (e.g. 100) chunks. For each chunk, save only the particles within the region of interest. At the end, you will have collected all the particles of interest, while never using more than (total_dataset_size/100) of memory.
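The chunked pass described above can be sketched as follows. This is a minimal illustration, not the illustris_python API: the generator of random chunks is a stand-in for however you actually read one snapshot chunk at a time (e.g. opening each snapshot HDF5 file with h5py and reading its `PartType1/Coordinates`), and the box bounds are placeholders. Note it also ignores periodic boundary wrapping, which you would need to handle for regions near the box edge.

```python
import numpy as np

def select_in_box(coords, box_min, box_max):
    """Boolean mask for particles inside an axis-aligned bounding box."""
    return np.all((coords >= box_min) & (coords < box_max), axis=1)

def collect_particles(chunk_iter, box_min, box_max):
    """Loop over snapshot chunks, keeping only particles inside the box.

    Peak memory is one chunk plus the (much smaller) set of kept
    particles, never the full snapshot.
    """
    kept = []
    for coords in chunk_iter:
        mask = select_in_box(coords, box_min, box_max)
        if mask.any():
            kept.append(coords[mask])
    return np.concatenate(kept) if kept else np.empty((0, 3))

# Hypothetical usage: replace this generator with per-file reads of the
# real snapshot chunks (e.g. via h5py on each snap_*.N.hdf5 file).
rng = np.random.default_rng(0)
chunks = (rng.uniform(0.0, 300.0, size=(1000, 3)) for _ in range(100))
box_min = np.array([100.0, 100.0, 100.0])  # placeholder bounds, ckpc/h
box_max = np.array([200.0, 200.0, 200.0])
selected = collect_particles(chunks, box_min, box_max)
```

The same pattern works for any per-particle field (masses, velocities): apply the mask computed from the coordinates to each field array before discarding the chunk.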