Hi!
I have a question regarding the speed of catalogue loading using illustris_python in JupyterLab.
I have a function in which I have to load the halo catalogue for every snapshot, and it looks like this:
```python
def loop_quantity_over_halos(base_path):
    fields = ['GroupPos', 'Group_R_Mean200', 'Group_M_Crit200']
    for snap in range(100):
        halos = il.groupcat.loadHalos(base_path, snap, fields=fields)
        Q = compute_quantities(halos)
```
The function `compute_quantities` is pretty fast, but `loadHalos` is slowing down the process (it takes 99% of the computation time).
Is there a faster way to loop over snapshots and load the halo catalogue for each one?
You can switch to this version of the illustris_python library (just replace the directory in the Lab).
It includes a multi-threaded catalog reader, which I believe should also work OK in the Lab. On our cluster, this usually makes such loads 10x or more faster.
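In the meantime, the loop can also be sped up from the user side by overlapping the slow catalogue reads with a thread pool, since the loads are I/O-bound. The sketch below assumes `il.groupcat.loadHalos` is safe to call concurrently for read-only access; to keep it self-contained here, a hypothetical stand-in `load_halos` replaces the real call, and `compute_quantities` is a dummy placeholder for your analysis step.

```python
# Sketch: overlap per-snapshot catalogue loads using a thread pool.
# Replace `load_halos` with il.groupcat.loadHalos in real use.
from concurrent.futures import ThreadPoolExecutor

FIELDS = ['GroupPos', 'Group_R_Mean200', 'Group_M_Crit200']

def load_halos(base_path, snap, fields):
    # Hypothetical stand-in for il.groupcat.loadHalos(base_path, snap, fields=fields):
    # returns a dict mapping field name -> data for one snapshot.
    return {f: [snap] for f in fields}

def compute_quantities(halos):
    # Placeholder for the (fast) per-snapshot analysis step.
    return len(halos)

def loop_quantity_over_halos(base_path, n_snaps=100, n_workers=8):
    # Submit all the slow loads to the pool so several snapshots are
    # read at once, then run the fast compute step in snapshot order.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(load_halos, base_path, s, FIELDS)
                   for s in range(n_snaps)]
        return [compute_quantities(f.result()) for f in futures]

results = loop_quantity_over_halos('sims.TNG/TNG100-1/output')
```

How much this helps depends on the storage backend; with many small HDF5 reads over a network filesystem, a handful of workers is usually enough, and more can start to contend.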
Thank you very much Dylan!