UPDATE: I found the problem (it was in how I was using zip) and I'm working on a fix. I'll update here if the files save successfully, in which case feel free to delete this post. I'm leaving everything below in case something still isn't working. Thank you!
Hello,
I'm having trouble figuring out why my code fails to save all of the particle fields I requested. I checked the catalog for these subhalos, and they have far more particle data than is being saved: every subhalo I selected should have at least ~800 dark matter particles, since I set a minimum total dm mass of 5e9 M_sun. The first couple of subhalo IDs I put through the loop save fine (files of 1000-3000 KB), but after that the files are anywhere between 1-200 KB (many around 5 KB), which is obviously too small. I noticed this when a loop that creates profiles broke, so I checked the points in that file and found it saved only ~100 dm particles, even though the catalog says that subhalo has ~10000 dm particles. Is there something wrong with doing this in a loop? Here's my code:
Just to explain the try/except KeyError code: some of these subhalos don't have any stars, so my code breaks because there is no PartType4 key. If a KeyError comes up, I tell my code to pass when I'm saving the subhalos with stars. When I want to save the ones with no stars, I use try/except the other way around: the try block attempts to append a star field, which raises the error and drops into the except block, where I save only the dm data.
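For reference, the pattern I mean looks roughly like this (a minimal sketch, not my exact code; the dict keys are just illustrative):

import h5py

def load_cutout(filename):
    # dm (PartType1) is always present in these cutouts,
    # but star-free subhalos have no PartType4 group
    data = {}
    with h5py.File(filename, 'r') as f:
        data['dm_coords'] = f['PartType1']['Coordinates'][()]
        try:
            data['star_coords'] = f['PartType4']['Coordinates'][()]
        except KeyError:
            # no PartType4 group: keep only the dm data
            pass
    return data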
Is there something wrong with using the API in a loop, or is it my code? Thank you!
Dylan Nelson
28 May '20
Hi Sophia,
For simplicity I would separate out downloading/saving from everything else, as follows:
import requests
import h5py
from os import rename

baseUrl = 'http://www.tng-project.org/api/'
headers = {"api-key": "my_key"}  # replace with your own API key
def get(path, params=None):
    # make HTTP GET request to path
    r = requests.get(path, params=params, headers=headers)
    # raise exception if response code is not HTTP SUCCESS (200)
    r.raise_for_status()

    if r.headers['content-type'] == 'application/json':
        return r.json()  # parse json responses automatically

    if 'content-disposition' in r.headers:
        filename = r.headers['content-disposition'].split("filename=")[1]
        with open(filename, 'wb') as f:
            f.write(r.content)
        return filename  # return the filename string

    return r
def save_part_data(simRun, snapval, subidval):
    """ Download and save particle cutout for one subhalo. """
    sub_prog_url = baseUrl + simRun + "/snapshots/" + str(snapval) + "/subhalos/" + str(subidval) + "/"
    sub_prog = get(sub_prog_url)  # metadata for this subhalo (also verifies it exists)

    cutout_request = {'dm': 'Coordinates', 'stars': 'Coordinates,Masses'}
    cutout = get(sub_prog_url + "cutout.hdf5", cutout_request)

    final_filename = "cutout_snap=%d_sub=%d.hdf5" % (snapval, subidval)
    rename(cutout, final_filename)
def analyze_part_data(snapval, subidval):
    """ Analyze an already downloaded cutout. """
    filename = "cutout_snap=%d_sub=%d.hdf5" % (snapval, subidval)
    data = {}

    with h5py.File(filename, 'r') as f:
        for groupname in ['PartType1', 'PartType4']:
            if groupname not in f:
                continue  # e.g. subhalos with no stars have no PartType4
            data[groupname] = {}
            for key in f[groupname]:
                data[groupname][key] = f[groupname][key][()]

    # now compute rdm, rstars, and do any other analysis of interest
    dm_pos = data['PartType1']['Coordinates']
    # ...
def run():
    simRun = "TNG100-1"
    snap = 99
    subid = 123456

    save_part_data(simRun, snap, subid)
    analyze_part_data(snap, subid)  # takes only (snapval, subidval)
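Running this over many subhalos is then just a loop over the two functions; a minimal sketch (the ID list below is only an example):

def run_many():
    simRun = "TNG100-1"
    snap = 99
    subids = [123456, 123457, 123458]  # example IDs only

    for subid in subids:
        save_part_data(simRun, snap, subid)
        analyze_part_data(snap, subid)

This way, if a download fails or a file looks too small, you can re-run just the saving step for that subhalo without repeating the analysis.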
Hi Dylan, this is so much neater! Thank you! =)