Start an SPH simulation from a phantom dump
This simple example shows how to start an SPH simulation from a phantom dump.
Download a phantom dump
import os
from urllib.request import urlretrieve

dump_folder = "_to_trash"
os.makedirs(dump_folder, exist_ok=True)  # portable equivalent of `mkdir -p`

url = "https://raw.githubusercontent.com/Shamrock-code/reference-files/refs/heads/main/blast_00010"
filename = dump_folder + "/blast_00010"

urlretrieve(url, filename)
('_to_trash/blast_00010', <http.client.HTTPMessage object at 0x7fb9bc089ac0>)
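Re-running the example downloads the dump again every time. A small cache-aware fetch helper (hypothetical, not part of the example script) skips the download when the file is already on disk:

```python
import os
from urllib.request import urlretrieve


def fetch_if_missing(url, dest):
    """Download url to dest unless dest already exists; return dest."""
    if not os.path.exists(dest):
        d = os.path.dirname(dest)
        if d:
            os.makedirs(d, exist_ok=True)  # create parent folders as needed
        urlretrieve(url, dest)
    return dest
```

Calling `fetch_if_missing(url, filename)` in place of `urlretrieve(url, filename)` then makes the script idempotent with respect to the download.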
Init shamrock
import shamrock

# When this script is run through the shamrock executable rather than the plain
# python interpreter, the executable performs the (MPI-specific) initialization
# itself, so we must not initialize the system again here.
if not shamrock.sys.is_initialized():
    shamrock.change_loglevel(1)
    shamrock.sys.init("0:0")

ctx, model, in_params = shamrock.utils.phantom.load_simulation(
    dump_folder, dump_file_name="blast_00010", in_file_name=None
)
-----------------------------------------------------------
---------------- Phantom dump loading -----------------
-----------------------------------------------------------
- Loading phantom dump from: _to_trash/blast_00010
- Generating Shamrock solver config from phantom dump
adiabatic eos: gamma = 1.6666666666666667
Setting periodic boundaries from phantmdump
- Setting Shamrock solver config
- Initializing domain scheduler
- Initializing from phantom dump (setup file: False)
Info: Push particles : [Model][rank=0]
rank = 0 patch id=0, add N=576 particles, coords = (-0.5,-0.4871392896287467,-0.5103103630798287) (0.5,0.4871392896287467,0.5103103630798287)
Info: Scheduler step timings : [Scheduler][rank=0]
metadata sync : 18.48 us (85.1%)
Info: summary : [LoadBalance][rank=0]
Info: - strategy "psweep" : max = 576 min = 576 [LoadBalance][rank=0]
Info: - strategy "round robin" : max = 576 min = 576 [LoadBalance][rank=0]
Info: Loadbalance stats : [LoadBalance][rank=0]
npatch = 1
min = 576
max = 576
avg = 576
efficiency = 100.00%
Info: Scheduler step timings : [Scheduler][rank=0]
metadata sync : 2.27 us (0.5%)
patch tree reduce : 1213.00 ns (0.2%)
gen split merge : 812.00 ns (0.2%)
split / merge op : 0/0
apply split merge : 962.00 ns (0.2%)
LB compute : 472.65 us (95.3%)
LB move op cnt : 0
LB apply : 13.77 us (2.8%)
Info: current particle counts : min = 576 max = 576 [Model][rank=0]
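The load-balance efficiency reported above is presumably the average patch load divided by the maximum (which gives 100% for a single patch of 576 particles). A minimal sketch of that bookkeeping, assuming that definition:

```python
def lb_stats(particle_counts):
    """Load-balance summary: min/max/avg particle count per patch and
    efficiency defined (assumption) as avg / max, in percent."""
    mn, mx = min(particle_counts), max(particle_counts)
    avg = sum(particle_counts) / len(particle_counts)
    return {
        "npatch": len(particle_counts),
        "min": mn,
        "max": mx,
        "avg": avg,
        "efficiency": 100.0 * avg / mx,
    }
```

With `lb_stats([576])` this reproduces the numbers in the log; an uneven split such as `lb_stats([100, 300])` would report ~66.7%.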
- Shamrock solver config:
----- SPH Solver configuration -----
[
{
"artif_viscosity": {
"alpha_max": 1.0,
"alpha_min": 0.0,
"alpha_u": 1.0,
"av_type": "varying_cd10",
"beta_AV": 2.0,
"sigma_decay": 0.1
},
"boundary_config": {
"bc_type": "periodic"
},
"cfl_config": {
"cfl_cour": 0.3,
"cfl_force": 0.25,
"cfl_multiplier_stiffness": 2.0,
"eta_sink": 0.05
},
"combined_dtdiv_divcurlv_compute": false,
"debug_dump_filename": "",
"do_debug_dump": false,
"enable_particle_reordering": false,
"eos_config": {
"Tvec": "f64_3",
"eos_type": "adiabatic",
"gamma": 1.6666666666666667
},
"epsilon_h": 1e-06,
"ext_force_config": {
"force_list": []
},
"gpart_mass": 0.001726334915006219,
"h_iter_per_subcycles": 50,
"h_max_subcycles_count": 100,
"htol_up_coarse_cycle": 1.1,
"htol_up_fine_cycle": 1.1,
"kernel_id": "M4<f64>",
"mhd_config": {
"mhd_type": "none"
},
"particle_killing": [],
"particle_reordering_step_freq": 1000,
"save_dt_to_fields": false,
"self_grav_config": {
"softening_length": 1e-09,
"softening_mode": "plummer",
"type": "none"
},
"show_ghost_zone_graph": false,
"show_neigh_stats": false,
"smoothing_length_config": {
"type": "density_based"
},
"time_state": {
"cfl_multiplier": 0.01,
"dt_sph": 0.0,
"time": 0.050000000000000225
},
"tree_reduction_level": 3,
"type_id": "sycl::vec<f64,3>",
"unit_sys": {
"unit_current": 1.0,
"unit_length": 1.0,
"unit_lumint": 1.0,
"unit_mass": 1.0,
"unit_qte": 1.0,
"unit_temperature": 1.0,
"unit_time": 1.0
},
"use_two_stage_search": true
}
]
------------------------------------
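The solver configuration printed above is plain JSON, so it can be inspected programmatically. A sketch, assuming you captured the printed config to a string or file (only a few fields are reproduced here for illustration):

```python
import json

# Abridged copy of the solver config printed above.
config_json = """
[{"eos_config": {"Tvec": "f64_3", "eos_type": "adiabatic", "gamma": 1.6666666666666667},
  "gpart_mass": 0.001726334915006219,
  "boundary_config": {"bc_type": "periodic"}}]
"""

cfg = json.loads(config_json)[0]
print(cfg["eos_config"]["gamma"])         # 1.6666666666666667, i.e. gamma = 5/3
print(cfg["boundary_config"]["bc_type"])  # periodic
```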
Run a single timestep, just to spend a bit of computing time :)
model.timestep()
---------------- t = 0.050000000000000225, dt = 0 ----------------
Info: summary : [LoadBalance][rank=0]
Info: - strategy "psweep" : max = 576 min = 576 [LoadBalance][rank=0]
Info: - strategy "round robin" : max = 576 min = 576 [LoadBalance][rank=0]
Info: Loadbalance stats : [LoadBalance][rank=0]
npatch = 1
min = 576
max = 576
avg = 576
efficiency = 100.00%
Info: Scheduler step timings : [Scheduler][rank=0]
metadata sync : 15.61 us (1.6%)
patch tree reduce : 2.40 us (0.3%)
gen split merge : 591.00 ns (0.1%)
split / merge op : 0/0
apply split merge : 1072.00 ns (0.1%)
LB compute : 877.79 us (92.7%)
LB move op cnt : 0
LB apply : 13.91 us (1.5%)
Info: Scheduler step timings : [Scheduler][rank=0]
metadata sync : 2.94 us (71.3%)
Warning: High interface/patch volume ratio. [InterfaceGen][rank=0]
This can lead to high mpi overhead, try to increase the patch split crit
patch 0 high interf/patch volume: 4.104166666666667
Info: conservation infos : [sph::Model][rank=0]
sum v = (3.1631663503355727e-19,5.832204939435293e-19,1.2313420489508965e-17)
sum a = (-6.33202265670962e-17,-4.423671295679711e-16,5.716848567539504e-16)
sum e = 0.9998775176108142
sum de = 7.582029080098474e-16
Info: CFL hydro = 0.00010701881380824262 sink sink = inf [SPH][rank=0]
Info: cfl dt = 0.00010701881380824262 cfl multiplier : 0.01 [sph::Model][rank=0]
Info: Timestep perf report: [sph::Model][rank=0]
+======+============+======+========+===========+======+=============+=============+=============+
| rank | rate (N/s) | Nobj | Npatch | tstep | MPI | alloc d% h% | mem (max) d | mem (max) h |
+======+============+======+========+===========+======+=============+=============+=============+
| 0 | 1.6565e+04 | 576 | 1 | 3.477e-02 | 0.2% | 2.8% 0.0% | 935.14 MB | 0.00 B |
+------+------------+------+--------+-----------+------+-------------+-------------+-------------+
Info: estimated rate : 0 (tsim/hr) [sph::Model][rank=0]
<shamrock.model_sph.TimestepLog object at 0x7fb999723130>
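A single call mostly primes the CFL machinery (note `dt = 0` above); a real run loops until a target time. The driver loop can be sketched as below, using a dummy stand-in that mimics the `get_time()`/`timestep()` interface seen in this example (assumption: the real `timestep()` advances the time returned by `get_time()`):

```python
class DummyModel:
    """Stand-in mimicking the model interface used in this example."""

    def __init__(self, t0=0.05, dt=0.01):
        self._t, self._dt = t0, dt

    def get_time(self):
        return self._t

    def timestep(self):
        self._t += self._dt  # the real solver picks dt from the CFL condition


def evolve_until(model, t_end, max_steps=10000):
    """Step the model until t_end, guarded against runaway loops."""
    steps = 0
    while model.get_time() < t_end and steps < max_steps:
        model.timestep()
        steps += 1
    return steps
```

With the real `model` object the same `evolve_until(model, t_end)` pattern applies unchanged.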
Note
Shamrock has to update some of the smoothing lengths stored in the phantom dump. This is most likely because smoothing lengths are single precision in phantom, so they are slightly off from Shamrock's double-precision point of view, hence the update.
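The note above can be checked numerically: round-tripping a double-precision value through single precision introduces a relative error of order 1e-8, which is enough to make a smoothing length look "off" on load. A standalone demonstration using only the standard library:

```python
import struct

h64 = 0.012345678901234567  # a double-precision smoothing length
# round-trip through IEEE 754 single precision (what phantom stores)
h32 = struct.unpack("f", struct.pack("f", h64))[0]

rel_err = abs(h32 - h64) / h64
print(f"float32 round-trip relative error: {rel_err:.2e}")
```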
Plot the result
import copy

import matplotlib
import matplotlib.pyplot as plt

pixel_x = 1200
pixel_y = 1080
radius = 0.7
center = (0.0, 0.0, 0.0)

aspect = pixel_x / pixel_y
pic_range = [-radius * aspect, radius * aspect, -radius, radius]
delta_x = (radius * 2 * aspect, 0.0, 0.0)
delta_y = (0.0, radius * 2, 0.0)

arr_rho = model.render_cartesian_column_integ(
    "rho", "f64", center=center, delta_x=delta_x, delta_y=delta_y, nx=pixel_x, ny=pixel_y
)

my_cmap = copy.copy(matplotlib.colormaps.get_cmap("gist_heat"))  # copy the default cmap
my_cmap.set_bad(color="black")

fig_width = 6
fig_height = fig_width / aspect
plt.figure(figsize=(fig_width, fig_height))
res = plt.imshow(arr_rho, cmap=my_cmap, origin="lower", extent=pic_range)

cbar = plt.colorbar(res, extend="both")
cbar.set_label(r"$\int \rho \, \mathrm{d} z$ [code unit]")
# or r"$\rho$ [code unit]" for slices

plt.title("t = {:0.3f} [code unit]".format(model.get_time()))
plt.xlabel("x")
plt.ylabel("y")  # the render spans the x and y axes (delta_x, delta_y)
plt.show()
![t = 0.050 [code unit]](../../_images/sphx_glr_run_start_sph_from_phantom_dump_001.png)
Info: compute_column_integ field_name: rho, rays count: 1296000 [sph::CartesianRender][rank=0]
Info: compute_column_integ took 2.26 s [sph::CartesianRender][rank=0]
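The `pic_range` extent is constructed so that each pixel of the render stays physically square, which is why `imshow` needs no aspect adjustment. A quick standalone check of that bookkeeping:

```python
pixel_x, pixel_y, radius = 1200, 1080, 0.7
aspect = pixel_x / pixel_y
pic_range = [-radius * aspect, radius * aspect, -radius, radius]

# physical size of one pixel along each axis
px = (pic_range[1] - pic_range[0]) / pixel_x
py = (pic_range[3] - pic_range[2]) / pixel_y
print(px, py)  # equal up to rounding: square pixels
```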
Total running time of the script: (0 minutes 4.505 seconds)
Estimated memory usage: 160 MB