I have worked on a range of exciting topics in theoretical nuclear physics and high energy physics. On this page, I give a short overview of the most important ones. The list of my publications can be found here.
The early universe was filled with hot and dense nuclear matter. It was not confined into nucleons, as in the atoms that make up the visible universe today, but existed as a plasma of quarks and gluons. Understanding the early universe therefore requires understanding this plasma. Nowadays, we can recreate this state by colliding heavy ions like lead or gold in particle accelerators.
However, investigating this state proves to be a formidable challenge. The quark-gluon plasma exists for only a very short time, so detectors can only measure the final state particles. So far, there exists no clear mapping between properties of the final state and the quark-gluon plasma itself.
Throughout the evolution of a heavy-ion collision, different degrees of freedom are realised: the quark-gluon plasma is well described by hydrodynamics, whereas bound states are described by transport theory. The computational description therefore also requires combining different models for the different stages. At the moment, I am working on and expanding the hybrid approach SMASH-vHLLE-hybrid.
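As a rough illustration of this staging, a hybrid simulation of a single event can be thought of as a chain of handovers between models. The sketch below is purely schematic; the stage functions are hypothetical placeholders and do not correspond to the actual SMASH-vHLLE-hybrid interface.

```python
# Schematic staging of a hybrid heavy-ion simulation (placeholder functions,
# not the SMASH-vHLLE-hybrid code).

def generate_initial_state(setup):
    # Early, far-from-equilibrium stage, evolved until the medium is hot and
    # dense enough to be treated as a fluid.
    return {"energy_density_profile": "..."}

def run_viscous_hydrodynamics(initial_conditions):
    # Quark-gluon plasma stage: viscous hydrodynamic evolution of the fluid.
    return {"particlization_surface": "..."}

def sample_hadrons(surface):
    # Particlization: fluid cells on the transition surface are converted back
    # into individual hadrons.
    return ["hadron_1", "hadron_2"]

def run_hadronic_transport(hadrons):
    # Late, dilute stage: hadronic rescatterings and decays in transport theory.
    return hadrons

def run_hybrid_event(setup):
    initial_conditions = generate_initial_state(setup)
    surface = run_viscous_hydrodynamics(initial_conditions)
    hadrons = sample_hadrons(surface)
    return run_hadronic_transport(hadrons)

print(run_hybrid_event({"system": "Pb+Pb"}))
```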
My work is currently centered around investigating viscosities in hybrid approaches. Viscous corrections affect observables like flow in the different stages of the collision, and therefore also the traces of a phase transition and of the emergence of collective behaviour.
Currently, I am investigating the impact of the initial state model. The initial state of the collision, the first few moments after the nuclei come into contact, cannot be observed directly: it is too short-lived, and its signal is overwhelmed by that of the subsequent phases. As a result, there is a multitude of theoretical models describing this initial state, which is used as input to the hydrodynamic description in hybrid models. Initial state properties can have an impact on the emergence of collective behaviour like flow, the preference of outgoing particles to move in a certain direction. Collective flow is, however, often used to extract properties of the quark-gluon plasma. Therefore, it is important to study how the choice of initial state model changes theoretical predictions.
Performing a data analysis on extensive sets of event data which I created, I could infer that the presence of initial state transverse momentum contributes to final state flow. The prevalent simple picture of a deformed initial state leading to final state flow is therefore incomplete: momentum in the initial state also changes final state flow and has a significant impact on predictions.
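For context, flow is quantified by the Fourier coefficients of the azimuthal distribution of the final state particles. The snippet below is a minimal sketch of how, for instance, the elliptic flow v2 can be estimated from a list of particle azimuthal angles; it ignores the event-plane resolution and non-flow corrections used in realistic analyses.

```python
import numpy as np

def elliptic_flow(phi):
    """Minimal event-plane estimate of the elliptic flow v2.

    phi: azimuthal angles of the final state particles (radians).
    The event plane angle Psi_2 is obtained from the second-order Q-vector,
    and v2 is the mean of cos(2(phi - Psi_2)). No resolution or non-flow
    corrections are applied here.
    """
    phi = np.asarray(phi)
    q_x = np.sum(np.cos(2.0 * phi))
    q_y = np.sum(np.sin(2.0 * phi))
    psi_2 = 0.5 * np.arctan2(q_y, q_x)
    return np.mean(np.cos(2.0 * (phi - psi_2)))

# Toy usage: particles with a small cos(2*phi) modulation (v2 of about 0.05).
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
keep = rng.uniform(size=phi.size) < (1.0 + 0.1 * np.cos(2.0 * phi)) / 1.1
print(elliptic_flow(phi[keep]))
```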
My first peer-reviewed paper was centered around the investigation of a conjectured density dependence of the shear viscosity for intermediate energy collisions, where the SMASH-vHLLE-hybrid approach is especially successful. By including a shear viscosity that depends on both temperature and net baryochemical potential and by generating extensive data sets using HPC tools, I could show that a net baryochemical potential dependent shear viscosity reduces the impact of free technical parameters of the simulation. One such free parameter governs, for example, the point of transition between the hydrodynamic and non-equilibrium transport descriptions. Using my parameterisation, changing this parameter within a reasonable range has a much reduced impact on the elliptic flow.
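To make the idea concrete, a shear viscosity to entropy density ratio depending on both temperature and net baryochemical potential could be parameterised along the following lines. This is a deliberately simple, hypothetical functional form for illustration, not the parameterisation used in the paper.

```python
def eta_over_s(T, mu_B, eta_s_min=0.08, T_c=0.155, slope_T=1.0, slope_mu=0.3):
    """Hypothetical eta/s(T, mu_B) for illustration only (T, mu_B in GeV).

    The ratio takes its minimum eta_s_min at a transition temperature T_c and
    grows linearly away from T_c and with the net baryochemical potential.
    """
    return eta_s_min + slope_T * abs(T - T_c) + slope_mu * mu_B

print(eta_over_s(T=0.155, mu_B=0.0))  # minimum at the transition, mu_B = 0
print(eta_over_s(T=0.120, mu_B=0.4))  # larger eta/s at high net baryon density
```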
Almost all particle physics predictions are obtained by computing integrals such as cross sections, many of which are calculated numerically using Monte Carlo methods. However, due to the complex structure of the integrands, this comes at an extremely high computational cost. Machine learning offers new ways to optimize the integration process and can push adaptive importance sampling significantly beyond the capabilities of established approaches like VEGAS.
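As a minimal illustration of why the sampling distribution matters, the following sketch compares plain Monte Carlo with importance sampling for a sharply peaked one-dimensional toy integrand (not a physical cross section): sampling from a distribution that resembles the integrand reduces the variance of the estimate considerably.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Toy integrand: a narrow Gaussian peak, integrated over [0, 1].
def f(x):
    return np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2) / (0.05 * np.sqrt(2.0 * np.pi))

# 1) Plain Monte Carlo: sample x uniformly on [0, 1].
x = rng.uniform(0.0, 1.0, N)
plain = f(x)

# 2) Importance sampling: sample x from a slightly wider Gaussian q(x) centred
#    on the peak and weight each sample by f(x) / q(x).
x_is = rng.normal(0.5, 0.08, N)
q = np.exp(-0.5 * ((x_is - 0.5) / 0.08) ** 2) / (0.08 * np.sqrt(2.0 * np.pi))
inside = (x_is >= 0.0) & (x_is <= 1.0)
weights = np.where(inside, f(x_is) / q, 0.0)

print("plain MC:           ", plain.mean(), "+-", plain.std() / np.sqrt(N))
print("importance sampling:", weights.mean(), "+-", weights.std() / np.sqrt(N))
```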
In my Master's thesis, I worked on Neural Importance Sampling. This method reduces the variance of the integrator (and thereby improves the precision of the integral) by learning the optimal sampling of the latent variables. This is done by training a neural network which learns the optimal parameters of a normalizing flow; this normalizing flow transforms the originally uniformly distributed latent variables. The normalizing flows are realised using coupling cells, each of which contains a neural network that controls the transformation. The combination of multiple coupling cells determines the overall mapping, which is sketched in the lower right.
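The basic building block of such a flow is the coupling cell. Below is a minimal sketch of an affine coupling cell in PyTorch, a generic textbook version rather than the exact architecture used in my thesis (implementations aimed at integration typically use piecewise-polynomial couplings that map the unit hypercube onto itself): half of the inputs pass through unchanged and feed a small neural network whose output rescales and shifts the other half, so that the Jacobian determinant stays cheap to compute.

```python
import torch
import torch.nn as nn

class AffineCouplingCell(nn.Module):
    """Generic affine coupling cell (illustration, not the thesis architecture).

    The first half of the inputs is passed through unchanged and parameterises,
    via a small neural network, a scale and a shift applied to the second half.
    The Jacobian is triangular, so its log-determinant is just the sum of the
    scale parameters.
    """

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x_a, x_b = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x_a).chunk(2, dim=1)      # scale and shift from x_a
        s = torch.tanh(s)                          # keep the scaling well-behaved
        y_b = x_b * torch.exp(s) + t               # transform the second half
        log_det = s.sum(dim=1)                     # log |det J| of the cell
        return torch.cat([x_a, y_b], dim=1), log_det

# Toy usage: transform a batch of uniformly distributed latent variables.
z = torch.rand(128, 4)
cell = AffineCouplingCell(dim=4)
x, log_det = cell(z)
print(x.shape, log_det.shape)
```

Stacking several such cells, with the roles of the two halves swapped in between, yields a flexible invertible mapping whose density is known in closed form.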
The advantage of this approach lies in the fact that it is adaptive and does not require any knowledge of the integrand. Together with a quasi-flat phase space generator, this allows fast integration, also on GPU architectures. Unlike the very common adaptive approach VEGAS, this integration strategy does not fail for correlations between the integration axes. An example for this is the slashed circle function: the GIF on the right hand side shows how Neural Importance Sampling learns this distribution over time. The VEGAS algorithm would not be able to learn it efficiently, as both axes are maximally correlated.
In my Master's thesis, I developed multiple variants of Neural Importance Sampling as well as a quasi-flat phase space generator. A demonstration of the strength of this approach is given in this demo jupyter notebook, which shows how to use the python package I developed.
I have continued my work on this promising approach by contributing to the ZüNIS package.
The investigation of the early stages of heavy-ion collisions is of great interest for understanding the thermodynamics of the quark-gluon plasma, which was present during the first moments of the universe. As many questions about the early universe, including the excess of matter over antimatter, are still open, a lot of research is devoted to understanding the dynamics of heavy-ion collisions.
Before the nucleonic matter thermalises into thermodynamic equilibrium, the quark-gluon plasma, a so-called glasma is predicted for heavy-ion collisions at very high energies: a dense state of gluonic matter. However, it is not yet clear which process leads to the fast thermalisation observed in experiments.
The glasma can be described starting from the Color Glass Condensate (CGC) effective field theory, in which the evolution of the gluon fields at leading order in the strong coupling constant behaves like the collision of coherent classical Yang–Mills fields. The CGC describes the initial condition before and at the time of the collision. It approximates the description of the fast partons in the wave function of a hadron by exploiting the fact that their dynamics is slowed down by relativistic effects, and it provides a way to track the evolution of states with energies that are relevant in the dense regime. This description enables the use of kinetic theory to build numerical simulations of the interactions during the thermalisation of the glasma, in the form of Boltzmann solvers.
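Schematically, such a kinetic description evolves the phase-space distribution f(t, x, p) of the gluons with a Boltzmann equation of the generic form below; the scattering amplitudes enter through the collision kernel C[f].

```latex
\left( \partial_t + \vec{v}_{\vec{p}} \cdot \vec{\nabla}_{\vec{x}} \right) f(t, \vec{x}, \vec{p}) \;=\; \mathcal{C}[f](t, \vec{x}, \vec{p})
```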
In a research internship in collaboration with Prof. François Gelis, I worked on such simulations in order to include momentum-dependent scattering amplitudes. This generalisation was made possible by revisiting several approximations of the simulation. Allowing momentum-dependent scattering makes much more realistic cases accessible, including resummed higher-order interactions.
Multiple numerical experiments showed the strong effect of the momentum-dependent terms on the thermalisation time. The pictures on the right show how these terms can cause early thermalisation.
In my Bachelor's thesis (in German), I worked on the numerical treatment of hyperbolic differential equations. This is important for numerical simulations of fluid dynamics, for example the evolution of the distribution of matter in the universe over long time scales, which is used to test different models of dark matter by comparing galaxy and star formation and evolution to the observed universe. Such a simulation is performed using the Euler equations, which are hyperbolic differential equations. They are, however, only valid for positive values of the density.
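For reference, the one-dimensional Euler equations in conservation form read as below, with density ρ, velocity u, pressure p and total energy density E; the positivity requirement refers to the density (and pressure).

```latex
\partial_t \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}
+ \partial_x \begin{pmatrix} \rho u \\ \rho u^2 + p \\ u\,(E + p) \end{pmatrix}
= 0
```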
Hyperbolic differential equations can be solved efficiently using the Discontinuous Galerkin (DG) method. However, a high convergence rate is necessary in order to keep the required resolution and the number of time steps low, especially when simulating long-running processes. The simulation collapses when a negative density is reached, which can happen for (nearly) discontinuous initial conditions or shock waves. In order to prevent such cases, a positivity preserving limiter is required. Naive approaches would greatly reduce the convergence rate and thereby cause high numerical costs.
For the one-dimensional case, I implemented a positivity preserving limiter which retains high convergence rates by making use of a simple linear scaling limiter. The additional numerical cost of such a limiter is very low. As seen in the figure on the right, a 1D blast wave can be simulated stably with a high resolution of the blast wave front, even though very few DG cells were used. Positivity preserving limiters are an important part of modern DG codes like TENET, which are used for high-performance astrophysical simulations.
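The core of the linear scaling idea can be sketched in a few lines (the interface below is illustrative, not my actual implementation): within each DG cell, the polynomial solution is compressed towards its cell average just enough that the density stays above a small floor at all nodes. Since the cell average is untouched, the scheme stays conservative, and in cells that are already positive the solution is not modified at all, which preserves the high convergence rate.

```python
import numpy as np

def positivity_limit(rho_nodes, rho_avg, eps=1e-12):
    """Linear scaling limiter for the density in one DG cell (sketch).

    rho_nodes: values of the DG density polynomial at the quadrature nodes
    rho_avg:   cell-averaged density (assumed positive)
    The polynomial is scaled towards the cell average by a factor theta chosen
    such that every nodal value stays >= eps. The cell average is unchanged.
    """
    rho_min = np.min(rho_nodes)
    if rho_min >= eps:
        return rho_nodes                              # positive cell: no change
    theta = (rho_avg - eps) / (rho_avg - rho_min)     # smallest necessary rescaling
    return rho_avg + theta * (rho_nodes - rho_avg)

# Toy usage: a cell whose polynomial undershoots into negative density.
nodes = np.array([1.2, 0.4, -0.1, 0.9])
avg = nodes.mean()   # stands in for the exact cell average
print(positivity_limit(nodes, avg))
```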