\section{Introduction}
\label{intro}
The astrophysical plasmas characterized by high Lundquist number
$S\equiv Lv_A/\eta$ ($L\equiv$ length scale of the magnetic field
\textbf{B} variability, $v_A\equiv$ Alfv\'en speed, and $\eta\equiv$
magnetic diffusivity) satisfy Alfv\'en's flux-freezing theorem in the
presence of laminar plasma flow, which ensures that magnetic field lines
remain tied to fluid parcels \citep{Alfven}. The scenario is different
in a turbulent magnetofluid; see \citet{Vishnaic1999, Vishnaic2000, Eyink}
for details.
An inherently large $L$ implies a large $S$ and
ensures flux freezing in astrophysical plasmas. In particular,
the solar corona with global $L\approx 100 ~\rm Mm$, $v_{A}\approx10^{6}$
m\,s$^{-1}$, ${\bf B}\approx10$ G, and $\eta\approx1$ m$^2$s$^{-1}$ (calculated
using the Spitzer resistivity) has $S\approx10^{14}$ \citep{Aschwanden}.
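For concreteness, the quoted order of magnitude follows directly from these parameters; a minimal arithmetic sketch (values as cited above, SI units):

```python
# Order-of-magnitude estimate of the coronal Lundquist number S = L * v_A / eta,
# using the coronal parameters quoted in the text (SI units).
L = 100e6   # length scale: 100 Mm, in meters
v_A = 1e6   # Alfven speed, m/s
eta = 1.0   # magnetic diffusivity (Spitzer), m^2/s

S = L * v_A / eta
print(f"S ~ {S:.0e}")  # ~1e+14
```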
However, the coronal plasma also exhibits diffusive behavior in the
form of solar transients---such as solar flares, coronal mass
ejections (CMEs), and jets. All of these are manifestations of
magnetic reconnections, which dissipate magnetic
energy into heat and the kinetic energy of plasma flow while
rearranging the magnetic field lines \citep{Arnab}. Magnetic
reconnections being dissipative processes, their onset relies on
the generation of small scales by the large-scale dynamics,
which steepens the magnetic field gradient and thereby
renders the plasma intermittently diffusive. The small scales may
naturally occur as current sheets (CSs) \citep{ParkerECS},
magnetic nulls \citep{Parnell96,Ss2020}, and quasi-separatrix layers
(QSLs) \citep{Demoulin, avijeet2020}, or can develop spontaneously
during the evolution of the magnetofluid. Such spontaneous developments
(owing to discontinuities in the magnetic field) are expected from
Parker's magnetostatic theorem \citep{ParkerECS} and have also been
established numerically by MHD simulations \citep{Ss2020, DKRB,
SKRB, Sanjay2016, SK2017, avijeet2017, avijeet2018, Ss2019,
Sanjay2021}. Identification of the small (viz., dissipation)
scale depends on the specific physical system under consideration.
For example, the length scale at which reconnection occurs is
found to be $L_{\eta}\equiv\sqrt{\tau_{d}\eta}\approx 32$~m, based on
$\eta\approx1$ m$^2$s$^{-1}$ and the magnetic diffusion time scale
$\tau_{d}$ approximated by the impulsive rise time of the hard X-ray
flux, $\approx 10^3$ s \citep{PF200}, during a flare. Consequently,
the estimated ion inertial length scale
$\delta_i\approx 2.25$ m in the solar corona \citep{PF200} suggests
that the order of the dissipation term, $1/S\approx 10^{-5}$ (approximated
with $L_{\eta}$), is
smaller than the order of the Hall term, $\delta_i/L_\eta\approx 10^{-2}$,
in the standard dimensionless induction equation
\citep{Westerberg07, 2021ApJ...906..102B}
\begin{equation}
\label{inducresist}
\frac{{\partial\bf{B}}}{\partial t} =
\nabla\times \left({\bf{v}}\times{\bf{B}}\right)
-\frac{1}{S}\nabla\times{\bf{J}}
-\frac{\delta_i}{L_\eta}\nabla\times\left({\bf{J}}\times{\bf{B}}\right)~,
\end{equation}
where ${\bf{J}}~(=\nabla\times{\bf{B}})$ and ${\bf{v}}$ are the
volume current density and the plasma flow velocity, respectively.
This difference in order of magnitude indicates the
importance of the Hall term in the diffusive limit \citep{BIRN, BhattacharjeeReview} of the solar
coronal plasma, which further signifies that HMHD can play a
crucial role in coronal transients, magnetic reconnections being
their underlying mechanism. Importantly, the aforesaid activation
of the Hall term only in the diffusive limit is crucial in setting up an
HMHD-based numerical simulation, invoked later in the paper.
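The scale estimates above follow from simple arithmetic; a minimal sketch using the values cited in the text (following \citet{PF200}):

```python
import math

# Dissipation (reconnection) length scale L_eta = sqrt(tau_d * eta),
# with the values quoted in the text (SI units).
tau_d = 1e3      # magnetic diffusion time ~ impulsive hard X-ray rise time, s
eta = 1.0        # magnetic diffusivity, m^2/s
delta_i = 2.25   # ion inertial length in the corona, m

L_eta = math.sqrt(tau_d * eta)      # ~31.6 m, i.e. the quoted ~32 m
hall_ratio = delta_i / L_eta        # delta_i / L_eta, the Hall-term prefactor
print(f"L_eta ~ {L_eta:.1f} m, delta_i/L_eta ~ {hall_ratio:.3f}")
```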
Important insight into magnetic reconnection can be gained by casting
(\ref{inducresist}) in the absence of dissipation as
\begin{equation}
\label{inducresist1}
\frac{{\partial\bf{B}}}{\partial t} =
\nabla\times \left({\bf{w}}\times{\bf{B}}\right)~,
\end{equation}
\noindent following \citet{hornig-schindler}. The velocity ${\bf{w}}={\bf{v}}-(\delta_i/L_\eta)\,{\bf{J}}$,
which is also the electron fluid velocity, conserves magnetic flux \citep{schindler} and topology \citep{hornig-schindler}, since
the field lines are tied to it. Consequently, the field lines slip out of the fluid parcels advecting with velocity
${\bf{v}}$, to which the lines are frozen in ideal MHD. Importantly, the resulting breakdown of flux freezing is
localized to the region where the current density is large and the Hall term is effective. Because of the slippage, two fluid parcels do not remain connected by the same field lines over time---a change in field line connectivity. Following \citet{schindler}, such a localized breakdown of flux freezing, along with the resulting change in connectivity, can be considered the basis of reconnection \citep{axford}. Additional slippage of field lines
occurs in the presence of the dissipation term, but with a change in magnetic topology. The present paper extensively relies on this interpretation of reconnection as the slippage of magnetic field lines and the resulting change in magnetic connectivity.
The importance of HMHD is by no means limited to
coronal transients. For example, HMHD is important in the Earth's
magnetosphere, particularly at the magnetopause and the magnetotail
where CSs are present \citep{Mozer2002}. Generally, HMHD is
expected to support faster magnetic reconnections, yet without the
Hall term directly affecting the dissipation rates of magnetic energy
and helicity in the induction equation \citep{PF200, chenshi}. The
faster reconnection may be associated with a more effective
slippage of field lines in HMHD compared to resistive MHD, compatible
with the arguments presented earlier. Nevertheless,
these unique properties of the HMHD are expected to bring
subtle changes in the dynamical evolution of plasma, particularly
in the small scales dominated by magnetic reconnections, presumably
bringing a change in the large scales as a consequence. Such subtle
changes were found in the recent HMHD simulation
\citep{2021ApJ...906..102B}, performed by extending the computational
model EULAG-MHD \citep{PiotrJCP} to include the Hall effects.
Notably, the faster reconnection compared to MHD led to a breakage
of a magnetic flux rope, generated from analytically constructed
initial bipolar magnetic field lines \citep{Sanjay2016}. In turn,
the flux rope breakage resulted in the generation of magnetic
islands, as theorized by \citet{Shibata}. Clearly, it is compelling
to study the HMHD evolution in a more realistic scenario with the
initial magnetic field obtained from a solar magnetogram. Toward
this objective, we select AR NOAA 12734, recently reported by
\citet{2021Joshi} to have produced a C1.3 class flare.
In the absence of reliable direct measurements of the coronal magnetic field,
several extrapolation models, such as the nonlinear force-free
field (NLFFF) \citep{2008Wglman, 2012WglmnSakurai} and the non-force-free
field (non-FFF) \citep{HuDas08, Hu2010}, have been developed to construct
the coronal magnetic field from photospheric magnetograms. The
standard is the NLFFF, and recent data-based MHD simulations
initialized with it have been reasonably successful in simulating
the dynamics of various coronal transients \citep{2013Jiang,
2014NaturAm, 2014Innoue, 2016Savcheva}. However, NLFFF
extrapolations require the photosphere to be treated as force-free,
which it actually is not \citep{Gary}. Hence, a ``preprocessing technique''
is usually employed to minimize the Lorentz force on the photosphere
and thus provide a boundary condition suitable for NLFFF
extrapolations \citep{2006SoPhWgl, 2014SoPhJiang}, thereby
compromising the realism of the boundary data. Recently, the non-FFF
model, based on the principle of minimum energy dissipation rate
\citep{bhattaJan2004, bhattaJan2007}, has emerged as a plausible
alternative to the force-free models \citep{HuDas08, Hu2010,
2008ApJHu}. In the non-FFF model, the magnetic field \textbf{B} satisfies
the double-curl-Beltrami equation \citep{MahajanYoshida} and the
corresponding Lorentz force on the photosphere is non-zero while
it decreases to small values at the coronal heights \citep{avijeet2018,
Ss2019, avijeet2020}---concurring with the observations. In this
paper, we use the non-FFF extrapolation \citep{Hu2010} to obtain the
magnetic field in the corona from the photospheric vector magnetogram
obtained by the Helioseismic and Magnetic Imager (HMI) \citep{HMI}
onboard the Solar Dynamics Observatory (SDO) \citep{SDO}.
The paper is organized as follows. Section \ref{obs} describes the
flaring event in AR NOAA 12734; Section \ref{extrapolation} presents
the magnetic field line morphology of AR NOAA 12734 along with the
preferable sites for magnetic reconnections, such as QSLs, a 3D null
point, and a null line, found from the non-FFF extrapolation. Section
\ref{simulation-results} focuses on the numerical model, the numerical
set-up, and the evolution of the extrapolated magnetic field lines
along with their realizations in observations.
Section \ref{summary} highlights the key findings.
\section{Salient features of the C1.3 class flare in AR NOAA 12734}
\label{obs}
The AR NOAA 12734 produced an extended C1.3 class flare
on 2019 March 08 \citep{2021Joshi}. The impulsive phase of the
flare started at 03:07 UT, as reported in Figure 3 of
\citet{2021Joshi}, which shows the X-ray flux in the 1--8 {\AA} and
0.5--4 {\AA} channels detected by the Geostationary Operational Environmental
Satellite (GOES) \citep{Gracia}. The flux exhibits
two subsequent peaks after the onset of the flare,
one around 03:19 UT and another roughly around 03:38 UT. \citet{2021Joshi}
suggested that the eruptive event took place in a coronal sigmoid
with two distinct stages of energy release. Additional observations
using the multi-wavelength channels of the Atmospheric Imaging Assembly
(AIA) \citep{AIA} onboard SDO are listed below to highlight important
features pertaining to the simulations reported in this paper. Figure
\ref{observations} illustrates a spatio-temporal
observational overview of the event. Panel (a)
shows the remote semicircular brightening (C1) prior to the impulsive
phase of the flare (indicated by the yellow arrow). Panels (b) to (d)
indicate the flare by a yellow arrow and the eruption by a white arrow
in the 94 {\AA}, 171 {\AA}, and 131 {\AA} channels, respectively.
Notably, the W-shaped brightening appears in panels (b) to (d) along
with the flare in the different wavelength channels of SDO/AIA. Panel
(e) shows the circular structure of the chromospheric material (C2)
during the impulsive phase of the flare. It also highlights the
developed W-shaped flare ribbon (enclosed by the white box), which has
a tip at the center (marked by the white arrow). Panel (f) depicts
the post-flare loops in the 171 {\AA} channel, indicating the post-flare
magnetic field line connectivity between various negative and
positive polarities on the photosphere.
\section{non-FFF Extrapolation of the AR NOAA 12734}
\label{extrapolation}
As stated upfront, the non-FFF extrapolation technique proposed by
\citet{HuDas08}, based on the minimum dissipation rate (MDR) theory
\citep{bhattaJan2004, bhattaJan2007}, is used to obtain the
coronal magnetic field of AR NOAA 12734. The extrapolation
essentially solves the equation
\begin{eqnarray}
\label{tc}
\nabla\times\nabla\times\nabla\times \textbf{B}+a_1 \nabla\times\nabla\times
\textbf{B}+b_1 \nabla\times\textbf{B}=0~,
\end{eqnarray}
where the parameters $a_1$ and $b_1$ are constants. Following
\citet{Hu2010}, the field is constructed as
\begin{eqnarray}
\textbf{B}=\sum_{i=1,2,3} \textbf{B}_{i}~,~~ \nabla\times \textbf{B}_{i}
=\alpha_{i} \textbf{B}_{i}~,
\end{eqnarray}
where $\alpha_i$ is constant for a given $\textbf{B}_i$. The subfields
$\textbf{B}_1$ and $\textbf{B}_3$ are linear force-free fields with
$\alpha_1\neq\alpha_3$, whereas $\textbf{B}_2$ is a potential field
with $\alpha_2=0$. An optimal pair $\alpha=\{\alpha_1,\alpha_3\}$
is found iteratively by minimizing the average deviation
between the observed ($\textbf{B}_t$) and the
computed ($\textbf{b}_t$) transverse fields, quantified by
\begin{equation}
\label{En}
E_n=\left(\sum_{i=1}^{M} |\textbf{B}_{t,i}-\textbf{b}_{t,i}|\times |\textbf{B}_{t,i}|\right)/\left(\sum_{i=1}^{M}|\textbf{B}_{t,i}|^2\right)~,
\end{equation}
on the photosphere. Here, $M=N^2$ represents
the total number of grid points on the transverse plane. The grid
points are weighted with respect to the strength of the observed
transverse field to minimize the contribution from weaker fields;
see \citet{HuDas08} and \citet{Hu2010} for further details.
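Since (\ref{En}) governs the fit quality, a minimal numerical sketch may be useful; the function and array names below are ours (hypothetical), not taken from the extrapolation code:

```python
import numpy as np

def E_n(Bt_obs, bt_comp):
    """Field-strength-weighted deviation between observed (Bt_obs) and
    computed (bt_comp) transverse fields, each of shape (M, 2) for M
    photospheric grid points, as in Eq. (En) of the text."""
    mag_obs = np.linalg.norm(Bt_obs, axis=1)        # |B_t,i|
    dev = np.linalg.norm(Bt_obs - bt_comp, axis=1)  # |B_t,i - b_t,i|
    return np.sum(dev * mag_obs) / np.sum(mag_obs**2)

# A perfect fit gives E_n = 0; a zero computed field gives E_n = 1.
rng = np.random.default_rng(0)
B = rng.normal(size=(16, 2))
print(E_n(B, B))                  # 0.0
print(E_n(B, np.zeros_like(B)))  # 1.0
```

By construction, stronger observed transverse fields dominate the sums, which is precisely the weighting described above.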
Since (\ref{tc}) involves the evaluation of the second-order
derivative $(\nabla\times\nabla\times \textbf{B})_z=-(\nabla^2
\textbf{B})_z$ at $z=0$, the evaluation of \textbf{B} requires magnetograms
at two different values of $z$. In order to work with the generally
available single-layer vector magnetograms, \citet{Hu2010} introduced
an algorithm that involves additional
iterations to successively fine-tune the potential subfield
$\textbf{B}_2$. The system is reduced to second order by taking the
initial guess $\textbf{B}_2=0$, which makes it easier to determine
the boundary conditions for $\textbf{B}_1$ and $\textbf{B}_3$. If the
calculated value of $E_n$ turns out to be unsatisfactory---i.e.,
overly large---then a potential-field correction to $\textbf{B}_2$
is calculated from the difference between the observed and computed
transverse fields and added to the previous
$\textbf{B}_2$ to further reduce $E_n$. Notably, recent simulations
initiated with the non-FFF model have successfully explained the
circular ribbon flares in AR NOAA 12192 \citep{avijeet2018} and AR
NOAA 11283 \citep{avijeet2020}, as well as a blowout
jet in AR NOAA 12615 \citep{Ss2019}, thus establishing the credibility
of the non-FFF extrapolation.
The vector magnetogram is selected for 2019 March 08, 03:00 UT
($\approx$ 7 minutes prior to the start of the flare). The original
magnetogram cutout from the ``hmi.sharp$\_$cea$\_$720s'' series, of
dimensions 342$\times$195 pixels with a pixel resolution of 0.5 arcsec
and an extent of $124~ \rm Mm\times 71$ Mm, is considered, which ensures an
approximate magnetic flux balance at the bottom boundary. To
optimize the computational cost with the available resources, the
original field is rescaled and non-FFF-extrapolated over a volume of
256$\times$128$\times$128 pixels, keeping the physical extent the same
and preserving all magnetic structures throughout the region. In effect, the reduction changes the conversion factor to 1 pixel $\approx 0.484$ Mm along $x$ and $\approx 0.554$ Mm along the $y$ and $z$ directions of the employed Cartesian coordinate system.
Panel (a) of Figure~\ref{lfcombnd} shows $E_n$ in the transverse
field, defined in (\ref{En}), as a function of the number of iterations.
It shows that $E_n$ tends to saturate at a value of $\approx$0.22.
Panel (b) of Figure \ref{lfcombnd} shows the logarithmic decay of the
normalized horizontally averaged magnetic field, current density,
and Lorentz force with height. It is clear that the Lorentz force
is appreciable on the photosphere but decays rapidly with height,
agreeing with the general perception that the corona is force-free
while the photosphere is not \citep{Liu2020, Yalim20}. Panel (c)
shows that the Pearson-r correlation between the extrapolated and
observed transverse fields is $\approx$0.96, implying a strong
correlation. The direct volume rendering of the Lorentz force in
panel (d) also reveals its sharp decay with
height, expanding on the result of panel~(b).
To facilitate the description, Figure \ref{regions}~(a) shows the
SDO/AIA 304 {\AA} image at 03:25 UT, where the flare ribbon brightening
has been divided into four segments, marked B1-B4.
Figure \ref{regions}~(b) shows the initial global magnetic field
line morphology of AR NOAA 12734, partitioned into
four regions R1-R4 corresponding to the flare ribbon brightening
segments B1-B4. The bottom boundary of panel (b) comprises
$B_z$ maps in grey scale, where the lighter shade indicates
positive-polarity regions and the darker shade marks the
negative-polarity regions. The magnetic field line topologies and
structures belonging to a specific region and contributing to the
flare are documented below. \bigskip
\noindent {\bf{Region R1:}} The top-down view of the global magnetic field
line morphology is shown in panel (a) of Figure~\ref{region1}.
To help locate QSLs, the bottom boundary is overlaid
with the $\log Q$ map of the squashing factor $Q$ \citep{Liu} in all
panels of the figure. The distribution of high $Q$ values along with
$B_z$ on the bottom boundary helps in identifying differently
connected regions. A region with large $Q$ is prone to the onset
of slipping magnetic reconnections \citep{Demoulin}. Foot points
of the magnetic field lines constituting QSL1 and QSL2 trace along the
high $Q$ values near the bottom boundary. QSL1, involving the
magnetic field lines of Set I (green) and Set II (maroon), is shown
in panel (b). Particularly, the Set I (green) magnetic field lines
extend higher into the corona, forming the largest loops
in R1. Panel~(c) illustrates a closer view of QSL2
(multicolored) and the flux rope (black) beneath it,
situated between the positive polarities P1 and P2 and the negative
polarity N1. In panel~(d), the flux rope (constituted by the
twisted black magnetic field lines) is depicted from the side view.
The twist value $T_w$ \citep{Liu} on three vertical planes along the cross
section of the flux rope is also overlaid. Notably, the twist value
is 2 at the center of the rope and decreases outward (cf. the vertical
plane in the middle of the flux rope in panel (d)). \bigskip
\noindent {\bf{Region R2:}} Figure~\ref{R2R3R4exp} (a) shows the
side view of a 3D null point geometry of the magnetic
field lines, with the bottom boundary $B_z$ overlaid
with $\log Q$ ranging between 5 and 10. Panel~(b) depicts an enlarged
view of the 3D null location, marked in black. The null is located at
a height of $\approx$ 3~Mm above the photosphere. The null is
detected using a bespoke procedure \citep{DKRB, Ss2020} that
approximates the Dirac delta on the grid as
\begin{equation}
\label{ndefine}
n(B_i) = \exp\big[-\sum_{i=x,y,z}{(B_{i} -B_{o})^2}/{d_{o}^2}\big]~,
\end{equation}
where the small constants $B_o$ and $d_o$ correspond to the isovalue
of $B_i$ and the Gaussian spread, respectively. The function $n(B_i)$ takes
significant values only if $B_i\approx 0~\forall i$, whereupon a
3D null is the point where the three isosurfaces having isovalues
$B_i=B_o$ intersect.\bigskip
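A minimal sketch of the null-detection measure (\ref{ndefine}) on a gridded field may clarify its behavior; the function, parameter defaults, and the 1D toy field below are ours (hypothetical), not from the detection code:

```python
import numpy as np

def null_measure(Bx, By, Bz, B_o=0.0, d_o=0.1):
    """Gaussian proxy for a magnetic null, Eq. (ndefine) of the text:
    n ~ 1 only where all three field components are simultaneously
    close to the isovalue B_o (here B_o = 0)."""
    return np.exp(-((Bx - B_o)**2 + (By - B_o)**2 + (Bz - B_o)**2) / d_o**2)

# 1D toy field with a null at x = 0: B = (x, -x, x).
x = np.linspace(-1, 1, 201)
n = null_measure(x, -x, x)
print(x[np.argmax(n)])  # ~0.0 -- the null location, where n peaks at 1
```

In 3D, the same measure is evaluated on the full grid, and the null is located where the three isosurfaces $B_i=B_o$ intersect, i.e., where $n$ peaks.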
\noindent {\bf{Region R3:}} A side view of the magnetic field line
morphology in region R3 is shown in Figure \ref{R2R3R4exp} (c),
where the yellow surface corresponds to $n=0.9$. Panel~(d) highlights
a ``fish-bone-like'' structure, similar to the
schematic in Figure 5 of \citet{WangFB}. To show
that, in the limiting case, the $n=0.9$ surface reduces to a null line,
we plot the corresponding contours in the range $0.6\leq n \leq 0.9$ on three
pre-selected planes highlighted in panel (e). The shrinking
of the contours with increasing $n$ indicates the surface converging
to a line. Such null lines are also conceptualized as favorable
reconnection sites \citep{WangFB}. \bigskip
\noindent {\bf{Region R4:}} Figure \ref{R2R3R4exp} (f) shows the magnetic
field lines relevant to the plasma rotation in B4. Notably, the null
line from R3 intrudes into R4, and the extreme left plane in R3 (Figure \ref{R2R3R4exp} (e)) is also shared by R4.
\section{HMHD and MHD simulations of AR NOAA 12734}
\label{simulation-results}
\subsection{Governing Equations and Numerical Model}
In the spirit of our earlier related works
\citep{avijeet2018, Ss2019, avijeet2020}, the plasma is idealized
to be incompressible, thermodynamically inactive, and
explicitly nonresistive. While this relatively simple
idealization is naturally limited, it exposes the basic dynamics
of magnetic reconnections unobscured by the effects of
compressibility and heat transfer. Although the latter are important
for coronal loops \citep{2002ApJ...577..475R}, they do not directly
affect the magnetic topology---the focus of this paper. Historically
rooted in classical hydrodynamics, such idealizations have a proven
record in theoretical studies of geo/astrophysical phenomena
\citep{Rossby38, 1991ApJ...383..420D, RBCLOW, 2021ApJ...906..102B}.
Inasmuch as their cognitive value depends on an a posteriori validation
against observations, the present study offers yet another
opportunity to do so.
The Hall forcing has been incorporated \citep{2021ApJ...906..102B}
in the computational model EULAG-MHD \citep{PiotrJCP} to solve the
dimensionless HMHD equations,
\begin{eqnarray}
\label{momtransf}
\frac{\partial{\bf v}}{\partial t} +({\bf v}\cdot \nabla){\bf v}&=&
-\nabla p + (\nabla\times{\bf B})\times{\bf B} +
\frac{1}{R_F^A}\nabla^2 {\bf v}~,\\
\label{induc}
\frac{\partial{\bf B}}{\partial t}&=& \nabla\times(\textbf{v}\times{\bf B})
-d_H\nabla\times((\nabla\times{\bf B})\times{\bf B})~,\\
\label{incompv}
\nabla\cdot {\bf v}&=& 0~, \\
\label{incompb}
\nabla\cdot {\bf B}&=& 0~,
\end{eqnarray}
where $R_F^A=v_A L/\nu$, with $\nu$ the kinematic viscosity, is an
effective fluid Reynolds number with the plasma speed replaced by the
Alfv\'en speed $v_A$.
Hereafter, $R_F^A$ is referred to as the fluid Reynolds number for
convenience. The transformation of the dimensional quantities
(expressed in cgs units)
into the corresponding non-dimensional quantities,
\begin{equation}
\label{norm}
{\bf{B}}\longrightarrow \frac{{\bf{B}}}{B_0},
\quad{\bf{x}}\longrightarrow \frac{\bf{x}}{L_0},
\quad{\bf{v}}\longrightarrow \frac{\bf{v}}{v_A},
\quad t \longrightarrow \frac{t}{\tau_A},
\quad p \longrightarrow \frac{p}{\rho_0 {v_{A}}^2}~,
\end{equation}
assumes arbitrary $B_0$ and $L_0$ while the Alfv\'en speed $v_A \equiv
B_0/\sqrt{4\pi\rho_0}$. Here $\rho_0$ is a constant mass density,
and $d_H$ is the Hall parameter. In the limit of $d_H=0$,
(\ref{momtransf})-(\ref{incompb}) reduce to the MHD equations
\citep{avijeet2018}.
The governing equations (\ref{momtransf})-(\ref{incompb})
are numerically integrated using EULAG-MHD---a magnetohydrodynamic
extension \citep{PiotrJCP} of the established Eulerian/Lagrangian
comprehensive fluid solver EULAG \citep{Prusa08} predominantly used
in atmospheric research. The EULAG solvers are based on the
spatio-temporally second-order-accurate nonoscillatory forward-in-time
advection scheme MPDATA (for {\it multidimensional positive definite
advection transport algorithm}) \citep{Piotrsingle}. Importantly,
unique to MPDATA is its widely
documented dissipative property, which mimics the action of explicit
subgrid-scale turbulence models wherever the concerned advective
field is under-resolved; this property is known as implicit
large-eddy simulation (ILES) \citep{Grinstein07}. In effect, the
magnetic reconnections arising in our simulations dissipate the
under-resolved magnetic field along with the other advective
field variables and restore flux freezing. Being intermittent and
local, these reconnections successfully mimic physical reconnections.
\subsection{Numerical Setup}
The simulations are carried out by mapping the physical domain of $256\times128\times128$ pixels onto a computational domain of $x\in\{-1, 1\}$, $y\in\{-0.5,0.5\}$, $z\in\{-0.5,0.5\}$ in a Cartesian coordinate system. The dimensionless spatial step sizes are $\Delta x=\Delta y=\Delta z \approx 0.0078$. The dimensionless time step is $\Delta t=5\times 10^{-4}$, set to resolve the whistler speed---the fastest
speed in incompressible HMHD. The rationale is briefly presented in Appendix \ref{appnd}.
The initial state is motionless ($\textbf{v}=0$), and the initial
magnetic field is provided by the non-FFF extrapolation. The non-zero
Lorentz force associated with the extrapolated field pushes the
magnetofluid to initiate the dynamics. Since the maximal variation
of the magnetic flux through the photosphere is only 2.28$\%$ of its
initial value during the flare (not shown), $B_z$ at the
bottom boundary (at $z=0$) is kept fixed throughout the simulation,
while all other boundaries are
kept open. For the velocity, all boundaries are set open. The mass density is set to $\rho_0=1$.
The fluid Reynolds number is set to $500$, which is roughly two orders of magnitude smaller than its coronal value of $\approx 25000$ (calculated using the kinematic viscosity $\nu=4\times 10^9 ~\rm m^2s^{-1}$ of the solar corona \citep{Aschwanden}).
Without any loss of generality, the reduction in $R_F^A$ can be envisaged
to cause a reduction in the computed Alfv\'en speed, $v_A|_\text{computed} \approx 0.02\times v_A|_\text{corona}$, where $L$ for the computational and coronal length scales is set to 71 Mm and 100 Mm, respectively. The diminished Alfv\'en speed reduces the requirement on computational resources and also relates the simulation time to the observation time. The results presented herein pertain to a run of 1200$\Delta t$, which, along with the normalizing $\tau_A\approx 3.55\times 10^3$ s, roughly corresponds to an observation time of $\approx$ 35 minutes. For ease of reference in comparison with observations, we present the time in units of 0.005$\tau_A$ (i.e., 17.75 s) in the discussions of the figures in subsequent sections.
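The mapping between simulation time and observation time quoted above can be verified arithmetically (values as stated in the text):

```python
# Simulated duration vs. observed duration (values quoted in the text).
tau_A = 3.55e3   # Alfven time in seconds (71 Mm at the reduced Alfven speed)
dt = 5e-4        # dimensionless time step, in units of tau_A
n_steps = 1200   # number of time steps in the reported run

t_obs_minutes = n_steps * dt * tau_A / 60.0
time_unit_s = 0.005 * tau_A
print(f"{t_obs_minutes:.1f} min")   # ~35.5 min, the quoted ~35 minutes
print(f"{time_unit_s:.2f} s")       # 17.75 s per unit of 0.005 tau_A
```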
Admittedly, a coronal plasma with such a reduced Reynolds number is not a reality; nevertheless, the idealization is inconsequential for the comparison of the MHD and HMHD evolutions pursued here, although we believe the above rationale merits further contemplation. The reduced $R_F^A$ does not affect the reconnections or their
consequences, but slows down the dynamics between two such events and, importantly, reduces the computational cost, making data-based simulations realizable even with reasonable computing resources.
A recent work by \citet{JiangNat} used a homologous approach toward simulating a realistic and self-consistent flaring region.
In the present simulations, all
parameters are identical for the MHD and HMHD runs
except for $d_H$, set respectively to 0 and 0.004.
The value 0.004 is motivated by recognizing that the ILES dissipation
models intermittent magnetic reconnections at ${\mathcal O}(\parallel\Delta{\bf x}\parallel)$ length scales.
Consistent with the thesis put forward in the Introduction, we specify
an appreciable Hall coefficient $d_H = 0.5\,\Delta z/L \approx
0.004$, where $L=1$ is the smallest extent of the computational volume
and $\Delta y= \Delta z \approx 0.0078$ are the dissipation scales by
virtue of the ILES property of the model. Correspondingly, the value
is also at the lower bound of the pixel- or scale-order
approximation and, in particular, an order of magnitude smaller
than its coronal value valid at the actual dissipation scale. An
important practical benefit of this selection is the optimization
of the computational cost while keeping the magnetic field line dynamics
tractable. Importantly, with the dissipation and Hall scales being tied, an increased current density at the dissipation scale introduces additional slippage of field lines in HMHD over MHD (due to the Hall term) and may be responsible for the more effective and faster reconnections found in the Hall simulation reported below.
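The quoted Hall coefficient follows from the grid spacing; a minimal arithmetic check (values from the numerical set-up above):

```python
# Hall parameter tied to the ILES dissipation scale: d_H = 0.5 * dz / L,
# with L = 1 (smallest extent of the computational volume) and dz from
# the 128-point grid spanning a unit interval.
L = 1.0
dz = 1.0 / 128   # ~0.0078, the dimensionless grid spacing along z

d_H = 0.5 * dz / L
print(d_H)  # ~0.0039, i.e. the quoted ~0.004
```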
\subsection{Comparison of the HMHD and MHD simulations}
The simulated HMHD and MHD dynamics leading to the flare show
unambiguous differences. This section documents these differences
by methodically comparing the simulated evolution of the magnetic
structures and topologies in AR NOAA 12734---namely,
the flux rope, QSLs, and null points---identified in the extrapolated
initial data in the regions R1-R4.
\subsubsection{Region R1}
The dynamics of region R1 are by far the most complex among the
four selected regions. To facilitate future reference, as well as to outline the
organization of the discussion that follows, Table~\ref{tab:r1} provides a brief
summary of our findings---in the spirit of theses to be proven by the simulation results.
\begin{table}
\caption{Salient features of the magnetic field line dynamics in R1}
\label{tab:r1}
\begin{tabular}{ |p{3cm}|p{5.5cm}|p{5.5cm}| }
\hline
Magnetic field line structure & HMHD & MHD \\ [4ex]
\hline
QSL1 & Fast reconnection followed by a significant rise of loops,
eventually reconnecting higher in the corona. &Slow reconnection
followed by a limited rise of loops. \\ [6ex]
\hline
QSL2 & Fast reconnection causing the magnetic field lines to entirely
disconnect from the polarity P2. & Due to slow reconnection magnetic
field lines remain connected to P2. \\ [6ex]
\hline
Flux rope &Fast slipping reconnection of the flux-rope foot points,
followed by the expansion and rise of the rope envelope. & Slow
slipping reconnection and rise of the flux-rope envelope; the
envelope does not reach the QSL1. \\ [6ex]
\hline
\end{tabular}
\end{table}
\bigskip
The global dynamics of the magnetic field lines in region R1 are
illustrated in Figure~\ref{fullR1}; consult
Figure~\ref{region1} for the initial condition and terminology. The
snapshots from the HMHD and MHD simulations are shown in panels
(a)-(d) and (e)-(f), respectively. In panels (a) and (b), corresponding
to $t=19$ and $t=46$, the foot points of the Set II magnetic field
lines (near P2, marked maroon) exhibit slipping reconnection along
high values of the squashing factor $Q$, indicated by black arrows.
Subsequently, between $t=80$ and $t=81$ in panels (c) and (d), the
Set II magnetic field lines rise in the corona and reconnect with
the Set I magnetic field lines to change connectivity. The MHD counterpart
of the slipping reconnection in panels (e) and (f) corresponds to the
Set II magnetic field lines between $t=19$ and $t=113$. It lags behind
the HMHD evolution, implying slower dynamics. Furthermore, unlike in
the HMHD case, the Set II magnetic field lines do not reach up
to the Set I magnetic field lines constituting QSL1 and hence do
not reconnect. A more informative visualization of the highlighted
dynamics is supplemented in an online animation. The decay index, calculated at each time instant for both simulations, is found to be less than 1.5 above the flux rope, indicating an absence of the torus instability \citep{Torok}.
For more detail,
Figures~\ref{R1QSL} and \ref{ropeHMHD-MHD} illustrate the evolution of
QSL2 and the flux rope separately.
Panels (a)-(b) and (c)-(d) of Figure~\ref{R1QSL} show,
respectively, instants from the HMHD and MHD simulations of
QSL2 between P1, P2, and N1. The HMHD instants show that
magnetic field lines anchored between P2
and N1 at $t=10$ have moved to P1 by around $t=102$, marked by black
arrows in both panels. The magnetic field lines anchored at P2
moved to P1 along the high $Q$ values---signifying slipping
reconnection. The MHD instants in panels (c)-(d)
show the connectivity changes of the violet and white
magnetic field lines. The white field line initially connected
P1 and N1, whereas the violet field line connected P2 and N1.
As a result of reconnection along the QSL, the white field line changed
its connectivity from P1 to P2 and the violet field line from P2 to
P1 (marked by black arrows). Notably, in
contrast to the HMHD evolution, not all magnetic field lines initially
anchored in P2 change their connectivity from P2 to P1 during
the MHD evolution, indicating the slower dynamics.
The flux rope has been introduced in panels (c) and
(d) of Figure~\ref{region1}, respectively, below the QSL2 and in
enlargement. Its HMHD and MHD evolutions along with the twists on
three different vertical cross sections are shown in panels (a)-(f)
and (g)-(i) of Figure \ref{ropeHMHD-MHD}, respectively. Magnetic
field lines constituting the rope rise substantially higher during
the HMHD evolution as a result of slipping reconnection along the high $Q$
in panels (c)-(f). In panel (c) at $t=32$, the foot points of the
rope that are anchored on the right side (marked by black arrow) change
their connectivity from one high $Q$ region to another in panel (d)
at $t=33$; i.e., the foot points on the right have moved to the left
side (marked by black arrow). Afterwards, the magnetic field lines rise because of the
continuous slipping reconnection, as evidenced in panels (e) to (f)
and the supplemented animation. Comparing panels (a) with (g) at
$t=10$ and (c) with (h) at $t=32$, we note that the twist
value $T_w$ is higher in the HMHD simulation. Panels
(h)-(i) highlight the displaced foot points of the flux rope due to slipping reconnection
at $t=32$ and $t=120$ (cf. black arrow). The rope is preserved throughout the
HMHD and MHD simulations.
The rise and expansion of the flux-rope envelope
owing to slipping reconnection is remarkable in the
HMHD simulation. \citet{dudik} have already shown such a flux-rope
reconnection along QSL in a J-shaped current region,
with slipping reconnection causing the flux rope to form a sigmoid
(S-shaped hot channel observed in EUV images of SDO/AIA) followed
by its rise and expansion. Further insight is gained by overlaying
the flux rope evolution shown in Figure \ref{ropeHMHD-MHD} with direct volume rendering of
$|{\bf J}|/|{\bf B}|$ (Figures \ref{ropecs} and \ref{ropecsmhd}) as a measure of magnetic field gradient for the HMHD and MHD simulations.
In the HMHD case, large values of $|{\bf J}|/|{\bf B}|>475$ appear inside the rope
(panels (a) to (c)) and at the foot points on the left of the rope (panels (d) to (e)).
The development of the large $|{\bf J}|/|{\bf B}|$ is indicative of reconnection
within the rope. In contrast, the MHD simulation lacks such high values of $|{\bf J}|/|{\bf B}|$
in the same time span (panels (a)-(b)) and the field lines show no slippage---agreeing with the proposal that large currents magnify the Hall term, resulting in more effective slippage of field lines.
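For reference, the diagnostic $|{\bf J}|/|{\bf B}|$ used throughout this comparison is straightforward to evaluate on a gridded field; the sketch below assumes a uniform grid and drops $\mu_0$, so that ${\bf J}\sim\nabla\times{\bf B}$ (the array names and the analytic test field are ours, not the simulation data):

```python
import numpy as np

def j_over_b(Bx, By, Bz, dx):
    """|curl B| / |B| on a uniform grid (mu_0 omitted); arrays indexed [x, y, z]."""
    dBx_dy = np.gradient(Bx, dx, axis=1); dBx_dz = np.gradient(Bx, dx, axis=2)
    dBy_dx = np.gradient(By, dx, axis=0); dBy_dz = np.gradient(By, dx, axis=2)
    dBz_dx = np.gradient(Bz, dx, axis=0); dBz_dy = np.gradient(Bz, dx, axis=1)
    J = np.sqrt((dBz_dy - dBy_dz)**2 + (dBx_dz - dBz_dx)**2 + (dBy_dx - dBx_dy)**2)
    B = np.sqrt(Bx**2 + By**2 + Bz**2)
    return J / np.maximum(B, 1e-30)   # guard against null points

# sanity check on the linear field B = (-y, x, 1), whose curl is (0, 0, 2)
x = np.linspace(0.0, 1.0, 5)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
ratio = j_over_b(-Y, X, np.ones_like(X), x[1] - x[0])
```

At the origin $|{\bf B}|=1$, so the ratio there should be exactly 2 for this test field.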
\subsubsection{Region R2}
To compare the simulated magnetic field line dynamics in region
R2 with the observed tip of the W-shaped flare ribbon
B2 (Figure \ref{extrapolation} (a)) during the HMHD and MHD evolution,
we present the instants from both
simulations at $t=70$ in panels (a) and (b) of Figure \ref{R2comp},
respectively. Importantly, the lower spine remains anchored to the bottom boundary during the HMHD simulation (evident from the supplemented animation along with Figure \ref{R2comp}). Further, Figure \ref{R2comp-CS} shows the evolution of the lower spine along with $|\textbf{J}|/|\textbf{B}|$ on the bottom boundary for the HMHD (panels (a) to (d)) and MHD (panels (e) to (h)) cases. In the HMHD case, noteworthy is the slipping motion of the lower spine (marked by the black arrows) tracing the $|\textbf{J}|/|\textbf{B}|>350$ regions on the bottom boundary (panels (a) to (b)). In the MHD case, by contrast, such high values of $|\textbf{J}|/|\textbf{B}|$ are absent on the bottom boundary---suggesting that the slippage of the field lines there is less effective than in the HMHD case. The finding is in agreement with the idea of enhanced slippage of field lines due to high current densities, as conceptualized in the introduction.
The anchored lower spine provides a path for the plasma to flow downward
to the brightening segment B2. In the actual corona, such flows result in
flare brightening \citep{Benz}.
In contrast, the lower
spine gets completely disconnected from the bottom boundary (Figure
\ref{R2comp} (b)) in the MHD simulation, hence failing to explain
the tip of the W-shaped flare ribbon in B2. The
anchored lower spine in the HMHD simulation is caused by a complex
series of magnetic field lines reconnections at the 3D null and
along the QSLs in R2, as depicted in the animation.
\subsubsection{Region R3}
The HMHD and MHD simulations of the magnetic field line dynamics around
the null-line are shown in Figures~\ref{R3HMHD} and \ref{R3MHD},
respectively. Figure~\ref{R3HMHD} shows the blue magnetic field
lines before and after the reconnections (indicated
by black arrows) between $t=4$ and $5$ (panels (a)-(b)), $t=52$ and $53$
(panels (c)-(d)), and $t=102$ and $103$ (panels (e)-(f)) during the HMHD
simulation. Figure \ref{R3MHD} shows the same blue
magnetic field lines before and after the reconnections
(indicated by black arrows) between $t=12$ and $13$ (panels (a)-(b)),
$t=59$ and $60$ (panels (c)-(d)), and $t=114$ and $115$ (panels (e)-(f))
during the MHD simulation. Comparison of panels (a)-(f) of
Figure \ref{R3HMHD} with the same panels of Figure \ref{R3MHD}
reveals earlier reconnections of the blue magnetic
field lines in the HMHD simulation. In both figures, green
velocity vectors on the right represent the local plasma flow. They
get aligned downward along the foot points of the fan magnetic field
lines, as reconnection progresses. Consequently, the plasma flows
downward and impacts the denser and cooler chromosphere to give
rise to the brightening in B3. The velocity vectors
pointing upward represent a flow toward the null-line. The plasma
flow pattern in R3 is the same in the HMHD and in
the MHD simulation. The vertical $yz-$plane passing through the cross section
of the null-line surface (also shown in Figure \ref{R2R3R4exp} (d))
in all the panels of Figures \ref{R3HMHD} and \ref{R3MHD} shows the
variation of $n$ with time. It is evident that the null is not
destroyed throughout the HMHD and MHD evolution. Structural changes in the field lines caused by reconnection are near-identical in the two simulations, indicating the inefficacy of the Hall term. This inefficacy is justifiable, as $|\textbf{J}|/|\textbf{B}|$ remains small ($\approx 10$, not shown) in both the HMHD and MHD evolution.
\subsubsection{Region R4} The development of the circular motion of magnetic field lines in region R4 during the HMHD simulation is depicted in Figure \ref{lftcrclrmotion}. It shows the global dynamics of magnetic field lines in R4, and the inset images show a zoomed view highlighting their circular motion. The bottom boundary shows $B_z$ in the main figure, while the inset images show the $z$-component of the plasma flow at the bottom boundary (on the $xy$-plane). The red vectors represent the direction and magnitude of the plasma flow in all panels of Figure \ref{lftcrclrmotion}, where the anticlockwise pattern of the flow is evident. The global dynamics highlight reconnection of the loop anchored between the positive and negative polarities at $t=60$, as it gets disconnected from the bottom boundary in panels (c)-(d) of Figure \ref{lftcrclrmotion}. The animation accompanying Figure \ref{lftcrclrmotion} highlights an anticlockwise motion of foot points in the
same direction as the plasma flow, indicating field lines to be frozen in the fluid.
The trapped plasma may cause the rotating structure B4 seen in the observations (cf. Figure \ref{extrapolation} (a)). However, no such motion is present during the MHD evolution of the same magnetic field lines (not shown). An interesting feature noted in the animation is the clockwise slippage of field lines after the initial anticlockwise rotation. Further analysis of R4 using the direct volume rendering of $|\textbf{J}|/|\textbf{B}|$ is presented in Figure \ref{lftcrclrmotion-SV}. The figure shows that $|\textbf{J}|/|\textbf{B}|$ attains high values ($\ge225$, enclosed by the blue rectangles) within the rotating field lines from $t\approx86$ onward. This suggests that the slippage of field lines is, once again, related to the high magnetic field gradients.
\par For completeness, we present snapshots of the overall magnetic field line morphology, including the magnetic structures and topology of regions R1, R2, R3, and R4 together, overlaid with 304 {\AA} and 171 {\AA} images from the HMHD and MHD simulations. Figure \ref{Tv304171} (a) shows an instant (at $t=75$) from the HMHD simulation where the topologies and magnetic structures in R1, R2, R3, and R4, plus the additionally drawn locust color magnetic field lines between R2 and R3, are shown collectively. It shows an excellent match of the magnetic field lines in R2 with the observed tip of the W-shaped flare ribbon at B2, pointed out by the pink arrow in panel (a). Foot points of the spine-fan geometry around the 3D null orient themselves in the same fashion as the observed tip of the W-shaped flare ribbon at B2, as seen in the 304 {\AA} channel of SDO/AIA. The rising loops indicated by the white arrow correspond to the same evolution as shown in Figure \ref{fullR1}. The overall magnetic field line morphology at the same time ($t=75$) during the MHD simulation, overlaid with the 304 {\AA} image, is given in Figure \ref{Tv304171} (b). Importantly, unlike the HMHD simulation, the MHD simulation does not account for the anchored lower spine and fan magnetic field lines of the 3D null at the center of B2. Also, the significant rise of the overlying maroon magnetic field lines and the circular motion of the material in B4 are captured in the HMHD simulation only. In panel (c), the magnetic field lines overlaid with the 171 {\AA} image show that the field lines (higher up in the solar atmosphere) resemble the post-flare loops during the HMHD evolution. Overall, the HMHD evolution is in better agreement with the observations than the MHD evolution.
\section{Summary and Discussion}
\label{summary}
The paper compares data-based HMHD and MHD simulations using the flaring Active Region NOAA 12734 as a test bed.
The importance of the HMHD stems from the realization that the Hall term in the induction equation cannot be neglected in the presence of magnetic reconnection---the underlying cause of solar flares.
The event selected for the comparison is the C1.3 class flare of March 08, 2019, around 03:19 UT. Although the event has been analyzed and reported in the literature, it is further explored here using multi-wavelength observations from SDO/AIA. The identified important features are:
an elongated extreme ultraviolet (EUV) counterpart of the eruption on the western side of the AR, a W-shaped flare ribbon and circular motion of cool chromospheric material on the eastern part.
The magnetic field line dynamics near these features are utilized to compare the simulations.
Notably, the simulations
idealize the corona to have an Alfv\'en speed which is two orders of
magnitude smaller than its
typical value. Consistent with the general understanding, the Hall parameter is selected to tie the Hall dynamics to the dissipation scale $\mathcal{O}(\Delta \textbf{x})$
in the spirit of the ILES carried out in the paper. The magnetic reconnection here is
associated with the slippage of magnetic field lines from the plasma parcels, effective at the dissipation scale due to local enhancement of the magnetic field gradient. The same enhancement also amplifies the Hall contribution,
presumably enhancing the slippage and thereby making the reconnection faster and more effective than in the MHD.
The coronal magnetic field is constructed by extrapolating the photospheric vector magnetic field obtained from the SDO/HMI observations employing the non-FFF technique \citep{Hu2010}. The concentrated distribution of the Lorentz force on the bottom boundary and its decrease with the height justify the use of non-FFF extrapolation for the solar corona. The initial non-zero Lorentz force is also crucial in generating self-consistent flows that initiate the dynamics and cause the magnetic reconnections.
Analyses of the extrapolated magnetic field reveal several magnetic structures and topologies of interest: a flux rope on the western part at flaring location, a 3D null point along with the fan-spine configuration at the centre, a ``Fish-bone-like structure" surrounding the null-line on the eastern part of the AR. All of these structures are found to be co-spatial with the observed flare ribbon brightening.
\par The HMHD simulation shows faster slipping reconnection of the flux rope foot points and overlying magnetic field lines (constituting QSLs above the flux rope) at the flaring location. Consequently, the overlying magnetic field lines rise, eventually reaching higher up in the corona and reconnecting to provide a path for plasma to eject out. The finding is in agreement with the observed elongated EUV counterpart of the eruption on western part of the AR. Contrarily, such significant rise of the flux rope and overlying field lines to subsequently reconnect higher up in the corona is absent in the MHD simulation---signifying the reconnection to be slower compared to the HMHD. Intriguingly, rise and expansion of the flux rope and overlying field lines owing to slipping reconnection on QSLs has also been modelled and observed in an earlier work by \citet{dudik}.
These are typical features of the ``standard solar flare model in 3D'', which allows for a consistent explanation of events that are
not causally connected \citep{dudik}. It also advocates that null points and true separatrices are not required for eruptive flares to occur---concurring with the results of this work.
The HMHD evolution of the fan-spine configuration surrounding the 3D null point is in better agreement with the tip of the W-shaped flare ribbon at the centre of the AR. The lower spine and fan magnetic field lines remain anchored to the bottom boundary throughout the evolution, which can account for the plasma flowing downward after the reconnection and causing the brightening. In the MHD simulation, by contrast, the lower spine gets disconnected and cannot
account for the brightening. The reconnection dynamics around the null-line and the corresponding plasma flow direction are the same in the HMHD and MHD simulations and agree with the observed brightening. Nevertheless, reconnection occurs earlier in the HMHD. The HMHD evolution captures an anticlockwise circular motion of magnetic field lines in the eastern part of the AR, which is co-spatial with the location of the rotating chromospheric material. No such motion was found in the MHD simulation. Importantly, the simulations explicitly associate the
generation of large magnetic field gradients with the HMHD rather than the MHD, resulting in faster and more efficient field line slippage because of the enhanced Hall term.
Overall, the results documented in the paper show that the HMHD explains the flare brightening better than the MHD, prioritizing the requirement to include HMHD in future state-of-the-art data-based numerical simulations.
\section{Acknowledgement}
The simulations are performed using the 100TF cluster Vikram-100 at Physical Research Laboratory, India. We wish to acknowledge the visualization software VAPOR (\url{www.vapor.ucar.edu}), for generating relevant graphics. Q.H. and A.P. acknowledge partial support of NASA grants 80NSSC17K0016, 80NSSC21K1671, LWS 80NSSC21K0003 and NSF awards AGS-1650854 and AGS-1954503. This research was also supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262622, as well as through the Synergy Grant number 810218 (ERC-2018-SyG) of the European Research Council.
\section{Introduction}
In this paper we present and analyze a high-order time discontinuous Galerkin finite element method for the time integration of second order differential problems, such as those stemming from, e.g., elastic wave propagation phenomena.
Classical approaches for the time integration of second order differential systems employ implicit and explicit finite differences, leap-frog, Runge-Kutta or Newmark schemes, see e.g. \cite{Ve07,Bu08,QuSaSa07} for a detailed review. In computational seismology, explicit time integration schemes are nowadays preferred to implicit ones, due to their low computational cost and ease of implementation. Indeed, although being unconditionally stable, implicit methods are typically computationally expensive. The main drawback of explicit methods is that they are conditionally stable and the choice of the time step imposed by the Courant-Friedrichs-Lewy (CFL) condition can sometimes be a great limitation.
To overcome this limitation one can employ local time stepping (LTS) algorithms \cite{GrMi13,DiGr09,CoFoJo03,Dumbser2007arbitrary} for which the CFL condition is imposed element-wise, leading to an optimal choice of the time step. The unique drawback of this approach is the additional synchronization process that one needs to take into account for a correct propagation of the wave field from one element to the other.
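The element-wise CFL bound that LTS methods exploit can be sketched in a few lines (the element sizes, wave speeds, and CFL constant below are hypothetical):

```python
import numpy as np

# stability requires dt <= C * h_K / c_K in every element K;
# a fully explicit scheme must adopt the most restrictive value
h = np.array([0.10, 0.02, 0.08])   # element diameters
c = np.array([3.0, 3.0, 6.0])      # local wave speeds
C = 0.5                            # scheme-dependent CFL constant

dt_local = C * h / c               # admissible element-wise steps (what LTS uses)
dt_global = dt_local.min()         # single global step forced by the smallest element
```

Here a single small element ($h=0.02$) dictates the global step; LTS removes exactly this limitation by letting each element advance with its own `dt_local`.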
In this work, we present an implicit time integration method based on a discontinuous Galerkin (DG) approach. Originally, DG methods \cite{ReedHill73,Lesaint74} have been developed to approximate \textit{in space} hyperbolic problems \cite{ReedHill73}, and then generalized to elliptic and parabolic equations \cite{wheeler1978elliptic,arnold1982interior,HoScSu00,CockKarnShu00,
riviere2008discontinuous,HestWar,DiPiEr}. We refer the reader to \cite{riviere2003discontinuous,Grote06} for the application of DG methods to scalar wave equations and to \cite{Dumbser2007arbitrary,WiSt2010,antonietti2012non,
ferroni2016dispersion,antonietti2016stability,AnMa2018,
Antonietti_etal2018,mazzieri2013speed,AnMaMi20,DeGl15} for the elastodynamics problem.
The DG approach has also been used to approximate initial-value problems, where the DG paradigm shows some advantages with respect to other implicit schemes such as Johnson's method, see e.g. \cite{JOHNSON1993,ADJERID2011}. Indeed, since the information follows the positive direction of time, the solution on the time-slab $[t_n,t_{n+1}]$ depends only on the solution at the time instant $t_n^-$.
Employing DG methods in both space and time dimensions leads to a fully DG space-time formulation, see e.g. \cite{Delfour81,Vegt2006,WeGeSc2001,AnMaMi20}.
More generally, space-time methods have been largely employed for hyperbolic problems. Indeed, high order approximations in both space and time are simple to obtain, achieving spectral convergence of the space-time error through $p$-refinement. In addition, stability can be achieved with local CFL conditions, as in \cite{MoRi05}, increasing computational efficiency.
Space-time methods can be divided according to which type of space-time partition they employ. In structured techniques \cite{CangianiGeorgoulisHouston_2014,Tezduyar06}, the space-time grid is the Cartesian product of a spatial mesh and a time partition. Examples of applications to second order hyperbolic problems can be found in \cite{StZa17,ErWi19,BaMoPeSc20}. Unstructured techniques \cite{Hughes88,Idesman07} employ grids generated by considering time as an additional dimension. See \cite{Yin00,AbPeHa06,DoFiWi16} for examples of applications to first order hyperbolic problems. Unstructured methods may have better properties; however, they suffer from the difficulty of generating the mesh, especially for three-dimensional problems.
Among unstructured methods, we mention Trefftz techniques \cite{KrMo16,BaGeLi17,BaCaDiSh18}, in which the numerical solution is looked for in the Trefftz space, and the tent-pitching paradigm \cite{GoScWi17}, in which the space-time elements are progressively built on top of each other in order to grant stability of the numerical scheme. Recently, in \cite{MoPe18,PeScStWi20} a combination of Trefftz and tent-pitching techniques has been proposed with application to first order hyperbolic problems.
Finally, a typical approach for second order differential equations consists in reformulating them as a system of first order hyperbolic equations. Thus, the velocity is considered as an additional unknown, which results in doubling the dimension of the final linear system, cf. \cite{Delfour81,Hughes88,FRENCH1993,JOHNSON1993,ThHe2005}.
The motivation for this work is to overcome the limitations of the space-time DG method presented in \cite{AnMaMi20} for elastodynamics problems. This method integrates the second order (in time) differential problem stemming from the spatial discretization. The resulting stiffness matrix is ill-conditioned making the use of iterative solvers quite difficult. Hence, direct methods are used forcing to store the stiffness matrix and greatly reducing the range of problems affordable by that method. Here, we propose to change the way the time integration is obtained, resulting in a well-conditioned system matrix and making iterative methods employable and complex 3D problems solvable.
In this work, we present a high order discontinuous Galerkin method for the time integration of systems of second-order differential equations stemming from the space discretization of the visco-elastodynamics problem. The differential (in time) problem is first reformulated as a first order system; then, by imposing only a weak continuity of the solution across time slabs, we derive a discontinuous Galerkin method. We show the well-posedness of the proposed method through the definition of a suitable energy norm, and we prove stability and \emph{a priori} error estimates. The obtained scheme is implicit, unconditionally stable and super-optimal in terms of accuracy with respect to the integration time step. In addition, the solution strategy adopted for the associated algebraic linear system reduces the complexity and computational cost of the solution, making three dimensional problems (in space) affordable.
The paper is organized as follows. In Section \ref{Sc:Method} we formulate the problem, present its numerical discretization and show that it is well-posed. The stability and convergence properties of the method are discussed in Section \ref{Sc:Convergence}, where we present \textit{a priori} estimates in a suitable norm. In Section \ref{Sc:AlgebraicFormulation}, the equations are rewritten into the corresponding algebraic linear system and a suitable solution strategy is shown. Finally, in Section \ref{Sc:NumericalResults}, the method is validated through several numerical experiments both in two and three dimensions.
Throughout the paper, we denote by $||\aa||$ the Euclidean norm of a vector $\aa \in \mathbb{R}^d$, $d\ge 1$ and by $||A||_{\infty} = \max_{i=1,\dots,m}\sum_{j=1}^n |a_{ij}|$, the $\ell^{\infty}$-norm of a matrix $A\in\mathbb{R}^{m\times n}$, $m,n\ge1$. For a given $I\subset\mathbb{R}$ and $v:I\rightarrow\mathbb{R}$ we denote by $L^p(I)$ and $H^p(I)$, $p\in\mathbb{N}_0$, the classical Lebesgue and Hilbert spaces, respectively, and endow them with the usual norms, see \cite{AdamsFournier2003}. Finally, we indicate the Lebesgue and Hilbert spaces for vector-valued functions as $\bm{L}^p(I) = [L^p(I)]^d$ and $\bm{H}^p(I) = [H^p(I)]^d$, $d\ge1$, respectively.
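As a quick numerical check of the $\ell^{\infty}$-norm just defined (the matrix entries are arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])

# ||A||_inf = max_i sum_j |a_ij|  (maximum absolute row sum)
norm_inf = np.abs(A).sum(axis=1).max()
```

For this matrix the row sums are $3$ and $3.5$, so the norm equals $3.5$, matching NumPy's built-in `ord=np.inf` norm.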
\section{Discontinuous Galerkin approximation of a second-order initial value problem}
\label{Sc:Method}
For $T>0$, we consider the following model problem \cite{kroopnick}: find $\bm{u}(t) \in\bm{H}^2(0,T]$ such that
\begin{equation}
\label{Eq:SecondOrderEquation}
\begin{cases}
P\ddot{\bm{u}}(t) + L\dot{\bm{u}}(t)+K\bm{u}(t) = \bm{f}(t) \qquad \forall\, t \in (0,T], \\
\bm{u}(0) = \hat{\bm{u}}_0, \\
\dot{\bm{u}}(0) = \hat{\bm{u}}_1,
\end{cases}
\end{equation}
where $P,L,K \in \mathbb{R}^{d\times d}$, $d\geq 1$ are symmetric, positive definite matrices, $\hat{\bm{u}}_0, \hat{\bm{u}}_1 \in \mathbb{R}^d$ and $\bm{f} \in \bm{L}^2(0,T]$. Then, we introduce a variable $\bm{w}:(0,T]\rightarrow\mathbb{R}^{d}$ that is the first derivative of $\bm{u}$, i.e. $\bm{w}(t) = \dot{\bm{u}}(t)$, and reformulate problem \eqref{Eq:SecondOrderEquation} as a system of first order differential equations:
\begin{equation}
\label{Eq:FirstOrderSystem1}
\begin{cases}
K\dot{\bm{u}}(t) - K\bm{w}(t) = \boldsymbol{0} &\forall\, t\in(0,T], \\
P\dot{\bm{w}}(t) +L\bm{w}(t) + K\bm{u}(t) = \bm{f}(t) &\forall\, t\in(0,T], \\
\bm{u}(0) = \hat{\bm{u}}_0, \\
\bm{w}(0) = \hat{\bm{u}}_1.
\end{cases}
\end{equation}
Note that, since $K$ is a positive definite matrix, the first equation in \eqref{Eq:FirstOrderSystem1} is consistent with the definition of $\bm{w}$. By defining $\bm{z} = [\bm{u},\bm{w}]^T\in\mathbb{R}^{2d}$, $\bm{F}=[\bm{0},\bm{f}]^T\in\mathbb{R}^{2d}$, $\bm{z}_0 = [\hat{\bm{u}}_0,\hat{\bm{u}}_1]^T\in\mathbb{R}^{2d}$ and
\begin{equation}\label{def:KA}
\widetilde{K} = \begin{bmatrix}
K & 0 \\
0 & P
\end{bmatrix}\in\mathbb{R}^{2d\times2d}, \quad
A = \begin{bmatrix}
0 & -K \\
K & L
\end{bmatrix}\in\mathbb{R}^{2d\times2d},
\end{equation}
we can write \eqref{Eq:FirstOrderSystem1} as
\begin{equation}
\label{Eq:FirstOrderSystem2}
\begin{cases}
\widetilde{K}\dot{\bm{z}}(t) + A\bm{z}(t) = \bm{F}(t) & \forall\, t\in(0,T], \\
\bm{z}(0) = \bm{z}_0.
\end{cases}
\end{equation}
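The equivalence between \eqref{Eq:SecondOrderEquation} and \eqref{Eq:FirstOrderSystem2} can be verified numerically; the sketch below assembles $\widetilde{K}$ and $A$ as in \eqref{def:KA} and checks that $\dot{\bm{z}}=\widetilde{K}^{-1}(\bm{F}-A\bm{z})$ reproduces $\dot{\bm{u}}=\bm{w}$ and $P\dot{\bm{w}}+L\bm{w}+K\bm{u}=\bm{f}$ (the SPD matrices and data are arbitrary examples):

```python
import numpy as np

def first_order_blocks(P, L, K):
    """Assemble Ktil = [[K, 0], [0, P]] and A = [[0, -K], [K, L]]."""
    d = K.shape[0]
    Z = np.zeros((d, d))
    Ktil = np.block([[K, Z], [Z, P]])
    A = np.block([[Z, -K], [K, L]])
    return Ktil, A

# arbitrary SPD example matrices and data
rng = np.random.default_rng(0)
d = 3
P, L, K = np.eye(d), 0.5 * np.eye(d), np.diag([1.0, 2.0, 3.0])
u, w, f = rng.standard_normal((3, d))

Ktil, A = first_order_blocks(P, L, K)
z = np.concatenate([u, w])
F = np.concatenate([np.zeros(d), f])

zdot = np.linalg.solve(Ktil, F - A @ z)   # Ktil z' + A z = F  =>  z' = [u', w']
udot, wdot = zdot[:d], zdot[d:]
```

The first block row gives $K\dot{\bm{u}}=K\bm{w}$, i.e. $\dot{\bm{u}}=\bm{w}$, while the second recovers the original second order equation.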
To integrate in time system \eqref{Eq:FirstOrderSystem2}, we first partition the interval $I=(0,T]$ into $N$ time-slabs $I_n = (t_{n-1},t_n]$ having length $\Delta t_n = t_n-t_{n-1}$, for $n=1,\dots,N$ with $t_0 = 0$ and $t_N = T$, as it is shown in Figure \ref{Fig:TimeDomain}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{time_domain.png}
\caption{Example of time domain partition (bottom). Zoom of the time domain partition: values $t_n^+$ and $t_n^-$ are also reported (top).}\label{Fig:TimeDomain}
\end{figure}
Next, we incrementally build (on $n$) an approximation of the exact solution $\bm{u}$ in each time slab $I_n$. In the following we will use the notation
\begin{equation*}
(\bm{u},\bm{v})_I = \int_I \bm{u}(s)\cdot\bm{v}(s)\text{d}s, \quad \langle \bm{u},\bm{v} \rangle_t = \bm{u}(t)\cdot \bm{v}(t),
\end{equation*}
where $\aa\cdot\bm{b}$ stands for the Euclidean scalar product between two vectors $\aa,\bm{b}\in\mathbb{R}^d$. For (a regular enough) $\bm{v}$, we also denote the jump operator at $t_n$ as
\begin{equation*}
[\bm{v}]_n = \bm{v}(t_n^+) - \bm{v}(t_n^-) = \bm{v}^+ -\bm{v}^-, \quad \text{for } n\ge 0,
\end{equation*}
where
\begin{equation*}
\bm{v}(t_n^\pm) = \lim_{\epsilon\rightarrow 0^\pm}\bm{v}(t_n+\epsilon), \quad \text{for } n\ge 0.
\end{equation*}
Thus, we focus on the generic interval $I_n$ and assume that the solution on $I_{n-1}$ is known. We multiply equation \eqref{Eq:FirstOrderSystem2} by a (regular enough) test function $\bm{v}(t)\in\mathbb{R}^{2d}$ and integrate in time over $I_n$ obtaining
\begin{equation}
\label{Eq:Weak1}
(\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} = (\bm{F},\bm{v})_{I_n}.
\end{equation}
Next, since $\bm{u} \in\bm{H}^2(0,T]$ and $\bm{w} = \dot{\bm{u}}$, then $\bm{z}\in\bm{H}^1(0,T]$. Therefore, we can add to \eqref{Eq:Weak1} the null term $\widetilde{K}[\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+)$ getting
\begin{equation}
\label{Eq:Weak2}
(\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} +\widetilde{K}[\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+) = (\bm{F},\bm{v})_{I_n}.
\end{equation}
Summing up over all time slabs we define the bilinear form $\mathcal{A}:\bm{H}^1(0,T)\times\bm{H}^1(0,T)\rightarrow\mathbb{R}$
\begin{equation}
\label{Eq:BilinearForm}
\mathcal{A}(\bm{z},\bm{v}) = \sum_{n=1}^N (\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{z}]_n\cdot\bm{v}(t_n^+) + \widetilde{K}\bm{z}(0^+)\cdot\bm{v}(0^+),
\end{equation}
and the linear functional $\mathcal{F}:\bm{L}^2(0,T)\rightarrow\mathbb{R}$ as
\begin{equation}
\label{Eq:LinearFunctional}
\mathcal{F}(\bm{v}) = \sum_{n=1}^N (\bm{F},\bm{v})_{I_n} + \widetilde{K}\bm{z}_0\cdot\bm{v}(0^+),
\end{equation}
where we have used that $\bm{z}(0^-) = \bm{z}_0$. Now, we introduce the functional spaces
\begin{equation}
\label{Eq:PolynomialSpace}
V_n^{r_n} = \{ \bm{z}:I_n\rightarrow\mathbb{R}^{2d} \text{ s.t. } \bm{z}\in[\mathcal{P}^{r_n}(I_n)]^{2d} \},
\end{equation}
where $\mathcal{P}^{r_n}(I_n)$ is the space of polynomials of maximum degree $r_n$ defined on $I_n$,
\begin{equation}
\label{Eq:L2Space}
\mathcal{V}^{\bm{r}} = \{ \bm{z}\in\bm{L}^2(0,T] \text{ s.t. } \bm{z}|_{I_n} = [\bm{u},\bm{w}]^T\in V_n^{r_n} \},
\end{equation}
and
\begin{equation}
\label{Eq:CGSpace}
\mathcal{V}_{CG}^{\bm{r}} = \{ \bm{z}\in[\mathcal{C}^0(0,T]]^{2d} \text{ s.t. } \bm{z}|_{I_n} = [\bm{u},\bm{w}]^T\in V_n^{r_n} \text{ and } \dot{\bm{u}} = \bm{w} \},
\end{equation}
where $\bm{r} = (r_1,\dots,r_N) \in \mathbb{N}^N$ is the polynomial degree vector.
Before presenting the discontinuous Galerkin formulation of problem~\eqref{Eq:FirstOrderSystem2}, we introduce, as in \cite{ScWi2010}, the following operator $\mathcal{R}$, which is used only for the purpose of the analysis and does not need to be computed in practice.
\begin{mydef}
\label{Def:Reconstruction}
We define a reconstruction operator $\mathcal{R}:\mathcal{V}^{\bm{r}}\rightarrow\mathcal{V}^{\bm{r}}_{CG}$ such that
\begin{equation}
\label{Eq:Reconstruction}
\begin{split}
(\mathcal{R}(\bm{z})',\bm{v})_{I_n} &= (\bm{z}',\bm{v})_{I_n} + [\bm{z}]_{n-1}\cdot\bm{v}(t_{n-1}^+) \quad \forall\, \bm{v}\in[\mathcal{P}^{r_n}(I_n)]^{2d}, \\ \mathcal{R}(\bm{z})(t_{n-1}^+) &= \bm{z}(t_{n-1}^-) \quad \forall\, n =1,\dots,N.
\end{split}
\end{equation}
\end{mydef}
\noindent Now, we can properly define the functional space
\begin{equation}
\label{Eq:DGSpace}
\begin{split}
\mathcal{V}_{DG}^{\bm{r}} = \{& \bm{z}\in\mathcal{V}^{\bm{r}} \text{ and }\exists\, \hat{\bm{z}} = \mathcal{R}(\bm{z}) \in\mathcal{V}_{CG}^{\bm{r}}\},
\end{split}
\end{equation}
and introduce the DG formulation of \eqref{Eq:FirstOrderSystem2}, which reads as follows: find $\bm{z}_{DG}\in\mathcal{V}_{DG}^{\bm{r}}$ such that
\begin{equation}
\label{Eq:WeakProblem}
\mathcal{A}(\bm{z}_{DG},\bm{v}) = \mathcal{F}(\bm{v}) \qquad \forall\, \bm{v}\in\mathcal{V}_{DG}^{\bm{r}}.
\end{equation}
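To make the scheme concrete, observe that for piecewise-constant polynomials ($r_n=0$) the derivative term drops out and \eqref{Eq:WeakProblem} decouples slab by slab into the update $(\widetilde{K}+\Delta t_n A)\bm{z}_n=\widetilde{K}\bm{z}_{n-1}+\Delta t_n\bar{\bm{F}}_n$, with $\bar{\bm{F}}_n$ the slab average of $\bm{F}$. A minimal sketch for a damped scalar oscillator ($P=K=1$, $L=0.1$, $\bm{f}=\bm{0}$; values chosen only for illustration):

```python
import numpy as np

def dg0_slab(Ktil, A, Fbar, z_prev, dt):
    """One time slab of the DG scheme with r_n = 0: the weak form reduces to
    (Ktil + dt*A) z_n = Ktil @ z_prev + dt * Fbar."""
    return np.linalg.solve(Ktil + dt * A, Ktil @ z_prev + dt * Fbar)

# z = [u, w] with P = K = 1, L = 0.1, f = 0, i.e. u'' + 0.1 u' + u = 0
Ktil = np.eye(2)
A = np.array([[0.0, -1.0],
              [1.0,  0.1]])
z = np.array([1.0, 0.0])          # u(0) = 1, u'(0) = 0
dt = 0.01
for _ in range(1000):             # integrate up to T = 10
    z = dg0_slab(Ktil, A, np.zeros(2), z, dt)
```

Consistent with the implicit nature of the scheme, the update is unconditionally stable and the discrete energy $\tfrac12(u^2+w^2)$ decays for any $\Delta t>0$.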
For the forthcoming analysis we introduce the following mesh-dependent energy norm.
\begin{myprop}
\label{Pr:Norm}
The function $|||\cdot|||:\mathcal{V}_{DG}^{\bm{r}}\rightarrow\mathbb{R}^{+}$, is defined as
\begin{equation}
\label{Eq:Norm}
|||\bm{z}|||^2 = \sum_{n=1}^N ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1}(\widetilde{K}^{\frac{1}{2}}[\bm{z}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(T^-))^2,
\end{equation}
with
$
\widetilde{L} = \begin{bmatrix}
0 & 0 \\
0 & L^{\frac{1}{2}}
\end{bmatrix}\in\mathbb{R}^{2d\times2d}.
$
Moreover, $|||\cdot|||$ defines a norm on $\mathcal{V}_{DG}^{\bm{r}}$.
\end{myprop}
\begin{proof}
It is clear that homogeneity and subadditivity hold. In addition, it is trivial that if $\bm{z} = 0$ then $|||\bm{z}|||=0$. Conversely, we suppose $|||\bm{z}||| = 0$ and observe that
\begin{equation*}
||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}=||L^{\frac{1}{2}}\bm{w}||_{\bm{L}^2(I_n)}=0 \quad \forall n=1,\dots,N.
\end{equation*}
Since $L$ is positive definite we have $\bm{w} = \textbf{0} $ on $[0,T]$. Hence, $\bm{w}'=\textbf{0}$ on $[0,T]$. Using this result into \eqref{Eq:DGSpace} and calling $\bm{v} = [\bm{v}_1,\bm{v}_2]^T$, we get
\begin{equation*}
(\hat{\bm{w}}',\bm{v}_2)_{I_n} = 0 \quad \forall\, \bm{v}_2 \in [\mathcal{P}^{r_n}(I_n)]^d \text{ and } \forall\, n=1,\dots,N.
\end{equation*}
Therefore $\hat{\bm{w}}'=\textbf{0}$ on $[0,T]$. In addition, from \eqref{Eq:DGSpace} we get $\textbf{0}=\bm{w}(t_1^-)=\hat{\bm{w}}(t_1^+)$ that combined with the previous result gives $\hat{\bm{w}}=\textbf{0}$ on $[0,T]$.
Now, since $\hat{\bm{z}}\in \mathcal{V}^{\bm{r}}_{CG}$, we have $\hat{\bm{u}}' = \hat{\bm{w}} = \textbf{0}$ on $[0,T]$. Therefore using again \eqref{Eq:DGSpace} we get
\begin{equation*}
(\bm{u}',\bm{v}_1)_{I_n} + [\bm{u}]_{n-1}\cdot \bm{v}_1(t_{n-1}^+)= 0 \quad \forall\, \bm{v}_1 \in [\mathcal{P}^{r_n}(I_n)]^d \text{ and } \forall\, n=1,\dots,N.
\end{equation*}
Take $n = N$, then $[\bm{u}]_{N-1}=\textbf{0}$ (from $|||\bm{z}||| = 0$) and therefore $\bm{u}'=\textbf{0}$ on $I_N$. Combining this result with $\bm{u}(T^-)=\textbf{0}$ we get $\bm{u}=\textbf{0}$ on $I_N$ from which we derive $\textbf{0}=\bm{u}(t_{N-1}^+)=\bm{u}(t_{N-1}^-)$. Iterating until $n=2$ we get $\bm{u}=\textbf{0}$ on $I_n$, for any $n=2,\dots,N$. Moreover
\begin{equation*}
\textbf{0}=\bm{u}(t_1^+)=\bm{u}(t_1^-)=\hat{\bm{u}}(t_1^+)=\hat{\bm{u}}(t_1^-)=\hat{\bm{u}}(0^+)=\bm{u}(0^-),
\end{equation*} since $\hat{\bm{u}}' = \textbf{0}$ on $I_1$. Using again $|||\bm{z}|||=0$ we get $\bm{u}(0^+)=\textbf{0}$, hence $[\bm{u}]_0=\textbf{0}$. Taking $n=1$ we get $\bm{u}=\textbf{0}$ on $I_1$. Thus, $\bm{z}=\textbf{0}$ on $[0,T]$.
\end{proof}
The following result states the well-posedness of \eqref{Eq:WeakProblem}.
\begin{myprop}
\label{Pr:WellPosedness} Problem~\eqref{Eq:WeakProblem} admits a unique solution $\bm{u}_{DG} \in \mathcal{V}_{DG}^{\bm{r}}$.
\end{myprop}
\begin{proof}
By taking $\bm{v} = \bm{z}$ we get
\begin{equation*}
\mathcal{A}(\bm{z},\bm{z}) = \sum_{n=1}^N \big[(\widetilde{K}\dot{\bm{z}},\bm{z})_{I_n} + (A\bm{z},\bm{z})_{I_n}\big] + \sum_{n=1}^{N-1} \widetilde{K}[\bm{z}]_n\cdot\bm{z}(t_n^+) + (\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2.
\end{equation*}
Since $\widetilde{K}$ is symmetric, integrating by parts we have that
\begin{equation*}
(\widetilde{K}\dot{\bm{z}},\bm{z})_{I_n} = \frac{1}{2}\langle \widetilde{K}\bm{z},\bm{z} \rangle_{t_n^-} - \frac{1}{2}\langle \widetilde{K}\bm{z},\bm{z} \rangle_{t_{n-1}^+}.
\end{equation*}
Then, the second term can be rewritten as
\begin{equation*}
(A\bm{z},\bm{z})_{I_n} = (-K\bm{w},\bm{u})_{I_n} + (K\bm{u},\bm{w})_{I_n} + (L\bm{w},\bm{w})_{I_n} = ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2,
\end{equation*}
cf. also \eqref{def:KA}. Therefore
\begin{equation*}
\mathcal{A}(\bm{z},\bm{z}) = \sum_{n=1}^N ||\widetilde{L}\bm{z}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1} (\widetilde{K}^{\frac{1}{2}}[\bm{z}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{z}(T^-))^2 = |||\bm{z}|||^2.
\end{equation*}
The result follows from Proposition~\ref{Pr:Norm}, the bilinearity of $\mathcal{A}$ and the linearity of $\mathcal{F}$: since \eqref{Eq:WeakProblem} is a square linear system posed in a finite-dimensional space, coercivity implies uniqueness, and uniqueness implies existence.
\end{proof}
\section{Convergence analysis}\label{Sc:Convergence}
In this section, we first present an \textit{a priori} stability bound for the numerical solution of \eqref{Eq:WeakProblem}, which follows by a direct application of the Cauchy-Schwarz inequality. Then, we use the latter to prove an optimal error estimate for the numerical error in the energy norm \eqref{Eq:Norm}.
\begin{myprop}
Let $\bm{f} \in \bm{L}^2(0,T)$, $\hat{\bm{u}}_0, \hat{\bm{u}}_1 \in \mathbb{R}^d$, and let $\bm{z}_{DG} \in \mathcal{V}_{DG}^{\bm{r}}$ be the solution of \eqref{Eq:WeakProblem}. Then it holds
\begin{equation}
\label{Eq:Stability}
|||\bm{z}_{DG}||| \lesssim \Big(\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2+(K^{\frac{1}{2}}\hat{\bm{u}}_0)^2+(P^{\frac{1}{2}}\hat{\bm{u}}_1)^2\Big)^{\frac{1}{2}}.
\end{equation}
\end{myprop}
\begin{proof}
From the definition of the norm $|||\cdot|||$ given in \eqref{Eq:Norm}, the Cauchy-Schwarz and arithmetic-geometric inequalities, and writing $\bm{z}_0 = [\hat{\bm{u}}_0, \hat{\bm{u}}_1]^T$, we have
\begin{equation*}
\begin{split}
|||\bm{z}_{DG}|||^2 &= \mathcal{A}(\bm{z}_{DG},\bm{z}_{DG}) = \mathcal{F}(\bm{z}_{DG}) = \sum_{n=1}^N (\bm{F},\bm{z}_{DG})_{I_n} + \widetilde{K}\bm{z}_0\cdot\bm{z}_{DG}(0^+) \\
&\lesssim \frac{1}{2}\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}\sum_{n=1}^N ||\widetilde{L}\bm{z}_{DG}||_{\bm{L}^2(I_n)}^2 + (\widetilde{K}^{\frac{1}{2}} \bm{z}_{0})^2 + \frac{1}{4}(\widetilde{K}^{\frac{1}{2}} \bm{z}_{DG}(0^+))^2 \\
&\lesssim \frac{1}{2}\sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + (\widetilde{K}^{\frac{1}{2}} \bm{z}_{0})^2 + \frac{1}{2}|||\bm{z}_{DG}|||^2.
\end{split}
\end{equation*}
Hence,
\begin{equation*}
|||\bm{z}_{DG}|||^2 \lesssim \sum_{n=1}^N ||L^{-\frac{1}{2}}\bm{f}||_{\bm{L}^2(I_n)}^2 + (K^{\frac{1}{2}} \hat{\bm{u}}_{0})^2 + (P^{\frac{1}{2}}\hat{\bm{u}}_{1})^2.
\end{equation*}
\end{proof}
Before deriving an a priori estimate for the numerical error we introduce some preliminary results. We refer the interested reader to \cite{ScSc2000} for further details.
\begin{mylemma}
\label{Le:Projector}
Let $I=(-1,1)$ and let $u\in L^2(I)$ be continuous at $t=1$. The projector $\Pi^r u \in \mathcal{P}^r(I)$, $r\in\mathbb{N}_0$, defined by the $r+1$ conditions
\begin{equation}
\label{Eq:Projector}
\Pi^r u (1) = u(1), \qquad (u - \Pi^r u,q)_{I} = 0 \quad\forall\, q\in\mathcal{P}^{r-1}(I),
\end{equation}
is well posed. Moreover, let $I=(a,b)$, $\Delta t = b-a$, $r\in\mathbb{N}_0$ and $u\in H^{s_0+1}(I)$ for some $s_0\in\mathbb{N}_0$. Then
\begin{equation}
\label{Eq:ProjectionError}
||u-\Pi^r u||_{L^2(I)}^2 \le C\bigg(\frac{\Delta t}{2}\bigg)^{2(s+1)}\frac{1}{r^2}\frac{(r-s)!}{(r+s)!}||u^{(s+1)}||_{L^2(I)}^2
\end{equation}
for any integer $0\le s \le \min(r,s_0)$. The constant $C$ depends on $s_0$ but is independent of $r$ and $\Delta t$.
\end{mylemma}
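In practice, the projector \eqref{Eq:Projector} admits an explicit Legendre representation: the orthogonality conditions force the first $r$ Legendre coefficients of $\Pi^r u$ to coincide with those of $u$, while the endpoint condition fixes the degree-$r$ coefficient, since $L_k(1)=1$ for every $k$. The following sketch (our own naming and quadrature choices, not taken from \cite{ScSc2000}) illustrates the construction:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_coeffs(u, n, n_quad=64):
    """First n Legendre coefficients of u on I = (-1, 1), via Gauss quadrature."""
    x, w = leggauss(n_quad)
    return np.array([(2 * k + 1) / 2 * np.sum(w * u(x) * legval(x, np.eye(k + 1)[k]))
                     for k in range(n)])

def projector(u, r):
    """Pi^r u of (Eq:Projector): coefficients 0..r-1 match those of u, and the
    degree-r coefficient is fixed by Pi^r u(1) = u(1), using L_k(1) = 1."""
    c = legendre_coeffs(u, r)
    return np.append(c, u(1.0) - c.sum())

c = projector(np.exp, 6)
# the endpoint condition holds exactly by construction
assert abs(legval(1.0, c) - np.e) < 1e-12
```

The $L^2(I)$ projection error of this construction decays with $r$ as predicted by \eqref{Eq:ProjectionError}.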
Proceeding similarly to \cite{ScSc2000}, we now prove the following preliminary estimate for the derivative of the projection $\Pi^r u$.
\begin{mylemma}
\label{Le:DerivativeProjectionErrorInf}
Let $I=(-1,1)$ and let $u\in H^1(I)$ be continuous at $t=1$. Then, it holds
\begin{equation}
\label{Eq:DerivativeProjectionErrorInf}
||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \le C(r+1)\inf_{q \in \mathcal{P}^r(I)} \Bigg\{||u'-q'||_{L^2(I)}^2 \Bigg\}.
\end{equation}
\end{mylemma}
\begin{proof}
Let $u' =\sum_{i=1}^{\infty} u_i L'_i$ be the expansion of $u'$ with coefficients $u_i\in\mathbb{R}$, $i\ge1$. Then (cf. Lemma 3.2 in \cite{ScSc2000})
\begin{equation*}
\big(\Pi^r u\big)'=\sum_{i=1}^{r-1} u_i L'_i + \Big(\sum_{i=r}^{\infty} u_i\Big) L'_r.
\end{equation*}
Now, for $r\in\mathbb{N}_0$, we denote by $\widehat{P}^r$ the $L^2(I)$-projection onto $\mathcal{P}^r(I)$. Hence,
\begin{equation*}
u' - \big(\Pi^r u\big)'= \sum_{i=r}^{\infty} u_i L'_i - \Big(\sum_{i=r}^{\infty} u_i\Big) L'_r = \sum_{i=r+1}^{\infty} u_i L'_i - \Big(\sum_{i=r+1}^{\infty} u_i\Big) L'_r = u' - \big(\widehat{P}^r u\big)' - \Big(\sum_{i=r+1}^{\infty} u_i\Big) L'_r.
\end{equation*}
Recalling that $||L'_r||_{L^2(I)}^2 = r(r+1)$, the triangle inequality yields
\begin{equation*}
||u' - \big(\Pi^r u\big)'||_{L^2(I)}^2 \le 2||u' - \big(\widehat{P}^r u\big)'||_{L^2(I)}^2 + 2\Bigg|\sum_{i=r+1}^{\infty} u_i\Bigg|^2 r(r+1).
\end{equation*}
Finally, we use that
$ \Bigg|\sum_{i=r+1}^{\infty} u_i\Bigg| \le \frac{C}{r}||u'||_{L^2(I)}
$ (cf. Lemma~3.6 in \cite{ScSc2000}) and get
\begin{equation}
\label{Eq:DerivativeProjectionError}
||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \le C\big\{||u'-\big(\widehat{P}^r u\big)'||_{L^2(I)}^2+(r+1)||u'||_{L^2(I)}^2 \big\}.
\end{equation}
Now let $q\in\mathcal{P}^r(I)$ be arbitrary and apply \eqref{Eq:DerivativeProjectionError} to $u-q$. The thesis follows from the reproducing properties of the projectors $\Pi^r$ and $\widehat{P}^r$ on $\mathcal{P}^r(I)$ and from the fact that $||u-\widehat{P}^r u||_{L^2(I)} \le ||u-q||_{L^2(I)} $ for any $q\in\mathcal{P}^r(I)$.
\end{proof}
By employing Proposition~3.9 in \cite{ScSc2000} and Lemma \ref{Le:DerivativeProjectionErrorInf} we obtain the following result.
\begin{mylemma}
\label{Le:DerivativeProjectionError}
Let $I=(a,b)$, $\Delta t = b-a$, $r\in\mathbb{N}_0$ and $u\in H^{s_0+1}(I)$ for some $s_0\in\mathbb{N}_0$. Then
\begin{equation*}
||u'-\big(\Pi^r u\big)'||_{L^2(I)}^2 \lesssim \bigg(\frac{\Delta t}{2}\bigg)^{2(s+1)}(r+2)\frac{(r-s)!}{(r+s)!}||u^{(s+1)}||_{L^2(I)}^2
\end{equation*}
for any integer $0\le s \le \min(r,s_0)$. The hidden constants depend on $s_0$ but are independent of $r$ and $\Delta t$.
\end{mylemma}
Finally, we observe that the bilinear form appearing in formulation \eqref{Eq:WeakProblem} is strongly consistent, i.e.
\begin{equation}
\label{Eq:Consistency}
\mathcal{A}(\bm{z}-\bm{z}_{DG},\bm{v}) = 0 \qquad \forall\,\bm{v}\in\mathcal{V}^{\bm{r}}_{DG}.
\end{equation}
We now state the following convergence result.
\begin{myth}
\label{Th:ErrorEstimate}
Let $\hat{\bm{u}}_{0},\hat{\bm{u}}_{1} \in \mathbb{R}^{d}$. Let $\bm{z}$ be the solution of problem~\eqref{Eq:FirstOrderSystem2} and let $\bm{z}_{DG}\in\mathcal{V}_{DG}^{\bm{r}}$ be its finite element approximation. If $\bm{z}|_{I_n}\in \bm{H}^{s_n}(I_n)$, for any $n=1,\dots,N$ with $s_n\geq2$, then it holds
\begin{equation}
\label{Eq:ErrorEstimate}
|||\bm{z}-\bm{z}_{DG}||| \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{\mu_n+\frac{1}{2}}\Bigg((r_n+2)\frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}\Bigg)^{\frac{1}{2}}||\bm{z}||_{\bm{H}^{\mu_n+1}(I_n)},
\end{equation}
where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$ and the hidden constants depend on the norm of matrices $L$, $K$ and $A$.
\end{myth}
\begin{proof}
We set $\bm{e} = \bm{z} - \bm{z}_{DG} = (\bm{z} - \Pi_I^r \bm{z}) + (\Pi_I^r \bm{z} - \bm{z}_{DG}) = \bm{e}^{\pi} + \bm{e}^{h}$. Hence we have $|||\bm{e}||| \le |||\bm{e}^{\pi}||| + |||\bm{e}^{h}|||$. Employing the properties of the projector \eqref{Eq:Projector} together with the estimates of Lemma~\ref{Le:Projector} and Lemma~\ref{Le:DerivativeProjectionError}, we can bound $|||\bm{e}^{\pi}|||$ as
\begin{equation*}
\begin{split}
|||\bm{e}^{\pi}|||^2 &= \sum_{n=1}^N ||\widetilde{L}\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{e}^{\pi}(0^+))^2 + \frac{1}{2}\sum_{n=1}^{N-1}(\widetilde{K}^{\frac{1}{2}}[\bm{e}^{\pi}]_n)^2 + \frac{1}{2}(\widetilde{K}^{\frac{1}{2}}\bm{e}^{\pi}(T^-))^2 \\
& = \sum_{n=1}^N ||\widetilde{L}\bm{e}^{\pi}||_{L^2(I_n)}^2 + \frac{1}{2} \sum_{n=1}^N \Bigg(-\int_{t_{n-1}}^{t_{n}}\widetilde{K}^{\frac{1}{2}}\dot{\bm{e}}^{\pi}(s)ds\Bigg)^2 \\
& \lesssim \sum_{n=1}^N \Big(||\bm{e}^{\pi}||_{\bm{L}^2(I_n)}^2 + \Delta t_n ||\dot{\bm{e}}^{\pi}||_{\bm{L}^2(I_n)}^2 \Big) \\
& \lesssim \sum_{n=1}^N \bigg[\bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+2} \frac{1}{r_n^2} + \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+1} (r_n+2)\bigg] \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||_{\bm{H}^{\mu_n+1}(I_n)}^2 \\
& \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+1} (r_n+2) \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||_{\bm{H}^{\mu_n+1}(I_n)}^2,
\end{split}
\end{equation*}
where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$.
For the term $|||\bm{e}^{h}|||$ we use \eqref{Eq:Consistency} and integrate by parts to get
\begin{equation*}
\begin{split}
|||\bm{e}^{h}|||^2 &= \mathcal{A}(\bm{e}^h,\bm{e}^h) = -\mathcal{A}(\bm{e}^{\pi},\bm{e}^h) \\
& = -\sum_{n=1}^N (\widetilde{K}\dot{\bm{e}}^{\pi},\bm{e}^h)_{I_n} - \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} - \sum_{n=1}^{N-1} \widetilde{K}[\bm{e}^{\pi}]_n\cdot\bm{e}^h(t_n^+) - \widetilde{K}\bm{e}^{\pi}(0^+)\cdot\bm{e}^h(0^+) \\
& = \sum_{n=1}^N (\widetilde{K}\bm{e}^{\pi},\dot{\bm{e}}^h)_{I_n} - \sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} + \sum_{n=1}^{N-1} \widetilde{K}[\bm{e}^{h}]_n\cdot\bm{e}^{\pi}(t_n^-) - \widetilde{K}\bm{e}^{\pi}(T^-)\cdot\bm{e}^h (T^-).
\end{split}
\end{equation*}
Thanks to \eqref{Eq:Projector}, only the second term of the last equation above does not vanish. Thus, we employ the Cauchy-Schwarz and arithmetic-geometric inequalities to obtain
\begin{equation*}
|||\bm{e}^{h}|||^2 = -\sum_{n=1}^N(A\bm{e}^{\pi},\bm{e}^h)_{I_n} \lesssim \frac{1}{2} \sum_{n=1}^N ||\bm{e}^{\pi}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2} \sum_{n=1}^N ||\widetilde{L}\bm{e}^{h}||_{\bm{L}^2(I_n)}^2 \lesssim \frac{1}{2} \sum_{n=1}^N ||\bm{e}^{\pi}||_{\bm{L}^2(I_n)}^2 + \frac{1}{2}|||\bm{e}^h|||^2.
\end{equation*}
Hence,
\begin{equation*}
|||\bm{e}^{h}|||^2 \lesssim \sum_{n=1}^N \bigg(\frac{\Delta t_n}{2}\bigg)^{2\mu_n+2} \frac{1}{r_n^2} \frac{(r_n-\mu_n)!}{(r_n+\mu_n)!}||\bm{z}||_{\bm{H}^{\mu_n+1}(I_n)}^2,
\end{equation*}
where $\mu_n = \min(r_n,s_n)$, for any $n=1,\dots,N$ and the thesis follows.
\end{proof}
\section{Algebraic formulation}
\label{Sc:AlgebraicFormulation}
In this section we derive the algebraic formulation stemming from the DG discretization of \eqref{Eq:WeakProblem} on the time slab $I_n$.
We consider on $I_n$ a local polynomial degree $r_n$. In practice, since we use discontinuous functions, we can compute the numerical solution one time slab at a time, assuming the initial conditions inherited from the previous time slab to be known. Hence, problem \eqref{Eq:WeakProblem} reduces to: find $\bm{z}\in V_n^{r_n}$ such that
\begin{equation}
\label{Eq:WeakFormulationReduced}
(\widetilde{K}\dot{\bm{z}},\bm{v})_{I_n} + (A\bm{z},\bm{v})_{I_n} + \langle\widetilde{K}\bm{z},\bm{v}\rangle_{t_{n-1}^+} = (\bm{F},\bm{v})_{I_n} + \widetilde{K}\bm{z}(t_{n-1}^-)\cdot\bm{v}({t_{n-1}^+}) \quad \forall\,\bm{v}\in V_n^{r_n},
\end{equation}
for $n=1,\dots,N$.
Introducing a basis $\{\psi^{\ell}(t)\}_{{\ell}=1,\dots,r_n+1}$ for the polynomial space $\mathcal{P}^{r_n}(I_n)$ we define a vectorial basis $\{ \boldsymbol{\Psi}_i^{\ell}(t) \}_{i=1,\dots,2d}^{{\ell}=1,\dots,r_n+1}$ of $V_n^{r_n}$ where
\begin{equation*}
\{ \boldsymbol{\Psi}_i^{\ell}(t) \}_j =
\begin{cases}
\psi^{\ell}(t) & {\ell} = 1,\dots,r_n+1, \quad \text{if } i=j, \\
0 & {\ell} = 1,\dots,r_n+1, \quad \text{if } i\ne j.
\end{cases}
\end{equation*}
Then, we set $D_n=d(r_n+1)$ and write the trial function $\bm{z}_n = \bm{z}_{DG}|_{I_n} \in V_n^{r_n}$ as
\begin{equation*}
\bm{z}_n(t) = \sum_{j=1}^{2d} \sum_{m=1}^{r_n+1} \alpha_{j}^m \boldsymbol{\Psi}_j^m(t),
\end{equation*}
where $\alpha_{j}^m\in\mathbb{R}$ for $j=1,\dots,2d$, $m=1,\dots,r_n+1$. Writing \eqref{Eq:WeakFormulationReduced} for any test function $\boldsymbol{\Psi}_i^{\ell}(t)$, $i=1,\dots,2d$, $\ell=1\,\dots,r_n+1$ we obtain the linear system
\begin{equation}
\label{Eq:LinearSystem}
M\bm{Z}_n = \bm{G}_n,
\end{equation}
where $\bm{Z}_n,\bm{G}_n \in \mathbb{R}^{2D_n}$ are the vectors of the expansion coefficients of the numerical solution and of the right-hand side on the interval $I_n$ with respect to the chosen basis. Here $M\in\mathbb{R}^{2D_n\times2D_n}$ is the local stiffness matrix defined as
\begin{equation}
\label{Eq:StiffnessMatrix}
M = \widetilde{K} \otimes (N^1+N^3) + A \otimes N^2
= \begin{bmatrix}
K \otimes (N^1 + N^3) & -K \otimes N^2 \\
K \otimes N^2 & P \otimes (N^1+N^3) + L \otimes N^2
\end{bmatrix},
\end{equation}
where $N^1,N^2,N^3 \in \mathbb{R}^{(r_n+1)\times(r_n+1)}$ are the local time matrices
\begin{equation}
\label{Eq:TimeMatrices}
N_{{\ell}m}^1 = (\dot{\psi}^m,\psi^{\ell})_{I_n}, \qquad N_{{\ell}m}^2 = (\psi^m,\psi^{\ell})_{I_n}, \qquad N_{{\ell}m}^3 = \langle\psi^m,\psi^{\ell}\rangle_{t_{n-1}^+},
\end{equation}
for $\ell,m=1,...,r_n+1$. Similarly to \cite{ThHe2005}, we reformulate system \eqref{Eq:LinearSystem} to reduce the computational cost of its solution. We first introduce the vectors $\bm{G}_n^u,\, \bm{G}_n^w,\, \bm{U}_n,\, \bm{W}_n \in \mathbb{R}^{D_n}$ such that
\begin{equation*}
\bm{G}_n = \big[\bm{G}_n^u, \bm{G}_n^w\big]^T, \qquad \bm{Z}_n = \big[\bm{U}_n, \bm{W}_n\big]^T
\end{equation*}
and the matrices
\begin{equation}
N^4 = (N^1+N^3)^{-1}, \qquad N^5 = N^4N^2, \qquad N^6 = N^2N^4, \qquad N^7 = N^2N^4N^2.
\end{equation}
Next, we apply a block Gaussian elimination to obtain
\begin{equation*}
M = \begin{bmatrix}
K \otimes (N^1 + N^3) & -K \otimes N^2 \\
0 & P \otimes (N^1+N^3) + L \otimes N^2 + K \otimes N^7
\end{bmatrix},
\end{equation*}
and
\begin{equation*}
\bm{G}_n = \begin{bmatrix}
\bm{G}_n^u \\
\bm{G}_n^w - (\mathcal{I}_d\otimes N^6) \bm{G}_n^u
\end{bmatrix}.
\end{equation*}
We define the matrix $\widehat{M}_n\in\mathbb{R}^{D_n\times D_n}$ as
\begin{equation}\label{Eq:TimeMatrix}
\widehat{M}_n = P \otimes (N^1+N^3) + L \otimes N^2 + K \otimes N^7,
\end{equation}
and the vector $\widehat{\bm{G}}_n\in\mathbb{R}^{D_n}$ as
\begin{equation}
\widehat{\bm{G}}_n = \bm{G}_n^w - (\mathcal{I}_{d}\otimes N^6) \bm{G}_n^u.
\end{equation}
Then, we multiply the first block by $K^{-1}\otimes N^4$ and, exploiting the properties of the Kronecker product, we get
\begin{equation*}
\begin{bmatrix}
\mathcal{I}_{D_n} & -\mathcal{I}_{d} \otimes N^5 \\
0 & \widehat{M}_n
\end{bmatrix}
\begin{bmatrix}
\bm{U}_n \\
\bm{W}_n
\end{bmatrix} =
\begin{bmatrix}
(K^{-1}\otimes N^4)\bm{G}_n^u \\
\widehat{\bm{G}}_n
\end{bmatrix}.
\end{equation*}
Therefore, we first obtain the velocity $\bm{W}_n$ by solving the linear system
\begin{equation}\label{Eq:VelocitySystem}
\widehat{M}_n \bm{W}_n = \widehat{\bm{G}}_n,
\end{equation}
and then, we can compute the displacement $\bm{U}_n$ as
\begin{equation}\label{Eq:DisplacementUpdate1}
\bm{U}_n = (\mathcal{I}_{d} \otimes N^5) \bm{W}_n + (K^{-1}\otimes N^4)\bm{G}_n^u.
\end{equation}
Finally, since $\big[\bm{G}_n^u\big]_i^{\ell} = K\bm{U}(t_{n-1}^-)\cdot\boldsymbol{\Psi}_i^{\ell}(t_{n-1}^+)$, by defining $\bar{\bm{G}}_n^u\in \mathbb{R}^{D_n}$ as
\begin{equation}
\big[\bar{\bm{G}}_n^u\big]_i^{\ell} = \bm{U}(t_{n-1}^-)\cdot\boldsymbol{\Psi}_i^{\ell}(t_{n-1}^+),
\end{equation}
we can rewrite \eqref{Eq:DisplacementUpdate1} as
\begin{equation}\label{Eq:AltDisplacementUpdate2}
\bm{U}_n = (\mathcal{I}_{d} \otimes N^5) \bm{W}_n + (\mathcal{I}_{d}\otimes N^4)\bar{\bm{G}}_n^u.
\end{equation}
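The algebraic equivalence between the monolithic system \eqref{Eq:LinearSystem} and the reduced solves \eqref{Eq:VelocitySystem}--\eqref{Eq:DisplacementUpdate1} can be checked directly on small data. The sketch below uses random placeholder matrices (assuming, as above, that $K$, $P$, $L$ are symmetric positive definite and that $N^1+N^3$ is invertible); they are purely illustrative and not those assembled by SPEED:

```python
import numpy as np

rng = np.random.default_rng(1)
d, rp1 = 3, 4                         # d unknowns, r_n + 1 basis functions

def spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)    # symmetric positive definite placeholder

K, P, L = spd(d), spd(d), spd(d)
N13 = rng.standard_normal((rp1, rp1)) + 3 * np.eye(rp1)  # stands in for N^1 + N^3
N2 = spd(rp1)
Id = np.eye(d)

# monolithic system (Eq:StiffnessMatrix)
M = np.block([[np.kron(K, N13), -np.kron(K, N2)],
              [np.kron(K, N2), np.kron(P, N13) + np.kron(L, N2)]])
G = rng.standard_normal(2 * d * rp1)
Z = np.linalg.solve(M, G)

# reduced solves (Eq:VelocitySystem) and (Eq:DisplacementUpdate1)
N4 = np.linalg.inv(N13)
N5, N6, N7 = N4 @ N2, N2 @ N4, N2 @ N4 @ N2
Gu, Gw = G[:d * rp1], G[d * rp1:]
Mhat = np.kron(P, N13) + np.kron(L, N2) + np.kron(K, N7)
W = np.linalg.solve(Mhat, Gw - np.kron(Id, N6) @ Gu)
U = np.kron(Id, N5) @ W + np.kron(np.linalg.inv(K), N4) @ Gu

assert np.allclose(np.concatenate([U, W]), Z)
```

Note that only the $D_n\times D_n$ matrix $\widehat{M}_n$ is ever factorized, while the displacement update costs two Kronecker-structured matrix-vector products.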
\section{Numerical results}
\label{Sc:NumericalResults}
In this section we report a wide set of numerical experiments to validate the theoretical estimates and assess the performance of the DG method proposed in Section \ref{Sc:Method}. We first present a set of verification tests for scalar- and vector-valued problems; then we test our formulation on two- and three-dimensional elastodynamic wave propagation problems through the open-source software SPEED (\url{http://speed.mox.polimi.it/}).
\subsection{Scalar problem}
\label{Sec:1DConvergence}
For a time interval $I=[0,T]$, with $T=10$, we solve the scalar problem
\begin{equation}
\label{Eq:ScalarProblem}
\begin{cases}
\dot{u}(t) = w(t) & \forall t\in [0,10],\\
\dot{w}(t) + 5 w(t) + 6u(t) = f(t) & \forall t\in [0,10], \\
u(0) = 2, \\
w(0) = -5,
\end{cases}
\end{equation}
whose exact solution is $ \bm{z}(t) = (w(t),u(t)) = (-3e^{-3t}-2e^{-2t},\,e^{-3t}+e^{-2t})$ for $t\in[0,10]$, corresponding to $f \equiv 0$.
We partition the time domain $I$ into $N$ time slabs of uniform length $\Delta t$ and we suppose the polynomial degree to be constant for each time-slab, i.e. $r_n = r$, for any $n=1,\dots,N$. We first compute
the error $|||\bm{z}_{DG} -\bm{z} |||$ as a function of the time-step $\Delta t$ for several choices of the polynomial degree $r$, as shown in Figure \ref{Fig:ConvergenceTest0D} (left). The obtained results confirm the super-optimal convergence of the scheme with respect to the estimate \eqref{Eq:ErrorEstimate}. Finally, since $\bm{z} \in C^{\infty}(\mathbb{R})$, from Figure \ref{Fig:ConvergenceTest0D} (right) we can observe that the numerical error decreases exponentially with respect to the polynomial degree $r$.
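For reproducibility, the entire scheme applied to \eqref{Eq:ScalarProblem} can be sketched in a few lines: on each time slab we assemble the local system \eqref{Eq:LinearSystem} with a Legendre basis and propagate the trace $\bm{z}(t_n^-)$ forward. Here $\widetilde{K}=\mathrm{diag}(K,P)=\mathrm{diag}(6,1)$, $L=5$ and $f\equiv0$; the following is an illustrative sketch (not the implementation used for Figure~\ref{Fig:ConvergenceTest0D}) that, for brevity, integrates only up to $T=1$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval, legder

def slab_matrices(r, dt):
    # Local time matrices (Eq:TimeMatrices) for a Legendre basis psi^m = L_m
    # mapped onto a slab of length dt; N^1 and N^3 are dt-independent.
    x, w = leggauss(r + 1)                    # exact for degree <= 2r + 1
    V = np.stack([legval(x, np.eye(r + 1)[m]) for m in range(r + 1)], axis=1)
    dV = np.stack([legval(x, legder(np.eye(r + 1)[m])) for m in range(r + 1)], axis=1)
    N1 = V.T @ (w[:, None] * dV)              # (psi^m', psi^l)_{I_n}
    N2 = dt / 2 * V.T @ (w[:, None] * V)      # (psi^m, psi^l)_{I_n}
    e = (-1.0) ** np.arange(r + 1)            # psi^m(t_{n-1}^+) = L_m(-1)
    return N1 + np.outer(e, e), N2, e         # N^1 + N^3, N^2, traces

def dg_solve(r, dt, T=1.0, z0=(2.0, -5.0)):
    # K = 6, L = 5, P = 1 and f = 0 reproduce (Eq:ScalarProblem)
    Kt = np.diag([6.0, 1.0])                  # \widetilde{K} = diag(K, P)
    A = np.array([[0.0, -6.0], [6.0, 5.0]])   # [[0, -K], [K, L]]
    N13, N2, e = slab_matrices(r, dt)
    M = np.kron(Kt, N13) + np.kron(A, N2)     # (Eq:StiffnessMatrix)
    z = np.array(z0)
    for _ in range(round(T / dt)):
        G = np.kron(Kt @ z, e)                # incoming trace, K z(t_{n-1}^-)
        Z = np.linalg.solve(M, G)
        z = np.array([Z[:r + 1].sum(), Z[r + 1:].sum()])  # evaluate at t_n^-, L_m(1) = 1
    return z                                  # (u(T^-), w(T^-))
```

With $r=3$ and $\Delta t=0.1$ the end-point values agree with the exact $u(1)=e^{-3}+e^{-2}$ and $w(1)=-3e^{-3}-2e^{-2}$ to high accuracy, consistently with the nodal superconvergence of DG time stepping.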
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{ConvergenceTest0D.png}
\includegraphics[width=0.49\textwidth]{ConvergenceTest0D_Degree.png}
\caption{Test case of Section~\ref{Sec:1DConvergence}. Left: computed error $|||\bm{z}_{DG}-\bm{z}|||$ as a function of time-step $\Delta t$, with $r = 2,3,4,5$. Right: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of polynomial degree $r$, using a time step $\Delta t = 0.1$.}\label{Fig:ConvergenceTest0D}
\end{figure}
\subsection{Application to the visco-elastodynamics system}
\label{Sec:AppVE}
In the following experiments we employ the proposed DG method to solve the second-order differential system of equations stemming from the spatial discretization of the visco-elastodynamics equation:
\begin{equation}
\label{Eq:Elastodynamic}
\begin{cases}
\partial_t \bold{u} - \bold{w} = \textbf{0}, & \text{in } \Omega\times(0,T],\\
\rho\partial_{t}\bold{w} + 2\rho\zeta\bold{w} + \rho \zeta^2\bold{u} - \nabla\cdot\boldsymbol{\sigma}(\bold{u}) = \textbf{f}, & \text{in } \Omega\times(0,T],\\
\end{cases}
\end{equation}
where $\Omega\subset\mathbb{R}^\mathsf{d}$, $\mathsf{d}=2,3$, is an open bounded polygonal domain. Here, $\rho$ represents the density of the medium, $\zeta$ is a decay factor whose dimension is the inverse of time, $\textbf{f}$ is a given source term (e.g. a seismic source) and $\boldsymbol{\sigma}$ is the stress tensor given by Hooke's law
\begin{equation}
\boldsymbol{\sigma}(\bold{u})_{ij} = \lambda\bigg(\sum_{k=1}^\mathsf{d} \frac{\partial u_k}{\partial x_k}\bigg)\delta_{ij} + \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right), \quad {\rm for} \; i,j=1,...,\mathsf{d},
\end{equation}
where $\lambda$ and $\mu$ are the first and the second Lam\'e parameters, respectively. Problem \eqref{Eq:Elastodynamic} is usually supplemented with boundary conditions for $\bold{u}$ and initial conditions for $\bold{u}$ and $\bold{w}$, which we do not report here for brevity.
Finally, we assume that the problem data are regular enough to guarantee well-posedness \cite{AntoniettiFerroniMazzieriQuarteroni_2017}.
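In index-free form, Hooke's law above reads $\boldsymbol{\sigma}(\bold{u}) = \lambda(\nabla\cdot\bold{u})\,\mathcal{I} + \mu(\nabla\bold{u} + \nabla\bold{u}^T)$. A minimal helper evaluating it pointwise from the displacement gradient (our own sketch, not part of SPEED):

```python
import numpy as np

def stress(grad_u, lam, mu):
    """Hooke's law: sigma = lam * tr(grad_u) * I + mu * (grad_u + grad_u^T)."""
    d = grad_u.shape[0]
    return lam * np.trace(grad_u) * np.eye(d) + mu * (grad_u + grad_u.T)
```

By construction the returned tensor is symmetric, as required of $\boldsymbol{\sigma}$.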
By employing a finite element discretization (either in its continuous or discontinuous variant) for the semi-discrete approximation (in space) of \eqref{Eq:Elastodynamic} we obtain the following system
\begin{equation*}
\left( \begin{matrix}
I & 0 \\
0 & P
\end{matrix} \right)\left( \begin{matrix}
\dot{\bm{u}} \\
\dot{\bm{w}}
\end{matrix} \right) + \left( \begin{matrix}
0 & -I \\
K & L
\end{matrix} \right)\left( \begin{matrix}
{\bm{u}} \\
{\bm{w}}
\end{matrix} \right) = \left( \begin{matrix}
\textbf{0} \\
\bm{f}
\end{matrix} \right),
\end{equation*}
that can be easily rewritten as in \eqref{Eq:FirstOrderSystem1}.
We remark that the boundary conditions associated to \eqref{Eq:Elastodynamic} are encoded in the matrices and in the right-hand side.
For the space discretization of \eqref{Eq:Elastodynamic}, we consider in the following a high order Discontinuous Galerkin method based either on general polygonal meshes (in two dimensions) \cite{AnMa2018} or on unstructured hexahedral meshes (in three dimensions) \cite{mazzieri2013speed}.
For the forthcoming experiments we denote by $h$ the granularity of the spatial mesh and by $p$ the order of the polynomials employed for the space approximation. The combination of the space and time DG methods yields a high-order space-time DG method that we denote by STDG.
We remark that the latter has been implemented in the open-source software SPEED (\url{http://speed.mox.polimi.it/}).
\subsubsection{A two-dimensional test case with space-time polyhedral meshes}
\label{Sec:2DConvergence}
As a first verification test we consider problem~\eqref{Eq:Elastodynamic} in a two-dimensional setting, i.e. $\Omega = (0,1)^2 \subset \mathbb{R}^2$.
We set the mass density $\rho=1$, the Lamé coefficients $\lambda=\mu=1$, $\zeta = 1$ and choose the data $\textbf{f}$ and the initial conditions such that the exact solution of \eqref{Eq:Elastodynamic} is $\textbf{z} = (\bold{u},\bold{w})$ where
\begin{equation*}
\bold{u} = e^{-t}
\begin{bmatrix}
-\sin^2(\pi x)\sin(2\pi y) \\
\sin(2\pi x)\sin^2(\pi y)
\end{bmatrix}, \qquad \bold{w} = \partial_t\bold{u}.
\end{equation*}
We consider a polygonal mesh (see Figure~\ref{fig:dgpolyspace-time}) made of 60 elements and set $p=8$. We take $T=0.4$ and divide the temporal interval $(0,T]$ into $N$ time slabs of uniform length $\Delta t$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{poly_spacetime.png}
\caption{Test case of Section~\ref{Sec:2DConvergence}. Example of space-time polygonal grid used for the verification test.}
\label{fig:dgpolyspace-time}
\end{figure}
In Figure~\ref{Fig:ConvergenceTest2D} (left) we show the energy norm \eqref{Eq:Norm} of the numerical error $|||\bm{z}-\bm{z}_{DG}|||$ computed for several choices of the time polynomial degree $r=1,2,3$ by varying the time step $\Delta t$. We can observe that the error estimate \eqref{Eq:ErrorEstimate} is confirmed by our numerical results. Moreover, from Figure~\ref{Fig:ConvergenceTest2D} (right) we can observe that the numerical error decreases exponentially with respect to the polynomial degree $r$. In the latter case we fixed $\Delta t = 0.1$ and used 10 polygonal elements for the space mesh, cf. Figure~\ref{fig:dgpolyspace-time}.
\begin{figure}[h!]
\includegraphics[width=0.49\textwidth]{ConvergenceTest2D_Time.png}
\includegraphics[width=0.49\textwidth]{ConvergenceTest2D_Degree.png}
\caption{Test case of Section~\ref{Sec:2DConvergence}. Left: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of time-step $\Delta t$ for $r = 1,2,3$, using a space discretization with a polygonal mesh composed of $60$ elements and $p=8$. Right: computed error $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the polynomial degree $r=p$, using a spatial grid composed of 10 elements and a time step $\Delta t = 0.1$. }
\label{Fig:ConvergenceTest2D}
\end{figure}
\subsubsection{A three-dimensional test case with space-time polytopal meshes}
\label{Sec:3DConvergence}
As a second verification test we consider problem~\eqref{Eq:Elastodynamic} in a three-dimensional setting. Here, we consider $\Omega = (0,1)^3 \subset \mathbb{R}^3$, $T=10$ and we set the external force $\boldsymbol{f}$ and the initial conditions so that the exact solution of \eqref{Eq:Elastodynamic} is $\textbf{z} = (\bold{u},\bold{w})$ given by
\begin{equation}
\label{Testcase}
\bold{u} = \cos(3\pi t)
\begin{bmatrix}
\sin(\pi x)^2\sin(2\pi y)\sin(2\pi z) \\
\sin(2\pi x)\sin(\pi y)^2\sin(2\pi z) \\
\sin(2\pi x)\sin(2\pi y)\sin(\pi z)^2
\end{bmatrix}, \quad \bold{w} = \partial_t\bold{u}.
\end{equation}
We partition $\Omega$ by using a conforming hexahedral mesh of granularity $h$, and we use a uniform partition of step size $\Delta t$ for the time interval $[0,T]$. We choose a polynomial degree $ p \ge 2$ for the space discretization and $ r \ge 1$ for the temporal one. We first set $h=0.125$, corresponding to $512$ elements, fix $p=6$, and let the time step $\Delta t$ vary from $0.4$ to $0.00625$ for $r=1,2,3,4$. The computed energy errors are shown in Figure \ref{Fig:ConvergenceTest3D} (left). We can observe that the numerical results are in agreement with the theoretical ones, cf. Theorem~\ref{Th:ErrorEstimate}. We note that with $r=4$ the error reaches a plateau for $\Delta t \leq 0.025$. However, this effect could easily be overcome by increasing the spatial polynomial degree $p$ and/or by refining the mesh size $h$.
Then, we fix a grid size $h=0.25$ and a time step $\Delta t=0.1$, and vary the polynomial degrees together, $p=r=2,3,4,5$. Figure \ref{Fig:ConvergenceTest3D} (right) shows an exponential decay of the error.
\begin{figure}
\includegraphics[width=0.49\textwidth]{ConvergenceTest3D_Time.png}
\includegraphics[width=0.49\textwidth]{ConvergenceTest3D_Degree.png}
\caption{Test case of Section~\ref{Sec:3DConvergence}. Left: computed errors $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the time-step $\Delta t$, with $r=1,2,3,4$, $h=0.125$ and $p=6$. Right: computed errors $|||\bm{z}-\bm{z}_{DG}|||$ as a function of the polynomial degree $p=r$, with $\Delta t = 0.1$, $h=0.25$.}
\label{Fig:ConvergenceTest3D}
\end{figure}
\subsubsection{Plane wave propagation}
\label{Sec:PlaneWave}
The aim of this test is to compare the performance of the proposed STDG method with the space-time DG method (here referred to as STDG$_0$) first presented in \cite{Paper_Dg-Time} and then applied to 3D problems in \cite{AnMaMi20}. The difference between STDG$_0$ and STDG lies in the way the time approximation is obtained. Indeed, the former integrates the second-order-in-time differential problem, whereas the latter discretizes the first-order-in-time differential system.
On the one hand, as pointed out in \cite{AnMaMi20}, the main limitation of the STDG$_0$ method is the ill-conditioning of the resulting stiffness matrix, which makes the use of iterative solvers quite difficult. Hence, for STDG$_0$ direct methods are used, forcing one to store the stiffness matrix and greatly reducing the range of problems affordable by that method.
On the other hand, even if the final linear systems stemming from the STDG$_0$ and STDG methods are very similar (in fact, they differ only in the definition of the local time matrices), for the latter we obtain a well-conditioned system matrix, making iterative methods employable and complex 3D problems solvable.
Here, we consider a plane wave propagating along the vertical direction in two (horizontally stratified) heterogeneous domains. The source plane wave is polarized in the $x$ direction and its time dependency is given by a unit-amplitude Ricker wavelet with peak frequency at $2~{\rm Hz}$. We impose a free-surface condition on the top surface, absorbing boundary conditions on the bottom surface and homogeneous Dirichlet conditions along the $y$ and $z$ directions on the remaining boundaries. We solve the problem in two domains that differ in dimensions and material properties, referred to as Domain A and Domain B, respectively.
Domain A has dimension $\Omega=(0,100)~{\rm m}\times(0,100)~{\rm m}\times(-500,0)~{\rm m}$, cf. Figure~\ref{Fig:TutorialDomain}, and is partitioned into 3 subdomains corresponding to the different material layers, cf. Table~\ref{Tab:TutorialMaterials}. The subdomains are discretized in space with a uniform cartesian hexahedral grid of size $h = 50~{\rm m}$ that results in 40 elements.
Domain B has dimensions $\Omega=(0,100)~{\rm m}\times(0,100)~{\rm m}\times(-1850,0)~{\rm m}$, and has more layers, cf. Figure~\ref{Fig:TorrettaDomain} and Table~\ref{Tab:TorrettaMaterials}. The subdomains are discretized in space with a cartesian hexahedral grid of size $h$ ranging from $15~{\rm m}$ in the top layer to $50~{\rm m}$ in the bottom layer. Hence, the total number of elements is 1225.
\begin{figure}
\begin{minipage}{\textwidth}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{TutorialDomain}%
\captionof{figure}{Test case of Section~\ref{Sec:PlaneWave}-Domain A. Computational domain $\Omega = \cup_{\ell=1}^{3}\Omega_{\ell}$.}
\label{Fig:TutorialDomain}
\end{minipage}
\hfill
\begin{minipage}{0.65\textwidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Layer & Height $[m]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\
\hline
\hline
$\Omega_1$ & $ 150 $ & $1800$ & $600$ & $300$ & $0.166$ \\
\hline
$\Omega_2$ & $ 300 $ & $2200$ & $4000$ & $2000$ & $0.025$ \\
\hline
$\Omega_3$ & $ 50 $ & $2200$ & $4000$ & $2000$ & $0.025$ \\
\hline
\end{tabular}
\captionof{table}{Mechanical properties for the test case of Section~\ref{Sec:PlaneWave}-Domain A. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 - 2\mu$.}
\label{Tab:TutorialMaterials}
\end{minipage}
\end{minipage}
\end{figure}
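For the record, the conversion used in the table captions follows from the standard wave-speed identities $c_s^2=\mu/\rho$ and $c_p^2=(\lambda+2\mu)/\rho$, i.e. $\mu=\rho c_s^2$ and $\lambda=\rho c_p^2-2\mu$ (note the factor $2$ multiplying $\mu$). As an illustration, for layer $\Omega_1$ of Domain A:

```python
def lame_from_speeds(rho, cp, cs):
    """Lame parameters from density and P/S wave speeds:
    mu = rho * cs^2, lambda = rho * cp^2 - 2 * mu (SI units: Pa)."""
    mu = rho * cs ** 2
    lam = rho * cp ** 2 - 2 * mu
    return lam, mu

# layer Omega_1 of Domain A: rho = 1800 kg/m^3, c_p = 600 m/s, c_s = 300 m/s
lam, mu = lame_from_speeds(1800.0, 600.0, 300.0)
```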
\begin{figure}
\begin{minipage}{\textwidth}
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{TorrettaDomain}%
\captionof{figure}{Test case of Section~\ref{Sec:PlaneWave}-Domain B. Computational domain $\Omega = \cup_{\ell=1}^{11}\Omega_{\ell}$.}
\label{Fig:TorrettaDomain}
\end{minipage}
\hfill
\begin{minipage}{0.65\textwidth}
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Layer & Height $[m]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\
\hline
\hline
$\Omega_1$ & $ 15 $ & $1800$ & $1064$ & $236$ & $0.261$ \\
\hline
$\Omega_2$ & $ 15 $ & $1800$ & $1321$ & $294$ & $0.216$ \\
\hline
$\Omega_3$ & $ 20 $ & $1800$ & $1494$ & $332$ & $0.190$ \\
\hline
$\Omega_4$ & $ 30 $ & $1800$ & $1664$ & $370$ & $0.169$ \\
\hline
$\Omega_5$ & $ 40 $ & $1800$ & $1838$ & $408$ & $0.153$ \\
\hline
$\Omega_6$ & $60 $ & $1800$ & $2024$ & $450$ & $0.139$ \\
\hline
$\Omega_7$ & $ 120 $ & $2050$ & $1988$ & $523$ & $0.120$ \\
\hline
$\Omega_8$ & $500 $ & $2050$ & $1920$ & $600$ & $0.105$ \\
\hline
$\Omega_9$ & $ 400 $ & $2400$ & $3030$ & $1515$ & $0.041$ \\
\hline
$\Omega_{10}$ & $ 600 $ & $2400$ & $4180$ & $2090$ & $0.030$ \\
\hline
$\Omega_{11}$ & $ 50 $ & $2450$ & $5100$ & $2850$ & $0.020$ \\
\hline
\end{tabular}
\captionof{table}{Mechanical properties for the test case of Section~\ref{Sec:PlaneWave}-Domain B. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 - 2\mu$.}
\label{Tab:TorrettaMaterials}
\end{minipage}
\end{minipage}
\end{figure}
In Figure~\ref{Fig:PlanewaveDisplacement} on the left (resp. on the right) we report the computed displacement $\bm{u}$ along the $x-$axis, registered at point $P=(50, 50, 0)~{\rm m}$ located on the top surface
for Domain A (resp. Domain B).
We compare the results with those obtained in \cite{AnMaMi20}, choosing a polynomial degree $p=r=2$ in both space and time variables and a time step $\Delta t = 0.01$. In both cases, we can observe a perfect agreement of the two solutions.
\begin{figure}
\includegraphics[width=0.49\textwidth]{TutorialDisp.png}
\includegraphics[width=0.49\textwidth]{TorrettaDisp.png}
\caption{Test case of Section~\ref{Sec:PlaneWave}. Computed displacement $\bm{u}$ along $x-$axis registered at $P=(50, 50, 0)~{\rm m}$ obtained employing the proposed formulation, i.e. STDG method, and the method \cite{AnMaMi20}, i.e. STDG$_0$, for Domain A (left) and Domain B (right). We set the polynomial degree $p=r=2$ in both space and time dimensions and time step $\Delta t = 0.01$.}
\label{Fig:PlanewaveDisplacement}
\end{figure}
In Table~\ref{Tab:Comparison} we collect the condition number of the system matrix, the number of GMRES iterations and the execution time for the STDG$_0$ and STDG methods applied on a single time integration step, computed by using Domain A and Domain B, respectively.
From the results we observe that the proposed STDG method outperforms the STDG$_0$ one in terms of condition number and GMRES iteration counts for the solution of the corresponding linear system. Clearly, for small problems, when the storage of the system matrix and the use of a direct solver are possible, the STDG$_0$ remains the most efficient solution.
\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dom.} &
\multirow{2}{*}{$p$} &
\multicolumn{2}{c|}{Condition number} & \multicolumn{2}{c|}{\# GMRES it.} & \multicolumn{2}{c|}{Execution time [s]} \\ \cline{3-8}
&& STDG$_0$ & STDG & STDG$_0$ & STDG & STDG$_0$ & STDG \\
\hline
\hline
A & 2 & $1.2\cdot10^9$ & $1.3\cdot10^2$ & $1.5\cdot10^4$ & $27$ & $1.1$ & $3.0\cdot10^{-3}$\\
\hline
A & 4 & $2.7\cdot10^{10}$ & $2.8\cdot10^3$ & $>10^6$ & $125$ & $>2200$ & $0.3\cdot10^{-1}$\\
\hline
B & 2 & $1.3\cdot10^{14}$ & $5.0\cdot10^2$ & $4.2\cdot10^5$ & $56$ & $452.3$ & $6.5\cdot10^{-2}$\\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:PlaneWave}. Comparison between the proposed formulation \eqref{Eq:WeakProblem} and the method presented in \cite{AnMaMi20} in terms of conditioning and iterative resolution. We set $p=r$ and we fix the relative tolerance for the GMRES convergence at $10^{-12}$. }
\label{Tab:Comparison}
\end{table}
\subsubsection{Layer over a half-space}
\label{Sec:LOH1}
In this experiment, we test the performance of the STDG method on a benchmark for a realistic elastodynamics application, known in the literature as layer over a half-space (LOH), cf. \cite{DaBr01}. We let $\Omega=(-15,15)\times(-15,15) \times(0,17)~{\rm km}$ be composed of two layers with different material properties, cf. Table~\ref{Table:LOH1Materials}. The domain is partitioned employing two conforming meshes of different granularity. The ``fine'' (resp. ``coarse'') grid is composed of $352800$ (resp. $122400$) hexahedral elements, with sizes varying from $86~{\rm m}$ (resp. $167~{\rm m}$) in the top layer to $250~{\rm m}$ (resp. $500~{\rm m}$) in the bottom half-space, cf. Figure~\ref{Fig:LOH1Domain}. On the top surface we impose a free surface condition, i.e. $\boldsymbol{\sigma} \textbf{n} = \textbf{0}$, whereas on the lateral and bottom surfaces we consider absorbing boundary conditions \cite{stacey1988improved}.
\begin{figure} [h!]
\centering
\includegraphics[width=0.9\textwidth]{LOHDomain}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Computational domain $\Omega = \cup_{\ell=1}^{2}\Omega_{\ell}$ and its partition.}
\label{Fig:LOH1Domain}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\hline
Layer & Height $[km]$ & $\rho [kg/m^3]$ & $c_p [m/s]$ & $c_s [m/s]$ & $\zeta [1/s]$ \\
\hline
\hline
$\Omega_1$ & $ 1 $ & $2600$ & $4000$ & $2000$ & $0$ \\
\hline
$\Omega_2$ & $ 16 $ & $2700$ & $6000$ & $3464$ & $0$ \\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:LOH1}. Mechanical properties of the medium. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -\mu$.}
\label{Table:LOH1Materials}
\end{table}
The seismic excitation is given by a double couple point source located at the center of the domain and expressed by
\begin{equation}
\label{Eq:LOH1Source}
\bm{f}(\bm{x},t) = \nabla \delta (\bm{x}-\bm{x}_S)M_0\bigg(\frac{t}{t_0^2}\bigg)\exp{(-t/t_0)},
\end{equation}
where $\bm{x}_S = (0,0,2)~{\rm km}$, $M_0 = 10^8~{\rm Nm}$ is the scalar seismic moment, $t_0 = 0.1~{\rm s}$ is the smoothness parameter, regulating the frequency content and amplitude of the source time function. The semi-analytical solution is available in \cite{DaBr01} together with further details on the problem's setup.
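For illustration, the temporal part of the source \eqref{Eq:LOH1Source} can be evaluated directly; this short Python sketch (function name ours, defaults taken from the values above) shows that the source time function peaks at $t = t_0$:

```python
import math

def moment_rate(t, M0=1e8, t0=0.1):
    """Temporal part of the double-couple source in Eq. (LOH1Source):
    M0 * (t / t0**2) * exp(-t / t0)."""
    return M0 * (t / t0 ** 2) * math.exp(-t / t0)

# The factor t * exp(-t / t0) is maximal at t = t0, so the source
# peaks at t = t0 = 0.1 s with value M0 / t0 * exp(-1).
peak = moment_rate(0.1)
```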
We employ the STDG method with different choices of polynomial degrees and time integration steps. In Figures~\ref{Fig:LOH1ResultsFine41}-\ref{Fig:LOH1ResultsCoarse44-2} we show the velocity wave field computed at point $(6,8,0)~{\rm km}$ along with the reference solution, in both the time and frequency domains, for the sets of parameters tested. We also report the relative seismogram error
\begin{equation}
\label{Eq:LOH1Error}
E = \frac{\sum_{i=1}^{n_S}(\bm{u}_{\delta}(t_i)-\bm{u}(t_i))^2}{\sum_{i=1}^{n_S}(\bm{u}(t_i)^2)},
\end{equation}
where $n_S$ is the number of samples of the seismogram, and $\bm{u}_{\delta}(t_i)$ and $\bm{u}(t_i)$ are, respectively, the value of the seismogram at sample $t_i$ and the corresponding reference value. In Table~\ref{Table:LOH1Sensitivity} we report the set of discretization parameters employed, together with some results obtained in terms of accuracy and computational efficiency.
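A possible implementation of the misfit \eqref{Eq:LOH1Error} (function name ours) is:

```python
def seismogram_error(u_num, u_ref):
    """Relative seismogram misfit E of Eq. (LOH1Error):
    sum_i (u_num_i - u_ref_i)^2 / sum_i u_ref_i^2."""
    num = sum((a - b) ** 2 for a, b in zip(u_num, u_ref))
    den = sum(b ** 2 for b in u_ref)
    return num / den
```

A perfect match gives $E = 0$, and the error grows quadratically with the pointwise mismatch.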
\begin{figure} [h!]
\centering
\includegraphics[width=0.5\textwidth]{LOH_4_1_Fine_Vel.png}%
\includegraphics[width=0.5\textwidth]{LOH_4_1_Fine_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``fine'' grid, polynomial degree $p=4$ for space and $r=1$ for time domain, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsFine41}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{LOH_4_2_Fine_Vel.png}%
\includegraphics[width=0.49\textwidth]{LOH_4_2_Fine_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``fine'' grid, polynomial degree $p=4$ for space and $r=2$ for time domain, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsFine42}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{LOH_4_4_-3_Coarse_Vel.png}%
\includegraphics[width=0.49\textwidth]{LOH_4_4_-3_Coarse_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``coarse'' grid, polynomial degree $p=4$ for space and $r=4$ for time domain, and time-step $\Delta t = 10^{-3}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsCoarse44-3}
\end{figure}
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{LOH_4_4_-2_Coarse_Vel.png}%
\includegraphics[width=0.49\textwidth]{LOH_4_4_-2_Coarse_Freq.png}%
\captionof{figure}{Test case of Section~\ref{Sec:LOH1}. Velocity wave field recorded at $(6,8,0)~{\rm km}$ along with the reference solution (black line), in the time domain (left) and frequency domain (right), obtained with the ``coarse'' grid, polynomial degree $p=4$ for space and $r=4$ for time domain, and time-step $\Delta t = 5\cdot10^{-2}~{\rm s}$. The error $E$ is computed as in \eqref{Eq:LOH1Error}.}
\label{Fig:LOH1ResultsCoarse44-2}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Grid} & \multirow{2}{*}{$p$} & \multirow{2}{*}{$r$} & \multirow{2}{*}{$\Delta t~[{\rm s}]$} & GMRES & Exec. Time & Tot. Exec. & \multirow{2}{*}{Error $E$}\\
&&&&iter.&per iter. [s] &Time [s]&\\
\hline
\hline
Fine & $4$ & $1$ & $10^{-3}$ & $6$ & $2.9$ & $3.08\cdot10^{4}$ & $0.015$ \\
\hline
Fine & $4$ & $2$ & $10^{-3}$ & $8$ & $5.6$ & $6.59\cdot10^{4}$ & $0.020$ \\
\hline
Coarse & $4$ & $4$ & $10^{-3}$ & $12$ & $7.6$ & $8.14\cdot10^{4}$ & $0.229$ \\
\hline
Coarse & $4$ & $4$ & $5\cdot10^{-2}$ & $24$ & $27.9$ & $7.22\cdot10^{4}$ & $0.329$ \\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:LOH1}. Set of discretization parameters employed, and corresponding results in terms of computational efficiency and accuracy. The execution times are computed employing $512$ parallel processes, on \textit{Marconi100} cluster located at CINECA (Italy).}
\label{Table:LOH1Sensitivity}
\end{table}
By employing the ``fine'' grid we obtain very good results both in terms of accuracy and efficiency. Indeed, the minimum relative error is less than $2\%$ with time polynomial degree $r=1$, see Figure~\ref{Fig:LOH1ResultsFine41}. Choosing $r=2$, as in Figure~\ref{Fig:LOH1ResultsFine42}, the error is larger (by about $40\%$) but the solution is still sufficiently accurate. However, in terms of total execution time, with $r=1$ the algorithm performs better than with $r=2$, cf. Table~\ref{Table:LOH1Sensitivity}, column 7.
As shown in Figure~\ref{Fig:LOH1ResultsCoarse44-3}, the ``coarse'' grid produces larger errors and also worsens the computational efficiency, since the number of GMRES iterations for a single time step increases. Increasing the integration time step $\Delta t$, see Figure~\ref{Fig:LOH1ResultsCoarse44-2}, causes an increase of the execution time for a single time step that partly compensates the decrease of the total number of time steps. Consequently, the total execution time is reduced, but only by 12\%. In addition, this choice causes some non-physical oscillations in the coda part of the signal that contribute to increasing the relative error.
We can therefore conclude that, for this test case, the spatial discretization is the most crucial aspect. Refining the mesh produces a marked decrease of the relative error and increases the overall efficiency of the method. Concerning time integration, the method performs well even with low-order polynomial degrees, both in terms of computational efficiency and of accuracy.
The method achieves its goal of accurately solving this elastodynamics problem, which counts between 119 (``coarse'' grid) and 207 (``fine'' grid) million unknowns. The good properties of the proposed STDG method are once again highlighted by the fact that all the presented results are achieved without any preconditioning of the linear system.
\subsection{Seismic wave propagation in the Grenoble valley}
\label{Sec:Grenoble}
In this last experiment, we apply the STDG method to a real geophysical study \cite{ChSt10}. This application consists of the simulation of seismic wave propagation generated by a hypothetical earthquake of magnitude $M_w = 6$ in the Grenoble valley, in the French Alps. The Y-shaped Grenoble valley, whose location is represented in Figure~\ref{Fig:GrenobleDomain}, is filled with late Quaternary deposits, a much softer material than the one composing the surrounding mountains. We approximate the mechanical characteristics of the ground by employing three different material layers, whose properties are listed in Table~\ref{Table:GrenobleMaterials}. The alluvial basin layer contains the soft sediments that fill the Grenoble valley and corresponds to the yellow portion of the domain in Figure~\ref{Fig:GrenobleDomain}. The two bedrock layers approximate the stiff materials composing the surrounding Alps and the first crustal layer. The earthquake generation is simulated through a kinematic fault rupture along a plane whose location is represented in Figure~\ref{Fig:GrenobleDomain}.
\begin{figure} [h!]
\centering
\includegraphics[width=0.9\textwidth]{grenoble_paraview_2}%
\caption{Test case of Section~\ref{Sec:Grenoble}. Geophysical domain and its location.}
\label{Fig:GrenobleDomain}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
Layer & $\rho~[{\rm kg/m^3}]$ & $c_s~[\rm{m/s}]$ & $c_p~[\rm{m/s}]$ & $ \zeta~[\rm {1/s}]$ \\
\hline
\hline
Alluvial basin & $2140 + 0.125\,z_{d}$ & $300 + 19\sqrt{z_{d}}$ & $1450 + 1.2\,z_{d}$ & $0.01$ \\
\hline
Bedrock $(0-3)$ km & 2720 & 3200 & 5600 & 0 \\
\hline
Bedrock $(3-7)$ km & 2770 & 3430 & 5920 & 0 \\
\hline
\end{tabular}
\caption{Test case of Section~\ref{Sec:Grenoble}. Mechanical properties of the medium. Here, the Lam\'e parameters $\lambda$ and $\mu$ can be obtained through the relations $\mu = \rho c_s^2$ and $\lambda = \rho c_p^2 -\mu$. $z_{d}$ measures the depth of a point calculated from the top surface.}
\label{Table:GrenobleMaterials}
\end{table}
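The depth-dependent profile of the alluvial basin in Table~\ref{Table:GrenobleMaterials} can be sketched as follows (function name ours; $z_d$ in metres, measured from the top surface as in the table):

```python
import math

def alluvial_properties(z_d):
    """Depth-dependent material of the alluvial basin layer
    (cf. Table of mechanical properties); z_d >= 0 is the depth [m]."""
    rho = 2140.0 + 0.125 * z_d           # density [kg/m^3]
    c_s = 300.0 + 19.0 * math.sqrt(z_d)  # shear wave speed [m/s]
    c_p = 1450.0 + 1.2 * z_d             # pressure wave speed [m/s]
    return rho, c_s, c_p
```

The sediments therefore stiffen with depth, but even at the bottom of the basin they remain much softer than the surrounding bedrock.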
The computational domain $\Omega=(0,50)\times(0,47)\times (-7,3)~{\rm km}$ is discretized with a fully unstructured hexahedral mesh represented in Figure~\ref{Fig:GrenobleDomain}. The mesh, composed of $202983$ elements, is refined in the valley with a mesh size $h=100~{\rm m}$, while it is coarser in the bedrock layers reaching $h\approx 1~{\rm km}$.
\begin{figure} [h!]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{monitors}%
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{GrenobleCrossSection}%
\end{minipage}
\caption{Test case of Section~\ref{Sec:Grenoble}. Left: surface topography in the Grenoble area. The white line indicates the monitor points examined in Figure~\ref{Fig:GrenobleVel}. Right: cross section of the valley at the monitor points.}
\label{Fig:GrenoblePoints}
\end{figure}
\begin{figure} [h!]
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{sismogrammi}%
\end{minipage}
\caption{Test case of Section~\ref{Sec:Grenoble}. Computed velocity field at the monitored points in Figure~\ref{Fig:GrenoblePoints}, together with the computed peak ground velocity for each monitor point.
Comparison between the STDG (black) solution and the SPECFEM (red) solution \cite{Chaljub2010QuantitativeCO}.}
\label{Fig:GrenobleVel}
\end{figure}
On the top surface we impose a free surface condition, i.e. $\boldsymbol{\sigma} \textbf{n} = \textbf{0}$, whereas on the lateral and bottom surfaces we consider absorbing boundary conditions \cite{stacey1988improved}. We employ the STDG method with polynomial degree $p=3$ for the space discretization and $r=1$ for the time integration, together with a time step $\Delta t = 10^{-3}~{\rm s}$. We focus on a set of monitor points whose locations are represented in Figure~\ref{Fig:GrenoblePoints}. In Figure~\ref{Fig:GrenobleVel}, we report the velocity field registered at these points, compared with the one obtained with a different code, namely SPECFEM \cite{Chaljub2010QuantitativeCO}. The results are consistent with the different locations of the points. Indeed, we observe highly perturbed waves at points $1$--$7$, which are located in the valley, i.e. in the alluvial material. This is caused by a refraction effect that arises when a wave moves into a soft material from a stiffer one. Moreover, the wave remains trapped inside the layer, bouncing off the stiffer interfaces. The absence of this effect is evident at monitors $8$ and $9$, which are located in the bedrock material. These typical behaviors are also clearly visible in Figure~\ref{Fig:GrenobleSnap}, where the magnitude of the ground velocity is represented at different time instants.
Finally, concerning the computational efficiency of the scheme, we report that, with this choice of discretization parameters, we obtain a linear system with approximately $36$ million degrees of freedom that is solved in $17.5$ hours, employing $512$ parallel processes, on the \textit{Marconi100} cluster located at CINECA (Italy).
\begin{figure} [h!]
\centering
\includegraphics[width=0.49\textwidth]{snapshot5}
\includegraphics[width=0.49\textwidth]{snapshot9}
\includegraphics[width=0.49\textwidth]{snapshot13}
\includegraphics[width=0.49\textwidth]{snapshot17}
\caption{Test case of Section~\ref{Sec:Grenoble}. Computed ground velocity at different time instants obtained with polynomial degrees $p=3$ and $r=1$, for space and time, respectively, and $\Delta t = 10^{-3}~s$.}
\label{Fig:GrenobleSnap}
\end{figure}
\section{Conclusions}
In this work we have presented and analyzed a new time Discontinuous Galerkin method for the solution of a system of second-order differential equations. We have built an energy norm that arises naturally from the variational formulation of the problem, and we have employed it to prove well-posedness, stability and error bounds. Through a manipulation of the resulting linear system, we have reduced the computational cost of the solution phase, and we have implemented and tested our method in the open-source software SPEED (\url{http://speed.mox.polimi.it/}). Finally, we have verified and validated the proposed numerical algorithm through two- and three-dimensional benchmarks, as well as real geophysical applications.
\section{Acknowledgements}
This work was partially supported by the ``National Group of Computing Science'' (GNCS-INdAM). P.F. Antonietti has been supported by the PRIN research grant n. 201744KLJL funded by the Ministry of Education, Universities and Research (MIUR).
\section{Introduction}
The properties of particle interactions determine the evolution of a quantum chromodynamical (QCD) system. Thorough understanding of these properties can help answer many fundamental questions in physics, such as the origin of the Universe or the unification of forces. This is one of the important reasons to collect data with particle accelerators, such as the Large Hadron Collider (LHC) at CERN. However, when collecting this data, we only register complex signals of high dimensionality which we can later interpret as signatures of final particles in the detectors. This interpretation stems from the fact that we, more or less, understand the underlying processes that produce the final particles.
In essence, of all the particles produced in a collision at the accelerator, only the electron, the proton, the photon and the neutrinos are stable and can be reconstructed with certainty, provided one has the proper detector. Other particles are sometimes also directly detected, given that they reach the active volume of the detector without first decaying. These include muons, neutrons, charged pions and charged kaons. On the other hand, short-lived particles will almost surely decay before reaching the detector, and we can only register the particles they decay into.
A similar situation arises with quarks, antiquarks and gluons, the building blocks of colliding nuclei. When a high energy collision happens, a quark within a nucleus behaves almost as if it does not interact with neighbouring particles, because of a property called asymptotic freedom. If it is struck by a particle from the other nucleus, it can be given sufficient momentum pointing outwards from the parent nucleus. However, we know that there are no free quarks in nature and that this quark needs to undergo a process called hadronisation. This is a process in which quark-antiquark pairs are generated such that they form hadrons. Most of the hadrons are short-lived and they decay into other, more stable, hadrons. The end result of this process is a jet of particles whose average momentum points in the direction of the original outgoing quark. Unfortunately, we do not know the exact decay properties of quarks and gluons, which serves as a motivation for this work.
The determination of these properties is a long-standing problem in particle physics. To determine them, we turn to already produced data and try to fit decay models onto them. With every new set of data our understanding changes. This is evident from the fact that, when simulating a collision event, we can obtain, on average, slightly different results with different versions of the same tool \cite{pythia}. Therefore, even though simulation tools are regularly reinforced with new observations from data, we cannot expect the complete physical truth from them.
Instead of trying to perform direct fits to data, we propose the use of machine learning methods to determine the decay properties. In fact, the onset of these methods is already hinted at in the traditional approach, since a multivariate fit of decay models to data is already a form of machine learning. It is only natural to extend the existing methods, since we cannot rely entirely on simulated data. In this work, we develop an interpretable model by first simulating a system of particles with well defined masses, decay channels, and decay probabilities. We take this to be the ,,true system'', whose decay properties we pretend not to know and want to reproduce. Mimicking the real world, we assume to only have the data that this system produces in the detector. Next, we employ an iterative method which uses a neural network as a classifier between events produced in the detector by the ,,true system'' and some arbitrary ,,test system''. In the end, we compare the distributions obtained with the iterative method to the ,,true'' distributions.
This paper is organized as follows: in the Materials and methods section we describe the developed artificial physical system and the algorithm used to recover the underlying probability distributions of the system. We also present in detail the methodology used to obtain the presented results. In the Results section we present our findings and assess whether our hypothesis holds true. We conclude the paper with the Discussion section.
\section{Materials and methods}
The code used for the development of the particle generator, the neural network models and the calculations is written in the Python programming language using the Keras module with the TensorFlow2 backend \cite{keras}. The calculations were performed using a standardized PC setup equipped with an NVIDIA Quadro P6000 graphics processing unit.
\subsection{The physical system}
In particle physics, jets are detected as collimated streams of particles. The jet production mechanism is in essence clear: partons from the initial hard process undergo the fragmentation and hadronization processes. In this work, we develop a simplified physical model in which the fragmentation process is modeled as cascaded $1 \rightarrow 2$ independent decays of partons with a constant number of decays. This way, any single jet can be represented as a perfect binary tree of depth $N$, corresponding to $2^N$ particles in the final state. Since the initial parton properties are set, jets can be described by $2^N - 1$ decays. We represent each decay of a mother parton of mass $M$ by four real numbers $(\frac{m_1}{M}, \frac{m_2}{M}, \theta, \phi)$, where $m_1$ and $m_2$ are the masses of the daughter particles and $\theta$ and $\phi$ are the polar and azimuthal angles of the lighter particle, as measured from the rest frame of the mother particle. For simplicity we make all the decays isotropic, which is not necessarily true in real processes. To fully define our physical system we set a decay probability distribution function $p(m_1, m_2 | M)$, the details of which are given in the following subsection. The aim of our proposed algorithm is to recover these underlying probability distributions, assuming we have no information on them, using only a dataset consisting of jets described with the final particles' four-momenta, as one would get from a detector.
\subsection{Particle generator}
\label{ParticleGenerator}
To generate the jets, we developed an algorithm in which we take a particle of known mass that undergoes three successive decays. We consider only the possibility of discrete decays, in the sense that the decay product masses and decay probabilities are well defined. We consider a total of 10 types of particles, labelled A--J, which can only decay into each other. The masses and the decay probabilities of these particles are given in Table \ref{TableParticles}. In this scenario, the ,,decay probabilities'' $p$ are given by the ratios of decay amplitudes. Thus, the total sum of the probabilities for a given particle to decay into others has to be one, and the probabilities describe the number of produced daughters per $N$ decays, scaled by $1/N$.
\vskip 5mm
\begin{table}[h!t!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{particle} & \multicolumn{2}{|c|}{A} & \multicolumn{2}{|c|}{B} & \multicolumn{2}{|c|}{C} & \multicolumn{2}{|c|}{D}& \multicolumn{2}{|c|}{E} \\\hline
\multicolumn{2}{|c|}{mass} & \multicolumn{2}{|c|}{0.1} & \multicolumn{2}{|c|}{0.6} & \multicolumn{2}{|c|}{1.3} & \multicolumn{2}{|c|}{1.9}& \multicolumn{2}{|c|}{4.4} \\\hline
\multicolumn{2}{|c|}{$p$ / channel} & 1 & A & 0.7 & B & 1 & C & 0.3 & A+C & 0.6 & C+C \\\hline
\multicolumn{2}{|c|}{} & & & 0.3 & A+A & & & 0.3 & A+A & 0.4 & E \\\hline
\multicolumn{2}{|c|}{} & & & & & & & 0.4 & D & & \\\hline\hline
\multicolumn{2}{|c|}{particle} & \multicolumn{2}{|c|}{F} & \multicolumn{2}{|c|}{G} & \multicolumn{2}{|c|}{H} & \multicolumn{2}{|c|}{I}& \multicolumn{2}{|c|}{J} \\\hline
\multicolumn{2}{|c|}{mass} & \multicolumn{2}{|c|}{6.1} & \multicolumn{2}{|c|}{8.4} & \multicolumn{2}{|c|}{14.2} & \multicolumn{2}{|c|}{18.1}& \multicolumn{2}{|c|}{25} \\\hline
\multicolumn{2}{|c|}{$p$ / channel} & 0.5 & A+A & 0.9 & B+B & 0.6 & D+D & 1 & F+G & 0.5 & F+I \\\hline
\multicolumn{2}{|c|}{} & 0.5 & B+C & 0.1 & A+F & 0.25 & D+E & & & 0.4 & G+H \\\hline
\multicolumn{2}{|c|}{} & & & & & 0.15 & E+F & & & 0.1 & E+E \\\hline
\end{tabular}
\caption{Allowed particle decays in the discrete model. The designation $p$/channel shows the probability that a mother particle will decay into specific daughters.}
\label{TableParticles}
\end{table}
\vskip 5mm
Particles A--E are set to be long-lived and can thus be detected in the detector, which only sees the decay products after several decays. This can be seen in Table \ref{TableParticles} as a probability for a particle to decay into itself. In this way, we ensure two things: first, that we have stable particles and, second, that each decay in the binary tree is recorded, even if it is represented by a particle decaying into itself. Particles A and C are completely stable, since they only have one ,,decay'' channel, in which they decay back into themselves. On the other hand, particles F--I are hidden resonances: if one of them appears in the $i$-th step of the decay chain, it will surely decay into other particles in the next, $(i+1)$-th step of the chain.
To create a jet, we start with particle J, which we call the mother particle, and allow it to decay in one of the decay channels. Each of the daughter particles then decays according to their decay channels, and this procedure repeats a total of 3 times. In the end, we obtain a maximum of 8 particles from the set A--E, with known momenta as measured from the rest frame of the mother particle. An example of a generated jet is given in Fig.\ref{FigRaspadi}.
\begin{figure}[h!t!]
\centering
\begin{forest}
for tree={
grow=east,
edge={->},
parent anchor=east,
child anchor=west,
s sep=1pt,
l sep=1cm
},
[J
[F
[A[A]]
[A[A]]
]
[I
[F
[B]
[C]
]
[G
[B]
[B]
]
]
]
\end{forest}
\caption{An example of the operation of the discrete jet generator. The mother particle J decays into particles I and F. According to decay probabilities, this happens in half the generated jets. The daughter particles subsequently decay two more times, leaving only stable, detectable particles in the final state.}
\label{FigRaspadi}
\end{figure}
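The decay table and the cascaded sampling described above can be sketched in Python (the names and data structure are ours; the kinematic part of each decay, i.e. masses and angles, is omitted):

```python
import random

# Decay table of the discrete model (cf. Table of allowed decays):
# each entry maps a particle to a list of (probability, daughters) channels.
DECAYS = {
    "A": [(1.0, ("A",))],
    "B": [(0.7, ("B",)), (0.3, ("A", "A"))],
    "C": [(1.0, ("C",))],
    "D": [(0.3, ("A", "C")), (0.3, ("A", "A")), (0.4, ("D",))],
    "E": [(0.6, ("C", "C")), (0.4, ("E",))],
    "F": [(0.5, ("A", "A")), (0.5, ("B", "C"))],
    "G": [(0.9, ("B", "B")), (0.1, ("A", "F"))],
    "H": [(0.6, ("D", "D")), (0.25, ("D", "E")), (0.15, ("E", "F"))],
    "I": [(1.0, ("F", "G"))],
    "J": [(0.5, ("F", "I")), (0.4, ("G", "H")), (0.1, ("E", "E"))],
}

def decay_once(particle, rng):
    """Sample one decay channel for `particle` via inverse transform sampling."""
    r, acc = rng.random(), 0.0
    for prob, daughters in DECAYS[particle]:
        acc += prob
        if r < acc:
            return list(daughters)
    return list(DECAYS[particle][-1][1])  # guard against rounding of the cumulative sum

def generate_jet(rng, steps=3):
    """Start from the mother particle J and apply `steps` cascaded decays."""
    generation = ["J"]
    for _ in range(steps):
        generation = [d for p in generation for d in decay_once(p, rng)]
    return generation
```

Since a particle decaying into itself contributes a single daughter, three steps starting from J yield between two and eight particles, matching the maximum of 8 final particles stated above.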
\subsection{Introduction to the algorithm}
Let us assume we have two distinct datasets: one that consists of samples of a random variable X distributed with an unknown probability density $p(x)$, which we call the ,,real'' dataset, and another, which consists of samples of a random variable Y distributed with a known probability density $q(x)$, which we call the ,,test'' dataset. We would like to perform a hypothesis test between $H_{0}:p = p(x)$ and $H_{1}:p = q(x)$ using a likelihood-ratio test. The approach we use follows earlier work employing the Neyman–Pearson lemma \cite{NNNP1, NNNP2, NNNP3}. This lemma states that the likelihood ratio, $\Lambda$, given by:
\begin{equation}
\Lambda (p \mid q)\equiv \frac {{\mathcal {L}}(x \mid real)}{{\mathcal {L}}(x \mid test)} = \frac{p(x)}{q(x)}
\label{NP}
\end{equation}
is the most powerful test at the given significance level \cite{NeyPear}.
We can obtain an approximate likelihood ratio $\Lambda$ by transforming the output of a classifier used to discriminate between the two datasets. Assume that the classifier is a neural network optimized by minimizing the \textit{crossentropy} loss. In this case, the network output gives the probability of $x$ being a part of the real dataset $C_{nn}(x) = p(real \mid x)$ \cite{NNProbability}. If the datasets consist of the same number of samples, we can employ the Bayes' theorem in a simple manner:
\begin{eqnarray}
p(real \mid x) &=& \frac{p(x \mid real)p(real)}{p(x \mid real) p(real)+p(x \mid test)p(test)} \nonumber \\
&=& \frac{p(x \mid real)}{p(x \mid real)+p(x \mid test)} = \frac{\Lambda}{\Lambda+1}\,.
\label{Bayes}
\end{eqnarray}
A simple inversion of Eq.\ref{Bayes} gives:
\begin{equation}
\Lambda = \frac{p(x)}{q(x)} = \frac{C_{\textrm{NN}}(x)}{1 - C_{\textrm{NN}}(x)},
\end{equation}
\begin{equation}
p(x) = \frac{C_{\textrm{NN}}(x)}{1 - C_{\textrm{NN}}(x)} q(x).
\label{pq}
\end{equation}
Therefore, in ideal conditions, the unknown probability density $p(x)$ describing the real dataset can be recovered with the help of the known probability density $q(x)$ and a classifier, using Eq.\ref{pq}. It must be noted that Eq.\ref{pq} is strictly correct only for optimal classifiers, which are unattainable. In our case, the classifier is optimized by minimizing the \textit{crossentropy} loss defined by:
\begin{equation}
L = -\frac{1}{n}\sum_{i=1}^{n}\left[y(x_i)\ln C_{\textrm{NN}}(x_i) + (1-y(x_i))\ln (1-C_{\textrm{NN}}(x_i)) \right]\,,
\end{equation}
where $y(x_i)$ is 1 if $x_i$ is a part of the real dataset, and 0 if $x_i$ is a part of the test dataset. We can safely assume that the final value of loss of the suboptimal classifier is greater than the final value of loss of the optimal classifier:
\begin{equation}
L_{\textrm{optimal}} < L < \ln{2} \,.
\end{equation}
The value of $\ln 2$ is obtained under the assumption of the \textit{worst} possible classifier. To show this, we regroup the sums in the loss function into two parts, corresponding to the real and the test distributions:
\begin{equation}
-\frac{1}{n}\sum_{i \in real}\ln C_{\textrm{NN}}^{\textrm{optimal}}(x_i) < -\frac{1}{n}\sum_{i \in real}\ln C_{\textrm{NN}}(x_i) < -\frac{1}{n}\sum_{i \in real}\ln \frac{1}{2},
\label{Lreal}
\end{equation}
\begin{equation}
-\frac{1}{n}\sum_{i \in test}\ln\left[1 - C_{\textrm{NN}}^{\textrm{optimal}}(x_i) \right]< -\frac{1}{n}\sum_{i \in test}\ln\left[1 - C_{\textrm{NN}}(x_i)\right] < -\frac{1}{n}\sum_{i \in test}\ln \frac{1}{2}.
\label{Ltest}
\end{equation}
After expanding inequality \ref{Lreal} we obtain:
\begin{equation}
-\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{C_{\textrm{NN}}^{\textrm{optimal}}(x_i)}{1 - C_{\textrm{NN}}^{\textrm{optimal}}(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln \left[\frac{C_{\textrm{NN}}(x_i)}{1 - C_{\textrm{NN}}(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln 1.
\label{Expanded}
\end{equation}
According to Eq.\ref{pq}, we can recover the real probability density $p(x)$ when using the optimal classifier. However, if one uses a suboptimal classifier, a slightly different probability density $p'(x)$ will be calculated. Since the ratios that appear as arguments of the logarithms in Eq.\ref{Expanded} correspond to distribution ratios, it follows that:
\begin{equation}
-\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{p(x_i)}{q(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln \left[ \frac{p'(x_i)}{q(x_i)}\right] < -\frac{1}{n}\sum_{i \in real}\ln 1.
\end{equation}
After some simplification this becomes:
\begin{equation}
\sum_{i \in real} \ln p(x_i) > \sum_{i \in real} \ln p'(x_i) > \sum_{i \in real} \ln q(x_i).
\label{proof1}
\end{equation}
If an analogous analysis is carried out for inequality \ref{Ltest} we get:
\begin{equation}
\sum_{i \in test} \ln p(x_i) < \sum_{i \in test} \ln p'(x_i) < \sum_{i \in test} \ln q(x_i).
\label{proof2}
\end{equation}
From this, it can be seen that the probability density $p'(x)$ is on average closer to the real probability density $p(x)$ than to the test probability density $q(x)$. In a realistic case, Eq.~\ref{pq} cannot be used to completely recover the real probability density $p(x)$. However, it can be used in an iterative method: starting with a known distribution $q(x)$, we approach the real distribution with each iteration step.
\subsection{A simple example}
Let us illustrate the recovery of an unknown probability density using a classifier with a simple example. We start with a set of 50 000 real numbers generated from a random variable with a probability density given by
\begin{equation}
p_{\textrm{real}}(x) = \frac{1}{4} \mathcal{N}(-1,1) + \frac{3}{4}\mathcal{N}(3,1)\,,
\label{eqpreal}
\end{equation}
where $\mathcal{N}(\mu,\sigma^2)$ denotes a normal distribution. A histogram of the values in this set is shown in Fig.~\ref{hsimple}. Let us now assume we do not know $p_{\textrm{real}}(x)$ and want to recover it using the procedure outlined in the previous subsection. This set will be denoted as the ,,real'' dataset and the underlying probability density as the ,,real'' probability density.
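Such a ,,real'' dataset can be generated, for example, as follows (a sketch assuming NumPy; the seed and variable names are our own choices). Each sample first picks a mixture component with probabilities $1/4$ and $3/4$, then draws from the corresponding normal distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Pick the mixture component per sample: N(-1,1) with weight 1/4, N(3,1) with 3/4.
component = rng.random(n) < 0.25
real = np.where(component, rng.normal(-1.0, 1.0, n), rng.normal(3.0, 1.0, n))

# The mixture mean is 0.25*(-1) + 0.75*3 = 2.0.
print(real.mean())
```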
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{Images/nn_simple_real.png}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{Images/hsimple.png}
\end{subfigure}
\caption{(\textbf{a}) The normalized probability density for the example, given by Eq. \ref{eqpreal}. (\textbf{b}) A histogram of values sampled from the set generated by the same equation.}
\label{hsimple}
\end{figure}
To construct the ,,test'' dataset, we generate values with a uniform probability density in the interval $\left[-10,10 \right]$. Finally, we construct a simple neural network which is used as a classifier distinguishing examples drawn from the real dataset from those drawn from the test dataset. The classifier is a simple \textit{feed-forward} neural network with 100 hidden units and a ReLU activation function. The activation function of the final network output is the \textit{sigmoid} function, which constrains the output values to the interval $[0,1]$. After the classifier is trained to discriminate between the two datasets by minimizing the \textit{binary crossentropy} loss, we evaluate its output at 200 equidistant points between $-10$ and $10$. Using Eq.~\ref{pq}, the probability density $p_{\textrm{calculated}}(x)$ is calculated from the classifier outputs. The calculated $p_{\textrm{calculated}}(x)$ is compared to the real probability density $p_{\textrm{real}}(x)$ in Fig.~\ref{nn_simple_0}.
Although the resulting probability density differs from the real one due to the non-ideal classifier, the calculated $p_{\textrm{calculated}}(x)$ is considerably closer to $p_{\textrm{real}}(x)$ than to the uniform probability density $q(x)$ used to generate the test dataset. Now, if we use the calculated $p_{\textrm{calculated}}(x)$ to construct a new test dataset and repeat the same steps, we can improve the results further. This procedure can therefore iteratively improve the resemblance of $p_{\textrm{calculated}}(x)$ to $p_{\textrm{real}}(x)$ to the point where the datasets are so similar that the classifier cannot distinguish between them. In this example, convergence is reached after the 5th iteration, since no significant improvement is observed afterwards. The calculated probability density after the final iteration is shown in Fig.~\ref{nn_simple_0}, compared to the real distribution $p_{\textrm{real}}(x)$. The procedure clearly converges in this case, and a better match between $p_{\textrm{calculated}}(x)$ and $p_{\textrm{real}}(x)$ could possibly be obtained with a better-optimized classifier.
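The recovery step of Eq.~\ref{pq} can be illustrated in isolation by substituting the analytically optimal classifier $C(x) = p(x)/(p(x)+q(x))$. This of course presupposes knowledge of $p(x)$ and is therefore only a consistency check, not something available in practice. A sketch assuming NumPy:

```python
import numpy as np

def p_real(x):
    """Mixture density 1/4 N(-1,1) + 3/4 N(3,1)."""
    g = lambda x, mu: np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
    return 0.25 * g(x, -1.0) + 0.75 * g(x, 3.0)

def q_uniform(x):
    """Uniform test density on [-10, 10]."""
    return np.where(np.abs(x) <= 10.0, 1.0 / 20.0, 0.0)

x = np.linspace(-9.0, 9.0, 200)
c_optimal = p_real(x) / (p_real(x) + q_uniform(x))   # optimal classifier output
p_recovered = q_uniform(x) * c_optimal / (1.0 - c_optimal)

print(np.allclose(p_recovered, p_real(x)))  # True
```

With a trained, suboptimal classifier in place of `c_optimal`, the recovered density only approximates $p(x)$, which is what motivates the iteration.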
\begin{figure}[h!t!]
\centering
\includegraphics[width=15cm]{Images/nn_simple_0_new.png}
\caption{The calculated $p_{\textrm{calculated}}(x)$ (blue line) compared to the real probability density $p_{\textrm{real}}(x)$ (orange line). (\textbf{a}) The left panel shows the comparison after one iteration of the algorithm, alongside the starting ,,test'' distribution (green line). (\textbf{b}) The right panel shows the comparison after the 5th iteration.}
\label{nn_simple_0}
\end{figure}
In essence, a simple histogram could have been used in this example to determine the underlying probability distribution instead of the method described above. However, in the case of multivariate probability distributions, which can be products of unknown distributions, a histogram approach would not prove useful.
\subsection{Extension to jets}
We would now like to apply the described procedure to datasets that contain jets. Every jet, represented by a binary tree of depth $N$, consists of $2^N-1$ independent decays producing a maximum of $2^N$ particles in the final state. Since all the decays are isotropic in space, a jet can be described by a $4 \times (2^N-1)$-dimensional vector $\vec{x}$ and a probability distribution function given by:
\begin{equation}
p\left(\vec{x} \right) = \prod_i^{2^N-1} p(m_1^i, m_2^i | M)p(\theta^i) p(\phi^i),
\label{jet_prob}
\end{equation}
where $i$ denotes the decay index and ($m_1^i$, $m_2^i$, $\theta^i$, $\phi^i$) are the components of the vector $\vec{x}$. Since both angles are uniformly distributed in space, they contribute to the probability with a simple constant factor. Therefore, when plugging $p\left(\vec{x} \right)$ from Eq.~\ref{jet_prob} into Eq.~\ref{pq}, we can omit the angles, since the constant factors cancel out:
\begin{equation}
\prod_i^{2^N-1} p(m_1^i, m_2^i | M) = \frac{C_{NN}(\vec{x})}{1 - C_{NN}(\vec{x})} \prod_i^{2^N-1} q(m_1^i, m_2^i | M).
\label{pq_jets}
\end{equation}
Taking the logarithm of both sides:
\begin{equation}
\sum_i^{2^N-1} \ln p(m_1^i, m_2^i | M) = \ln{C_{NN}(\vec{x})} - \ln({1 - {C_{NN}(\vec{x})}}) + \sum_i^{2^N-1} \ln q(m_1^i, m_2^i | M).
\label{log_pq_jets}
\end{equation}
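The identity in Eq.~\ref{log_pq_jets} can be verified numerically for a single jet: for an optimal classifier, the logit $\ln C - \ln(1-C)$ equals the difference of the summed per-decay log-probabilities. A minimal sketch (assuming NumPy; the per-decay probabilities are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n_decays = 7                       # 2^3 - 1 decays for a depth-3 jet

log_p = np.log(rng.uniform(0.1, 1.0, n_decays))  # per-decay real probabilities
log_q = np.log(rng.uniform(0.1, 1.0, n_decays))  # per-decay test probabilities

P, Q = np.exp(log_p.sum()), np.exp(log_q.sum())
c = P / (P + Q)                    # optimal classifier output for this jet

lhs = log_p.sum()
rhs = np.log(c) - np.log(1.0 - c) + log_q.sum()
print(np.isclose(lhs, rhs))  # True
```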
Unfortunately, we cannot obtain the probability $p(m_1, m_2 \mid M)$ directly from Eq.~\ref{log_pq_jets} without solving a linear system of equations, a task which proves to be computationally exceptionally challenging due to the high dimensionality of the dataset. To avoid this obstacle, we introduce a neural network $f$ to approximate $\ln p(m_1,m_2|M)$. We optimize this neural network by minimizing the \textit{mean squared error} applied to the two sides of Eq.~\ref{log_pq_jets}.
\subsection{The 2 Neural Networks (2NN) algorithm}
At this point we are ready to recover the underlying probability distributions from an existing dataset consisting of jets described by the four-momenta of the final particles. We denote the jets from this dataset as ,,real''. The building blocks of the full recovery algorithm are two independent neural networks: the aforementioned classifier $C_{NN}$ and the neural network $f$. Since it uses two neural networks, we dubbed the algorithm \textit{2NN}. The detailed architectures of both networks are given in Appendix A.
The workflow of the 2NN algorithm is simple: first we initialize the parameters of both neural networks. Then, we generate a test dataset using the neural network $f$. The test dataset and the real dataset are fed into the classifier network, which produces a set of linear equations in the form of Eq.\ref{log_pq_jets}. We approximate the solution to these by fitting the neural network $f$, which in turn produces a new test dataset. The procedure is continued iteratively until there are no noticeable changes in the difference of the real and test distributions. More detailed descriptions of the individual steps are given in the next subsections.
\subsubsection{Generating the test dataset}
After the parameters of the neural network $f$ are initialized, we need to generate a test dataset of jets with known decay probabilities $q(\vec{x})$. The input of the neural network $f$ is a vector consisting of 3 real numbers: $a = m_1/M$, $b = m_2/M$ and $M$. We denote the output of the neural network by $f(a,b,M)$. Due to conservation laws, the sum $a+b$ needs to be less than or equal to 1. We can assume $a \leq b$ without any loss of generality. In order to work with normalized probabilities, a partition function:
\begin{equation}
Z(M) = \int_{\Omega} e^{f(a,b,M)} \,\mathrm{d}a \mathrm{d}b
\label{Z}
\end{equation}
needs to be calculated. Here, $\Omega$ denotes the entire probability space and is shown as the gray area in the left panel of Fig.~\ref{prob_space}. To calculate the integral in the above expression, the probability space is discretized into 650 equal areas, shown in the right panel of Fig.~\ref{prob_space}. These areas are obtained by discretizing the parameters $a$ and $b$ into equidistant segments of length 0.02. After the discretization, the partition function $Z(M)$ becomes:
\begin{equation}
Z(M) \approx \sum_j \sum_{k} e^{f(a_j,b_k,M)} \,.
\label{Z_discrete}
\end{equation}
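This discretization can be sketched as follows (assuming NumPy and a bin width of 0.02 for both $a$ and $b$, which reproduces exactly the 650 quoted cells; the function $f$ below is a stand-in for the network output):

```python
import numpy as np

STEP = 0.02  # bin width for a = m1/M and b = m2/M

def cells():
    """Centres of the discretized probability space: a <= b, a + b <= 1."""
    n = int(round(1.0 / STEP))            # 50 bins per axis
    out = []
    for j in range(n // 2):               # a-bins (a <= 0.5)
        for k in range(j, n - j):         # b-bins with a <= b and a + b <= 1
            out.append(((j + 0.5) * STEP, (k + 0.5) * STEP))
    return np.array(out)

def partition_function(f, M):
    """Eq. (Z_discrete): Z(M) ~ sum over cells of exp(f(a, b, M))."""
    ab = cells()
    return np.exp([f(a, b, M) for a, b in ab]).sum()

print(len(cells()))                                      # 650 cells
print(partition_function(lambda a, b, M: 0.0, M=25.0))   # 650.0 for f == 0
```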
\begin{figure}[h!t!]
\begin{center}
\resizebox{\columnwidth}{!}{%
\begin{tikzpicture}
\draw[->,thick,>=stealth] (-1,0) -- (12,0) node[right] {{\huge $a$}};
\draw[->,thick,>=stealth] (0,-1) -- (0,12) node[left] {{\huge $b$}};
\draw[dashed,thick] (0,10)--(10,0);
\draw[dashed,thick] (0,0)--(10,10);
\node[rotate=45] at (8,8.5) {\huge $a = b$};
\node[rotate=-45] at (8,2.5) {\huge $a + b = 1$};
\fill[black!10] (0,0) -- (5,5) -- (0,10) -- cycle;
\node[] at (2,5) {{\fontsize{40}{60}\selectfont $\Omega$}};
\draw[->,thick,>=stealth] (14,0) -- (27,0) node[right] {{\huge $a$}};
\draw[->,thick,>=stealth] (15,-1) -- (15,12) node[left] {{\huge $b$}};
\draw[dashed,thick] (15,10)--(25,0);
\draw[dashed,thick] (15,0)--(25,10);
\node[rotate=45] at (23,8.5) {\huge $a = b$};
\node[rotate=-45] at (23,2.5) {\huge $a + b = 1$};
\foreach \x in {0,1,2,...,25} {
\draw[thick] (15+0.2*\x,-0.2+0.2*\x) -- (15+0.2*\x,10.2-0.2*\x);};
\foreach \y in {0,1,2,...,25} {
\draw[thick] (15,-0.2+0.2*\y) -- (15+0.2*\y,-0.2+0.2*\y);};
\draw[thick] (15,-0.2+0.2*26) -- (15+0.2*25,-0.2+0.2*26);
\foreach \y in {0,1,2,...,25} {
\draw[thick] (15,5.2+0.2*\y) -- (20-0.2*\y,5.2+0.2*\y);};
\end{tikzpicture}%
}
\caption{(\textbf{a}) The left panel shows the entire allowed probability space of the parameters $a$ and $b$, designated by $\Omega$. Due to conservation laws, $a+b \leq 1$ needs to hold true. To describe our system, we selected the case where $a \leq b$, which we can do without loss of generality. (\textbf{b}) The right panel shows the discretized space $\Omega$, as used to evaluate the partition function.}
\label{prob_space}
\end{center}
\end{figure}
To generate the jets which form the test dataset, we must generate each decay in the cascading evolution using the neural network $f$. Each of the decays is generated by picking a particular pair of parameters $(a,b)$ from the 650 possible pairs which form the probability space for a given mass $M$. The decay probability is then given by:
\begin{equation}
q(m_1, m_2 \mid M) = \frac{e^{f(a,b,M)}}{Z(M)}\,.
\label{q}
\end{equation}
After applying this procedure we have a test dataset in which each jet is represented as a list of $2^N$ particles and their four-momenta. For each decay, we also store the pairs $(a^i,b^i)$ as well as the corresponding decay probabilities.
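Sampling a single decay from Eq.~\ref{q} then amounts to drawing one of the 650 cells with softmax weights. A sketch, with random numbers standing in for the network outputs $f(a,b,M)$ evaluated on the cells:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the network output f(a, b, M) evaluated on all 650 cells.
f_values = rng.normal(size=650)

q = np.exp(f_values) / np.exp(f_values).sum()   # Eq. (q): q = e^f / Z(M)
cell_index = rng.choice(len(q), p=q)            # pick one (a, b) pair

print(np.isclose(q.sum(), 1.0))  # True
```

The drawn cell index identifies the pair $(a, b)$, i.e. the daughter masses $(m_1, m_2) = (aM, bM)$ of this decay.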
\subsubsection{Optimizing the classifier}
The classifier used in this work is a convolutional neural network. The input to this type of network is a set of images. For this purpose, all the jets are preprocessed by transforming the list of particle four-momenta into jet images. Two 32$\times$32 images are produced for a single jet. In both images the axes correspond to the decay angles $\theta$ and $\phi$, while the pixel values are either the energy or the momentum of the particle found in that particular pixel. If a pixel contains two or more particles, their energies and momenta are summed and stored as pixel values. This transformation of the jet representation is applied to both the real and the test datasets. We label the ,,real'' jet images with the digit 1 and the ,,test'' jet images with the digit 0. The classifier is then optimized by minimizing the \textit{binary crossentropy} loss between the real and the test datasets. The optimization is performed by the ADAM algorithm \cite{adam}. It is important to note that the sizes of both datasets need to be the same.
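The jet-image construction can be sketched as follows (assuming NumPy; the bin ranges and function name are our own choices). Here \texttt{np.histogram2d} automatically sums the weights of particles that fall into the same pixel:

```python
import numpy as np

def jet_images(theta, phi, energy, momentum, bins=32):
    """Two 32x32 jet images; pixels sum E and |p| of co-located particles."""
    ranges = [[0.0, np.pi], [0.0, 2.0 * np.pi]]   # (theta, phi) ranges
    img_e, _, _ = np.histogram2d(theta, phi, bins=bins, range=ranges, weights=energy)
    img_p, _, _ = np.histogram2d(theta, phi, bins=bins, range=ranges, weights=momentum)
    return img_e, img_p

# Two particles landing in the same pixel have their energies summed: 3 + 4 = 7.
theta = np.array([0.5, 0.5, 2.0])
phi = np.array([1.0, 1.0, 4.0])
E = np.array([3.0, 4.0, 5.0])
p = np.array([2.0, 1.0, 5.0])
img_e, img_p = jet_images(theta, phi, E, p)
print(img_e.sum(), img_e.max())  # 12.0 7.0
```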
\subsubsection{Optimizing the neural network $f$}
After the classifier is optimized, a new jet dataset is generated using the neural network $f$. As before, the generated jets are first transformed to jet images and then fed to the classifier. Since we have access to each of the decay probabilities for each jet, the right side of Eq.~\ref{log_pq_jets} can be easily calculated for all the jet vectors $\vec{x}$ in the dataset. In this way we obtain the desired logarithm of the total probability $p(\vec{x})$ for each jet:
\begin{equation}
\ln p(\vec{x}) = \ln{C_{NN}(\vec{x})} - \ln({1 - {C_{NN}(\vec{x})}}) + \sum_i^{2^N-1} \ln q(m_1^i, m_2^i | M).
\label{p}
\end{equation}
Finally, we update the parameters of the neural network $f$ by minimizing the expression given by:
\begin{equation}
L = \frac{1}{n} \sum_i^n \left[ \sum_{j}^{2^N-1} f(a_i^j,b_i^j,M_j) - \ln p_i(\vec{x})\right]^2,
\label{loss}
\end{equation}
where $i$ denotes the jet index and $j$ denotes the decay index in a particular jet. After this step, the weights of the neural network are updated in such a way that the network output values $f(a,b,M)$ are on average closer to the real log value of $p(m_1,m_2 \mid M)$. The updated network $f$ is used to generate the test dataset in the next iteration.
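The loss of Eq.~\ref{loss} can be sketched as follows (assuming NumPy; the array shapes are our own convention), comparing per-jet sums of $f$ against the $\ln p$ targets of Eq.~\ref{p}:

```python
import numpy as np

def f_loss(f_per_decay, log_p_targets):
    """Eq. (loss): MSE between per-jet sums of f and the ln p targets.

    f_per_decay:   (n_jets, n_decays) network outputs f(a_i^j, b_i^j, M_j)
    log_p_targets: (n_jets,) values of ln p(x) from Eq. (p)
    """
    per_jet_sum = f_per_decay.sum(axis=1)
    return np.mean((per_jet_sum - log_p_targets) ** 2)

f_out = np.array([[0.25, 0.5], [0.5, 0.25]])
targets = np.array([0.75, 0.75])      # equal to the row sums -> zero loss
print(f_loss(f_out, targets))         # 0.0
```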
\subsection{Evaluation of the 2NN algorithm}
Upon completion of each iteration of the algorithm, the underlying probability densities can be obtained from the output values of the neural network $f$ according to Eq.~\ref{q}. In the Results section, the 2NN algorithm is evaluated in terms of the Kullback-Leibler (KL) divergence \cite{KLD}, computed in the following way:
\begin{equation}
KL(M) = \sum_{j} \sum_{k} p_{\textrm{real}} (m_1^j, m_2^k \mid M)\left[
\ln p_{\textrm{real}} (m_1^j, m_2^k \mid M) - f(a^j, b^k, M) + \ln{Z(M)}\right],
\label{kl}
\end{equation}
where the sum is performed over the whole probability space. The KL-divergence is a non-negative measure of the difference between two probability densities defined on the same probability space. If the probability densities are identical, the KL-divergence is zero.
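A discrete version of the KL-divergence over the probability cells can be sketched as follows (assuming NumPy; this is the generic definition rather than the exact implementation used here):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; zero iff p == q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                       # terms with p = 0 contribute nothing
    return np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask])))

p = np.array([0.5, 0.25, 0.25])
q = np.array([0.25, 0.5, 0.25])
print(kl_divergence(p, p))       # 0.0
print(kl_divergence(p, q) > 0)   # True
```

In Eq.~\ref{kl}, the second argument is the calculated density $e^{f}/Z(M)$, so the $\ln q$ term splits into $f(a^j, b^k, M) - \ln Z(M)$.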
\subsection{Hardware and software}
The code for the calculations in this research is written in the Python programming language using the \textit{TensorFlow 2} and \textit{NumPy} modules. An NVIDIA Quadro P6000 GPU, obtained through the NVIDIA grant for academic research, was used to speed up the performed calculations.
\section{Results}
In this section we present our findings after applying the $2NN$ algorithm to 500 000 jets created using the particle generator described in Section~\ref{ParticleGenerator}. In each iteration, the classifier is optimized using 50 000 randomly picked jets from the ,,real'' dataset and 50 000 jets generated using the neural network $f$. To optimize the neural network $f$, we use 50 000 jets as well. The algorithm performed 800 iterations. After the final iteration of the $2NN$ algorithm we obtain the calculated probability densities, which can then be used to generate samples of jets. First, in Figure~\ref{hE} we show the energy spectrum of the final-state particles in jets generated from the calculated probabilities, directly compared to the energy spectrum of particles taken from jets belonging to the ,,real'' dataset.
\begin{figure}[h!t!]
\centering
\includegraphics[width=15cm]{Images/hE.png}
\caption{The energy spectrum of the particles in the final state in jets generated by the calculated probabilities, compared to the energy spectrum of particles taken from jets belonging to the ,,real'' dataset.}
\label{hE}
\end{figure}
The plotted spectra are obtained using 10 000 jets from each dataset. The error bars in the histogram are smaller than the marker size and are hence not visible. A resemblance between the two spectra is notable, especially at higher energies. This points to the fact that the calculated probabilities are approximately correct, so we can use them to generate samples of jets that resemble ,,real'' jets. To further examine the calculated probability densities, we need to reconstruct the hidden resonances which are not found in the final state. For this purpose, the calculated probability densities for mother particle masses of $M = 25.0$, $M = 18.1$, $M = 14.2$ and $M = 1.9$ are analyzed and compared to the real probability densities in the following subsections. These masses are chosen since they match the masses of the hidden resonances introduced in Table~\ref{TableParticles}.
\subsection{Mother particle with mass $M$ = 25.0}
The calculated 2$d$-probability density $p(m_1,m_2 \mid M)$ is shown in Figure \ref{probs25}, compared to the real probability density. Visual inspection reveals that 3 possible decays of the particle of mass $M = 25.0$ are recognized by the algorithm. After dividing the probability space as in panel (c) of Figure \ref{probs25} with the lines $m_2 = 16.0$ and $m_2 = 10.0$, we calculate the mean and the variance of the data on each subspace. As a result, we obtain $(m_1, m_2) = (18.1 \pm 0.5, 6.1 \pm 0.5)$ for $m_2 > 16.0$, $(m_1, m_2) = (14.0 \pm 0.7, 8.4 \pm 0.7)$ for $10.0 < m_2 \leq 16.0$ and $(m_1, m_2) = (4.8 \pm 0.2, 4.6 \pm 0.2)$ for $m_2 \leq 10.0$. These mean values closely agree with the masses of the resonances expected as the products of decays of the particle with mass $M = 25.0$. The calculated small variances indicate that the algorithm is very precise. The total decay probabilities for each of the subspaces are equal to $p_1 = 0.48$, $p_2 = 0.47$ and $p_3 = 0.05$, which approximately agree with the probabilities of the decay channels of the particle with mass $M = 25.0$, as defined in Table~\ref{TableParticles}.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/probability_25.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/preal_25.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/pl_25.png}
\caption{}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 25.0$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.}
\label{probs25}
\end{figure}
These results show that we can safely assume that the $2NN$ algorithm successfully recognizes all the decay modes of the particle that initiates a jet. To quantify the difference between the calculated probability density and the real probability density, we use the KL-divergence.
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/kl_25.png}
\caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of particle of mass $M = 25.0$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.}
\label{kl25}
\end{figure}
Figure \ref{kl25} shows the dependence of the KL-divergence on the iteration of the $2NN$ algorithm. First, we observe an initial steep decrease in the value of the divergence, while large variations in the divergence value are observed later. This is an indicator that the approximate probability density is found relatively quickly, after a few hundred iterations. As the algorithm decreases the width of the peaks found in the probability distribution, the KL-divergence becomes very sensitive to small variations in the location of these peaks and can therefore vary by a large relative amount.
\subsection{Mother particle with mass $M$ = 18.1}
A similar analysis is performed for the particle with mass $M = 18.1$. The calculated probability density is shown in Figure \ref{probs18}, compared to the expected probability density. In this case, only one decay is allowed, so a division into probability subspaces is not necessary, as it was in the case $M = 25.0$. The calculated mean and variance of the shown probability density are $(m_1, m_2) = (5.9 \pm 0.4, 8.2 \pm 0.6)$. In this case, just as in the former, the calculated values closely agree with the only possible decay, in which the mother particle decays into two particles of masses 6.1 and 8.4. Also, just as in the previous subsection, the obtained result is very precise. Therefore, the algorithm can successfully find hidden resonances, as well as recognize the decay channels, without ever seeing them in the final state in the ,,real'' dataset.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/probability_18.png}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\linewidth]{Images/preal_18.png}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 18.1$. (\textbf{a}) The calculated density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. }
\label{probs18}
\end{figure}
The calculated KL-divergence in the case of the particle with mass $M = 18.1$ decreases over time in a very smooth manner, as can be seen in Figure \ref{kl18}. We believe this could be due to the simpler expected probability density, which the algorithm manages to find very quickly.
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/kl_18.png}
\caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of particle of mass $M = 18.1$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.}
\label{kl18}
\end{figure}
\subsection{Mother particle with mass $M$ = 14.2}
Figure \ref{probs14} shows the 2$d$-probability density for the decaying particle of mass $M = 14.2$. In this case, we can identify 3 possible decay channels, which are not as clearly separated as the channels in the previous subsections. Similar to the case of the decaying particle of mass $M = 25.0$, we divide the probability space into 3 subspaces, each covering one of the possible decays. In this case, the three subspaces cover the areas where $m_2 \leq 4.0$, $4.0 < m_2 \leq 5.5$ and $m_2 > 5.5$. The mean values of the probability density on each of the subspaces are $(m_1,m_2) = (2.4 \pm 0.5, 2.9 \pm 0.7)$, $(m_1,m_2)= (2.7 \pm 0.7, 4.3 \pm 0.3)$ and $(m_1,m_2) = (4.4 \pm 0.4, 6.2 \pm 0.3)$, respectively. The allowed decays of a mother particle with mass $M = 14.2$ in the ,,real'' data are into channels with masses $(1.9,1.9)$, $(1.9, 4.4)$ and $(4.4, 6.2)$, which agree with the calculated results. However, in this case the calculations show higher variance, especially for decays where one of the products is a particle with mass 1.9. The total probabilities of decay in each of the subspaces are 0.89, 0.05 and 0.06, respectively. The relative probabilities of the decay channels into particles with masses $(4.4, 6.2)$ and $(1.9, 4.4)$ are approximately the same as expected. However, the algorithm predicts more decays in the channel $(1.9,1.9)$ than expected. The KL-divergence shows a steady decrease with occasional spikes, as shown in Figure \ref{kl14}.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/probability_14.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/preal_14.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/pl_14.png}
\caption{}
\label{probs2c}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 14.2$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.}
\label{probs14}
\end{figure}
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/kl_14.png}
\caption{The KL-divergence between the calculated and the real probability densities, evaluated in the case of particle of mass $M = 14.2$. The presented results are averaged over 50-iteration intervals. The error bars represent the standard deviation calculated on the same intervals.}
\label{kl14}
\end{figure}
\subsection{Mother particle with mass $M$ = 1.9}
The last probability density we analyze is the probability density for the mother particle with mass $M = 1.9$. Figure \ref{probs2} shows the calculated probability density. It can be seen that one of the decay modes present in the ,,real'' data, namely the decay into the $(0.1, 0.1)$ channel, is not recognized by the algorithm, while the decay into the $(0.1, 1.3)$ channel is visible. If we isolate the given decay as shown in the right panel of Figure \ref{probs2}, we get a mean value of $(m_1, m_2) = (0.14 \pm 0.09, 1.27 \pm 0.09)$, which agrees with the expected decay. We also observe significant decay probabilities along the line $m_1 + m_2 = 1.9$. The decays that correspond to the points on this line in effect create particles with zero momentum in the rest frame of the mother particle. In the lab frame this corresponds to the daughter particles flying off in the same direction as the mother particle. Since they reach the detector at the same time, they are registered as one particle of total mass $M = 1.9$. Thus, we can conclude that the probabilities on this line have to add up to the total probability of the mother particle not decaying. The calculated probabilities in the case of no decay and in the case of decaying into particles with masses $(0.1,1.3)$ are 0.71 and 0.29, respectively. We note that the relative probabilities are not correct, but 2 of the 3 decay modes are still recognized by the algorithm. The KL-divergence in this case cannot produce reasonable results because multiple points in the $(m_1,m_2)$ phase space produce the same decay, and it is therefore omitted from the analysis.
\begin{figure}[h!t!]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/probability_2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/preal_2.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\linewidth]{Images/pl_2.png}
\caption{}
\end{subfigure}
\caption{The calculated probability density for a decaying particle of mass $M = 1.9$. (\textbf{a}) The left panel shows the density evaluated on the entire discretized probability space. (\textbf{b}) The probability density of ,,real'' data. (\textbf{c}) A division of the probability space into three subspaces, in order to isolate particular decays.}
\label{probs2}
\end{figure}
\subsection{The accuracy of the classifier}
The accuracy of the classifier is defined as the fraction of correctly ,,guessed'' samples on a given dataset. The criterion used for guessing is checking whether the output of the classifier, $C_{NN}$, is greater than 0.5. The accuracy can indirectly indicate how distinguishable two datasets are. In our algorithm, starting from a test probability density, we approach the real probability density with increasing iteration number, so we can expect that the two jet datasets, the ,,real'' and the ,,test'' dataset, become less and less distinguishable over time. In Figure \ref{acc} we show the accuracy of the classifier as a function of the iteration number.
\begin{figure}[h!t!]
\centering
\includegraphics[width=13cm]{Images/acc.png}
\caption{The calculated accuracy of the classifier as a function of the iteration number.}
\label{acc}
\end{figure}
After an initially high value, the accuracy decreases with growing iteration number, which demonstrates that the test dataset becomes more and more similar to the real dataset. Ideally, the datasets are no longer distinguishable by a given classifier if the evaluated accuracy reaches 0.5. Therefore, we can use the evaluated accuracy of the classifier as a criterion for stopping the algorithm. Other measures can also be used as the stopping criterion, such as the loss value of the classifier or the area under the receiver operating characteristic (ROC) curve of the classifier. In this work, the algorithm is stopped after the accuracy reaches a value of 0.65, because we did not see any significant decrease in the accuracy once it reached this value. An accuracy value of 0.65 clearly shows that the classifier is capable of further discriminating between the two datasets. This is explained by the fact that the neural network $f$ and its hyperparameters are not fully optimized. For the algorithm to perform better, we would need to optimize the neural network $f$ and possibly improve its architecture for the selected task.
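The accuracy and the stopping rule can be sketched as follows (assuming NumPy; \texttt{should\_stop} and the threshold handling are our own illustration of the rule described above):

```python
import numpy as np

def accuracy(c_out, labels):
    """Fraction of samples where (C_NN > 0.5) matches the true label."""
    return np.mean((np.asarray(c_out) > 0.5) == np.asarray(labels))

def should_stop(c_out, labels, threshold=0.65):
    # Assumed stopping rule from the text: halt once accuracy drops to 0.65.
    return accuracy(c_out, labels) <= threshold

c_out = np.array([0.9, 0.4, 0.6, 0.2])   # classifier outputs
labels = np.array([1, 0, 0, 0])          # 1 = real, 0 = test
print(accuracy(c_out, labels))           # 0.75
```

On a balanced batch, an accuracy near 0.5 means the classifier is reduced to guessing, i.e. the two datasets have become indistinguishable to it.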
\newpage
\section{Discussion}
In this work we propose a method for calculating the underlying probability distributions in particle decays, using only data that can be collected in a real-world physical system. First, we developed an artificial physical system based on the QCD fragmentation process. Next, we presented the core part of the method, the $2NN$ algorithm, which we described in detail. The algorithm performs very well when tested on the developed physical system. It accurately predicts most of the hidden resonant particles, as well as their decay channels, which can occur in the evolution of jets. The energy spectra of the particles in the final state can also be accurately reproduced.
Although tested only on the developed artificial physical system, we believe that the method is general enough to be applicable to real-world physical systems, such as collisions of high-energy particles, with a few possible modifications. For example, we hope that this method can in the future prove helpful in measuring the fragmentation functions of quarks and gluons. Also, one could employ such a method in the search for supersymmetric particles of unknown masses, or in measuring the branching ratios of known decays.
The $2NN$ algorithm does not specify the exact architecture of the neural networks, nor the representation of the data used. Furthermore, the classifier does not need to be a neural network; it can be any machine learning technique which maximizes likelihood. Although the algorithm has a Generative Adversarial Network (GAN)-like structure, it converges readily and does not show the usual issues associated with GANs, such as mode collapse or vanishing gradients. The downside of the presented algorithm is its high computational requirements. Continuous probability distributions, which we expect to occur in nature, are approximated by discrete probability distributions. In the quest for higher precision and a better description of reality, one always aims to increase the resolution of the discrete steps, but this carries a high computational cost. Also, the neural networks used are not fully optimized, which slows down the convergence of the algorithm. In conclusion, in order to cut down computational costs, a more thorough analysis of convergence is needed to achieve better performance.
In future work we hope to make the method still more general, and thus more applicable to real-world physical systems. In particular, we want to introduce angle-dependent probability distributions, which can be retrieved from detector data. We would also like to investigate the possibility of including other decay modes, such as $1 \rightarrow 3$ decays.
\newpage