[ { "text": "Nested sampling, statistical physics and the Potts model: We present a systematic study of the nested sampling algorithm based on the\nexample of the Potts model. This model, which exhibits a first order phase\ntransition for $q>4$, exemplifies a generic numerical challenge in statistical\nphysics: The evaluation of the partition function and thermodynamic\nobservables, which involve high dimensional sums of sharply structured\nmulti-modal density functions. It poses a major challenge to most standard\nnumerical techniques, such as Markov Chain Monte Carlo. In this paper we will\ndemonstrate that nested sampling is particularly suited for such problems and\nit has a couple of advantages. For calculating the partition function of the\nPotts model with $N$ sites: a) one run stops after $O(N)$ moves, so it takes\n$O(N^{2})$ operations for the run, b) only a single run is required to compute\nthe partition function along with the assignment of confidence intervals, c)\nthe confidence intervals of the logarithmic partition function decrease with\n$1/\\sqrt{N}$ and d) a single run allows to compute quantities for all\ntemperatures while the autocorrelation time is very small, irrespective of\ntemperature. Thermodynamic expectation values of observables, which are\ncompletely determined by the bond configuration in the representation of\nFortuin and Kasteleyn, like the Helmholtz free energy, the internal energy as\nwell as the entropy and heat capacity, can be calculated in the same single run\nneeded for the partition function along with their confidence intervals. In\ncontrast, thermodynamic expectation values of magnetic properties like the\nmagnetization and the magnetic susceptibility require sampling the additional\nspin degree of freedom. Results and performance are studied in detail and\ncompared with those obtained with multi-canonical sampling. 
Finally, the\nimplications of the findings on a parallel implementation of nested sampling\nare outlined.", "category": "physics_comp-ph" }, { "text": "An explicit multistep method for the Wigner problem: An explicit multistep scheme is proposed for solving the initial-value Wigner\nproblem. In this scheme, the integrated form of the Wigner equation is\napproximated by extrapolation or interpolation polynomials on backwards\ncharacteristics, and the pseudo-differential operator is tackled by the\nspectral collocation method. Since it exploits the exact Lagrangian advection,\nthe time stepping of the multistep scheme is not restricted by the CFL-type\ncondition. It is also demonstrated that the calculations of the Wigner\npotential can be carried out by two successive FFTs, thereby reducing the\ncomputational complexity dramatically. Numerical examples illustrating its\naccuracy are presented.", "category": "physics_comp-ph" }, { "text": "Dynamic properties of the warm dense electron gas: an ab initio path\n integral Monte Carlo approach: There is growing interest in warm dense matter (WDM) -- an exotic state on\nthe border between condensed matter and plasmas. Due to the simultaneous\nimportance of quantum and correlation effects WDM is complicated to treat\ntheoretically. A key role has been played by \textit{ab initio} path integral\nMonte Carlo (PIMC) simulations, and recently extensive results for\nthermodynamic quantities have been obtained. The first extension of PIMC\nsimulations to the dynamic structure factor of the uniform electron gas was\nreported by Dornheim \textit{et al.} [Phys. Rev. Lett. \textbf{121}, 255001\n(2018)]. This was based on an accurate reconstruction of the dynamic local\nfield correction. 
Here we extend this concept to other dynamical quantities of\nthe warm dense electron gas including the dynamic susceptibility, the\ndielectric function and the conductivity.", "category": "physics_comp-ph" }, { "text": "mudirac: a Dirac equation solver for elemental analysis with muonic\n X-rays: We present a new open-source software package for the integration of the radial Dirac\nequation, developed specifically with muonic atoms in mind. The software, called\nmudirac, is written in C++ and can be used to predict frequencies and\nprobabilities of the transitions between levels of the muonic atom. In this\nway, it provides an invaluable tool in helping with the interpretation of\nmuonic X-ray spectra for elemental analysis. We introduce the details of the\nalgorithms used by the software, show the interpretation of a few example\nspectra, and discuss the more complex issues involved with predicting the\nintensities of the spectral lines. The software is available publicly at\nhttps://github.com/muon-spectroscopy-computational-project/mudirac.", "category": "physics_comp-ph" }, { "text": "Quadratic Scaling Bosonic Path Integral Molecular Dynamics: Bosonic exchange symmetry leads to fascinating quantum phenomena, from\nexciton condensation in quantum materials to the superfluidity of liquid\nHelium-4. Unfortunately, path integral molecular dynamics (PIMD) simulations of\nbosons are computationally prohibitive beyond $\mathord{\sim} 100$ particles,\ndue to a cubic scaling with the system size. We present an algorithm that\nreduces the complexity from cubic to quadratic, allowing the first simulations\nof thousands of bosons using PIMD. Our method is orders of magnitude faster,\nwith a speedup that scales linearly with the number of particles and the number\nof imaginary time slices (beads). Simulations that would have otherwise taken\ndecades can now be done in days. 
In practice, the new algorithm eliminates most\nof the added computational cost of including bosonic exchange effects, making\nthem almost as accessible as PIMD simulations of distinguishable particles.", "category": "physics_comp-ph" }, { "text": "A hybrid adaptive multiresolution approach for the efficient simulation\n of reactive flows: Computational studies that use block-structured adaptive mesh refinement\n(AMR) approaches suffer from unnecessarily high mesh resolution in regions\nadjacent to important solution features. This deficiency limits the performance\nof AMR codes. In this work a novel hybrid adaptive multiresolution (HAMR)\napproach to AMR-based calculations is introduced to address this issue. The\nmultiresolution (MR) smoothness indicators are used to identify regions of\nsmoothness on the mesh where the computational cost of individual physics\nsolvers may be decreased by replacing direct calculations with interpolation.\nWe suggest an approach to balance the errors due to the adaptive discretization\nand the interpolation of physics quantities such that the overall accuracy of\nthe HAMR solution is consistent with that of the MR-driven AMR solution. The\nperformance of the HAMR scheme is evaluated for a range of test problems, from\npure hydrodynamics to turbulent combustion.", "category": "physics_comp-ph" }, { "text": "Acceleration beyond lowest order event generation: An outlook on further\n parallelism within MadGraph5_aMC@NLO: An important area of high energy physics studies at the Large Hadron Collider\n(LHC) currently concerns the need for more extensive and precise comparison\ndata. Important tools in this realm are event reweighting and evaluation of more\nprecise next-to-leading order (NLO) processes via Monte Carlo event generators,\nespecially in the context of the upcoming High Luminosity LHC. Current event\ngenerators need to improve throughputs for these studies. 
MadGraph5_aMC@NLO\n(MG5aMC) is an event generator being used by LHC experiments which has been\naccelerated considerably with a port to GPU and vector CPU architectures, but\nas of yet only for leading order processes. In this contribution a prototype\nfor event reweighting using the accelerated MG5aMC software, as well as plans\nfor an NLO implementation, are presented.", "category": "physics_comp-ph" }, { "text": "Dissipative Particle Dynamics for Systems with Polar Species: In this work we developed a method for simulating polar species in the\ndissipative particle dynamics (DPD) method. The main idea behind the method is\nto treat each bead as a dumb-bell, i.e. two sub-beads (the sub-beads can bear\ncharges) kept at a fixed distance, instead of a point-like particle. It was\nshown that at small enough separations the composite beads act essentially as\nconventional point-like beads. Next, the relation between the bead dipole\nmoment and the bulk dielectric permittivity was obtained. The interaction of\nsingle charges in a polar liquid showed that the observed dielectric permittivity\nis somewhat smaller than that obtained for the bulk case at large separation\nbetween the charges; at distances comparable to the bead size the solvation\nshells of the charges start to interfere and oscillations in the observed\npermittivity occur. Such oriented molecules effectively have smaller\npolarizability compared to the bulk liquid, so the field of one charge in the\nvicinity of another charge is not reduced as strongly as in the bulk. Finally, we\nshowed why it is necessary to treat the polar species in DPD explicitly instead\nof implicitly by calculating the local polarizability based on the local\nspecies concentrations: the latter leads to the violation of Newton's third\nlaw, resulting in simulation artifacts. We investigated the behavior of a\ncharged colloidal particle at an interface of polar and non-polar liquids. 
We\nfound that when the polar molecules are treated explicitly, the charged\ncolloidal particle moved into the polar liquid, since it is energetically more\nfavorable for the charged molecules to be immersed in a polar medium; however,\nwithin the "implicit polarity" method the colloidal particle is found on top of\na "bump" formed by the molecules of the non-polar liquid, which increases the\ninterface area between the liquids instead of decreasing it.", "category": "physics_comp-ph" }, { "text": "Data libraries as a collaborative tool across Monte Carlo codes: The role of data libraries in Monte Carlo simulation is discussed. A number\nof data libraries currently in preparation are reviewed; their data are\ncritically examined with respect to the state-of-the-art in the respective\nfields. Extensive tests with respect to experimental data have been performed\nfor the validation of their content.", "category": "physics_comp-ph" }, { "text": "Wide Ranging Equation of State with Tartarus: a Hybrid Green's\n Function/Orbital based Average Atom Code: Average atom models are widely used to make equation of state tables and for\ncalculating other properties of materials over a wide range of conditions, from\nzero temperature isolated atom to fully ionized free electron gases. The\nnumerical challenge of making these density functional theory based models work\nfor any temperature, density or nuclear species is formidable. Here we present\nin detail a hybrid Green's function/orbital based approach that has proved to\nbe stable and accurate for wide ranging conditions. Algorithmic strategies are\ndiscussed. In particular the decomposition of the electron density into\nnumerically advantageous parts is presented and a robust and rapid self\nconsistent field method based on a quasi-Newton algorithm is given. 
An example\napplication to the equation of state of lutetium (Z=71) is explored in detail,\nincluding the effect of relativity, finite temperature exchange and\ncorrelation, and a comparison to a less approximate method. The hybrid scheme\nis found to be numerically stable and accurate for lutetium over at least 6\norders of magnitude in density and 5 orders of magnitude in temperature.", "category": "physics_comp-ph" }, { "text": "Generalized lattice Boltzmann method: Modeling, analysis, and elements: In this paper, we first present a unified framework for the modelling of the\ngeneralized lattice Boltzmann method (GLBM). We then conduct a comparison of\nthe four popular analysis methods (Chapman-Enskog analysis, Maxwell iteration,\ndirect Taylor expansion and recurrence equations approaches) that have been\nused to obtain the macroscopic Navier-Stokes equations and nonlinear\nconvection-diffusion equations from the GLBM, and show that, from a mathematical\npoint of view, these four analysis methods are equivalent to each other.\nFinally, we give some elements that are needed in the implementation of the\nGLBM, and also find that some available LB models can be obtained from this\nGLBM.", "category": "physics_comp-ph" }, { "text": "A consistent and conservative model and its scheme for\n $N$-phase-$M$-component incompressible flows: In the present work, we propose a consistent and conservative model for\nmultiphase and multicomponent incompressible flows, where there can be\narbitrary numbers of phases and components. Each phase has a background fluid\ncalled the pure phase, each pair of phases is immiscible, and components are\ndissolvable in some specific phases. The model is developed based on the\nmultiphase Phase-Field model including the contact angle boundary condition,\nthe diffuse domain approach, and the analyses on the proposed consistency\nconditions for multiphase and multicomponent flows. 
The model conserves the\nmass of individual pure phases, the amount of each component in its dissolvable\nregion, and thus the mass of the fluid mixture, and the momentum of the flow.\nIt ensures that no fictitious phases or components can be generated and that\nthe summation of the volume fractions from the Phase-Field model is unity\neverywhere so that there is no local void or overfilling. It satisfies a\nphysical energy law and it is Galilean invariant. A corresponding numerical\nscheme is developed for the proposed model, whose formal accuracy is 2nd-order\nin both time and space. It is shown to be consistent and conservative and its\nsolution is demonstrated to preserve the Galilean invariance and energy law.\nNumerical tests indicate that the proposed model and scheme are effective and\nrobust to study various challenging multiphase and multicomponent flows.", "category": "physics_comp-ph" }, { "text": "Density Matrix Renormalization for Model Reduction in Nonlinear Dynamics: We present a novel approach for model reduction of nonlinear dynamical\nsystems based on proper orthogonal decomposition (POD). Our method, derived\nfrom Density Matrix Renormalization Group (DMRG), provides a significant\nreduction in computational effort for the calculation of the reduced system,\ncompared to a POD. The efficiency of the algorithm is tested on the one\ndimensional Burgers equations and a one dimensional equation of the Fisher type\nas nonlinear model systems.", "category": "physics_comp-ph" }, { "text": "Automatic grid construction for few-body quantum mechanical calculations: An algorithm for generating optimal nonuniform grids for solving the two-body\nSchr\\\"odinger equation is developed and implemented. The shape of the grid is\noptimized to accurately reproduce the low-energy part of the spectrum of the\nSchr\\\"odinger operator. 
Grids constructed this way are applicable to more\ncomplex few-body systems where the number of grid points is a critical\nlimitation to numerical accuracy. The utility of the grid generation for\nimproving few-body calculations is illustrated through an application to bound\nstates of He trimers.", "category": "physics_comp-ph" }, { "text": "The role of Coulomb correlation in charge density wave of CuTe: A quasi one-dimensional layered material, CuTe, undergoes a charge density\nwave (CDW) transition in Te chains with a modulation vector of $q_{CDW}=(0.4,\n0.0, 0.5)$. Despite the clear experimental evidence for the CDW, the\ntheoretical understanding, especially the role of the electron-electron\ncorrelation in the CDW, has not been fully explored. Here, using first\nprinciples calculations, we demonstrate that the correlation effect of Cu is\ncritical to stabilize the 5$\times$1$\times$2 modulation of Te chains. We find\nthat the phonon calculation with the strong Coulomb correlation exhibits an\nimaginary phonon frequency, a so-called phonon soft mode, at $q_{ph0}=(0.4, 0.0,\n0.5)$, indicating a structural instability. The corresponding lattice\ndistortion of the soft mode agrees well with the experimental modulation. These\nresults demonstrate that the CDW transition in CuTe originates from the\ninterplay of the Coulomb correlation and electron-phonon interaction.", "category": "physics_comp-ph" }, { "text": "A Discontinuous Galerkin Time-Domain Method with Dynamically Adaptive\n Cartesian Meshes for Computational Electromagnetics: A discontinuous Galerkin time-domain (DGTD) method based on dynamically\nadaptive Cartesian meshes (ACM) is developed for a full-wave analysis of\nelectromagnetic fields in dispersive media. Hierarchical Cartesian grids offer\nsimplicity close to that of structured grids and the flexibility of\nunstructured grids while being highly suited for adaptive mesh refinement\n(AMR). 
The developed DGTD-ACM achieves a desired accuracy by refining\nnon-conformal meshes near material interfaces to reduce stair-casing errors\nwithout sacrificing the high efficiency afforded with uniform Cartesian meshes.\nMoreover, DGTD-ACM can dynamically refine the mesh to resolve the local\nvariation of the fields during propagation of electromagnetic pulses. A local\ntime-stepping scheme is adopted to alleviate the constraint on the time-step\nsize due to the stability condition of the explicit time integration.\nSimulations of electromagnetic wave diffraction over conducting and dielectric\ncylinders and spheres demonstrate that the proposed method can achieve a good\nnumerical accuracy at a reduced computational cost compared with uniform\nmeshes. For simulations of dispersive media, the auxiliary differential\nequation (ADE) and recursive convolution (RC) methods are implemented for a\nlocal Drude model and tested for a cold plasma slab and a plasmonic rod. With\nfurther advances of the charge transport models, the DGTD-ACM method is\nexpected to provide a powerful tool for computations of electromagnetic fields\nin complex geometries for applications to high-frequency electronic devices,\nplasmonic THz technologies, as well as laser-induced and microwave plasmas.", "category": "physics_comp-ph" }, { "text": "Assessment of continuous and discrete adjoint method for sensitivity\n analysis in two-phase flow simulations: An efficient method for computing the sensitivities is the adjoint method.\nThe cost of solving an adjoint equation is comparable to the cost of solving\nthe governing equation. Once the adjoint solution is obtained, the\nsensitivities to any number of parameters can be obtained with little effort.\nThere are two methods to develop the adjoint equations: the continuous method and the\ndiscrete method. 
In the continuous method, control theory is applied to the\nforward governing equation and produces an analytical partial differential\nequation for solving the adjoint variable; in the discrete method, control\ntheory is applied to the discrete form of the forward governing equation and\nproduces a linear system of equations for solving the adjoint variable. In this\narticle, an adjoint sensitivity analysis framework is developed using both the\ncontinuous and discrete methods. These two methods are assessed with the faucet\nflow for the steady-state problem and with a transient test case based on the BFBT\nbenchmark for the transient problem. Adjoint sensitivities from both methods are\nverified by sensitivities given by the perturbation method. Adjoint\nsensitivities from both methods are physically reasonable and match each other. The\nsensitivities obtained with the discrete method are found to be more accurate than\nthe sensitivities from the continuous method. The continuous method is\ncomputationally more efficient than the discrete method because of the\nanalytical coefficient matrices and vectors. However, difficulties are observed\nin solving the continuous adjoint equation for cases where the adjoint equation\ncontains sharp discontinuities in the source terms; in such cases, the\ncontinuous method is not as robust as the discrete adjoint method.", "category": "physics_comp-ph" }, { "text": "Simulation of Thin-TFETs Using Transition Metal Dichalcogenides: Effect\n of Material Parameters, Gate Dielectric on Electrostatic Device Performance: In recent years, a lot of scientific research effort has been put forth for\nthe investigation of Transition Metal Dichalcogenides (TMDC) and other Two\nDimensional (2D) materials like graphene and boron nitride. Theoretical\ninvestigation on the physical aspects of these materials has revealed a whole\nnew range of exciting applications due to wide tunability in electronic and\noptoelectronic properties. 
Besides theoretical exploration, these materials\nhave been successfully implemented in electronic and optoelectronic devices\nwith promising results. In this work, we have investigated the effect of\nmonolayer TMDC materials and monolayer TMDC alloys on the performance of a\npromising electronic device that can achieve steep switching characteristics --\nthe thin Tunneling Field Effect Transistor, or thin-TFET -- using self-consistent\ndetermination of the conduction and valence band levels in the device and a simplified\nmodel of interlayer tunneling current that treats scattering semi-classically\nand incorporates the energy broadening effect using a Gaussian approximation.\nWe have also explored the effect of gate dielectric material variation and\ninterlayer material variation on the performance of the device.", "category": "physics_comp-ph" }, { "text": "A simplified discrete unified gas kinetic scheme for incompressible flow: The discrete unified gas kinetic scheme (DUGKS) is a new finite volume (FV)\nscheme for continuum and rarefied flows which combines the benefits of both\nthe Lattice Boltzmann Method (LBM) and the unified gas kinetic scheme (UGKS). By\nreconstructing the gas distribution function along particle velocity\ncharacteristic lines, the flux contains more detailed information of the fluid flow and\na more concrete physical nature. In this work, a simplified DUGKS is proposed\nwith a reconstruction stage over a whole time step instead of the half time step of the\noriginal DUGKS. Using the temporal/spatial integral of the Boltzmann Bhatnagar-Gross-Krook\n(BGK) equation, the transformed distribution function with inclusion of the\ncollision effect is constructed. The macroscopic and mesoscopic fluxes of the cell at the\nnext time step are predicted by reconstruction of the transformed distribution\nfunction at interfaces along particle velocity characteristic lines. According\nto the conservation law, the macroscopic variables of the cell at the next time\nstep can be updated through its macroscopic flux. 
The equilibrium distribution\nfunction at the next time step can also be updated. The gas distribution function is\nupdated by the FV scheme through its predicted mesoscopic flux in a time step.\nCompared with the original DUGKS, the computational process of the proposed\nmethod is more concise because of the omission of the half time step flux\ncalculation. The numerical time step is only limited by the Courant-Friedrichs-Lewy\n(CFL) condition and relatively good stability has been preserved. Several test\ncases, including the Couette flow, lid-driven cavity flow, laminar flows over a\nflat plate, a circular cylinder, and an airfoil, as well as micro cavity flow\ncases are conducted to validate the present scheme. The numerical simulation\nresults agree well with reference results.", "category": "physics_comp-ph" }, { "text": "Multiscale reaction-diffusion algorithms: PDE-assisted Brownian dynamics: Two algorithms that combine Brownian dynamics (BD) simulations with\nmean-field partial differential equations (PDEs) are presented. This\nPDE-assisted Brownian dynamics (PBD) methodology provides exact particle\ntracking data in parts of the domain, whilst making use of a mean-field\nreaction-diffusion PDE description elsewhere. The first PBD algorithm couples\nBD simulations with PDEs by randomly creating new particles close to the\ninterface which partitions the domain and by reincorporating particles into the\ncontinuum PDE-description when they cross the interface. The second PBD\nalgorithm introduces an overlap region, where both descriptions exist in\nparallel. It is shown that to accurately compute variances using the PBD\nsimulation requires the overlap region. 
Advantages of both PBD approaches are\ndiscussed and illustrative numerical examples are presented.", "category": "physics_comp-ph" }, { "text": "Diffusive approximation of a time-fractional Burger's equation in\n nonlinear acoustics: A fractional time derivative is introduced into the Burger's equation to\nmodel losses of nonlinear waves. This term amounts to a time convolution\nproduct, which greatly penalizes the numerical modeling. A diffusive\nrepresentation of the fractional derivative is adopted here, replacing this\nnonlocal operator by a continuum of memory variables that satisfy local-in-time\nordinary differential equations. Then a quadrature formula yields a system of\nlocal partial differential equations, well-suited to numerical integration. The\ndetermination of the quadrature coefficients is crucial to ensure both the\nwell-posedness of the system and the computational efficiency of the diffusive\napproximation. For this purpose, optimization with constraint is shown to be a\nvery efficient strategy. 
Strang splitting is used to solve successively the\nhyperbolic part by a shock-capturing scheme, and the diffusive part exactly.\nNumerical experiments are proposed to assess the efficiency of the numerical\nmodeling, and to illustrate the effect of the fractional attenuation on the\nwave propagation.", "category": "physics_comp-ph" }, { "text": "Finite-Bandwidth Resonances of High-Order Axial Modes (HOAM) in a\n Gyrotron Cavity: Finite-bandwidth resonances of high-order axial modes (HOAM) in an open\ngyrotron cavity are studied numerically using the GYROSIM problem-oriented\nsoftware package for modelling, simulation and computer-aided design (CAD) of\ngyrotron tubes.", "category": "physics_comp-ph" }, { "text": "TCAD Modeling of Cryogenic nMOSFET ON-State Current and Subthreshold\n Slope: In this paper, through careful calibration, we demonstrate the possibility of\nusing a single set of models and parameters to model the ON current and\nSub-threshold Slope (SS) of an nMOSFET at 300K and 5K using Technology\nComputer-Aided Design (TCAD). The device used is a 0.35um technology nMOSFET\nwith W/L=10um/10um. We show that it is possible to model the abnormal SS by\nusing interface acceptor traps with a density less than 2e12cm-2. We also\npropose trap distribution profiles in the energy space that can be used to\nreproduce other observed SS from 4K to 300K. Although this work does not prove\nor disprove any possible origin of the abnormal SS, it shows that one cannot\ncompletely rule out the interfacial traps as the origin and it shows that\ninterfacial traps can be used to model the abnormal SS before the origin is\nfully understood. 
We also show that Drain-Induced-Barrier-Lowering (DIBL) is\nmuch reduced at cryogenic temperature due to the abnormal slope and the device\noptimization strategy might need to be revised.", "category": "physics_comp-ph" }, { "text": "Exact numerical simulation of power-law noises: Many simulations of stochastic processes require colored noises: I describe\nhere an exact numerical method to simulate power-law noises: the method can be\nextended to more general colored noises, and is exact for all time steps, even\nwhen they are unevenly spaced (as may often happen for astronomical data, see\ne.g. N. R. Lomb, Astrophys. Space Sci. {\\bf 39}, 447 (1976)). The algorithm has\na well-behaved computational complexity, it produces a nearly perfect Gaussian\nnoise, and its computational efficiency depends on the required degree of noise\nGaussianity.", "category": "physics_comp-ph" }, { "text": "Data-driven computational mechanics: We develop a new computing paradigm, which we refer to as data-driven\ncomputing, according to which calculations are carried out directly from\nexperimental material data and pertinent constraints and conservation laws,\nsuch as compatibility and equilibrium, thus bypassing the empirical material\nmodeling step of conventional computing altogether. Data-driven solvers seek to\nassign to each material point the state from a prespecified data set that is\nclosest to satisfying the conservation laws. Equivalently, data-driven solvers\naim to find the state satisfying the conservation laws that is closest to the\ndata set. The resulting data-driven problem thus consists of the minimization\nof a distance function to the data set in phase space subject to constraints\nintroduced by the conservation laws. We motivate the data-driven paradigm and\ninvestigate the performance of data-driven solvers by means of two examples of\napplication, namely, the static equilibrium of nonlinear three-dimensional\ntrusses and linear elasticity. 
In these tests, the data-driven solvers exhibit\ngood convergence properties both with respect to the number of data points and\nwith regard to local data assignment. The variational structure of the\ndata-driven problem also renders it amenable to analysis. We show that, as the\ndata set approximates increasingly closely a classical material law in phase\nspace, the data-driven solutions converge to the classical solution. We also\nillustrate the robustness of data-driven solvers with respect to spatial\ndiscretization. In particular, we show that the data-driven solutions of\nfinite-element discretizations of linear elasticity converge jointly with\nrespect to mesh size and approximation by the data set.", "category": "physics_comp-ph" }, { "text": "Numerical Evidence for Divergent Burnett Coefficients: In previous papers [Phys. Rev. A {\bf 41}, 4501 (1990), Phys. Rev. E {\bf\n18}, 3178 (1993)], simple equilibrium expressions were obtained for nonlinear\nBurnett coefficients. A preliminary calculation of a 32 particle Lennard-Jones\nfluid was presented in the previous paper. Now, sufficient resources have\nbecome available to address the question of whether nonlinear Burnett\ncoefficients are finite for soft spheres. The hard sphere case is known to have\ninfinite nonlinear Burnett coefficients (i.e. a nonanalytic constitutive\nrelation) from mode coupling theory. This paper reports a molecular dynamics\ncalculation of the third order nonlinear Burnett coefficient of a Lennard-Jones\nfluid undergoing colour flow, which indicates that this term diverges in the\nthermodynamic limit.", "category": "physics_comp-ph" }, { "text": "Influence of A-Posteriori Subcell Limiting on Fault Frequency in\n Higher-Order DG Schemes: Soft error rates are increasing as modern architectures require increasingly\nsmall features at low voltages. Due to the large number of components used in\nHPC architectures, these are particularly vulnerable to soft errors. 
Hence,\nwhen designing applications that run for long time periods on large machines,\nalgorithmic resilience must be taken into account. In this paper we analyse the\ninherent resiliency of a-posteriori limiting procedures in the context of the\nexplicit ADER DG hyperbolic PDE solver ExaHyPE. The a-posteriori limiter checks\nelement-local high-order DG solutions for physical admissibility, and can thus\nbe expected to also detect hardware-induced errors. Algorithmically, it can be\ninterpreted as element-local checkpointing and restarting of the solver with a\nmore robust finite volume scheme on a fine subgrid. We show that the limiter\nindeed increases the resilience of the DG algorithm, detecting and correcting\nparticularly those faults which would otherwise lead to a fatal failure.", "category": "physics_comp-ph" }, { "text": "An accurate spectral method for solving the Schroedinger equation: The solution of the Lippmann-Schwinger (L-S) integral equation is equivalent\nto the solution of the Schroedinger equation. A new numerical algorithm for\nsolving the L-S equation is described in simple terms, and its high accuracy is\nconfirmed for several physical situations. They are: the scattering of an\nelectron from a static hydrogen atom in the presence of exchange, the\nscattering of two atoms at ultra low temperatures, and barrier penetration in\nthe presence of a resonance for a Morse potential. A key ingredient of the\nmethod is to divide the radial range into partitions, and in each partition\nexpand the solution of the L-S equation into a set of Chebyshev polynomials.\nThe expansion is called "spectral" because it converges rapidly to high\naccuracy. 
Properties of the Chebyshev expansion, such as rapid convergence, are\nillustrated by means of a simple example.", "category": "physics_comp-ph" }, { "text": "DeePN$^2$: A deep learning-based non-Newtonian hydrodynamic model: A long-standing problem in the modeling of non-Newtonian hydrodynamics of\npolymeric flows is the availability of reliable and interpretable hydrodynamic\nmodels that faithfully encode the underlying micro-scale polymer dynamics. The\nmain complication arises from the long polymer relaxation time, the complex\nmolecular structure and heterogeneous interaction. DeePN$^2$, a deep\nlearning-based non-Newtonian hydrodynamic model, has been proposed and has\nshown some success in systematically passing the micro-scale structural\nmechanics information to the macro-scale hydrodynamics for suspensions with\nsimple polymer conformation and bond potential. The model retains a\nmulti-scaled nature by mapping the polymer configurations into a set of\nsymmetry-preserving macro-scale features. The extended constitutive laws for\nthese macro-scale features can be directly learned from the kinetics of their\nmicro-scale counterparts. In this paper, we develop DeePN$^2$ using more\ncomplex micro-structural models. We show that DeePN$^2$ can faithfully capture\nthe broadly overlooked viscoelastic differences arising from the specific\nmolecular structural mechanics without human intervention.", "category": "physics_comp-ph" }, { "text": "Yade Documentation: Yade is an extensible open-source framework for discrete numerical models,\nfocused on the Discrete Element Method. The computation parts are written in\nC++ using a flexible object model, allowing independent implementation of\nnew algorithms and interfaces. Python is used for rapid and concise scene\nconstruction, simulation control, postprocessing and debugging. Yade is located\nat yade-dem.org, which contains this documentation.
Development is kindly\nhosted on Launchpad and GitLab; they are used for source code, bug tracking,\nsource downloads and more. Building, regression tests and package\ndistribution are hosted on servers of the Grenoble Geomechanics group at\nLaboratoire 3SR, UMS Gricad and Gda\'nsk University of Technology. Yade\nsupports high precision calculations and Python 3. The development branch is on\nGitLab.", "category": "physics_comp-ph" }, { "text": "On sparse reconstructions in near-field acoustic holography using the\n method of superposition: The method of superposition is proposed in combination with a sparse $\ell_1$\noptimisation algorithm with the aim of finding a sparse basis to accurately\nreconstruct the structural vibrations of a radiating object from a set of\nacoustic pressure values on a conformal surface in the near-field. The nature\nof the reconstructions generated by the method differs fundamentally from those\ngenerated via standard Tikhonov regularisation in terms of the level of\nsparsity in the distribution of charge strengths specifying the basis. In many\ncases, the $\ell_1$ optimisation leads to a solution basis whose size is only a\nsmall fraction of the total number of measured data points. The effects of\nchanging the wavenumber, the internal source surface and the (noisy) acoustic\npressure data in general will all be studied with reference to a numerical\nstudy on a cuboid of similar dimensions to a typical loudspeaker cabinet.
The\ndevelopment of sparse and accurate reconstructions has a number of advantageous\nconsequences including improved reconstructions from reduced data sets, the\nenhancement of numerical solution methods and wider applications in source\nidentification problems.", "category": "physics_comp-ph" }, { "text": "Wavelet methods to eliminate resonances in the Galerkin-truncated\n Burgers and Euler equations: It is well known that solutions to the Fourier-Galerkin truncation of the\ninviscid Burgers equation (and other hyperbolic conservation laws) do not\nconverge to the physically relevant entropy solution after the formation of the\nfirst shock. This loss of convergence was recently studied in detail in [S. S.\nRay et al., Phys. Rev. E 84, 016301 (2011)], and traced back to the appearance\nof a spatially localized resonance phenomenon perturbing the solution. In this\nwork, we propose a way to remove this resonance by filtering a wavelet\nrepresentation of the Galerkin-truncated equations. A method previously\ndeveloped with a complex-valued wavelet frame is applied and expanded to\nembrace the use of a real-valued orthogonal wavelet basis, which we show to\nyield satisfactory results only under the condition of adding a safety zone in\nwavelet space. We also apply the complex-valued wavelet-based method to the 2D\nEuler equation problem, showing that it is able to filter the resonances in\nthis case as well.", "category": "physics_comp-ph" }, { "text": "Finding well-optimized special quasirandom structures with decision\n diagram: The advanced data structure of the zero-suppressed binary decision diagram\n(ZDD) enables us to efficiently enumerate nonequivalent substitutional\nstructures. Not only can the ZDD store a vast number of structures in a\ncompressed manner, but a set of structures satisfying given\nconstraints can also be extracted from the ZDD efficiently.
Here, we present an efficient\nZDD-based algorithm for exhaustively searching for special quasirandom\nstructures (SQSs) that mimic the perfectly random substitutional structure. We\ndemonstrate that the current approach can extract only a tiny number of SQSs\nfrom a ZDD composed of many substitutional structures (>$10^{12}$). As a\nresult, we find SQSs that are optimized better than those proposed in the\nliterature. A series of SQSs should be helpful for estimating the properties of\nsubstitutional solid solutions. Furthermore, the present ZDD-based algorithm\nshould be useful for applying the ZDD to other structure enumeration\nproblems.", "category": "physics_comp-ph" }, { "text": "Superconductive \"sodalite\"-like clathrate calcium hydride at high\n pressures: Hydrogen-rich compounds hold promise as high-temperature superconductors\nunder high pressures. Recent theoretical hydride structures on achieving\nhigh-pressure superconductivity are composed mainly of H2 fragments. Through a\nsystematic investigation of Ca hydrides with different hydrogen contents using\nparticle-swarm optimization structural search, we show that in the stoichiometry\nCaH6 a body-centred cubic structure with hydrogen that forms unusual \"sodalite\"\ncages containing enclathrated Ca stabilizes above 150 GPa. The\nstability of this structure is derived from the acceptance by two H2 of\nelectrons donated by Ca forming a \"H4\" unit as the building block in the\nconstruction of the 3-dimensional sodalite cage. This unique structure has a\npartial occupation of the degenerate orbitals at the zone centre. The\nresultant dynamic Jahn-Teller effect helps to enhance electron-phonon coupling\nand leads to superconductivity of CaH6.
A superconducting critical temperature\n(Tc) of 220-235 K at 150 GPa obtained from the solution of the Eliashberg\nequations is the highest among all hydrides studied thus far.", "category": "physics_comp-ph" }, { "text": "Modal Tracking Based on Group Theory: Issues in modal tracking in the presence of crossings and crossing avoidances\nbetween eigenvalue traces are solved via the theory of point groups. The von\nNeumann-Wigner theorem is used as a key factor in predictively determining mode\nbehavior over arbitrary frequency ranges. The implementation and capabilities\nof the proposed procedure are demonstrated using characteristic mode\ndecomposition as a motivating example. The procedure is, nevertheless, general\nand can be applied to arbitrarily parametrized eigenvalue problems. A\ntreatment of modal degeneracies is included and several examples are presented\nto illustrate modal tracking improvements and the immediate consequences of\nimproper modal tracking. An approach leveraging a symmetry-adapted basis to\naccelerate computation is also discussed. A relationship between geometrical\nand physical symmetries is demonstrated on a practical example.", "category": "physics_comp-ph" }, { "text": "A Deep Finite Difference Emulator for the Fast Simulation of Coupled\n Viscous Burgers' Equation: This work proposes a deep learning-based emulator for the efficient\ncomputation of the coupled viscous Burgers' equation with random initial\nconditions. In a departure from traditional data-driven deep learning\napproaches, the proposed emulator does not require a classical numerical solver\nto collect training data. Instead, it makes direct use of the problem's\nphysics. Specifically, the model emulates a second-order finite difference\nsolver, i.e., the Crank-Nicolson scheme in learning dynamics. A systematic case\nstudy is conducted to examine the model's prediction performance,\ngeneralization ability, and computational efficiency.
The computed results are\ngraphically represented and compared to those of state-of-the-art numerical\nsolvers.", "category": "physics_comp-ph" }, { "text": "Global Sensitivity Analysis on Numerical Solver Parameters of\n Particle-In-Cell Models in Particle Accelerator Systems: Every computer model depends on numerical input parameters that are chosen\naccording to mostly conservative but rigorous numerical or empirical estimates.\nThese parameters could, for example, be the step size for time integrators, a\nseed for pseudo-random number generators, a threshold or the number of grid\npoints to discretize a computational domain. In case a numerical model is\nenhanced with new algorithms and modelling techniques, the numerical influence\non the quantities of interest, the running time as well as the accuracy is\noften initially unknown. Usually parameters are chosen on a trial-and-error\nbasis, neglecting the computational cost versus accuracy aspects. As a\nconsequence, the cost per simulation might be unnecessarily high, which wastes\ncomputing resources. Hence, it is essential to identify the most critical\nnumerical parameters and to analyze systematically their effect on the result\nin order to minimize the time-to-solution without losing significantly on\naccuracy. Relevant parameters are identified by global sensitivity studies\nwhere Sobol' indices are common measures. These sensitivities are obtained by\nsurrogate models based on polynomial chaos expansion. In this paper, we first\nintroduce the general methods for uncertainty quantification. We then\ndemonstrate their use on numerical solver parameters to reduce the\ncomputational costs and discuss further model improvements based on the\nsensitivity analysis.
The sensitivities are evaluated for neighbouring bunch\nsimulations of the existing PSI Injector II and PSI Ring as well as the\nproposed Daedalus Injector cyclotron and simulations of the rf electron gun of\nthe Argonne Wakefield Accelerator.", "category": "physics_comp-ph" }, { "text": "Local micromorphic non-affine anisotropy for materials incorporating\n elastically bonded fibers: There has been increasing experimental evidence of non-affine elastic\ndeformation mechanisms in biological soft tissues. These observations call for\nnovel constitutive models which are able to describe the dominant underlying\nmicro-structural kinematic aspects, in particular relative motion\ncharacteristics of different phases. This paper proposes a flexible and modular\nframework based on a micromorphic continuum encompassing matrix and fiber\nphases. It features, in addition to the displacement field, so-called director\nfields which can independently deform and intrinsically carry orientational\ninformation. Accordingly, the fibrous constituents can be naturally associated\nwith the micromorphic directors and their non-affine motion within the bulk\nmaterial can be efficiently captured. Furthermore, constitutive relations can\nbe formulated based on kinematic quantities specifically linked to the\nmaterial response of the matrix, the fibres and their mutual interactions.\nAssociated stress quantities are naturally derived from a micromorphic\nvariational principle featuring dedicated governing equations for displacement\nand director fields. This aspect of the framework is crucial for the truly\nnon-affine elastic deformation description. In contrast to conventional\nmicromorphic approaches, any non-local higher-order material behaviour is\nexcluded, thus significantly reducing the number of material parameters to a\nrange typically found in related classical approaches.
In the context of\nbiological soft tissue modeling, the potential and applicability of the\nformulation are studied for a number of academic examples featuring anisotropic\nfiber-reinforced composite material composition to elucidate the micromorphic\nmaterial response as compared with the one obtained using a classical continuum\nmechanics approach.", "category": "physics_comp-ph" }, { "text": "A comparative study of bi-directional Whitham systems: In 1967, Whitham proposed a simplified surface water-wave model which\ncombined the full linear dispersion relation of the full Euler equations with a\nweakly nonlinear approximation. The equation he postulated, which is now called\nthe Whitham equation, has recently been extended to a system of equations allowing\nfor bi-directional propagation of surface waves. A number of different two-way\nsystems have been put forward, and even though they are similar from a modeling\npoint of view, these systems have very different mathematical properties. In\nthe current work, we review some of the existing fully dispersive systems. We\nuse state-of-the-art numerical tools to try to understand existence and\nstability of solutions to the initial-value problem associated with these\nsystems. We also put forward a new system which is Hamiltonian and semi-linear.\nThe new system is shown to perform well both with regard to approximating the\nfull Euler system, and with regard to well-posedness properties.", "category": "physics_comp-ph" }, { "text": "An ab initio path integral Monte Carlo simulation method for molecules\n and clusters: application to Li_4 and Li_5^+: A novel method for simulating the statistical mechanics of molecular systems\nin which both nuclear and electronic degrees of freedom are treated quantum\nmechanically is presented. The scheme combines a path integral description of\nthe nuclear variables with a first-principles adiabatic description of the\nelectronic structure.
The electronic problem is solved for the ground state\nwithin a density functional approach, with the electronic orbitals expanded in\na localized (Gaussian) basis set. The discretized path integral is computed by\na Metropolis Monte Carlo sampling technique on the normal modes of the\nisomorphic ring-polymer. An effective short-time action correct to order\n$\\tau^4$ is used. The validity and performance of the method are tested in two\nsmall Lithium clusters, namely Li$_4$ and Li$_5^+$. Structural and electronic\nproperties computed within this fully quantum-mechanical scheme are presented\nand compared to those obtained within the classical nuclei approximation.\nQuantum delocalization effects are significant but tunneling turns out to be\nirrelevant at low temperatures.", "category": "physics_comp-ph" }, { "text": "Scaling Properties of Parallelized Multicanonical Simulations: We implemented a parallel version of the multicanonical algorithm and applied\nit to a variety of systems with phase transitions of first and second order.\nThe parallelization relies on independent equilibrium simulations that only\ncommunicate when the multicanonical weight function is updated. That way, the\nMarkov chains efficiently sample the temporary distributions allowing for good\nestimations of consecutive weight functions.\n The systems investigated range from the well known Ising and Potts spin\nsystems to bead-spring polymers. We estimate the speedup with increasing number\nof parallel processes. Overall, the parallelization is shown to scale quite\nwell. 
In the case of multicanonical simulations of the $q$-state Potts model\n($q\ge6$) and multimagnetic simulations of the Ising model, the optimal\nperformance is limited due to emerging barriers.", "category": "physics_comp-ph" }, { "text": "Boundary Conditions for Continuum Simulations of Wall-bounded Kinetic\n Plasmas: Continuum kinetic simulations of plasmas, where the distribution function of\nthe species is directly discretized in phase-space, permit fully kinetic\nsimulations without the statistical noise of particle-in-cell methods. Recent\nadvances in numerical algorithms have made continuum kinetic simulations\ncomputationally competitive. This work presents the first continuum kinetic\ndescription of high-fidelity wall boundary conditions that utilize the readily\navailable particle distribution function. The boundary condition is realized\nthrough a reflection function that can capture a wide range of cases from\nsimple specular reflection to more involved first principles models. Examples\nwith detailed discontinuous Galerkin implementation are provided for secondary\nelectron emission using phenomenological and first-principles\nquantum-mechanical models. Results presented in this work demonstrate the\neffect of secondary electron emission on a classical plasma sheath.", "category": "physics_comp-ph" }, { "text": "A Study of Hardening Behavior Based on a Finite-Deformation Gradient\n Crystal-Plasticity Model: A systematic study on the different roles of the governing components of a\nwell-defined finite-deformation gradient crystal-plasticity model proposed by\n(Gurtin, 2008b) is carried out, in order to visualize the capability of the\nmodel in the prediction of a wide range of hardening behaviors as well as\nrate-dependent, scale-variation and Bauschinger-like responses in a single\ncrystal.
A function of accumulation rates of dislocations is employed and\nviewed as a measure of formation of short-range interactions which impede\ndislocation movements within a crystal. The model is first represented in the\nreference configuration for the purpose of numerical implementation, and then\nimplemented in the FEM software ABAQUS via a user-defined subroutine (UEL). Our\nsimulation results reveal that the dissipative gradient-strengthening also\nacts as a source of isotropic-hardening behavior, which represents the\neffect of cold work introduced by (Gurtin and Ohno, 2011). Moreover, plastic\nflows in predefined slip systems and expansion of accumulation of GNDs are\ndistinctly observed in varying scales and under different loading conditions.", "category": "physics_comp-ph" }, { "text": "Optoacoustic inversion via Volterra kernel reconstruction: In this letter we address the numerical inversion of optoacoustic signals to\ninitial stress profiles. To this end, we scrutinize the optoacoustic\nkernel reconstruction problem in the paraxial approximation of the underlying\nwave-equation. We apply a Fourier-series expansion of the optoacoustic Volterra\nkernel and obtain the respective expansion coefficients for a given\n\"apparative\" setup by performing a gauge procedure using synthetic input data.\nThe resulting effective kernel is subsequently used to solve the optoacoustic\nsource reconstruction problem for general signals.
We verify the validity of\nthe proposed inversion protocol for synthetic signals and explore the\nfeasibility of our approach to also account for the diffraction transformation\nof signals beyond the paraxial approximation.", "category": "physics_comp-ph" }, { "text": "Exit time distribution in spherically symmetric two-dimensional domains: The distribution of exit times is computed for a Brownian particle in\nspherically symmetric two-dimensional domains (disks, angular sectors, annuli)\nand in rectangles that contain an exit on their boundary. The governing partial\ndifferential equation of Helmholtz type with mixed Dirichlet-Neumann boundary\nconditions is solved analytically. We propose both an exact solution relying on\na matrix inversion, and an approximate explicit solution. The approximate\nsolution is shown to be exact for an exit of vanishing size and to be accurate\neven for large exits. For angular sectors, we also derive exact explicit\nformulas for the moments of the exit time. For annuli and rectangles, the\napproximate expression of the mean exit time is shown to be very accurate even\nfor large exits. The analysis is also extended to biased diffusion. Since the\nHelmholtz equation with mixed boundary conditions is encountered in\nmicrofluidics, heat propagation, quantum billiards, and acoustics, the\ndeveloped method can find numerous applications beyond exit processes.", "category": "physics_comp-ph" }, { "text": "A model for stable interfacial crack growth: We present a model for stable crack growth in a constrained geometry. The\nmorphology of such cracks shows scaling properties consistent with\nself-affinity. Recent experiments show that there are two distinct self-affine\nregimes, one on small scales and the other on large scales. It is believed\nthat two different physical mechanisms are responsible for this. The model we\nintroduce aims to investigate the two mechanisms in a single system.
We do find\ntwo distinct scaling regimes in the model.", "category": "physics_comp-ph" }, { "text": "A random batch Ewald method for particle systems with Coulomb\n interactions: We develop a random batch Ewald (RBE) method for molecular dynamics\nsimulations of particle systems with long-range Coulomb interactions, which\nachieves an $O(N)$ complexity in each step of simulating the $N$-body systems.\nThe RBE method is based on the Ewald splitting for the Coulomb kernel with a\nrandom \"mini-batch\" type technique introduced to speed up the summation of the\nFourier series for the long-range part of the splitting. Importance sampling is\nemployed to reduce the induced force variance by taking advantage of the fast\ndecay property of the Fourier coefficients. The stochastic approximation is\nunbiased with controlled variance. Analysis for bounded force fields gives some\ntheoretical support for the method. Simulations of two typical problems of charged\nsystems are presented to illustrate the accuracy and efficiency of the RBE\nmethod in comparison to the results from the Debye-H\\"uckel theory and the\nclassical Ewald summation, demonstrating that the proposed method is easy to\nimplement, scales linearly, and is promising for many practical applications.", "category": "physics_comp-ph" }, { "text": "A discontinuous Galerkin fast spectral method for the multi-species\n Boltzmann equation: We introduce a fast Fourier spectral method for the multi-species Boltzmann\ncollision operator. The method retains the riveting properties of the\nsingle-species fast spectral method (Gamba et al. SIAM J. Sci. Comput., 39 pp.\nB658--B674 2017) including: (a) spectral accuracy, (b) reduced computational\ncomplexity compared to direct spectral method, (c) reduced memory requirement\nin the precomputation, and (d) applicability to general collision kernels.
The\nfast collision algorithm is then coupled with discontinuous Galerkin\ndiscretization in the physical space (Jaiswal et al. J. Comp. Phys., 378 pp.\n178--208 2019) to result in a highly accurate deterministic method (DGFS) for\nthe full Boltzmann equation of gas mixtures. A series of numerical tests is\nperformed to illustrate the efficiency and accuracy of the proposed method.\nVarious benchmarks highlighting different collision kernels, different mass\nratios, momentum transfer, heat transfer, and in particular the diffusive\ntransport have been studied. The results are directly compared with the direct\nsimulation Monte Carlo (DSMC) method.", "category": "physics_comp-ph" }, { "text": "Accelerating fourth-generation machine learning potentials by\n quasi-linear scaling particle mesh charge equilibration: Machine learning potentials (MLP) have revolutionized the field of atomistic\nsimulations by describing the atomic interactions with the accuracy of\nelectronic structure methods at a small fraction of the costs. Most current\nMLPs construct the energy of a system as a sum of atomic energies, which depend\non information about the atomic environments provided in form of predefined or\nlearnable feature vectors. If, in addition, non-local phenomena like long-range\ncharge transfer are important, fourth-generation MLPs need to be used, which\ninclude a charge equilibration (Qeq) step to take the global structure of the\nsystem into account. This Qeq can significantly increase the computational cost\nand thus can become the computational bottleneck for large systems. In this\npaper we present a highly efficient formulation of Qeq that does not require\nthe explicit computation of the Coulomb matrix elements resulting in a\nquasi-linearly scaling method. Moreover, our approach also allows for the\nefficient calculation of energy derivatives, which explicitly consider the\nglobal structure-dependence of the atomic charges as obtained from Qeq. 
Due to\nits generality, the method is not restricted to MLPs but can also be applied\nwithin a variety of other force fields.", "category": "physics_comp-ph" }, { "text": "A Posteriori Error Estimate and Adaptivity for QM/MM Models of\n Crystalline Defects: Hybrid quantum/molecular mechanics (QM/MM) models play a pivotal role in\nmolecular simulations. These models provide a balance between accuracy,\nsurpassing pure MM models, and computational efficiency, offering advantages\nover pure QM models. Adaptive approaches have been developed to further improve\nthis balance by allowing on-the-fly selection of the QM and MM subsystems as\nnecessary. We propose a novel and robust adaptive QM/MM method for practical\nmaterial defect simulations. To ensure mathematical consistency with the QM\nreference model, we employ machine-learning interatomic potentials (MLIPs) as\nthe MM models. Our adaptive QM/MM method utilizes a residual-based error\nestimator that provides both upper and lower bounds for the approximation\nerror, thus indicating its reliability and efficiency. Furthermore, we\nintroduce a novel adaptive algorithm capable of anisotropically updating the\nQM/MM partitions. This update is based on the proposed residual-based error\nestimator and involves solving a free interface motion problem, which is\nefficiently achieved using the fast marching method. We demonstrate the\nrobustness of our approach via numerical tests on a wide range of crystalline\ndefects.", "category": "physics_comp-ph" }, { "text": "Saturated random packing built of arbitrary polygons under random\n sequential adsorption protocol: Random packings and their properties are a popular and active field of\nresearch. Numerical algorithms that can efficiently generate them are useful\ntools in their study. This paper focuses on random packings produced according\nto the random sequential adsorption (RSA) protocol. Developing the idea\npresented in [G. Zhang, Phys. Rev. 
E {\\bf 97}, 043311 (2018)], where saturated\nrandom packings built of regular polygons were studied, we create an algorithm\nthat generates strictly saturated packings built of any polygons. Then, the\nalgorithm was used to determine the packing fractions for arbitrary triangles.\nThe highest mean packing density, $0.552814 \\pm 0.000063$, was observed for\ntriangles of side lengths $0.63:1:1$. Additionally, microstructural properties\nof such packings, kinetics of their growth as well as distributions of\nsaturated packing fractions and the number of RSA iterations needed to reach\nsaturation were analyzed.", "category": "physics_comp-ph" }, { "text": "Scattering matrix of arbitrarily shaped objects: Combining Finite\n Elements and Vector Partial Waves: We demonstrate the interest of combining Finite Element calculations with the\nVector Partial Wave formulation (used in T-matrix and Mie theory) in order to\ncharacterize the electromagnetic scattering properties of isolated individual\nscatterers. This method consists of individually feeding the finite element\nproblem with incident Vector Partial Waves in order to numerically determine\nthe T-matrix elements of the scatterer. For a sphere and an ellipsoid, we\ndemonstrate that this method determines the scattering matrix to high accuracy.\nRecurrence relations for a fast determination of the vector partial waves are\ngiven explicitly, and an open-source code allowing the retrieval of the\npresented numerical results is provided.", "category": "physics_comp-ph" }, { "text": "Comment on \"A note on generalized radial mesh generation for plasma\n electronic structure\": In a recent note [High Energy Density Phys. 7, 161 (2011)], B.G. Wilson and\nV. Sonnad proposed a very useful closed form expression for the efficient\ngeneration of analytic log-linear radial meshes. 
The central point of the note\nis an implicit equation for the parameter h, involving Lambert's function W[x].\nThe authors mention that they are unaware of any direct proof of this equation\n(they obtained it by re-summing the Taylor expansion of h using high-order\ncoefficients obtained by analytic differentiation of the implicit definition\nusing symbolic manipulation). In the present comment, we present a direct proof\nof that equation.", "category": "physics_comp-ph" }, { "text": "Variational and Diffusion Quantum Monte Carlo Calculations with the\n CASINO Code: We present an overview of the variational and diffusion quantum Monte Carlo\nmethods as implemented in the CASINO program. We particularly focus on\ndevelopments made in the last decade, describing state-of-the-art quantum Monte\nCarlo algorithms and software and discussing their strengths and their\nweaknesses. We review a range of recent applications of CASINO.", "category": "physics_comp-ph" }, { "text": "Mechanical, optoelectronic and transport properties of single-layer Ca2N\n and Sr2N electrides: Electride materials offer attractive physical properties due to their loosely\nbound electrons. Ca2N, an electride in two-dimensional (2D) form, was\nrecently synthesized. We conducted extensive first-principles\ncalculations to explore the mechanical, electronic, optical and transport\nresponse of single-layer and free-standing Ca2N and Sr2N electrides to external\nstrain. We show that Ca2N and Sr2N sheets present isotropic elastic properties\nwith positive Poisson's ratios; however, they yield around 50% higher tensile\nstrength along the zigzag direction as compared with armchair. We also show\nthat the strain has negligible effect on the conductivity of the materials; the\ncurrent in the system reduces by less than 32% for the structure under ultimate\nuniaxial strain along the armchair direction.
Compressive strain always\nincreases the electronic transport in the systems due to stronger overlap of\nthe atomic orbitals. Our results show that the optical spectra are anisotropic\nfor light polarization parallel and perpendicular to the plane. Interband\ntransition contributions along in-plane polarization are not negligible; when\nthis effect is considered, the optical properties of Ca2N and Sr2N sheets in the\nlow-frequency regime change significantly. The insight provided by this study\ncan be useful for the future application of Ca2N and Sr2N in nanodevices.", "category": "physics_comp-ph" }, { "text": "A GPU-Accelerated Fast Summation Method Based on Barycentric Lagrange\n Interpolation and Dual Tree Traversal: We present the barycentric Lagrange dual tree traversal (BLDTT) fast\nsummation method for particle interactions. The scheme replaces well-separated\nparticle-particle interactions by adaptively chosen particle-cluster,\ncluster-particle, and cluster-cluster approximations given by barycentric\nLagrange interpolation at proxy particles on a Chebyshev grid in each cluster.\nThe BLDTT is kernel-independent and the approximations can be efficiently\nmapped onto GPUs, where target particles provide an outer level of parallelism\nand source particles provide an inner level of parallelism. We present an\nOpenACC GPU implementation of the BLDTT with MPI remote memory access for\ndistributed memory parallelization. The performance of the GPU-accelerated\nBLDTT is demonstrated for calculations with different problem sizes, particle\ndistributions, geometric domains, and interaction kernels, as well as for\nunequal target and source particles. Comparison with our earlier\nparticle-cluster barycentric Lagrange treecode (BLTC) demonstrates the superior\nperformance of the BLDTT. In particular, on a single GPU for problem sizes\nranging from $N$=1E5 to 1E8, the BLTC has $O(N\log N)$ scaling, while the BLDTT\nhas $O(N)$ scaling.
In addition, MPI strong scaling results are presented for\nthe BLTC and BLDTT using $N$=64E6 particles on up to 32 GPUs.", "category": "physics_comp-ph" }, { "text": "Nonlinear phase coupling functions: a numerical study: Phase reduction is a general tool widely used to describe forced and\ninteracting self-sustained oscillators. Here we explore the phase coupling\nfunctions beyond the usual first-order approximation in the strength of the\nforce. Taking the periodically forced Stuart-Landau oscillator as the\nparadigmatic model, we determine and numerically analyse the coupling functions\nup to the fourth order in the force strength. We show that the found nonlinear\nphase coupling functions can be used for predicting synchronization regions of\nthe forced oscillator.", "category": "physics_comp-ph" }, { "text": "Koopmans-compliant functionals and potentials and their application to\n the GW100 test-set: Koopmans-compliant (KC) functionals have been shown to provide accurate\nspectral properties through a generalized condition of piece-wise linearity of\nthe total energy as a function of the fractional addition/removal of an\nelectron to/from any orbital. We analyze the performance of different KC\nfunctionals on the GW100 test-set, comparing the ionization potentials (as\nopposite of the energy of the highest occupied orbital) of these 100 molecules\nto those obtained from CCSD(T) total energy differences, and experimental\nresults, finding excellent agreement with a mean absolute error of 0.20 eV for\nthe KIPZ functional, that is state-of-the-art for both DFT-based calculations\nand many-body perturbation theory. 
We highlight similarities and differences\nbetween KC functionals and other electronic-structure approaches, such as\ndielectric-dependent hybrid functionals and G$_0$W$_0$, both from a theoretical\nand from a practical point of view, arguing that Koopmans-compliant potentials\ncan be considered as a local and orbital-dependent counterpart to the\nelectronic GW self-energy, albeit already including approximate vertex\ncorrections.", "category": "physics_comp-ph" }, { "text": "Entropy and weak solutions in the thermal model for the compressible\n Euler equations: Among the existing models for compressible fluids, the one by Kataoka and\nTsutahara (KT model, Phys. Rev. E 69, 056702, 2004) has a simple and rigorous\ntheoretical background. The drawback of this KT model is that it can cause\nnumerical instability if the local Mach number exceeds 1. The precise mechanism\nof this instability has not yet been clarified. In this paper, we derive\nentropy functions whose local equilibria are suitable to recover the Euler-like\nequations in the framework of the lattice Boltzmann method for the KT model.\nNumerical examples are also given, which are consistent with the above\ntheoretical arguments, and show that the entropy condition is not fully\nguaranteed in the KT model. The negative entropy may be the inherent cause for the\nnon-physical oscillations in the vicinity of the shock. 
In contrast to\nKarlin's microscopic entropy approach, the corresponding subsidiary entropy\ncondition in the LBM calculation could also be deduced explicitly from the\nmacroscopic version, which provides some insights on the numerical instability\nof the lattice Boltzmann model for shock calculation.", "category": "physics_comp-ph" }, { "text": "Implicit temporal discretization and exact energy conservation for\n particle methods applied to the Poisson-Boltzmann equation: We report on a new multiscale method approach for the study of systems with wide\nseparation of short-range forces acting on short time scales and long-range\nforces acting on much slower scales. We consider the case of the\nPoisson-Boltzmann equation that describes the long-range forces using the\nBoltzmann formula (i.e. we assume the medium to be in quasi-local thermal\nequilibrium). We developed a new approach where fields and particle information\n(mediated by the equations for their moments) are solved self-consistently. The\nnew approach is implicit and numerically stable, providing exact energy\nconservation. We tested different implementations, all leading to exact energy\nconservation. The new method requires the solution of a large set of non-linear\nequations. We considered three solution strategies: Jacobian Free Newton\nKrylov; an alternative, called field hiding, which is based on hiding part of the\nresidual calculation and replacing it with direct solutions; and a Direct\nNewton Schwarz solver that considers a simplified single-particle-based Jacobian.\nThe field hiding strategy proves to be the most efficient approach.", "category": "physics_comp-ph" }, { "text": "Physical-density integral equation methods for scattering from\n multi-dielectric cylinders: An integral equation-based numerical method for scattering from\nmulti-dielectric cylinders is presented. Electromagnetic fields are represented\nvia layer potentials in terms of surface densities with physical\ninterpretations. 
The existence of null-field representations then adds superior\nflexibility to the modeling. Local representations are used for fast field\nevaluation at points away from their sources. Partially global representations,\nconstructed so as to reduce the strength of kernel singularities, are used for\nnear-evaluations. A mix of local and partially global representations is also\nused to derive the system of integral equations from which the physical\ndensities are solved. Unique solvability is proven for the special case of\nscattering from a homogeneous cylinder under rather general conditions. High\nachievable accuracy is demonstrated for several examples found in the\nliterature.", "category": "physics_comp-ph" }, { "text": "Local Solution Method for Numerical Solving of the Wave Propagation\n Problem: A new method for the numerical solution of boundary problems for ordinary\ndifferential equations with slowly varying coefficients is proposed, aimed at\nbetter representation of solutions in the regions of their rapid oscillation\nor exponential increase (decrease). It is based on\napproximation of the sought solution in the form of a superposition of certain\npolynomial-exponential basis functions. The method is studied for the\nHelmholtz equation in comparison with the standard finite difference method.\nThe numerical tests have shown the convergence of the proposed method. In\ncomparison with the finite difference method, the same accuracy is obtained on a\nsubstantially coarser mesh. This advantage becomes more pronounced if the\nsolution varies very rapidly.", "category": "physics_comp-ph" }, { "text": "Large Scale Distributed Linear Algebra With Tensor Processing Units: We have repurposed Google Tensor Processing Units (TPUs),\napplication-specific chips developed for machine learning, into large-scale\ndense linear algebra supercomputers. 
The TPUs' fast inter-core interconnects\n(ICIs), physically two-dimensional network topology, and high-bandwidth memory\n(HBM) permit distributed matrix multiplication algorithms to rapidly become\ncomputationally bound. In this regime, the matrix-multiply units (MXUs)\ndominate the runtime, yielding impressive scaling, performance, and raw size:\noperating in float32 precision, a full 2048-core pod of third generation TPUs\ncan multiply two matrices with linear size $N = 2^{20} = 1\,048\,576$ in about 2\nminutes. Via curated algorithms emphasizing large, single-core matrix\nmultiplications, other tasks in dense linear algebra can similarly scale. As\nexamples, we present (i) QR decomposition; (ii) resolution of linear systems;\nand (iii) the computation of matrix functions by polynomial iteration,\ndemonstrated by the matrix polar factorization.", "category": "physics_comp-ph" }, { "text": "A Finite Element Method for Electrowetting on Dielectric: We consider the problem of electrowetting on dielectric (EWoD). The system\ninvolves the dynamics of a conducting droplet, which is immersed in another\ndielectric fluid, on a dielectric substrate under an applied voltage. The fluid\ndynamics is modeled by the two-phase incompressible Navier-Stokes equations\nwith the standard interface conditions, the Navier slip condition on the\nsubstrate, and a contact angle condition which relates the dynamic contact\nangle and the contact line velocity, as well as the kinematic condition for the\nevolution of the interface. The electric force acting on the fluid interface is\nmodeled by Maxwell's equations in the domain occupied by the dielectric fluid\nand the dielectric substrate. We develop a numerical method for the model based\non its weak form. 
This method combines the finite element method for the\nNavier-Stokes equations on a fixed bulk mesh with a parametric finite element\nmethod for the dynamics of the fluid interface, and the boundary integral\nmethod for the electric force along the fluid interface. Numerical examples are\npresented to demonstrate the accuracy and convergence of the numerical method,\nthe effect of various physical parameters on the interface profile, and other\ninteresting phenomena such as the transport of a droplet driven by an\napplied non-uniform electric potential difference.", "category": "physics_comp-ph" }, { "text": "Well-balanced nodal discontinuous Galerkin method for Euler equations\n with gravity: We present a well-balanced nodal discontinuous Galerkin (DG) scheme for\ncompressible Euler equations with gravity. The DG scheme makes use of\ndiscontinuous Lagrange basis functions supported at Gauss-Lobatto-Legendre\n(GLL) nodes together with GLL quadrature using the same nodes. The\nwell-balanced property is achieved by a specific form of source term\ndiscretization that depends on the nature of the hydrostatic solution, together\nwith the GLL nodes for quadrature of the source term. The scheme is able to\npreserve isothermal and polytropic stationary solutions up to machine precision\non any mesh composed of quadrilateral cells and for any gravitational\npotential. It is applied on several examples to demonstrate its well-balanced\nproperty and the improved resolution of small perturbations around the\nstationary solution.", "category": "physics_comp-ph" }, { "text": "Simulating Plasmon Resonances of Gold Nanoparticles with Bipyramidal\n Shapes by Boundary Element Methods: Computational modeling and accurate simulations of localized surface plasmon\nresonance (LSPR) absorption properties are reported for gold nanobipyramids\n(GNBs), a class of metal nanoparticle that features highly tunable,\ngeometry-dependent optical properties. 
GNB bicone models with spherical tips\nperformed best in reproducing experimental LSPR spectra while the comparison\nwith other geometrical models provided a fundamental understanding of base\nshapes and tip effects on the optical properties of GNBs. Our results\ndemonstrated the importance of averaging all geometrical parameters determined\nfrom transmission electron microscopy images to build representative models of\nGNBs. By assessing the performances of LSPR absorption spectra simulations\nbased on a quasi-static approximation, we provided an applicability range of\nthis approach as a function of the nanoparticle size, paving the way to the\ntheoretical study of the coupling between molecular electron densities and\nmetal nanoparticles in GNB-based nanohybrid systems, with potential\napplications in the design of nanomaterials for bioimaging, optics and\nphotocatalysis.", "category": "physics_comp-ph" }, { "text": "Combined effects of fluid type and particle shape on particles flow in\n microfluidic platforms: Recent numerical analyses to optimize the design of microfluidic devices for\nmore effective entrapment or segregation of surrogate circulating tumor cells\n(CTCs) from healthy cells have been reported in the literature without\nconcurrently accommodating the non-Newtonian nature of the body fluid and the\nnon-uniform geometric shapes of the CTCs. Through a series of two-dimensional\nproof-of-concept simulations with increased levels of complexity (e.g., number\nof particles, inline obstacles), we investigated the validity of the\nassumptions of the Newtonian fluid behavior for pseudoplastic fluids and the\ncircular particle shape for different-shaped particles (DSPs) in the context of\nmicrofluidics-facilitated shape-based segregation of particles. 
Simulations\nwith a single DSP revealed that even in the absence of internal geometric\ncomplexities of a microfluidics channel, the aforementioned assumptions led to\n0.11-0.21W (W is the channel length) errors in lateral displacements of DSPs,\nup to 3-20% errors in their velocities, and 3-5% errors in their travel times.\nWhen these assumptions were applied in simulations involving multiple DSPs in\ninertial microfluidics with inline obstacles, errors in the lateral\ndisplacements of DSPs were as high as 0.78W and in their travel times up to\n23%, which led to different (un)symmetric flow and segregation patterns of\nDSPs. Thus, the fluid type and particle shape should be included in numerical\nmodels and experiments to assess the performance of microfluidics for targeted\ncell (e.g., CTCs) harvesting.", "category": "physics_comp-ph" }, { "text": "Lattice Boltzmann Method for Fluid-Structure Interaction with\n incompressible NeoHookean materials in small perturbations: This paper deals with the numerical modelling of the interaction between a\nfluid and an incompressible solid (Neo Hookean) in small perturbations with the\nlattice Boltzmann method (LBM). In order to use a monolithic formulation and to\nsolve the whole problem with the lattice Boltzmann method, an Eulerian approach\nis employed for the solid medium. The initial problem is thus transformed into\na diphasic problem and a unique LBM solver is used for both phases (fluid and\nsolid). With this approach, the force at the fluid-solid interface does not\nneed to be explicitly computed. It is intrinsic to the method. 
This new\napproach is validated with three academic cases: the deformation of a solid at\nthe bottom of a lid-driven cavity, with steady and unsteady boundary conditions\nat the top wall of the cavity, and the deformation and motion of a disk in a\nlid-driven cavity.", "category": "physics_comp-ph" }, { "text": "Quantics Tensor Cross Interpolation for High-Resolution, Parsimonious\n Representations of Multivariate Functions in Physics and Beyond: Multivariate functions of continuous variables arise in countless branches of\nscience. Numerical computations with such functions typically involve a\ncompromise between two contrary desiderata: accurate resolution of the\nfunctional dependence, versus parsimonious memory usage. Recently, two\npromising strategies have emerged for satisfying both requirements: (i) The\nquantics representation, which expresses functions as multi-index tensors, with\neach index representing one bit of a binary encoding of one of the variables;\nand (ii) tensor cross interpolation (TCI), which, if applicable, yields\nparsimonious interpolations for multi-index tensors. Here, we present a\nstrategy, quantics TCI (QTCI), which combines the advantages of both schemes.\nWe illustrate its potential with an application from condensed matter physics:\nthe computation of Brillouin zone integrals.", "category": "physics_comp-ph" }, { "text": "Quantifying parameter uncertainties in optical scatterometry using\n Bayesian inversion: We present a Newton-like method to solve inverse problems and to quantify\nparameter uncertainties. We apply the method to parameter reconstruction in\noptical scatterometry, where we take into account a priori information and\nmeasurement uncertainties using a Bayesian approach. 
Further, we discuss the\ninfluence of numerical accuracy on the reconstruction result.", "category": "physics_comp-ph" }, { "text": "Application of a Spectral Method to Simulate Quasi-Three-Dimensional\n Underwater Acoustic Fields: The calculation of a three-dimensional underwater acoustic field has always\nbeen a key problem in computational ocean acoustics. Traditionally, this\nsolution is usually obtained by directly solving the acoustic Helmholtz\nequation using a finite difference or finite element algorithm. Solving the\nthree-dimensional Helmholtz equation directly is computationally expensive. For\nquasi-three-dimensional problems, the Helmholtz equation can be processed by\nthe integral transformation approach, which can greatly reduce the\ncomputational cost. In this paper, a numerical algorithm for a\nquasi-three-dimensional sound field that combines an integral transformation\ntechnique, stepwise coupled modes and a spectral method is designed. The\nquasi-three-dimensional problem is transformed into a two-dimensional problem\nusing an integral transformation strategy. A stepwise approximation is then\nused to discretize the range dependence of the two-dimensional problem; this\napproximation is essentially a physical discretization that further reduces the\nrange-dependent two-dimensional problem to a one-dimensional problem. Finally,\nthe Chebyshev--Tau spectral method is employed to accurately solve the\none-dimensional problem. We provide the corresponding numerical program SPEC3D\nfor the proposed algorithm and describe several representative numerical\nexamples. In the numerical experiments, the consistency between SPEC3D and the\nanalytical solution/high-precision finite difference program COACH verifies the\nreliability and capability of the proposed algorithm. 
A comparison of running\ntimes illustrates that the algorithm proposed in this paper is significantly\nfaster than the full three-dimensional algorithm.", "category": "physics_comp-ph" }, { "text": "Multiple scattering simulation via physics-informed neural networks: This work presents a physics-driven machine learning framework for the\nsimulation of acoustic scattering problems. The proposed framework relies on a\nphysics-informed neural network (PINN) architecture that leverages prior\nknowledge based on the physics of the scattering problem as well as a tailored\nnetwork structure that embodies the concept of the superposition principle of\nlinear wave interaction. The framework can also simulate the scattered field\ndue to rigid scatterers having arbitrary shape as well as high-frequency\nproblems. Unlike conventional data-driven neural networks, the PINN is trained\nby directly enforcing the governing equations describing the underlying\nphysics, hence without relying on any labeled training dataset. Remarkably, the\nnetwork model has significantly lower discretization dependence and offers\nsimulation capabilities akin to parallel computation. This feature is\nparticularly beneficial to address computational challenges typically\nassociated with conventional mesh-dependent simulation methods. The performance\nof the network is investigated via a comprehensive numerical study that\nexplores different application scenarios based on acoustic scattering.", "category": "physics_comp-ph" }, { "text": "Stochastic and Mixed Density Functional Theory within the projector\n augmented wave formalism for the simulation of warm dense matter: Stochastic and mixed stochastic-deterministic density functional theory (DFT)\nare promising new approaches for the calculation of the equation-of-state and\ntransport properties in materials under extreme conditions. 
In the intermediate\nwarm dense matter regime, a state between correlated condensed matter and\nkinetic plasma, electrons can range from being highly localized around nuclei\nto delocalized over the whole simulation cell. The plane-wave basis\npseudo-potential approach is thus the typical tool of choice for modeling such\nsystems at the DFT level. Unfortunately, the stochastic DFT methods scale as\nthe square of the maximum plane-wave energy in this basis. To reduce the effect\nof this scaling, and improve the overall description of the electrons within\nthe pseudo-potential approximation, we present stochastic and mixed DFT\ndeveloped and implemented within the projector augmented wave formalism. We\ncompare results between the different DFT approaches for both single-point and\nmolecular dynamics trajectories and present calculations of self-diffusion\ncoefficients of solid density carbon from 1 to 50 eV.", "category": "physics_comp-ph" }, { "text": "On the nonreflecting boundary operators for the general two dimensional\n Schr\u00f6dinger equation: Of the two main objectives we pursue in this paper, the first one consists in\nstudying the operators of the form\n$(\partial_t-i\triangle_{\Gamma})^{\alpha},\,\,\alpha=1/2,-1/2,-1,\ldots,$\nwhere $\triangle_{\Gamma}$ is the Laplace-Beltrami operator. These operators\narise in the context of nonreflecting boundary conditions in the\npseudo-differential approach for the general Schr\\\"odinger equation. The\ndefinition of such operators is discussed in various settings and a formulation\nin terms of fractional operators is provided. The second objective consists in\nderiving corner conditions for a rectangular domain in order to make such\ndomains amenable to the pseudo-differential approach. 
Stability and uniqueness\nof the solution are investigated for each of these novel boundary conditions.", "category": "physics_comp-ph" }, { "text": "Approximations of the modified Bessel functions of the second kind\n $K_\u03bd$. Applications in random field generation: We propose an analytical approximation for the modified Bessel function of\nthe second kind $K_\nu$. The approximation is derived from an exponential\nansatz imposing global constraints. It yields local and global errors of less\nthan one percent and a speed-up in the computing time of $3$ orders of\nmagnitude in comparison with traditional approaches. We demonstrate the\nvalidity of our approximation for the task of generating long-range correlated\nrandom fields.", "category": "physics_comp-ph" }, { "text": "Metadynamics of paths: We present a method to sample reactive pathways via biased molecular dynamics\nsimulations in trajectory space. We show that the use of enhanced sampling\ntechniques enables unconstrained exploration of multiple reaction routes. Time\ncorrelation functions are conveniently computed via reweighted averages along a\nsingle trajectory and kinetic rates are accessed at no additional cost. These\nabilities are illustrated analyzing a model potential and the umbrella\ninversion of NH$_3$ in water. The algorithm allows a parallel implementation\nand promises to be a powerful tool for the study of rare events.", "category": "physics_comp-ph" }, { "text": "The role of electromagnetic trapped modes in extraordinary transmission\n in nanostructured materials: We assert that the physics underlying the extraordinary light transmission\n(reflection) in nanostructured materials can be understood from rather general\nprinciples based on the formal scattering theory developed in quantum\nmechanics. 
The Maxwell equations in passive (dispersive and absorptive) linear\nmedia are written in the form of the Schr\\\"{o}dinger equation to which the\nquantum mechanical resonant scattering theory (the Lippmann-Schwinger\nformalism) is applied. It is demonstrated that the existence of long-lived\nquasistationary eigenstates of the effective Hamiltonian for the Maxwell theory\nnaturally explains the extraordinary transmission properties observed in\nvarious nanostructured materials. Such states correspond to quasistationary\nelectromagnetic modes trapped in the scattering structure. Our general approach\nis also illustrated with an example of the zero-order transmission of the\nTE-polarized light through a metal-dielectric grating structure. Here a direct\non-the-grid solution of the time-dependent Maxwell equations demonstrates the\nsignificance of resonances (or trapped modes) for extraordinary light\ntransmission.", "category": "physics_comp-ph" }, { "text": "Concurrent Cuba: The parallel version of the multidimensional numerical integration package\nCuba is presented and achievable speed-ups discussed.", "category": "physics_comp-ph" }, { "text": "Machine Learning-based models in particle-in-cell codes for advanced\n physics extensions: In this paper we propose a methodology for the efficient implementation of\nMachine Learning (ML)-based methods in particle-in-cell (PIC) codes, with a\nfocus on Monte-Carlo or statistical extensions to the PIC algorithm. The\npresented approach allows for neural networks to be developed in a Python\nenvironment, where advanced ML tools are readily available to proficiently\ntrain and test them. Those models are then efficiently deployed within\nhighly-scalable and fully parallelized PIC simulations during runtime. We\ndemonstrate this methodology with a proof-of-concept implementation within the\nPIC code OSIRIS, where a fully-connected neural network is used to replace a\nsection of a Compton scattering module. 
We demonstrate that the ML-based method\nreproduces the results obtained with the conventional method and achieves\nbetter computational performance. These results offer a promising avenue for\nfuture applications of ML-based methods in PIC, particularly for physics\nextensions where an ML-based approach can provide a greater performance\ngain.", "category": "physics_comp-ph" }, { "text": "Precise Kohn-Sham total-energy calculations at reduced cost: The standard way to calculate the Kohn-Sham orbitals utilizes an\napproximation of the potential. The approximation consists in a projection of\nthe potential into a finite subspace of basis functions. The orbitals,\ncalculated with the projected potential, are used to evaluate the kinetic part\nof the total energy, but the true potential is used to evaluate the interaction\nenergy with the electron density. Consequently, the Kohn-Sham total-energy\nexpression loses its stationary behaviour as a functional of the potential. It\nwill be discussed that this stationarity is important for the calculation of\nprecise total energies at low computational cost and an approach will be\npresented that practically restores stationarity by perturbation theory. The\nadvantage of this approach will be illustrated with total-energy results for\nthe example of a disordered CrFeCoNi high entropy alloy.", "category": "physics_comp-ph" }, { "text": "Embedding Hard Physical Constraints in Neural Network Coarse-Graining of\n 3D Turbulence: In recent years, deep learning approaches have shown much promise in\nmodeling complex systems in the physical sciences. A major challenge in deep\nlearning of PDEs is enforcing physical constraints and boundary conditions. In\nthis work, we propose a general framework to directly embed the notion of an\nincompressible fluid into Convolutional Neural Networks, and apply this to\ncoarse-graining of turbulent flow. 
These physics-embedded neural networks\nleverage interpretable strategies from numerical methods and computational\nfluid dynamics to enforce physical laws and boundary conditions by taking\nadvantage of the mathematical properties of the underlying equations. We\ndemonstrate results on three-dimensional fully-developed turbulence, showing\nthat this technique drastically improves local conservation of mass, without\nsacrificing performance according to several other metrics characterizing the\nfluid flow.", "category": "physics_comp-ph" }, { "text": "Quantum Particle Swarm Optimization for Electromagnetics: A new particle swarm optimization (PSO) technique for electromagnetic\napplications is proposed. The method is based on quantum mechanics rather than\nthe Newtonian rules assumed in all previous versions of PSO, which we refer to\nas classical PSO. A general procedure is suggested to derive many different\nversions of the quantum PSO algorithm (QPSO). The QPSO is applied first to\nlinear array antenna synthesis, which is one of the standard problems used by\nantenna engineers. The performance of the QPSO is compared against an improved\nversion of the classical PSO. The new algorithm outperforms the classical one\nmost of the time in convergence speed and achieves better levels for the cost\nfunction. As another application, the algorithm is used to find a set of\ninfinitesimal dipoles that produces the same near and far fields as a circular\ndielectric resonator antenna (DRA). In addition, the QPSO method is employed to\nfind an equivalent circuit model for the DRA that can be used to predict some\ninteresting parameters like the Q-factor. The QPSO contains only one control\nparameter that can be tuned easily by trial and error or by a suggested simple\nlinear variation. 
Based on our understanding of the physical background of the\nmethod, various explanations of the theoretical aspects of the algorithm are\npresented.", "category": "physics_comp-ph" }, { "text": "Multiple scale kinetic simulations with the energy conserving semi\n implicit particle in cell (PIC) method: The recently developed energy conserving semi-implicit method (ECsim) for PIC\nsimulation is applied to multiple scale problems where the electron-scale\nphysics needs to be only partially retained and the interest is on the\nmacroscopic or ion-scale processes. Unlike hybrid methods, the ECsim is capable\nof providing kinetic electron information, such as wave-electron interaction\n(Landau damping or cyclotron resonance) and non-Maxwellian electron velocity\ndistributions. However, like hybrid, the ECsim does not need to resolve all\nelectron scales, allowing time steps and grid spacing orders of magnitude\nlarger than in explicit PIC schemes. The additional advantage of the ECsim is\nthat the stability at large scale is obtained while conserving energy exactly.\nThree examples are presented: ion acoustic waves, electron acoustic instability\nand reconnection processes.", "category": "physics_comp-ph" }, { "text": "A block triple-relaxation-time lattice Boltzmann model for nonlinear\n anisotropic convection-diffusion equations: A block triple-relaxation-time (B-TriRT) lattice Boltzmann model for general\nnonlinear anisotropic convection-diffusion equations (NACDEs) is proposed, and\nthe Chapman-Enskog analysis shows that the present B-TriRT model can recover\nthe NACDEs correctly. 
There are some striking features of the present B-TriRT\nmodel: firstly, the relaxation matrix of the B-TriRT model is partitioned into\nthree relaxation parameter blocks, rather than being a diagonal matrix as in the general\nmultiple-relaxation-time (MRT) model; secondly, based on the analysis of the\nhalf-way bounce-back (HBB) scheme for Dirichlet boundary conditions, we obtain\nan expression to determine the relaxation parameters; thirdly, the anisotropic\ndiffusion tensor can be recovered by the relaxation parameter block that\ncorresponds to the first-order moment of the non-equilibrium distribution function.\nA number of simulations of isotropic and anisotropic convection-diffusion\nequations are conducted to validate the present B-TriRT model. The results\nindicate that the present model has second-order accuracy in space, and is\nalso more accurate and more stable than some available lattice Boltzmann\nmodels.", "category": "physics_comp-ph" }, { "text": "High order solution of Poisson problems with piecewise constant\n coefficients and interface jumps: We present a fast and accurate algorithm to solve Poisson problems in complex\ngeometries, using regular Cartesian grids. We consider a variety of\nconfigurations, including Poisson problems with interfaces across which the\nsolution is discontinuous (of the type arising in multi-fluid flows). The\nalgorithm is based on a combination of the Correction Function Method (CFM) and\nBoundary Integral Methods (BIM). Interface and boundary conditions can be\ntreated in a fast and accurate manner using boundary integral equations, and\nthe associated BIM. Unfortunately, BIM can be costly when the solution is\nneeded everywhere in a grid, e.g. fluid flow problems. We use the CFM to\ncircumvent this issue. The solution from the BIM is used to rewrite the problem\nas a series of Poisson problems in rectangular domains - which requires the BIM\nsolution at interfaces/boundaries only. 
These Poisson problems involve\ndiscontinuities at interfaces, of the type that the CFM can handle. Hence we\nuse the CFM to solve them (to high order of accuracy) with finite differences\nand a Fast Fourier Transform based fast Poisson solver. We present 2-D examples\nof the algorithm applied to Poisson problems involving complex geometries,\nincluding cases in which the solution is discontinuous. We show that the\nalgorithm produces solutions that converge with either 3rd or 4th order of\naccuracy, depending on the type of boundary condition and solution\ndiscontinuity.", "category": "physics_comp-ph" }, { "text": "Active Training of Physics-Informed Neural Networks to Aggregate and\n Interpolate Parametric Solutions to the Navier-Stokes Equations: The goal of this work is to train a neural network which approximates\nsolutions to the Navier-Stokes equations across a region of parameter space, in\nwhich the parameters define physical properties such as domain shape and\nboundary conditions. 
The contributions of this work are threefold:\n 1) To demonstrate that neural networks can be efficient aggregators of whole\nfamilies of parametric solutions to physical problems, trained using data\ncreated with traditional, trusted numerical methods such as finite elements.\nAdvantages include extremely fast evaluation of pressure and velocity at any\npoint in physical and parameter space (asymptotically, ~3 $\mu s$ / query), and\ndata compression (the network requires 99\% less storage space compared to its\nown training data).\n 2) To demonstrate that the neural networks can accurately interpolate between\nfinite element solutions in parameter space, allowing them to be instantly\nqueried for pressure and velocity field solutions to problems for which\ntraditional simulations have never been performed.\n 3) To introduce an active learning algorithm, so that during training, a\nfinite element solver can automatically be queried to obtain additional\ntraining data in locations where the neural network's predictions are in most\nneed of improvement, thus autonomously acquiring and efficiently distributing\ntraining data throughout parameter space.\n In addition to the obvious utility of Item 2, above, we demonstrate an\napplication of the network in rapid parameter sweeping, very precisely\npredicting the degree of narrowing in a tube which would result in a 50\%\nincrease in end-to-end pressure difference at a given flow rate. This\ncapability could have applications in both medical diagnosis of arterial\ndisease, and in computer-aided design.", "category": "physics_comp-ph" }, { "text": "Strain tunable pudding-mold-type band structure and thermoelectric\n properties of SnP$_3$ monolayer: Recent studies indicated an interesting metal-to-semiconductor transition\nwhen layered bulk GeP3 and SnP3 are restricted to the monolayer or bilayer, and\nSnP3 monolayer has been predicted to possess high carrier mobility and\npromising thermoelectric performance.
Here, we investigate the biaxial strain\neffect on the electronic and thermoelectric properties of SnP3 monolayer. Our\nfirst-principles calculations combined with Boltzmann transport theory indicate\nthat SnP3 monolayer has the pudding-mold-type valence band structure, giving\nrise to a large p-type Seebeck coefficient and a high p-type power factor. The\ncompressive biaxial strain can decrease the energy gap and result in\nmetallicity. In contrast, the tensile biaxial strain increases the energy gap,\nand increases the n-type Seebeck coefficient and decreases the n-type\nelectrical conductivity. Although the lattice thermal conductivity becomes\nlarger at a tensile biaxial strain due to the increased maximum frequency of\nthe acoustic phonon modes and the increased phonon group velocity, it is still\nlow, e.g. only 3.1 W/(mK) at room temperature with the 6% tensile biaxial\nstrain. Therefore, SnP3 monolayer is a good thermoelectric material with low\nlattice thermal conductivity even at the 6% tensile strain, and the tensile\nstrain is beneficial to the increase of the n-type Seebeck coefficient.", "category": "physics_comp-ph" }, { "text": "Surrogate Models for Rainfall Nowcasting: Nowcasting (or short-term weather forecasting) is particularly important in\nthe case of extreme events as it helps prevent human losses. Many of our\nactivities, however, also depend on the weather. Therefore, nowcasting has\nproven to be useful in many different domains. Currently, immediate rainfall\nforecasts in France are calculated using the Arome-NWC model developed by\nM\'et\'eo-France, which is a complex physical model. Arome-NWC forecasts are\nstored with a 15 minute time interval. A higher time resolution is, however,\ndesirable for other meteorological applications. Complex model calculations,\nsuch as Arome-NWC, can be very expensive and time consuming.
A surrogate model\naims at producing results which are very close to the ones obtained using a\ncomplex model, but with largely reduced calculation times. Building a surrogate\nmodel requires only a few calculations with the real model. Once the surrogate\nmodel is built, further calculations can be quickly realized. In this study, we\npropose to build surrogate models for immediate rainfall forecasts with two\ndifferent approaches: combining Proper Orthogonal Decomposition (POD) and\nKriging, or combining POD and Random Forest (RF). We show that results obtained\nwith our surrogate models are not only close to the ones obtained by Arome-NWC,\nbut they also have a higher time resolution (1 minute) with a reduced\ncalculation time.", "category": "physics_comp-ph" }, { "text": "LCAONet: Message-passing with physically optimized atomic basis\n functions: A Model capable of handling various elemental species and substances is\nessential for discovering new materials in the vast phase and compound space.\nMessage-passing neural networks (MPNNs) are promising as such models, in which\nvarious vector operations model the atomic interaction with its neighbors.\nHowever, conventional MPNNs tend to overlook the importance of physicochemical\ninformation for each node atom, relying solely on the geometric features of the\nmaterial graph. We propose the new three-body MPNN architecture with a\nmessage-passing layer that utilizes optimized basis functions based on the\nelectronic structure of the node elemental species. This enables conveying the\nmessage that includes physical information and better represents the\ninteraction for each elemental species. Inspired by the LCAO (linear\ncombination of atomic orbitals) method, a classical method for calculating the\norbital interactions, the linear combination of atomic basis constructed based\non the wave function of hydrogen-like atoms is used for the present\nmessage-passing. 
Our model achieved higher prediction accuracy with fewer\nparameters than the state-of-the-art models on the challenging crystalline\ndataset containing many elemental species, including heavy elements. Ablation\nstudies have also shown that the proposed method is effective in improving\naccuracy. Our implementation is available online.", "category": "physics_comp-ph" }, { "text": "On the use of mixed potential formulation for finite-element analysis of\n large-scale magnetization problems with large memory demand: The finite-element analysis of three-dimensional magnetostatic problems in\nterms of magnetic vector potential has proven to be one of the most efficient\ntools capable of providing excellent-quality results, but it becomes\ncomputationally expensive when employed for modeling of large-scale\nmagnetization problems in the presence of applied currents and nonlinear\nmaterials, due to the substantial number of model degrees of freedom. In order\nto achieve a similar quality of calculation at lower computational cost, we\npropose to use for modeling such problems the combination of magnetic vector\nand total scalar potentials as an alternative to the magnetic vector potential\nformulation. The potentials are applied to conducting and nonconducting parts\nof the problem domain, respectively, and coupled together across their common\ninterfacing boundary. For nonconducting regions, thin cuts are constructed\nto ensure their simple connectedness and therefore the consistency of the mixed\nformulation. The implementation in the finite-element method of both\nformulations is discussed in detail, with differences between the two emphasized.\nThe numerical performance of finite-element modeling in terms of combined\npotentials is assessed against the magnetic vector potential formulation for\ntwo magnetization models: the Helmholtz coil and the dipole magnet.
We show\nthat the mixed formulation can provide a substantial reduction in the computational\ncost as compared to its vector counterpart at a similar accuracy of both\nmethods.", "category": "physics_comp-ph" }, { "text": "Accelerating the convergence of path integral dynamics with a\n generalized Langevin equation: The quantum nature of nuclei plays an important role in the accurate\nmodelling of light atoms such as hydrogen, but it is often neglected in\nsimulations due to the high computational overhead involved. It has recently\nbeen shown that zero-point energy effects can be included comparatively cheaply\nin simulations of harmonic and quasi-harmonic systems by augmenting classical\nmolecular dynamics with a generalized Langevin equation (GLE). Here we describe\nhow a similar approach can be used to accelerate the convergence of path\nintegral (PI) molecular dynamics to the exact quantum mechanical result in more\nstrongly anharmonic systems exhibiting both zero point energy and tunnelling\neffects. The resulting PI-GLE method is illustrated with applications to a\ndouble-well tunnelling problem and to liquid water.", "category": "physics_comp-ph" }, { "text": "Fourier-based numerical approximation of the Weertman equation for\n moving dislocations: This work discusses the numerical approximation of a nonlinear\nreaction-advection-diffusion equation, which is a dimensionless form of the\nWeertman equation. This equation models steadily-moving dislocations in\nmaterials science. It reduces to the celebrated Peierls-Nabarro equation when\nits advection term is set to zero. The approach rests on considering a\ntime-dependent formulation, which admits the equation under study as its\nlong-time limit. Introducing a Preconditioned Collocation Scheme based on\nFourier transforms, the iterative numerical method presented solves the\ntime-dependent problem, delivering at convergence the desired numerical\nsolution to the Weertman equation.
Although it rests on an explicit\ntime-evolution scheme, the method allows for large time steps, and captures the\nsolution in a robust manner. Numerical results illustrate the efficiency of the\napproach for several types of nonlinearities.", "category": "physics_comp-ph" }, { "text": "Efficient numerical integration of neutrino oscillations in matter: A special purpose solver, based on the Magnus expansion, well suited for the\nintegration of the linear three neutrino oscillations equations in matter is\nproposed. The computations are speeded up to two orders of magnitude with\nrespect to a general numerical integrator, a fact that could smooth the way for\nmassive numerical integration concomitant with experimental data analyses.\nDetailed illustrations about numerical procedure and computer time costs are\nprovided.", "category": "physics_comp-ph" }, { "text": "Optimal estimates of diffusion coefficients from molecular dynamics\n simulations: Translational diffusion coefficients are routinely estimated from molecular\ndynamics simulations. Linear fits to mean squared displacement (MSD) curves\nhave become the de facto standard, from simple liquids to complex\nbiomacromolecules. Nonlinearities in MSD curves at short times are handled with\na wide variety of ad hoc practices, such as partial and piece-wise fitting of\nthe data. Here, we present a rigorous framework to obtain reliable estimates of\nthe diffusion coefficient and its statistical uncertainty. We also assess in a\nquantitative manner if the observed dynamics is indeed diffusive. By accounting\nfor correlations between MSD values at different times, we reduce the\nstatistical uncertainty of the estimator and thereby increase its efficiency.\nWith a Kolmogorov-Smirnov test, we check for possible anomalous diffusion. We\nprovide an easy-to-use Python data analysis script for the estimation of\ndiffusion coefficients. 
As an illustration, we apply the formalism to molecular\ndynamics simulation data of pure TIP4P-D water and a single ubiquitin protein.\nIn a companion paper [J. Chem. Phys. XXX, YYYYY (2020)], we demonstrate its\nability to recognize deviations from regular diffusion caused by systematic\nerrors in a common trajectory \"unwrapping\" scheme that is implemented in\npopular simulation and visualization software.", "category": "physics_comp-ph" }, { "text": "Pseudospectral time-domain (PSTD) methods for the wave equation:\n Realising boundary conditions with discrete sine and cosine transforms: Pseudospectral time domain (PSTD) methods are widely used in many branches of\nacoustics for the numerical solution of the wave equation, including biomedical\nultrasound and seismology. The use of the Fourier collocation spectral method\nin particular has many computational advantages. However, the use of a discrete\nFourier basis is also inherently restricted to solving problems with periodic\nboundary conditions. Here, a family of spectral collocation methods based on\nthe use of a sine or cosine basis is described. These retain the computational\nadvantages of the Fourier collocation method but instead allow homogeneous\nDirichlet (sound-soft) and Neumann (sound-hard) boundary conditions to be\nimposed. The basis function weights are computed numerically using the discrete\nsine and cosine transforms, which can be implemented using O(N log N)\noperations analogous to the fast Fourier transform. Practical details of how to\nimplement spectral methods using discrete sine and cosine transforms are\nprovided. The technique is then illustrated through the solution of the wave\nequation in a rectangular domain subject to different combinations of boundary\nconditions. 
The extension to boundaries with arbitrary real reflection\ncoefficients or boundaries that are non-reflecting is also demonstrated using\nthe weighted summation of the solutions with Dirichlet and Neumann boundary\nconditions.", "category": "physics_comp-ph" }, { "text": "The $\\textit{u}$-series: A separable decomposition for electrostatics\n computation with improved accuracy: The evaluation of electrostatic energy for a set of point charges in a\nperiodic lattice is a computationally expensive part of molecular dynamics\nsimulations (and other applications) because of the long-range nature of the\nCoulomb interaction. A standard approach is to decompose the Coulomb potential\ninto a near part, typically evaluated by direct summation up to a cutoff\nradius, and a far part, typically evaluated in Fourier space. In practice, all\ndecomposition approaches involve approximations---such as cutting off the\nnear-part direct sum---but it may be possible to find new decompositions with\nimproved tradeoffs between accuracy and performance. Here we present the\n$\\textit{u-series}$, a new decomposition of the Coulomb potential that is more\naccurate than the standard (Ewald) decomposition for a given amount of\ncomputational effort, and achieves the same accuracy as the Ewald decomposition\nwith approximately half the computational effort. These improvements, which we\ndemonstrate numerically using a lipid membrane system, arise because the\n$\\textit{u}$-series is smooth on the entire real axis and exact up to the\ncutoff radius. 
Additional performance improvements over the Ewald decomposition\nmay be possible in certain situations because the far part of the\n$\\textit{u}$-series is a sum of Gaussians, and can thus be evaluated using\nalgorithms that require a separable convolution kernel; we describe one such\nalgorithm that reduces communication latency at the expense of communication\nbandwidth and computation, a tradeoff that may be advantageous on modern\nmassively parallel supercomputers.", "category": "physics_comp-ph" }, { "text": "Solving Multi-Dimensional Schr\u00f6dinger Equations Based on EPINNs: Due to the good performance of neural networks in high-dimensional and\nnonlinear problems, machine learning is replacing traditional methods and\nbecoming a better approach for eigenvalue and wave function solutions of\nmulti-dimensional Schr\\\"{o}dinger equations. This paper proposes a numerical\nmethod based on neural networks to solve multiple excited states of\nmulti-dimensional stationary Schr\\\"{o}dinger equation. We introduce the\northogonal normalization condition into the loss function, use the frequency\nprinciple of neural networks to automatically obtain multiple excited state\neigenfunctions and eigenvalues of the equation from low to high energy levels,\nand propose a degenerate level processing method. The use of equation residuals\nand energy uncertainty makes the error of each energy level converge to 0,\nwhich effectively avoids the order of magnitude interference of error\nconvergence, improves the accuracy of wave functions, and improves the accuracy\nof eigenvalues as well. Comparing our results to the previous work, the\naccuracy of the harmonic oscillator problem is at least an order of magnitude\nhigher with fewer training epochs. 
We complete numerical experiments on typical\nanalytically solvable Schr\\\"{o}dinger equations, e.g., harmonic oscillators and\nhydrogen-like atoms, and propose calculation and evaluation methods for each\nphysical quantity, which prove the effectiveness of our method on eigenvalue\nproblems. Our successful solution of the excited states of the hydrogen atom\nproblem provides a potential idea for solving the stationary Schr\\\"{o}dinger\nequation for multi-electron atomic molecules.", "category": "physics_comp-ph" }, { "text": "Boundary Element Solution of Electromagnetic Fields for Non-Perfect\n Conductors at Low Frequencies and Thin Skin Depths: A novel boundary element formulation for solving problems involving eddy\ncurrents in the thin skin depth approximation is developed. It is assumed that\nthe time-harmonic magnetic field outside the scatterers can be described using\nthe quasistatic approximation. A two-term asymptotic expansion with respect to\na small parameter characterizing the skin depth is derived for the magnetic and\nelectric fields outside and inside the scatterer, which can be extended to\nhigher order terms if needed. The introduction of a special surface operator\n(the inverse surface gradient) allows the reduction of the problem complexity.\nA method to compute this operator is developed. The obtained formulation\noperates only with scalar quantities and requires computation of surface\noperators that are usual for boundary element (method of moments) solutions to\nthe Laplace equation. The formulation can be accelerated using the fast\nmultipole method. The method is much faster than solving the vector Maxwell\nequations. The obtained solutions are compared with the Mie solution for\nscattering from a sphere and the error of the solution is studied. 
Computations\nfor much more complex shapes of different topologies, including magnetic\nand electric field cages used in testing, are also performed and discussed.", "category": "physics_comp-ph" }, { "text": "Fast, feature-rich weakly-compressible SPH on GPU: coding strategies and\n compiler choices: GPUSPH was the first implementation of the weakly-compressible Smoothed\nParticle Hydrodynamics method to run entirely on GPU using CUDA. Version 5,\nreleased in June 2018, features a radical restructuring of the code, offering a\nmore structured implementation of several features and specialized optimization\nof most heavy-duty computational kernels. While these improvements have led to\na measurable performance boost (ranging from 15\% to 30\% depending on the test\ncase and hardware configuration), they have also uncovered some of the limitations\nof the official CUDA compiler (\texttt{nvcc}) offered by NVIDIA, especially in\nregard to developer friendliness. This has led to an effort to support\nalternative compilers, particularly Clang, with surprising performance gains.", "category": "physics_comp-ph" }, { "text": "Accelerating Auxiliary-Field Quantum Monte Carlo Simulations of Solids\n with Graphical Processing Unit: We outline how auxiliary-field quantum Monte Carlo (AFQMC) can leverage\ngraphical processing units (GPUs) to accelerate the simulation of solid state\nsystems. By exploiting conservation of crystal momentum in the one- and\ntwo-electron integrals we show how to efficiently formulate the algorithm to\nbest utilize current GPU architectures. We provide a detailed description of\ndifferent optimization strategies and profile our implementation relative to\nstandard approaches, demonstrating a factor of 40 speed up over a CPU\nimplementation.
With this increase in computational power we demonstrate the\nability of AFQMC to systematically converge solid state calculations with\nrespect to basis set and system size by computing the cohesive energy of Carbon\nin the diamond structure to within 0.02 eV of the experimental result.", "category": "physics_comp-ph" }, { "text": "Wavelet Scattering Networks for Atomistic Systems with Extrapolation of\n Material Properties: The dream of machine learning in materials science is for a model to learn\nthe underlying physics of an atomic system, allowing it to move beyond\ninterpolation of the training set to the prediction of properties that were not\npresent in the original training data. In addition to advances in machine\nlearning architectures and training techniques, achieving this ambitious goal\nrequires a method to convert a 3D atomic system into a feature representation\nthat preserves rotational and translational symmetry, smoothness under small\nperturbations, and invariance under re-ordering. The atomic orbital wavelet\nscattering transform preserves these symmetries by construction, and has\nachieved great success as a featurization method for machine learning energy\nprediction. Both in small molecules and in the bulk amorphous\n$\\text{Li}_{\\alpha}\\text{Si}$ system, machine learning models using wavelet\nscattering coefficients as features have demonstrated a comparable accuracy to\nDensity Functional Theory at a small fraction of the computational cost. In\nthis work, we test the generalizability of our $\\text{Li}_{\\alpha}\\text{Si}$\nenergy predictor to properties that were not included in the training set, such\nas elastic constants and migration barriers. 
We demonstrate that statistical\nfeature selection methods can reduce over-fitting and lead to remarkable\naccuracy in these extrapolation tasks.", "category": "physics_comp-ph" }, { "text": "Benchmarking five global optimization approaches for nano-optical shape\n optimization and parameter reconstruction: Numerical optimization is an important tool in the field of computational\nphysics in general and in nano-optics in particular. It has attracted attention\nwith the increase in complexity of structures that can be realized with\ntoday's nano-fabrication technologies, for which a rational design is no longer\nfeasible. Also, numerical resources are available to enable the computational\nphotonic material design and to identify structures that meet predefined\noptical properties for specific applications. However, the optimization\nobjective function is in general non-convex and its computation remains\nresource demanding, such that the right choice for the optimization method is\ncrucial to obtain excellent results. Here, we benchmark five global\noptimization methods for three typical nano-optical optimization problems:\nparticle swarm optimization, differential evolution, and Bayesian\noptimization as well as multi-start versions of downhill simplex optimization\nand the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm.
In\nthe examples shown, from the field of shape optimization and parameter\nreconstruction, Bayesian optimization, mainly known from machine learning\napplications, obtains significantly better results in a fraction of the run\ntimes of the other optimization methods.", "category": "physics_comp-ph" }, { "text": "Two-dimensional transition metal oxides Mn2O3 realized quantum anomalous\n Hall effect: The quantum anomalous Hall effect is an intriguing topologically nontrivial\nphase arising from spontaneous magnetization and spin-orbit coupling. However,\nthe tremendously harsh requirements for realizing the quantum anomalous Hall\neffect in magnetic topological insulators of Cr- or V-doped (Bi,Sb)2Te3 films\nhinder its practical applications. Here, we use first principles calculations\nto predict that the three Mn2O3 structures are intrinsic ferromagnetic Chern\ninsulators. Remarkably, a quantum anomalous Hall phase of Chern number C = -2 is\nfound, and there are two corresponding gapless chiral edge states appearing\ninside the bulk gap. More interestingly, only a small tensile strain is needed\nto induce the phase transition from the Cmm2 and C222 phases to the P6/mmm phase.\nMeanwhile, a topological quantum phase transition between a quantum anomalous\nHall phase and a trivial insulating phase can be realized. The combination of\nthese novel properties renders the two-dimensional ferromagnet a promising\nplatform for high-efficiency electronic and spintronic applications.", "category": "physics_comp-ph" }, { "text": "Bond diluted Levy spin-glass model and a new finite size scaling method\n to determine a phase transition: A spin-glass transition occurs both in and out of the limit of validity of\nmean-field theory on a diluted one dimensional chain of Ising spins where\nexchange bonds occur with a probability decaying as the inverse power of the\ndistance.
Varying the power in this long-range model corresponds, in a\none-to-one relationship, to changing the dimension in short-range spin-glass\nmodels. Using different finite-size scaling methods, evidence for a spin-glass\ntransition is also found for systems whose equivalent dimension is below the\nupper critical dimension at zero magnetic field. The application of a new\nmethod is discussed that can be exported to systems in a magnetic field.", "category": "physics_comp-ph" }, { "text": "Accurate Least-Squares P$_N$ Scaling based on Problem Optical Thickness\n for solving Neutron Transport Problems: In this paper, we present an accurate and robust scaling operator based on\nmaterial optical thickness (OT) for the least-squares spherical harmonics\n(LSP$_N$) method for solving neutron transport problems. LSP$_N$ without proper\nscaling is known to be erroneous in a highly scattering medium if the optical\nthickness of the material is large. A previously presented scaling developed by\nManteuffel et al.\ does improve the accuracy of LSP$_N$ in problems where the\nmaterial is optically thick. With that method, however, essentially no scaling\nis applied in optically thin materials, which can lead to an erroneous solution\nin the presence of a highly scattering medium. Another scaling approach, called the\nreciprocal-removal (RR) scaled LSP$_N$, which is equivalent to the self-adjoint\nangular flux (SAAF) equation, has numerical issues in highly-scattering\nmaterials due to a singular weighting. We propose a scaling based on optical\nthickness that improves the solution in optically thick media while avoiding\nthe singularity in the SAAF formulation.", "category": "physics_comp-ph" }, { "text": "Data-Driven Computing in Dynamics: We formulate extensions to Data Driven Computing for both distance minimizing\nand entropy maximizing schemes to incorporate time integration.
Previous works\nfocused on formulating both types of solvers in the presence of static\nequilibrium constraints. Here, the formulations assign data points a variable\nrelevance depending on distance to the solution and on maximum-entropy\nweighting, with distance minimizing schemes discussed as a special case. The\nresulting schemes consist of the minimization of a suitably-defined free energy\nover phase space subject to compatibility and a time-discretized momentum\nconservation constraint. We present selected numerical tests that establish\nthe convergence properties of both types of Data Driven solvers and solutions.", "category": "physics_comp-ph" }, { "text": "New models for PIXE simulation with Geant4: Particle induced X-ray emission (PIXE) is a physical effect that is not yet\nadequately modelled in Geant4. The current status as of the Geant4 9.2 release is\nreviewed and new developments are described. The capabilities of the software\nprototype are illustrated in application to the shielding of the X-ray\ndetectors of the eROSITA telescope on the upcoming Spectrum-X-Gamma space\nmission.", "category": "physics_comp-ph" }, { "text": "The world beyond physics: how big is it?: We discuss the possibility that the complexity of biological systems may lie\nbeyond the predictive capabilities of theoretical physics: in Stuart Kauffman's\nwords, there is a World Beyond Physics (WBP). It is argued that, in view of\nmodern developments of statistical mechanics, the WBP is smaller than one might\nanticipate from the standpoint of fundamental physical theories.", "category": "physics_comp-ph" }, { "text": "PolyPIC: the Polymorphic-Particle-in-Cell Method for Fluid-Kinetic\n Coupling: Particle-in-Cell (PIC) methods are widely used computational tools for fluid\nand kinetic plasma modeling.
While both the fluid and kinetic PIC approaches\nhave been successfully used to target either kinetic or fluid simulations,\nlittle has been done to combine fluid and kinetic particles under the same PIC\nframework. This work addresses this issue by proposing a new PIC method,\nPolyPIC, that uses polymorphic computational particles. In this numerical\nscheme, particles can be either kinetic or fluid, and fluid particles can\nbecome kinetic when necessary, e.g. particles undergoing a strong acceleration.\nWe design and implement the PolyPIC method, and test it against the Landau\ndamping of Langmuir and ion acoustic waves, the two-stream instability and sheath\nformation. We unify the fluid and kinetic PIC methods under one common\nframework comprising both fluid and kinetic particles, providing a tool for\nadaptive fluid-kinetic coupling in plasma simulations.", "category": "physics_comp-ph" }, { "text": "Exploratory numerical experiments with a macroscopic theory of\n interfacial interactions: Phenomenological theories of interfacial interactions have targeted\nterrestrial applications for a long time, and their exploitation has inspired\nour research programme to build up a macroscopic theory of gas-surface\ninteractions targeting the complex phenomenology of hypersonic reentry flows as\nan alternative to standard methods based on accommodation coefficients. The\nobjective of this paper is the description of methods employed and results\nachieved in an exploratory study, that is, the unsteady heat transfer between\ntwo solids in contact with and without an interface. It is a simple\nnumerical-demonstrator test case designed to facilitate quick numerical\ncalculations and to bring forth already sufficiently meaningful aspects\nrelevant to thermal protection due to the formation of the interface. The paper\nbegins with a brief introduction on the subject matter and a review of relevant\nliterature. Then the case is considered in which the interface is absent.
The\nimportance of tension continuity as a boundary condition on the same footing as\nheat-flux continuity is recognised, and the role of the former in governing the\nestablishment of the temperature-difference distribution over the separation\nsurface is explicitly shown. Evidence is given that the standard\ntemperature-continuity boundary condition is just a particular case.\nSubsequently the case in which the interface is formed between the solids is\nanalysed. The coupling among the heat-transfer equations applicable in the\nsolids and the balance equation for the surface thermodynamic energy formulated\nin terms of the surface temperature is discussed. Results are illustrated for\nplanar and cylindrical configurations; they show unequivocally that the\nthermal-protection action of the interface turns out to be driven exclusively\nby thermophysical properties of the solids and of the interface; accommodation\ncoefficients are not needed.", "category": "physics_comp-ph" }, { "text": "Efficient simulation of multidimensional phonon transport using\n energy-based variance-reduced Monte Carlo formulations: We present a new Monte Carlo method for obtaining solutions of the Boltzmann\nequation for describing phonon transport in micro- and nanoscale devices. The\nproposed method can resolve arbitrarily small signals (e.g. temperature\ndifferences) at small constant cost and thus represents a considerable\nimprovement compared to traditional Monte Carlo methods whose cost increases\nquadratically with decreasing signal. This is achieved via a control-variate\nvariance reduction formulation in which the stochastic particle description\nonly solves for the deviation from a nearby equilibrium, while the latter is\ndescribed analytically.
We also show that simulating an energy-based Boltzmann\nequation results in an algorithm that lends itself naturally to exact energy\nconservation thereby considerably improving the simulation fidelity.\nSimulations using the proposed method are used to investigate the effect of\nporosity on the effective thermal conductivity of silicon. We also present\nsimulations of a recently developed thermal conductivity spectroscopy process.\nThe latter simulations demonstrate how the computational gains introduced by\nthe proposed method enable the simulation of otherwise intractable multiscale\nphenomena.", "category": "physics_comp-ph" }, { "text": "Improved Methods for Mixing-Limited Spray Modeling: The realization that interfacial features play little role in diesel spray\nvaporization and advection has changed the modus operandi for spray modeling.\nLagrangian particle tracking has typically been focused on droplet behavior,\nwith sub-models for breakup, collision, and interfacially limited vaporization.\nIn contrast, the mixing-oriented spray models are constructed so that gas\nentrainment is the limiting factor in the evolution of momentum and energy. In\nthis work, a new spray model, ELMO (Eulerian-Lagrangian Mixing-Oriented), is\nimplemented in a three-dimensional CFD code with two-way coupling with the gas\nphase. The model is verified and validated with three canonical sprays\nincluding spray A, H, and G from the ECN database.", "category": "physics_comp-ph" }, { "text": "GeVn complexes for silicon-based room-temperature single-atom\n nanoelectronics: We characterize germanium-vacancy GeVn complexes in silicon using\nfirst-principles Density Functional Theory calculations with\nscreening-dependent hybrid functionals. We report on the local geometry and\nelectronic excited states of these defects, including charge transition levels\ncorresponding to the addition of one or more electrons to the defect. 
Our main\ntheoretical result concerns the GeV complex, which we show to give rise to two\nexcited states deep in the gap, at -0.51 and -0.35 eV from the conduction band,\nconsistent with the available spectroscopic data. The adopted theoretical\nscheme, suitable for computing a reliable estimate of the wavefunction decay,\nleads us to predict that such states are associated with an electron localization\nover a length of about 0.45 nm. By combining the electronic properties of the\nbare silicon vacancy, carrying deep states in the band gap, with the spatial\ncontrollability arising from single Ge ion implantation techniques, the GeVn\ncomplex emerges as a suitable ingredient for silicon-based room-temperature\nsingle-atom devices.", "category": "physics_comp-ph" }, { "text": "A simple approximation for the distribution of ions between charged\n plates in the weak coupling regime: The solution of the Poisson--Boltzmann equation for counterions confined\nbetween two charged plates is known analytically up to a constant, namely, the\nion density in the middle of the channel. This quantity is also relevant\nbecause it gives access, through the contact theorem, to the osmotic pressure\nof the system. Here we compare the values of the ion density obtained by\nnumerical and simulation approaches, and report a useful analytic approximation\nfor the weak coupling regime in the absence of added salt, which predicts the\nvalue of the ion density to within 5% in the worst case. The inclusion of higher\norder terms in a Laurent expansion can further improve the accuracy, at the\nexpense of simplicity.", "category": "physics_comp-ph" }, { "text": "Shape optimization of phononic band gap structures using the\n homogenization approach: The paper deals with optimization of the acoustic band gaps computed using\nthe homogenized model of a strongly heterogeneous elastic composite which is\nconstituted by soft inclusions periodically distributed in a stiff elastic\nmatrix.
We employ the homogenized model of such a medium to compute intervals ---\nband gaps --- of the incident wave frequencies for which acoustic waves cannot\npropagate. It was demonstrated that the band-gap distribution can be\ninfluenced by changing the shape of the inclusions. Therefore, we deal with the\nshape optimization problem to maximize low-frequency band gaps; their bounds\nare determined by analysing the effective mass tensor of the homogenized\nmedium. Analytic transformation formulas are derived which describe dispersion\neffects of resizing the inclusions. The core of the problem lies in the sensitivity\nof the eigenvalue problem associated with the microstructure. Computational\nsensitivity analysis is developed, which allows for efficient usage of\ngradient-based optimization methods. Numerical examples with 2D structures are\nreported to illustrate the effects of optimization with a stiffness constraint.\nThis study aims to develop modelling tools which can be used in the optimal\ndesign of new acoustic devices for "smart systems".", "category": "physics_comp-ph" }, { "text": "Optimizing working parameters of the twin-range cutoff method in terms\n of accuracy and efficiency: We construct an a priori error estimate for the force error of the twin-range\ncutoff method, which is widely used to treat the short-range non-bonded\ninteractions in molecular simulations. Based on the error and cost estimation,\nwe develop a work flow that can automatically determine the nearly most\nefficient twin-range cutoff parameters (i.e. the cutoff radii and the neighbor\nlist updating frequency) prior to a simulation for a predetermined accuracy.\nBoth the error estimate and the parameter tuning method are demonstrated to be\neffective by testing simulations of the standard Lennard-Jones 6-12 fluid in\nthe gas, liquid and supercritical states.
We recommend the tuned twin-range\ncutoff method that can save precious user time and computational resources.", "category": "physics_comp-ph" }, { "text": "Computing eigenfrequency sensitivities near exceptional points: Exceptional points are spectral degeneracies of non-Hermitian systems where\nboth eigenfrequencies and eigenmodes coalesce. The eigenfrequency sensitivities\nnear an exceptional point are significantly enhanced, and they diverge\ndirectly at the exceptional point. Capturing this enhanced sensitivity is\ncrucial for the investigation and optimization of exceptional-point-based\napplications, such as optical sensors. We present a numerical framework, based\non contour integration and algorithmic differentiation, to accurately and\nefficiently compute eigenfrequency sensitivities near exceptional points. We\napply the framework to an optical microdisk cavity and derive a\nsemi-analytical solution to validate the numerical results. The computed\neigenfrequency sensitivities are used to track the exceptional point along an\nexceptional surface in the parameter space. The presented framework can be\napplied to any kind of resonance problem, e.g., with arbitrary geometry or with\nexceptional points of arbitrary order.", "category": "physics_comp-ph" }, { "text": "Projectability disentanglement for accurate and automated\n electronic-structure Hamiltonians: Maximally-localized Wannier functions (MLWFs) are a powerful and broadly used\ntool to characterize the electronic structure of materials, from chemical\nbonding to dielectric response to topological properties. Most generally, one\ncan construct MLWFs that describe isolated band manifolds, e.g. for the valence\nbands of insulators, or entangled band manifolds, e.g. in metals or describing\nboth the valence and the conduction manifolds in insulators.
Obtaining MLWFs\nthat describe a target manifold accurately and with the most compact\nrepresentation often requires chemical intuition and trial and error, a\nchallenging step even for experienced researchers and a roadblock for automated\nhigh-throughput calculations. Here, we present a powerful approach that\nautomatically provides MLWFs spanning the occupied bands and their natural\ncomplement for the empty states, resulting in Wannier Hamiltonian models that\nprovide a tight-binding picture of optimized atomic orbitals in crystals. Key\nto the success of the algorithm is the introduction of a projectability measure\nfor each Bloch state onto atomic orbitals (here, chosen from the\npseudopotential projectors) that determines if that state should be kept\nidentically, discarded, or mixed into a disentangling algorithm. We showcase\nthe accuracy of our method by comparing a reference test set of 200 materials\nagainst the selected-columns-of-the-density-matrix algorithm, and its\nreliability by constructing Wannier Hamiltonians for 21737 materials from the\nMaterials Cloud.", "category": "physics_comp-ph" }, { "text": "Accurately simulating nine-dimensional phase space of relativistic\n particles in strong fields: Next-generation high-power lasers that can be focused to intensities\nexceeding 10^23 W/cm^2 are enabling new physics and applications. The physics\nof how these lasers interact with matter is highly nonlinear, relativistic, and\ncan involve lowest-order quantum effects. The current tool of choice for\nmodeling these interactions is the particle-in-cell (PIC) method. In strong\nfields, the motion of charged particles and their spin is affected by radiation\nreaction. Standard PIC codes usually use Boris or its variants to advance the\nparticles, which requires very small time steps in the strong-field regime to\nobtain accurate results. In addition, some problems require tracking the spin\nof particles, which creates a 9D particle phase space (x, u, s). 
Therefore,\nnumerical algorithms that enable high-fidelity modeling of the 9D phase space\nin the strong-field regime are desired. We present a new 9D phase space\nparticle pusher based on analytical solutions to the position, momentum and\nspin advance from the Lorentz force, together with the semi-classical form of\nradiation reaction (RR) in the Landau-Lifshitz equation and spin evolution given by the\nBargmann-Michel-Telegdi equation. These analytical solutions are obtained by\nassuming a locally uniform and constant electromagnetic field during a time\nstep. The solutions provide the 9D phase space advance in terms of a particle's\nproper time, and a mapping is used to determine the proper time step for each\nparticle from the simulation time step. Due to the analytical integration, the\nconstraint on the time step needed to resolve trajectories in ultra-high fields\ncan be greatly reduced. We present single-particle simulations and full PIC\nsimulations to show that the proposed particle pusher can greatly improve the\naccuracy of particle trajectories in 9D phase space for given laser fields. A\ndiscussion of the numerical efficiency of the proposed pusher is also provided.", "category": "physics_comp-ph" }, { "text": "A new approach for electronic heat conduction in molecular dynamics\n simulations: We present a new approach for the two-temperature molecular dynamics (MD)\nmodel for coupled simulations of electronic and phonon heat conduction in\nnanoscale systems. The proposed method uses a master equation to perform heat\nconduction of the electronic temperature, eschewing the need to use a basis set\nto evaluate operators. This characteristic allows us to seamlessly couple the\nelectronic heat conduction model with molecular dynamics codes without the need\nto introduce an auxiliary mesh. We implemented the methodology in the\nLarge-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code and,\nthrough multiple examples, we validated the methodology.
We then study the\neffect of electron-phonon interaction in high energy irradiation simulations\nand the effect of a laser pulse on metallic materials. We show that the model\nprovides an atomic-level description of energy transfer between phonons and\nelectrons in complex geometries. Thus, the proposed approach provides an\nalternative to the two-temperature molecular dynamics models. The parallel\nperformance and some aspects of the implementation are presented.", "category": "physics_comp-ph" }, { "text": "Wigner-Smith Time Delay Matrix for Electromagnetics: Theory and\n Phenomenology: Wigner-Smith (WS) time delay concepts have been used extensively in quantum\nmechanics to characterize delays experienced by particles interacting with a\npotential well. This paper formally extends WS time delay theory to Maxwell's\nequations and explores its potential applications in electromagnetics. The WS\ntime delay matrix relates a lossless and reciprocal system's scattering matrix\nto its frequency derivative and allows for the construction of modes that\nexperience well-defined group delays when interacting with the system. The\nmatrix's entries for guiding, scattering, and radiating systems are energy-like\noverlap integrals of the electric and/or magnetic fields that arise upon\nexcitation of the system via its ports. The WS time delay matrix has numerous\napplications in electromagnetics, including the characterization of group\ndelays in multiport systems, the description of electromagnetic fields in terms\nof elementary scattering processes, and the characterization of frequency\nsensitivities of fields and multiport antenna impedance matrices.", "category": "physics_comp-ph" }, { "text": "Inverting the Kohn-Sham equations with physics-informed machine learning: Electronic structure theory calculations offer an understanding of matter at\nthe quantum level, complementing experimental studies in materials science and\nchemistry.
One of the most widely used methods, density functional theory\n(DFT), maps a set of real interacting electrons to a set of fictitious\nnon-interacting electrons that share the same probability density. Ensuring\nthat the density remains the same depends on the exchange-correlation (XC)\nenergy and, through its derivative, the XC potential. Inversions provide a method to\nobtain exact XC potentials from target electronic densities, in hopes of\ngaining insights into accuracy-boosting approximations. Neural networks provide\na new avenue to perform inversions by learning the mapping from density to\npotential. In this work, we learn this mapping using physics-informed machine\nlearning (PIML) methods, namely physics-informed neural networks (PINNs) and\nFourier neural operators (FNOs). We demonstrate the capabilities of these two\nmethods on a dataset of one-dimensional atomic and molecular models. The\ncapabilities of each approach are discussed in conjunction with this\nproof-of-concept presentation. The primary finding of our investigation is that\nthe combination of both approaches has the greatest potential for inverting the\nKohn-Sham equations at scale.", "category": "physics_comp-ph" }, { "text": "Hydrodynamic behavior of the Pseudo-Potential lattice Boltzmann method\n for interfacial flows: The lattice Boltzmann method (LBM) is routinely employed in the simulation of\ncomplex multiphase flows comprising bulk phases separated by non-ideal\ninterfaces. LBM is intrinsically mesoscale with a hydrodynamic equivalence\ncommonly established by the Chapman-Enskog analysis, requiring that fields slowly vary\nin space and time. The latter assumptions become questionable close to\ninterfaces, where the method is also known to be affected by spurious\nnon-hydrodynamic contributions. This calls for quantitative hydrodynamical\nchecks.
In this paper we analyze the hydrodynamic behaviour of LBM\npseudo-potential models for the problem of break-up of a liquid ligament\ntriggered by the Plateau-Rayleigh instability. Simulations are performed at\nfixed interface thickness, while increasing the ligament radius, i.e. in the\n"sharp interface" limit. The influence of different LBM collision operators is also\nassessed. We find that different distributions of spurious currents along the\ninterface may change the outcome of the pseudo-potential model simulations\nconsiderably, which suggests that a proper fine-tuning of pseudo-potential\nmodels in time-dependent problems is needed before use in concrete\napplications. Taken together, we argue that the results of the proposed\nstudy provide valuable insight for engineering pseudo-potential model\napplications involving the hydrodynamics of liquid jets.", "category": "physics_comp-ph" }, { "text": "A Phase-Field Model for Fluid-Structure-Interaction: In this paper, we develop a novel phase-field model for fluid-structure\ninteraction (FSI) that is capable of handling very large deformations as well as\ntopology changes like contact of the solid with the domain boundary. The model is\nbased on a fully Eulerian description of the velocity field in both the fluid\nand the elastic domains. Viscous and elastic stresses in the Navier-Stokes\nequations are restricted to the corresponding domains by multiplication with\ntheir characteristic functions. To obtain the elastic stress, an additional\nOldroyd-B-like equation is solved. Thermodynamically consistent forces are\nderived by energy variation. The convergence of the derived equations to the\ntraditional sharp interface formulation of fluid-structure interaction is shown\nby matched asymptotic analysis. The model is evaluated in a challenging\nbenchmark scenario of an elastic body traversing a fluid channel.
A comparison\nto reference values from Arbitrary Lagrangian Eulerian (ALE) simulations shows\nvery good agreement. We highlight some distinct advantages of the new model,\nlike the avoidance of re-triangulations and the stable inclusion of surface\ntension. Further, we demonstrate how simple it is to include contact dynamics\ninto the model, by simulating a ball bouncing off a wall. We extend this\nscenario to include adhesion of the ball, which, to our knowledge, cannot be\nsimulated with any other FSI model. While we have restricted simulations to\nfluid-structure interaction, the model is capable of simulating any combination\nof viscous fluids, visco-elastic fluids and elastic solids.", "category": "physics_comp-ph" }, { "text": "Numerical Simulation Of Impregnation In Porous Media By Self-organized\n Gradient Percolation Method: The aim of this work is to develop a new numerical method to overcome the\ncomputational difficulties of numerical simulation of unsaturated impregnation\nin porous media. The numerical analysis by classical methods (F.E.M,\ntheta-method, ...) for this phenomenon requires small time-step and space\ndiscretizations to ensure both convergence and accuracy. Yet this leads to a\nhigh computational cost. Moreover, a very small time-step can lead to spurious\noscillations that impact the precision of the results. Thus, we propose to use\na Self-organized Gradient Percolation (SGP) algorithm to reduce the\ncomputational cost and overcome these numerical drawbacks. The SGP method is\nbased on gradient percolation theory, relevant to the calculation of local\nsaturation. The initialization of this algorithm is driven by an analytic\nsolution of the homogeneous diffusion equation, which is a convolution between a\nProbability Density Function (PDF) and a smoothing function. Thus, we propose\nto reproduce the evolution of the capillary pressure profiles by the evolution\nof the standard deviation of the PDF.
This algorithm is validated by comparing\nthe results with the capillary pressure profiles and the mass gain curve\nobtained by finite element simulations and experimental measurements,\nrespectively. The computational time of the proposed algorithm is lower than\nthat of finite element models for the one-dimensional case. In conclusion, the SGP\nmethod reduces the computational cost and does not produce spurious\noscillations. Work on the extension to 3D is ongoing and the first\nresults are promising.", "category": "physics_comp-ph" }, { "text": "On the similarity of meshless discretizations of Peridynamics and\n Smooth-Particle Hydrodynamics: This paper discusses the similarity of meshless discretizations of\nPeridynamics and Smooth-Particle-Hydrodynamics (SPH), if Peridynamics is\napplied to classical material models based on the deformation gradient. We show\nthat the discretized equations of both methods coincide if nodal integration is\nused. This equivalence implies that Peridynamics reduces to an old meshless\nmethod and all instability problems of collocation-type particle methods apply.\nThese instabilities arise as a consequence of the nodal integration scheme,\nwhich causes rank-deficiency and leads to spurious zero-energy modes. As a\nresult of the demonstrated equivalence to SPH, enhanced implementations of\nPeridynamics should employ more accurate integration schemes.", "category": "physics_comp-ph" }, { "text": "A robust incompressible Navier-Stokes solver for high density ratio\n multiphase flows: This paper presents a robust, adaptive numerical scheme for simulating high\ndensity ratio and high shear multiphase flows on locally refined Cartesian\ngrids that adapt to the evolving interfaces and track regions of high\nvorticity.
The algorithm combines the interface-capturing level set method with\na variable-coefficient incompressible Navier-Stokes solver that is demonstrated\nto stably resolve material contrast ratios of up to six orders of magnitude.\nThe discretization approach ensures second-order pointwise accuracy for both\nvelocity and pressure with several physical boundary treatments, including\nvelocity and traction boundary conditions. The paper includes several test\ncases that demonstrate the order of accuracy and algorithmic scalability of the\nflow solver. To ensure the stability of the numerical scheme in the presence of\nhigh density and viscosity ratios, we employ a consistent treatment of mass and\nmomentum transport in the conservative form of discrete equations. This\nconsistency is achieved by solving an additional mass balance equation, which\nwe approximate via a strong stability preserving Runge-Kutta time integrator\nand by employing the same mass flux (obtained from the mass equation) in the\ndiscrete momentum equation. The scheme uses higher-order total variation\ndiminishing (TVD) and convection-boundedness criterion (CBC) satisfying limiters\nto avoid numerical fluctuations in the transported density field. The\nhigh-order bounded convective transport is done on a dimension-by-dimension\nbasis, which makes the scheme simple to implement.
We also demonstrate through\nseveral test cases that the lack of consistent mass and momentum transport in\nnon-conservative formulations, which are commonly used in practice, or the use\nof non-CBC satisfying limiters can yield very large numerical errors and very\npoor accuracy for convection-dominant high density ratio flows.", "category": "physics_comp-ph" }, { "text": "FINETUNA: Fine-tuning Accelerated Molecular Simulations: Machine learning approaches have the potential to approximate Density\nFunctional Theory (DFT) for atomistic simulations in a computationally\nefficient manner, which could dramatically increase the impact of computational\nsimulations on real-world problems. However, they are limited by their accuracy\nand the cost of generating labeled data. Here, we present an online active\nlearning framework for accelerating the simulation of atomic systems\nefficiently and accurately by incorporating prior physical information learned\nby large-scale pre-trained graph neural network models from the Open Catalyst\nProject. Accelerating these simulations enables useful data to be generated\nmore cheaply, allowing better models to be trained and more atomistic systems\nto be screened. We also present a method of comparing local optimization\ntechniques on the basis of both their speed and accuracy. Experiments on 30\nbenchmark adsorbate-catalyst systems show that our method of transfer learning\nto incorporate prior information from pre-trained models accelerates\nsimulations by reducing the number of DFT calculations by 91%, while meeting an\naccuracy threshold of 0.02 eV 93% of the time. Finally, we demonstrate a\ntechnique for leveraging the interactive functionality built into VASP to\nefficiently compute single point calculations within our online active learning\nframework without significant startup costs.
This allows VASP to work in\ntandem with our framework while requiring 75% fewer self-consistent cycles than\nconventional single point calculations. The online active learning\nimplementation, and examples using the VASP interactive code, are available in\nthe open source FINETUNA package on Github.", "category": "physics_comp-ph" }, { "text": "A discontinuous Galerkin method for wave propagation in orthotropic\n poroelastic media with memory terms: In this paper, we investigate wave propagation in orthotropic poroelastic\nmedia by studying the time-domain poroelastic equations. Both the low-frequency\nBiot (LF-Biot) equations and the Biot-Johnson-Koplik-Dashen (Biot-JKD) models\nare considered. In the LF-Biot equations, the dissipation terms are proportional to\nthe relative velocity between the fluid and the solid by a constant. In contrast\nto this, the dissipation terms in the Biot-JKD model are in the form of time\nconvolution (memory) as a result of the frequency-dependence of fluid-solid\ninteraction at the underlying microscopic scale in the frequency domain. The\ndynamic tortuosity and permeability described by Darcy's law are two crucial\nfactors in this problem, and are highly linked to the viscous force. In the Biot\nmodel, the key difficulty is to handle the viscous term when the pore fluid\nflow is viscous. In the Biot-JKD dynamic permeability model, the convolution\noperator involves order $1/2$ shifted fractional derivatives in the time\ndomain, which is challenging to discretize. In this work, we utilize the\nmultipoint Pad\u00e9 (or rational) approximation for Stieltjes functions to\napproximate the dynamic tortuosity and then obtain an augmented system of\nequations which avoids storing solutions from past times.
The Runge-Kutta\ndiscontinuous Galerkin (RKDG) method is used to compute the numerical solution,\nand numerical examples are presented to demonstrate the high order accuracy and\nstability of the method.", "category": "physics_comp-ph" }, { "text": "Second Order Unconditional Positive Preserving Schemes for\n Non-equilibrium Reactive Flows with Mass and Mole Balance: In this study, a family of second-order process-based modified Patankar\nRunge-Kutta schemes is proposed with both the mass and mole maintained in\nbalance while preserving the positivity of density and pressure with the time\nstep determined by convection terms. An accuracy analysis is conducted to\nderive the necessary and sufficient conditions for the Runge-Kutta and Patankar\ncoefficients. Coupled with the finite volume method, the proposed schemes are\nextended to Euler equations with non-equilibrium reacting source terms.\nBenchmark tests are given to verify the expected order of accuracy and validate the\npositivity-preserving property for both density and pressure.", "category": "physics_comp-ph" }, { "text": "Central Moment Lattice Boltzmann Method using a Pressure-based\n Formulation for Multiphase Flows at High Density Ratios and including Effects\n of Surface Tension and Marangoni Stresses: Simulation of multiphase flows requires coupled capturing or tracking of the\ninterfaces in conjunction with the solution of fluid motion often occurring at\nmultiple scales. We will present unified cascaded LB methods based on central\nmoments for the solution of the incompressible two-phase flows at high density\nratios and for capturing of the interfacial dynamics.
Based on a modified\ncontinuous Boltzmann equation (MCBE) for two-phase flows, where a kinetic\ntransformation to the distribution function involving the pressure field is\nintroduced to reduce the associated numerical stiffness at high density\ngradients, a central moment cascaded LB formulation for computing the fluid\nmotion will be constructed. In this LB scheme, the collision step is prescribed\nby the relaxation of various central moments to their equilibria that are\nreformulated in terms of the pressure field obtained via matching to the\ncontinuous equilibria based on the transformed Maxwell distribution.\nFurthermore, the differential treatments for the effects of the source term\nrepresenting the change due to the pressure field and of the source term due to\nthe interfacial tension force and body forces appearing in the MCBE on\ndifferent moments are consistently accounted for in this cascaded LB solver\nthat computes the pressure and velocity fields. In addition, another cascaded\nLB scheme via modified equilibria will be developed to solve for the\ninterfacial dynamics represented by a phase field model based on the\nconservative Allen-Cahn equation. Based on numerical simulations of a variety\nof two-phase flow benchmark problems at high density ratios and involving the\neffects of surface tension and its tangential gradients (Marangoni stresses),\nwe will validate our unified cascaded LB approach and also demonstrate\nimprovements in numerical stability.", "category": "physics_comp-ph" }, { "text": "Thermal transport across grain boundaries in polycrystalline silicene: a\n multiscale modeling: During the fabrication process of large scale silicene through common\nchemical vapor deposition (CVD) technique, polycrystalline films are quite\nlikely to be produced, and the existence of Kapitza thermal resistance along\ngrain boundaries could result in substantial changes of their thermal\nproperties. 
In the present study, the thermal transport in polycrystalline\nsilicene was evaluated using a multiscale method. Non-equilibrium\nmolecular dynamics (NEMD) simulations were carried out to assess the interfacial\nthermal resistance of various constructed grain boundaries in silicene as well\nas to examine the effects of tensile strain and the mean temperature on the\ninterfacial thermal resistance. In the following stage, the effective thermal\nconductivity of polycrystalline silicene was investigated considering the\neffects of grain size and tensile strain. Our results indicate that the average\nvalues of Kapitza conductance at grain boundaries at room temperature were\nestimated at nearly 2.56*10^9 W/m^2K and 2.46*10^9 W/m^2K using the Tersoff\nand Stillinger-Weber interatomic potentials, respectively. Also, while\nincreasing the mean temperature does not change the Kapitza resistance, the\ninterfacial thermal resistance can be controlled by applying strain.\nFurthermore, it was found that, by tuning the grain size of polycrystalline\nsilicene, its thermal conductivity can be modulated by up to one order of\nmagnitude.
In order to eliminate the possibility of\nspurious solutions, we confirm the physical presence of the small scales by\npressure gradient computation along the walls.", "category": "physics_comp-ph" }, { "text": "Localizing energy in granular materials: A device for absorbing and storing short duration impulses in an initially\nuncompressed one-dimensional granular chain is presented. Simply stated, short\nregions of sufficiently soft grains are embedded in a hard granular chain.\nThese grains exhibit long-lived standing waves of predictable frequencies\nregardless of the timing of the arrival of solitary waves from the larger\nmatrix. We explore the origins, symmetry, and energy content of the soft region\nand its intrinsic modes.", "category": "physics_comp-ph" }, { "text": "A numerical algorithm for solving the coupled Schr\u00f6dinger equations\n using inverse power method: The inverse power method is a numerical algorithm to obtain the eigenvectors\nof a matrix. In this work, we develop an iteration algorithm, based on the\ninverse power method, to numerically solve the Schr\\\"odinger equation that\ncouples an arbitrary number of components. Such an algorithm can also be\napplied to multi-body systems. To show the power and accuracy of this\nmethod, we also present an example of solving the Dirac equation in the\npresence of an external scalar potential and a constant magnetic field, with\nsource code publicly available.", "category": "physics_comp-ph" }, { "text": "A conservative and non-dissipative Eulerian formulation for the\n simulation of soft solids in fluids: Soft solids in fluids find a wide range of applications in science and\nengineering, especially in the study of biological tissues and membranes. In\nthis study, an Eulerian finite volume approach has been developed to simulate\nfully resolved incompressible hyperelastic solids immersed in a fluid. We have\nadopted the recently developed reference map technique (RMT) by Valkov et al.\n(J. 
Appl. Mech., 82, 2015) and assessed multiple improvements for this\napproach. These modifications maintain the numerical robustness of the solver\nand allow simulations without any artificial viscosity in the solid regions\n(to stabilize the solver). This has also resulted in eliminating the striations\n(\"wrinkles\") of the fluid-solid interface that were seen before and hence\nobviates the need for any additional routines to achieve a smooth interface. An\napproximate projection method has been used to project the velocity field onto\na divergence-free field. Cost and accuracy improvements of the modifications on\nthe method have also been discussed.", "category": "physics_comp-ph" }, { "text": "Solving Fluctuation-Enhanced Poisson-Boltzmann Equations: Electrostatic correlations and fluctuations in ionic systems can be described\nwithin an extended Poisson-Boltzmann theory using a Gaussian variational form.\nThe resulting equations are challenging to solve because they require the\nsolution of a non-linear partial differential equation for the pair correlation\nfunction. This has limited existing studies to simple approximations or to\none-dimensional geometries. In this paper we show that the numerical solution\nof the equations is greatly simplified by the use of selective inversion of a\nfinite difference operator which occurs in the theory. This selective inversion\npreserves the sparse structure of the problem and leads to substantial savings\nin computer effort. 
In one and two dimensions further simplifications are made\nby using a mixture of selective inversion and Fourier techniques.", "category": "physics_comp-ph" }, { "text": "Uquantchem: A versatile and easy to use Quantum Chemistry Computational\n Software: In this paper we present the Uppsala Quantum Chemistry package (UQUANTCHEM),\na new and versatile computational platform with capabilities ranging from\nsimple Hartree-Fock calculations to state-of-the-art first-principles Extended\nLagrangian Born-Oppenheimer Molecular Dynamics (XL-BOMD) and diffusion quantum\nMonte Carlo (DMC). The UQUANTCHEM package is distributed under the General\nPublic License and can be directly downloaded from the code website. Together\nwith a presentation of the different capabilities of the uquantchem code and a\nmore technical discussion of how these capabilities have been implemented, the\nuser-friendly aspects of the package, based on its large number of default\nsettings, will also be presented. Furthermore, since the\ncode has been parallelized within the framework of the message passing\ninterface (MPI), the timings of some benchmark calculations are reported to\nillustrate how the code scales with the number of computational nodes for\ndifferent levels of chemical theory.", "category": "physics_comp-ph" }, { "text": "N-, B-, P-, Al-, As-, Ga-graphdiyne/graphyne lattices: First-principles\n investigation of mechanical, optical and electronic properties: Graphdiyne and graphyne are carbon-based two-dimensional (2D) porous atomic\nlattices, with outstanding physics and excellent application prospects for\nadvanced technologies, like nanoelectronics and energy storage systems. During\nthe last year, B- and N-graphdiyne nanomembranes were experimentally realized.\nMotivated by the latest experimental advances, in this work we predicted novel\nN-, B-, P-, Al-, As-, Ga-graphdiyne/graphyne 2D lattices. 
We then conducted\ndensity functional theory simulations to obtain the energy-minimized structures\nand explore the mechanical properties, thermal stability, and electronic and optical\ncharacteristics of these novel porous nanosheets. The acquired theoretical results\nreveal that the predicted carbon-based lattices are thermally stable. It was\nmoreover found that these novel 2D nanostructures can exhibit remarkably high\ntensile strengths or stretchability. The electronic structure analysis reveals\na semiconducting electronic character for the predicted monolayers. Moreover, the\noptical results indicate that the first absorption peaks of the imaginary part\nof the dielectric function for these novel porous lattices along the in-plane\ndirections are in the visible, IR and near-IR (NIR) range of light. This work\nhighlights the outstanding properties of graphdiyne/graphyne lattices and\nrecommends them as promising candidates for designing stretchable energy storage\nand nanoelectronics systems.", "category": "physics_comp-ph" }, { "text": "Data-Driven Forecasting of Non-Equilibrium Solid-State Dynamics: We present a data-driven approach to efficiently approximate nonlinear\ntransient dynamics in solid-state systems. Our proposed machine-learning model\ncombines a dimensionality reduction stage with a nonlinear vector\nautoregression scheme. We report an outstanding time-series forecasting\nperformance combined with an easy-to-deploy model and an inexpensive training\nroutine. Our results are of great relevance as they have the potential to\nmassively accelerate multi-physics simulation software and thereby guide the\nfuture development of solid-state based technologies.", "category": "physics_comp-ph" }, { "text": "Annealed Importance Sampling: Simulated annealing - moving from a tractable distribution to a distribution\nof interest via a sequence of intermediate distributions - has traditionally\nbeen used as an inexact method of handling isolated modes in Markov chain\nsamplers. 
Here, it is shown how one can use the Markov chain transitions for\nsuch an annealing sequence to define an importance sampler. The Markov chain\naspect allows this method to perform acceptably even for high-dimensional\nproblems, where finding good importance sampling distributions would otherwise\nbe very difficult, while the use of importance weights ensures that the\nestimates found converge to the correct values as the number of annealing runs\nincreases. This annealed importance sampling procedure resembles the second\nhalf of the previously-studied tempered transitions, and can be seen as a\ngeneralization of a recently-proposed variant of sequential importance\nsampling. It is also related to thermodynamic integration methods for\nestimating ratios of normalizing constants. Annealed importance sampling is\nmost attractive when isolated modes are present, or when estimates of\nnormalizing constants are required, but it may also be more generally useful,\nsince its independent sampling allows one to bypass some of the problems of\nassessing convergence and autocorrelation in Markov chain samplers.", "category": "physics_comp-ph" }, { "text": "Computing the heat conductivity of fluids from density fluctuations: Equilibrium molecular dynamics simulations, in combination with the\nGreen-Kubo (GK) method, have been extensively used to compute the thermal\nconductivity of liquids. However, the GK method relies on an ambiguous\ndefinition of the microscopic heat flux, which depends on how one chooses to\ndistribute energies over atoms. This ambiguity makes it problematic to employ\nthe GK method for systems with non-pairwise interactions. In this work, we show\nthat the hydrodynamic description of thermally driven density fluctuations can\nbe used to obtain the thermal conductivity of a bulk fluid unambiguously,\nthereby bypassing the need to define the heat flux. 
We verify that, for a model\nfluid with only pairwise interactions, our method yields estimates of thermal\nconductivity consistent with the GK approach. We apply our approach to compute\nthe thermal conductivity of a non-pairwise additive water model at\nsupercritical conditions, and then of a liquid hydrogen system described by a\nmachine-learning interatomic potential, at 33 GPa and 2000 K.", "category": "physics_comp-ph" }, { "text": "Using data-reduction techniques to analyse biomolecular trajectories: This chapter discusses the way in which dimensionality reduction algorithms\nsuch as diffusion maps and sketch-map can be used to analyze molecular dynamics\ntrajectories. The first part discusses how these various algorithms function,\nas well as practical issues such as landmark selection and how these algorithms\ncan be used when the data to be analyzed, comes from enhanced sampling\ntrajectories. In the later parts, a comparison between the results obtained by\napplying various algorithms to two sets of sample data is performed and\ndiscussed. This section is then followed by a summary of how one algorithm, in\nparticular, sketch-map, has been applied to a range of problems. The chapter\nconcludes with a discussion on the directions that we believe this field is\ncurrently moving.", "category": "physics_comp-ph" }, { "text": "$rp$-adaptation for compressible flows: We present an $rp$-adaptation strategy for high-fidelity simulation of\ncompressible inviscid flows with shocks. The mesh resolution in regions of flow\ndiscontinuities is increased by using a variational optimiser to $r$-adapt the\nmesh and cluster degrees of freedom there. In regions of smooth flow, we\nlocally increase or decrease the local resolution through increasing or\ndecreasing the polynomial order of the elements, respectively. 
This dual\napproach allows us to take advantage of the strengths of both methods for best\ncomputational performance, thereby reducing the overall cost of the simulation.\nThe adaptation workflow uses a sensor for both discontinuities and smooth\nregions that is cheap to calculate, but the framework is general and could be\nused in conjunction with other feature-based sensors or error estimators. We\ndemonstrate this proof-of-concept using two geometries at transonic and\nsupersonic flow regimes. The method has been implemented in the open-source\nspectral/$hp$ element framework $Nektar++$, and its dedicated high-order mesh\ngeneration tool $NekMesh$. The results show that the proposed $rp$-adaptation\nmethodology is a reasonably cost-effective way of improving accuracy.", "category": "physics_comp-ph" }, { "text": "jVMC: Versatile and performant variational Monte Carlo leveraging\n automated differentiation and GPU acceleration: The introduction of Neural Quantum States (NQS) has recently given a new\ntwist to variational Monte Carlo (VMC). The ability to systematically reduce\nthe bias of the wave function ansatz renders the approach widely applicable.\nHowever, performant implementations are crucial to reach the numerical state of\nthe art. Here, we present a Python codebase that supports arbitrary NQS\narchitectures and model Hamiltonians. Additionally leveraging automatic\ndifferentiation, just-in-time compilation to accelerators, and distributed\ncomputing, it is designed to facilitate the composition of efficient NQS\nalgorithms.", "category": "physics_comp-ph" }, { "text": "Capturing shocks and turbulence spectra in compressible flows. 
Part 2: A\n new hybrid PPM/WENO method: In Part 1 of the present paper the performance of several different low-\nand high-order finite-volume methods was assessed by investigating how well\nthey can capture the turbulent spectra of a compressible flow where small\nsmooth turbulent structures interact with shocks and discontinuities. The\ncomparisons showed that a second-order Godunov method with PPM interpolation\nprovides results virtually the same as a fourth-order WENO scheme but at a\nsignificantly lower cost. However, it is shown that the PPM method fails to\nprovide an accurate representation in the high-frequency range of the spectra.\nIn the present paper we show that this specific issue comes from the\nslope-limiting procedure, and a novel hybrid PPM/WENO method is developed, which\nhas the ability to capture the turbulent spectra with the accuracy of a\nformally high-order method, but at the cost of the second-order Godunov method.\nOverall, it is shown that virtually the same physical solution can be obtained\nmuch faster by refining a simulation with the second-order method and carefully\nchosen numerical procedures, rather than running a coarse high-order\nsimulation.", "category": "physics_comp-ph" }, { "text": "Numerical coupling of aerosol emissions, dry removal, and turbulent\n mixing in the E3SM Atmosphere Model version 1 (EAMv1), part I: dust budget\n analyses and the impacts of a revised coupling scheme: An earlier study evaluating the dust life cycle in the Energy Exascale Earth\nSystem Model (E3SM) Atmosphere Model version 1 (EAMv1) has revealed that the\nsimulated global mean dust lifetime is substantially shorter when higher\nvertical resolution is used, primarily due to significant strengthening of dust\ndry removal in source regions. 
This paper demonstrates that the sequential\nsplitting of aerosol emissions, dry removal, and turbulent mixing in the\nmodel's time integration loop, especially the calculation of dry removal after\nsurface emissions and before turbulent mixing, is the primary reason for the\nvertical resolution sensitivity reported in that earlier study. Based on this\nreasoning, we propose a simple revision to the numerical process coupling\nscheme, which moves the application of the surface emissions to after dry\nremoval and before turbulent mixing. The revised scheme allows newly emitted\nparticles to be transported aloft by turbulence before being removed from the\natmosphere, and hence better resembles the dust life cycle in the real world.\nSensitivity experiments are conducted and analyzed to evaluate the impact of\nthe revised coupling on the simulated aerosol climatology in EAMv1.", "category": "physics_comp-ph" }, { "text": "Validation of fluorescence transition probability calculations: A systematic and quantitative validation of the K and L shell X-ray\ntransition probability calculations according to different theoretical methods\nhas been performed against experimental data. This study is relevant to the\noptimization of data libraries used by software systems, namely Monte Carlo\ncodes, dealing with X-ray fluorescence. The results support the adoption of\ntransition probabilities calculated according to the Hartree-Fock approach,\nwhich manifest better agreement with experimental measurements than\ncalculations based on the Hartree-Slater method.", "category": "physics_comp-ph" }, { "text": "Surface roughness in finite element meshes: We present a practical approach for constructing meshes of general rough\nsurfaces with given autocorrelation functions based on the unstructured meshes\nof nominally smooth surfaces. 
The approach builds on a well-known method to\nconstruct correlated random numbers from white noise using a decomposition of\nthe autocorrelation matrix. We discuss important details arising in practical\napplications to the physical modeling of surface roughness and provide a\nsoftware implementation to enable use of the approach with a broad range of\nnumerical methods in various fields of science and engineering.", "category": "physics_comp-ph" }, { "text": "Carrier Selectivity and Passivation at the Group V elemental 2D\n Material--Si Interface of a PV Device: This study investigates the interfacial characteristics relevant to\nphotovoltaic (PV) devices of the Group--V elemental 2D layers with Si. The\nsurface passivation and carrier selectivity of the interface between $\\alpha$\nand $\\beta$ allotropes of arsenene, antimonene, and bismuthene monolayers with\nSi (100) and Si(111) were estimated \\emph{via} first--principles calculations.\nAmongst the various interface configurations studied, all of the Si(111)--based\nslabs and only a couple of the Si(100)--based slabs are found to be stable.\nBader charge analysis reveals that charge transfer from/to the Si slab to\n(As)/from (Sb and Bi) in the 2D layer occurs, indicating a strong interaction\nbetween atoms across the interface. Comparing within the various configurations\nof a particular charge (electron or hole) selective layer, the structural\ndistortion of the Si slab is the lowest for $\\alpha$--As/Si and $\\beta$-Bi/Si.\nThis translates as a lower surface density of states (DOS) in the band gap\narising out of the Si slab when integrated with $\\alpha$--arsenene and\n$\\beta$--bismuthene, implying better surface passivation. 
All-in-all, our\nanalysis suggests $\\alpha$-As as the best candidate for a passivating electron\nselective layer, while $\\beta$-Bi can be a promising candidate for a\npassivating hole selective layer.", "category": "physics_comp-ph" }, { "text": "A space-averaged model of branched structures: Many biological systems and artificial structures are ramified, and present a\nhigh geometric complexity. In this work, we propose a space-averaged model of\nbranched systems for conservation laws. From a one-dimensional description of\nthe system, we show that the space-averaged problem is also one-dimensional,\nrepresented by characteristic curves, defined as streamlines of the\nspace-averaged branch directions. The geometric complexity is then captured\nfirstly by the characteristic curves, and secondly by an additional forcing\nterm in the equations. This model is then applied to mass balance in a pipe\nnetwork and momentum balance in a tree under wind loading.", "category": "physics_comp-ph" }, { "text": "Heuristic Methods and Performance Bounds for Photonic Design: In the photonic design problem, a scientist or engineer chooses the physical\nparameters of a device to best match some desired device behavior. Many\ninstances of the photonic design problem can be naturally stated as a\nmathematical optimization problem that is computationally difficult to solve\nglobally. Because of this, several heuristic methods have been developed to\napproximately solve such problems. These methods often produce very good\ndesigns, and, in many practical applications, easily outperform 'traditional'\ndesigns that rely on human intuition. Yet, because these heuristic methods do\nnot guarantee that the approximate solution found is globally optimal, the\nquestion remains of just how much better a designer might hope to do. This\nquestion is addressed by performance bounds or impossibility results, which\ndetermine a performance level that no design can achieve. 
We focus on\nalgorithmic performance bounds, which involve substantial computation to\ndetermine. We illustrate a variety of both heuristic methods and performance\nbounds on two examples. In these examples (and many others not reported here)\nthe performance bounds show that the heuristic designs are nearly optimal, and\ncan be considered globally optimal in practice. This review serves to clearly set\nup the photonic design problem and unify existing approaches for calculating\nperformance bounds, while also providing some natural generalizations and\nproperties.", "category": "physics_comp-ph" }, { "text": "Homogenization of the vibro-acoustic transmission on perforated plates: The paper deals with modelling of acoustic waves which propagate in inviscid\nfluids interacting with perforated elastic plates. The plate can be replaced by\nan interface on which transmission conditions are derived by homogenization of\na problem describing vibroacoustic fluid-structure interactions in a\ntransmission layer in which the plate is embedded. The Reissner-Mindlin theory\nof plates is adopted for periodic perforations designed by arbitrary\ncylindrical holes with axes orthogonal to the plate midplane. The homogenized\nmodel of the vibroacoustic transmission is obtained using the two-scale\nasymptotic analysis with respect to the layer thickness which is proportional\nto the plate thickness and to the perforation period. The nonlocal, implicit\ntransmission conditions involve a jump in the acoustic potential and its normal\none-side derivatives across the interface which represents the plate with a\ngiven thickness. The homogenized model was implemented using the finite element\nmethod and validated using direct numerical simulations of the non-homogenized\nproblem. 
Numerical illustrations of the vibroacoustic transmission are\npresented.", "category": "physics_comp-ph" }, { "text": "Standardized Non-Intrusive Reduced Order Modeling Using Different\n Regression Models With Application to Complex Flow Problems: In recent years, numerical methods in industrial applications have evolved\nfrom a pure predictive tool towards a means for optimization and control. Since\nstandard numerical analysis methods have become prohibitively costly in such\nmulti-query settings, a variety of reduced order modeling (ROM) approaches have\nbeen advanced towards complex applications. In this context, the driving\napplication for this work is twin-screw extruders (TSEs): manufacturing devices\nwith an important economic role in plastics processing. Modeling the flow\nthrough a TSE requires non-linear material models and coupling with the heat\nequation alongside intricate mesh deformations, which is a comparatively\ncomplex scenario. We investigate how a non-intrusive, data-driven ROM can be\nconstructed for this application. We focus on the well-established proper\northogonal decomposition (POD) with regression albeit we introduce two\nadaptations: standardizing both the data and the error measures as well as --\ninspired by our space-time simulations -- treating time as a discrete\ncoordinate rather than a continuous parameter. We show that these steps make\nthe POD-regression framework more interpretable, computationally efficient, and\nproblem-independent. We proceed to compare the performance of three different\nregression models: Radial basis function (RBF) regression, Gaussian process\nregression (GPR), and artificial neural networks (ANNs). We find that GPR\noffers several advantages over an ANN, constituting a viable and\ncomputationally inexpensive non-intrusive ROM. 
Additionally, the framework is\nopen-sourced to serve as a starting point for other practitioners and\nfacilitate the use of ROM in general engineering workflows.", "category": "physics_comp-ph" }, { "text": "Algorithms for uniform particle initialization in domains with complex\n boundaries: Accurate mesh-free simulation of fluid flows involving complex boundaries\nrequires that the boundaries be captured accurately in terms of particles. In\nthe context of incompressible/weakly-compressible fluid flow, the SPH method is\nmore accurate when the particle distribution is uniform. Hence, for the time\naccurate simulation of flow in the presence of complex boundaries, one must\nhave both an accurate boundary discretization as well as a uniform distribution\nof particles to initialize the simulation. This process of obtaining an initial\nuniform distribution of particles is called \"particle packing\". In this paper,\nvarious particle packing algorithms present in the literature are implemented\nand compared. An improved SPH-based algorithm is proposed which produces\nuniform particle distributions of both the fluid and solid domains in two and\nthree dimensions. Some challenging geometries are constructed to demonstrate\nthe accuracy of the new algorithm. The implementation of the algorithm is open\nsource and the manuscript is fully reproducible.", "category": "physics_comp-ph" }, { "text": "Physics-integrated machine learning: embedding a neural network in the\n Navier-Stokes equations. Part I: In this paper the physics- (or PDE-) integrated machine learning (ML)\nframework is investigated. The Navier-Stokes (NS) equations are solved using\nTensorflow library for Python via Chorin's projection method. The methodology\nfor the solution is provided, which is compared with a classical solution\nimplemented in Fortran. 
This solution is integrated with a neural network (NN).\nSuch integration allows one to train a NN embedded in the NS equations without\nhaving the target (labeled training) data for the direct outputs from the NN;\ninstead, the NN is trained on the field data (quantities of interest), which\nare the solutions for the NS equations. To demonstrate the performance of the\nframework, a case study is formulated: the 2D lid-driven cavity with\nnon-constant velocity-dependent dynamic viscosity is considered. A NN is\ntrained to predict the dynamic viscosity from the velocity fields. The\nperformance of the physics-integrated ML is compared with a classical ML\nframework, in which a NN is directly trained on the available data (fields of the\ndynamic viscosity). Both frameworks showed similar accuracy; however, despite\nits complexity and computational cost, the physics-integrated ML offers\nprincipal advantages, namely: (i) the target outputs (labeled training data)\nfor a NN might be unknown and can be recovered using PDEs; (ii) it is not\nnecessary to extract and preprocess information (training targets) from big\ndata; instead, it can be extracted by PDEs; (iii) there is no need to employ\nphysics- or scale-separation assumptions to build a closure model. Advantage\n(i) is demonstrated in this paper, while advantages (ii) and\n(iii) are the subjects of future work. Such integration of PDEs with ML opens\nthe door to a tighter data-knowledge connection, which may potentially influence\nthe further development of physics-based modelling with ML for data-driven\nthermal fluid models.", "category": "physics_comp-ph" }, { "text": "Modeling of nonequilibrium surface growth by a limited mobility model\n with distributed diffusion length: Kinetic Monte-Carlo (KMC) simulations are a well-established numerical tool\nto investigate the time-dependent surface morphology in molecular beam epitaxy\n(MBE) experiments. 
In parallel, simplified approaches such as limited mobility\n(LM) models characterized by a fixed diffusion length have been studied. Here,\nwe investigate an extended LM model to gain deeper insight into the role of\ndiffusional processes concerning the growth morphology. Our model is based on\nthe stochastic transition rules of the Das Sarma-Tamborena (DT) model, but\ndiffers from the latter via a variable diffusion length. A first guess for this\nlength can be extracted from the saturation value of the mean-squared\ndisplacement calculated from short KMC simulations. Comparing the resulting\nsurface morphologies in the sub- and multilayer growth regime to those obtained\nfrom KMC simulations, we find deviations which can be cured by adding\nfluctuations to the diffusion length. This mimics the stochastic nature of\nparticle diffusion on a substrate, an aspect which is usually neglected in LM\nmodels. We propose to add fluctuations to the diffusion length by choosing this\nquantity for each adsorbed particle from a Gaussian distribution, where the\nvariance of the distribution serves as a fitting parameter. We show that the\ndiffusional fluctuations have a huge impact on cluster properties during\nsubmonolayer growth as well as on the surface profile in the high coverage\nregime. The analysis of the surface morphologies on one- and two-dimensional\nsubstrates during sub- and multilayer growth shows that the LM model can\nproduce structures that are indistinguishable from those obtained from KMC simulations\nunder arbitrary growth conditions.", "category": "physics_comp-ph" }, { "text": "Near-field simulations of pellet ablation for disruptions mitigation in\n tokamaks: Detailed numerical studies of the ablation of a single neon pellet in the\nplasma disruption mitigation parameter space have been performed. Simulations\nwere carried out using FronTier, a hydrodynamic and low magnetic Reynolds\nnumber MHD code with explicit tracking of material interfaces. 
FronTier's\nphysics models resolve the pellet surface ablation and the formation of a\ndense, cold cloud of ablated material, the deposition of energy from hot plasma\nelectrons passing through the ablation cloud, expansion of the ablation cloud\nalong magnetic field lines and the radiation losses. A local thermodynamic\nequilibrium model based on Saha equations has been used to resolve atomic\nprocesses in the cloud and Redlich-Kwong corrections to the ideal gas equation\nof state for cold and dense gases have been used near the pellet surface. The\nFronTier pellet code is the next generation of the code described in [R.\nSamulyak, T. Lu, P. Parks, Nuclear Fusion, (47) 2007, 103--118]. It has been\nvalidated against the semi-analytic improved Neutral Gas Shielding model in the\n1D spherically symmetric approximation. Main results include quantification of\nthe influence of atomic processes and Redlich-Kwong corrections on the pellet\nablation in spherically symmetric approximation and verification of analytic\nscaling laws in a broad range of pellet and plasma parameters. Using axially\nsymmetric MHD simulations, properties of ablation channels and the reduction of\npellet ablation rates in magnetic fields of increasing strength have been\nstudied. While the main emphasis has been given to neon pellets for the plasma\ndisruption mitigation, selected results on deuterium fueling pellets have also\nbeen presented.", "category": "physics_comp-ph" }, { "text": "Coupling lattice Boltzmann model for simulation of thermal flows on\n standard lattices: In this paper, a coupling lattice Boltzmann (LB) model for simulating thermal\nflows on the standard D2Q9 lattice is developed in the framework of the\ndouble-distribution-function (DDF) approach in which the viscous heat\ndissipation and compression work are considered. 
In the model, a density\ndistribution function is used to simulate the flow field, while a total energy\ndistribution function is employed to simulate the temperature field. The\ndiscrete equilibrium density and total energy distribution functions are\nobtained from the Hermite expansions of the corresponding continuous\nequilibrium distribution functions. The pressure given by the equation of state\nof perfect gases is recovered in the macroscopic momentum and energy equations.\nThe coupling between the momentum and energy transports makes the model\napplicable for general thermal flows such as non-Boussinesq flows, while the\nexisting DDF LB models on standard lattices are usually limited to Boussinesq\nflows in which the temperature variation is small. Meanwhile, the simple\nstructure and basic advantages of the DDF LB approach are retained. The model\nis tested by numerical simulations of thermal Couette flow, attenuation-driven\nacoustic streaming, and natural convection in a square cavity with small and\nlarge temperature differences. The numerical results are found to be in good\nagreement with the analytical solutions and/or other numerical results reported\nin the literature.", "category": "physics_comp-ph" }, { "text": "Lattice Boltzmann method for relativistic hydrodynamics: Issues on\n conservation law of particle number and discontinuities: In this paper, we aim to address several important issues about the recently\ndeveloped lattice Boltzmann (LB) model for relativistic hydrodynamics [M.\nMendoza et al., Phys. Rev. Lett. 105, 014502 (2010); Phys. Rev. D 82, 105008\n(2010)]. First, we study the conservation law of particle number in the\nrelativistic LB model. Through the Chapman-Enskog analysis, it is shown that in\nthe relativistic LB model the conservation equation of particle number is a\nconvection-diffusion equation rather than a continuity equation, which makes\nthe evolution of particle number dependent on the relaxation time. 
Furthermore,\nwe investigate the origin of the discontinuities appearing in relativistic\nproblems with high viscosities, which were reported in a recent study [D. Hupp\net al., Phys. Rev. D 84, 125015 (2011)]. A multiple-relaxation-time (MRT)\nrelativistic LB model is presented to examine the influences of different\nrelaxation times on the discontinuities. Numerical experiments show that the\ndiscontinuities can be eliminated by setting the relaxation time $\\tau_e$\n(related to the bulk viscosity) to be sufficiently smaller than the relaxation\ntime $\\tau_v$ (related to the shear viscosity). Meanwhile, it is found that the\nrelaxation time $\\tau_\\varepsilon$, which has no effect on the conservation\nequations at the Navier-Stokes level, will affect the numerical accuracy of the\nrelativistic LB model. Moreover, the accuracy of the relativistic LB model for\nsimulating moderately relativistic problems is also investigated.", "category": "physics_comp-ph" }, { "text": "EasyNData: A simple tool to extract numerical values from published\n plots: The comparison of numerical data with published plots is a frequently\noccurring task. In this article I present a short computer program written in\nJava(TM) helping in those cases where someone wants to get the numbers out of a\nplot but is not able to read the plot with decent accuracy and cannot contact\nthe author of the plot directly for whatever reason. The accuracy reached by\nthis method depends on many factors. For the examples illustrated in this paper\na precision at the level of a few per mille could be reached.
The tool might\nhelp in improving the quality of future publications.", "category": "physics_comp-ph" }, { "text": "Characterization and Integration of the Singular Test Integrals in the\n Method-of-Moments Implementation of the Electric-Field Integral Equation: In this paper, we characterize the logarithmic singularities arising in the\nmethod of moments from the Green's function in integrals over the test domain,\nand we use two approaches for designing geometrically symmetric quadrature\nrules to integrate these singular integrands. These rules exhibit better\nconvergence properties than quadrature rules for polynomials and, in general,\nlead to better accuracy with a lower number of quadrature points. We\ndemonstrate their effectiveness for several examples encountered in both the\nscalar and vector potentials of the electric-field integral equation (singular,\nnear-singular, and far interactions) as compared to the commonly employed\npolynomial scheme and the double Ma--Rokhlin--Wandzura (DMRW) rules, whose\nsample points are located asymmetrically within triangles.", "category": "physics_comp-ph" }, { "text": "The Voigt and complex error function: Huml\u00ed\u010dek's rational\n approximation generalized: Accurate yet efficient computation of the Voigt and complex error function is\na challenge since decades in astrophysics and other areas of physics. Rational\napproximations have attracted considerable attention and are used in many\ncodes, often in combination with other techniques. The 12-term code \"cpf12\" of\nHuml\\'i\\v{c}ek (1979) achieves an accuracy of five to six significant digits\nthroughout the entire complex plane. Here we generalize this algorithm to a\nlarger (even) number of terms. The $n=16$ approximation has a relative accuracy\nbetter than $10^{-5}$ for almost the entire complex plane except for very small\nimaginary values of the argument even without the correction term required for\nthe cpf12 algorithm. 
With 20 terms the accuracy is better than $10^{-6}$. In\naddition to the accuracy assessment we discuss methods for optimization and\npropose a combination of the 16-term approximation with the asymptotic\napproximation of Huml\\'i\\v{c}ek (1982) for high efficiency.", "category": "physics_comp-ph" }, { "text": "The Accuracy of Restricted Boltzmann Machine Models of Ising Systems: Restricted Boltzmann machine (RBM) provide a general framework for modeling\nphysical systems, but their behavior is dependent on hyperparameters such as\nthe learning rate, the number of hidden nodes and the form of the threshold\nfunction. This article accordingly examines in detail the influence of these\nparameters on Ising spin system calculations. A tradeoff is identified between\nthe accuracy of statistical quantities such as the specific heat and that of\nthe joint distribution of energy and magnetization. The optimal structure of\nthe RBM therefore depends intrinsically on the physical problem to which it is\napplied.", "category": "physics_comp-ph" }, { "text": "Low rank representations for quantum simulation of electronic structure: The quantum simulation of quantum chemistry is a promising application of\nquantum computers. However, for N molecular orbitals, the $\\mathcal{O}(N^4)$\ngate complexity of performing Hamiltonian and unitary Coupled Cluster Trotter\nsteps makes simulation based on such primitives challenging. We substantially\nreduce the gate complexity of such primitives through a two-step low-rank\nfactorization of the Hamiltonian and cluster operator, accompanied by\ntruncation of small terms. 
Using truncations that incur errors below chemical\naccuracy, we are able to perform Trotter steps of the arbitrary basis\nelectronic structure Hamiltonian with $\\mathcal{O}(N^3)$ gate complexity in\nsmall simulations, which reduces to $\\mathcal{O}(N^2 \\log N)$ gate complexity\nin the asymptotic regime, while our unitary Coupled Cluster Trotter step has\n$\\mathcal{O}(N^3)$ gate complexity as a function of increasing basis size for a\ngiven molecule. In the case of the Hamiltonian Trotter step, these circuits\nhave $\\mathcal{O}(N^2)$ depth on a linearly connected array, an improvement\nover the $\\mathcal{O}(N^3)$ scaling assuming no truncation. As a practical\nexample, we show that a chemically accurate Hamiltonian Trotter step for a 50\nqubit molecular simulation can be carried out in the molecular orbital basis\nwith as few as 4,000 layers of parallel nearest-neighbor two-qubit gates,\nconsisting of fewer than 100,000 non-Clifford rotations. We also apply our\nalgorithm to iron-sulfur clusters relevant for elucidating the mode of action\nof metalloenzymes.", "category": "physics_comp-ph" }, { "text": "Iterative method to compute the Fermat points and Fermat distances of\n multiquarks: The multiquark confining potential is proportional to the total distance of\nthe fundamental strings linking the quarks and antiquarks. We address the\ncomputation of the total string distance and of the Fermat points where the\ndifferent strings meet. For a meson (quark-antiquark system) the distance is\ntrivially the quark-antiquark distance. For a baryon (three quark system) the\nproblem was solved geometrically from the onset, by Fermat and by Torricelli.\nThe geometrical solution can be determined just with a ruler and a compass, but\ntranslation of the geometrical solution to an analytical expression is not as\ntrivial. For tetraquarks, pentaquarks, hexaquarks, etc., the geometrical\nsolution is much more complicated.
Here we provide an iterative method,\nconverging fast to the correct Fermat points and the total distances, relevant\nfor the multiquark potentials. We also review briefly the geometrical methods\nleading to the Fermat points and to the total distances.", "category": "physics_comp-ph" }, { "text": "Sailfish: a flexible multi-GPU implementation of the lattice Boltzmann\n method: We present Sailfish, an open source fluid simulation package implementing the\nlattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using\nCUDA/OpenCL. We take a novel approach to GPU code implementation and use\nrun-time code generation techniques and a high level programming language\n(Python) to achieve state of the art performance, while allowing easy\nexperimentation with different LBM models and tuning for various types of\nhardware. We discuss the general design principles of the code, scaling to\nmultiple GPUs in a distributed environment, as well as the GPU implementation\nand optimization of many different LBM models, both single component (BGK, MRT,\nELBM) and multicomponent (Shan-Chen, free energy). The paper also presents\nresults of performance benchmarks spanning the last three NVIDIA GPU\ngenerations (Tesla, Fermi, Kepler), which we hope will be useful for\nresearchers working with this type of hardware and similar codes.", "category": "physics_comp-ph" }, { "text": "An acoustic and shock wave capturing compact high-order gas-kinetic\n scheme with spectral-like resolution: In this paper, a compact high-order gas-kinetic scheme (GKS) with spectral\nresolution will be presented and used in the simulation of acoustic and shock\nwaves. For accurate simulation, the numerical scheme is required to have\nexcellent dissipation-dispersion preserving property, while the wave modes,\npropagation characteristics, and wave speed of the numerical solution should be\nkept as close as possible to the exact solution of governing equations. 
For\ncompressible flow simulation with shocks, the numerical scheme has to be\nequipped with proper numerical dissipation to make a crisp transition in the\nshock layer. Based on the high-order gas evolution model, the GKS provides a\ntime accurate solution at a cell interface, from which both the time accurate\nflux function and the time evolving flow variables can be obtained. The GKS\nupdates explicitly both cell-averaged conservative flow variables and the\ncell-averaged gradients by applying the Gauss theorem along the boundary of the\ncontrol volume. Based on the cell-averaged flow variables and cell-averaged\ngradients, a reconstruction with compact stencil can be obtained. With the same\nstencil of a second-order scheme, a reconstruction up to 8th-order spatial\naccuracy can be constructed, which includes the nonlinear and linear\nreconstructions for the non-equilibrium and equilibrium states respectively.\nThe GKS unifies the nonlinear and linear reconstruction through a time\nevolution process at a cell interface from the non-equilibrium state to an\nequilibrium one. In the region between these two limits, the contribution from\nnonlinear and linear reconstructions depends on the weighting functions\n$\\exp(-\\Delta t/\\tau)$ and $(1-\\exp(-\\Delta t /\\tau))$, where $\\Delta t$ is the\ntime step and $\\tau$ is the particle collision time, which is enhanced in the\nshock region. As a result, both shocks and acoustic waves can be captured\naccurately in GKS.", "category": "physics_comp-ph" }, { "text": "Imaging 3D Chemistry at 1 nm Resolution with Fused Multi-Modal Electron\n Tomography: Measuring the three-dimensional (3D) distribution of chemistry in nanoscale\nmatter is a longstanding challenge for metrological science. The inelastic\nscattering events required for 3D chemical imaging are too rare, requiring high\nbeam exposure that destroys the specimen before an experiment completes. Even\nlarger doses are required to achieve high resolution.
Thus, chemical mapping in\n3D has been unachievable except at lower resolution with the most\nradiation-hard materials. Here, high-resolution 3D chemical imaging is achieved\nnear or below one nanometer resolution in a Au-Fe$_3$O$_4$ metamaterial,\nCo$_3$O$_4$ - Mn$_3$O$_4$ core-shell nanocrystals, and\nZnS-Cu$_{0.64}$S$_{0.36}$ nanomaterial using fused multi-modal electron\ntomography. Multi-modal data fusion enables high-resolution chemical tomography\noften with 99\\% less dose by linking information encoded within both elastic\n(HAADF) and inelastic (EDX / EELS) signals. Now sub-nanometer 3D resolution of\nchemistry is measurable for a broad class of geometrically and compositionally\ncomplex materials.", "category": "physics_comp-ph" }, { "text": "Tensor-structured algorithm for reduced-order scaling large-scale\n Kohn-Sham density functional theory calculations: We present a tensor-structured algorithm for efficient large-scale DFT\ncalculations by constructing a Tucker tensor basis that is adapted to the\nKohn-Sham Hamiltonian and localized in real-space. The proposed approach uses\nan additive separable approximation to the Kohn-Sham Hamiltonian and an $L_1$\nlocalization technique to generate the 1-D localized functions that constitute\nthe Tucker tensor basis. Numerical results show that the resulting Tucker\ntensor basis exhibits exponential convergence in the ground-state energy with\nincreasing Tucker rank. Further, the proposed tensor-structured algorithm\ndemonstrated sub-quadratic scaling with system size for both systems with and\nwithout a gap, and involving many thousands of atoms. 
This reduced-order\nscaling has also resulted in the proposed approach outperforming a plane-wave\nDFT implementation for systems beyond 2,000 electrons.", "category": "physics_comp-ph" }, { "text": "Tensile properties of structural I clathrate hydrates: Role of guest-host\n hydrogen bonding ability: Clathrate hydrates (CHs) are one of the most promising molecular structures\nin applications of gas capture and storage, and gas separations. Fundamental\nknowledge of the mechanical characteristics of CHs is of crucial importance for\nassessing gas storage and separations at cold conditions, as well as for\nunderstanding their stability and formation mechanisms. Here, the tensile\nmechanical properties of structural I CHs encapsulating a variety of guest\nspecies (methane, ammonia, sulfureted hydrogen, formaldehyde, methanol, and\nmethyl mercaptan) that have different abilities to form hydrogen (H-) bonds\nwith water molecules are explored by classical molecular dynamics (MD)\nsimulations. All investigated CHs are structurally stable clathrate structures.\nBasic mechanical properties of CHs including the tensile limit and Young's\nmodulus are dominated by the H-bonding ability of host-guest molecules and the\nguest molecular polarity. CHs containing small methane, formaldehyde and\nsulfureted hydrogen guest molecules that possess weak H-bonding ability are\nmechanically robust clathrate structures and are mechanically destabilized via\nbrittle failure on the (1 0 1) plane.
However, those entrapping methyl mercaptan, methanol, and\nammonia that have strong H-bonding ability are mechanically weak molecular\nstructures and mechanically destabilized through ductile failure as a result of\ngradual global dissociation of clathrate cages.", "category": "physics_comp-ph" }, { "text": "Photon elastic scattering simulation: validation and improvements to\n Geant4: Several models for the simulation of photon elastic scattering are\nquantitatively evaluated with respect to a large collection of experimental\ndata retrieved from the literature. They include models based on the form\nfactor approximation, on S-matrix calculations and on analytical\nparameterizations; they exploit publicly available data libraries and\ntabulations of theoretical calculations. Some of these models are currently\nimplemented in general purpose Monte Carlo systems; some have been implemented\nand evaluated for the first time in this paper for possible use in Monte Carlo\nparticle transport. The analysis mainly concerns the energy range between 5 keV\nand a few MeV. The validation process identifies the newly implemented model\nbased on second order S-matrix calculations as the one best reproducing\nexperimental measurements. The validation results show that, along with\nRayleigh scattering, additional processes, not yet implemented in Geant4 nor in\nother major Monte Carlo systems, should be taken into account to realistically\ndescribe photon elastic scattering with matter above 1 MeV. 
Evaluations of the\ncomputational performance of the various simulation algorithms are reported\nalong with the analysis of their physics capabilities.", "category": "physics_comp-ph" }, { "text": "An adaptive scalable fully implicit algorithm based on stabilized finite\n element for reduced visco-resistive MHD: The magnetohydrodynamics (MHD) equations are continuum models used in the\nstudy of a wide range of plasma physics systems, including the evolution of\ncomplex plasma dynamics in tokamak disruptions. However, efficient numerical\nsolution methods for MHD are extremely challenging due to disparate time and\nlength scales, strong hyperbolic phenomena, and nonlinearity. Therefore the\ndevelopment of scalable, implicit MHD algorithms and high-resolution adaptive\nmesh refinement strategies is of considerable importance. In this work, we\ndevelop a high-order stabilized finite-element algorithm for the reduced\nvisco-resistive MHD equations based on the MFEM finite element library\n(mfem.org). The scheme is fully implicit, solved with the Jacobian-free\nNewton-Krylov (JFNK) method with a physics-based preconditioning strategy. Our\npreconditioning strategy is a generalization of the physics-based\npreconditioning methods in [Chacon, et al, JCP 2002] to adaptive, stabilized\nfinite elements. Algebraic multigrid methods are used to invert sub-block\noperators to achieve scalability. A parallel adaptive mesh refinement scheme\nwith dynamic load-balancing is implemented to efficiently resolve the\nmulti-scale spatial features of the system. Our implementation uses the MFEM\nframework, which provides arbitrary-order polynomials and flexible adaptive\nconforming and non-conforming meshes capabilities. Results demonstrate the\naccuracy, efficiency, and scalability of the implicit scheme in the presence of\nlarge scale disparity. 
The potential of the AMR approach is demonstrated on an\nisland coalescence problem in the high Lundquist-number regime ($\\ge 10^7$)\nwith the successful resolution of plasmoid instabilities and thin current\nsheets.", "category": "physics_comp-ph" }, { "text": "Towards optimal explicit time-stepping schemes for the gyrokinetic\n equations: The nonlinear gyrokinetic equations describe plasma turbulence in laboratory\nand astrophysical plasmas. To solve these equations, massively parallel codes\nhave been developed and run on present-day supercomputers. This paper describes\nmeasures to improve the efficiency of such computations, thereby making them\nmore realistic. Explicit Runge-Kutta schemes are considered to be well suited\nfor time-stepping. Although the numerical algorithms are often highly\noptimized, performance can still be improved by a suitable choice of the\ntime-stepping scheme, based on spectral analysis of the underlying operator.\nHere, an operator splitting technique is introduced to combine first-order\nRunge-Kutta-Chebychev schemes for the collision term with fourth-order schemes\nfor the remaining terms. In the nonlinear regime, based on the observation of\neigenvalue shifts due to the (generalized) $E\\times B$ advection term, an\naccurate and robust estimate for the nonlinear timestep is developed. The\npresented techniques can reduce simulation times by factors of up to three in\nrealistic cases. This substantial speedup encourages the use of similar\ntimestep optimized explicit schemes not only for the gyrokinetic equation, but\nalso for other applications with comparable properties.", "category": "physics_comp-ph" }, { "text": "Modeling nonlinear wave-body interaction with the Harmonic Polynomial\n Cell method combined with the Immersed Boundary Method on a fixed grid: To model the propagation of large water waves and associated loads applied to\noffshore structures, scientists and engineers have a need of fast and accurate\nmodels. 
A wide range of models have been developed in order to predict\nwave-fields and hydrodynamic loads at small scale, from the linear potential\nboundary element method to complete CFD codes based on the Navier-Stokes\nequations. Although the latter are well adapted to solve the wave-structure\ninteraction at small scale, their use is limited due to the computational cost\nof such models and numerical diffusion. Alternative approaches, capturing the\nnonlinear effects, are thus needed. Shao and Faltinsen [5] proposed an\ninnovative technique, called the \"harmonic polynomial cell\" (HPC) method, to\ntackle this problem. This approach is implemented and tested in 2 dimensions\n(x, z), first on a standing wave problem and then to evaluate the nonlinear\nforces acting on a fixed submerged cylinder.", "category": "physics_comp-ph" }, { "text": "Foam: A General purpose Monte Carlo Cellular Algorithm: A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in\nthe program {\\tt Foam} is described. The high efficiency of the MC, that is, a\nsmall maximum weight or variance of the MC weight, is achieved by means of\ndividing the integration domain into small cells. The cells can be\n$n$-dimensional simplices, hyperrectangles or a Cartesian product of them. The\ngrid of cells, ``foam'', is produced in the process of the binary split of the\ncells. The next cell to be divided and the position/direction of the division\nhyperplane is chosen by the algorithm which optimizes the ratio of the maximum\nweight to the average weight or (optionally) the total variance.
The algorithm\nis able to deal, in principle, with an arbitrary pattern of the singularities\nin the distribution.", "category": "physics_comp-ph" }, { "text": "Spin and charge distributions in Graphene/Nickel (111) substrate under\n Rashba spin-orbital coupling: To understand the coupling factor between Rashba spin-orbital interaction and\nferromagnetic proximity effect, we design a Monte Carlo algorithm to simulate\nthe spin and charge distributions for the room-temperature Rashba material,\nGraphene/Nickel(111) substrate, at finite temperature. We observe that the rate\nof exchange fluctuation is a key player to produce giant Rashba spin-orbit\nsplitting in graphene. More importantly, we monitor the Rashba spin-splitting\nphenomenon where the spin-polarized electrons may be escaped from two opposite\nedges upon heating. However, the escaped electrons show Gaussian-like\ndistribution in interior area that is important for spintronic engineers to\noptimize the efficiency of spin-state detection. In addition, we investigate if\nour Monte Carlo model can explain why room-temperature Rashba effect is\nobserved in Graphene/Nickel(111) substrate experimentally. All results are\npresented in physical units.", "category": "physics_comp-ph" }, { "text": "Effective Non-oscillatory Regularized L$_1$ Finite Elements for Particle\n Transport Simulations: In this work, we present a novel regularized L$_1$ (RL$_1$) finite element\nspatial discretization scheme for radiation transport problems. We review the\nrecently developed least-squares finite element method in nuclear applications.\nWe then derive an L$_1$ finite element by minimizing the L$_1$ norm of the\ntransport residual. To ensure the stability on incident boundary, we newly\ndevelop a consistent L$_1$ boundary condition (BC). The numerical tests\ndemonstrate such a method effectively prevents the oscillations which would\noccur to least-squares finite element when discontinuity exists such as void\nand absorber. 
Further, the RL$_1$ method is accurate in problems with\nscattering.", "category": "physics_comp-ph" }, { "text": "A Note on Symplectic Algorithms: We present the symplectic algorithm in the Lagrangian formalism for the\nHamiltonian systems by virtue of the noncommutative differential calculus with\nrespect to the discrete time and the Euler--Lagrange cohomological concepts. We\nalso show that the trapezoidal integrator is symplectic in certain sense.", "category": "physics_comp-ph" }, { "text": "Laminography as a tool for imaging large-size samples with high\n resolution: Despite the increased brilliance of the new generation synchrotron sources,\nthere is still a challenge with high-resolution scanning of very thick and\nabsorbing samples, such as the whole mouse brain stained with heavy elements,\nand, extending further, brains of primates. Samples are typically cut into\nsmaller parts, to ensure a sufficient X-ray transmission, and scanned\nseparately. Compared to the standard tomography setup where the sample would be\ncut into many pillars, the laminographic geometry operates with slab-shaped\nsections significantly reducing the number of sample parts to be prepared, the\ncutting damage and data stitching problems. In this work, we present a\nlaminography pipeline for imaging large samples (> 1 cm) at micrometer\nresolution. The implementation includes a low-cost instrument setup installed\nat the 2-BM micro-CT beamline of the Advanced Photon Source (APS).\nAdditionally, we present sample mounting, scanning techniques, data stitching\nprocedures, a fast reconstruction algorithm with low computational complexity,\nand accelerated reconstruction on multi-GPU systems for processing large-scale\ndatasets. 
The applicability of the whole laminography pipeline was demonstrated\nwith imaging 4 sequential slabs throughout the entire mouse brain sample\nstained with osmium, in total generating approximately 12TB of raw data for\nreconstruction.", "category": "physics_comp-ph" }, { "text": "Image Inversion and Uncertainty Quantification for Constitutive Laws of\n Pattern Formation: The forward problems of pattern formation have been greatly empowered by\nextensive theoretical studies and simulations, however, the inverse problem is\nless well understood. It remains unclear how accurately one can use images of\npattern formation to learn the functional forms of the nonlinear and nonlocal\nconstitutive relations in the governing equation. We use PDE-constrained\noptimization to infer the governing dynamics and constitutive relations and use\nBayesian inference and linearization to quantify their uncertainties in\ndifferent systems, operating conditions, and imaging conditions. We discuss the\nconditions to reduce the uncertainty of the inferred functions and the\ncorrelation between them, such as state-dependent free energy and reaction\nkinetics (or diffusivity). We present the inversion algorithm and illustrate\nits robustness and uncertainties under limited spatiotemporal resolution,\nunknown boundary conditions, blurry initial conditions, and other non-ideal\nsituations. Under certain situations, prior physical knowledge can be included\nto constrain the result. Phase-field, reaction-diffusion, and\nphase-field-crystal models are used as model systems. 
The approach developed\nhere can find applications in inferring unknown physical properties of complex\npattern-forming systems and in guiding their experimental design.", "category": "physics_comp-ph" }, { "text": "Abelian-Higgs Cosmic String Evolution with CUDA: Topological defects form at cosmological phase transitions by the Kibble\nmechanism, with cosmic strings and superstrings having the most interesting\nphenomenology. A rigorous analysis of their astrophysical consequences is\nlimited by the availability of accurate numerical simulations, and therefore by\nhardware resources and computation time. Improving the speed and efficiency of\nexisting codes is therefore essential. All current cosmic string simulations\nwere performed on Central Processing Units. In previous work we presented a\nGeneral Purpose Graphics Processing Unit implementation of the evolution of\ncosmological domain wall networks. Here we continue this paradigm shift and\ndiscuss an analogous implementation for local Abelian-Higgs strings networks.\nWe discuss the implementation algorithm (including the discretization used and\nhow to calculate network averaged quantities) and then showcase its performance\nand current bottlenecks. We validate the code by directly comparing our results\nfor the canonical scaling properties of the networks in the radiation and\nmatter eras with those in the literature, finding very good agreement. We\nfinally highlight possible directions for improving the scalability of the\ncode.", "category": "physics_comp-ph" }, { "text": "Using rectangular collocation with finite difference derivatives to\n solve electronic Schrodinger equation: We show that a rectangular collocation method, equivalent to evaluating all\nmatrix elements with a quadrature-like scheme and using more points than basis\nfunctions, is an effective approach for solving the electronic Schr\\\"odinger\nequation (ESE). 
We test the ideas by computing several solutions of the ESE for\nthe H atom and the H2+ cation and several solutions of a Kohn-Sham equation for\nCO and H2O. In all cases, we achieve millihartree accuracy. Two key advantages\nof the collocation method we use are: 1) collocation points need not have a\nparticular distribution or spacing and can be chosen to reduce the required\nnumber of points; 2) the better the basis, the less sensitive the results are\nto the choice of the point set. The ideas of this paper make it possible to use\nany basis functions and thus open the door to using basis functions that are\nnot Gaussians or plane waves. We use basis functions that are similar to Slater\ntype orbitals. They are rarely used with the variational method, but present no\nproblems when used with collocation.", "category": "physics_comp-ph" }, { "text": "First-principles calculations of the electronic and optical properties\n of penta-graphene monolayer: study of many-body effects: In the present work, first-principles calculations based on density\nfunctional theory (DFT), the GW approximation and the Bethe-Salpeter equation\n(BSE) are performed to study the electronic and optical properties of the\npenta-graphene (PG) monolayer. The results indicated that PG is a semiconductor\nwith an indirect band gap of approximately 2.32 eV at the DFT-GGA level. We\nfound that the utilization of the GW approximation based on many-body\nperturbation theory led to an increase in the band gap, resulting in a\nquasi-direct gap of 5.35 eV. Additionally, we employed the G0W0-RPA and\nG0W0-BSE approximations to calculate the optical spectra in the absence and in\nthe presence of electron-hole interaction, respectively. The results\ndemonstrated that the inclusion of electron-hole interaction caused a red-shift\nof the absorption spectrum towards lower energies compared to the spectrum\nobtained from the G0W0-RPA approximation.
With the electron-hole interaction, it is found that the\noptical absorption spectra are dominated by the first bound exciton with a\nsignificant binding energy of 3.07 eV. The study concluded that the PG\nmonolayer, with a wider band gap and enhanced excitonic effects, holds promise\nas a suitable candidate for the design and fabrication of optoelectronic\ncomponents.", "category": "physics_comp-ph" }, { "text": "Deep learning Markov and Koopman models with physical constraints: The long-timescale behavior of complex dynamical systems can be described by\nlinear Markov or Koopman models in a suitable latent space. Recent variational\napproaches allow the latent space representation and the linear dynamical model\nto be optimized via unsupervised machine learning methods. Incorporation of\nphysical constraints such as time-reversibility or stochasticity into the\ndynamical model has been established for linear, but not for arbitrarily\nnonlinear (deep learning) representations of the latent space. Here we develop\ntheory and methods for deep learning Markov and Koopman models that can bear\nsuch physical constraints. We prove that the model is a universal approximator\nfor reversible Markov processes and that it can be optimized with either\nmaximum likelihood or the variational approach of Markov processes (VAMP). We\ndemonstrate that the model performs equally well for equilibrium and\nsystematically better for biased data compared to existing approaches, thus\nproviding a tool to study the long-timescale processes of dynamical systems.
The volume integrated average (VIA) is updated\nvia a flux-form finite volume formulation, whereas the point-based derivative\nmoments are computed as local derivative Riemann problems by either direct\ninterpolation or approximate Riemann solvers.", "category": "physics_comp-ph" }, { "text": "Entropy and weak solutions in the LBGK model: In this paper, we derive entropy functions whose local equilibria are\nsuitable to recover the Euler-like equations in the framework of the Lattice\nBoltzmann method. Numerical examples are also given, which are consistent with\nthe above theoretical arguments.", "category": "physics_comp-ph" }, { "text": "Machine learning approaches for analyzing and enhancing molecular\n dynamics simulations: Molecular dynamics (MD) has become a powerful tool for studying biophysical\nsystems, due to increasing computational power and availability of software.\nAlthough MD has made many contributions to better understanding these complex\nbiophysical systems, there remain methodological difficulties to be surmounted.\nFirst, how to make the deluge of data generated in running even a microsecond\nlong MD simulation human comprehensible. Second, how to efficiently sample the\nunderlying free energy surface and kinetics. In this short perspective, we\nsummarize machine learning based ideas that are solving both of these\nlimitations, with a focus on their key theoretical underpinnings and remaining\nchallenges.", "category": "physics_comp-ph" }, { "text": "Hierarchical multiscale quantification of material uncertainty: The macroscopic behavior of many materials is complex and the end result of\nmechanisms that operate across a broad range of disparate scales. An imperfect\nknowledge of material behavior across scales is a source of epistemic\nuncertainty of the overall material behavior. However, assessing this\nuncertainty is difficult due to the complex nature of material response and the\nprohibitive computational cost of integral calculations. 
In this paper, we\nexploit the multiscale and hierarchical nature of material response to develop\nan approach to quantify the overall uncertainty of material response without\nthe need for integral calculations. Specifically, we bound the uncertainty at\neach scale and then combine the partial uncertainties in a way that provides a\nbound on the overall or integral uncertainty. The bound provides a conservative\nestimate on the uncertainty. Importantly, this approach does not require\nintegral calculations that are prohibitively expensive. We demonstrate the\nframework on the problem of ballistic impact of a polycrystalline magnesium\nplate. Magnesium and its alloys are of current interest as promising\nlight-weight structural and protective materials. Finally, we remark that the\napproach can also be used to study the sensitivity of the overall response to\nparticular mechanisms at lower scales in a materials-by-design approach.", "category": "physics_comp-ph" }, { "text": "The Thermal Discrete Dipole Approximation (T-DDA) for near-field\n radiative heat transfer simulations in three-dimensional arbitrary geometries: A novel numerical method called the Thermal Discrete Dipole Approximation\n(T-DDA) is proposed for modeling near-field radiative heat transfer in\nthree-dimensional arbitrary geometries. The T-DDA is conceptually similar to\nthe Discrete Dipole Approximation, except that the incident field originates\nfrom thermal oscillations of dipoles. The T-DDA is described in detail in the\npaper, and the method is tested against exact results of radiative conductance\nbetween two spheres separated by a sub-wavelength vacuum gap. For all cases\nconsidered, the results calculated from the T-DDA are in good agreement with\nthose from the analytical solution. When considering frequency-independent\ndielectric functions, it is observed that the number of sub-volumes required\nfor convergence increases as the sphere permittivity increases.
Additionally,\nsimulations performed for two silica spheres of 0.5 micrometer diameter show\nthat the resonant modes are predicted accurately via the T-DDA. For separation\ngaps of 0.5 micrometer and 0.2 micrometer, the relative differences between the\nT-DDA and the exact results are 0.35% and 6.4%, respectively, when 552\nsub-volumes are used to discretize a sphere. Finally, simulations are performed\nfor two cubes of silica separated by a sub-wavelength gap. The results revealed\nthat faster convergence is obtained when considering cubical objects rather\nthan curved geometries. This work suggests that the T-DDA is a robust numerical\napproach that can be employed for solving a wide variety of near-field thermal\nradiation problems in three-dimensional geometries.", "category": "physics_comp-ph" }, { "text": "Moment distributions of clusters and molecules in the adiabatic rotor\n model: We present a Fortran program to compute the distribution of dipole moments of\nfree particles for use in analyzing molecular beam experiments that measure\nmoments by deflection in an inhomogeneous field. The theory is the same for\nmagnetic and electric dipole moments, and is based on a thermal ensemble of\nclassical particles that are free to rotate and that have moment vectors\naligned along a principal axis of rotation. The theory has two parameters, the\nratio of the magnetic (or electric) dipole energy to the thermal energy, and\nthe ratio of moments of inertia of the rotor.", "category": "physics_comp-ph" }, { "text": "Second order front tracking algorithm for Stefan problem on a regular\n grid: A brief review of the Stefan problem of solidification from a mixture, and\nits main numerical solution methods, is given. Simulation of this problem in 2D\nor 3D is most practically done on a regular grid, where a sharp solid-liquid\ninterface moves relative to the grid.
For this problem, a new simulation method\nis developed that manifestly conserves mass, and that simulates the motion of\nthe interface to second order in the grid size. When applied to an isothermal\nsimulation of solidification from solution in 1D at 50% supersaturation for\nonly 5 grid points, the motion of the interface is accurate to 5.5%; for 10\npoints the result is accurate to 1.5%. The method should be applicable to 2D or\n3D with relative ease. This opens the door to large-scale simulations with\nmodest computer power.", "category": "physics_comp-ph" }, { "text": "A boundary-integral approach for the Poisson-Boltzmann equation with\n polarizable force fields: Implicit-solvent models are widely used to study the electrostatics in\ndissolved biomolecules, which are parameterized using force fields. Standard\nforce fields treat the charge distribution with point charges; however, other\nforce fields have emerged which offer a more realistic description by\nconsidering polarizability. In this work, we present the implementation of the\npolarizable and multipolar force field AMOEBA in the boundary integral\nPoisson-Boltzmann solver \\texttt{PyGBe}. Previous work from other researchers\ncoupled AMOEBA with the finite-difference solver APBS, and found it difficult\nto effectively transfer the multipolar charge description to the mesh. A\nboundary integral formulation treats the charge distribution analytically,\navoiding such limitations. We present verification and validation results of\nour software, compare it with the implementation in APBS, and assess the\nefficiency of AMOEBA and classical point-charge force fields in a\nPoisson-Boltzmann solver. We found that a boundary integral approach performs\nsimilarly to a volumetric method on the CPU; however, it presents an important\nspeedup when ported to the GPU.
Moreover, with a boundary element method, the\nmesh density required to correctly resolve the electrostatic potential is the same for\nstandard point-charge and multipolar force fields. Finally, we saw that\npolarizability plays an important role in capturing cooperative effects, for\nexample, in binding energy calculations.", "category": "physics_comp-ph" }, { "text": "Defect theory under steady illuminations and applications: Illumination has long been known to affect semiconductor defect properties\nduring either the growth or the operating process. Current theories of\nillumination effects on defects usually assume that the formation energies of\nneutral defects as well as the defect transition energy levels are unaffected,\nand use quasi-Fermi levels to describe the behavior of excess carriers,\nwith conclusions at variance. In this work, we first propose a method to\nsimulate steady illumination conditions, based on which we demonstrate that\nformation energies of neutral defects and defect transition energy levels are\ninsensitive to illumination. Then, we show that optical and thermal excitation\nof electrons can be seen as equivalent to each other in reaching a steady electron\ndistribution in a homogeneous semiconductor. Consequently, the electron\ndistribution can be characterized using just one effective temperature T' and\none universal Fermi level E_F' for a homogeneous semiconductor under continuous\nand steady illumination, which can be seen as a combination of a\nquasi-equilibrium electron system with T' and a lattice system with T. Using\nthe new concepts, we uncover the universal mechanisms of illumination effects\non charged defects by treating the band edge states explicitly on the same\nfooting as the defect states. We find that the formation energies of band edge\n'defect' states shift with increased T' of electrons, thus affecting the E_F',\nchanging defect ionization probabilities, and affecting concentrations of charged\ndefects.
We apply our theory to study the illumination effects on the doping\nbehaviors in GaN:Mg and CdTe:Sb, obtaining results in accordance with\nexperimental observations. We expect our theory to explain further\ndefect-related phenomena observed under steady illumination.", "category": "physics_comp-ph" }, { "text": "Physics-based r-adaptive algorithms for high-speed flows and plasma\n simulations: The computational modeling of high-speed flows (e.g. hypersonic) and space\nplasmas is characterized by a plethora of complex physical phenomena, in\nparticular involving strong oblique shocks, bow shocks and/or shock-wave\nboundary-layer interactions. The characterization of those flows requires\naccurate, robust and advanced numerical techniques. To this end, adaptive mesh\nalgorithms provide an automatic way to improve the quality of the numerical\nresults, by increasing the mesh density where required in order to resolve the\nmost critical physical features. In this work, we propose an r-adaptive\nalgorithm that consists of repositioning mesh nodes according to the solution of a\nphysics-driven pseudo-elastic system of equations. The developed\nmesh refinement techniques are based upon spring networks deriving from linear,\nsemi-torsional and ortho-semi-torsional analogies, but driven by a combination\nof local physical and geometrical properties depending on a user-defined\nmonitoring flow variable. Furthermore, a mesh quality indicator is developed\nwithin this work in order to grade and investigate the quality of an adapted\nmesh. Finally, a refinement stop indicator is proposed and demonstrated in\norder to further automatize the resulting adaptive simulation.
All new\nphysics-based mesh motion algorithms are illustrated through multiple examples\nthat emphasize the applicability to different physical models and problems\ntogether with the improved quality of the results.", "category": "physics_comp-ph" }, { "text": "Solution of the Monoenergetic Neutron Transport Equation in a Half Space: The analytical solution of the neutron transport equation has fascinated\nmathematicians and physicists alike since the Milne half-space problem was\nintroduced in 1921 [1]. Numerous numerical solutions exist, but understandably,\nthere are only a few analytical solutions, with the prominent one being the\nsingular eigenfunction expansion (SEE) introduced by Case [2] in 1960. For the\nhalf-space, the method, though yielding an elegant analytical form resulting\nfrom half-range completeness, requires numerical evaluation of complicated\nintegrals. In addition, one finds closed form analytical expressions only for\nthe infinite medium and half-space cases. One can find the flux in a slab only\niteratively. That is to say, in general one must expend a considerable\nnumerical effort to get highly precise benchmarks from SEE. As a result,\ninvestigators have devised alternative methods based on the SEE, such as the\nCN [3], FN [4] and Green's Function Method (GFM) [5]. These\nmethods take the SEE at their core and construct a numerical method around the\nanalytical form. The FN method in particular has been most successful in\ngenerating highly precise benchmarks.
Until now, no method yielding a precise numerical\nsolution has been based solely on a fundamental discretization.\nHere, we show that for the albedo problem with a source on the vacuum boundary of a\nhomogeneous medium, a precise numerical solution is possible via Lagrange\ninterpolation over a discrete set of directions.", "category": "physics_comp-ph" }, { "text": "Displaced path integral formulation for the momentum distribution of\n quantum particles: The proton momentum distribution, accessible by deep inelastic neutron\nscattering, is a very sensitive probe of the potential of mean force\nexperienced by the protons in hydrogen-bonded systems. In this work we\nintroduce a novel estimator for the end-to-end distribution of the Feynman\npaths, i.e. the Fourier transform of the momentum distribution. In this\nformulation, free particle and environmental contributions factorize. Moreover,\nthe environmental contribution has a natural analogy to a free energy surface\nin statistical mechanics, facilitating the interpretation of experiments. The\nnew formulation is not only conceptually but also computationally advantageous.\nWe illustrate the method with applications to an empirical water model,\nab-initio ice, and one dimensional model systems.", "category": "physics_comp-ph" }, { "text": "Making extreme computations possible with virtual machines: State-of-the-art algorithms generate scattering amplitudes for high-energy\nphysics at leading order for high-multiplicity processes as compiled code (in\nFortran, C or C++). For complicated processes the size of these libraries can\nbecome tremendous (many GiB). We show that amplitudes can be translated to\nbyte-code instructions, which even reduce the size by one order of magnitude.\nThe byte-code is interpreted by a Virtual Machine with runtimes comparable to\ncompiled code and a better scaling with additional legs.
We study the\nproperties of this algorithm, as an extension of the Optimizing Matrix Element\nGenerator (O'Mega). The bytecode matrix elements are available as an alternative\ninput for the event generator WHIZARD. The bytecode interpreter can be\nimplemented very compactly, which will help with a future implementation on\nmassively parallel GPUs.", "category": "physics_comp-ph" }, { "text": "An Evaluation of Polarisability Tensors of Arbitrarily Shaped Highly\n Conducting Bodies: A full-wave numerical scheme for polarisability tensor evaluation is\npresented. The method accepts highly conducting bodies of arbitrary shape and\nexplicitly accounts for the radiation as well as ohmic losses. The method is\nverified on canonical bodies with known polarisability tensors, such as a\nsphere and a cube, as well as on realistic scatterers. The theoretical\ndevelopments are followed by a freely available code whose sole user input is\nthe triangular mesh covering the surface of the body under consideration.", "category": "physics_comp-ph" }, { "text": "Asymptotic Approximant for the Falkner-Skan Boundary-Layer equation: We demonstrate that the asymptotic approximant applied to the Blasius\nboundary layer flow over a flat plate (Barlow et al., 2017 Q. J. Mech. Appl.\nMath., 70(1): 21-48) yields accurate analytic closed-form solutions to the\nFalkner-Skan boundary layer equation for flow over a wedge having angle\n$\\beta\\pi/2$ to the horizontal. A wide range of wedge angles satisfying\n$\\beta\\in[-0.198837735, 1]$ are considered, and the previously established\nnon-unique solutions for $\\beta<0$ having positive and negative shear rates\nalong the wedge are accurately represented. The approximant is used to\ndetermine the singularities in the complex plane that prescribe the radius of\nconvergence of the power series solution to the Falkner-Skan equation.
An\nattractive feature of the approximant is that it may be constructed quickly by\nrecursion compared with traditional Pad\'e approximants that require a matrix\ninversion. The accuracy of the approximant is verified by numerical solutions,\nand benchmark numerical values are obtained that characterize the asymptotic\nbehavior of the Falkner-Skan solution at large distances from the wedge.", "category": "physics_comp-ph" }, { "text": "A simple field function for solving complex and dynamic fluid-solid\n system on Cartesian grid: In this paper, a simple field function is presented for facilitating the\nsolution of complex and dynamic fluid-solid systems on Cartesian grids with\ninterface-resolved fluid-fluid, fluid-solid, and solid-solid interactions. For\na Cartesian-grid-discretized computational domain segmented by a set of solid\nbodies, this field function explicitly tracks each subdomain with multiple\nresolved interfacial node layers. As a result, the presented field function\nenables low-memory-cost multidomain node mapping, efficient node remapping,\nfast collision detection, and expedient surface force integration.\nImplementation algorithms for the field function and its described\nfunctionalities are also presented. Equipped with a deterministic multibody\ncollision model, numerical experiments involving fluid-solid systems with flow\nconditions ranging from subsonic to supersonic states are conducted to validate\nand illustrate the applicability of the proposed field function.", "category": "physics_comp-ph" }, { "text": "Understanding the Sampling Algorithm for Watt Spectrum: We provide details for understanding the Watt spectrum sampling method. The\nalgorithm is given as \"R12\" in the \"3rd Monte Carlo Sampler\" without detailed\nderivation.
We rederive the algorithm by optimizing the sampling efficiency\nof the rejection method.", "category": "physics_comp-ph" }, { "text": "Controlling bubble coalescence in metallic foams: A simple phase\n field-based approach: The phase-field method is used as a basis to develop a strictly mass\nconserving, yet simple, model for simulation of two-phase flow. The model is\naimed at the study of structure evolution in metallic foams. In\nthis regard, the critical issue is to control the rate of bubble coalescence\ncompared to concurrent processes such as their rearrangement due to fluid\nmotion. In the present model, this is achieved by tuning the interface energy\nas a free parameter. The model is validated by a number of benchmark tests.\nFirst, stability of a two dimensional bubble is investigated by the\nYoung-Laplace law for different values of the interface energy. Then, the\ncoalescence of two bubbles is simulated until the system reaches equilibrium\nwith a circular shape. To address the major capability of the present model for\nthe formation of foam structure, the bubble coalescence is simulated for\nvarious values of interface energy in order to slow down the merging process.\nThese simulations are repeated in the presence of a rotational flow to\nhighlight the fact that the model makes it possible to suppress the coalescence\nprocess compared to the motion of bubbles relative to each other.", "category": "physics_comp-ph" }, { "text": "Emergence of a single cluster in Vicsek's model at very low noise: The classic Vicsek model [Phys.Rev.Lett. {\bf75},1226(1995)] is studied in\nthe regime of very low noise intensities, which is shown to be characterized by\na macroscopic cluster (MC) that contains a macroscopic fraction of the system's\nparticles. It is shown that the well-known power-law behavior of the cluster size\ndistribution loses its cutoff, becoming bimodal at very low noise intensities: A\npeak develops at larger sizes, marking the emergence of the MC.
The average\ncluster number m* is introduced as a parameter that properly describes this\nchange, i.e. a line in the noise-speed phase portrait can be identified that\nseparates both regimes. The average largest cluster parameter also develops\nlarge fluctuations at a nonzero critical noise. Finite-size scaling analysis\nis performed to show that a phase transition to a macroscopic cluster is taking\nplace. Consistency of the results with the literature is also checked and\ncommented upon.", "category": "physics_comp-ph" }, { "text": "Mie Scattering of Phonons by Point Defects in IV-VI Semiconductors PbTe\n and GeTe: Point defects in solids, such as vacancies and dopants, often cause large thermal\nresistance. Because the lattice site occupied by a point defect has a much\nsmaller size than phonon wavelengths, the scattering of thermal acoustic\nphonons by point defects in solids has been widely assumed to be of the Rayleigh\nscattering type. In contrast to this conventional perception, using an ab\ninitio Green's function approach, we show that the scattering by point defects\nin PbTe and GeTe exhibits Mie scattering characterized by a weaker frequency\ndependence of the scattering rates and highly asymmetric scattering phase\nfunctions. These unusual behaviors occur because the strain field induced by a\npoint defect can extend for a long distance, much larger than the lattice\nspacing. Because of the asymmetric scattering phase functions, the widely used\nrelaxation time approximation fails with an error of ~20% at 300K in predicting\nlattice thermal conductivity when the vacancy fraction is 1%.
Our results show\nthat the phonon scattering by point defects in IV-VI semiconductors cannot be\ndescribed by the simple kinetic theory combined with Rayleigh scattering.", "category": "physics_comp-ph" }, { "text": "A mesh adaptivity scheme on the Landau-de Gennes functional minimization\n case in 3D, and its driving efficiency: This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral\nmeshes by a posteriori error estimates based on metrics, studied on the case of\na nonlinear finite element minimization scheme for the Landau-de Gennes free\nenergy functional of nematic liquid crystals. Newton's iteration for tensor\nfields is employed with the steepest descent method possibly stepping in.\n Aspects relating to the driving of mesh adaptivity within the nonlinear scheme\nare considered. The algorithmic performance is found to depend on at least two\nfactors: when to trigger each single mesh adaptation, and the precision of the\ncorrelated remeshing. Each factor is represented by a parameter, with its\nvalues possibly varying for every new mesh adaptation. We empirically show that\nthe time of the overall algorithm convergence can vary considerably when\ndifferent sequences of parameters are used, thus posing a question about\noptimality.\n The extensive testing and debugging done within this work on the simulation\nof systems of nematic colloids substantially contributed to the upgrade of an\nopen source finite element-oriented programming language in its 3D meshing\ncapabilities, as well as to an outer 3D remeshing module.", "category": "physics_comp-ph" }, { "text": "ColDICE: a parallel Vlasov-Poisson solver using moving adaptive\n simplicial tessellation: Numerically resolving the Vlasov-Poisson equations for initially cold systems can\nbe reduced to following the evolution of a three-dimensional sheet evolving in\nsix-dimensional phase-space.
We describe a public parallel numerical algorithm\nthat represents the phase-space sheet with a conforming,\nself-adaptive simplicial tessellation whose vertices follow the\nLagrangian equations of motion. The algorithm is implemented both in six- and\nfour-dimensional phase-space. Refinement of the tessellation mesh is performed\nusing the bisection method and a local representation of the phase-space sheet\nat second order relying on additional tracers created when needed at runtime.\nIn order to best preserve the Hamiltonian nature of the system,\nrefinement is anisotropic and constrained by measurements of local Poincar\'e\ninvariants. Resolution of the Poisson equation is performed using the fast Fourier\nmethod on a regular rectangular grid, similarly to particle-in-cell codes. To\ncompute the density projected onto this grid, the intersection of the\ntessellation and the grid is calculated using the method of Franklin and\nKankanhalli (1993) generalised to linear order. As preliminary tests of the\ncode, we study in four-dimensional phase-space the evolution of an initially\nsmall patch in a chaotic potential and the cosmological collapse of a\nfluctuation composed of two sinusoidal waves. We also perform a \"warm\" dark\nmatter simulation in six-dimensional phase-space that we use to check the\nparallel scaling of the code.", "category": "physics_comp-ph" }, { "text": "Hybrid-Delta Tracking on a Structured Mesh in MCATK: Monte Carlo Application Toolkit (MCATK) commonly uses surface tracking on a\nstructured mesh to compute scalar fluxes. In this mode, higher fidelity\nrequires more mesh cells and isotopes and thus more computational overhead --\nsince every time a particle changes cells, new cross-sections must be found for\nall materials in a given cell -- even if no collision occurs in that cell. We\nimplement a hybrid version of Woodcock (delta) tracking on this imposed mesh to\nreduce the number of cross-section lookups.
This algorithm computes an\nenergy-dependent microscopic majorant cross-section for the\nproblem. Each time a particle enters a new cell, rather than computing a true\nmacroscopic cross-section over all isotopes in the cell, the microscopic\nmajorant cross-section is simply multiplied by the total number density of the\ncell to obtain a macroscopic majorant cross-section for the cell. Delta\ntracking is then performed within that single cell. This increases performance\nwith minimal code changes, speeding up the solve time by a factor of 1.5 --\n1.75 for k-eigenvalue simulations and 1.2 -- 1.6 for fixed source simulations\nin a series of materially complex criticality benchmarks.", "category": "physics_comp-ph" }, { "text": "A call to arms: making the case for more reusable libraries: The traditional foundation of science lies on the cornerstones of theory and\nexperiment. Theory is used to explain experiment, which in turn guides the\ndevelopment of theory. Since the advent of computers and the development of\ncomputational algorithms, computation has risen as the third cornerstone of\nscience, joining theory and experiment on an equal footing. Computation has\nbecome an essential part of modern science, amending experiment by enabling\naccurate comparison of complicated theories to sophisticated experiments, as\nwell as guiding by triage both the design and targets of experiments and the\ndevelopment of novel theories and computational methods.\n Like experiment, computation relies on continued investment in\ninfrastructure: it requires both hardware (the physical computer on which the\ncalculation is run) as well as software (the source code of the programs that\nperform the wanted simulations). In this Perspective, I discuss present-day\nchallenges on the software side in computational chemistry, which arise from\nthe fast-paced development of algorithms, programming models, as well as\nhardware.
I argue that many of these challenges could be solved with reusable\nopen source libraries, which are a public good, enhance the reproducibility of\nscience, and accelerate the development and availability of state-of-the-art\nmethods and improved software.", "category": "physics_comp-ph" }, { "text": "Quantum Monte Carlo study of the first-row atoms and ions: Quantum Monte Carlo calculations of the first-row atoms Li-Ne and their\nsingly-positively-charged ions are reported. Multi-determinant-Jastrow-backflow\ntrial wave functions are used which recover more than 98% of the correlation\nenergy at the Variational Monte Carlo (VMC) level and more than 99% of the\ncorrelation energy at the Diffusion Monte Carlo (DMC) level for both the atoms\nand ions. We obtain the first ionization potentials to chemical accuracy. We\nalso report scalar relativistic corrections to the energies, mass-polarization\nterms, and one- and two-electron expectation values.", "category": "physics_comp-ph" }, { "text": "Calculating vibrational spectra with sum of product basis functions\n without storing full-dimensional vectors or matrices: We propose an iterative method for computing vibrational spectra that\nsignificantly reduces the memory cost of calculations. It uses a direct product\nprimitive basis, but does not require storing vectors with as many components\nas there are product basis functions. Wavefunctions are represented in a basis\neach of whose functions is a sum of products (SOP) and the factorizable\nstructure of the Hamiltonian is exploited. If the factors of the SOP basis\nfunctions are properly chosen, wavefunctions are linear combinations of a small\nnumber of SOP basis functions. The SOP basis functions are generated using a\nshifted block power method. The factors are refined with a rank reduction\nalgorithm to cap the number of terms in a SOP basis function. The ideas are\ntested on a 20-D model Hamiltonian and a realistic CH$_3$CN (12 dimensional)\npotential. 
For the 20-D problem, to use a standard direct product iterative\napproach, one would need to store vectors with about $10^{20}$ components and\nwould hence require about $8 \times 10^{11}$ GB. With the approach of this\npaper only 1 GB of memory is necessary. Results for CH$_3$CN agree well with\nthose of a previous calculation on the same potential.", "category": "physics_comp-ph" }, { "text": "What drives adsorption of ions on surface of nanodiamonds in aqueous\n solutions?: It is not yet clear what drives the adsorption of ions on detonation\nnanodiamonds (DNDs), which plays a critical role in the loading (unloading) of\nchemotherapeutic drugs on (from) the surface of DNDs in their targeted therapy\napplications. Furthermore, the effects of adsorbed ions on the hydration layers of\nwater around DNDs with different surface chemistries have not been studied yet.\nThrough a series of Molecular Dynamics simulations, we found that the law\nof matching water affinity generally explains well the adsorption patterns of\nions onto the surface functional groups of DNDs. Depending on whether the water\naffinity of the ion matches that of the surface functional group or not,\nthe former predominantly forms either a Contact Ion-Pair (CIP) or a Solvent-shared\nIon-Pair (SIP) with the latter. In this regard, Na$^{+}$ and Mg$^{2+}$ have the\nhighest tendencies to form, respectively, CIP and SIP associations with\n${-}$COO$^{-}$ functional groups. In the extreme case of 84 ${-}$COO$^{-}$\ngroups on DND${-}$COOH, however, we observed few Mg$^{2+}$${-}$COO$^{-}$ CIP\nassociations, for which we have proposed a hypothesis based on entropy\ngains. Furthermore, Mg$^{2+}$ and to a lesser extent Ca$^{2+}$, in cooperation\nwith ${-}$COO$^{-}$ functional groups on the surface of charged DND${-}$COOH,\nlead to relatively high residence times of water in the first hydration layer\nof the DND.
This study also provides firsthand molecular-level insight into the\npreferential orientation of water in the vicinity of positively charged\nDND${-}$H, on which prior experimental studies have not yet reached a\nconsensus.", "category": "physics_comp-ph" }, { "text": "MAESTROeX: A Massively Parallel Low Mach Number Astrophysical Solver: We present MAESTROeX, a massively parallel solver for low Mach number\nastrophysical flows. The underlying low Mach number equation set allows for\nefficient, long-time integration of highly subsonic flows compared to\ncompressible approaches. MAESTROeX is suitable for modeling full spherical\nstars as well as planar simulations of dynamics within localized\nregions of a star, and can robustly handle several orders of magnitude of\ndensity and pressure stratification. Previously, we have described the\ndevelopment of the predecessor of MAESTROeX, called MAESTRO, in a series of\npapers. Here, we present a new, greatly simplified temporal integration scheme\nthat retains the same order of accuracy as our previous approaches. We also\nexplore the use of alternative spatial mappings of the one-dimensional base\nstate onto the full Cartesian grid. The code leverages the new AMReX software\nframework for block-structured adaptive mesh refinement (AMR) applications,\nallowing for scalability to large fractions of leadership-class machines. Using\nour previous studies on the convective phase of single-degenerate progenitor\nmodels of Type Ia supernovae as a guide, we characterize the performance of the\ncode and validate the new algorithmic features. Like MAESTRO, MAESTROeX is\nfully open source.", "category": "physics_comp-ph" }, { "text": "Detecting Symmetries with Neural Networks: Identifying symmetries in data sets is generally difficult, but knowledge\nabout them is crucial for efficient data handling. Here we present a method by\nwhich neural networks can be used to identify symmetries.
We make extensive use of\nthe structure in the embedding layer of the neural network, which allows us to\nidentify whether a symmetry is present and to identify orbits of the symmetry\nin the input. To determine which continuous or discrete symmetry group is\npresent, we analyse the invariant orbits in the input. We present examples based\non the rotation groups $SO(n)$ and the unitary group $SU(2)$. Further, we find that\nthis method is useful for the classification of complete intersection\nCalabi-Yau manifolds, where it is crucial to identify discrete symmetries on the\ninput space. For this example, we present a novel data representation in terms\nof graphs.", "category": "physics_comp-ph" }, { "text": "Extracting ice phases from liquid water: why a machine-learning water\n model generalizes so well: We investigate the structural similarities between liquid water and 53 ices,\nincluding 20 known crystalline phases. We base this similarity comparison on the\nlocal environments that consist of atoms within a certain cutoff radius of a\ncentral atom. We reveal that liquid water explores the local environments of the\ndiverse ice phases, by directly comparing the environments in these phases\nusing general atomic descriptors, and also by demonstrating that a\nmachine-learning potential trained on liquid water alone can predict the\ndensities, the lattice energies, and vibrational properties of the ices. The\nfinding that the local environments characterising the different ice phases are\nfound in water sheds light on water phase behaviors, and rationalizes the\ntransferability of water models between different phases.", "category": "physics_comp-ph" }, { "text": "Imaginary time density functional calculation of ground states for\n second-row atoms using CWDVR approach: We have developed the Coulomb wave function discrete variable representation\n(CWDVR) method to solve the imaginary-time-dependent Kohn-Sham equation for\nmany-electron second-row atoms.
The imaginary-time-dependent Kohn-Sham equation is numerically solved using the CWDVR method. We show\nthat the calculated results for the second-row atoms Li, Be, B, C, N, O and F\nare in good agreement with the best available values; the calculations were performed using the Mathematica\n7.0 program.", "category": "physics_comp-ph" }, { "text": "AdaptiveBandit: A multi-armed bandit framework for adaptive sampling in\n molecular simulations: Sampling from the equilibrium distribution has always been a major problem in\nmolecular simulations due to the very high dimensionality of conformational\nspace. Over several decades, many approaches have been used to overcome the\nproblem. In particular, we focus on unbiased simulation methods such as\nparallel and adaptive sampling. Here, we recast adaptive sampling schemes on\nthe basis of multi-armed bandits and develop a novel adaptive sampling\nalgorithm under this framework, \UCB. We test it on multiple simplified\npotentials and in a protein folding scenario. We find that this framework\nperforms similarly to or better than previous methods on every type of test\npotential. Furthermore, it provides a novel framework to develop new\nsampling algorithms with better asymptotic characteristics.", "category": "physics_comp-ph" }, { "text": "Ray-Tracing studies in a perturbed atmosphere: I- The initial value\n problem: We report the development of a new ray-tracing simulation tool having the\npotential of the full characterization of a radio link through the accurate\nstudy of the propagation path of the signal from the transmitting to the\nreceiving antennas across a perturbed atmosphere. The ray-tracing equations are\nsolved, with controlled accuracy, in three dimensions (3D) and the propagation\ncharacteristics are obtained using various refractive index models. The\nlaunching of the rays, the atmospheric medium and its disturbances are\ncharacterized in 3D.
The novelty in the approach stems from the use of special\nnumerical techniques dealing with so-called stiff differential equations,\nwithout which no solution of the ray-tracing equations is possible. Starting\nwith a given launching angle, the solution consists of the ray trajectory, the\npropagation time information at each point of the path, the beam spreading, the\ntransmitted (resp. received) power taking into account the radiation pattern and\norientation of the antennas, and finally the polarization state of the beam.\nPreviously known results are presented for comparison, together with new\nresults and a demonstration of some of the capabilities of the software.", "category": "physics_comp-ph" }, { "text": "Accurate multiple time step in biased molecular simulations: Many recently introduced enhanced sampling techniques are based on biasing\ncoarse descriptors (collective variables) of a molecular system on the fly.\nSometimes the calculation of such collective variables is expensive and becomes\na bottleneck in molecular dynamics simulations. An algorithm to treat smooth\nbiasing forces within a multiple time step framework is here discussed. The\nimplementation is simple and allows a speed-up when expensive collective\nvariables are employed. The gain can be substantial when using massively\nparallel or GPU-based molecular dynamics software.
Moreover, a theoretical\nframework to assess the sampling accuracy is introduced, which can be used to\nguide the choice of the integration time step in both single and multiple time\nstep biased simulations.", "category": "physics_comp-ph" }, { "text": "Sub-Picosecond Carrier Dynamics Explored using Automated High-Throughput\n Studies of Doping Inhomogeneity within a Bayesian Framework: Bottom-up production of semiconductor nanomaterials is often accompanied by\ninhomogeneity resulting in a spread in electronic properties which may be\ninfluenced by the nanoparticle geometry, crystal quality, stoichiometry or\ndoping. Using photoluminescence spectroscopy of a population of more than\n20,000 individual Zn-doped GaAs nanowires, we reveal inhomogeneity in, and\ncorrelation between, doping and nanowire diameter by use of a Bayesian\nstatistical approach. Recombination of hot carriers is shown to be responsible\nfor the photoluminescence lineshape; by exploiting lifetime variation across\nthe population, we reveal hot-carrier dynamics at the sub-picosecond timescale\nshowing interband electronic dynamics. High-throughput spectroscopy together\nwith a Bayesian approach are shown to provide unique insight into an\ninhomogeneous nanomaterial population, and can reveal electronic dynamics\notherwise requiring complex pump-probe experiments in highly non-equilibrium\nconditions.", "category": "physics_comp-ph" }, { "text": "Maximus: a Hybrid Particle-in-Cell Code for Microscopic Modeling of\n Collisionless Plasmas: A second-order accurate divergence-conserving hybrid particle-in-cell code\nMaximus has been developed for microscopic modeling of collisionless plasmas.\nThe main features of the code include a constrained transport algorithm for\nexact conservation of magnetic field divergence, a Boris-type particle pusher,\na weighted particle momentum deposit on the cells of the 3D spatial grid, an\nability to model multispecies plasmas and an adaptive time step.
The code is\nefficiently parallelized for running on supercomputers by means of message\npassing interface (MPI) technology; an analysis of parallelization efficiency\nand overall resource intensity is presented. A Maximus simulation of the\nshocked flow in the solar wind is shown to agree well with the observations of\nthe Ion Release Module (IRM) aboard the Active Magnetospheric Particle Tracer\nExplorers interplanetary mission.", "category": "physics_comp-ph" }, { "text": "Boosting Monte Carlo simulations of spin glasses using autoregressive\n neural networks: Autoregressive neural networks are emerging as a powerful computational\ntool to solve relevant problems in classical and quantum mechanics. One of\ntheir appealing functionalities is that, after they have learned a probability\ndistribution from a dataset, they allow exact and efficient sampling of typical\nsystem configurations. Here we employ a neural autoregressive distribution\nestimator (NADE) to boost Markov chain Monte Carlo (MCMC) simulations of a\nparadigmatic classical model of spin-glass theory, namely the two-dimensional\nEdwards-Anderson Hamiltonian. We show that a NADE can be trained to accurately\nmimic the Boltzmann distribution using unsupervised learning from system\nconfigurations generated using standard MCMC algorithms. The trained NADE is\nthen employed as a smart proposal distribution for the Metropolis-Hastings\nalgorithm. This allows us to perform efficient MCMC simulations, which provide\nunbiased results even if the expectation value corresponding to the probability\ndistribution learned by the NADE is not exact. Notably, we implement a\nsequential tempering procedure, whereby a NADE trained at a higher temperature\nis iteratively employed as proposal distribution in an MCMC simulation run at a\nslightly lower temperature.
This allows one to efficiently simulate the\nspin-glass model even in the low-temperature regime, avoiding the divergent\ncorrelation times that plague MCMC simulations driven by local-update\nalgorithms. Furthermore, we show that the NADE-driven simulations quickly\nsample ground-state configurations, paving the way to their future utilization\nto tackle binary optimization problems.", "category": "physics_comp-ph" }, { "text": "Charge transfer excitations with range separated functionals using\n improved virtual orbitals: We present an implementation of range separated functionals utilizing the\nSlater-function on grids in real space in the projector augmented waves method.\nThe screened Poisson equation is solved to evaluate the necessary screened\nexchange integrals on Cartesian grids. The implementation is verified against\nexisting literature and applied to the description of charge transfer\nexcitations. We find very slow convergence for calculations within linear\nresponse time-dependent density functional theory and unoccupied orbitals of\nthe canonical Fock operator. Convergence can be significantly improved by using\nHuzinaga's virtual orbitals instead. This combination furthermore enables an\naccurate determination of long-range charge transfer excitations by means of\nground-state calculations.", "category": "physics_comp-ph" }, { "text": "Online Change Point Detection in Molecular Dynamics With Optical Random\n Features: Proteins are made of constantly fluctuating atoms, but can occasionally\nundergo large-scale changes. Such transitions are of biological interest,\nlinking the structure of a protein to its function within a cell. Atomic-level\nsimulations, such as Molecular Dynamics (MD), are used to study these events.\nHowever, molecular dynamics simulations produce time series with multiple\nobservables, while changes often only affect a few of them.
Therefore,\ndetecting conformational changes has proven to be challenging for most\nchange-point detection algorithms. In this work, we focus on the identification\nof such events given many noisy observables. In particular, we show that the\nNo-prior-Knowledge Exponential Weighted Moving Average (NEWMA) algorithm can be\nused along with optical hardware to successfully identify these changes in\nreal time. Our method does not need to distinguish between the background of a\nprotein and the protein itself. For larger simulations, it is faster than using\ntraditional silicon hardware and has a lower memory footprint. This technique\nmay enhance the sampling of the conformational space of molecules. It may also\nbe used to detect change-points in other sequential data with a large number of\nfeatures.", "category": "physics_comp-ph" }, { "text": "Band structure of Si/Ge core-shell nanowires along [110] direction\n modulated by external uniaxial strain: Strain-modulated electronic properties of Si/Ge core-shell nanowires along\nthe [110] direction are reported based on first principles density-functional\ntheory calculations. Particularly, the energy dispersion relationship of the\nconduction/valence band was explored in detail. At the {\Gamma} point, the\nenergy levels of both bands are significantly altered by applied uniaxial\nstrain, which results in an evident change of the band gap. In contrast, for the K\nvectors far away from {\Gamma}, the variation of the conduction/valence band\nwith strain is much reduced. In addition, with a sufficient tensile strain\n(~1%), the valence band edge (VBE) shifts away from {\Gamma}, which indicates\nthat the band gap of the Si/Ge core-shell nanowires experiences a transition\nfrom direct to indirect. Our studies further showed that effective masses of\ncharge carriers can also be tuned by the external uniaxial strain.
The\neffective mass of the hole increases dramatically with a tensile strain, while\nstrain has only a minimal effect on the effective mass of the electron.\nFinally, the relation between strain and the conduction/valence band edge is\ndiscussed thoroughly in terms of site-projected wave-function characters.", "category": "physics_comp-ph" }, { "text": "Mechanism of O$_2$ influence on the decomposition process of the\n eco-friendly gas insulating medium C$_4$F$_7$N/CO$_2$: The C$_4$F$_7$N/CO$_2$/O$_2$ gas mixture is the most promising eco-friendly\ngas insulation medium available. However, there are few studies on the\nmechanism of the influence of the buffer gas O$_2$ ratio and its role in the\ndecomposition characteristics of C$_4$F$_7$N/CO$_2$. In this paper, based on the ReaxFF\nreactive molecular dynamics method and density functional theory, a simulation\nof the thermal decomposition process of the C$_4$F$_7$N/CO$_2$ mixture under\ndifferent O$_2$ ratios was carried out at temperatures in the range 2000-3000 K. A\nconstructed model of the C$_4$F$_7$N/CO$_2$/O$_2$ mixture reaction system was used that\nincluded the possible reaction paths, product distribution characteristics and\ntheir generation rates. The calculation results show that the thermal\ndecomposition of C$_4$F$_7$N/CO$_2$/O$_2$ mainly generates species such as\nCF$_3$, CF$_2$, CF, F, C$_2$F$_5$, C$_2$F$_4$, C$_2$F$_2$, C$_3$F$_7$,\nC$_2$F$_2$N, C$_3$F$_4$N, CFN, CN, CO, O, and C. Among these, CF$_2$ and CN\nare the most abundant. The first decomposition time of\nC$_4$F$_7$N is advanced by the addition of O$_2$, while the amount of\nC$_4$F$_7$N decomposed and the generation of major decomposed particles\ndecreases. The addition of 0%-4% of O$_2$ decreases the reaction rate of the\nmain decomposition reaction in the reaction system.
Quantum chemical\ncalculations show that dissociation following the combination\nof C$_4$F$_7$N with an O atom is more likely to occur than the direct\ndissociation of C$_4$F$_7$N molecules. The conclusions of this study\nprovide a theoretical basis for the optimization of the application ratio of\nC$_4$F$_7$N/CO$_2$/O$_2$ and the diagnosis of equipment during operation and\nmaintenance.", "category": "physics_comp-ph" }, { "text": "Large scale ab-initio simulations of dislocations: We present a novel methodology to compute relaxed dislocation core\nconfigurations, and their energies in crystalline metallic materials using\nlarge-scale \emph{ab-initio} simulations. The approach is based on MacroDFT, a\ncoarse-grained density functional theory method that accurately computes the\nelectronic structure but with sub-linear scaling resulting in a tremendous\nreduction in cost. Due to its implementation in \emph{real-space}, MacroDFT has\nthe ability to harness petascale resources to study materials and alloys\nthrough accurate \emph{ab-initio} calculations. Thus, the proposed methodology\ncan be used to investigate dislocation cores and other defects where long-range\nelastic fields play an important role, such as in dislocation cores, grain\nboundaries and near precipitates in crystalline materials. We demonstrate the\nmethod by computing the relaxed dislocation cores in prismatic dislocation\nloops and dislocation segments in magnesium (Mg). We also study the interaction\nenergy with a line of Aluminum (Al) solutes. Our simulations elucidate the\nessential coupling between the quantum mechanical aspects of the dislocation\ncore and the long range elastic fields that they generate. In particular, our\nquantum mechanical simulations are able to describe the logarithmic divergence\nof the energy in the far field as is known from classical elastic theory.
In\norder to reach such scaling, the number of atoms in the simulation cell has to\nbe exceedingly large, and this cannot be achieved with state-of-the-art density\nfunctional theory implementations.", "category": "physics_comp-ph" }, { "text": "NeuralNEB -- Neural Networks can find Reaction Paths Fast: Quantum mechanical methods like Density Functional Theory (DFT) are used with\ngreat success alongside efficient search algorithms for studying kinetics of\nreactive systems. However, DFT is prohibitively expensive for large scale\nexploration. Machine Learning (ML) models have turned out to be excellent\nemulators of small molecule DFT calculations and could possibly replace DFT in\nsuch tasks. For kinetics, success relies primarily on the model's capability to\naccurately predict the Potential Energy Surface (PES) around transition-states\nand Minimal Energy Paths (MEPs). Previously this has not been possible due to\nscarcity of relevant data in the literature. In this paper we train state of\nthe art equivariant Graph Neural Network (GNN)-based models on around 10,000\nelementary reactions from the Transition1x dataset. We apply the models as\npotentials for the Nudged Elastic Band (NEB) algorithm and achieve a Mean\nAbsolute Error (MAE) of 0.13+/-0.03 eV on barrier energies on unseen reactions.\nWe compare the results against equivalent models trained on QM9 and ANI1x. We\nalso compare with and outperform Density Functional based Tight Binding (DFTB)\nin both accuracy and computational cost. The implication is that ML models,\ngiven relevant data, are now at a level where they can be applied for\ndownstream tasks in quantum chemistry transcending prediction of simple\nmolecular features.
This method is formulated by introducing a cutoff\nfrequency in Fourier space. Since this cutoff only has to be applied after a\nnumber of time steps, the scheme can be implemented efficiently and can\nrelatively easily be incorporated into existing Vlasov solvers. Furthermore,\nthe proposed scheme retains the advantage of grid-based methods in that high\naccuracy can be achieved. This is due to the fact that, in contrast to the\nscheme proposed by Abbasi et al., no statistical noise is introduced into the\nsimulation. We illustrate the utility of the proposed method by performing\na number of numerical simulations, including the plasma echo phenomenon, using\na discontinuous Galerkin approximation in space and a Strang splitting based\ntime integration.", "category": "physics_comp-ph" }, { "text": "A Martini coarse-grained model of the calcein fluorescent dye: Calcein leakage assays are a standard experimental set-up for probing the\nextent of damage induced by external agents on synthetic lipid vesicles. The\nfluorescence signal associated with calcein release from liposomes is the\nsignature of vesicle disruption, transient pore formation or vesicle fusion.\nThis type of assay is widely used to test the membrane disruptive effect of\nbiological macromolecules, such as proteins, antimicrobial peptides and RNA and\nis also used on synthetic nanoparticles with a polymer, metal or oxide core.\nLittle is known about the effect that calcein and other fluorescent dyes may\nhave on the properties of lipid bilayers, potentially altering their structure\nand permeability. Here we develop a coarse-grained model of calcein that is\ncompatible with the Martini force field for lipids. We validate the model by\ncomparing its dimerization free energy, aggregation behavior at different\nconcentrations and interaction with a\n1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) membrane to those\nobtained at atomistic resolution.
Our coarse-grained description of calcein\nmakes it suitable for the simulation of large calcein-filled liposomes and of\ntheir interactions with external agents, allowing for a direct comparison\nbetween simulations and experimental liposome leakage assays.", "category": "physics_comp-ph" }, { "text": "Predicting Efficiency in master-slave grid computing systems: This work reports a quantitative analysis for predicting the efficiency of\ndistributed computing running on three models of complex networks:\nBarab\'asi-Albert, Erd\H{o}s-R\'enyi and Watts-Strogatz. A master/slave\ncomputing model is simulated. A node is selected as master and distributes\ntasks among the other nodes (the clients). Topological measurements associated\nwith the master node (e.g. its degree or betweenness centrality) are extracted\nand considered as predictors of the total execution time. It is found that the\ncloseness centrality provides the best predictor. The effect of network size\nwas also investigated.", "category": "physics_comp-ph" }, { "text": "Bayesian optimization package: PHYSBO: PHYSBO (optimization tools for PHYSics based on Bayesian Optimization) is a\nPython library for fast and scalable Bayesian optimization. It has been\ndeveloped mainly for application in the basic sciences such as physics and\nmaterials science. Bayesian optimization is used to select an appropriate input\nfor experiments/simulations from candidate inputs listed in advance in order to\nobtain better output values with the help of machine learning prediction.\nPHYSBO can be used to find better solutions for both single and multi-objective\noptimization problems. At each cycle in the Bayesian optimization, a single\nproposal or multiple proposals can be obtained for the next\nexperiments/simulations. These proposals can be obtained interactively for use\nin experiments.
PHYSBO is available at\nhttps://github.com/issp-center-dev/PHYSBO.", "category": "physics_comp-ph" }, { "text": "Compressive Spectral Renormalization Method: In this paper a novel numerical scheme for finding the sparse self-localized\nstates of a nonlinear system of equations with missing spectral data is\nintroduced. As in Petviashvili's method and the spectral renormalization method,\nthe governing equation is transformed into the Fourier domain, but the iterations\nare performed for a far smaller number of spectral components (M) than in the classical\nversions of these methods, which use a larger number of spectral components (N).\nAfter the convergence criterion is met for the M components, the N-component signal is\nreconstructed from the M components by using the l1 minimization technique of\ncompressive sampling. We name this method the compressive spectral\nrenormalization (CSRM) method. The main advantage of the CSRM is that it is\ncapable of finding the sparse self-localized states of the evolution\nequation(s) even with much of the spectral data missing.", "category": "physics_comp-ph" }, { "text": "A peridynamic approach to flexoelectricity: A flexoelectric peridynamic (PD) theory is proposed. Using the PD framework,\nthe formulation introduces, perhaps for the first time, a nanoscale\nflexoelectric coupling that entails non-uniform strain in centrosymmetric\ndielectrics. This potentially enables PD modeling of a large class of phenomena\nin solid dielectrics involving cracks, discontinuities etc. wherein large\nstrain gradients are present and the classical electromechanical theory based\non partial differential equations does not directly apply. Derived from\nHamilton's principle, PD electromechanical equations are shown to satisfy the\nglobal balance requirements. Linear PD constitutive equations reflect the\nelectromechanical coupling effect, with the mechanical force state affected by\nthe polarization state and the electrical force state in turn by the\ndisplacement state.
An analytical solution of the PD electromechanical\nequations in the integral form is presented for the static case when a point\nmechanical force and a point electric force act in a three-dimensional infinite\nsolid dielectric. A parametric study on how the different length scales\ninfluence the response is also undertaken.", "category": "physics_comp-ph" }, { "text": "HOOMD-blue: A Python package for high-performance molecular dynamics and\n hard particle Monte Carlo simulations: HOOMD-blue is a particle simulation engine designed for nano- and\ncolloidal-scale molecular dynamics and hard particle Monte Carlo simulations.\nIt has been actively developed since March 2007 and has been available open source since\nAugust 2008. HOOMD-blue is a Python package with a high performance C++/CUDA\nbackend that we built from the ground up for GPU acceleration. The Python\ninterface allows users to combine HOOMD-blue with other packages in the\nPython ecosystem to create simulation and analysis workflows. We employ\nsoftware engineering practices to develop, test, maintain, and expand the code.", "category": "physics_comp-ph" }, { "text": "Fast and Efficient Calculations of Structural Invariants of Chirality: Chirality plays an important role in physics, chemistry, biology, and other\nfields. It describes an essential symmetry in structure. However, chirality\ninvariants are usually complicated in expression or difficult to evaluate. In\nthis paper, we present five general three-dimensional chirality invariants\nbased on generating functions. These five chiral invariants have four\ncharacteristics: (1) They play an important role in the detection of symmetry,\nespecially in the treatment of the 'false zero' problem. (2) Three of the five\nchiral invariants decode a universal chirality index. (3) Three of them are\nproposed for the first time.
(4) The five chiral invariants have low order, no\ngreater than 4, concise expressions and low time complexity O(n), and can act as\ndescriptors of three-dimensional objects in shape analysis. The five chiral\ninvariants provide a geometric view for studying chirality. The\nexperiments show that the five chirality invariants are effective and\nefficient; they can be used as a tool for symmetry detection or as features in\nshape analysis.", "category": "physics_comp-ph" }, { "text": "Application of a linear elastic - brittle interface model to the crack\n initiation and propagation at fibre-matrix interface under biaxial transverse\n loads: The crack onset and propagation at the fibre-matrix interface in a composite\nunder tensile/compressive remote biaxial transverse loads is studied by a new\nlinear elastic - (perfectly) brittle interface model. In this model the\ninterface is represented by a continuous distribution of springs which\nsimulates the presence of a thin elastic layer. The constitutive law for the\ncontinuous distribution of normal and tangential initially linear elastic\nsprings takes into account possible frictionless elastic contact between fibre\nand matrix once a portion of the interface is broken. A brittle failure\ncriterion is employed for the distribution of springs, which enables the study\nof crack onset and propagation. This interface failure criterion takes into\naccount the variation of the interface fracture toughness with the fracture\nmode mixity. The main advantages of the present interface model are its\nsimplicity, robustness and its computational efficiency when the so-called\nsequentially linear analysis is applied. Moreover, in the present plane strain\nproblem of a single fibre embedded in a matrix subjected to uniform remote\ntransverse loads, this model can be used to obtain analytic predictions of\ninterface crack onset.
The numerical results provided by a 2D boundary element\nanalysis show that a fibre-matrix interface failure initiates by onset of a\nfinite debond in the neighbourhood of an interface point where the failure\ncriterion is reached first (under increasing proportional load), this debond\nfurther propagating along the interface in mixed mode or even, in some\nconfigurations, with the crack tip under compression. The analytical\npredictions of the debond onset position and associated critical load are used\nfor checking the computational procedure implemented, with excellent agreement\nobtained.", "category": "physics_comp-ph" }, { "text": "Molecular Conformation Generation via Shifting Scores: Molecular conformation generation, a critical aspect of computational\nchemistry, involves producing the three-dimensional conformer geometry for a\ngiven molecule. Generating molecular conformation via diffusion requires\nlearning to reverse a noising process. Diffusion on inter-atomic distances\ninstead of conformation preserves SE(3)-equivariance and shows superior\nperformance compared to alternative techniques, whereas related generative\nmodels are predominantly based on heuristic assumptions. In response to\nthis, we propose a novel molecular conformation generation approach driven by\nthe observation that the disintegration of a molecule can be viewed as casting\nincreasing force fields onto its constituent atoms, such that the distribution of\nthe change of inter-atomic distance shifts from a Gaussian to a Maxwell-Boltzmann\ndistribution. The corresponding generative modeling ensures a feasible\ninter-atomic distance geometry and exhibits time reversibility.
Experimental\nresults on molecular datasets demonstrate the advantages of the proposed\nshifting distribution compared to the state-of-the-art.", "category": "physics_comp-ph" }, { "text": "Efficient and accurate methods for solving the time-dependent spin-1\n Gross-Pitaevskii equation: We develop a numerical method for solving the spin-1 Gross-Pitaevskii\nequation. The basis of our work is a two-way splitting of the spin-1 evolution\nequation that leads to two exactly solvable flows. We use this to implement a\nsecond-order and a fourth-order symplectic integration method. These are the\nfirst fully symplectic methods for evolving spin-1 condensates. We develop two\nnon-trivial numerical tests to compare our methods against two other\napproaches.", "category": "physics_comp-ph" }, { "text": "A fluctuating lattice-Boltzmann method with improved Galilean invariance: In this paper we show that standard implementations of fluctuating Lattice\nBoltzmann methods do not obey Galilean invariance at a fundamental level. In\ntrying to remedy this we are led to a novel kind of multi-relaxation time\nlattice Boltzmann methods where the collision matrix depends on the local\nvelocity. This new method is conceptually elegant but numerically inefficient.\nWith a small numerical trick, however, this method recovers nearly the original\nefficiency and allows the practical implementation of fluctuating lattice\nBoltzmann methods with significantly improved Galilean invariance. This will be\nimportant for applications of fluctuating lattice Boltzmann for non-equilibrium\nsystems involving strong flow fields.", "category": "physics_comp-ph" }, { "text": "The equilibrium-diffusion limit for radiation hydrodynamics: The equilibrium-diffusion approximation (EDA) is used to describe certain\nradiation-hydrodynamic (RH) environments. When this is done the RH equations\nreduce to a simplified set of equations. 
The EDA can be derived by\nasymptotically analyzing the full set of RH equations in the\nequilibrium-diffusion limit. We derive the EDA this way and show that it and\nthe associated set of simplified equations are both first-order accurate with\ntransport corrections occurring at second order. Having established the EDA's\nfirst-order accuracy we then analyze the grey nonequilibrium-diffusion\napproximation and the grey Eddington approximation and show that they both\npreserve this first-order accuracy. Further, these approximations preserve the\nEDA's first-order accuracy when made in either the comoving-frame (CMF) or the\nlab-frame (LF). While analyzing the Eddington approximation, we found that the\nCMF and LF radiation-source equations are equivalent when neglecting ${\\cal\nO}(\\beta^2)$ terms and compared in the LF. Of course, the radiation pressures\nare not equivalent. It is expected that simplified physical models and\nnumerical discretizations of the RH equations that do not preserve this\nfirst-order accuracy will not retain the correct equilibrium-diffusion\nsolutions. As a practical example, we show that nonequilibrium-diffusion\nradiative-shock solutions devolve to equilibrium-diffusion solutions when the\nasymptotic parameter is small.", "category": "physics_comp-ph" }, { "text": "An approach to first principles electronic structure calculation by\n symbolic-numeric computation: This article is an introduction to a new approach to first principles\nelectronic structure calculation. The starting point is the\nHartree-Fock-Roothaan equation, in which molecular integrals are approximated\nby polynomials by way of Taylor expansion with respect to atomic coordinates\nand other variables. 
It leads to a set of polynomial equations whose solutions\nare eigenstates, which is designated as the algebraic molecular orbital equation.\nSymbolic computation, especially Gr\\\"obner bases theory, enables us to rewrite\nthe polynomial equations into more trimmed and tractable forms with identical\nroots, from which we can unravel the relationship between physical parameters\n(wave function, atomic coordinates, and others) and numerically evaluate them\none by one in order. Furthermore, this method is a unified way to solve the\nelectronic structure calculation, the optimization of physical parameters, and\nthe inverse problem as a forward problem.", "category": "physics_comp-ph" }, { "text": "Nonequilibrium phonon mean free paths in anharmonic chains: Harnessing the power of low-dimensional materials in thermal applications\ncalls for a solid understanding of the anomalous thermal properties of such\nsystems. We analyze thermal conduction in one-dimensional systems by\ndetermining the frequency-dependent phonon mean free paths (MFPs) for an\nanharmonic chain, delivering insight into the diverging thermal conductivity\nobserved in computer simulations. In our approach, the MFPs are extracted from\nthe length-dependence of the spectral heat current obtained from nonequilibrium\nmolecular dynamics simulations. At low frequencies, the results reveal a\npower-law dependence of the MFPs on frequency, in agreement with the diverging\nconductivity and the recently determined equilibrium MFPs. At higher\nfrequencies, however, the nonequilibrium MFPs consistently exceed the\nequilibrium MFPs, highlighting the differences between the two quantities.\nExerting pressure on the chain is shown to suppress the mean free paths and to\ngenerate a weaker divergence of MFPs at low frequencies. 
The results deliver\nimportant insight into anomalous thermal conduction in low-dimensional systems\nand also reveal differences between the MFPs obtained from equilibrium and\nnonequilibrium simulations.", "category": "physics_comp-ph" }, { "text": "A mass-preserving level set method for simulating 2D/3D fluid flows with\n evolving interface: Within the context of Eulerian approaches, we aim to develop a new\ninterface-capturing solver to predict two-phase flow in 2D/3D Cartesian meshes.\nTo achieve mass conservation and to capture interface topology accurately, a\nmass-preserving level set advection equation cast in the scalar signed-distance\nfunction is developed. The novelty of the proposed Eulerian solver lies in the\nintroduction of a scalar speed function to rigorously reconstruct the classical\nlevel set equation. Through several benchmark problems, the proposed flow\nsolver for the incompressible two-phase viscous flow equations has been\nverified.", "category": "physics_comp-ph" }, { "text": "Performance Enhancement for High-order Gas-kinetic Scheme Based on\n WENO-adaptive-order Reconstruction: High-order gas-kinetic scheme (HGKS) has been well-developed in the past\nyears. Abundant numerical tests, including hypersonic flow, turbulence, and\naeroacoustic problems, have been used to validate its accuracy, efficiency, and\nrobustness. However, there is still room for its further improvement.\nFirstly, the reconstruction in the previous scheme mainly achieves a\nthird-order accuracy for the initial non-equilibrium states. At the same time,\nthe equilibrium state in space and time in HGKS has to be reconstructed\nseparately. Secondly, it is complicated to get reconstructed data at Gaussian\npoints from the WENO-type method in high dimensions. For HGKS, besides the\npoint-wise values at the Gaussian points it also requires the slopes in both\nnormal and tangential directions of a cell interface. 
Thirdly, there exists\nvisible spurious overshoot/undershoot at weak discontinuities from the previous\nHGKS with the standard WENO reconstruction. In order to overcome these\ndifficulties, in this paper we use an improved reconstruction for HGKS. The\nWENO with adaptive order (WENO-AO) method is implemented for reconstruction. A\nwhole polynomial inside each cell is provided in WENO-AO reconstruction. The\nHGKS becomes simpler than the previous one with the direct implementation of\ncell interface values and their slopes from WENO-AO. The additional\nreconstruction of equilibrium state at the beginning of each time step can be\navoided as well by dynamically merging the reconstructed non-equilibrium\nslopes. The new HGKS essentially alleviates or totally removes the above\nproblems of the previous HGKS. The accuracy of the scheme from 1D to 3D from the\nnew HGKS can recover the theoretical order of accuracy of the WENO\nreconstruction. In the two- and three-dimensional simulations, the new HGKS\nshows better robustness and efficiency than the previous scheme in all test\ncases.", "category": "physics_comp-ph" }, { "text": "A Conservative Discontinuous Galerkin Discretization for the Total\n Energy Formulation of the Reacting Navier Stokes Equations: This paper describes the total energy formulation of the compressible\nreacting Navier-Stokes equations which is solved numerically using a fully\nconservative discontinuous Galerkin finite element method (DG). Previous\napplications of DG to the compressible reacting Navier-Stokes equations\nrequired nonconservative fluxes or stabilization methods in order to suppress\nunphysical oscillations in pressure that led to the failure of simple test\ncases. In this paper, we demonstrate that material interfaces with a\ntemperature discontinuity result in unphysical numerical pressure oscillations\nif the species internal energy is nonlinear with respect to temperature. 
We\ndemonstrate that a temperature discontinuity is the only type of material\ninterface that results in unphysical pressure oscillations for a conservative\ndiscretization of the total energy formulation. Furthermore, we demonstrate\nthat unphysical pressure oscillations will be generated at any material\ninterface, including material interfaces at which the temperature is\ncontinuous, if the thermodynamics are frozen during the temporal integration of\nthe conserved state. Additionally, we demonstrate that the oscillations are\namplified if the specific heat at constant pressure is incorrectly evaluated\ndirectly from the NASA polynomial expressions. Instead, the mean value, which\nwe derive in this manuscript, should be used to compute the specific heat at\nconstant pressure. This can reduce the amplitude of, but not prevent,\nunphysical oscillations where the species concentrations numerically mix. We\nthen present solutions to several test cases using the total energy formulation\nand demonstrate that spurious pressure oscillations are not generated for material\ninterfaces if the temperature is continuous, and that the method is better behaved than\nfrozen thermodynamic formulations if the temperature is discontinuous.", "category": "physics_comp-ph" }, { "text": "Sensitivity Analysis and Uncertainty Quantification on Point Defect\n Kinetics Equations with Perturbation Analysis: The concentration of radiation-induced point defects in general materials\nunder irradiation is commonly described by the point defect kinetics equations\nbased on rate theory. However, the parametric uncertainty in describing the\nrate constants of competing physical processes such as recombination and loss\nto sinks can lead to a large uncertainty in predicting the time-evolving point\ndefect concentrations. 
Here, based on perturbation theory, we derive corrections up to\nthird order to the solution of the point defect kinetics equations.\nThis new set of equations enables a full description of continuously changing\nrate constants, and can accurately predict the solution for deviations of up to\n$50\\%$ in these rate constants. These analyses can also be applied to reveal the\nsensitivity of the solution to input parameters and the aggregated uncertainty from\nmultiple rate constants.", "category": "physics_comp-ph" }, { "text": "Diffraction Problem and Amplitudes-Phases Dispersion of Eigen Fields of\n a Nonlinear Dielectric Layer: The open nonlinear electrodynamic system - a nonlinear transversely\nnon-homogeneous dielectric layer - is an example of an inorganic system having the\nproperties of self-organization peculiar to biological systems. The necessary\nprecondition for self-organization effects is the presence of a flow of\nenergy entering the system from an external source, due to which the system gains\nthe ability to form structures independently. On the example of a transversely\nnon-homogeneous, isotropic, nonmagnetic, linearly polarized, nonlinear (a\nKerr-like dielectric nonlinearity) dielectric layer, a constructive approach\nto the analysis of the amplitude-phase dispersion of the eigen oscillation-wave\nfields of the nonlinear object is shown. The norm of an eigen field is defined\nfrom the solution of a diffraction problem of plane waves or of the excitation of a\npoint or compact source of a nonlinear layer.", "category": "physics_comp-ph" }, { "text": "Training collective variables for enhanced sampling via neural networks\n based discriminant analysis: A popular way to accelerate the sampling of rare events in molecular dynamics\nsimulations is to introduce a potential that increases the fluctuations of\nselected collective variables. For this strategy to be successful, it is\ncritical to choose appropriate variables. 
Here we review some recent\ndevelopments in the data-driven design of collective variables, with a focus on\nthe combination of Fisher's discriminant analysis and neural networks. This\napproach allows one to compress the fluctuations of metastable states into a\nlow-dimensional representation. We illustrate through several examples the\neffectiveness of this method in accelerating the sampling, while also\nidentifying the physical descriptors that undergo the most significant changes\nin the process.", "category": "physics_comp-ph" }, { "text": "Diagnostic data integration using deep neural networks for real-time\n plasma analysis: Recent advances in acquisition equipment are providing experiments with\ngrowing amounts of precise yet affordable sensors. At the same time, improved\ncomputational power, coming from new hardware resources (GPU, FPGA, ACAP), has\nbeen made available at relatively low cost. This led us to explore the\npossibility of completely renewing the acquisition chain for a fusion\nexperiment, where many high-rate sources of data, coming from different\ndiagnostics, can be combined in a wide framework of algorithms. If on one hand\nadding new data sources with different diagnostics enriches our knowledge about\nphysical aspects, on the other hand the dimensions of the overall model grow,\nmaking relations among variables more and more opaque. A new approach for the\nintegration of such heterogeneous diagnostics, based on the composition of deep\nvariational autoencoders, could ease this problem, acting as a structural\nsparse regularizer. This has been applied to RFX-mod experiment data,\nintegrating the soft X-ray linear images of plasma temperature with the\nmagnetic state.\n However, to ensure real-time signal analysis, these algorithmic techniques\nmust be adapted to run on well-suited hardware. In particular, it is shown that,\nby quantizing the neuron transfer functions, such models can be\nmodified to create an embedded firmware. 
This firmware, approximating the deep\ninference model by a set of simple operations, fits well with the simple logic\nunits that are abundant in FPGAs. This is the key factor that permits\nthe use of affordable hardware with complex deep neural topologies and their\noperation in real time.", "category": "physics_comp-ph" }, { "text": "A lattice Boltzmann method for thin liquid film hydrodynamics: We propose a novel approach to the numerical simulation of thin film flows,\nbased on the lattice Boltzmann method. We outline the basic features of the\nmethod, show in which limits the expected thin film equations are recovered and\nperform validation tests. The numerical scheme is applied to the viscous\nRayleigh-Taylor instability of a thin film and to the spreading of a sessile\ndrop towards its equilibrium contact angle configuration. We show that the\nCox-Voinov law is satisfied, and that the effect of a tunable slip length on\nthe substrate is correctly captured. We address, then, the problem of a droplet\nsliding on an inclined plane, finding that the Capillary number scales linearly\nwith the Bond number, in agreement with experimental results. At last, we\ndemonstrate the ability of the method to handle heterogeneous and complex\nsystems by showcasing the controlled dewetting of a thin film on a chemically\nstructured substrate.", "category": "physics_comp-ph" }, { "text": "Computing Curvature for Volume of Fluid Methods using Machine Learning: In spite of considerable progress, computing curvature in Volume of Fluid\n(VOF) methods continues to be a challenge. The goal is to develop a function or\na subroutine that returns the curvature in computational cells containing an\ninterface separating two immiscible fluids, given the volume fraction in the\ncell and the adjacent cells. Currently, the most accurate approach is to fit a\ncurve (2D), or a surface (3D), matching the volume fractions and finding the\ncurvature by differentiation. 
Here, a different approach is examined. A\nsynthetic data set, relating curvature to volume fractions, is generated using\nwell-defined shapes where the curvature and volume fractions are easily found\nand then machine learning is used to fit the data (training). The resulting\nfunction is used to find the curvature for shapes not used for the training and\nimplemented into a code to track moving interfaces. The results suggest that\nusing machine learning to generate the relationship is a viable approach that\nresults in reasonably accurate predictions.", "category": "physics_comp-ph" }, { "text": "A finite element perspective on non-linear FFT-based micromechanical\n simulations: Fourier solvers have become efficient tools to establish structure-property\nrelations in heterogeneous materials. Introduced as an alternative to the\nFinite Element (FE) method, they are based on fixed-point solutions of the\nLippmann-Schwinger type integral equation. Their computational efficiency\nresults from handling the kernel of this equation by the Fast Fourier Transform\n(FFT). However, the kernel is derived from an auxiliary homogeneous linear\nproblem, which renders the extension of FFT-based schemes to non-linear\nproblems conceptually difficult. This paper aims to establish a link between\nFE- and FFT-based methods, in order to develop a solver applicable to general\nhistory- and time-dependent material models. For this purpose, we follow the\nstandard steps of the FE method, starting from the weak form, proceeding to the\nGalerkin discretization and the numerical quadrature, up to the solution of\nnon-linear equilibrium equations by an iterative Newton-Krylov solver. No\nauxiliary linear problem is thus needed. By analyzing a two-phase laminate with\nnon-linear elastic, elasto-plastic, and visco-plastic phases, and by\nelasto-plastic simulations of a dual-phase steel microstructure, we demonstrate\nthat the solver exhibits robust convergence. 
These results are achieved by\nre-using the non-linear FE technology, with the potential of further extensions\nbeyond the small-strain inelasticity considered in this paper.", "category": "physics_comp-ph" }, { "text": "Calculation of the electromagnetic scattering by non-spherical particles\n based on the volume integral equation in the spherical wave function basis: The paper presents a method for the calculation of non-spherical particle\nT-matrices based on the volume integral equation and the spherical vector wave\nfunction basis, which relies on the Generalized Source Method rationale. The\ndeveloped method appears to be close to the invariant imbedding approach, and\nthe derivation aims at an intuitive demonstration of the calculation scheme.\nParallel calculation of single columns of the T-matrix is considered in detail,\nand it is shown that this approach not only has promising potential for\nparallelization but also yields an almost zero power balance for purely\ndielectric particles.", "category": "physics_comp-ph" }, { "text": "Temporal Integrators for Fluctuating Hydrodynamics: Including the effect of thermal fluctuations in traditional computational\nfluid dynamics requires developing numerical techniques for solving the\nstochastic partial differential equations of fluctuating hydrodynamics. These\nLangevin equations possess a special fluctuation-dissipation structure that\nneeds to be preserved by spatio-temporal discretizations in order for the\ncomputed solution to reproduce the correct long-time behavior. In particular,\nnumerical solutions should approximate the Gibbs-Boltzmann equilibrium\ndistribution, and ideally this will hold even for large time step sizes. We\ndescribe finite-volume spatial discretizations for the fluctuating Burgers and\nfluctuating incompressible Navier-Stokes equations that obey a discrete\nfluctuation-dissipation balance principle just like the continuum equations. 
We\ndevelop implicit-explicit predictor-corrector temporal integrators for the\nresulting stochastic method-of-lines discretization. These stochastic\nRunge-Kutta schemes treat diffusion implicitly and advection explicitly, are\nweakly second-order accurate for additive noise for small time steps, and give\na good approximation to the equilibrium distribution even for very strong\nfluctuations. Numerical results demonstrate that a midpoint predictor-corrector\nscheme is very robust over a broad range of time step sizes.", "category": "physics_comp-ph" }, { "text": "Periodic three-body orbits in the Coulomb potential: We numerically discovered around 100 distinct nonrelativistic collisionless\nperiodic three-body orbits in the Coulomb potential in vacuo, with vanishing\nangular momentum, for equal-mass ions with equal absolute values of charges.\nThese orbits are classified according to their symmetry and topology, and a\nlinear relation is established between the periods, at equal energy, and the\ntopologies of orbits. Coulombic three-body orbits can be formed in ion traps,\nsuch as the Paul or the Penning trap, where one can test the period vs topology\nprediction.", "category": "physics_comp-ph" }, { "text": "Machine learning and the physical sciences: Machine learning encompasses a broad range of algorithms and modeling tools\nused for a vast array of data processing tasks, and has entered most\nscientific disciplines in recent years. We review in a selective way the recent\nresearch on the interface between machine learning and physical sciences. This\nincludes conceptual developments in machine learning (ML) motivated by physical\ninsights, applications of machine learning techniques to several domains in\nphysics, and cross-fertilization between the two fields. After giving basic\nnotions of machine learning methods and principles, we describe examples of how\nstatistical physics is used to understand methods in ML. 
We then move to\ndescribe applications of ML methods in particle physics and cosmology, quantum\nmany body physics, quantum computing, and chemical and material physics. We\nalso highlight research and development into novel computing architectures\naimed at accelerating ML. In each of the sections we describe recent successes\nas well as domain-specific methodology and challenges.", "category": "physics_comp-ph" }, { "text": "Polymer translocation through a nanopore: a two-dimensional Monte Carlo\n simulation: We investigate the problem of polymer translocation through a nanopore in the\nabsence of an external driving force. To this end, we use the two-dimensional\n(2D) fluctuating bond model with single-segment Monte Carlo moves. To overcome\nthe entropic barrier without artificial restrictions, we consider a polymer\nwhich is initially placed in the middle of the pore, and study the escape time\nrequired for the polymer to completely exit the pore on either end. In\nparticular, we examined the effect of the pore length on the escape time.", "category": "physics_comp-ph" }, { "text": "Band alignment of two-dimensional lateral heterostructures: Recent experimental synthesis of two-dimensional (2D) heterostructures opens\na door to new opportunities in tailoring the electronic properties for novel 2D\ndevices. Here, we show that a wide range of lateral 2D heterostructures could\nhave a prominent advantage over the traditional three-dimensional (3D)\nheterostructures, because their band alignments are insensitive to the\ninterfacial conditions. They should be at the Schottky-Mott limits for\nsemiconductor-metal junctions and at the Anderson limits for semiconductor\njunctions, respectively. This fundamental difference from the 3D\nheterostructures is rooted in the fact that, in the asymptotic limit of large\ndistance, the effect of the interfacial dipole vanishes for 2D systems. 
Due to\nthe slow decay of the dipole field and the dependence on the vacuum thickness,\nhowever, studies based on first-principles calculations often failed to reach\nsuch a conclusion. Taking graphene/hexagonal-BN and MoS2/WS2 lateral\nheterostructures as the respective prototypes, we show that the converged\njunction width can be an order of magnitude longer than that for 3D junctions. The\npresent results provide vital guidance for high-quality transport devices\nwherever a lateral 2D heterostructure is involved.", "category": "physics_comp-ph" }, { "text": "Magnetic-field modeling with surface currents: Physical and\n computational principles of bfieldtools: Surface currents provide a general way to model static magnetic fields in\nsource-free volumes. To facilitate the use of surface currents in\nmagneto-quasistatic problems, we have implemented a set of computational tools\nin a Python package named bfieldtools. In this work, we describe the physical\nand computational principles of this toolset. To be able to work with surface\ncurrents of arbitrary shape, we discretize the currents on triangle meshes\nusing piecewise-linear stream functions. We apply analytical discretizations of\nintegral equations to obtain the magnetic field and potentials associated with\nthe discrete stream function. In addition, we describe the computation of the\nspherical multipole expansion and a novel surface-harmonic expansion for\nsurface currents, both of which are useful for representing the magnetic field\nin source-free volumes with a small number of parameters. Last, we share\nexamples related to magnetic shielding and surface-coil design using the\npresented tools.", "category": "physics_comp-ph" }, { "text": "Modeling heavy ion ionization loss in the MARS15 code: The needs of various accelerator and space projects stimulated recent\ndevelopments to the MARS Monte Carlo code. One of the essential parts of those\nis heavy ion ionization energy loss. 
This paper describes an implementation of\nseveral corrections to dE/dx in order to take into account the deviations from\nthe Bethe theory at low and high energies as well as the effect of a finite\nnuclear size at ultra-relativistic energies. Special attention is paid to the\ntransition energy region where the onset of the effect of a finite nuclear size\nis observed. Comparisons with experimental data and NIST data are presented.", "category": "physics_comp-ph" }, { "text": "GPU performance analysis of a nodal discontinuous Galerkin method for\n acoustic and elastic models: Finite element schemes based on discontinuous Galerkin methods possess\nfeatures amenable to massively parallel computing accelerated with general\npurpose graphics processing units (GPUs). However, the computational\nperformance of such schemes strongly depends on their implementation. In the\npast, several implementation strategies have been proposed. They are based\nexclusively on specialized compute kernels tuned for each operation, or they\ncan leverage BLAS libraries that provide optimized routines for basic linear\nalgebra operations. In this paper, we present and analyze up-to-date\nperformance results for different implementations, tested in a unified\nframework on a single NVIDIA GTX980 GPU. We show that specialized kernels\nwritten with a one-node-per-thread strategy are competitive for polynomial\nbases up to the fifth and seventh degrees for acoustic and elastic models,\nrespectively. 
For higher degrees, a strategy that makes use of the NVIDIA\ncuBLAS library provides better results, able to reach a net arithmetic\nthroughput of 35.7% of the theoretical peak value.", "category": "physics_comp-ph" }, { "text": "Monte Carlo Simulation for Particle Detectors: Monte Carlo simulation is an essential component of experimental particle\nphysics in all the phases of its life-cycle: the investigation of the physics\nreach of detector concepts, the design of facilities and detectors, the\ndevelopment and optimization of data reconstruction software, the data analysis\nfor the production of physics results. This note briefly outlines some research\ntopics related to Monte Carlo simulation, that are relevant to future\nexperimental perspectives in particle physics. The focus is on physics aspects:\nconceptual progress beyond current particle transport schemes, the\nincorporation of materials science knowledge relevant to novel detection\ntechnologies, functionality to model radiation damage, the capability for\nmulti-scale simulation, quantitative validation and uncertainty quantification\nto determine the predictive power of simulation. The R&D on simulation for\nfuture detectors would profit from cooperation within various components of the\nparticle physics community, and synergy with other experimental domains sharing\nsimilar simulation requirements.", "category": "physics_comp-ph" }, { "text": "Mass-Zero constrained dynamics and statistics for the shell model in\n magnetic field: In several domains of physics, including first principle simulations and\nclassical models for polarizable systems, the minimization of an energy\nfunction with respect to a set of auxiliary variables must be performed to\ndefine the dynamics of physical degrees of freedom. In this paper, we discuss a\nrecent algorithm proposed to efficiently and rigorously simulate this type of\nsystem: the Mass-Zero (MaZe) Constrained Dynamics. 
In MaZe, the minimum\ncondition is imposed as a constraint on the auxiliary variables, which are treated as\ndegrees of freedom of zero inertia driven by the physical system. The method is\nformulated in the Lagrangian framework, enabling the properties of the approach\nto emerge naturally from a fully consistent dynamical and statistical\nviewpoint. We begin by presenting MaZe for typical minimization problems where\nthe imposed constraints are holonomic and summarizing its key formal\nproperties, notably the exact Born-Oppenheimer dynamics followed by the\nphysical variables and the exact sampling of the corresponding physical\nprobability density. We then generalize the approach to the case of conditions\non the auxiliary variables that linearly involve their velocities. Such\nconditions occur, for example, when describing systems in an external magnetic\nfield, and they require adapting MaZe to integrate semiholonomic constraints.\nThe new development is presented in the second part of this paper and\nillustrated via a proof-of-principle calculation of the charge transport\nproperties of a simple classical polarizable model of NaCl.", "category": "physics_comp-ph" }, { "text": "Learning Variational Data Assimilation Models and Solvers: This paper addresses variational data assimilation from a learning point of\nview. Data assimilation aims to reconstruct the time evolution of some state\ngiven a series of observations, possibly noisy and irregularly-sampled. Using\nautomatic differentiation tools embedded in deep learning frameworks, we\nintroduce end-to-end neural network architectures for data assimilation. Each\narchitecture comprises two key components: a variational model and a gradient-based solver,\nboth implemented as neural networks. A key feature of the proposed end-to-end\nlearning architecture is that we may train the NN models using both supervised\nand unsupervised strategies. Our numerical experiments on Lorenz-63 and\nLorenz-96 systems report significant gain w.r.t. 
a classic gradient-based\nminimization of the variational cost both in terms of reconstruction\nperformance and optimization complexity. Intriguingly, we also show that the\nvariational models issued from the true Lorenz-63 and Lorenz-96 ODE\nrepresentations may not lead to the best reconstruction performance. We believe\nthese results may open new research avenues for the specification of\nassimilation models in geoscience.", "category": "physics_comp-ph" }, { "text": "The effect of quantization on the FCIQMC sign problem: The sign problem in Full Configuration Interaction Quantum Monte Carlo\n(FCIQMC) without annihilation can be understood as an instability of the\npsi-particle population to the ground state of the matrix obtained by making\nall off-diagonal elements of the Hamiltonian negative. Such a matrix, and hence\nthe sign problem, is basis dependent. In this paper we discuss the properties\nof a physically important basis choice: first versus second quantization. For a\ngiven choice of single-particle orbitals, we identify the conditions under\nwhich the fermion sign problem in the second quantized basis of antisymmetric\nSlater determinants is identical to the sign problem in the first quantized\nbasis of unsymmetrized Hartree products. We also show that, when the two\ndiffer, the fermion sign problem is always less severe in the second quantized\nbasis. This supports the idea that FCIQMC, even in the absence of annihilation,\nimproves the sign problem relative to first quantized methods. Finally, we\npoint out some theoretically interesting classes of Hamiltonians where first\nand second quantized sign problems differ, and others where they do not.", "category": "physics_comp-ph" }, { "text": "Iterative diagonalization of symmetric matrices in mixed precision: Diagonalization of a large matrix is the computational bottleneck in many\napplications such as electronic structure calculations. 
We show that a speedup\nof over 30% can be achieved by exploiting 32-bit floating point operations,\nwhile keeping 64-bit accuracy. Moreover, most of the computationally expensive\noperations are performed by level-3 BLAS/LAPACK routines in our implementation,\nthus leading to optimal performance on most platforms. Further improvement can\nbe made by using problem-specific preconditioners which take into account\nnondiagonal elements.", "category": "physics_comp-ph" }, { "text": "MolSieve: A Progressive Visual Analytics System for Molecular Dynamics\n Simulations: Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge\nphysio-chemical research. They provide critical insights into how a physical\nsystem evolves over time given a model of interatomic interactions.\nUnderstanding a system's evolution is key to selecting the best candidates for\nnew drugs, materials for manufacturing, and countless other practical\napplications. With today's technology, these simulations can encompass millions\nof unit transitions between discrete molecular structures, spanning up to\nseveral milliseconds of real time. Attempting to perform a brute-force analysis\nwith data-sets of this size is not only computationally impractical, but would\nnot shed light on the physically-relevant features of the data. Moreover, there\nis a need to analyze simulation ensembles in order to compare similar processes\nin differing environments. These problems call for an approach that is\nanalytically transparent, computationally efficient, and flexible enough to\nhandle the variety found in materials based research. In order to address these\nproblems, we introduce MolSieve, a progressive visual analytics system that\nenables the comparison of multiple long-duration simulations. 
Using MolSieve,\nanalysts are able to quickly identify and compare regions of interest within\nimmense simulations through its combination of control charts, data-reduction\ntechniques, and highly informative visual components. A simple programming\ninterface is provided which allows experts to fit MolSieve to their needs. To\ndemonstrate the efficacy of our approach, we present two case studies of\nMolSieve and report on findings from domain collaborators.", "category": "physics_comp-ph" }, { "text": "Multistability, local pattern formation, and global collective firing in\n a small-world network of non-leaky integrate-and-fire neurons: We investigate numerically the collective dynamical behavior of pulse-coupled\nnon-leaky integrate-and-fire-neurons that are arranged on a two-dimensional\nsmall-world network. To ensure ongoing activity, we impose a probability for\nspontaneous firing for each neuron. We study network dynamics evolving from\ndifferent sets of initial conditions in dependence on coupling strength and\nrewiring probability. Beside a homogeneous equilibrium state for low coupling\nstrength, we observe different local patterns including cyclic waves, spiral\nwaves, and turbulent-like patterns, which -- depending on network parameters --\ninterfere with the global collective firing of the neurons. We attribute the\nvarious network dynamics to distinct regimes in the parameter space. For the\nsame network parameters different network dynamics can be observed depending on\nthe set of initial conditions only. 
Such a multistable behavior and the\ninterplay between local pattern formation and global collective firing may be\nattributable to the spatiotemporal dynamics of biological networks.", "category": "physics_comp-ph" }, { "text": "Updated Core Libraries of the ALPS Project: The open source ALPS (Algorithms and Libraries for Physics Simulations)\nproject provides a collection of physics libraries and applications, with a\nfocus on simulations of lattice models and strongly correlated electron\nsystems. The libraries provide a convenient set of well-documented and reusable\ncomponents for developing condensed matter physics simulation codes, and the\napplications strive to make commonly used and proven computational algorithms\navailable to a non-expert community. In this paper we present an update of the\ncore ALPS libraries. We present in particular new Monte Carlo libraries and new\nGreen's function libraries.", "category": "physics_comp-ph" }, { "text": "A Fast Algorithm for the Analysis of Scattering by Elongated Cavities: The electromagnetic scattering from elongated, arbitrarily shaped, open-ended\ncavities has been studied extensively over the years. In this paper we\nintroduce the fast encapsulating domain decomposition (EDD) scheme for the\nanalysis of the radar cross section (RCS) of such open-ended cavities. Problem\ndefinition, key principles, analysis, and implementation of the proposed\nsolution scheme are presented in detail. The EDD advantages stem from domain\ndecomposition along the elongated dimension and representing the fields on the\ncross-sections in the spectral domain, which enables us to separate the fields\ninto in- and out-going waves. 
This diagonalizes the translation between the\ncross sections, thus reducing the per-segment computational complexity from\n$O((N^A)^3)$ to $O(N^W(N^A)^2)$, where $N^A$ is the number of aperture unknowns\nand $N^W$ is the number of wall unknowns per segment, satisfying $N^W< 1000 electrons), applicable to\nmetals and insulators alike. In lieu of explicit diagonalization of the\nKohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ\na two-level Chebyshev polynomial filter based complementary subspace strategy\nto: 1) compute a set of vectors that span the occupied subspace of the\nHamiltonian; 2) reduce subspace diagonalization to just partially occupied\nstates; and 3) obtain those states in an efficient, scalable manner via an\ninner Chebyshev-filter iteration. By reducing the necessary computation to just\npartially occupied states, and obtaining these through an inner Chebyshev\niteration, our approach reduces the cost of large metallic calculations\nsignificantly, while eliminating subspace diagonalization for insulating\nsystems altogether. We describe the implementation of the method within the\nframework of the Discontinuous Galerkin (DG) electronic structure method and\nshow that this results in a computational scheme that can effectively tackle\nbulk and nano systems containing tens of thousands of electrons, with chemical\naccuracy, within a few minutes or less of wall clock time per SCF iteration on\nlarge-scale computing platforms. We anticipate that our method will be\ninstrumental in pushing the envelope of large-scale ab initio molecular\ndynamics. 
As a demonstration of this, we simulate a bulk silicon system\ncontaining 8,000 atoms at finite temperature, and obtain an average SCF step\nwall time of 51 seconds on 34,560 processors; thus allowing us to carry out 1.0\nps of ab initio molecular dynamics in approximately 28 hours (of wall time).", "category": "physics_comp-ph" }, { "text": "Monte Carlo methods on a fixed volume system of Silicon-Germanium atoms: Since the electrons of a silicon-germanium system are bound, external\nquantum effects are negligible. In this manuscript, we hold the volume constant\nwhile varying all other parameters, such as pressure, temperature, germanium\nchemical potential (or germanium concentration), energy, mole number and atomic\nbond structure, resulting in an observation of hysteresis in the system.", "category": "physics_comp-ph" }, { "text": "Insights on finite size effects in Ab-initio study of CO adsorption and\n dissociation on Fe 110 surface: Adsorption and dissociation of hydrocarbons on metallic surfaces represent\ncrucial steps to carburization of metal. Here, we use density functional theory\ntotal energy calculations with the climbing-image nudged elastic band method to\nestimate the adsorption energies and dissociation barriers for different CO\ncoverages with surface supercells of different sizes. For the adsorption of CO,\nthe contribution from the van der Waals interaction in the computation of\nadsorption parameters is found to be important in small systems with high\nCO-coverages. The dissociation process involves carbon insertion into the Fe\nsurface, causing a lattice deformation that requires a larger surface system for\nunrestricted relaxation. 
We show that, in larger surface systems associated\nwith dilute CO-coverages, the dissociation barrier is significantly decreased.\nThe elastic deformation of the surface is generic and potentially\napplicable to all similar metal-hydrocarbon reactions, and therefore a dilute\ncoverage is necessary for the simulation of these reactions as isolated\nprocesses.", "category": "physics_comp-ph" }, { "text": "Brownian dynamics simulations with hard-body interactions: Spherical\n particles: A novel approach to account for hard-body interactions in (overdamped)\nBrownian dynamics simulations is proposed for systems with non-vanishing force\nfields. The scheme exploits the analytically known transition probability for a\nBrownian particle on a one-dimensional half-line. The motion of a Brownian\nparticle is decomposed into a component that is affected by hard-body\ninteractions and into components that are unaffected. The hard-body\ninteractions are incorporated by replacing the affected component of motion by\nthe evolution on a half-line. It is discussed under which circumstances this\napproach is justified. In particular, the algorithm is developed and formulated\nfor systems with space-fixed obstacles and for systems comprising spherical\nparticles. The validity and justification of the algorithm is investigated\nnumerically by looking at exemplary model systems of soft matter, namely at\ncolloids in flow fields and at protein interactions. Furthermore, a thorough\ndiscussion of properties of other heuristic algorithms is carried out.", "category": "physics_comp-ph" }, { "text": "Valence band structure calculations of strained Ge$_{1-x}$Sn$_x$ quantum\n well pFETs: The dependence of valence band structures of Ge$_{1-x}$Sn$_x$ with 0 $\leq$\n$x$ $\leq$ 0.2 on Sn content, biaxial strain, and substrate orientation is\ncalculated using the nonlocal empirical pseudopotential method. 
The first\nvalence subband structure in p-type Ge cap/fully strained Ge$_{1-x}$Sn$_x$\nquantum well/Ge (001) and (111) inversion layers is theoretically studied\nusing the 6$\times$6 k$\cdot$p model. A wave-function coupling of a Ge cap with\nrespect to a strained Ge$_{1-x}$Sn$_x$ quantum well, which is influenced by the\ncap thickness, valence band offset, and confined effective mass, changes the\nenergy dispersion relation in the two-dimensional $k$-space. The increase in Sn\ncontent and the decrease in cap thickness increase the hole population in the\nstrained Ge$_{1-x}$Sn$_x$ quantum well to reduce the transport effective mass\nat the zone center in the Ge/strained Ge$_{1-x}$Sn$_x$/Ge inversion layers.", "category": "physics_comp-ph" }, { "text": "Evolutions in photoelectric cross section calculations and their\n validation: This paper updates and complements a previously published evaluation of\ncomputational methods for total and partial cross sections, relevant to\nmodeling the photoelectric effect in Monte Carlo particle transport. It\nexamines calculation methods that have become available since the publication\nof the previous paper, some of which claim improvements over previous\ncalculations; it tests them with statistical methods against the same sample of\nexperimental data collected for the previous evaluation. No statistically\nsignificant improvements are observed with respect to the calculation method\nidentified in the previous paper as the state of the art for the intended\npurpose, encoded in the EPDL97 data library. 
Some of the more recent\ncomputational methods exhibit significantly lower capability to reproduce\nexperimental measurements than the existing alternatives.", "category": "physics_comp-ph" }, { "text": "Fast, accurate, and transferable many-body interatomic potentials by\n symbolic regression: The length and time scales of atomistic simulations are limited by the\ncomputational cost of the methods used to predict material properties. In\nrecent years there has been great progress in the use of machine learning\nalgorithms to develop fast and accurate interatomic potential models, but it\nremains a challenge to develop models that generalize well and are fast enough\nto be used at extreme time and length scales. To address this challenge, we\nhave developed a machine learning algorithm based on symbolic regression in the\nform of genetic programming that is capable of discovering accurate,\ncomputationally efficient many-body potential models. The key to our approach is\nto explore a hypothesis space of models based on fundamental physical\nprinciples and select models within this hypothesis space based on their\naccuracy, speed, and simplicity. The focus on simplicity reduces the risk of\noverfitting the training data and increases the chances of discovering a model\nthat generalizes well. Our algorithm was validated by rediscovering an exact\nLennard-Jones potential and a Sutton Chen embedded atom method potential from\ntraining data generated using these models. By using training data generated\nfrom density functional theory calculations, we found potential models for\nelemental copper that are simple, as fast as embedded atom models, and capable\nof accurately predicting properties outside of their training set. 
Our approach\nrequires relatively small sets of training data, making it possible to generate\ntraining data using highly accurate methods at a reasonable computational cost.\nWe present our approach, the forms of the discovered models, and assessments of\ntheir transferability, accuracy and speed.", "category": "physics_comp-ph" }, { "text": "GPUMD: A package for constructing accurate machine-learned potentials\n and performing highly efficient atomistic simulations: We present our latest advancements of machine-learned potentials (MLPs) based\non the neuroevolution potential (NEP) framework introduced in [Fan et al.,\nPhys. Rev. B 104, 104309 (2021)] and their implementation in the open-source\npackage GPUMD. We increase the accuracy of NEP models both by improving the\nradial functions in the atomic-environment descriptor using a linear\ncombination of Chebyshev basis functions and by extending the angular\ndescriptor with some four-body and five-body contributions as in the atomic\ncluster expansion approach. We also detail our efficient implementation of the\nNEP approach in graphics processing units as well as our workflow for the\nconstruction of NEP models, and we demonstrate their application in large-scale\natomistic simulations. By comparing to state-of-the-art MLPs, we show that the\nNEP approach not only achieves above-average accuracy but also is far more\ncomputationally efficient. These results demonstrate that the GPUMD package is\na promising tool for solving challenging problems requiring highly accurate,\nlarge-scale atomistic simulations. To enable the construction of MLPs using a\nminimal training set, we propose an active-learning scheme based on the latent\nspace of a pre-trained NEP model. 
Finally, we introduce three separate Python\npackages, GPYUMD, CALORINE, and PYNEP, which enable the integration of GPUMD\ninto Python workflows.", "category": "physics_comp-ph" }, { "text": "An Ising Model Approach to Malware Epidemiology: We introduce an Ising approach to study the spread of malware. The Ising\nspins up and down are used to represent two states--online and offline--of the\nnodes in the network. Malware is allowed to propagate amongst online nodes and\nthe rate of propagation was found to increase with data traffic. For a more\nefficient network, the spread of infection is much slower; while for a\ncongested network, infection spreads quickly.", "category": "physics_comp-ph" }, { "text": "A Load Balance Strategy for Hybrid Particle-Mesh Methods: We present a load balancing strategy for hybrid particle-mesh methods that is\nbased on domain decomposition and element-local time measurement. This new\nstrategy is compared to our previous approach, which assumes a constant\nweighting factor for each particle to determine the computational load. The\ntimer-based load balancing is applied to a plasma expansion simulation. The\nperformance of the new algorithm is compared to results presented in the past\nand a significant improvement in terms of computational efficiency is shown.", "category": "physics_comp-ph" }, { "text": "Objective Methods for Assessing Models for Wildfire Spread: Models for wildfires must be stochastic if their ability to represent\nwildfires is to be objectively assessed. 
The need for models to be stochastic\nemerges naturally from the physics of the fire, and methods for assessing fit\nare constructed to exploit information found in the time evolution of the burn\nregion.", "category": "physics_comp-ph" }, { "text": "An adaptive timestepping methodology for particle advance in coupled\n CFD-DEM simulations: An adaptive integration technique for time advancement of particle motion in\nthe context of coupled computational fluid dynamics (CFD) - discrete element\nmethod (DEM) simulations is presented in this work. CFD-DEM models provide an\naccurate description of multiphase physical systems where a granular phase\nexists in an underlying continuous medium. The time integration of the granular\nphase in these simulations presents unique computational challenges due to large\nvariations in time scales associated with particle collisions. The algorithm\npresented in this work uses a local time stepping approach to resolve\ncollisional time scales for only a subset of particles that are in close\nproximity to potential collision partners, thereby resulting in substantial\nreduction of computational cost. This approach is observed to be 2-3X faster\nthan traditional explicit methods for problems that involve both dense and\ndilute regions, while maintaining the same level of accuracy.", "category": "physics_comp-ph" }, { "text": "The critical role of hot carrier cooling in optically excited structural\n transitions: The hot carrier cooling occurs in most photoexcitation-induced phase\ntransitions (PIPTs), but its role has often been neglected in many theoretical\nsimulations as well as in proposed mechanisms. Here, by including the\npreviously ignored hot carrier cooling in real-time time-dependent density\nfunctional theory (rt-TDDFT) simulations, we investigated the role of hot\ncarrier cooling in PIPTs. 
Taking IrTe2 as an example, we reveal that the\ncooling of hot electrons from the higher energy levels of spatially extended\nstates to the lower energy levels of the localized Ir-Ir dimer antibonding\nstates remarkably strengthens the atomic driving forces and enhances the atomic\nkinetic energy. These two factors combine to dissolve the Ir-Ir dimers on a\ntimescale near the limit of atomic motions, thus initiating a deterministic\nkinetic phase transition. We further demonstrate that the subsequent cooling\ninduces nonradiative recombination of photoexcited electrons and holes, leading\nto the ultrafast recovery of the Ir-Ir dimers observed experimentally. These\nfindings provide a complete picture of the atomic dynamics in optically excited\nstructural phase transitions.", "category": "physics_comp-ph" }, { "text": "Imaginary time propagation code for large-scale two-dimensional\n eigenvalue problems in magnetic fields: We present a code for solving the single-particle, time-independent\nSchr\\"odinger equation in two dimensions. Our program utilizes the imaginary\ntime propagation (ITP) algorithm, and it includes the most recent developments\nin the ITP method: the arbitrary order operator factorization and the exact\ninclusion of a (possibly very strong) magnetic field. Our program is able to\nsolve thousands of eigenstates of a two-dimensional quantum system in\nreasonable time with commonly available hardware. The main motivation behind\nour work is to allow the study of highly excited states and energy spectra of\ntwo-dimensional quantum dots and billiard systems with a single versatile code,\ne.g., in quantum chaos research. 
In our implementation we emphasize a modern\nand easily extensible design, simple and user-friendly interfaces, and an\nopen-source development philosophy.", "category": "physics_comp-ph" }, { "text": "The Transferability Limits of Static Benchmarks: Every practical method to solve the Schr\\"odinger equation for interacting\nmany-particle systems introduces approximations. Such methods are therefore\nplagued by systematic errors. For computational chemistry, it is decisive to\nquantify the specific error for some system under consideration. Traditionally,\nthe primary resource for such an error assessment has been benchmarking\nresults, usually taken from the literature. However, their transferability to a\nspecific molecular system, and hence the reliability of the traditional\napproach, always remains uncertain to some degree. In this communication, we\nelaborate on the shortcomings of this traditional way of static benchmarking by\nexploiting statistical analyses using the example of one of the largest quantum\nchemical benchmark sets available. We demonstrate the uncertainty of error\nestimates in the light of the choice of reference data selected for a benchmark\nstudy. To alleviate the issues with static benchmarks, we advocate relying\ninstead on a rolling and system-focused approach for rigorously quantifying the\nuncertainty of a quantum chemical result.", "category": "physics_comp-ph" }, { "text": "A unified algorithm for interfacial flows with incompressible and\n compressible fluids: The majority of available numerical algorithms for interfacial two-phase\nflows either treat both fluid phases as incompressible (constant density) or\ntreat both phases as compressible (variable density). This presents a\nlimitation for the prediction of many two-phase flows, such as subsonic fuel\ninjection, as treating both phases as compressible is computationally expensive\ndue to the very stiff pressure-density-temperature coupling of liquids. 
A\nframework with the capability of treating one phase compressible and the other\nphase incompressible, therefore, has a significant potential to improve the\ncomputational performance and still capture all important physical mechanisms.\nWe propose a numerical algorithm that can simulate interfacial flows in all\nMach number regimes, ranging from $M=0$ to $M > 1$, including interfacial flows\nin which compressible and incompressible fluids interact, within the same\npressure-based framework and conservative finite-volume discretisation. For\ninterfacial flows with only incompressible fluids or with only compressible\nfluids, the proposed pressure-based algorithm and finite-volume discretisation\nreduce to numerical frameworks that have already been presented in the\nliterature. Representative test cases are used to validate the proposed\nalgorithm, including mixed compressible-incompressible interfacial flows with\nacoustic waves, shock waves and rarefaction fans.", "category": "physics_comp-ph" }, { "text": "Simulating both parity sectors of the Hubbard Model with Tensor Networks: Tensor networks are a powerful tool to simulate a variety of different\nphysical models, including those that suffer from the sign problem in Monte\nCarlo simulations. The Hubbard model on the honeycomb lattice with non-zero\nchemical potential is one such problem. Our method is based on projected\nentangled pair states (PEPS) using imaginary time evolution. We demonstrate\nthat it provides accurate estimators for the ground state of the model,\nincluding cases where Monte Carlo simulations fail miserably. In particular it\nshows near-optimal, that is, linear scaling in lattice size. We also present\na novel approach to directly simulate the subspace with an odd number of\nfermions. It allows one to independently determine the ground state in both\nsectors. Without a chemical potential this corresponds to half filling and the\nlowest energy state with one additional electron or hole. 
We identify several\nstability issues, such as degenerate ground states and large single particle\ngaps, and provide possible fixes.", "category": "physics_comp-ph" }, { "text": "Gaussian mixture model clustering algorithms for the analysis of\n high-precision mass measurements: The development of the phase-imaging ion-cyclotron resonance (PI-ICR)\ntechnique for use in Penning trap mass spectrometry (PTMS) increased the speed\nand precision with which PTMS experiments can be carried out. In PI-ICR, data\nsets of the locations of individual ion hits on a detector are created showing\nhow ions cluster together into spots according to their cyclotron frequency.\nIdeal data sets would consist of a single, 2D-spherical spot with no other\nnoise, but in practice data sets typically contain multiple spots,\nnon-spherical spots, or significant noise, all of which can make determining\nthe locations of spot centers non-trivial. A method for assigning groups of\nions to their respective spots and determining the spot centers is therefore\nessential for further improving precision and confidence in PI-ICR experiments.\nWe present the class of Gaussian mixture model (GMM) clustering algorithms as\nan optimal solution. We show that on simulated PI-ICR data, several types of\nGMM clustering algorithms perform better than other clustering algorithms over\na variety of typical scenarios encountered in PI-ICR. 
The mass spectra of\n$^{163}\\text{Gd}$, $^{163m}\\text{Gd}$, $^{162}\\text{Tb}$, and\n$^{162m}\\text{Tb}$ measured using PI-ICR at the Canadian Penning trap mass\nspectrometer were checked using GMMs, producing results that were in close\nagreement with the previously published values.", "category": "physics_comp-ph" }, { "text": "A space-time smooth artificial viscosity method with wavelet noise\n indicator and shock collision scheme, Part 1: the 1-D case: In this first part of two papers, we extend the C-method developed in [40]\nfor adding localized, space-time smooth artificial viscosity to nonlinear\nsystems of conservation laws that propagate shock waves, rarefaction waves, and\ncontact discontinuities in one space dimension. For gas dynamics, the C-method\ncouples the Euler equations to a scalar reaction-diffusion equation, whose\nsolution $C$ serves as a space-time smooth artificial viscosity indicator.\n The purpose of this paper is the development of a high-order numerical\nalgorithm for shock-wall collision and bounce-back. Specifically, we generalize\nthe original C-method by adding a new collision indicator, which naturally\nactivates during shock-wall collision. Additionally, we implement a new\nhigh-frequency wavelet-based noise detector together with an efficient and\nlocalized noise removal algorithm. To test the methodology, we use a highly\nsimplified WENO-based discretization scheme. We show that our scheme improves\nthe order of accuracy of our WENO algorithm, handles extremely strong\ndiscontinuities (ranging up to nine orders of magnitude), allows for shock\ncollision and bounce back, and removes high frequency noise. The causes of the\nwell-known \"wall heating\" phenomenon are discussed, and we demonstrate that\nthis particular pathology can be effectively treated in the framework of the\nC-method. 
This method is generalized to two space dimensions in the second part\nof this work [41].", "category": "physics_comp-ph" }, { "text": "Inferring Hidden Symmetries of Exotic Magnets from Detecting Explicit\n Order Parameters: An unconventional magnet may be mapped onto a simple ferromagnet by the\nexistence of a high-symmetry point. Knowledge of conventional ferromagnetic\nsystems may then be carried over to provide insight into more complex orders.\nHere we demonstrate how an unsupervised and interpretable machine-learning\napproach can be used to search for potential high-symmetry points in\nunconventional magnets without any prior knowledge of the system. The method is\napplied to the classical Heisenberg-Kitaev model on a honeycomb lattice, where\nour machine learns the transformations that manifest its hidden $O(3)$\nsymmetry, without using data of these high-symmetry points. Moreover, we\nclarify that, in contrast to the stripy and zigzag orders, a set of $D_2$ and\n$D_{2h}$ ordering matrices provides a more complete description of the\nmagnetization in the Heisenberg-Kitaev model. In addition, our machine also\nlearns the local constraints at the phase boundaries, which manifest a\nsubdimensional symmetry. This paper highlights the importance of explicit order\nparameters to many-body spin systems and the property of interpretability for\nthe physical application of machine-learning techniques.", "category": "physics_comp-ph" }, { "text": "Micromagnetic understanding of stochastic resonance driven by\n spin-transfer torque: In this paper, we employ micromagnetic simulations to study non-adiabatic\nstochastic resonance (NASR) excited by spin-transfer torque in a\nsuper-paramagnetic free layer nanomagnet of a nanoscale spin valve. We find\nthat NASR dynamics involves thermally activated transitions among two static\nstates and a single dynamic state of the nanomagnet and can be well understood\nin the framework of Markov chain rate theory. 
Our simulations show that a\ndirect voltage generated by the spin valve at the NASR frequency is at least\none order of magnitude greater than the dc voltage generated off the NASR\nfrequency. Our computations also reproduce the main experimentally observed\nfeatures of NASR such as the resonance frequency, the temperature dependence\nand the current bias dependence of the resonance amplitude. We propose a simple\ndesign of a microwave signal detector based on NASR driven by spin transfer\ntorque.", "category": "physics_comp-ph" }, { "text": "Systematic Finite-Sampling Inaccuracy in Free Energy Differences and\n Other Nonlinear Quantities: Systematic inaccuracy is inherent in any computational estimate of a\nnon-linear average, such as the free energy difference (Delta-F) between two\nstates or systems, because of the availability of only a finite number of data\nvalues, N. In previous work, we outlined the fundamental statistical\ndescription of this ``finite-sampling error.'' We now give a more complete\npresentation of (i) rigorous general bounds on the free energy and other\nnonlinear averages, which underscore the universality of the phenomenon; (ii)\nasymptotic N->infinity expansions of the average behavior of the\nfinite-sampling error in Delta-F estimates; (iii) illustrative examples of\nlarge-N behavior, both in free-energy and other calculations; and (iv) the\nuniversal, large-N relation between the average finite-sampling error and the\nfluctuation in the error. An explicit role is played by Levy and Gaussian\nlimiting distributions.", "category": "physics_comp-ph" }, { "text": "Fixed-density boundary conditions in overdamped Langevin simulations of\n diffusion in channels: We consider the numerical integration of Langevin equations for particles in\na channel, in the presence of boundary conditions fixing the concentration\nvalues at the ends. 
This kind of boundary condition appears, for instance, when\nconsidering the diffusion of ions in molecular channels between the different\nconcentrations on the two sides of the cellular membrane. For this application the\noverdamped limit of Brownian motion (leading to a first order Langevin\nequation) is most convenient, but in previous works some difficulties\nassociated with this limit were found for the implementation of the boundary\nconditions. We derive here an algorithm that, unlike previous attempts, does\nnot require the simulation of particle reservoirs or the consideration of\nvelocity variables or adjustable parameters. Simulations of Brownian particles\nin simple cases show that results agree perfectly with theory, both for the\nlocal concentration values and for the resulting particle flux in\nnonequilibrium situations. The algorithm is appropriate for the modeling of\nmore complex ionic channels and, in general, for the treatment of analogous\nboundary conditions in other physical models using first order Langevin\nequations. ***This version corrects misprints in two equations of the published\npaper***", "category": "physics_comp-ph" }, { "text": "Iterative frequency-domain seismic wave solvers based on multi-level\n domain-decomposition preconditioners: Frequency-domain full-waveform inversion (FWI) is suitable for long-offset\nstationary-recording acquisition, since reliable subsurface models can be\nreconstructed with a few frequencies and attenuation is easily implemented\nwithout computational overhead. In the frequency domain, wave modelling is a\nHelmholtz-type boundary-value problem which requires solving a large and\nsparse system of linear equations per frequency with multiple right-hand sides\n(sources). This system can be solved with direct or iterative methods. 
While\nthe former are suitable for FWI application on 3D dense OBC acquisitions\ncovering spatial domains of moderate size, the latter should be the approach of\nchoice for sparse node acquisitions covering large domains (more than 50\nmillion unknowns). Fast convergence of iterative solvers for Helmholtz\nproblems remains, however, challenging due to the non-definiteness of the\nHelmholtz operator, hence requiring efficient preconditioners. In this study,\nwe use the Krylov subspace GMRES iterative solver combined with a multi-level\ndomain-decomposition preconditioner. Discretization relies on continuous finite\nelements on unstructured tetrahedral meshes to comply with complex geometries\nand adapt the size of the elements to the local wavelength ($h$-adaptivity). We\nassess the convergence and the scalability of our method with the acoustic 3D\nSEG/EAGE Overthrust model up to a frequency of 20~Hz and discuss its efficiency\nfor multiple right-hand-side processing.", "category": "physics_comp-ph" }, { "text": "Emission spectra of p-Si and p-Si:H models generated by ab initio\n molecular dynamics methods: We created 4 p-Si models and 4 p-Si:H models, all with 50% porosity. The\nmodels contain 32, 108, 256 and 500 silicon atoms with a pore parallel to one\nof the simulational cell axes and a regular cross-section. We obtained the\ndensities of states of our models by means of ab initio computational methods.\nWe wrote a code to simulate the emission spectra of our structures considering\nparticular excitations and decay conditions. After comparing the simulated\nspectra with the experimental results, we observe that the position of the\nmaximum of the emission spectra might be related to the size of the silicon\nbackbone for the p-Si models, as quantum confinement models suggest, and to the\nhydrogen concentration for the p-Si:H structures. 
We conclude that the quantum\nconfinement model can be used to explain the emission of the p-Si structures\nbut, in the case of the p-Si:H models, it is necessary to consider other\ntheories.", "category": "physics_comp-ph" }, { "text": "Explicitly correlated plane waves: Accelerating convergence in periodic\n wavefunction expansions: We present an investigation into the use of an explicitly correlated plane\nwave basis for periodic wavefunction expansions at the level of second-order\nM{\o}ller-Plesset perturbation theory (MP2). The convergence of the electronic\ncorrelation energy with respect to the one-electron basis set is investigated\nand compared to conventional MP2 theory in a finite homogeneous electron gas\nmodel. In addition to the widely used Slater-type geminal correlation factor,\nwe also derive and investigate a novel correlation factor that we term\nYukawa-Coulomb. The Yukawa-Coulomb correlation factor is motivated by analytic\nresults for two electrons in a box and allows for a further improved\nconvergence of the correlation energies with respect to the employed basis set.\nWe find the combination of the infinitely delocalized plane waves and local\nshort-ranged geminals provides a complementary and rapidly convergent basis\nfor the description of periodic wavefunctions. We hope that this approach will\nexpand the scope of discrete wavefunction expansions in periodic systems.", "category": "physics_comp-ph" }, { "text": "Multiple Time Step Integrators in Ab Initio Molecular Dynamics: Multiple time-scale algorithms exploit the natural separation of time-scales\nin chemical systems to greatly improve the efficiency of molecular dynamics\nsimulations.
Although the utility of these methods in systems where the\ninteractions are described by empirical potentials is now well established,\ntheir application to ab initio molecular dynamics calculations has been limited\nby difficulties associated with splitting the ab initio potential into rapidly and\nslowly varying components. Here we show that such a timescale separation is\npossible using two different schemes: one based on fragment decomposition and\nthe other on range separation of the Coulomb operator in the electronic\nHamiltonian. We demonstrate for both water clusters and a solvated hydroxide\nion that multiple time-scale molecular dynamics allows for outer time steps of\n2.5 fs, which are as large as those obtained when such schemes are applied to\nempirical potentials, while still allowing for bonds to be broken and reformed\nthroughout the dynamics. This permits computational speedups of up to 4.4x,\ncompared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5\nfs time step, while maintaining the same energy conservation and accuracy.", "category": "physics_comp-ph" }, { "text": "Fiend -- Finite Element Quantum Dynamics: We present Fiend - a simulation package for the three-dimensional single-particle\ntime-dependent Schr\\\"odinger equation for cylindrically symmetric systems.\nFiend has been designed for the simulation of electron dynamics under\ninhomogeneous vector potentials, such as in nanostructures, but it can also be\nused to study, e.g., nonlinear light-matter interaction in atoms and linear\nmolecules. The light-matter interaction can be included via the minimal\ncoupling principle in its full rigour, beyond the conventional dipole\napproximation. The underlying spatial discretization is based on the finite\nelement method (FEM), and time-stepping is provided either via the\ngeneralized-{\alpha} or Crank-Nicolson methods.
The software is written in\nPython 3.6, and it utilizes state-of-the-art linear algebra and FEM backends\nfor performance-critical tasks. Fiend comes with extensive API\ndocumentation, a user guide, and simulation examples, and allows for easy\ninstallation via Docker or the Python Package Index.", "category": "physics_comp-ph" }, { "text": "Acoustic cloaking: geometric transform, homogenization and a genetic\n algorithm: A general process is proposed to experimentally design anisotropic\ninhomogeneous metamaterials obtained through a change of coordinates in the\nHelmholtz equation. The method is applied to the case of a cylindrical\ntransformation that allows cloaking to be performed. To approximate such complex\nmetamaterials we apply results from the theory of homogenization and combine them\nwith a genetic algorithm. To illustrate the power of our approach, we design\nthree types of cloaks composed of isotropic concentric layers structured with\nthree types of perforations: curved rectangles, split rings and crosses. These\ncloaks have parameters compatible with existing technology and they mimic the\nbehavior of the transformed material. Numerical simulations have been performed\nto qualitatively and quantitatively study the cloaking efficiency of these\nmetamaterials.", "category": "physics_comp-ph" }, { "text": "A hybrid approach to simulate the homogenized irreversible\n elastic-plastic deformations and damage of foams by neural networks: Classically, the constitutive behavior of materials is described either\nphenomenologically or by homogenization approaches. Phenomenological\napproaches are computationally very efficient, but are limited for complex\nnon-linear and irreversible mechanisms. Such complex mechanisms can be\ndescribed well by computational homogenization, but the respective FE$^2$\ncomputations are very expensive.
As an alternative, neural networks have\nbeen proposed for constitutive modeling, using either experiments or\ncomputational homogenization results for training. However, the application of\nthis method to irreversible material behavior is not trivial. The present\ncontribution introduces a hybrid methodology to embed neural networks into the\nestablished framework of rate-independent plasticity. Both the yield function\nand the evolution equations of internal state variables are represented by\nneural networks. The respective training data for a foam material are generated\nfrom RVE simulations under monotonic loading. It is demonstrated that this\nhybrid multi-scale neural network approach (HyMNNA) allows for the efficient\nsimulation of even the anisotropic elastic-plastic behavior of foam structures\nwith coupled anisotropic evolution of damage and non-associated plastic flow.", "category": "physics_comp-ph" }, { "text": "Nonstandard Finite Difference Time Domain (NSFDTD) Method for Solving\n the Schr\u00f6dinger Equation: In this paper, an improvement of the finite difference time domain (FDTD)\nmethod using a non-standard finite difference scheme is presented. The standard\nnumerical scheme for the second derivative in the spatial domain is replaced by\na non-standard numerical scheme. In order to apply the non-standard FDTD\n(NSFDTD) method, first estimates of the eigen-energies of a system are needed;\nthese are computed by the standard FDTD method. These first eigen-energies are then used by the\nNSFDTD method to obtain improved eigen-energies. The NSFDTD method can be\nperformed iteratively, using the resulting eigen-energies, to obtain accurate\nresults.
In this paper, the NSFDTD method is validated for the infinite square\nwell, harmonic oscillator and Morse potentials.", "category": "physics_comp-ph" }, { "text": "Recurrent Localization Networks applied to the Lippmann-Schwinger\n Equation: The bulk of computational approaches for modeling physical systems in\nmaterials science derive from either analytical (i.e. physics based) or\ndata-driven (i.e. machine-learning based) origins. In order to combine the\nstrengths of these two approaches, we advance a novel machine learning approach\nfor solving equations of the generalized Lippmann-Schwinger (L-S) type. In this\nparadigm, a given problem is converted into an equivalent L-S equation and\nsolved as an optimization problem, where the optimization procedure is\ncalibrated to the problem at hand. As part of a learning-based loop unrolling,\nwe use a recurrent convolutional neural network to iteratively solve the\ngoverning equations for a field of interest. This architecture leverages the\ngeneralizability and computational efficiency of machine learning approaches,\nbut also permits a physics-based interpretation. We demonstrate our learning\napproach on the two-phase elastic localization problem, where it achieves\nexcellent accuracy on the predictions of the local (i.e., voxel-level) elastic\nstrains. Since numerous governing equations can be converted into an equivalent\nL-S form, the proposed architecture has potential applications across a range\nof multiscale materials phenomena.", "category": "physics_comp-ph" }, { "text": "The POOL Data Storage, Cache and Conversion Mechanism: The POOL data storage mechanism is intended to satisfy the needs of the LHC\nexperiments to store and analyze the data from the detector response of\nparticle collisions at the LHC proton-proton collider. Both the data rate and\nthe data volumes will differ greatly from past experience.
The POOL data\nstorage mechanism is intended to be able to cope with the experiments'\nrequirements by applying a flexible multi-technology data persistency mechanism.\nThe developed technology-independent approach is flexible enough to adopt new\ntechnologies, take advantage of existing schema evolution mechanisms and allow\nusers to access data in a technology-independent way. The framework consists of\nseveral components, which can be individually adopted and integrated into\nexisting experiment frameworks.", "category": "physics_comp-ph" }, { "text": "Leveraging Neural Networks with Attention Mechanism for High-Order\n Accuracy in Charge Density in Particle-in-Cell Simulation: In this research, we introduce an innovative three-network architecture that\ncomprises an encoder-decoder framework with an attention mechanism. The\narchitecture comprises a 1st-order-pre-trainer, a 2nd-order-improver, and a\ndiscriminator network, designed to boost the order of accuracy of the charge density\nin Particle-In-Cell (PIC) simulations. We acquire our training data from our\nself-developed 3-D PIC code, JefiPIC. The training procedure starts with the\n1st-order-pre-trainer, which is trained on a large dataset to predict charge\ndensities based on the provided particle positions. Subsequently, we fine-tune\nthe 1st-order-pre-trainer, whose predictions then serve as inputs to the\n2nd-order-improver. Meanwhile, we train the 2nd-order-improver and\ndiscriminator network using a smaller volume of 2nd-order data, thereby\nmanaging to generate charge density with 2nd-order accuracy. In the concluding\nphase, we replace JefiPIC's conventional particle interpolation process with\nour trained neural network. Our results demonstrate that the neural\nnetwork-enhanced PIC simulation can effectively simulate plasmas with\n2nd-order accuracy.
This highlights the advantage of our proposed neural\nnetwork: it can achieve higher-accuracy data with fewer real labels.", "category": "physics_comp-ph" }, { "text": "Structural Flyby Characterization of Nanoporosity: Recently, Ferreira da Silva et al. [3] have performed a gradient pattern\nanalysis of a canonical sample set (CSS) of scanning force microscopy (SFM)\nimages of p-Si. They applied the so-called Gradient Pattern Analysis to images\nof three typical p-Si samples distinguished by different absorption energy\nlevels and aspect ratios. Taking into account the measures of spatial\nasymmetric fluctuations, they interpreted the global porosity not only in terms\nof the amount of roughness, but rather in terms of the structural complexity\n(e.g., walls and fine structures such as slots). This analysis has been adapted in\norder to operate in an OpenGL flyby environment (the StrFB code), whose\napplication gives a numerical characterization of the structure during the\nflyby in real time. Using this analysis we compare the levels of asymmetric\nfragmentation of active porosity related to different materials such as p-Si and\n\"porous diamond-like\" carbon. In summary, we have shown that the gradient\npattern analysis technique in a flyby environment is a reliable and sensitive\nmethod to investigate, qualitatively and quantitatively, the complex morphology\nof active nanostructures.
Rather than work with analytical expressions for\n$\mathcal{P}(t)$ obtained from solution of the corresponding continuum model,\nwhich, when available, take the form of an infinite series of exponential terms,\nsingle-term low-parameter models are commonly proposed to approximate\n$\mathcal{P}(t)$ to ease the process of fitting, characterising and\ninterpreting experimental release data. Previous models of this form have\nmainly been developed for circular and spherical systems with an absorbing\nboundary. In this work, we consider circular, spherical, annular and\nspherical-shell systems with absorbing, reflecting and/or semi-absorbing\nboundaries. By proposing a moment matching approach, we develop several simple\none- and two-parameter exponential and Weibull models for $\mathcal{P}(t)$, each\ninvolving parameters that depend explicitly on the system dimension,\ndiffusivity, geometry and boundary conditions. The developed models, despite\ntheir simplicity, agree very well with values of $\mathcal{P}(t)$ obtained from\nstochastic model simulations and continuum model solutions.", "category": "physics_comp-ph" }, { "text": "Direct Calculation of Self-Gravitational Force for Infinitesimally Thin\n Gaseous Disks Using Adaptive Mesh Refinement: Yen et al. (2012) advanced a direct approach for the calculation of\nself-gravitational force to second order accuracy based on uniform grid\ndiscretization. This method improves the accuracy of N-body calculations by\nusing exact integration of kernel functions and employing the Fast Fourier\nTransform (FFT) to reduce the complexity of computation to nearly linear. This\ndirect approach is free of artificial boundary conditions; however, its\napplicability is limited by the uniform discretization of grids. We report here\nan advancement in the direct method with the implementation of adaptive mesh\nrefinement (AMR) while maintaining second-order accuracy, which breaks the\nbarrier set by uniform grid discretization.
The adoption of graphics\nprocessing units (GPUs) can significantly speed up the computation and makes the\napplication of this method to astrophysical systems such as gaseous disk galaxies and\nprotoplanetary disks possible.", "category": "physics_comp-ph" }, { "text": "Efficient Monte Carlo Calculations of the One-Body Density: An alternative Monte Carlo estimator for the one-body density rho(r) is\npresented. This estimator has a simple form and can be readily used in any type\nof Monte Carlo simulation. Comparisons with the usual regularization of the\ndelta-function on a grid show that the statistical errors are greatly reduced.\nFurthermore, our expression allows accurate calculations of the density at any\npoint in space, even in the regions never visited during the Monte Carlo\nsimulation. The method is illustrated with the computation of accurate\nVariational Monte Carlo electronic densities for the Helium atom (1D curve) and\nfor the water dimer (3D grid containing up to 51x51x51=132651 points).", "category": "physics_comp-ph" }, { "text": "Hardware Random number Generator for cryptography: One of the key requirements of many schemes is that of random numbers.\nSequences of random numbers are used at several stages of a standard\ncryptographic protocol. A simple example is the Vernam cipher, where a string\nof random numbers is added to the message string to generate the encrypted code. It\nis represented as $C=M \oplus K $ where $M$ is the message, $K$ is the key and\n$C$ is the ciphertext. It has been mathematically shown that this simple scheme\nis unbreakable if the key $K$ is as long as the message $M$ and is used only once. For a good\ncryptosystem, the security of the cryptosystem should not be based on keeping the\nalgorithm secret but solely on keeping the key secret. The quality and\nunpredictability of secret data is critical to securing communication by modern\ncryptographic techniques.
Generation of such data for cryptographic purposes\ntypically requires an unpredictable physical source of random data. In this\nmanuscript, we present studies of three different methods for producing random\nnumbers. We have tested them by studying their frequency and correlation properties,\nas well as by using the test suite from NIST.", "category": "physics_comp-ph" }, { "text": "SPICE model of memristive devices with threshold: Although memristive devices with threshold voltages are the norm rather than\nthe exception in experimentally realizable systems, their SPICE programming is\nnot yet common. Here, we show how to implement such systems in the SPICE\nenvironment. Specifically, we present SPICE models of a popular\nvoltage-controlled memristive system specified by five different parameters for\nthe PSPICE and NGSPICE circuit simulators. We expect this implementation to find\nwidespread use in circuit design and testing.", "category": "physics_comp-ph" }, { "text": "High-precision regressors for particle physics: Monte Carlo simulations of physics processes at particle colliders like the\nLarge Hadron Collider at CERN take up a major fraction of the computational\nbudget. For some simulations, a single data point takes seconds, minutes, or\neven hours to compute from first principles. Since the necessary number of data\npoints per simulation is on the order of $10^9$ - $10^{12}$, machine learning\nregressors can be used in place of physics simulators to significantly reduce\nthis computational burden. However, this task requires high-precision\nregressors that can deliver data with relative errors of less than $1\%$ or\neven $0.1\%$ over the entire domain of the function. In this paper, we develop\noptimal training strategies and tune various machine learning regressors to\nsatisfy the high-precision requirement. We leverage symmetry arguments from\nparticle physics to optimize the performance of the regressors.
Inspired by\nResNets, we design a Deep Neural Network with skip connections that outperforms\nfully connected Deep Neural Networks. We find that at lower dimensions, boosted\ndecision trees far outperform neural networks, while at higher dimensions neural\nnetworks perform significantly better. We show that these regressors can speed\nup simulations by a factor of $10^3$ - $10^6$ over the first-principles\ncomputations currently used in Monte Carlo simulations. Additionally, using\nsymmetry arguments derived from particle physics, we reduce the number of\nregressors necessary for each simulation by an order of magnitude. Our work can\nsignificantly reduce the training and storage burden of Monte Carlo simulations\nat current and future collider experiments.", "category": "physics_comp-ph" }, { "text": "GPGPU Acceleration of All-Electron Electronic Structure Theory Using\n Localized Numeric Atom-Centered Basis Functions: We present an implementation of all-electron density-functional theory for\nmassively parallel GPGPU-based platforms, using localized atom-centered basis\nfunctions and real-space integration grids. Special attention is paid to domain\ndecomposition of the problem on non-uniform grids, which enables compute- and\nmemory-parallel execution across thousands of nodes for real-space operations,\ne.g. the update of the electron density, the integration of the real-space\nHamiltonian matrix, and the calculation of Pulay forces. To assess the performance\nof our GPGPU implementation, we performed benchmarks on three different\narchitectures using a 103-material test set. We find that operations which rely\non dense serial linear algebra show dramatic speedups from GPGPU acceleration:\nin particular, SCF iterations including force and stress calculations exhibit\nspeedups ranging from 4.5 to 6.6.
For the architectures and problem types\ninvestigated here, this translates to an expected overall speedup of between 3 and 4\nfor the entire calculation (including non-GPU accelerated parts), for problems\nfeaturing several tens to hundreds of atoms. Additional calculations for a\n375-atom Bi$_2$Se$_3$ bilayer show that the present GPGPU strategy scales for\nlarge-scale distributed-parallel simulations.", "category": "physics_comp-ph" }, { "text": "Visual, user-interactive generation of bond networks in 3D particle\n configurations: We present a new program able to perform visual structural analysis on 3D\nparticle systems called PASYVAT (PArticle SYstem Visual Analysis Tool). More\nspecifically, it can select multiple interparticle distance ranges from a\nradial distribution function (RDF) plot and display them in 3D as bonds between\nthe particles falling within the selected distance range, thus generating a\nnetwork of bonds. This software can be used with any data set representing a\nsystem of points or other objects having a well-defined center of mass or\ngeometric center in 3D space. In this article we describe the program and its\ninternal structure, with emphasis on its applicability in the study of certain\nparticle configurations, obtained from classical molecular dynamics simulation\nin condensed matter physics.
However, in\nthis paper, we develop a different approach that expresses a formal solution of the\nTD-KS equation, and we prove that it is possible to solve the TD-KS equation\nefficiently and accurately by means of a simple numerical scheme without the\nuse of any self-consistent loops.", "category": "physics_comp-ph" }, { "text": "Sampling Free Energy Surfaces as Slices by Combining Umbrella Sampling\n and Metadynamics: Metadynamics (MTD) is a very powerful technique to sample high-dimensional\nfree energy landscapes, and due to its self-guiding property, the method has\nbeen successful in studying complex reactions and conformational changes. MTD\nsampling is based on filling the free energy basins by biasing potentials, and\nthus for cases with flat, broad and unbound free energy wells, the\ncomputational time to sample them becomes very large. To alleviate this\nproblem, we combine the standard Umbrella Sampling (US) technique with MTD to\nsample orthogonal collective variables (CVs) in a simultaneous way. Within this\nscheme, we construct the equilibrium distribution of CVs from biased\ndistributions obtained from independent MTD simulations with umbrella\npotentials. Reweighting is carried out by a procedure that combines US\nreweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram\nAnalysis Method (WHAM). The approach is ideal for a controlled sampling of a CV\nin a MTD simulation, making it computationally efficient in sampling flat,\nbroad and unbound free energy surfaces. This technique also allows for a\ndistributed sampling of a high-dimensional free energy surface, further\nincreasing the computational efficiency in sampling. We demonstrate the\napplication of this technique in sampling high-dimensional surfaces for various\nchemical reactions using ab initio and QM/MM hybrid molecular dynamics\nsimulations.
Further, in order to carry out MTD bias reweighting for computing\nforward reaction barriers in ab initio or QM/MM simulations, we propose a\ncomputationally affordable approach that does not require recrossing\ntrajectories.", "category": "physics_comp-ph" }, { "text": "A family of single-node second-order boundary schemes for the lattice\n Boltzmann method: In this work, we propose a family of single-node second-order boundary\nschemes for the lattice Boltzmann method with general collision terms. The\nconstruction of the schemes is quite universal and simple; it does not involve\nconcrete lattice Boltzmann models and uses the half-way bounce-back rule as a\ncentral step. The constructed schemes are all second-order accurate if so is\nthe bounce-back rule. In addition, the proposed schemes have good stability\nthanks to convex combinations. The accuracy and stability of several specific\nschemes are numerically validated for multiple-relaxation-time models in both\n2D and 3D.", "category": "physics_comp-ph" }, { "text": "Dependence of the Compensation Temperature in Ferrimagnetic Multilayer\n Models of Type $3(S_{1-x} σ_x)/σ$: We investigate numerically the behavior of the compensation temperature,\n$T_{comp}$, in ferrimagnetic multilayers of the type $3(S_{1-x}\n\sigma_x)/\sigma$ with respect to the increase of the spin concentration\n$\sigma_x$. The spins are randomly distributed in a mixture of spins $S = 0, \pm 1$\nand $\sigma = \pm 1/2$, with ferromagnetic interaction between spins of the\nsame type, and antiferromagnetic interaction for spins of different types. We\nhave found that $T_{comp}$ decreases slowly with the increase of $x$, until\n$\sigma_x$ represents a little more than 40\% of the spin mixture.
For higher\nconcentration values $T_{comp}$ drops abruptly and disappears, as predicted by\nexperimental results.", "category": "physics_comp-ph" }, { "text": "Channel thickness optimization for ultra thin and 2D chemically doped\n TFETs: 2D material based tunnel FETs are among the most promising candidates for low\npower electronics applications since they offer ultimate gate control and high\ncurrent drives that are achievable through small tunneling distances during the\ndevice operation. The ideal device is characterized by a minimized tunneling\ndistance. However, devices with the thinnest possible body do not necessarily\nprovide the best performance. For example, reducing the channel thickness\nincreases the depletion width in the source, which can be a significant part of\nthe total tunneling distance. Hence, it is important to determine the optimum\nchannel thickness for each channel material individually. In this work, we\nstudy the optimum channel thickness for three channel materials: WSe$_{2}$,\nBlack Phosphorus (BP), and InAs using full-band self-consistent quantum\ntransport simulations. To identify the ideal channel thickness for each\nmaterial at a specific doping density, a new analytic model is proposed and\nbenchmarked against the numerical simulations.", "category": "physics_comp-ph" }, { "text": "Macroscopic Electromagnetic Response of Arbitrarily Shaped Spatially\n Dispersive Bodies formed by Metallic Wires: In media with strong spatial dispersion the electric displacement vector and\nthe electric field are typically linked by a partial differential equation in\nthe bulk region.
The objective of this work is to highlight that in the\nvicinity of an interface the relation between the macroscopic fields cannot be\nuniquely determined from the bulk response of the involved materials, but\nrequires instead the knowledge of internal degrees of freedom of the materials.\nWe derive such a relation for the particular case of \"wire media\", and describe a\nnumerical formalism that enables characterizing the electromagnetic response of\narbitrarily shaped spatially dispersive bodies formed by arrays of crossed\nwires. The possibility of concentrating the electromagnetic field in a narrow\nspot by tapering a metamaterial waveguide is discussed.", "category": "physics_comp-ph" }, { "text": "Distributed deep reinforcement learning for simulation control: Several applications in the scientific simulation of physical systems can be\nformulated as control/optimization problems. The computational models for such\nsystems generally contain hyperparameters, which control solution fidelity and\ncomputational expense. The tuning of these parameters is non-trivial and the\ngeneral approach is to manually `spot-check' for good combinations. This is\nbecause optimal hyperparameter configuration search becomes impractical when\nthe parameter space is large and when the parameters may vary dynamically. To address\nthis issue, we present a framework based on deep reinforcement learning (RL) to\ntrain a deep neural network agent that controls a model solve by varying\nparameters dynamically. First, we validate our RL framework on the problem of\ncontrolling chaos in chaotic systems by dynamically changing the parameters of\nthe system. Subsequently, we illustrate the capabilities of our framework in\naccelerating the convergence of a steady-state CFD solver by automatically\nadjusting the relaxation factors of the discretized Navier-Stokes equations during\nrun-time.
The results indicate that the run-time control of the relaxation\nfactors by the learned policy leads to a significant reduction in the number of\niterations for convergence compared to the random selection of the relaxation\nfactors. Our results point to potential benefits from learning adaptive\nhyperparameter strategies across different geometries and boundary\nconditions, with implications for reduced computational campaign expenses.\n\footnote{Data and codes available at\n\url{https://github.com/Romit-Maulik/PAR-RL}}", "category": "physics_comp-ph" }, { "text": "A finite-difference lattice Boltzmann model with second-order accuracy\n of time and space for incompressible flow: In this paper, a finite-difference lattice Boltzmann method with\nsecond-order accuracy in time and space (T2S2-FDLBM) is proposed. In this\nmethod, a new simplified two-stage fourth-order time-accurate discretization\napproach is applied to construct the time marching scheme, and the spatial gradient\noperator is discretized by a mixed difference scheme to maintain second-order\naccuracy in both time and space. It is shown that the previous\nfinite-difference lattice Boltzmann method (FDLBM) proposed by Guo [1] is a\nspecial case of the T2S2-FDLBM. Through a von Neumann analysis, the stability\nof the method is analyzed and two specific T2S2-FDLBMs are discussed. The two\nT2S2-FDLBMs are applied to simulate some incompressible flows on\nnon-uniform grids. Compared with the previous FDLBM and SLBM, the T2S2-FDLBM is\nmore accurate and more stable. The Courant-Friedrichs-Lewy (CFL)\nnumber in our method can be up to 0.9, which also significantly\nimproves the computational efficiency.
In this paper, we consider a new\nscenario: discovering governing equations from streaming data. Current methods,\nwhich treat the measurements as a whole, struggle to discover governing\ndifferential equations in this setting and thus fail to handle the task. We propose an\nonline modeling method capable of handling samples one by one sequentially by\nmodeling streaming data instead of processing the entire dataset. The proposed\nmethod performs well in discovering ordinary differential equations (ODEs) and\npartial differential equations (PDEs) from streaming data. Evolving systems\nchange over time, and the governing equations change with the system status.\nThus, finding the exact change points is critical. The measurements generated\nfrom a changed system are distributed differently than before; hence, the\ndifference can be identified by the proposed method. Our proposal is competitive in identifying\nthe change points and discovering governing differential equations in three\nhybrid systems and two switching linear systems.", "category": "physics_comp-ph" }, { "text": "Numerical differentiation: local versus global methods: In the context of the analysis of measured data, one is often faced with the\ntask of differentiating data numerically. Typically, this occurs when measured\ndata are concerned or when data are evaluated numerically during the evolution of\npartial or ordinary differential equations. Usually, one does not worry about the\naccuracy of the resulting estimates of derivatives because modern computers are\nassumed to be accurate to many digits. But measurements yield intrinsic errors,\nwhich are often much less accurate than the limit of the machine used, and\nthere exists the effect of ``loss of significance'', well known in numerical\nmathematics and computational physics. The problem occurs primarily in\nnumerical subtraction, and clearly, the estimation of derivatives involves the\napproximation of differences.
In this article, we discuss several techniques\nfor the estimation of derivatives. As a novel aspect, we divide the techniques\ninto local and global methods, and explain their respective shortcomings. We\nhave developed a general scheme for global methods, and illustrate our ideas\nwith spline smoothing and spectral smoothing. The results from these\nlesser-known techniques are compared with those from local methods. As typical\nrepresentatives of the latter, we chose Savitzky-Golay filtering and finite\ndifferences. Two basic quantities are used to characterize the results: the\nvariance of the difference between the true derivative and its estimate, and,\nas an important new characteristic, the smoothness of the estimate. We apply\nthe different techniques to numerically produced data and demonstrate the\napplication to data from an aeroacoustic experiment. As a result, we find that\nglobal methods are generally preferable if a smooth process is considered. For\nrough estimates, local methods work acceptably well.", "category": "physics_comp-ph" }, { "text": "An open and parallel multiresolution framework using block-based\n adaptive grids: A numerical approach for solving evolutionary partial differential equations\nin two and three space dimensions on block-based adaptive grids is presented.\nThe numerical discretization is based on high-order, central finite-differences\nand explicit time integration. Grid refinement and coarsening are triggered by\nmultiresolution analysis, i.e. thresholding of wavelet coefficients, which\nallows controlling the precision of the adaptive approximation of the solution\nwith respect to uniform grid computations. The implementation of the scheme is\nfully parallel using MPI with a hybrid data structure. Load balancing relies on\nspace-filling curve techniques.
Validation tests for 2D advection equations\nallow us to assess the precision and performance of the developed code.\nComputations of the compressible Navier-Stokes equations for a temporally\ndeveloping 2D mixing layer illustrate the properties of the code for nonlinear\nmulti-scale problems. The code is open source.", "category": "physics_comp-ph" }, { "text": "New method of solving the many-body Schr\u00f6dinger equation: A method of solving the Schr\\\"{o}dinger equation based on the use of constant\nparticle-particle interaction potential surfaces (IPS) is proposed. The\nmany-body wave function is presented in a configuration interaction form, with\ncoefficients depending on the total interaction potential. The corresponding\nset of linear ordinary differential equations for the coefficients was\ndeveloped. To reduce the computational work, a hierarchy of approximations\nbased on interaction potential surfaces of a part of the particle system was\nworked out. The solution of a simple exactly solvable model and of He-like ions\nproves that this method is more accurate than the conventional configuration\ninteraction method and demonstrates better convergence as the basis set is\nenlarged.", "category": "physics_comp-ph" }, { "text": "Energy minimization of 2D incommensurate heterostructures: We derive and analyze a novel approach for modeling and computing the\nmechanical relaxation of incommensurate 2D heterostructures. Our approach\nparametrizes the relaxation pattern by the compact local configuration space\nrather than real space, thus bypassing the need for the standard supercell\napproximation and giving a true aperiodic atomistic configuration. Our model\nextends the computationally accessible regime of weakly coupled bilayers with\nsimilar orientations or lattice spacing, for example materials with a small\nrelative twist where the widely studied large-scale moire patterns arise.
Our\nmodel also makes possible the simulation of multi-layers for which no\ninterlayer empirical atomistic potential exists, such as those composed of MoS2\nlayers, and more generally makes possible the simulation of the relaxation of\nmulti-layer heterostructures for which a planar moire pattern does not exist.", "category": "physics_comp-ph" }, { "text": "COMSOL implementation of the H-$\u03c6$-formulation with thin cuts for\n modeling superconductors with transport currents: Despite the acclaimed success of the magnetic field (H) formulation for\nmodeling the electromagnetic behavior of superconductors with the finite\nelement method, the use of vector-dependent variables in non-conducting domains\nleads to unnecessarily long computation times. In order to solve this issue, we\nhave recently shown how to use a magnetic scalar potential together with the\nH-formulation in the COMSOL Multiphysics environment to efficiently and\naccurately solve for the magnetic field surrounding superconducting domains.\nHowever, from the definition of the magnetic scalar potential, the\nnon-conducting domains must be made simply connected in order to obey Ampere's\nlaw. In this work, we use thin cuts to apply a discontinuity in $\\phi$ and make\nthe non-conducting domains simply connected. This approach is shown to be\neasily implementable in the COMSOL Multiphysics finite element program, already\nwidely used by the applied superconductivity community. We simulate three\ndifferent models in 2-D and 3-D using superconducting filaments and tapes, and\nshow that the results are in very good agreement with the H-A and\nH-formulations. 
Finally, we compare the computation times between the\nformulations, showing that the H-$\phi$-formulation can be up to seven times\nfaster than the standard H-formulation in certain applications of interest.", "category": "physics_comp-ph" }, { "text": "Minimal domain size necessary to simulate the field enhancement factor\n numerically with specified precision: In the literature on field emission, finite element and finite\ndifference techniques are being increasingly employed to understand the local\nfield enhancement factor (FEF) via numerical simulations. In theoretical\nanalyses, it is usual to consider the emitter as isolated, i.e., a single-tip\nfield emitter infinitely far from any physical boundary, except the substrate.\nHowever, simulation domains must be finite and the simulation boundaries\ninfluence the electrostatic potential distribution. In both finite element\nand finite difference techniques, there is a systematic error ($\epsilon$) in\nthe FEF caused by the finite size of the simulation domain. It is tempting to\noversize the domain to avoid any influence from the boundaries; however, the\ncomputation might become memory- and time-consuming, especially in full\nthree-dimensional analyses. In this work, we provide the minimum width and\nheight of the simulation domain necessary to evaluate the FEF with $\epsilon$\nat the desired tolerance. The minimum width ($A$) and height ($B$) are given\nrelative to the height of the emitter ($h$), that is, the $(A/h)_{min} \times (B/h)_{min}$\nnecessary to simulate isolated emitters on a substrate. We also provide the\n$(B/h)_{min}$ to simulate arrays and the $(A/h)_{min}$ to simulate an emitter\nbetween the plates of an anode-cathode planar capacitor. Finally, we present\nthe formulae to obtain the minimal domain size to simulate clusters of emitters\nwith precision $\epsilon_{tol}$. Our formulae account for ellipsoidal emitters\nand hemispheres on cylindrical posts.
In the latter case, where an analytical solution is not\nknown at present, our results are expected to produce unprecedented\nnumerical accuracy in the corresponding local FEF.", "category": "physics_comp-ph" }, { "text": "Cascaded lattice Boltzmann method for incompressible thermal flows with\n heat sources and general thermal boundary conditions: The cascaded or central-moment-based lattice Boltzmann method (CLBM) is a\nrelatively recent development in the LBM community, which has better numerical\nstability and naturally achieves better Galilean invariance for a specified\nlattice compared with the classical single-relaxation-time (SRT) LBM. Recently,\nCLBM has been extended to simulate thermal flows based on the\ndouble-distribution-function (DDF) approach [L. Fei \textit{et al.}, Int. J.\nHeat Mass Transfer 120, 624 (2018)]. In this work, CLBM is further extended to\nsimulate thermal flows involving complex thermal boundary conditions and/or a\nheat source. In particular, a discrete source term in the central-moment space\nis proposed to include a heat source, and a general bounce-back scheme is\nemployed to implement thermal boundary conditions. The numerical results for\nseveral canonical problems are in good agreement with the analytical solutions\nand/or numerical results in the literature, which verifies the present CLBM\nimplementation for thermal flows.
In order to simulate surface-tension-driven multiphase flows\non unstructured meshes, a new Segregated Accuracy-driven Algorithm for\nMultiphase Pressure-Linked Equations (SAAMPLE) is proposed, which increases the\nrobustness of the unstructured Level Set / Front Tracking (LENT) method. The\nLENT method is implemented in the OpenFOAM open source code for Computational\nFluid Dynamics.", "category": "physics_comp-ph" }, { "text": "Ab initio relativistic treatment of the intercombination\n $a^3\u03a0-X^1\u03a3^+$ Cameron system of the CO molecule: The intercombination $a^3\Pi - X^1\Sigma^+$ Cameron system of carbon monoxide\nhas been studied computationally in the framework of the multi-reference\nFock-space coupled cluster method, using a generalized relativistic\npseudopotential model to effectively introduce relativity into the all-electron\ncorrelation treatment. The extremely weak $a^3\Pi_{\Omega=0^+,1} - X^1\Sigma^+$ transition\nprobabilities and radiative lifetimes of the metastable $a^3\Pi$ state were\ncalculated and compared with their previous theoretical and experimental\ncounterparts. The impact of a presumable variation of the fine-structure\nconstant $\alpha=e^2/\hbar c$ on the transition strength of the Cameron system\nhas been numerically evaluated as well.", "category": "physics_comp-ph" }, { "text": "Ab initio phonon coupling and optical response of hot electrons in\n plasmonic metals: Ultrafast laser measurements probe the non-equilibrium dynamics of excited\nelectrons in metals with increasing temporal resolution. Electronic structure\ncalculations can provide a detailed microscopic understanding of hot electron\ndynamics, but a parameter-free description of pump-probe measurements has not\nyet been possible, despite intensive research, because of the phenomenological\ntreatment of electron-phonon interactions.
We present ab initio predictions of\nthe electron-temperature dependent heat capacities and electron-phonon coupling\ncoefficients of plasmonic metals. We find substantial differences from\nfree-electron and semi-empirical estimates, especially in noble metals above\ntransient electron temperatures of 2000 K, because of the previously neglected\nstrong dependence of electron-phonon matrix elements on electron energy. We\nalso present first-principles calculations of the electron-temperature\ndependent dielectric response of hot electrons in plasmonic metals, including\ndirect interband and phonon-assisted intraband transitions, facilitating\ncomplete theoretical predictions of the time-resolved optical probe signatures\nin ultrafast laser experiments.", "category": "physics_comp-ph" }, { "text": "A multiscale discrete velocity method for model kinetic equations: In this paper, the authors focus on improving the conventional discrete\nvelocity method (DVM) into a multiscale scheme in a finite volume framework for\ngas flow in all flow regimes. Unlike the typical multiscale kinetic methods,\nthe unified gas-kinetic scheme (UGKS) and the discrete unified gas-kinetic\nscheme (DUGKS), which concentrate on the evolution of the distribution function\nat the cell interface, in the present scheme the flux for macroscopic variables\nis split into the equilibrium part and the nonequilibrium part, and the\nnonequilibrium flux is calculated by integrating the discrete distribution\nfunction at the cell center, which overcomes the excess numerical dissipation\nof the conventional DVM in the continuum flow regime. Afterwards, the\nmacroscopic variables are finally updated by simply integrating the discrete\ndistribution function at the cell center, or by a blend of the increments based\non the macroscopic and the microscopic systems, and the multiscale property is\nachieved.
Several test cases, involving unsteady and steady, high-speed and low-speed\ngas flows in all flow regimes, have been performed, demonstrating the good\nperformance of the multiscale DVM from free-molecule to continuum Navier-Stokes\nsolutions and confirming the multiscale property of the scheme.", "category": "physics_comp-ph" }, { "text": "Collective Variables for Free Energy Surface Tailoring -- Understanding\n and Modifying Functionality in Systems Dominated by Rare Events: We introduce a method for elucidating and modifying the functionality of\nsystems dominated by rare events that relies on the automated tuning of their\nunderlying free energy surface. The proposed approach seeks to construct\ncollective variables (CVs) that encode the essential information regarding the\nrare events of the system of interest. The appropriate CVs are identified using\nHarmonic Linear Discriminant Analysis (HLDA), a machine-learning based method\nthat is trained solely on data collected from short ordinary simulations in the\nrelevant metastable states of the system. Utilizing the interpretable form of\nthe resulting CVs, the critical interaction potentials that determine the\nsystem's rare transitions are identified and purposely modified to tailor the\nfree energy surface in a manner that alters functionality as desired. The\napplicability of the method is illustrated in the context of three different\nsystems, thereby demonstrating that thermodynamic and kinetic properties can be\ntractably modified with little to no prior knowledge or intuition.", "category": "physics_comp-ph" }, { "text": "Regularized Ensemble Kalman Methods for Inverse Problems: Inverse problems are common and important in many applications in\ncomputational physics but are inherently ill-posed, with many possible model\nparameters yielding satisfactory results in the observation space.
When\nsolving the inverse problem with adjoint-based optimization, the problem can be\nregularized by adding additional constraints in the cost function. However,\nsimilar regularizations have not been used in ensemble-based methods, where the\nsame optimization is done implicitly through the analysis step rather than\nthrough explicit minimization of the cost function. Ensemble-based methods, and\nin particular ensemble Kalman methods, have gained popularity in practice where\nphysics models typically do not have readily available adjoint capabilities.\nWhile the model outputs can be improved by incorporating observations using\nthese methods, the lack of regularization means the inference of the model\nparameters remains ill-posed. Here we propose a regularized ensemble Kalman\nmethod capable of enforcing regularization constraints. Specifically, we derive\na modified analysis scheme that implicitly minimizes a cost function with\ngeneralized constraints. We demonstrate the method's ability to regularize the\ninverse problem with three cases of increasing complexity, starting with\ninferring scalar model parameters. As a final case, we utilize the proposed\nmethod to infer the closure field in the Reynolds-averaged Navier--Stokes\nequations; a problem of significant importance in fluid dynamics and many\nengineering applications.", "category": "physics_comp-ph" }, { "text": "Simulation of reversible molecular mechanical logic gates and circuits: Landauer's principle places a fundamental lower limit on the work required to\nperform a logically irreversible operation. Logically reversible gates provide\na way to avoid these work costs, and also simplify the task of making the\ncomputation as a whole thermodynamically reversible. The inherent reversibility\nof mechanical logic gates would make them good candidates for the design of\npractical logically reversible computing systems if not for the relatively\nlarge size and mass of such systems. 
In this paper, we outline the design and\nsimulation of reversible molecular mechanical logic gates that come close to\nthe limits of thermodynamic reversibility even under the effects of thermal\nnoise, and outline associated circuit components from which arbitrary\ncombinatorial reversible circuits can be constructed and simulated. We\ndemonstrate that isolated components can be operated in a thermodynamically\nreversible manner, and explore the complexities of combining components to\nimplement more complex computations. Finally, we demonstrate a method to\nconstruct arbitrarily large reversible combinatorial circuits using multiple\nexternal controls and signal boosters with a working half-adder circuit.", "category": "physics_comp-ph" }, { "text": "A quantum-inspired method for solving the Vlasov-Poisson equations: Kinetic simulations of collisionless (or weakly collisional) plasmas using\nthe Vlasov equation are often infeasible due to high resolution requirements\nand the exponential scaling of computational cost with respect to dimension.\nRecently, it has been proposed that matrix product state (MPS) methods, a\nquantum-inspired but classical algorithm, can be used to solve partial\ndifferential equations with exponential speed-up, provided that the solution\ncan be compressed and efficiently represented as an MPS within some tolerable\nerror threshold. In this work, we explore the practicality of MPS methods for\nsolving the Vlasov-Poisson equations in 1D1V, and find that important features\nof linear and nonlinear dynamics, such as damping or growth rates and\nsaturation amplitudes, can be captured while compressing the solution\nsignificantly. 
Furthermore, by comparing the performance of different mappings\nof the distribution functions onto the MPS, we develop an intuition of the MPS\nrepresentation and its behavior in the context of solving the Vlasov-Poisson\nequations, which will be useful for extending these methods to higher\ndimensional problems.", "category": "physics_comp-ph" }, { "text": "Beyond black-boxes in Bayesian inverse problems and model validation:\n applications in solid mechanics of elastography: The present paper is motivated by one of the most fundamental challenges in\ninverse problems, that of quantifying model discrepancies and errors. While\nsignificant strides have been made in calibrating model parameters, the\noverwhelming majority of pertinent methods is based on the assumption of a\nperfect model. Motivated by problems in solid mechanics which, as all problems\nin continuum thermodynamics, are described by conservation laws and\nphenomenological constitutive closures, we argue that in order to quantify\nmodel uncertainty in a physically meaningful manner, one should break open the\nblack-box forward model. In particular we propose formulating an undirected\nprobabilistic model that explicitly accounts for the governing equations and\ntheir validity. This recasts the solution of both forward and inverse problems\nas probabilistic inference tasks where the problem's state variables should not\nonly be compatible with the data but also with the governing equations as well.\nEven though the probability densities involved do not contain any black-box\nterms, they live in much higher-dimensional spaces. In combination with the\nintractability of the normalization constant of the undirected model employed,\nthis poses significant challenges which we propose to address with a\nlinearly-scaling, double-layer of Stochastic Variational Inference. 
We\ndemonstrate the capabilities and efficacy of the proposed model in synthetic\nforward and inverse problems (with and without model error) in elastography.", "category": "physics_comp-ph" }, { "text": "Full-Wave Algorithm to Model Effects of Bedding Slopes on the Response\n of Subsurface Electromagnetic Geophysical Sensors near Unconformities: We propose a full-wave pseudo-analytical numerical electromagnetic (EM)\nalgorithm to model subsurface induction sensors, traversing planar-layered\ngeological formations of arbitrary EM material anisotropy and loss, which are\nused, for example, in the exploration of hydrocarbon reserves. Unlike past\npseudo-analytical planar-layered modeling algorithms that impose parallelism\nbetween the formation's bed junctions however, our method involves judicious\nemployment of Transformation Optics techniques to address challenges related to\nmodeling relative slope (i.e., tilting) between said junctions (including\narbitrary azimuth orientation of each junction). The algorithm exhibits this\nflexibility, both with respect to loss and anisotropy in the formation layers\nas well as junction tilting, via employing special planar slabs that coat each\n\"flattened\" (i.e., originally tilted) planar interface, locally redirecting the\nincident wave within the coating slabs to cause wave fronts to interact with\nthe flattened interfaces as if they were still tilted with a specific,\nuser-defined orientation. Moreover, since the coating layers are homogeneous\nrather than exhibiting continuous material variation, a minimal number of these\nlayers must be inserted and hence reduces added simulation time and\ncomputational expense. As said coating layers are not reflectionless however,\nthey do induce artificial field scattering that corrupts legitimate field\nsignatures due to the (effective) interface tilting. 
Numerical results, for two\nhalf-spaces separated by a tilted interface, quantify error trends versus\nmaterial and sensor characteristics. We finally exhibit responses of sensors\ntraversing three-layered media, where we vary the anisotropy, loss, and\nrelative tilting of the formations and explore the sensitivity of the sensor's\ncomplex-valued measurements.", "category": "physics_comp-ph" }, { "text": "Improved Fast Randomized Iteration Approach to Full Configuration\n Interaction: We present three modifications to our recently introduced fast randomized\niteration method for full configuration interaction (FCI-FRI) and investigate\ntheir effects on the method's performance for Ne, H$_2$O, and N$_2$. The\ninitiator approximation, originally developed for full configuration\ninteraction quantum Monte Carlo, significantly reduces statistical error in\nFCI-FRI when few samples are used in compression operations, enabling its\napplication to larger chemical systems. The semi-stochastic extension, which\ninvolves exactly preserving a fixed subset of elements in each compression,\nimproves statistical efficiency in some cases but reduces it in others. We also\ndeveloped a new approach to sampling excitations that yields consistent\nimprovements in statistical efficiency and reductions in computational cost. We\ndiscuss possible strategies based on our findings for improving the performance\nof stochastic quantum chemistry methods more generally.", "category": "physics_comp-ph" }, { "text": "Deterministic Solution of the Boltzmann Equation Using Discontinuous\n Galerkin Discretizations in Velocity Space: We present a new deterministic approach for the solution of the Boltzmann\nkinetic equation based on nodal discontinuous Galerkin (DG) discretizations in\nvelocity space. 
In the new approach the collision operator has the form of a\nbilinear operator with a pre-computed kernel; its evaluation requires $O(n^5)$\noperations at every point of the phase space, where $n$ is the number of\ndegrees of freedom in one velocity dimension. The method is generalized to any\nmolecular potential. Results of numerical simulations are presented for the\nproblem of spatially homogeneous relaxation for the hard-sphere potential.\nComparison with the Direct Simulation Monte Carlo (DSMC) method showed\nexcellent agreement.", "category": "physics_comp-ph" }, { "text": "Poly-dodecahedrane: A new allotrope of carbon: Carbon is the most important chemical element and the theoretical study of\nits new allotropes can be of great interest. In this study, regular\ndodecahedron (dodecahedrane) oligomers (n = 1, 3, 5, 7, 9, 11, 13) were\ndesigned by extending the dodecahedrane units in three dimensions. Then, a\ntheoretical study was conducted on their structures and electronic properties\nas a potential new carbon allotrope. The cohesive energy (Ecoh) and $\Delta$G\nwere calculated. The calculations indicate that the Ecoh rises as the\nnumber of dodecahedrane units increases, whereas the Gibbs free energy change\n$\Delta$G decreases with an increase in the number of dodecahedrane units. The\nHOMO-LUMO energy gap (Eg) values, which represent electronic properties,\ndecrease with an increasing number of dodecahedrane units. Density functional\ntheory (DFT) calculations of the novel carbon allotrope polydodecahedrane\nnanostructures have unveiled a previously unobserved symmetry, indicating\nintrinsic metallic behavior. A symmetrical distribution of partial charges\nwas found in molecular electrostatic potential (MEP) diagrams for all\noligomers, showing a tendency of the structures to maintain a symmetrical\nstructural order as the number of monomer units increases.
In addition, natural\nbond orbital (NBO) analysis of the 13-unit oligomer, the largest designed\nstructure, reveals near-$sp^3$ hybridization for the different carbons. Based\non the calculated results, the structures have a tendency to extend in three\ndimensions and form a covalent network of poly-dodecahedrane with a unique\nstructure consisting of interconnected cyclopentane rings. The results show\nthat this exclusive configuration exhibits theoretical stability and suggests\nthe potential for poly-dodecahedrane to be regarded as a novel carbon\nallotrope.", "category": "physics_comp-ph" }, { "text": "Deep Surrogate Models for Multi-dimensional Regression of Reactor Power: There is renewed interest in developing small modular reactors and\nmicro-reactors. Innovation in both construction and operation methods is\nnecessary for these reactors to be financially attractive. For operation, an\narea of interest is the development of fully autonomous reactor control.\nSignificant efforts are necessary to demonstrate an autonomous control\nframework for a nuclear system, while adhering to established safety criteria.\nOur group has proposed and received support for demonstration of an autonomous\nframework on a subcritical system: the MIT Graphite Exponential Pile. In order\nto have a fast response (on the order of milliseconds), we must extract\nspecific capabilities of general-purpose system codes into a surrogate model.\nThus, we have adopted current state-of-the-art neural network libraries to\nbuild surrogate models.\n This work focuses on establishing the capability of neural networks to\nprovide an accurate and precise multi-dimensional regression of a nuclear\nreactor's power distribution. We assess the neural network surrogate\nagainst a previously validated model: an MCNP5 model of the MIT reactor. The\nresults indicate that neural networks are an appropriate choice for surrogate\nmodels to implement in an autonomous reactor control framework.
The MAPE across\nall test datasets was < 1.16 %, with a corresponding standard deviation of <\n0.77 %. The error is low, considering that the node-wise fission power can vary\nfrom 7 kW to 30 kW across the core.", "category": "physics_comp-ph" }, { "text": "Manipulation of the large Rashba spin splitting in polar two-dimensional\n transition metal dichalcogenides: Transition metal dichalcogenide (TMD) monolayers MXY (M=Mo, W; X$\neq$Y=S,\nSe, Te) are two-dimensional polar semiconductors. Taking the WSeTe\nmonolayer as an example and using density functional theory calculations, we\ninvestigate the manipulation of Rashba spin-orbit coupling (SOC) in the MXY\nmonolayer. It is found that the intrinsic out-of-plane electric field due to\nthe mirror symmetry breaking induces the large Rashba spin splitting around the\nGamma point, which, however, can be easily tuned by applying an in-plane\nbiaxial strain. Through a relatively small strain (from -2% to 2%), a large\ntunability (from around -50% to 50%) of the Rashba SOC can be obtained due to\nthe modified orbital overlap, which can in turn modulate the intrinsic electric\nfield. The orbital-selective external potential method further confirms the\nsignificance of the orbital overlap between W-dz2 and Se-pz in the Rashba SOC.\nIn addition, we also explore the influence of an external electric field on the\nRashba SOC in the WSeTe monolayer, which is less effective than strain. The\nlarge Rashba spin splitting, together with the valley spin splitting in MXY\nmonolayers, may make a special contribution to semiconductor spintronics and\nvalleytronics.", "category": "physics_comp-ph" }, { "text": "Performance of preconditioned iterative linear solvers for\n cardiovascular simulations in rigid and deformable vessels: Computing the solution of linear systems of equations is invariably the most\ntime-consuming task in the numerical solution of PDEs in many fields of\ncomputational science.
In this study, we focus on the numerical simulation of\ncardiovascular hemodynamics with rigid and deformable walls, discretized in\nspace and time through the variational multi-scale finite element method. We\nfocus on three approaches: the problem agnostic generalized minimum residual\n(GMRES) and stabilized bi-conjugate gradient (BICGS) methods, and a recently\nproposed, problem specific, bi-partitioned (BIPN) method. We also perform a\ncomparative analysis of several preconditioners, including diagonal,\nblock-diagonal, incomplete factorization, multi-grid, and resistance based\nmethods. Solver performance and matrix characteristics (diagonal dominance,\nsymmetry, sparsity, bandwidth and spectral properties) are first examined for\nan idealized cylindrical geometry with physiologic boundary conditions and then\nsuccessively tested on several patient-specific anatomies representative of\nrealistic cardiovascular simulation problems. Incomplete factorization\npre-conditioners provide the best performance and results in terms of both\nstrong and weak scalability. The BIPN method was found to outperform other\nmethods in patient-specific models with rigid walls. In models with deformable\nwalls, BIPN was outperformed by BICG with diagonal and Incomplete LU\npreconditioners.", "category": "physics_comp-ph" }, { "text": "Fast laser field reconstruction method based on a Gerchberg-Saxton\n algorithm with mode decomposition: Knowledge of the electric field of femtosecond, high intensity laser pulses\nis of paramount importance to study the interaction of this class of lasers\nwith matter. A novel, hybrid method to reconstruct the laser field from fluence\nmeasurements in the transverse plane at multiple positions along the\npropagation axis is presented, combining a Hermite-Gauss modes decomposition\nand elements of the Gerchberg-Saxton algorithm. 
The proposed Gerchberg-Saxton\nalgorithm with modes decomposition (GSA-MD) takes into account the pointing\ninstabilities of high intensity laser systems by tuning the centers of the HG\nmodes. Furthermore, it quickly builds a field description by progressively\nincreasing the number of modes and thus the accuracy of the field\nreconstruction. The results of field reconstruction using the GSA-MD are shown\nto be in excellent agreement with experimental measurements from two different\nhigh-peak power laser facilities.", "category": "physics_comp-ph" }, { "text": "Python Classes for Numerical Solution of PDE's: We announce some Python classes for numerical solution of partial\ndifferential equations, or boundary value problems of ordinary differential\nequations. These classes are built on routines in \\texttt{numpy} and\n\\texttt{scipy.sparse.linalg} (or \\texttt{scipy.linalg} for smaller problems).", "category": "physics_comp-ph" }, { "text": "Genus dependence of the number of (non-)orientable surface\n triangulations: Topological triangulations of orientable and non-orientable surfaces with\narbitrary genus have important applications in quantum geometry, graph theory\nand statistical physics. However, until now only the asymptotics for 2-spheres\nare known analytically, and exact counts of triangulations are only available\nfor both small genus and small triangulations. We apply the Wang-Landau\nalgorithm to calculate the number $N(m,h)$ of triangulations for several order\nof magnitudes in system size $m$ and genus $h$. We verify that the limit of the\nentropy density of triangulations is independent of genus and orientability and\nare able to determine the next-to-leading and the next-to-next-to-leading order\nterms. 
For the number of surface triangulations we conjecture the asymptotic\nbehavior \begin{equation*} N(m,h) \rightarrow (170.4 \pm 15.1)^h m^{-2(h -\n1)/5} \left( \frac{256}{27} \right)^{m / 2}\;, \end{equation*} which might guide\na mathematician's proof of the exact asymptotics.", "category": "physics_comp-ph" }, { "text": "Meshing strategies for the alleviation of mesh-induced effects in\n cohesive element models: One of the main approaches for modeling fracture and crack propagation in\nsolid materials is adaptive insertion of cohesive elements, in which line-like\n(2D) or surface-like (3D) elements are inserted into the finite element mesh to\nmodel the nucleation and propagation of failure surfaces. In this approach,\nhowever, cracks are forced to propagate along element boundaries, following\npaths that in general require more energy per unit crack extension (greater\ndriving forces) than those followed in the original continuum, which in turn\nleads to erroneous solutions. In this work we illustrate how the introduction\nof a discretization produces two undesired effects, which we term mesh-induced\nanisotropy and mesh-induced toughness. Subsequently, we analyze those effects\nthrough polar plots of the path deviation ratio (a measure of the ability of a\nmesh to represent straight lines) for commonly adopted meshes. Finally, we\npropose to reduce those effects through K-means meshes and through a new type\nof mesh, which we term conjugate-directions mesh. 
The behavior of all meshes\nunder consideration as the mesh size is reduced is analyzed through a numerical\nstudy of convergence.", "category": "physics_comp-ph" }, { "text": "A higher-order accurate operator splitting spectral method for the\n Wigner-Poisson system: An accurate description of 2-D quantum transport in a double-gate metal oxide\nsemiconductor field effect transistor (dgMOSFET) requires a high-resolution\nsolver for a coupled system of the 4-D Wigner equation and 2-D Poisson equation.\nIn this paper, we propose an operator splitting spectral method to evolve such a\nWigner-Poisson system in 4-D phase space with high accuracy. After an operator\nsplitting of the Wigner equation, the resulting two sub-equations can be solved\nanalytically with spectral approximation in phase space. Meanwhile, we adopt a\nChebyshev spectral method to solve the Poisson equation. Spectral convergence\nin phase space and a fourth-order accuracy in time are both numerically\nverified. Finally, we apply the proposed solver to simulating a dgMOSFET,\ndevelop the steady states from long-time simulations and obtain numerically\nconverged current-voltage (I-V) curves.
This is significant because it is at these late times that the\npressure at the contact boundary between gold (Au) and water decreases down to\nthe saturation pressure of gold.\n Thus the saturation pressure begins to influence dynamics near the contact.\n The inertia of displaced water decelerates the contact.\n In the reference frame connected with the contact, the deceleration is\nequivalent to the free fall acceleration in a gravity field.\n Such conditions are favorable for the development of Rayleigh-Taylor\ninstability (RTI) because the heavy fluid (gold) is placed above the light one\n(water) in a gravity field.\n We extract the increment of RTI from 2T-HD 1D runs.\n Surface tension and especially viscosity significantly damp the RTI gain\nduring deceleration. Atomistic simulation with the Molecular Dynamics method\ncombined with a Monte-Carlo method (MD-MC) for the large electron heat\nconduction in gold is performed to gain a clear insight into the underlying\nmechanisms. MD-MC\nruns show that significant amplification of surface perturbations takes place.\n These perturbations grow from thermal fluctuations and from the noise\nproduced by the bombardment of the atmosphere by fragments of foam.\n The perturbations are amplified enough to separate droplets from\nthe RTI jets of gold. Thus the gold droplets fall into the water.", "category": "physics_comp-ph" }, { "text": "Achieving tunable surface tension in the pseudopotential lattice\n Boltzmann modeling of multiphase flows: In this paper, we aim to address an important issue with the pseudopotential\nlattice Boltzmann (LB) model, which has attracted much attention as a\nmesoscopic model for simulating interfacial dynamics of complex fluids, but\nsuffers from the problem that the surface tension cannot be tuned independently\nof the density ratio. In the literature, a multi-range potential was devised to\nadjust the surface tension [Sbragaglia et al., Phys. Rev. 
E 75, 026702 (2007)].\nHowever, it was recently found that the density ratio of the system will be\nchanged when the multi-range potential is employed to adjust the surface\ntension. A new approach is therefore proposed in the present work. The basic\nstrategy is to add a source term to the LB equation so as to tune the surface\ntension of the pseudopotential LB model. The proposed approach can guarantee\nthat the adjustment of the surface tension does not affect the mechanical\nstability condition of the pseudopotential LB model, and thus provides a\nseparate control of the surface tension and the density ratio. Meanwhile, it\nstill retains the mesoscopic feature and the computational simplicity of the\npseudopotential LB model. Numerical simulations are carried out for stationary\ndroplets, capillary waves, and droplet splashing on a thin liquid film. The\nnumerical results demonstrate that the proposed approach is capable of\nachieving a tunable surface tension over a very wide range and can keep the\ndensity ratio unchanged when adjusting the surface tension.", "category": "physics_comp-ph" }, { "text": "Overcoming the Convergence Difficulty of Cohesive Zone Models through a\n Newton-Raphson Modification Technique: This paper studies the convergence difficulty of cohesive zone models in\nstatic analysis. It is shown that an inappropriate starting point of iterations\nin the Newton-Raphson method is responsible for the convergence difficulty. A\nsimple, innovative approach is then proposed to overcome the convergence issue.\nThe technique is robust, simple to implement in a finite element framework,\ndoes not compromise the accuracy of analysis, and provides fast convergence.\nThe paper explains the implementation algorithm in detail and presents three\nbenchmark examples. 
It is concluded that the method is computationally\nefficient, is generally applicable, and outperforms the existing methods.", "category": "physics_comp-ph" }, { "text": "The initial step towards JOREK integration in IMAS: JOREK is being adapted to work with the Integrated Modelling & Analysis Suite\n(IMAS), which is being actively developed and used by the ITER Organization, the\nEUROfusion community and other ITER Members. The list of codes adapted to use\nthe IMAS Data Model is gradually increasing, with examples including SOLPS-ITER\nand JINTRAC. The main goal of the integration of JOREK with IMAS is to enable\ninteraction with the plasma scenarios stored in the IMAS databases in the form\nof Interface Data Structures (IDSs): input conditions can be read from the\ndatabases and nonlinear plasma states determined by JOREK stored. IDSs provide\na uniform way of representing data within the IMAS framework and allow data to\nbe transferred between codes and stored within larger integrated modelling\nworkflows. In order to integrate JOREK within IMAS it is therefore necessary\nthat transformation tools are developed to facilitate the reading and writing\nof the relevant IDSs, including the MHD IDS, with its underlying Generalized\nGrid Description (GGD). For this purpose, utilities have been developed that\nextract the JOREK simulation plasma state, namely the grid geometry and computed\nphysical quantities for each time slice, and then transform them to the\nappropriate output IDSs. In this article, these initial steps towards full\nJOREK integration into IMAS are presented.", "category": "physics_comp-ph" }, { "text": "Quantification of MagLIF morphology using the Mallat Scattering\n Transformation: The morphology of the stagnated plasma resulting from Magnetized Liner\nInertial Fusion (MagLIF) is measured by imaging the self-emission x-rays coming\nfrom the multi-keV plasma, and the evolution of the imploding liner is measured\nby radiographs. 
Equivalent diagnostic response can be derived from integrated\nrad-MHD simulations from programs such as Hydra and Gorgon. There have been\nonly limited quantitative ways to compare the image morphology, that is the\ntexture, of simulations and experiments. We have developed a metric of image\nmorphology based on the Mallat Scattering Transformation (MST), a\ntransformation that has proved to be effective at distinguishing textures,\nsounds, and written characters. This metric has demonstrated excellent\nperformance in classifying ensembles of synthetic stagnation images. We use\nthis metric to quantitatively compare simulations to experimental images, to\ncompare experimental images to one another, and to estimate the parameters of\nthe images with\nuncertainty via a linear regression of the synthetic images to the parameters\nused to generate them. The MST coordinate space has proved very adept at\nperforming a sophisticated relative background subtraction. This was needed\nto compare the experimental self-emission images to the rad-MHD simulation\nimages. We have also developed theory that connects the transformation to the\ncausal dynamics of physical systems. This has been done from the classical\nkinetic perspective and from the field theory perspective, where the MST is the\ngeneralized Green's function, or S-matrix, of the field theory in the scale\nbasis. From both perspectives the first order MST is the current state of the\nsystem, and the second order MST gives the transition rates from one state to\nanother. An efficient, GPU accelerated, Python implementation of the MST was\ndeveloped. Future applications are discussed.", "category": "physics_comp-ph" }, { "text": "Pressure Model of Soft Body Simulation: Motivated by existing models used for soft body simulation which are rather\ncomplex to implement, we present a novel technique which is based on simple\nlaws of physics and gives high quality results in real-time. 
We base the\nimplementation on simple laws of thermodynamics and use the Clausius-Clapeyron\nstate equation for pressure calculation. In addition, this provides us with a\npressure force that is accumulated into a force accumulator of a 3D mesh object\nby using an existing spring-mass engine. Finally, after integration of Newton's\nsecond law we obtain the behavior of a soft body with fixed or non-fixed air\npressure inside it.", "category": "physics_comp-ph" }, { "text": "Semi-stochastic full configuration interaction quantum Monte Carlo:\n developments and application: We expand upon the recent semi-stochastic adaptation to full configuration\ninteraction quantum Monte Carlo (FCIQMC). We present an alternate method for\ngenerating the deterministic space without a priori knowledge of the wave\nfunction and present stochastic efficiencies for a variety of both molecular\nand lattice systems. The algorithmic details of an efficient semi-stochastic\nimplementation are presented, with particular consideration given to the effect\nthat the adaptation has on parallel performance in FCIQMC. We further\ndemonstrate the benefit for calculation of reduced density matrices in FCIQMC\nthrough replica sampling, where the semi-stochastic adaptation seems to have\neven larger efficiency gains. We then combine these ideas to produce explicitly\ncorrelated corrected FCIQMC energies for the beryllium dimer, for which\nstochastic errors on the order of wavenumber accuracy are achievable.", "category": "physics_comp-ph" }, { "text": "Reinterpretation and Long-Term Preservation of Data and Code: Careful preservation of experimental data, simulations, analysis products,\nand theoretical work maximizes their long-term scientific return on investment\nby enabling new analyses and reinterpretation of the results in the future. 
Key\ninfrastructure and technical developments needed for some high-value science\ntargets are not in scope for the operations program of the large experiments\nand are often not effectively funded. Increasingly, the science goals of our\nprojects require contributions that span the boundaries between individual\nexperiments and surveys, and between the theoretical and experimental\ncommunities. Furthermore, the computational requirements and technical\nsophistication of this work are increasing. As a result, it is imperative that\nthe funding agencies create programs that can devote significant resources to\nthese efforts outside of the context of the operations of individual major\nexperiments, including smaller experiments and theory/simulation work. In this\nSnowmass 2021 Computational Frontier topical group report (CompF7:\nReinterpretation and long-term preservation of data and code), we summarize the\ncurrent state of the field and make recommendations for the future.", "category": "physics_comp-ph" }, { "text": "Ring artifacts correction in compressed sensing tomographic\n reconstruction: We present a novel approach to handling ring artifact correction in compressed\nsensing tomographic reconstruction. The correction is part of the\nreconstruction process, which differs from classical sinogram pre-processing\nand image post-processing techniques. The principle of compressed sensing\ntomographic reconstruction is presented. Then, we show that the ring artifact\ncorrection can be integrated into the reconstruction problem formalism. We\nprovide numerical results for both simulated and real data. 
This technique is\nincluded in the PyHST2 code which is used at the European Synchrotron Radiation\nFacility for tomographic reconstruction.", "category": "physics_comp-ph" }, { "text": "A Boltzmann scheme with physically relevant discrete velocities for\n Euler equations: Kinetic or Boltzmann schemes are interesting alternatives to the macroscopic\nnumerical methods for solving the hyperbolic conservation laws of gas dynamics.\nThey utilize the particle-based description instead of the wave propagation\nmodels. While the continuous particle velocity based upwind schemes were\ndeveloped in the earlier decades, the discrete velocity Boltzmann schemes\nintroduced in the last decade are found to be simpler and are easier to handle.\nIn this work, we introduce a novel way of introducing discrete velocities which\ncorrespond to the physical wave speeds and formulate a discrete velocity\nBoltzmann scheme for solving Euler equations.", "category": "physics_comp-ph" }, { "text": "Asymptotic approximations for Bloch waves and topological mode steering\n in a planar array of Neumann scatterers: We study the canonical problem of wave scattering by periodic arrays, either\nof infinite or finite extent, of Neumann scatterers in the plane; the\ncharacteristic lengthscale of the scatterers is considered small relative to\nthe lattice period. We utilise the method of matched asymptotic expansions,\ntogether with Fourier series representations, to create an efficient and\naccurate numerical approach for finding the dispersion curves associated with\nFloquet-Bloch waves through an infinite array of scatterers. 
The approach lends\nitself to direct scattering problems for finite arrays and we illustrate the\nflexibility of these asymptotic representations on topical examples from\ntopological wave physics.", "category": "physics_comp-ph" }, { "text": "First-principles study of phononic thermal transport in monolayer C3N: a\n comparison with graphene: Very recently, a new graphene-like crystalline, hole-free, 2D-single-layer\ncarbon nitride C3N has been fabricated by polymerization of\n2,3-diaminophenazine and used to fabricate a field-effect transistor device\nwith an on-off current ratio reaching (Adv. Mater. 2017, 1605625). Heat\ndissipation plays a vital role in its practical applications, and therefore the\nthermal transport properties need to be explored urgently. In this paper, we\nperform first-principles calculations combined with the phonon Boltzmann\ntransport equation to investigate the phononic thermal transport properties of\nmonolayer C3N, and meanwhile, a comparison with graphene is given. Our\ncalculated intrinsic lattice thermal conductivity of C3N is 380 W/mK at room\ntemperature, which is one order of magnitude lower than that of graphene (3550\nW/mK at 300 K), but much higher than that of many other typical 2D materials.\nThe underlying mechanisms governing the thermal transport are thoroughly\ndiscussed and compared to graphene, including group velocities, phonon\nrelaxation time, the contribution from phonon branches, phonon anharmonicity\nand size effect. The\nfundamental physics understood from this study may shed light on further\nstudies of the newly fabricated 2D crystalline C3N sheets.
This problem must be addressed by essentially all\ncurrent electronic structure codes, based on similar matrix expressions, and by\nhigh-performance computation. We here present a unified software interface,\nELSI, to access different strategies that address the Kohn-Sham eigenvalue\nproblem. Currently supported algorithms include the dense generalized\neigensolver library ELPA, the orbital minimization method implemented in\nlibOMM, and the pole expansion and selected inversion (PEXSI) approach with\nlower computational complexity for semilocal density functionals. The ELSI\ninterface aims to simplify the implementation and optimal use of the different\nstrategies, by offering (a) a unified software framework designed for the\nelectronic structure solvers in Kohn-Sham density-functional theory; (b)\nreasonable default parameters for a chosen solver; (c) automatic conversion\nbetween input and internal working matrix formats, and in the future (d)\nrecommendation of the optimal solver depending on the specific problem.\nComparative benchmarks are shown for system sizes up to 11,520 atoms (172,800\nbasis functions) on distributed memory supercomputing architectures.", "category": "physics_comp-ph" }, { "text": "Frozen-orbital and downfolding calculations with auxiliary-field quantum\n Monte Carlo: We describe the implementation of the frozen-orbital and downfolding\napproximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These\napproaches can provide significant computational savings compared to fully\ncorrelating all the electrons. While the many-body wave function is never\nexplicit in AFQMC, its random walkers are Slater determinants, whose orbitals\nmay be expressed in terms of any one-particle orbital basis. It is therefore\nstraightforward to partition the full N-particle Hilbert space into active and\ninactive parts to implement the frozen-orbital method. 
In the frozen-core\napproximation, for example, the core electrons can be eliminated in the\ncorrelated part of the calculations, greatly increasing the computational\nefficiency, especially for heavy atoms. Scalar relativistic effects are easily\nincluded using the Douglas-Kroll-Hess theory. Using this method, we obtain a\nway to effectively eliminate the error due to single-projector, norm-conserving\npseudopotentials in AFQMC. We also illustrate a generalization of the\nfrozen-orbital approach that downfolds high-energy basis states to a physically\nrelevant low-energy sector, which allows a systematic approach to produce\nrealistic model Hamiltonians to further increase efficiency for extended\nsystems.", "category": "physics_comp-ph" }, { "text": "Rotation by shape change, autonomous molecular motors and effective\n timecrystalline dynamics: A deformable body can rotate even with no angular momentum, simply by\nchanging its shape. A good example is a falling cat, which maneuvers in the air\nto land on its feet. Here a first principles molecular level example of the\nphenomenon is presented. For this, the thermal vibrations of individual atoms\nin an isolated cyclopropane molecule are simulated in vacuum and at ultralow\ninternal temperature values, and the ensuing molecular motion is followed\nstroboscopically. It is observed that in the limit of long stroboscopic time\nsteps the vibrations combine into an apparent uniform rotation of the entire\nmolecule even in the absence of angular momentum. This large time scale\nrotational motion is then modeled in an effective theory approach, in terms of\ntimecrystalline Hamiltonian dynamics. The phenomenon is a temperature-sensitive\nmeasurable. 
As such it has potential applications that range from models of\nautonomous molecular motors to the development of molecular-level detector,\nsensor and control technologies.", "category": "physics_comp-ph" }, { "text": "New stable, explicit, first order method to solve the heat conduction\n equation: We introduce a novel explicit and stable numerical algorithm to solve the\nspatially discretized heat or diffusion equation. We compare the performance of\nthe new method with analytical and numerical solutions. We show that the method\nis first order in time and can give approximate results for extremely large\nsystems faster than the commonly used explicit or implicit methods.", "category": "physics_comp-ph" }, { "text": "Load balancing strategies for the DSMC simulation of hypersonic flows\n using HPC: In the context of the validation of PICLas, a kinetic particle suite for the\nsimulation of rarefied, non-equilibrium plasma flows, the biased hypersonic\nnitrogen flow around a blunted cone was simulated with the Direct Simulation\nMonte Carlo method. The setup is characterized by a complex flow with strong\nlocal gradients and thermal non-equilibrium resulting in a highly inhomogeneous\ncomputational load. The load distribution is of particular interest because\nit allows the utilized computational resources to be exploited efficiently.\nDifferent load distribution algorithms are investigated and compared within a\nstrong scaling study. 
This investigation of the parallel performance of PICLas is\naccompanied by simulation results in terms of the velocity magnitude,\ntranslational temperature and heat flux, which are compared to experimental\nmeasurements.", "category": "physics_comp-ph" }, { "text": "A Lattice-Boltzmann method for the interaction between mechanical waves\n and solid mobile bodies: The acoustic waves generated by moving bodies and the movement of bodies by\nacoustic waves are central phenomena in the operation of musical instruments\nand in everyday experiences like the movement of a boat on a lake by the wake\ngenerated by a propelled ship. Previous works have successfully simulated the\ninteraction between a moving body and a lattice-Boltzmann fluid by immersed\nboundary methods. Here, we show how to implement the same coupling in the\ncase of a lattice-Boltzmann method for waves, i.e. an LBGK model that directly\nrecovers the wave equation in a linear medium, without modeling fluids. The\ncoupling is performed by matching the displacement at the medium-solid boundary\nand via the pressure, which characterizes the forces undergone by the medium\nand the immersed body. The proposed model simplifies the preceding immersed\nboundary methods and reduces the calculation steps. The method is illustrated\nby simulating the movement of immersed bodies in two dimensions, like the\ndisplacement of a two-dimensional disk due to an incoming wave or the wake\ngenerated by a moving object in a medium at rest. The proposal constitutes a\nvaluable tool for the study of acoustical waves by lattice-Boltzmann methods.
The resulting scheme outperforms local updates\nfor matrices with very high condition number, since it avoids the slowing down\nof modes with lower eigenvalue, and has some advantages over the global\nheatbath approach, compared to which it is more stable and allows for more\nfreedom in devising case-specific optimizations.", "category": "physics_comp-ph" }, { "text": "A Numerical Approach to Solving Nonlinear Differential Equations on a\n Grid with Potential Applicability to Computational Fluid Dynamics: A finite element method for solving nonlinear differential equations on a\ngrid, with potential applicability to computational fluid dynamics (CFD), is\ndeveloped and tested. The current method facilitates the computation of\nsolutions of a high polynomial degree on a grid. A high polynomial degree is\nachieved by interpolating both the value, and the value of the derivatives up\nto a given order, of continuously distributed unknown variables. The\ntwo-dimensional lid-driven cavity, a common benchmark problem for CFD methods,\nis used as a test case. It is shown that increasing the polynomial degree has\nsome advantages, compared to increasing the number of grid-points, when solving\nthe given benchmark problem using the current method. The current method yields\nresults which agree well with previously published results for this test case.", "category": "physics_comp-ph" }, { "text": "Numerical investigation into coarse-scale models of diffusion in complex\n heterogeneous media: Computational modelling of diffusion in heterogeneous media is prohibitively\nexpensive for problems with fine-scale heterogeneities. A common strategy for\nresolving this issue is to decompose the domain into a number of\nnon-overlapping sub-domains and homogenize the spatially-dependent diffusivity\nwithin each sub-domain (homogenization cell). 
This process yields a\ncoarse-scale model for approximating the solution behaviour of the original\nfine-scale model at a reduced computational cost. In this paper, we study\ncoarse-scale diffusion models in block heterogeneous media and investigate, for\nthe first time, the effect that various factors have on the accuracy of\nresulting coarse-scale solutions. We present new findings on the error\nassociated with homogenization as well as confirm via numerical experimentation\nthat periodic boundary conditions are the best choice for the homogenization\ncell and demonstrate that the smallest homogenization cell that is\ncomputationally feasible should be used in numerical simulations.", "category": "physics_comp-ph" }, { "text": "Iterative method for solution of radiation emission/transmission matrix\n equations: An iterative method is derived for image reconstruction. Among other\nattributes, this method allows constraints unrelated to the radiation\nmeasurements to be incorporated into the reconstructed image. A comparison is\nmade with the widely used Maximum-Likelihood Expectation-Maximization (MLEM)\nalgorithm.", "category": "physics_comp-ph" }, { "text": "Parareal in time 3D numerical solver for the LWR Benchmark neutron\n diffusion transient model: We present a parareal in time algorithm for the simulation of neutron\ndiffusion transient model. The method is made efficient by means of a coarse\nsolver defined with large time steps and steady control rods model. Using\nfinite element for the space discretization, our implementation provides a good\nscalability of the algorithm. 
Numerical results show the efficiency of the\nparareal method on a large light water reactor transient model corresponding to\nthe Langenbuch-Maurer-Werner (LMW) benchmark [1].", "category": "physics_comp-ph" }, { "text": "Learning Large-Time-Step Molecular Dynamics with Graph Neural Networks: Molecular dynamics (MD) simulation predicts the trajectory of atoms by\nsolving Newton's equation of motion with a numerical integrator. Due to\nphysical constraints, the time step of the integrator needs to be small to\nmaintain sufficient precision. This limits the efficiency of simulation. To\nthis end, we introduce a graph neural network (GNN) based model, MDNet, to\npredict the evolution of coordinates and momentum with large time steps. In\naddition, MDNet can easily scale to a larger system, due to its linear\ncomplexity with respect to the system size. We demonstrate the performance of\nMDNet on a 4000-atom system with large time steps, and show that MDNet can\npredict good equilibrium and transport properties, well aligned with standard\nMD simulations.", "category": "physics_comp-ph" }, { "text": "Minimal Modification to Nos\u00e9-Hoover Barostat Enables Correct NPT\n Sampling: The Nos\'e-Hoover dynamics for isothermal-isobaric (NPT) computer simulations\ndo not generate the appropriate partition function for ergodic systems. The\npresent paper points out that this can be corrected with a simple addition of a\nconstant term to only one of the equations of motion. The solution proposed is\nmuch simpler than previous modifications done towards the same goal. 
The\npresent modification is motivated by the work virial theorem, which has been\nderived for the special case of an infinitely periodic system in the first part\nof this paper.", "category": "physics_comp-ph" }, { "text": "Bayesian optimization with improved scalability and derivative\n information for efficient design of nanophotonic structures: We propose the combination of forward shape derivatives and the use of an\niterative inversion scheme for Bayesian optimization to find optimal designs of\nnanophotonic devices. This approach widens the range of applicability of\nBayesian optimization to situations where a larger number of iterations is\nrequired and where derivative information is available. This was previously\nimpractical because the computational efforts required to identify the next\nevaluation point in the parameter space became much larger than the actual\nevaluation of the objective function. We demonstrate an implementation of the\nmethod by optimizing a waveguide edge coupler.", "category": "physics_comp-ph" }, { "text": "Physical Symmetries Embedded in Neural Networks: Neural networks are a central technique in machine learning. Recent years\nhave seen a wave of interest in applying neural networks to physical systems\nfor which the governing dynamics are known and expressed through differential\nequations. Two fundamental challenges facing the development of neural networks\nin physics applications are their lack of interpretability and their\nphysics-agnostic design. The focus of the present work is to embed physical\nconstraints into the structure of the neural network to address the second\nfundamental challenge. 
By constraining tunable parameters (such as weights and\nbiases) and adding special layers to the network, the desired constraints are\nguaranteed to be satisfied without the need for explicit regularization terms.\nThis is demonstrated on supervised and unsupervised networks for two basic\nsymmetries: even/odd symmetry of a function and energy conservation. In the\nsupervised case, the network with embedded constraints is shown to perform well\non regression problems while simultaneously obeying the desired constraints,\nwhereas a traditional network fits the data but violates the underlying\nconstraints. Finally, a new unsupervised neural network is proposed that\nguarantees energy conservation through an embedded symplectic structure. The\nsymplectic neural network is used to solve a system of energy-conserving\ndifferential equations and outperforms an unsupervised, non-symplectic neural\nnetwork.", "category": "physics_comp-ph" }, { "text": "A dynamical programming approach for controlling the directed abelian\n Dhar-Ramaswamy model: A dynamical programming approach is used to deal with the problem of\ncontrolling the directed abelian Dhar-Ramaswamy model on the two-dimensional\nsquare lattice. Two strategies are considered to obtain explicit results for\nthis task. First, the optimal solution of the problem is characterized by the\nsolution of the Bellman equation obtained by numerical algorithms. Second, the\nsolution is used as a benchmark to assess how far from the optimum other\nheuristics that can be applied to larger systems are. 
This approach is the first attempt in the\ndirection of schemes for controlling self-organized criticality that are based\non optimization principles that explicitly consider a tradeoff between the size\nof the avalanches and the cost of intervention.", "category": "physics_comp-ph" }, { "text": "Effects of interlayer exchange on collapse mechanisms and stability of\n magnetic skyrmions: Theoretical calculations of thermally activated decay of skyrmions in systems\ncomprising several magnetic monolayers are presented, with a special focus on\nbilayer systems. Mechanisms of skyrmion collapse are identified and\ncorresponding energy barriers and thermal collapse rates are evaluated as\nfunctions of the interlayer exchange coupling and mutual stacking of the\nmonolayers using transition state theory and an atomistic spin Hamiltonian. In\norder to contrast the results to monolayer systems, the magnetic interactions\nwithin each layer are chosen so as to mimic the well-established Pd/Fe/Ir(111)\nsystem. Even bilayer systems demonstrate a rich diversity of skyrmion collapse\nmechanisms that sometimes co-exist. For very weakly coupled layers, the\nskyrmions in each layer decay successively via radially-symmetric shrinking.\nSlightly larger coupling leads to an asymmetric chimera collapse stabilized by\ninterlayer exchange. When the interlayer exchange coupling reaches a certain\ncritical value, the skyrmions collapse simultaneously. Interestingly, the\noverall energy barrier for the skyrmion collapse does not always converge to a\nmultiple of that for a monolayer system in the strongly coupled regime. For a\ncertain stacking of the magnetic layers, the energy barrier as a function of\nthe interlayer exchange coupling features a maximum and then decreases with the\ncoupling strength in the strong coupling regime. Calculated mechanisms of\nskyrmion collapse are used to ultimately predict the skyrmion lifetime.
Our\nresults reveal a comprehensive picture of thermal stability of skyrmions in\nmagnetic multilayers and provide a perspective for realizing skyrmions with\ncontrolled properties.", "category": "physics_comp-ph" }, { "text": "APEnet+: high bandwidth 3D torus direct network for petaflops scale\n commodity clusters: We describe herein the APElink+ board, a PCIe interconnect adapter featuring\nthe latest advances in wire speed and interface technology plus hardware\nsupport for an RDMA programming model and experimental acceleration of GPU\nnetworking; this design allows us to build a low latency, high bandwidth PC\ncluster, the APEnet+ network, the new generation of our cost-effective,\ntens-of-thousands-scalable cluster network architecture. Some test results and\ncharacterization of data transmission of a complete testbench, based on a\ncommercial development card mounting an Altera FPGA, are provided.", "category": "physics_comp-ph" }, { "text": "Implicit Finite Volume and Discontinuous Galerkin Methods for\n Multicomponent Flow in Unstructured 3D Fractured Porous Media: We present a new implicit higher-order finite element (FE) approach to\nefficiently model compressible multicomponent fluid flow on unstructured grids\nand in fractured porous subsurface formations. The scheme is sequential\nimplicit: pressures and fluxes are updated with an implicit Mixed Hybrid Finite\nElement (MHFE) method, and the transport of each species is approximated with\nan implicit second-order Discontinuous Galerkin (DG) FE method. Discrete\nfractures are incorporated with a cross-flow equilibrium approach. This is the\nfirst investigation of all-implicit higher-order MHFE-DG for unstructured\ntriangular, quadrilateral (2D), and hexahedral (3D) grids and discrete\nfractures. A lowest-order implicit finite volume (FV) transport update is also\ndeveloped for the same grid types. The implicit methods are compared to an\nImplicit-Pressure-Explicit-Composition (IMPEC) scheme.
For fractured domains,\nthe unconditionally stable implicit transport update is shown to increase\ncomputational efficiency by orders of magnitude as compared to IMPEC, which has\na time-step constraint proportional to the pore volume of discrete fracture\ngrid cells. However, when lowest-order Euler time-discretizations are used,\nnumerical errors increase linearly with the larger implicit time-steps,\nresulting in high numerical dispersion. Second-order Crank-Nicolson implicit\nMHFE-DG and MHFE-FV are therefore presented as well. Convergence analyses show\ntwice the convergence rate for the DG methods as compared to FV, resulting in\ntwo to three orders of magnitude higher computational efficiency. Numerical\nexperiments demonstrate the efficiency and robustness in modeling compressible\nmulticomponent flow on irregular and fractured 2D and 3D grids, even in the\npresence of fingering instabilities.", "category": "physics_comp-ph" }, { "text": "Structure-preserving strategy for conservative simulation of\n relativistic nonlinear Landau--Fokker--Planck equation: Mathematical symmetries of the Beliaev--Budker kernel are the most important\nstructure of the relativistic Landau--Fokker--Planck equation. By preserving\nthe beautiful symmetries, a mass-momentum-energy-conserving simulation has been\ndemonstrated without any artificial constraints.", "category": "physics_comp-ph" }, { "text": "Electromagnetic Wave Propagation In The Plasma Layer of A Reentry\n Vehicle: The ability to simulate a reentry vehicle plasma layer and the radio wave\ninteraction with that layer is crucial to the design of aerospace vehicles\nwhen the analysis of radio communication blackout is required. Results of\naerothermal heating, plasma generation and electromagnetic wave propagation\nover a reentry vehicle are presented in this paper.
Simulation of a magnetic\nwindow radio communication blackout mitigation method is successfully\ndemonstrated.", "category": "physics_comp-ph" }, { "text": "Visualizing the world's largest turbulence simulation: In this exploratory submission we present the visualization of the largest\ninterstellar turbulence simulations ever performed, unravelling key\nastrophysical processes concerning the formation of stars and the relative role\nof magnetic fields. The simulations, including pure hydrodynamical (HD) and\nmagneto-hydrodynamical (MHD) runs, up to a size of $10048^3$ grid elements,\nwere produced on the supercomputers of the Leibniz Supercomputing Centre and\nvisualized using the hybrid parallel (MPI+TBB) ray-tracing engine OSPRay\nassociated with VisIt. Besides revealing features of turbulence with an\nunprecedented resolution, the visualizations brilliantly showcase the\nstretching-and-folding mechanisms through which astrophysical processes such as\nsupernova explosions drive turbulence and amplify the magnetic field in the\ninterstellar gas, and how the first structures, the seeds of newborn stars, are\nshaped by this process.", "category": "physics_comp-ph" }, { "text": "Ideal, best packing, and energy minimizing double helices: We study optimal double helices with straight axes (or the fattest tubes\naround them) computationally using three kinds of functionals: ideal ones using\nropelength, best volume packing ones, and energy minimizers using two\none-parameter families of interaction energies between two strands of types\n$r^{-\\alpha}$ and $\\frac1r\\exp(-kr)$. We compare the numerical results with\nexperimental data of DNA.", "category": "physics_comp-ph" }, { "text": "LoDIP: Low light phase retrieval with deep image prior: Phase retrieval (PR) is a fundamental challenge in scientific imaging,\nenabling nanoscale techniques like coherent diffractive imaging (CDI).
Imaging\nat low radiation doses becomes important in applications where samples are\nsusceptible to radiation damage. However, most PR methods struggle in low-dose\nscenarios due to the presence of very high shot noise. Advancements in the\noptical data acquisition setup, exemplified by in-situ CDI, have shown\npotential for low-dose imaging. But these depend on a time series of\nmeasurements, rendering them unsuitable for single-image applications.\nSimilarly, on the computational front, data-driven phase retrieval techniques\nare not readily adaptable to the single-image context. Deep learning based\nsingle-image methods, such as deep image prior, have been effective for various\nimaging tasks but have exhibited limited success when applied to PR. In this\nwork, we propose LoDIP, which combines the in-situ CDI setup with the power of\nimplicit neural priors to tackle the problem of single-image low-dose phase\nretrieval. Quantitative evaluations demonstrate the superior performance of\nLoDIP on this task as well as applicability to real experimental scenarios.", "category": "physics_comp-ph" }, { "text": "PYG4OMETRY: a Python library for the creation of Monte Carlo radiation\n transport physical geometries: Creating and maintaining computer readable geometries for use in Monte Carlo\nRadiation Transport (MCRT) simulations is an error-prone and time-consuming\ntask. Simulating a system often requires geometry from different sources and\nmodelling environments, including a range of MCRT codes and computer-aided\ndesign (CAD) tools. PYG4OMETRY is a Python library that enables users to\nrapidly create, manipulate, display, read and write Geometry Description Markup\nLanguage (GDML)-based geometry used in simulations. PYG4OMETRY provides\nimportation of CAD files to GDML tessellated solids, conversion of GDML\ngeometry to FLUKA and conversely from FLUKA to GDML. The implementation of\nPYG4OMETRY is explained in detail along with small examples.
The paper\nconcludes with a complete example using most of the PYG4OMETRY features and a\ndiscussion of extensions and future work.", "category": "physics_comp-ph" }, { "text": "Comparison of Update and Genetic Training Algorithms in a Memristor\n Crossbar Perceptron: Memristor-based computer architectures are becoming more attractive as a\npossible choice of hardware for the implementation of neural networks. However,\nat present, memristor technologies are susceptible to a variety of failure\nmodes, a serious concern in any application where regular access to the\nhardware may not be expected or even possible. In this study, we investigate\nwhether certain training algorithms may be more resilient to particular\nhardware failure modes, and therefore more suitable for use in those\napplications. We implement two training algorithms -- a local update scheme and\na genetic algorithm -- in a simulated memristor crossbar, and compare their\nability to train for a simple image classification task as an increasing number\nof memristors fail to adjust their conductance. We demonstrate that there is a\nclear distinction between the two algorithms in several measures of the rate of\nfailure to train.", "category": "physics_comp-ph" }, { "text": "Grain structure dependence of coercivity in thin films: We investigated coercive fields of 200nm x 1200nm x 5nm rectangular\nnanocrystalline thin films as a function of grain size D using finite element\nsimulations. To this end, we created granular finite element models with grain\nsizes ranging from 5nm to 60nm, and performed micromagnetic hysteresis\ncalculations along the y-axis (easy direction) as well as along the x-axis\n(hard direction).
We then used an extended Random Anisotropy model to interpret\nthe results and to illustrate the interplay of random anisotropy and shape-induced\nanisotropy, which is coherent on a much larger scale, in thin films.", "category": "physics_comp-ph" }, { "text": "A comparison of the static and dynamic properties of a semi-flexible\n polymer using lattice-Boltzmann and Brownian dynamics simulations: The aim of this paper is to compare results from lattice-Boltzmann and\nBrownian dynamics simulations of linear chain molecules. We have systematically\nvaried the parameters that may affect the accuracy of the lattice-Boltzmann\nsimulations, including grid resolution, temperature, polymer mass, and fluid\nviscosity. The effects of the periodic boundary conditions are minimized by an\nanalytic correction for the different long-range interactions in periodic and\nunbounded systems. Lattice-Boltzmann results for the diffusion coefficient and\nRouse mode relaxation times were found to be insensitive to temperature, which\nsuggests that effects of hydrodynamic retardation are small. By increasing the\nresolution of the lattice-Boltzmann grid with respect to the polymer size,\nconvergent results for the diffusion coefficient and relaxation times were\nobtained; these results agree with Brownian dynamics to within 1--2%.", "category": "physics_comp-ph" }, { "text": "Variance extrapolation method for neural-network variational Monte Carlo: Constructing more expressive ansatz has been a primary focus for quantum\nMonte Carlo, aimed at more accurate \\textit{ab initio} calculations. However,\nwith more powerful ansatz, e.g. various recent developed models based on\nneural-network architectures, the training becomes more difficult and\nexpensive, which may have a counterproductive effect on the accuracy of\ncalculation. In this work, we propose to make use of the training data to\nperform variance extrapolation when using neural-network ansatz in variational\nMonte Carlo.
We show that this approach can speed up the convergence and\nsurpass the limitations of the ansatz to obtain an improved estimation of the energy.\nMoreover, variance extrapolation greatly enhances the error cancellation\ncapability, resulting in significantly improved relative energy outcomes, which\nare key to chemistry and physics problems.", "category": "physics_comp-ph" }, { "text": "Deconfined quantum criticality in spin-1/2 chains with long-range\n interactions: We study spin-$1/2$ chains with long-range power-law decaying unfrustrated\n(bipartite) Heisenberg exchange $J_r \\propto r^{-\\alpha}$ and multi-spin\ninteractions $Q$ favoring a valence-bond solid (VBS) ground state. Employing\nquantum Monte Carlo techniques and Lanczos diagonalization, we analyze order\nparameters and excited-state level crossings to characterize quantum states and\nphase transitions in the $(\\alpha,Q)$ plane. For weak $Q$ and sufficiently\nslowly decaying Heisenberg interactions (small $\\alpha$), the system has a\nlong-range-ordered antiferromagnetic (AFM) ground state, and upon increasing\n$\\alpha$ there is a continuous transition into a quasi long-range ordered\n(QLRO) critical state of the type in the standard Heisenberg chain. For rapidly\ndecaying long-range interactions, there is a transition between QLRO and VBS\nground states of the same kind as in the frustrated $J_1$-$J_2$ Heisenberg\nchain. Our most important finding is a direct continuous quantum phase\ntransition between the AFM and VBS states - a close analogy to the 2D\ndeconfined quantum-critical point. In previous 1D analogies the ordered phases\nboth have gapped fractional excitations, and the critical point is a\nconventional Luttinger Liquid. In our model the excitations fractionalize upon\ntransitioning from the AFM state, changing from spin waves to deconfined\nspinons. We extract critical exponents at the AFM-VBS transition and use\norder-parameter distributions to study emergent symmetries.
We find emergent\nO($4$) symmetry of the O($3$) AFM and scalar VBS order parameters. Thus, the\norder parameter fluctuations exhibit the covariance of a uniaxially deformed\nO($4$) sphere (an \"elliptical\" symmetry). This unusual quantum phase transition\ndoes not yet have any known field theory description, and our detailed results\ncan serve to guide its construction. We discuss possible experimental\nrealizations.", "category": "physics_comp-ph" }, { "text": "Investigations of an effective time-domain boundary condition for\n quiescent viscothermal acoustics: Accurate simulations of sound propagation in narrow geometries need to\naccount for viscous and thermal losses. In this respect, effective boundary\nconditions that model viscothermal losses in frequency-domain acoustics have\nrecently gained in popularity. Here, we investigate the time-domain analogue of\none such boundary condition. We demonstrate that the thermal part of the\nboundary condition is dissipative in the time domain as expected, while the viscous\npart, unexpectedly, may lead to an infinite instability. A\nfinite-difference-time-domain scheme is developed for simulations of sound\npropagation in a duct with only thermal losses, and the obtained transmission\ncharacteristics are found to be in excellent agreement with frequency-domain\nsimulations.", "category": "physics_comp-ph" }, { "text": "GPU-Acceleration of the ELPA2 Distributed Eigensolver for Dense\n Symmetric and Hermitian Eigenproblems: The solution of eigenproblems is often a key computational bottleneck that\nlimits the tractable system size of numerical algorithms, among them electronic\nstructure theory in chemistry and in condensed matter physics. Large\neigenproblems can easily exceed the capacity of a single compute node, and thus\nmust be solved on distributed-memory parallel computers. We here present\nGPU-oriented optimizations of the ELPA two-stage tridiagonalization eigensolver\n(ELPA2).
On top of cuBLAS-based GPU offloading, we add a CUDA kernel to speed\nup the back-transformation of eigenvectors, which can be the computationally\nmost expensive part of the two-stage tridiagonalization algorithm. We benchmark\nthe performance of this GPU-accelerated eigensolver on two hybrid CPU-GPU\narchitectures, namely a compute cluster based on Intel Xeon Gold CPUs and\nNVIDIA Volta GPUs, and the Summit supercomputer based on IBM POWER9 CPUs and\nNVIDIA Volta GPUs. Consistent with previous benchmarks on CPU-only\narchitectures, the GPU-accelerated two-stage solver exhibits a parallel\nperformance superior to the one-stage counterpart. Finally, we demonstrate the\nperformance of the GPU-accelerated eigensolver developed in this work for\nroutine semi-local KS-DFT calculations comprising thousands of atoms.", "category": "physics_comp-ph" }, { "text": "Stylized facts from a threshold-based heterogeneous agent model: A class of heterogeneous agent models is investigated where investors switch\ntrading position whenever their motivation to do so exceeds some critical\nthreshold. These motivations can be psychological in nature or reflect\nbehaviour suggested by the efficient market hypothesis (EMH). By introducing\ndifferent propensities into a baseline model that displays EMH behaviour, one\ncan attempt to isolate their effects upon the market dynamics.\n The simulation results indicate that the introduction of a herding propensity\nresults in excess kurtosis and power-law decay consistent with those observed\nin actual return distributions, but not in significant long-term volatility\ncorrelations. 
Possible alternatives for introducing such long-term volatility\ncorrelations are then identified and discussed.", "category": "physics_comp-ph" }, { "text": "HEP Software Foundation Community White Paper Working Group -- Data\n Organization, Management and Access (DOMA): Without significant changes to data organization, management, and access\n(DOMA), HEP experiments will find scientific output limited by how fast data\ncan be accessed and digested by computational resources. In this white paper we\ndiscuss challenges in DOMA that HEP experiments, such as the HL-LHC, will face\nas well as potential ways to address them. A research and development timeline\nto assess these changes is also proposed.", "category": "physics_comp-ph" }, { "text": "Topology optimization on two-dimensional manifolds: This paper implements topology optimization on two-dimensional manifolds. In\nthis paper, the material interpolation is implemented on a material parameter\nin the partial differential equation used to describe a physical field, when\nthis physical field is defined on a two-dimensional manifold; the material\ndensity is used to formulate a mixed boundary condition of the physical field\nand implement the penalization between two different types of boundary\nconditions, when this physical field is defined on a three-dimensional domain\nwith its boundary conditions defined on the two-dimensional manifold\ncorresponding to a surface or an interface of this three-dimensional domain. Based\non the homeomorphic property of two-dimensional manifolds, typical\ntwo-dimensional manifolds, e.g., sphere, torus, M\\\"{o}bius strip and Klein\nbottle, are included in the numerical tests, which are provided for\nproblems in fluid mechanics, heat transfer and electromagnetics.", "category": "physics_comp-ph" }, { "text": "Universal Lattice Basis: We report on the utility of using Shannon's sampling theorem to solve quantum\nmechanical systems.
We show that by extending the logic of Shannon's\ninterpolation theorem we can define a Universal Lattice Basis, which has\nsuperior interpolating properties compared to traditional methods. This basis\nis orthonormal, semi-local, has a Euclidean norm, and a simple analytic\nexpression for the derivatives. Additionally, we can define a bounded domain\nfor which band-limited functions, such as Gaussians, show quadratic convergence\nin the representation error with respect to the sampling frequency. This theory\nalso extends to the periodic domain and we illustrate the simple analytic forms\nof the periodic semi-local basis and derivatives. Additionally, we show that\nthis periodic basis is equivalent to the space defined by the Fast Fourier\nTransform. This novel basis has great utility in solving quantum mechanical\nproblems for which the wave functions are known to be naturally band-limited.\nSeveral numerical examples in single and multi-dimensions are given to show the\nconvergence and equivalence of the periodic and bounded domains for compact\nstates.", "category": "physics_comp-ph" }, { "text": "Efficient ab initio calculation of electronic stopping in disordered\n systems via geometry pre-sampling: application to liquid water: Knowledge of the electronic stopping curve for swift ions, $S_e(v)$,\nparticularly around the Bragg peak, is important for understanding radiation\ndamage. Experimentally, however, the determination of such a feature for light\nions is very challenging, especially in disordered systems such as liquid water\nand biological tissue. Recent developments in real-time time-dependent density\nfunctional theory (rt-TDDFT) have enabled the calculation of $S_e(v)$ along\nnm-sized trajectories. However, it is still a challenge to obtain a meaningful\nstatistically averaged $S_e(v)$ that can be compared to observations.
In this\nwork, taking advantage of the correlation between the local electronic\nstructure probed by the projectile and the distance from the projectile to the\natoms in the target, we devise a trajectory pre-sampling scheme to select,\ngeometrically, a small set of short trajectories to accelerate the convergence\nof the averaged $S_e(v)$ computed via rt-TDDFT. For protons in liquid water, we\nfirst calculate the reference probability distribution function (PDF) for the\ndistance from the proton to the closest oxygen atom,\n$\\phi_R(r_{p{\\rightarrow}O})$, for a trajectory of a length similar to those\nsampled experimentally. Then, short trajectories are sequentially selected so\nthat the accumulated PDF reproduces $\\phi_R(r_{p{\\rightarrow}O})$ to\nincreasingly high accuracy. Using these pre-sampled trajectories, we\ndemonstrate that the averaged $S_e(v_p)$ converges in the whole velocity range\nwith fewer than eight trajectories, while other averaging methods using randomly\nand uniformly distributed trajectories require approximately ten times the\ncomputational effort. This allows us to compare the $S_e(v_p)$ curve to\nexperimental data, and assess widely used empirical tables based on Bragg's\nrule.", "category": "physics_comp-ph" }, { "text": "Nonlinear Acceleration of Sequential Fully Implicit (SFI) Method for\n Coupled Flow and Transport in Porous Media: The sequential fully implicit (SFI) method was introduced along with the\ndevelopment of the multiscale finite volume (MSFV) framework, and has received\nconsiderable attention in recent years. Each time step for SFI consists of an\nouter loop to solve the coupled system, in which there is one inner Newton loop\nto implicitly solve the pressure equation and another loop to implicitly solve\nthe transport equations. Limited research has been conducted that deals with\nthe outer coupling level to investigate the convergence performance.
In this\npaper we extend the basic SFI method with several nonlinear acceleration\ntechniques for improving the outer-loop convergence. Specifically, we consider\nnumerical relaxation, quasi-Newton (QN) and Anderson acceleration (AA) methods.\nThe acceleration techniques are adapted and studied for the first time within\nthe context of SFI for coupled flow and transport in porous media. We reveal\nthat the iterative form of SFI is equivalent to a nonlinear block Gauss-Seidel\n(BGS) process. The effectiveness of the acceleration techniques is demonstrated\nusing several challenging examples. The results show that the basic SFI method\nis quite inefficient, suffering from slow convergence or even convergence\nfailure. In order to better understand the behaviors of SFI, we carry out a\ndetailed analysis of the coupling mechanisms between the sub-problems. Compared\nwith the basic SFI method, superior convergence performance is achieved by the\nacceleration techniques, which can resolve the convergence difficulties\nassociated with various types of coupling effects. We show across a wide range\nof flow conditions that the acceleration techniques can stabilize the iterative\nprocess, and largely reduce the outer iteration count.", "category": "physics_comp-ph" }, { "text": "Alya: Towards Exascale for Engineering Simulation Codes: Alya is the BSC in-house HPC-based multi-physics simulation code. It is\ndesigned from scratch to run efficiently on parallel supercomputers, solving\ncoupled problems. The target domain is engineering, with all its particular\nfeatures: complex geometries and unstructured meshes, coupled multi-physics\nwith exotic coupling schemes and physical models, ill-posed problems,\nflexibility needs for rapidly including new models, etc. Since its conception\nin 2004, Alya has shown scaling behaviour on an increasing number of cores. In\nthis paper, we present its performance up to 100,000 cores on Blue Waters, the\nNCSA supercomputer.
The selected tests are representative of the engineering\nworld, with all the problematic features included: incompressible flow in a human\nrespiratory system, a low Mach combustion problem in a kiln furnace and a coupled\nelectro-mechanical problem in a heart. We show scalability plots for all cases,\ndiscussing all the aspects of this kind of simulation, including solver\nconvergence.", "category": "physics_comp-ph" }, { "text": "Large-scale grid-enabled lattice-Boltzmann simulations of complex fluid\n flow in porous media and under shear: Well-designed lattice-Boltzmann codes exploit the essentially embarrassingly\nparallel features of the algorithm and so can be run with considerable\nefficiency on modern supercomputers. Such scalable codes permit us to simulate\nthe behaviour of increasingly large quantities of complex condensed matter\nsystems. In the present paper, we present some preliminary results on the large\nscale three-dimensional lattice-Boltzmann simulation of binary immiscible fluid\nflows through a porous medium derived from digitised x-ray microtomographic\ndata of Bentheimer sandstone, and from the study of the same fluids under\nshear. Simulations on such scales can benefit considerably from the use of\ncomputational steering and we describe our implementation of steering within\nthe lattice-Boltzmann code, called LB3D, making use of the RealityGrid steering\nlibrary. Our large scale simulations benefit from the new concept of capability\ncomputing, designed to prioritise the execution of big jobs on major\nsupercomputing resources.
The advent of persistent computational grids promises\nto provide an optimal environment in which to deploy these mesoscale simulation\nmethods, which can exploit the distributed nature of compute, visualisation and\nstorage resources to reach scientific results rapidly; we discuss our work on\nthe grid-enablement of lattice-Boltzmann methods in this context.", "category": "physics_comp-ph" }, { "text": "Short-time critical dynamics at perfect and non-perfect surface: We report Monte Carlo simulations of critical dynamics far from equilibrium\non a perfect and non-perfect surface in the 3d Ising model. For an ordered\ninitial state, the dynamic relaxation of the surface magnetization, the line\nmagnetization of the defect line, and the corresponding susceptibilities and\nappropriate cumulant are carefully examined at the ordinary, special and surface\nphase transitions. The universal dynamic scaling behavior including a dynamic\ncrossover scaling form is identified. The exponent $\\beta_1$ of the surface\nmagnetization and $\\beta_2$ of the line magnetization are extracted. The impact\nof the defect line on the surface universality classes is investigated.", "category": "physics_comp-ph" }, { "text": "Self-learning projective quantum Monte Carlo simulations guided by\n restricted Boltzmann machines: The projective quantum Monte Carlo (PQMC) algorithms are among the most\npowerful computational techniques to simulate the ground state properties of\nquantum many-body systems. However, they are efficient only if a sufficiently\naccurate trial wave function is used to guide the simulation. In the standard\napproach, this guiding wave function is obtained in a separate simulation that\nperforms a variational minimization. Here we show how to perform PQMC\nsimulations guided by an adaptive wave function based on a restricted Boltzmann\nmachine.
This adaptive wave function is optimized during the PQMC simulation via\nunsupervised machine learning, avoiding the need for a separate variational\noptimization. As a byproduct, this technique provides an accurate ansatz for\nthe ground state wave function, which is obtained by minimizing the\nKullback-Leibler divergence with respect to the PQMC samples, rather than by\nminimizing the energy expectation value as in standard variational\noptimizations. The high accuracy of this self-learning PQMC technique is\ndemonstrated for a paradigmatic sign-problem-free model, namely, the\nferromagnetic quantum Ising chain, showing very precise agreement with the\npredictions of the Jordan-Wigner theory and of loop quantum Monte Carlo\nsimulations performed in the low-temperature limit.", "category": "physics_comp-ph" }, { "text": "Stochastic turbulence modeling in RANS simulations via Multilevel Monte\n Carlo: A multilevel Monte Carlo (MLMC) method for quantifying model-form\nuncertainties associated with Reynolds-Averaged Navier-Stokes (RANS)\nsimulations is presented. Two high-dimensional stochastic extensions of the\nRANS equations are considered to demonstrate the applicability of the MLMC\nmethod. The first approach is based on global perturbation of the baseline eddy\nviscosity field using a lognormal random field. A more general second extension\nis considered based on the work of [Xiao et al.(2017)], where the entire\nReynolds Stress Tensor (RST) is perturbed while maintaining realizability. For\ntwo fundamental flows, we show that the MLMC method based on a hierarchy of\nmeshes is asymptotically faster than plain Monte Carlo.
Additionally, we\ndemonstrate that for some flows an optimal multilevel estimator can be obtained\nfor which the cost scales with the same order as a single CFD solve on the\nfinest grid level.", "category": "physics_comp-ph" }, { "text": "Periodic Pulay method for robust and efficient convergence acceleration\n of self-consistent field iterations: Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of\nthe most widely used mixing schemes for accelerating the self-consistent\nsolution of electronic structure problems. In this work, we propose a simple\ngeneralization of DIIS in which Pulay extrapolation is performed at periodic\nintervals rather than on every self-consistent field iteration, and linear\nmixing is performed on all other iterations. We demonstrate through numerical\ntests on a wide variety of materials systems in the framework of density\nfunctional theory that the proposed generalization of Pulay's method\nsignificantly improves its robustness and efficiency.", "category": "physics_comp-ph" }, { "text": "GPU acceleration of local and semilocal density functional calculations\n in the SPARC electronic structure code: We present a GPU-accelerated version of the real-space SPARC electronic\nstructure code for performing Kohn-Sham density functional theory calculations\nwithin the local density and generalized gradient approximations. In\nparticular, we develop a modular math kernel based implementation for NVIDIA\narchitectures wherein the computationally expensive operations are carried out\non the GPUs, with the remainder of the workload retained on the CPUs. 
Using\nrepresentative bulk and slab examples, we show that GPUs enable speedups of up\nto 6x relative to CPU-only execution, bringing time to solution down to less\nthan 30 seconds for a metallic system with over 14,000 electrons, and enabling\nsignificant reductions in computational resources required for a given wall\ntime.", "category": "physics_comp-ph" }, { "text": "Strict bounding of quantities of interest in computations based on\n domain decomposition: This paper deals with bounding the error on the estimation of quantities of\ninterest obtained by finite element and domain decomposition methods. The\nproposed bounds are written in order to separate the two errors involved in the\nresolution of reference and adjoint problems : on the one hand the\ndiscretization error due to the finite element method and on the other hand the\nalgebraic error due to the use of the iterative solver. Beside practical\nconsiderations on the parallel computation of the bounds, it is shown that the\ninterface conformity can be slightly relaxed so that local enrichment or\nrefinement are possible in the subdomains bearing singularities or quantities\nof interest which simplifies the improvement of the estimation. Academic\nassessments are given on 2D static linear mechanic problems.", "category": "physics_comp-ph" }, { "text": "Poisson-Boltzmann model for protein-surface electrostatic interactions\n and grid-convergence study using the PyGBe code: Interactions between surfaces and proteins occur in many vital processes and\nare crucial in biotechnology: the ability to control specific interactions is\nessential in fields like biomaterials, biomedical implants and biosensors. In\nthe latter case, biosensor sensitivity hinges on ligand proteins adsorbing on\nbioactive surfaces with a favorable orientation, exposing reaction sites to\ntarget molecules. Protein adsorption, being a free-energy-driven process, is\ndifficult to study experimentally. 
This paper develops and evaluates a\ncomputational model to study electrostatic interactions of proteins and charged\nnanosurfaces, via the Poisson-Boltzmann equation. We extended the\nimplicit-solvent model used in the open-source code PyGBe to include surfaces\nof imposed charge or potential. This code solves the boundary integral\nformulation of the Poisson-Boltzmann equation, discretized with surface\nelements. PyGBe has at its core a treecode-accelerated Krylov iterative solver,\nresulting in O(N log N) scaling, with further acceleration on hardware via\nmulti-threaded execution on GPUs. It computes solvation and surface free\nenergies, providing a framework for studying the effect of electrostatics on\nadsorption. We then derived an analytical solution for a spherical charged\nsurface interacting with a spherical molecule, and completed a\ngrid-convergence study to build evidence on the correctness of our approach.\nThe study showed the error decaying with the average area of the boundary\nelements, i.e., the method is O(1/N), which is consistent with our previous\nverification studies using PyGBe. We also studied grid convergence using a real\nmolecular geometry (protein GB1D4'), in this case using Richardson\nextrapolation (in the absence of an analytical solution), and confirmed the\nO(1/N) scaling.", "category": "physics_comp-ph" }, { "text": "Snowmass Computing Frontier: Software Development, Staffing and Training: Report of the Snowmass CpF-I4 subgroup on Software Development, Staffing and\nTraining", "category": "physics_comp-ph" }, { "text": "cuPentBatch -- A batched pentadiagonal solver for NVIDIA GPUs: We introduce cuPentBatch -- our own pentadiagonal solver for NVIDIA GPUs. The\ndevelopment of cuPentBatch has been motivated by applications involving\nnumerical solutions of parabolic partial differential equations, which we\ndescribe.
Our solver is written with batch processing in mind (as necessitated\nby parameter studies of various physical models). In particular, our solver is\ndirected at those problems where only the right-hand side of the matrix changes\nas the batch solutions are generated. As such, we demonstrate that cuPentBatch\noutperforms the NVIDIA standard pentadiagonal batch solver gpsvInterleavedBatch\nfor the class of physically-relevant computational problems encountered herein.", "category": "physics_comp-ph" }, { "text": "Molybdenum Carbide MXenes as Efficient Nanosensors Towards Selected\n Chemical Warfare Agents: There has been budding demand for the fast, reliable, inexpensive,\nnon-invasive, sensitive, and compact sensors with low power consumption in\nvarious fields, such as defence, chemical sensing, health care, and safe\nenvironment monitoring units. Particularly, an efficient detection of chemical\nwarfare agents (CWAs) is of great importance for the safety and security of the\nhumans. Inspired by this, we explored molybdenum carbide MXenes (Mo2CTx; Tx= O,\nF, S) as efficient sensors towards selected CWAs, such as arsine (AsH3),\nmustard gas (C4H8Cl2S), cyanogen chloride (NCCl), and phosgene (COCl2) both in\naqueous and non-aqueous mediums. Our van der Waals corrected density functional\ntheory (DFT) calculations reveal that the CWAs bind with Mo2CF2, and Mo2CS2\nmonolayers under strong chemisorption with binding energies in the range of\n-2.33 to -4.05 eV, whereas Mo2CO2 results in comparatively weak bindings of\n-0.29 to -0.58 eV. We further report the variations in the electronic\nproperties, electrostatic potentials and work functions of Mo2CTx upon the\nadsorption of CWAs, which authenticate an efficient sensing mechanism.\nStatistical thermodynamic analysis is applied to explore the sensing properties\nof Mo2CTx at various of temperatures and pressures. 
We believe that our\nfindings will pave the way to an innovative class of low-cost reusable sensors\nfor the sensitive and selective detection of highly toxic CWAs in air as well\nas in aqueous media.", "category": "physics_comp-ph" }, { "text": "Surrogate Modeling for Fluid Flows Based on Physics-Constrained Deep\n Learning Without Simulation Data: Numerical simulations on fluid dynamics problems primarily rely on spatially\nor/and temporally discretization of the governing equation into the\nfinite-dimensional algebraic system solved by computers. Due to complicated\nnature of the physics and geometry, such process can be computational\nprohibitive for most real-time applications and many-query analyses. Therefore,\ndeveloping a cost-effective surrogate model is of great practical significance.\nDeep learning (DL) has shown new promises for surrogate modeling due to its\ncapability of handling strong nonlinearity and high dimensionality. However,\nthe off-the-shelf DL architectures fail to operate when the data becomes\nsparse. Unfortunately, data is often insufficient in most parametric fluid\ndynamics problems since each data point in the parameter space requires an\nexpensive numerical simulation based on the first principle, e.g.,\nNaiver--Stokes equations. In this paper, we provide a physics-constrained DL\napproach for surrogate modeling of fluid flows without relying on any\nsimulation data. Specifically, a structured deep neural network (DNN)\narchitecture is devised to enforce the initial and boundary conditions, and the\ngoverning partial differential equations are incorporated into the loss of the\nDNN to drive the training. Numerical experiments are conducted on a number of\ninternal flows relevant to hemodynamics applications, and the forward\npropagation of uncertainties in fluid properties and domain geometry is studied\nas well. 
The results show excellent agreement on the flow field and\nforward-propagated uncertainties between the DL surrogate approximations and\nthe first-principle numerical simulations.", "category": "physics_comp-ph" }, { "text": "Higher-order adaptive finite-element methods for Kohn-Sham density\n functional theory: We present an efficient computational approach to perform real-space\nelectronic structure calculations using an adaptive higher-order finite-element\ndiscretization of Kohn-Sham density-functional theory (DFT). To this end, we\ndevelop an a-priori mesh adaption technique to construct a close to optimal\nfinite-element discretization of the problem. We further propose an efficient\nsolution strategy for solving the discrete eigenvalue problem by using spectral\nfinite-elements in conjunction with Gauss-Lobatto quadrature, and a Chebyshev\nacceleration technique for computing the occupied eigenspace. The proposed\napproach has been observed to provide a staggering 100-200 fold computational\nadvantage over the solution of a generalized eigenvalue problem. Using the\nproposed solution procedure, we investigate the computational efficiency\nafforded by higher-order finite-element discretization of the Kohn-Sham DFT\nproblem. Our studies suggest that staggering computational savings of the order\nof 1000 fold relative to linear finite-elements can be realized, for both\nall-electron and local pseudopotential calculations. On all the benchmark\nsystems studied, we observe diminishing returns in computational savings beyond\nthe sixth-order for accuracies commensurate with chemical accuracy. A\ncomparative study of the computational efficiency of the proposed higher-order\nfinite-element discretizations suggests that the performance of finite-element\nbasis is competing with the plane-wave discretization for non-periodic local\npseudopotential calculations, and compares to the Gaussian basis for\nall-electron calculations within an order of magnitude. 
Further, we demonstrate\nthe capability of the proposed approach to compute the electronic structure of\na metallic system containing 1688 atoms using modest computational resources,\nand good scalability of the present implementation up to 192 processors.", "category": "physics_comp-ph" }, { "text": "Iterative Retraining of Quantum Spin Models Using Recurrent Neural\n Networks: Modeling quantum many-body systems is enormously challenging due to the\nexponential scaling of Hilbert dimension with system size. Finding efficient\ncompressions of the wavefunction is key to building scalable models. Here, we\nintroduce iterative retraining, an approach for simulating bulk quantum systems\nthat uses recurrent neural networks (RNNs). By mapping translations in the\nlattice vector to the time index of an RNN, we are able to efficiently capture\nthe near translational invariance of large lattices. We show that we can use\nthis symmetry mapping to simulate very large systems in one and two dimensions.\nWe do so by 'growing' our model, iteratively retraining the same model on\nprogressively larger lattices until edge effects become negligible. We argue\nthat this scheme generalizes more naturally to higher dimensions than Density\nMatrix Renormalization Group.", "category": "physics_comp-ph" }, { "text": "How to Differentiate Collective Variables in Free Energy Codes:\n Computer-Algebra Code Generation and Automatic Differentiation: The proper choice of collective variables (CVs) is central to biased-sampling\nfree energy reconstruction methods in molecular dynamics simulations. The\nPLUMED 2 library, for instance, provides several sophisticated CV choices,\nimplemented in a C++ framework; however, developing new CVs is still time\nconsuming due to the need to provide code for the analytical derivatives of all\nfunctions with respect to atomic coordinates. 
We present two solutions to this\nproblem, namely (a) symbolic differentiation and code generation, and (b)\nautomatic code differentiation, in both cases leveraging open-source libraries\n(SymPy and Stan Math respectively). The two approaches are demonstrated and\ndiscussed in detail implementing a realistic example CV, the local radius of\ncurvature of a polymer. Users may use the code as a template to streamline the\nimplementation of their own CVs using high-level constructs and automatic\ngradient computation.", "category": "physics_comp-ph" }, { "text": "Scan Coil Dynamics Simulation for Subsampled Scanning Transmission\n Electron Microscopy: Subsampling and fast scanning in the scanning transmission electron\nmicroscope is problematic due to scan coil hysteresis - the mismatch between\nthe actual and assumed location of the electron probe beam as a function of the\nhistory of the scan. Hysteresis limits the resolution of the microscope and can\ninduce artefacts in our images, particularly during flyback. In this work, we\naim to provide insights on the effects of hysteresis during image formation. To\naccomplish this, a simulation has been developed to model a scanning system as\na damped double-harmonic oscillator, with the simulation being capable of\nmanaging many microscope dependant parameters to study the effect on the\nresultant scan trajectories. The model developed shows that the trajectory of\nthe electron beam probe is not obvious and the relationship between scanning\npattern and probe trajectory is complex.", "category": "physics_comp-ph" }, { "text": "Analysis of dynamic ruptures generating seismic waves in a\n self-gravitating planet: an iterative coupling scheme and well-posedness: We study the solution of the system of equations describing the dynamical\nevolution of spontaneous ruptures generated in a prestressed\nelastic-gravitational deforming body and governed by rate and state friction\nlaws. 
We propose an iterative coupling scheme based on a weak formulation with\nnonlinear interior boundary conditions, both for continuous time and with\nimplicit discretization (backward Euler) in time. We regularize the problem by\nintroducing viscosity. This guarantees the convergence of the scheme for\nsolutions of the regularized problems in both cases. We also make precise the\nconditions on the relevant coefficients for convergence to hold.", "category": "physics_comp-ph" }, { "text": "Vector Fitting: We introduce the Vector Fitting algorithm for the creation of reduced-order\nmodels from the sampled response of a linear time-invariant system. This\ndata-driven approach to reduction is particularly useful when the system under\nmodeling is known only through experimental measurements. The theory behind\nVector Fitting is presented for single- and multiple-input systems, together\nwith numerical details, pseudocodes, and an open-source implementation. We\ndiscuss how the reduced model can be made stable and converted to a variety of\nforms for use in virtually any modeling context. Finally, we survey recent\nextensions of the Vector Fitting algorithm geared towards time-domain,\nparametric and distributed systems modeling.", "category": "physics_comp-ph" }, { "text": "Electron wave functions on $T^2$ in a static magnetic field of arbitrary\n direction: A basis set expansion is performed to find the eigenvalues and wave functions\nfor an electron on a toroidal surface $T^2$ subject to a constant magnetic\nfield in an arbitrary direction. 
The evolution of several low-lying states as a\nfunction of field strength and field orientation is reported, and a procedure\nto extend the results to include two-body Coulomb matrix elements on $T^2$ is\npresented.", "category": "physics_comp-ph" }, { "text": "Extending a Hybrid Godunov Method for Radiation Hydrodynamics to\n Multiple Dimensions: This paper presents a hybrid Godunov method for three-dimensional radiation\nhydrodynamics. The multidimensional technique outlined in this paper is an\nextension of the one-dimensional method that was developed by Sekora & Stone\n2009, 2010. The earlier one-dimensional technique was shown to preserve certain\nasymptotic limits and be uniformly well behaved from the photon free streaming\n(hyperbolic) limit through the weak equilibrium diffusion (parabolic) limit and\nto the strong equilibrium diffusion (hyperbolic) limit. This paper gives the\nalgorithmic details for constructing a multidimensional method. A future paper\nwill present numerical tests that demonstrate the robustness of the\ncomputational technique across a wide-range of parameter space.", "category": "physics_comp-ph" }, { "text": "A mixed basis approach in the SGP-limit: A perturbation method for computing quick estimates of the echo decay in\npulsed spin echo gradient NMR diffusion experiments in the short gradient pulse\nlimit is presented. The perturbation basis involves (relatively few) dipole\ndistributions on the boundaries generating a small perturbation matrix in\nO(s^2) time, where s denotes the number of boundary elements. Several\napproximate eigenvalues and eigenfunctions to the diffusion operator are\nretrieved. 
The method is applied to 1-D and 2-D systems with Neumann boundary\nconditions.", "category": "physics_comp-ph" }, { "text": "Planet-disc interactions with Discontinuous Galerkin Methods using GPUs: We present a two-dimensional Cartesian code based on high order discontinuous\nGalerkin methods, implemented to run in parallel over multiple GPUs. A simple\nplanet-disc setup is used to compare the behaviour of our code against the\nbehaviour found using the FARGO3D code with a polar mesh. We make use of the\ntime dependence of the torque exerted by the disc on the planet as a mean to\nquantify the numerical viscosity of the code. We find that the numerical\nviscosity of the Keplerian flow can be as low as a few $10^{-8}r^2\\Omega$, $r$\nand $\\Omega$ being respectively the local orbital radius and frequency, for\nfifth order schemes and resolution of $\\sim 10^{-2}r$. Although for a single\ndisc problem a solution of low numerical viscosity can be obtained at lower\ncomputational cost with FARGO3D (which is nearly an order of magnitude faster\nthan a fifth order method), discontinuous Galerkin methods appear promising to\nobtain solutions of low numerical viscosity in more complex situations where\nthe flow cannot be captured on a polar or spherical mesh concentric with the\ndisc.", "category": "physics_comp-ph" }, { "text": "Compton scattering in particle-in-cell codes: We present a Monte Carlo collisional scheme that models single Compton\nscattering between leptons and photons in particle-in-cell codes. The numerical\nimplementation of Compton scattering can deal with macro-particles of different\nweights and conserves momentum and energy in each collision. Our scheme is\nvalidated through two benchmarks for which exact analytical solutions exist:\nthe inverse Compton spectra produced by an electron scattering with an\nisotropic photon gas and the photon-electron gas equilibrium described by the\nKompaneets equation. 
It opens new opportunities for numerical investigation of\nplasma phenomena where a significant population of high energy photons is\npresent in the system.", "category": "physics_comp-ph" }, { "text": "A central partition of molecular conformational space. I. Basic\n structures: On the basis of empirical evidence from molecular dynamics simulations,\nmolecular conformational space can be described by means of a partition of\ncentral conical regions characterized by the dominance relations between\ncartesian coordinates. This work presents a geometric and combinatorial\ndescription of this structure.", "category": "physics_comp-ph" }, { "text": "Modeling meso-scale energy localization in shocked HMX, Part II:\n training machine-learned surrogate models for void shape and void-void\n interaction effects: Surrogate models for hotspot ignition and growth rates were presented in Part\nI, where the hotspots were formed by the collapse of single cylindrical voids.\nSuch isolated cylindrical voids are idealizations of the void morphology in\nreal meso-structures. This paper therefore investigates the effect of\nnon-cylindrical void shapes and void-void interactions on hotspot ignition and\ngrowth. Surrogate models capturing these effects are constructed using a\nBayesian Kriging approach. The training data for machine learning the\nsurrogates are derived from reactive void collapse simulations spanning the\nparameter space of void aspect ratio (AR), void orientation ($\\theta$), and\nvoid fraction ($\\phi$). The resulting surrogate models portray strong\ndependence of the ignition and growth rates on void aspect ratio and\norientation, particularly when they are oriented at acute angles with respect\nto the imposed shock. The surrogate models for void interaction effects show\nsignificant changes in hotspot ignition and growth rates as the void fraction\nincreases. 
The paper elucidates the physics of hotspot evolution in void fields\ndue to the creation and interaction of multiple hotspots. The results from this\nwork will be useful not only for constructing meso-informed macro-scale models\nof HMX, but also for understanding the physics of void-void interactions and\nsensitivity due to void shape and orientation.", "category": "physics_comp-ph" }, { "text": "Shot Noise Suppression in Avalanche Photodiodes: We identify a new shot noise suppression mechanism in a thin (~100 nm)\nheterostructure avalanche photodiode. In the low-gain regime the shot noise is\nsuppressed due to temporal correlations within amplified current pulses. We\ndemonstrate in a Monte Carlo simulation that the effective excess noise factors\ncan be <1, and reconcile the apparent conflict between theory and experiments.\nThis shot noise suppression mechanism is independent of known mechanisms such\nas Coulomb interaction, or reflection at heterojunction interfaces.", "category": "physics_comp-ph" }, { "text": "Transport Properties of Water Confined in a Graphene Nanochannel: Equilibrium molecular dynamics simulations are used to investigate the effect\nof phase transitions on the transport properties of highly-confined water\nbetween parallel graphene sheets. An abrupt reduction by several orders of\nmagnitude in the mobility of water is observed in strong confinement, as\nindicated by reduced diffusivity and increased shear viscosity values. The bulk\nviscosity, which is related to the resistance to expansion and compression of a\nsubstance, is also calculated, showing an enhancement compared to the bulk\nvalue for all levels of confinement. An investigation into the phase behaviour\nof confined water reveals a transition from a liquid monolayer to a rhombic\nfrozen monolayer at nanochannel heights between 6.8-7.8 \\r{A}; for larger\nseparations, multilayer liquid water is recovered. 
It is shown how this phase\ntransition is at the root of the impeded transport.", "category": "physics_comp-ph" }, { "text": "Heat Transport with a Twist: Despite the desirability of polymers for use in many products due to their\nflexibility, light weight, and durability, their status as thermal insulators\nhas precluded their use in applications where thermal conductors are required.\nHowever, recent results suggest that the thermal conductance of polymers can be\nenhanced and that their heat transport behaviors may be highly sensitive to\nnanoscale control. Here we use non-equilibrium molecular dynamics (MD)\nsimulations to study the effect of mechanical twist on the steady-state thermal\nconductance across multi-stranded polyethylene wires. We find that a highly\ntwisted double-helical polyethylene wire can display a thermal conductance up\nto three times that of its untwisted form, an effect which can be attributed to\na structural transition in the strands of the double helix. We also find that\nin thicker wires composed of many parallel strands, adding just one twist can\nincrease its thermal conductance by over 30%. However, we find that unlike\nstretching a polymer wire, which causes a monotonic increase in thermal\nconductance, the effect of twist is highly non-monotonic, and certain amounts\nof twist can actually decrease the thermal conductance. Finally, we apply the\nContinuous Chirality Measure (CCM) in an attempt to explore the correlation\nbetween heat conductance and chirality. The CCM is found to correlate with\ntwist as expected, but we attribute the observed heat transport behaviors to\nstructural factors other than chirality.", "category": "physics_comp-ph" }, { "text": "Slow nonisothermal flows: numerical and asymptotic analysis of the\n Boltzmann equation: Slow flows of a slightly rarefied gas under high thermal stresses are\nconsidered. 
The correct fluid-dynamic description of this class of flows is\nbased on the Kogan--Galkin--Friedlander equations, containing some\nnon-Navier--Stokes terms in the momentum equation. Appropriate boundary\nconditions are determined from the asymptotic analysis of the Knudsen layer on\nthe basis of the Boltzmann equation. Boundary conditions up to the second order\nof the Knudsen number are studied. Several two-dimensional examples are examined\nfor comparative analysis. The fluid-dynamic results are supported by\nnumerical solution of the Boltzmann equation obtained by Tcheremissine's\nprojection-interpolation discrete-velocity method extended for nonuniform\ngrids. The competition pattern between the first- and the second-order\nnonlinear thermal-stress flows has been obtained for the first time.", "category": "physics_comp-ph" }, { "text": "Physics-Informed Supervised Residual Learning for Electromagnetic\n Modeling: In this study, physics-informed supervised residual learning (PhiSRL) is\nproposed to enable an effective, robust, and general deep learning framework\nfor 2D electromagnetic (EM) modeling. Based on the mathematical connection\nbetween the fixed-point iteration method and the residual neural network\n(ResNet), PhiSRL aims to solve a system of linear matrix equations. It applies\nconvolutional neural networks (CNNs) to learn updates of the solution with\nrespect to the residuals. Inspired by the stationary and non-stationary\niterative scheme of the fixed-point iteration method, stationary and\nnon-stationary iterative physics-informed ResNets (SiPhiResNet and NiPhiResNet)\nare designed to solve the volume integral equation (VIE) of EM scattering. The\neffectiveness and universality of PhiSRL are validated by solving VIE of\nlossless and lossy scatterers with the mean squared errors (MSEs) converging to\n$\sim 10^{-4}$ (SiPhiResNet) and $\sim 10^{-7}$ (NiPhiResNet).
Numerical\nresults further verify the generalization ability of PhiSRL.", "category": "physics_comp-ph" }, { "text": "Performance of FORTRAN and C GPU Extensions for a Benchmark Suite of\n Fourier Pseudospectral Algorithms: A comparison of PGI OpenACC, FORTRAN CUDA, and Nvidia CUDA pseudospectral\nmethods on a single GPU and GCC FORTRAN on single and multiple CPU cores is\nreported. The GPU implementations use CuFFT and the CPU implementations use\nFFTW. Porting pre-existing FORTRAN codes to utilize a GPUs is efficient and\neasy to implement with OpenACC and CUDA FORTRAN. Example programs are provided.", "category": "physics_comp-ph" }, { "text": "Application of the Covariant projection finite elements in the E field\n formulation for wave guide analysis: The use of covariant projection finite elements in the efficient 3-D vector\nfinite element analysis of wave guide is presented.", "category": "physics_comp-ph" }, { "text": "Compact Graph Representation of crystal structures using Point-wise\n Distance Distributions: Use of graphs to represent crystal structures has become popular in recent\nyears as they provide a natural translation from atoms and bonds to nodes and\nedges. Graphs capture structure, while remaining invariant to the symmetries\nthat crystals display. Several works in property prediction, including those\nwith state-of-the-art results, make use of the Crystal Graph. 
The present work\noffers a graph based on Point-wise Distance Distributions which retains\nsymmetrical invariance, decreases computational load, and yields similar or\nbetter prediction accuracy on both experimental and simulated crystals.", "category": "physics_comp-ph" }, { "text": "Efficient simulations of Hartree--Fock equations by an accelerated\n gradient descent method: We develop convergence acceleration procedures that enable a gradient\ndescent-type iteration method to efficiently simulate Hartree--Fock equations\nfor atoms interacting both with each other and with an external potential. Our\ndevelopment focuses on three aspects: (i) optimization of a parameter in the\npreconditioning operator; (ii) adoption of a technique that eliminates the\nslowest-decaying mode to the case of many equations (describing many atoms);\nand (iii) a novel extension of the above technique that allows one to eliminate\nmultiple modes simultaneously. We illustrate performance of the numerical\nmethod for the 2D model of the first layer of helium atoms above a graphene\nsheet. We demonstrate that incorporation of aspects (i) and (ii) above into the\n``plain\" gradient descent method accelerates it by at least two orders of\nmagnitude, and often by much more. Aspect (iii) -- a multiple-mode elimination\n-- may bring further improvement to the convergence rate compared to aspect\n(ii), the single-mode elimination. Both single- and multiple-mode elimination\ntechniques are shown to significantly outperform the well-known Anderson\nAcceleration. 
We believe that our acceleration techniques can also be gainfully\nemployed by other numerical methods, especially those handling hard-core-type\ninteraction potentials.", "category": "physics_comp-ph" }, { "text": "Machine learning and density functional theory: Over the past decade machine learning has made significant advances in\napproximating density functionals, but whether this signals the end of\nhuman-designed functionals remains to be seen. Ryan Pederson, Bhupalee Kalita\nand Kieron Burke discuss the rise of machine learning for functional design.", "category": "physics_comp-ph" }, { "text": "Unbiased Reduced Density Matrices and Electronic Properties from Full\n Configuration Interaction Quantum Monte Carlo: Properties that are necessarily formulated within pure (symmetric)\nexpectation values are difficult to calculate for projector quantum Monte Carlo\napproaches, but are critical in order to compute many of the important\nobservable properties of electronic systems. Here, we investigate an approach\nfor the sampling of unbiased reduced density matrices within the Full\nConfiguration Interaction Quantum Monte Carlo dynamic, which requires only\nsmall computational overheads. This is achieved via an independent replica\npopulation of walkers in the dynamic, sampled alongside the original\npopulation. The resulting reduced density matrices are free from systematic\nerror (beyond those present via constraints on the dynamic itself), and can be\nused to compute a variety of expectation values and properties, with rapid\nconvergence to an exact limit. 
A quasi-variational energy estimate derived from\nthese density matrices is proposed as an accurate alternative to the projected\nestimator for multiconfigurational wavefunctions, while its variational\nproperty could potentially lend itself to accurate extrapolation approaches in\nlarger systems.", "category": "physics_comp-ph" }, { "text": "Numerical integration of quantum time evolution in a curved manifold: The numerical integration of the Schr\\\"odinger equation by discretization of\ntime is explored for the curved manifolds arising from finite representations\nbased on evolving basis states. In particular, the unitarity of the evolution\nis assessed, in the sense of the conservation of mutual scalar products in a\nset of evolving states, and with them the conservation of orthonormality and\nparticle number. Although the adequately represented equation is known to give\nrise to unitary evolution in spite of curvature, discretized integrators easily\nbreak that conservation, thereby deteriorating their stability. The Crank\nNicolson algorithm, which offers unitary evolution in Euclidian spaces\nindependent of time-step size $\\mathrm{d}t$, can be generalised to curved\nmanifolds in different ways. Here we compare a previously proposed algorithm\nthat is unitary by construction, albeit integrating the wrong equation, with a\nfaithful generalisation of the algorithm, which is, however, not strictly\nunitary for finite $\\mathrm{d}t$.", "category": "physics_comp-ph" }, { "text": "Instanton based importance sampling for rare events in stochastic PDEs: We present a new method for sampling rare and large fluctuations in a\nnon-equilibrium system governed by a stochastic partial differential equation\n(SPDE) with additive forcing. To this end, we deploy the so-called instanton\nformalism that corresponds to a saddle-point approximation of the action in the\npath integral formulation of the underlying SPDE. 
The crucial step in our\napproach is the formulation of an alternative SPDE that incorporates knowledge\nof the instanton solution such that we are able to constrain the dynamical\nevolutions around extreme flow configurations only. Finally, a reweighting\nprocedure based on the Girsanov theorem is applied to recover the full\ndistribution function of the original system. The entire procedure is\ndemonstrated on the example of the one-dimensional Burgers equation.\nFurthermore, we compare our method to conventional direct numerical simulations\nas well as to Hybrid Monte Carlo methods. It will be shown that the\ninstanton-based sampling method outperforms both approaches and allows for an\naccurate quantification of the whole probability density function of velocity\ngradients from the core to the very far tails.", "category": "physics_comp-ph" }, { "text": "H2ZIXY: Pauli spin matrix decomposition of real symmetric matrices: We present a code in Python3 which takes a square real symmetric matrix, of\narbitrary size, and decomposes it as a tensor product of Pauli spin matrices.\nThe application to the decomposition of a Hamiltonian of relevance to nuclear\nphysics for implementation on quantum computer is given.", "category": "physics_comp-ph" }, { "text": "GeantV: Results from the prototype of concurrent vector particle\n transport simulation in HEP: Full detector simulation was among the largest CPU consumer in all CERN\nexperiment software stacks for the first two runs of the Large Hadron Collider\n(LHC). In the early 2010's, the projections were that simulation demands would\nscale linearly with luminosity increase, compensated only partially by an\nincrease of computing resources. The extension of fast simulation approaches to\nmore use cases, covering a larger fraction of the simulation budget, is only\npart of the solution due to intrinsic precision limitations. 
The remainder\ncorresponds to speeding up the simulation software by several factors, which is\nout of reach using simple optimizations on the current code base. In this\ncontext, the GeantV R&D project was launched, aiming to redesign the legacy\nparticle transport codes in order to make them benefit from fine-grained\nparallelism features such as vectorization, but also from increased code and\ndata locality. This paper presents in detail the results and achievements of\nthis R&D, as well as the conclusions and lessons learnt from the beta\nprototype.", "category": "physics_comp-ph" }, { "text": "A dissipative particle dynamics model of biofilm growth: A dissipative particle dynamics (DPD) model for the quantitative simulation\nof biofilm growth controlled by substrate (nutrient) consumption, advective and\ndiffusive substrate transport, and hydrodynamic interactions with fluid flow\n(including fragmentation and reattachment) is described. The model was used to\nsimulate biomass growth, decay, and spreading. It predicts how the biofilm\nmorphology depends on flow conditions, biofilm growth kinetics, the\nrheomechanical properties of the biofilm, and adhesion to solid surfaces. The\nmorphology of the model biofilm depends strongly on its rigidity and the\nmagnitude of the body force that drives the fluid over the biofilm.", "category": "physics_comp-ph" }, { "text": "Lattice Boltzmann modeling of boiling heat transfer: The boiling curve\n and the effects of wettability: A hybrid thermal lattice Boltzmann (LB) model is presented to simulate\nthermal multiphase flows with phase change based on an improved pseudopotential\nLB approach [Q. Li, K. H. Luo, and X. J. Li, Phys. Rev. E 87, 053301 (2013)].\nThe present model does not suffer from the spurious term caused by the\nforcing-term effect, which was encountered in some previous thermal LB models\nfor liquid-vapor phase change. Using the model, the liquid-vapor boiling\nprocess is simulated. 
The boiling curve together with the three boiling stages\n(nucleate boiling, transition boiling, and film boiling) is numerically\nreproduced in the LB community for the first time. The numerical results show\nthat the basic features and the fundamental characteristics of boiling heat\ntransfer are well captured, such as the severe fluctuation of transient heat\nflux in the transition boiling and the feature that the maximum heat transfer\ncoefficient lies at a lower wall superheat than that of the maximum heat flux.\nFurthermore, the effects of the heating surface wettability on boiling heat\ntransfer are investigated. It is found that an increase in contact angle\npromotes the onset of boiling but reduces the critical heat flux, and makes the\nboiling process enter into the film boiling regime at a lower wall superheat,\nwhich is consistent with the findings from experimental studies.", "category": "physics_comp-ph" }, { "text": "An improved lattice Boltzmann D3Q19 method based on an alternative\n equilibrium discretization: Lattice Boltzmann simulations of three-dimensional, isothermal hydrodynamics\noften use either the D3Q19 or the D3Q27 velocity sets. While both models\ncorrectly approximate Navier-Stokes in the continuum limit, the D3Q19 model is\ncomputationally less expensive but has some known deficiencies regarding\nGalilean invariance, especially for high Reynolds number flows. In this work we\npresent a novel methodology to construct lattice Boltzmann equilibria for\nhydrodynamics directly from the continuous Maxwellian equilibrium. While our\nnew approach reproduces the well known LBM equilibrium for D2Q9 and D3Q27\nlattice models, it yields a different equilibrium formulation for the D3Q19\nstencil. 
This newly proposed formulation is shown to be more accurate than the\nwidely used second order equilibrium, while having the same computational cost.\nWe present a steady state Chapman-Enskog analysis of the standard and the\nimproved D3Q19 model and conduct numerical experiments that demonstrate the\nsuperior accuracy of our newly developed D3Q19 equilibrium.", "category": "physics_comp-ph" }, { "text": "Constructing high-order discontinuity-capturing schemes with\n linear-weight polynomials and boundary variation diminishing algorithm: In this study, a new framework for constructing very high order\ndiscontinuity-capturing schemes is proposed for the finite volume method. These\nschemes, so-called $\mathrm{P}_{n}\mathrm{T}_{m}-\mathrm{BVD}$ (polynomial of\n$n$-degree and THINC function of $m$-level reconstruction based on the BVD\nalgorithm), are designed by employing high-order linear-weight polynomials and\nTHINC (Tangent of Hyperbola for INterface Capturing) functions with adaptive\nsteepness as the reconstruction candidates. The final reconstruction function\nin each cell is determined with a multi-stage BVD (Boundary Variation\nDiminishing) algorithm so as to effectively control numerical oscillation and\ndissipation. We devise the new schemes up to eleventh order in an efficient way\nby directly increasing the order of the underlying upwind scheme using\nlinear-weight polynomials. The analysis of the spectral property and accuracy\ntests show that the new reconstruction strategy well preserves the\nlow-dissipation property of the underlying upwind schemes with high-order\nlinear-weight polynomials for smooth solutions over all wave numbers and\nachieves an $(n+1)$-order convergence rate. The performance of the new schemes is\nexamined through widely used benchmark tests, which demonstrate that the\nproposed schemes are capable of simultaneously resolving small-scale flow\nfeatures with high resolution and capturing discontinuities with low\ndissipation. 
With outperforming results and simplicity in algorithm, the new\nreconstruction strategy shows great potential as an alternative numerical\nframework for computing nonlinear hyperbolic conservation laws that have\ndiscontinuous and smooth solutions of different scales.", "category": "physics_comp-ph" }, { "text": "The Wigner branching random walk: Efficient implementation and\n performance evaluation: To implement the Wigner branching random walk, the particle carrying a signed\nweight, either $-1$ or $+1$, is more friendly to data storage and arithmetic\nmanipulations than that taking a real-valued weight continuously from $-1$ to\n$+1$. The former is called a signed particle and the latter a weighted\nparticle. In this paper, we propose two efficient strategies to realize the\nsigned-particle implementation. One is to interpret the multiplicative\nfunctional as the probability to generate pairs of particles instead of the\nincremental weight, and the other is to utilize a bootstrap filter to adjust\nthe skewness of particle weights. Performance evaluations on the Gaussian\nbarrier scattering (2D) and a Helium-like system (4D) demonstrate the\nfeasibility of both strategies and the variance reduction property of the\nsecond approach. We provide an improvement of the first signed-particle\nimplementation that partially alleviates the restriction on the time step and\nperform a thorough theoretical and numerical comparison among all the existing\nsigned-particle implementations. 
Details on implementing the importance\nsampling according to the quasi-probability density and an efficient resampling\nor particle reduction are also provided.", "category": "physics_comp-ph" }, { "text": "Geometry of triple junctions during grain boundary premelting: Grain Boundaries (GB) whose energy is larger than twice the energy of the\nsolid/liquid interface exhibit the premelting phenomenon, for which an\natomically thin liquid layer develops at temperatures slightly below the bulk\nmelting temperature. Premelting can have a severe impact on the structural\nintegrity of a polycrystalline material and on the mechanical high temperature\nproperties, also in the context of crack formation during the very last stages\nof solidification. The triple junction between a dry GB and the two\nsolid/liquid interfaces of a liquid layer propagating along the GB cannot be\ndefined from macroscopic continuum properties and surface tension equilibria in\nterms of Young's law. We show how incorporating atomistic scale physics using a\ndisjoining potential regularizes the state of the triple junction and yields an\nequilibrium with a well-defined microscopic contact angle. We support this\nfinding by dynamical simulations using a multi-phase field model with obstacle\npotential for both purely kinetic and diffusive conditions. Generally, our\nresults should provide insights on the dynamics of GB phase transitions, of\nwhich the complex phenomena associated with liquid metal embrittlement are an\nexample.", "category": "physics_comp-ph" }, { "text": "On the weak scaling of the contact distance between two fluctuating\n interfaces with system size: A pair of flat parallel surfaces, each freely diffusing along the direction\nof their separation, will eventually come into contact. If the shapes of these\nsurfaces also fluctuate, then contact will occur when their centers of mass\nremain separated by a nonzero distance $\\ell$. 
Here we examine the statistics\nof $\\ell$ at the time of first contact for surfaces that evolve in time\naccording to the Edwards-Wilkinson equation. We present a general approach to\ncalculate its probability distribution and determine how its most likely value\n$\\ell^*$ depends on the surfaces' lateral size $L$. We are motivated by an\ninterest in the motion of interfaces between two phases at conditions of\nthermodynamic coexistence, and in particular the annihilation of domain wall\npairs under periodic boundary conditions. Computer simulations of this scenario\nverify the predicted scaling behavior in two and three dimensions. In the\nlatter case, slow growth where $\\ell^\\ast$ is an algebraic function of $\\log L$\nimplies that slab-shaped domains remain topologically intact until $\\ell$\nbecomes very small, contradicting expectations from equilibrium thermodynamics.", "category": "physics_comp-ph" }, { "text": "Random number generators for massively parallel simulations on GPU: High-performance streams of (pseudo) random numbers are crucial for the\nefficient implementation for countless stochastic algorithms, most importantly,\nMonte Carlo simulations and molecular dynamics simulations with stochastic\nthermostats. A number of implementations of random number generators has been\ndiscussed for GPU platforms before and some generators are even included in the\nCUDA supporting libraries. Nevertheless, not all of these generators are well\nsuited for highly parallel applications where each thread requires its own\ngenerator instance. For this specific situation encountered, for instance, in\nsimulations of lattice models, most of the high-quality generators with large\nstates such as Mersenne twister cannot be used efficiently without substantial\nchanges. 
We provide a broad review of existing CUDA variants of random-number\ngenerators and present the CUDA implementation of a new massively parallel\nhigh-quality, high-performance generator with a small memory load overhead.", "category": "physics_comp-ph" }, { "text": "Lattice Boltzmann Models for Micro-tomographic Pore-spaces: The lattice Boltzmann method (LBM) is a popular numerical framework to\ninvestigate single and multiphase flow through porous media. For estimation of\nabsolute permeability based on micro-tomographic images of the porous medium,\nthe single-relaxation time (SRT) collision model is the most widely used,\nalthough the multiple-relaxation-time (MRT) collision model also has recently\nacquired wider usage, especially for industrial applications. However, the SRT\ncollision model and a sub-optimal choice of the MRT collision parameters can\nboth lead to permeability predictions that depend on the relaxation time, \tau.\nThis parametric dependence is nonphysical for Stokes flow in porous media and\nalso leads to a much larger number of iterations required for convergence. In\nthis paper, we performed a systematic numerical evaluation of the different\nsets of relaxation parameters in the D3Q19-MRT model for modeling Stokes flow\nin 3-D microtomographic pore-spaces using the bounceback scheme. These sets of\nparameters are evaluated from the point of view of accuracy, convergence rate,\nand an ability to generate parameter-independent permeability solutions.\nInstead of tuning all six independent relaxation rates that are available in\nthe MRT model, the sets that were analyzed have relaxation rates that depend on\none or two independent parameters, namely \tau and \Lambda. We tested\nelementary porous media at different image resolutions and a random packing of\nspheres at relatively high resolution. 
We observe that sets of certain specific\nrelaxation parameters (Sets B, D, or E as listed in Table 2), and \tau in the\nrange \tau\in[1.0,1.3], can result in the best overall accuracy, convergence rate,\nand parameter-independent permeability predictions.", "category": "physics_comp-ph" }, { "text": "The effect of distributed time-delays on the synchronization of neuronal\n networks: Here we investigate the synchronization of networks of FitzHugh-Nagumo\nneurons coupled in scale-free, small-world and random topologies, in the\npresence of distributed time delays in the coupling of neurons. We explore how\nthe synchronization transition is affected when the time delays in the\ninteractions between pairs of interacting neurons are non-uniform. We find that\nthe presence of distributed time-delays does not change the behavior of the\nsynchronization transition significantly, vis-a-vis networks with constant\ntime-delay, where the value of the constant time-delay is the mean of the\ndistributed delays. We also notice that a normal distribution of delays gives\nrise to a transition at marginally lower coupling strengths, vis-a-vis\nuniformly distributed delays. These trends hold across classes of networks and\nfor varying standard deviations of the delay distribution, indicating the\ngenerality of these results. We therefore conclude that distributed delays, which may\nbe typically expected in real-world situations, do not have a notable effect on\nsynchronization. This allows results obtained with constant delays to remain\nrelevant even in the case of randomly distributed delays.", "category": "physics_comp-ph" }, { "text": "Model Reduction for Multi-Scale Transport Problems using Model-form\n Preserving Least-Squares Projections with Variable Transformation: A projection-based formulation is presented for non-linear model reduction of\nproblems with extreme scale disparity. 
The approach allows for the selection of\nan arbitrary, but complete, set of solution variables while preserving the\nstructure of the governing equations. Least-squares-based minimization is\nleveraged to guarantee symmetrization and discrete consistency with the\nfull-order model (FOM). Two levels of scaling are used to achieve the\nconditioning required to effectively handle problems with extremely disparate\nphysical phenomena, characterized by extreme stiffness in the system of\nequations. The formulation -- referred to as model-form preserving\nleast-squares with variable transformation (MP-LSVT) -- provides global\nstabilization for both implicit and explicit time integration schemes. To\nachieve computational efficiency, a pivoted QR decomposition is used with\noversampling, and adapted to the MP-LSVT method. The framework is demonstrated\nin representative two- and three-dimensional reacting flow problems, and the\nMP-LSVT is shown to exhibit improved stability and accuracy over standard\nprojection-based ROM techniques. Physical realizability and local stability are\npromoted by enforcing limiters in both temperature and species mass fractions.\nThese limiters are demonstrated to be important in eliminating regions of\nspurious burning, thus enabling the ROMs to provide accurate representations of\nthe heat release rate and flame propagation speed. In the 3D application, it is\nshown that more than two orders of magnitude acceleration in computational\nefficiency can be achieved, while also providing reasonable future-state\npredictions. A key contribution of this work is the development and\ndemonstration of a comprehensive ROM formulation that targets highly\nchallenging multi-scale transport-dominated problems.", "category": "physics_comp-ph" }, { "text": "Molecular geometric deep learning: Geometric deep learning (GDL) has demonstrated huge power and enormous\npotential in molecular data analysis. 
However, constructing highly efficient molecular\nrepresentations remains a great challenge. Currently, covalent-bond-based\nmolecular graphs are the de facto standard for representing molecular topology\nat the atomic level. Here we demonstrate, for the first time, that molecular\ngraphs constructed only from non-covalent bonds can achieve similar or even\nbetter results than covalent-bond-based models in molecular property\nprediction. This demonstrates the great potential of novel molecular\nrepresentations beyond the de facto standard of covalent-bond-based molecular\ngraphs. Based on this finding, we propose molecular geometric deep learning\n(Mol-GDL). The essential idea is to incorporate a more general molecular\nrepresentation into GDL models. In our Mol-GDL, molecular topology is modeled\nas a series of molecular graphs, each focusing on a different scale of atomic\ninteractions. In this way, both covalent interactions and non-covalent\ninteractions are incorporated into the molecular representation on an equal\nfooting. We systematically test Mol-GDL on fourteen commonly-used benchmark\ndatasets. The results show that our Mol-GDL can achieve better performance\nthan state-of-the-art (SOTA) methods. Source code and data are available at\nhttps://github.com/CS-BIO/Mol-GDL.", "category": "physics_comp-ph" }, { "text": "On the physical inadmissibility of ILES for simulations of Euler\n equation turbulence: We present two main results. The first is a plausible validation argument for\nthe principle of a maximal rate of entropy production for Euler equation\nturbulence. This principle can be seen as an extension of the second law of\nthermodynamics. In our second main result, we examine competing models for\nlarge eddy simulations of Euler equation (fully developed) turbulence. We\ncompare schemes with no subgrid modeling, implicit large eddy simulation (ILES)\nwith limited subgrid modeling and those using dynamic subgrid scale models. 
Our\nanalysis is based upon three fundamental physical principles: conservation of\nenergy, the maximum entropy production rate and the principle of universality\nfor multifractal clustering of intermittency. We draw the conclusion that the\nabsence of subgrid modeling, or its partial inclusion as in ILES solutions,\nviolates the maximum entropy dissipation rate admissibility criteria. We identify\ncircumstances in which the resulting errors have a minor effect on specific\nobservable quantities and situations where the effect is major.\n Application to numerical modeling of the deflagration to detonation\ntransition in type Ia supernova is discussed.", "category": "physics_comp-ph" }, { "text": "Kinetic modeling of multiphase flow based on simplified Enskog equation: A new kinetic model for multiphase flow was presented under the framework of\nthe discrete Boltzmann method (DBM). Significantly different from the previous\nDBM, a bottom-up approach was adopted in this model. The effects of molecular\nsize and repulsion potential were described by the Enskog collision model; the\nattraction potential was obtained through the mean-field approximation method.\nThe molecular interactions, which result in the non-ideal equation of state and\nsurface tension, were directly introduced as an external force term. Several\ntypical benchmark problems, including Couette flow, the two-phase coexistence\ncurve, the Laplace law, phase separation, and the collision of two droplets,\nwere simulated to verify the model. In particular, for two types of droplet\ncollisions, the strengths of two non-equilibrium effects, $\bar{D}_2^*$ and\n$\bar{D}_3^*$, defined through the second and third order non-conserved kinetic\nmoments of $(f - f^{eq})$, are comparatively investigated, where $f$\n($f^{eq}$) is the (equilibrium) distribution function. 
It is interesting to\nfind that during the collision process, $\bar{D}_2^*$ is always significantly\nlarger than $\bar{D}_3^*$, and that $\bar{D}_2^*$ can be used to identify the different\nstages of the collision process and to distinguish between different types of\ncollisions. The modeling method can be directly extended to a higher-order\nmodel for the case where the non-equilibrium effect is strong, and the linear\nconstitutive law of viscous stress is no longer valid.", "category": "physics_comp-ph" }, { "text": "A Concurrent Multiscale Micromorphic Molecular Dynamics. Part I.\n Theoretical Formulation: Based on a novel concept of multiplicative multiscale decomposition, we have\nderived a multiscale micromorphic molecular dynamics (MMMD) to extend the\n(Andersen)-Parrinello-Rahman molecular dynamics to the mesoscale and macroscale.\nThe multiscale micromorphic molecular dynamics is a concurrent three-scale\nparticle dynamics that couples a fine scale molecular dynamics, a mesoscale\nparticle dynamics of a micromorphic medium, and a coarse scale nonlocal particle\ndynamics of a nonlinear continuum. By choosing proper statistical closure\nconditions, we have shown that the original Andersen-Parrinello-Rahman\nmolecular dynamics can be rigorously formulated and justified from first\nprinciples, and that it is a special case of the proposed multiscale micromorphic\nmolecular dynamics. 
The discovered multiscale structure and the corresponding\nmultiscale dynamics reveal a seamless transition channel from the atomistic scale\nto the continuum scale and the intrinsic coupling relation among them, and the method can\nbe used to solve finite-size nanoscale science and engineering problems with\narbitrary boundary conditions.", "category": "physics_comp-ph" }, { "text": "A spectral scheme for Kohn-Sham density functional theory of clusters: Starting from the observation that one of the most successful methods for\nsolving the Kohn-Sham equations for periodic systems -- the plane-wave method\n-- is a spectral method based on eigenfunction expansion, we formulate a\nspectral method designed for solving the Kohn-Sham equations for clusters.\nThis allows for efficient calculation of the electronic structure of clusters\n(and molecules) with high accuracy and systematic convergence properties\nwithout the need for any artificial periodicity. The basis functions in this\nmethod form a complete orthonormal set and are expressible in terms of\nspherical harmonics and spherical Bessel functions. Computation of the occupied\neigenstates of the discretized Kohn-Sham Hamiltonian is carried out using a\ncombination of preconditioned block eigensolvers and Chebyshev polynomial\nfilter accelerated subspace iterations. Several algorithmic and computational\naspects of the method, including computation of the electrostatics terms and\nparallelization, are discussed. We have implemented these methods and algorithms\nin an efficient and reliable package called ClusterES (Cluster Electronic\nStructure). A variety of benchmark calculations employing local and non-local\npseudopotentials are carried out using our package and the results are compared\nto the literature. Convergence properties of the basis set are discussed\nthrough numerical examples. 
Computations involving large systems that contain\nthousands of electrons are demonstrated to highlight the efficacy of our\nmethodology. The use of our method to study clusters with arbitrary point group\nsymmetries is briefly discussed.", "category": "physics_comp-ph" }, { "text": "Fully implicit and accurate treatment of jump conditions for two-phase\n incompressible Navier-Stokes equation: We present a numerical method for the two-phase incompressible Navier-Stokes\nequation with jump discontinuities in the normal component of the stress tensor\nand in the material properties. Although the proposed method is only\nfirst-order accurate, it does capture discontinuities sharply, without neglecting or\nomitting any component of the jump condition. Discontinuities in velocity\ngradient and pressure are expressed using a linear combination of singular\nforce and tangential derivatives of velocities to handle jump conditions in a\nfully implicit manner. The linear system for the divergence of the stress\ntensor is constructed in the framework of the ghost fluid method, and the\nresulting saddle-point system is solved via an iterative procedure. Numerical\nresults support the inference that the proposed method converges in $L^\infty$\nnorms even when velocities and pressures are not smooth across the interface\nand can handle a large density ratio that is likely to appear in a real-world\nsimulation.", "category": "physics_comp-ph" }, { "text": "Numerical simulation of moving rigid body in rarefied gases: In this paper we present a numerical scheme to simulate a moving rigid body\nwith arbitrary shape suspended in a rarefied gas. The rarefied gas is simulated\nby solving the Boltzmann equation using a DSMC particle method. The motion of\nthe rigid body is governed by the Newton-Euler equations, where the force and\nthe torque on the rigid body are computed from the momentum transfer of the gas\nmolecules colliding with the body. 
On the other hand, the motion of the rigid\nbody influences the gas flow in its surroundings. We validate the numerical\nresults by testing the Einstein relation for Brownian motion of the suspended\nparticle. The translational as well as the rotational degrees of freedom are\ntaken into account. It is shown that the numerically computed translational and\nrotational diffusion coefficients converge to the theoretical values.", "category": "physics_comp-ph" }, { "text": "Enhanced force-field calibration via machine learning: The influence of microscopic force fields on the motion of Brownian particles\nplays a fundamental role in a broad range of fields, including soft matter,\nbiophysics, and active matter. Often, the experimental calibration of these\nforce fields relies on the analysis of the trajectories of these Brownian\nparticles. However, such an analysis is not always straightforward, especially\nif the underlying force fields are non-conservative or time-varying, driving\nthe system out of thermodynamic equilibrium. Here, we introduce a toolbox to\ncalibrate microscopic force fields by analyzing the trajectories of a Brownian\nparticle using machine learning, namely recurrent neural networks. We\ndemonstrate that this machine-learning approach outperforms standard methods\nwhen characterizing the force fields generated by harmonic potentials if the\navailable data are limited. More importantly, it provides a tool to calibrate\nforce fields in situations for which there are no standard methods, such as\nnon-conservative and time-varying force fields. 
In order to make this method\nreadily available for other users, we provide a Python software package named\nDeepCalib, which can be easily personalized and optimized for specific\napplications.", "category": "physics_comp-ph" }, { "text": "Coarse-Graining Hamiltonian Systems Using WSINDy: The Weak-form Sparse Identification of Nonlinear Dynamics algorithm (WSINDy)\nhas been demonstrated to offer coarse-graining capabilities in the context of\ninteracting particle systems (https://doi.org/10.1016/j.physd.2022.133406). In\nthis work we extend this capability to the problem of coarse-graining\nHamiltonian dynamics which possess approximate symmetries associated with\ntimescale separation. Such approximate symmetries often lead to the existence\nof a Hamiltonian system of reduced dimension that may be used to efficiently\ncapture the dynamics of the symmetry-invariant dependent variables. Deriving\nsuch reduced systems, or approximating them numerically, is an ongoing\nchallenge. We demonstrate that WSINDy can successfully identify this reduced\nHamiltonian system in the presence of large intrinsic perturbations while\nremaining robust to extrinsic noise. This is significant in part due to the\nnontrivial means by which such systems are derived analytically. WSINDy also\nnaturally preserves the Hamiltonian structure by restricting to a trial basis\nof Hamiltonian vector fields. The methodology is computationally efficient, often\nrequiring only a single trajectory to learn the global reduced Hamiltonian, and\navoiding forward solves in the learning process. Using nearly-periodic\nHamiltonian systems as a prototypical class of systems with approximate\nsymmetries, we show that WSINDy robustly identifies the correct leading-order\nsystem, with dimension reduced by at least two, upon observation of the\nrelevant degrees of freedom. 
We also provide a contribution to averaging theory\nby proving that first-order averaging at the level of vector fields preserves\nHamiltonian structure in nearly-periodic Hamiltonian systems. We provide\nphysically relevant examples, namely coupled oscillator dynamics, the\nH\\'enon-Heiles system for stellar motion within a galaxy, and the dynamics of\ncharged particles.", "category": "physics_comp-ph" }, { "text": "The Materials Simulation Toolkit for Machine Learning (MAST-ML): an\n automated open source toolkit to accelerate data-driven materials research: As data science and machine learning methods are taking on an increasingly\nimportant role in the materials research community, there is a need for the\ndevelopment of machine learning software tools that are easy to use (even for\nnonexperts with no programming ability), provide flexible access to the most\nimportant algorithms, and codify best practices of machine learning model\ndevelopment and evaluation. Here, we introduce the Materials Simulation Toolkit\nfor Machine Learning (MAST-ML), an open source Python-based software package\ndesigned to broaden and accelerate the use of machine learning in materials\nscience research. MAST-ML provides predefined routines for many input setup,\nmodel fitting, and post-analysis tasks, as well as a simple structure for\nexecuting a multi-step machine learning model workflow. In this paper, we\ndescribe how MAST-ML is used to streamline and accelerate the execution of\nmachine learning problems. We walk through how to acquire and run MAST-ML,\ndemonstrate how to execute different components of a supervised machine\nlearning workflow via a customized input file, and showcase a number of\nfeatures and analyses conducted automatically during a MAST-ML run. 
Further, we\ndemonstrate the utility of MAST-ML by showcasing examples of recent materials\ninformatics studies which used MAST-ML to formulate and evaluate various\nmachine learning models for an array of materials applications. Finally, we lay\nout a vision of how MAST-ML, together with complementary software packages and\nemerging cyberinfrastructure, can advance the rapidly growing field of\nmaterials informatics, with a focus on producing machine learning models\neasily, reproducibly, and in a manner that facilitates model evolution and\nimprovement in the future.", "category": "physics_comp-ph" }, { "text": "Swift $GW$ beyond $10,000$ electrons using fractured stochastic orbitals: We introduce the concept of fractured stochastic orbitals (FSOs), short\nvectors that sample a small number of space points and enable an efficient\nstochastic sampling of any general function. As a first demonstration, FSOs are\napplied in conjunction with simple direct-projection to accelerate our recent\nstochastic $GW$ technique; the new developments enable accurate prediction of\n$G_{0}W_{0}$ quasiparticle energies and gaps for systems with up to\n$N_{e}>10,000$ electrons, with small statistical errors of $\\pm0.05\\,{\\rm eV}$\nand using less than 2000 core CPU hours. Overall, stochastic $GW$ scales now\nlinearly (and often sub-linearly) with $N_{e}.$", "category": "physics_comp-ph" }, { "text": "A pseudospectral matrix method for time-dependent tensor fields on a\n spherical shell: We construct a pseudospectral method for the solution of time-dependent,\nnon-linear partial differential equations on a three-dimensional spherical\nshell. The problem we address is the treatment of tensor fields on the sphere.\nAs a test case we consider the evolution of a single black hole in numerical\ngeneral relativity. A natural strategy would be the expansion in tensor\nspherical harmonics in spherical coordinates. 
Instead, we consider the simpler\nand potentially more efficient possibility of a double Fourier expansion on the\nsphere for tensors in Cartesian coordinates. As usual for the double Fourier\nmethod, we employ a filter to address time-step limitations and certain\nstability issues. We find that a tensor filter based on spin-weighted spherical\nharmonics is successful, while two simplified, non-spin-weighted filters do not\nlead to stable evolutions. The derivatives and the filter are implemented by\nmatrix multiplication for efficiency. A key technical point is the construction\nof a matrix multiplication method for the spin-weighted spherical harmonic\nfilter. As an example of the efficient parallelization of the double Fourier,\nspin-weighted filter method we discuss an implementation on a GPU, which\nachieves a speed-up of up to a factor of 20 compared to a single core CPU\nimplementation.", "category": "physics_comp-ph" }, { "text": "A note on the general multi-moment constrained flux reconstruction\n formulation for high order schemes: This paper presents a general formulation to construct high order numerical\nschemes by using multi-moment constraint conditions on the flux function\nreconstruction. The new formulation, the so-called multi-moment constrained flux\nreconstruction (MMC-FR), distinguishes itself essentially from the flux\nreconstruction formulation (FR) of Huynh (2007) by imposing not only the\ncontinuity constraint conditions on the flux function at the cell boundary, but\nalso other types of constraints, which may include those on the spatial derivatives\nor the point values. This formulation can also be interpreted as a blend of\nLagrange interpolation and Hermite interpolation, which provides a numerical\nframework to accommodate a wider spectrum of high order schemes.
Some\nrepresentative schemes will be presented and evaluated through Fourier analysis\nand numerical tests.", "category": "physics_comp-ph" }, { "text": "Density functional perturbation theory within non-collinear magnetism: We extend the density functional perturbation theory formalism to the case of\nnon-collinear magnetism. The main problem comes with the exchange-correlation\n(XC) potential derivatives, which are the only ones that are affected by the\nnon-collinearity of the system. Most of the present XC functionals are\nconstructed at the collinear level, such that the off-diagonal (containing\nmagnetization densities along $x$ and $y$ directions) derivatives cannot be\ncalculated simply in the non-collinear framework. To solve this problem, we\nconsider here possibilities to transform the non-collinear XC derivatives to a\nlocal collinear basis, where the $z$ axis is aligned with the local\nmagnetization at each point. The two methods we explore are i) expanding the\nspin rotation matrix as a Taylor series, ii) evaluating explicitly the XC for\nthe local density approximation through an analytical expression of the\nexpansion terms. We compare the two methods and describe their practical\nimplementation. We show their application for atomic displacement and electric\nfield perturbations at the second order, within the norm-conserving\npseudopotential methods.", "category": "physics_comp-ph" }, { "text": "An adaptive grid refinement strategy for the simulation of negative\n streamers: The evolution of negative streamers during electric breakdown of a\nnon-attaching gas can be described by a two-fluid model for electrons and\npositive ions. It consists of continuity equations for the charged particles\nincluding drift, diffusion and reaction in the local electric field, coupled to\nthe Poisson equation for the electric potential. The model generates field\nenhancement and steep propagating ionization fronts at the tip of growing\nionized filaments. 
An adaptive grid refinement method for the simulation of\nthese structures is presented. It uses finite volume spatial discretizations\nand explicit time stepping, which allows the decoupling of the grids for the\ncontinuity equations from those for the Poisson equation. Standard refinement\nmethods in which the refinement criterion is based on local error monitors fail\ndue to the pulled character of the streamer front that propagates into a\nlinearly unstable state. We present a refinement method which deals with all\nthese features. Tests on one-dimensional streamer fronts as well as on\nthree-dimensional streamers with cylindrical symmetry (hence effectively 2D for\nnumerical purposes) are carried out successfully. Results on fine grids are\npresented; they show that such an adaptive grid method is needed to capture the\nstreamer characteristics well. This refinement strategy enables us to\nadequately compute negative streamers in pure gases in the parameter regime\nwhere a physical instability appears: branching streamers.", "category": "physics_comp-ph" }, { "text": "A generalized nonlinear Schr\u00f6dinger Python module implementing\n different models of input pulse quantum noise: We provide Python tools enabling numerical simulation and analysis of the\npropagation dynamics of ultrashort laser pulses in nonlinear waveguides. The\nmodeling approach is based on the widely used generalized nonlinear\nSchr\\"odinger equation for the pulse envelope. The presented software\nimplements the effects of linear dispersion, pulse self-steepening, and the\nRaman effect. The focus lies on the implementation of input pulse shot noise,\ni.e. classical background fields that mimic quantum noise, which are often not\nthoroughly presented in the scientific literature. We discuss and implement\ncommonly adopted quantum noise models based on pure spectral phase noise, as\nwell as Gaussian noise. Coherence properties of the resulting spectra can be\ncalculated.
We demonstrate the functionality of the software by reproducing\nresults for a supercontinuum generation process in a photonic crystal fiber,\ndocumented in the scientific literature. The presented Python tools are\nopen-source and released under the MIT license in a publicly available software\nrepository.", "category": "physics_comp-ph" }, { "text": "CPMD/GULP QM/MM Interface for Modeling Periodic Solids: Implementation\n and its Application in the Study of Y-Zeolite Supported Rh$_n$ Clusters: We report here the development of a hybrid quantum mechanics/molecular\nmechanics (QM/MM) interface between the plane-wave density functional theory\nbased CPMD code and the empirical force-field based GULP code for modeling\nperiodic solids and surfaces. The hybrid QM/MM interface is based on the\nelectrostatic coupling between QM and MM regions. The interface is designed for\ncarrying out full relaxation of all the QM and MM atoms during geometry\noptimizations and molecular dynamics simulations, including the boundary atoms.\nBoth Born-Oppenheimer and Car-Parrinello molecular dynamics schemes are enabled\nfor the QM part during the QM/MM calculations. This interface has the advantage\nof parallelization of both the programs such that the QM and MM force\nevaluations can be carried out in parallel in order to model large systems. The\ninterface program is first validated for total energy conservation, and its parallel\nscaling performance is benchmarked. An oxygen vacancy in {\\alpha}-cristobalite is\nthen studied in detail and the results are compared with a fully QM calculation\nand experimental data.
Subsequently, we use our implementation to investigate\nthe structure of rhodium clusters (Rh$_n$; $n$=2 to 6) formed from the\nRh(C$_2$H$_4$)$_2$ complex adsorbed within a cavity of Y-zeolite in a reducing\natmosphere of H$_2$ gas.", "category": "physics_comp-ph" }, { "text": "Phase-amplitude functional theory -- new ab initio calculation method\n for large size systems: A new method for ab initio calculations of the properties of large size systems\nbased on a phase-amplitude functional is presented. It is shown that the Schr\\"odinger\nequation for many-electron complex systems, including large molecules or\nclusters as well as periodic systems, can be translated into a functional of two\nvariables attributed to the many-electron wavefunction: the phase and the amplitude\n(i.e. the square root of the total electron density). The equations for the phase and\nthe amplitude are derived. The kinetic and Coulomb interaction energies are\nexpressed as functions of these variables. The equations for the one-electron\nwavefunctions, necessary for the energy spectrum, are derived using these two\nvariables.", "category": "physics_comp-ph" }, { "text": "Simplified-DPN treatment of the neutron transport equation: In this paper the simplified double-spherical harmonics (SDPN)\napproximation of the neutron transport equation is proposed. The SDPN equations are derived\nfrom the multi-group DPN equations for N=1,2,3 (comparable to the SP3, SP5, and\nSP7 equations, respectively), and are converted into the form of second order\nmulti-group diffusion equations. The finite element method with the variational\napproach is then used to numerically solve these equations. The computational\nperformance of the SDPN method is compared with the SPN on several fixed-source\nand criticality test problems. The results show that the SDPN formulation\ngenerally yields parameters like the criticality eigenvalue, disadvantage\nfactors, absorption rate, etc.
more accurately than the SPN, even up to an\norder of magnitude more precise, while the computational effort is the same for\nboth methods.", "category": "physics_comp-ph" }, { "text": "A Generative Model for Extrapolation Prediction in Materials Informatics: We report a deep generative model for regression tasks in materials\ninformatics. The model is introduced as a component of a data imputer, and\npredicts more than 20 diverse experimental properties of organic molecules. The\nimputer is designed to predict material properties by \"imagining\" the missing\ndata in the database, enabling the use of incomplete material data. Even\nremoving 60% of the data does not diminish the prediction accuracy in a model\ntask. Moreover, the model excels at extrapolation prediction, where target\nvalues of the test data are out of the range of the training data. Such\nextrapolation has been regarded as an essential technique for exploring novel\nmaterials, but has hardly been studied to date due to its difficulty. We\ndemonstrate that the prediction performance can be improved by >30% by using\nthe imputer compared with traditional linear regression and boosting models.\nThe benefit becomes especially pronounced with few records for an experimental\nproperty (< 100 cases) when prediction would be difficult by conventional\nmethods. The presented approach can be used to more efficiently explore\nfunctional materials and break through previous performance limits.", "category": "physics_comp-ph" }, { "text": "A physics-informed operator regression framework for extracting\n data-driven continuum models: The application of deep learning toward discovery of data-driven models\nrequires careful application of inductive biases to obtain a description of\nphysics which is both accurate and robust. We present here a framework for\ndiscovering continuum models from high fidelity molecular simulation data. 
Our\napproach applies a neural network parameterization of governing physics in\nmodal space, allowing a characterization of differential operators while\nproviding structure which may be used to impose biases related to symmetry,\nisotropy, and conservation form. We demonstrate the effectiveness of our\nframework for a variety of physics, including local and nonlocal diffusion\nprocesses and single and multiphase flows. For the flow physics we demonstrate\nthis approach leads to a learned operator that generalizes to system\ncharacteristics not included in the training sets, such as variable particle\nsizes, densities, and concentration.", "category": "physics_comp-ph" }, { "text": "Tuning symplectic integrators is easy and worthwhile: Many applications in computational physics that use numerical integrators\nbased on splitting and composition can benefit from the development of\noptimized algorithms and from choosing the best ordering of terms. The cost in\nprogramming and execution time is minimal, while the performance improvements\ncan be large.", "category": "physics_comp-ph" }, { "text": "OptFROG - Analytic signal spectrograms with optimized time-frequency\n resolution: A Python package for the calculation of spectrograms with optimized time and\nfrequency resolution for application in the analysis of numerical simulations\non ultrashort pulse propagation is presented. Gabor's uncertainty principle\nprevents both resolutions from being optimal simultaneously for a given window\nfunction employed in the underlying short-time Fourier analysis. 
Our aim is to\nyield a time-frequency representation of the input signal with marginals that\nrepresent the original intensities per unit time and frequency similarly well.\nAs a use-case we demonstrate the implemented functionality for the analysis of\nsimulations on ultrashort pulse propagation in a nonlinear waveguide.", "category": "physics_comp-ph" }, { "text": "Insights into one-body density matrices using deep learning: The one-body reduced density matrix (1-RDM) of a many-body system at zero\ntemperature gives direct access to many observables, such as the charge\ndensity, kinetic energy and occupation numbers. It would be desirable to\nexpress it as a simple functional of the density or of other local observables,\nbut to date satisfactory approximations have not yet been found. Deep learning\nis the state-of-the-art approach to perform high dimensional regressions and\nclassification tasks, and is becoming widely used in the condensed matter\ncommunity to develop increasingly accurate density functionals. Autoencoders\nare deep learning models that perform efficient dimensionality reduction,\nallowing the distillation of data to its fundamental features needed to\nrepresent it. By training autoencoders on a large data-set of 1-RDMs from\nexactly solvable real-space model systems, and performing principal component\nanalysis, the machine learns to what extent the data can be compressed and\nhence how it is constrained. We gain insight into these machine learned\nconstraints and employ them to inform approximations to the 1-RDM as a\nfunctional of the charge density. We exploit known physical properties of the\n1-RDM in the simplest possible cases to perform feature engineering, where we\ninform the structure of the models from known mathematical relations, allowing\nus to integrate existing understanding into the machine learning methods.
By\ncomparing various deep learning approaches we gain insight into what physical\nfeatures of the density matrix are most amenable to machine learning, utilising\nboth known and learned characteristics.", "category": "physics_comp-ph" }, { "text": "Hybrid FFT algorithm for fast demagnetization field calculations on\n non-equidistant magnetic layers: In micromagnetic simulations, the demagnetization field is by far the\ncomputationally most expensive field component and often a limiting factor in\nlarge multilayer systems. We present an exact method to calculate the\ndemagnetization field of magnetic layers with arbitrary thicknesses. In this\napproach we combine the widely used fast-Fourier-transform based circular\nconvolution method with an explicit convolution using a generalized form of the\nNewell formulas. We implement the method both for central processors and\ngraphics processors and find that significant speedups for irregular multilayer\ngeometries can be achieved. Using this method we optimize the geometry of a\nmagnetic random-access memory cell by varying a single specific layer thickness\nand simulate a hysteresis curve to determine the resulting switching field.", "category": "physics_comp-ph" }, { "text": "Fast Uncertainty Estimates in Deep Learning Interatomic Potentials: Deep learning has emerged as a promising paradigm to give access to highly\naccurate predictions of molecular and materials properties. A common\nshortcoming shared by current approaches, however, is that neural networks\nonly give point estimates of their predictions and do not come with predictive\nuncertainties associated with these estimates. Existing uncertainty\nquantification efforts have primarily leveraged the standard deviation of\npredictions across an ensemble of independently trained neural networks. This\nincurs a large computational overhead in both training and prediction that\noften results in order-of-magnitude more expensive predictions.
Here, we\npropose a method to estimate the predictive uncertainty based on a single\nneural network without the need for an ensemble. This allows us to obtain\nuncertainty estimates with virtually no additional computational overhead over\nstandard training and inference. We demonstrate that the quality of the\nuncertainty estimates matches that obtained from deep ensembles. We further\nexamine the uncertainty estimates of our method and deep ensembles across the\nconfiguration space of our test system and compare the uncertainties to the\npotential energy surface. Finally, we study the efficacy of the method in an\nactive learning setting and find the results to match an ensemble-based\nstrategy at order-of-magnitude reduced computational cost.", "category": "physics_comp-ph" }, { "text": "Exterior complex scaling as a perfect absorber in time-dependent\n problems: It is shown that exterior complex scaling provides for complete absorption of\noutgoing flux in numerical solutions of the time-dependent Schr\\"odinger\nequation with strong infrared fields. This is demonstrated by computing high\nharmonic spectra and wave-function overlaps with the exact solution for a\none-dimensional model system and by three-dimensional calculations for the H\natom and a Ne atom model. We lay out the key ingredients for correct\nimplementation and identify criteria for efficient discretization.", "category": "physics_comp-ph" }, { "text": "A Hybrid Monte Carlo algorithm for sampling rare events in space-time\n histories of stochastic fields: We introduce a variant of the Hybrid Monte Carlo (HMC) algorithm to address\nlarge-deviation statistics in stochastic hydrodynamics. Based on the\npath-integral approach to stochastic (partial) differential equations, our HMC\nalgorithm samples space-time histories of the dynamical degrees of freedom\nunder the influence of random noise.
First, we validate and benchmark the HMC\nalgorithm by reproducing multiscale properties of the one-dimensional Burgers\nequation driven by Gaussian and white-in-time noise. Second, we show how to\nimplement an importance sampling protocol to significantly enhance, by orders\nof magnitude, the probability of sampling extreme and rare events, making it\npossible to estimate moments of field variables of extremely high order (up to\n30 and more). By employing reweighting techniques, we map the biased\nconfigurations back to the original probability measure in order to probe their\nstatistical importance. Finally, we show that by biasing the system towards\nvery intense negative gradients, the HMC algorithm is able to explore the\nstatistical fluctuations around instanton configurations. Our results will also\nbe interesting and relevant in lattice gauge theory since they provide insight\ninto reweighting techniques.", "category": "physics_comp-ph" }, { "text": "Dynamic relaxation of topological defect at Kosterlitz-Thouless phase\n transition: With Monte Carlo methods we study the dynamic relaxation of a vortex state at\nthe Kosterlitz-Thouless phase transition of the two-dimensional XY model. A\nlocal pseudo-magnetization is introduced to characterize the symmetric\nstructure of the dynamic systems. The dynamic scaling behavior of the\npseudo-magnetization and Binder cumulant is carefully analyzed, and the\ncritical exponents are determined. To illustrate the dynamic effect of the\ntopological defect, a similar analysis for the dynamic relaxation with a\nspin-wave initial state is also performed for comparison. We demonstrate that a\nlimited amount of quenched disorder in the core of the vortex state may alter\nthe dynamic universality class.
Further, theoretical calculations based on the\nlong-wave approximation are presented.", "category": "physics_comp-ph" }, { "text": "Evaluation of ensemble methods for quantifying uncertainties in\n steady-state CFD applications with small ensemble sizes: Bayesian uncertainty quantification (UQ) is of interest to industry and\nacademia as it provides a framework for quantifying and reducing the\nuncertainty in computational models by incorporating available data. For\nsystems with very high computational costs, for instance, the computational\nfluid dynamics (CFD) problem, the conventional, exact Bayesian approach such as\nMarkov chain Monte Carlo is intractable. To this end, the ensemble-based\nBayesian methods have been used for CFD applications. However, their\napplicability for UQ has not been fully analyzed and understood thus far. Here,\nwe evaluate the performance of three widely used iterative ensemble-based data\nassimilation methods, namely ensemble Kalman filter, ensemble randomized\nmaximum likelihood method, and ensemble Kalman filter with multiple data\nassimilation for UQ problems. We present the derivations of the three ensemble\nmethods from an optimization viewpoint. Further, a scalar case is used to\ndemonstrate the performance of the three different approaches with emphasis on\nthe effects of small ensemble sizes. Finally, we assess the three ensemble\nmethods for quantifying uncertainties in steady-state CFD problems involving\nturbulent mean flows. Specifically, the Reynolds averaged Navier--Stokes (RANS)\nequation is considered the forward model, and the uncertainties in the\npropagated velocity are quantified and reduced by incorporating observation\ndata. The results show that the ensemble methods cannot accurately capture the\ntrue posterior distribution, but they can provide a good estimation of the\nuncertainties even when very limited ensemble sizes are used. 
Based on the\noverall performance and efficiency from the comparison, the ensemble randomized\nmaximum likelihood method is identified as the best choice of approximate\nBayesian UQ approach among the three ensemble methods evaluated here.", "category": "physics_comp-ph" }, { "text": "Explicit Integration with GPU Acceleration for Large Kinetic Networks: We demonstrate the first implementation of recently-developed fast explicit\nkinetic integration algorithms on modern graphics processing unit (GPU)\naccelerators. Taking as a generic test case a Type Ia supernova explosion with\nan extremely stiff thermonuclear network having 150 isotopic species and 1604\nreactions coupled to hydrodynamics using operator splitting, we demonstrate the\ncapability to solve of order 100 realistic kinetic networks in parallel in the\nsame time that standard implicit methods can solve a single such network on a\nCPU. This orders-of-magnitude decrease in compute time for solving systems of\nrealistic kinetic networks implies that important coupled, multiphysics\nproblems in various scientific and technical fields that were intractable, or\ncould be simulated only with highly schematic kinetic networks, are now\ncomputationally feasible.", "category": "physics_comp-ph" }, { "text": "Bakry-\u00c9mery-Ricci curvature: An alternative network geometry measure\n in the expanding toolbox of graph Ricci curvatures: The characterization of complex networks with tools originating in geometry,\nfor instance through the statistics of so-called Ricci curvatures, is a well\nestablished tool of network science. There exist various types of such Ricci\ncurvatures, capturing different aspects of network geometry. In the present\nwork, we investigate Bakry-\\'Emery-Ricci curvature, a notion of discrete Ricci\ncurvature that has been much studied in geometry, but so far has not been\napplied to networks.
We explore, on standard classes of artificial networks as\nwell as on selected empirical ones, to what extent the statistics of that curvature are\nsimilar to or different from those of other curvatures, how it is correlated with\nother important network measures, and what it tells us about the underlying\nnetwork. We observe that most vertices typically have negative curvature.\nRandom and small-world networks exhibit a narrow curvature distribution whereas\nother classes and most of the real-world networks possess a wide curvature\ndistribution. When we compare Bakry-\\'Emery-Ricci curvature with two other\ndiscrete notions of Ricci curvature, Forman-Ricci and Ollivier-Ricci curvature,\nfor both model and real-world networks, we observe a high positive correlation\nbetween Bakry-\\'Emery-Ricci and both Forman-Ricci and Ollivier-Ricci curvature,\nand in particular with the augmented version of Forman-Ricci curvature.\nBakry-\\'Emery-Ricci curvature also exhibits a high negative correlation with\nthe vertex centrality measure and degree for most of the model and real-world\nnetworks. However, it does not correlate with the clustering coefficient. Also,\nwe investigate the importance of vertices with highly negative curvature values\nto maintain communication in the network. The computational time for\nBakry-\\'Emery-Ricci curvature is shorter than that required for Ollivier-Ricci\ncurvature but higher than for augmented Forman-Ricci curvature.", "category": "physics_comp-ph" }, { "text": "AlfaMC: a fast alpha particle transport Monte Carlo code: AlfaMC is a Monte Carlo simulation code for the transport of alpha particles.\nThe code is based on the Continuous Slowing Down Approximation and uses the\nNIST/ASTAR stopping-power database. The code uses a powerful geometrical\npackage allowing the coding of complex geometries. A flexible histogramming\npackage is used which greatly eases the scoring of results.
The code is\ntailored for microdosimetric applications where speed is a key factor.\nComparison with the SRIM code is made for transmitted energy in thin layers and\nrange for air, mylar, aluminum and gold. The general agreement between the two\ncodes is good for beam energies between 1 and 12 MeV. The code is open-source\nand released under the General Public Licence.", "category": "physics_comp-ph" }, { "text": "DECal, a Python tool for the efficiency calculation of thermal neutron\n detectors based on thin-film converters: The Detector Efficiency Calculator (DECal) is a series of Python functions\nand tools designed to analytically calculate, visualise and optimise the\ndetection efficiency of thermal neutron detectors, which are based on thin-film\nconverters. The implementation presented in this article concerns 10B-based\ndetectors in particular. The code can be run via a graphical user interface, as\nwell as via the command line. The source code is openly available to interested\nusers via a GitHub repository.", "category": "physics_comp-ph" }, { "text": "The Montecinos-Balsara ADER-FV Polynomial Basis: Convergence Properties\n & Extension to Non-Conservative Multidimensional Systems: Hyperbolic systems of PDEs can be solved to arbitrary orders of accuracy by\nusing the ADER Finite Volume method. These PDE systems may be non-conservative\nand non-homogeneous, and contain stiff source terms. ADER-FV requires a\nspatio-temporal polynomial reconstruction of the data in each spacetime cell,\nat each time step. This reconstruction is obtained as the root of a nonlinear\nsystem, resulting from the use of a Galerkin method. It was proved in Jackson\n[7] that for traditional choices of basis polynomials, the eigenvalues of\ncertain matrices appearing in these nonlinear systems are always 0, regardless\nof the number of spatial dimensions of the PDEs or the chosen order of accuracy\nof the ADER-FV method. 
This guarantees fast convergence to the Galerkin root\nfor certain classes of PDEs.\n In Montecinos and Balsara [9] a new, more efficient class of basis\npolynomials for the one-dimensional ADER-FV method was presented. This new\nclass of basis polynomials, originally presented for conservative systems, is\nextended to multidimensional, non-conservative systems here, and the\ncorresponding property regarding the eigenvalues of the Galerkin matrices is\nproved.", "category": "physics_comp-ph" }, { "text": "Multibody Multipole Methods: A three-body potential function can account for interactions among triples of\nparticles which are uncaptured by pairwise interaction functions such as\nCoulombic or Lennard-Jones potentials. Likewise, a multibody potential of order\n$n$ can account for interactions among $n$-tuples of particles uncaptured by\ninteraction functions of lower orders. To date, the computation of multibody\npotential functions for a large number of particles has not been possible due\nto its $O(N^n)$ scaling cost. In this paper we describe a fast tree-code for\nefficiently approximating multibody potentials that can be factorized as\nproducts of functions of pairwise distances. For the first time, we show how to\nderive a Barnes-Hut type algorithm for handling interactions among more than\ntwo particles. Our algorithm uses two approximation schemes: 1) a deterministic\nseries expansion-based method; 2) a Monte Carlo-based approximation based on\nthe central limit theorem. Our approach guarantees a user-specified bound on\nthe absolute or relative error in the computed potential with an asymptotic\nprobability guarantee. 
We provide speedup results on a three-body dispersion\npotential, the Axilrod-Teller potential.", "category": "physics_comp-ph" }, { "text": "Calculation of electron-ion temperature equilibration rates and friction\n coefficients in plasmas and liquid metals using quantum molecular dynamics: We discuss a method to calculate with quantum molecular dynamics simulations\nthe rate of energy exchanges between electrons and ions in two-temperature\nplasmas, liquid metals and hot solids. Promising results from this method were\nrecently reported for various materials and physical conditions [J. Simoni and\nJ. Daligault, Phys. Rev. Lett. 122, 205001 (2019)]. Like other ab-initio\ncalculations, the approach offers a very useful comparison with the\nexperimental measurements and permits an extension into conditions not covered\nby the experiments. The energy relaxation rate is related to the friction\ncoefficients felt by individual ions due to their non-adiabatic interactions\nwith electrons. Each coefficient satisfies a Kubo relation given by the time\nintegral of the autocorrelation function of the interaction force between an\nion and the electrons. These Kubo relations are evaluated using the output of\nquantum molecular dynamics calculations in which electrons are treated in the\nframework of finite-temperature density functional theory. The calculation\npresents difficulties that are unlike those encountered with the Kubo formulas\nfor the electrical and thermal conductivities. In particular, the widely used\nKubo-Greenwood approximation is inapplicable here. Indeed, the friction\ncoefficients and the energy relaxation rate diverge in this approximation since\nit does not properly account for the electronic screening of electron-ion\ninteractions. The inclusion of screening effects considerably complicates the\ncalculations. 
We discuss the physically-motivated approximations we applied to\ndeal with these complications in order to investigate the widest possible range of\nmaterials and physical conditions.", "category": "physics_comp-ph" }, { "text": "Deployment of High Energy Physics software with a standard method: The installation and maintenance of scientific software for research in\nexperimental, phenomenological, and theoretical High Energy Physics (HEP)\nrequires a considerable amount of time and expertise. While many tools are\navailable to make the task of installation and maintenance much easier, many of\nthese tools require maintenance on their own, have little documentation and\nvery few are used outside of the HEP community.\n For the installation and maintenance of the software, we rely on the well\ntested, extensively documented, and reliable stack of software management tools\nwith the RPM Package Manager (RPM) at its core. The precompiled HEP software\npackages can be deployed easily and without detailed Linux system knowledge and\nare kept up-to-date through the regular system update process. The precompiled\npackages were tested on multiple installations of openSUSE, RHEL clones, and\nFedora. As the RPM infrastructure is adopted by many Linux distributions, the\napproach can be used on more systems.\n In this contribution, we discuss our approach to software deployment in\ndetail, present the software repositories for multiple RPM-based Linux\ndistributions to a wider public, and call for collaboration from all\ninterested parties.", "category": "physics_comp-ph" }, { "text": "Fast and accurate multidimensional free energy integration: Enhanced sampling and free energy calculation algorithms of the Thermodynamic\nIntegration family (such as the Adaptive Biasing Force method, ABF) are not\nbased on the direct computation of a free energy surface, but rather of its\ngradient. Integrating the free energy surface is non-trivial in dimension\nhigher than one. 
Here the author introduces a flexible, portable implementation\nof a Poisson equation formalism to integrate free energy surfaces from\nestimated gradients in dimension 2 and 3, using any combination of periodic and\nnon-periodic (Neumann) boundary conditions. The algorithm is implemented in\nportable C++, and provided as a standalone tool that can be used to integrate\nmultidimensional gradient fields estimated on a grid using any algorithm, such\nas Umbrella Integration as a post-treatment of Umbrella Sampling simulations.\nIt is also included in the implementation of ABF (and its extended-system\nvariant eABF) in the Collective Variables Module, enabling the seamless\ncomputation of multidimensional free energy surfaces within ABF and eABF\nsimulations. A Python-based analysis toolchain is provided to easily plot and\nanalyze multidimensional ABF simulation results, including metrics to assess\ntheir convergence. The Poisson integration algorithm can also be used to\nperform Helmholtz decomposition of noisy gradient estimates on the fly,\nresulting in an efficient implementation of the projected ABF (pABF) method\nproposed by Leli\\`evre and co-workers. In numerical tests, pABF is found to\nlead to faster convergence with respect to ABF in simple cases of low intrinsic\ndimension, but seems detrimental to convergence in a more realistic case\ninvolving degenerate coordinates and hidden barriers, due to slower\nexploration. This suggests that variance reduction schemes do not always yield\nconvergence improvements when applied to enhanced sampling methods.", "category": "physics_comp-ph" }, { "text": "Non-Adlerian phase slip and non stationary synchronization of\n spin-torque oscillators to a microwave source: The non-autonomous dynamics of spin-torque oscillators in the presence of both\nmicrowave current and field at the same frequency can exhibit complex\nnon-isochronous effects. 
A non-stationary mode hopping between a quasi-periodic\nmode (frequency pulling) and a periodic mode (phase locking), and a deterministic\nphase slip characterized by an oscillatory synchronization transient\n(non-Adlerian phase slip) after the phase jump have been predicted. In the\nlatter effect, a wavelet-based analysis reveals that for positive and\nnegative phase jumps the synchronization transient occurs at the higher and\nlower sideband frequency, respectively. The non-Adlerian phase\nslip effect, although discovered in STOs, is a general property of\nnon-autonomous behavior valid for any non-isochronous auto-oscillator in the regime\nof moderate and large force locking.", "category": "physics_comp-ph" }, { "text": "Towards Quantum Monte Carlo Forces on Heavier Ions: Scaling Properties: Quantum Monte Carlo (QMC) forces have been studied extensively in recent\ndecades because of their importance for spectroscopic observables and geometry\noptimization. Here we benchmark the accuracy and statistical cost of QMC\nforces. The zero-variance zero-bias (ZVZB) force estimator is used in standard\nvariational and diffusion Monte Carlo simulations with mean-field based trial\nwavefunctions and atomic pseudopotentials. Statistical force uncertainties are\nobtained with a recently developed regression technique for heavy-tailed QMC\ndata [P. Lopez Rios and G. J. Conduit, Phys. Rev. E 99, 063312 (2019)]. By\nconsidering selected atoms and dimers with elements ranging from H to Zn\n($1\\leq Z_{\\mathrm{eff}} \\leq 20$), we assess the accuracy and the\ncomputational cost of ZVZB forces as the effective pseudopotential valence\ncharge, $Z_{\\mathrm{eff}}$, increases. We find that the cost of QMC energies\nand forces approximately follows simple power laws in $Z_{\\mathrm{eff}}$. The\nforce uncertainty grows more rapidly, leading to a best-case cost scaling\nrelationship of approximately $Z_{\\mathrm{eff}}^{6.5(3)}$ for DMC. 
We find that the\naffordable system size decreases as $Z_{\\mathrm{eff}}^{-2}$, insensitive to\nmodel assumptions or the use of \"space warp\" variance reduction. Our results\npredict the practical cost of obtaining forces for a range of materials, such\nas transition metal oxides where QMC forces have yet to be applied, and\nunderscore the importance of further developing force variance reduction\ntechniques, particularly for atoms with high $Z_{\\mathrm{eff}}$.", "category": "physics_comp-ph" }, { "text": "Feasibility Studies for the Panda Experiment at Fair: PANDA, the detector to study AntiProton ANnihilations at DArmstadt, will be\ninstalled at the future international Facility for Anti-proton and Ion Research\n(FAIR) in Darmstadt, Germany. The PANDA physics program is oriented towards\nstudies of the strong interaction and hadron structure performed with the\nhighest quality beam of anti-protons [1]. In preparation for PANDA\nexperiments, large-scale simulation studies are being performed to validate the\nperformance of all individual detector components and to advise on detector\noptimisation. The feasibility of the analysis strategies together with the\ncalibration methods is being studied. Simulations were carried out using the\nframework called PandaROOT [2], based on ROOT and the Virtual Monte Carlo\nconcept [3].\n [1] http://www-panda.gsi.de; Technical Progress Report (2005); Physics\nPerformance Report (2009), arXiv:0903.3905v1.\n [2] [PANDA Collaboration] S. Spataro, J. Phys. 119, 032035 (2008).\n [3] http://root.cern.ch", "category": "physics_comp-ph" }, { "text": "Protein Interaction Networks are Fragile against Random Attacks and\n Robust against Malicious Attacks: The capacity to resist attacks from the environment is crucial to the\nsurvival of all organisms. We quantitatively analyze the susceptibility of\nprotein interaction networks of numerous organisms to random and malicious\nattacks. 
We find for all organisms studied that random rewiring improves\nprotein network robustness, so that actual networks are more fragile than\nrewired surrogates. This unexpected fragility contrasts with the behavior of\nnetworks such as the Internet, whose robustness decreases with random rewiring.\nWe trace this surprising effect to the modular structure of protein networks.", "category": "physics_comp-ph" }, { "text": "Approximate Expressions for the Capillary Force and the Surface Area of\n a Liquid Bridge between Identical Spheres: We consider a liquid bridge between two identical spheres and provide\napproximate expressions for the capillary force and the exposed surface area of\nthe liquid bridge as functions of the liquid bridge's total volume and the\nsphere separation distance. The radius of the spheres and the solid-liquid\ncontact angle are parameters that enter the expressions. These expressions are\nneeded for efficient numerical simulations of drying suspensions.", "category": "physics_comp-ph" }, { "text": "Kinetics of liquid-solid phase transition in large nickel clusters: In this paper we have explored computationally the solidification process of\nlarge nickel clusters. This process has the characteristic features of the\nfirst order phase transition occurring in a finite system. The focus of our\nresearch is placed on the elucidation of correlated dynamics of a large\nensemble of particles in the course of the nanoscale liquid-solid phase\ntransition through the computation and analysis of the results of molecular\ndynamics (MD) simulations with the corresponding theoretical model. This\nproblem is of significant interest and importance, because the controlled\ndynamics of systems on the nanoscale is one of the central topics in the\ndevelopment of modern nanotechnologies.\n MD simulations of large molecular systems are computationally\ndemanding. 
Therefore, in order to advance with MD simulations we have used\nmodern computational methods based on graphics processing units (GPUs). The\nadvantages of the use of GPUs for MD simulations in comparison with CPUs\nare demonstrated and benchmarked. The reported speedup reaches factors greater\nthan 400. This work opens a path towards the MD-based exploration of a\nlarger number of scientific problems that were previously inaccessible with CPU-based\ncomputational technology.", "category": "physics_comp-ph" }, { "text": "Uncertainty relations for the Hohenberg-Kohn theorem: How does charge density constrain many-body wavefunctions in nature? The\nHohenberg-Kohn theorem for non-relativistic, interacting many-body\nSchr\\\"odinger systems is well-known and was proved using\n\\emph{reductio-ad-absurdum}; however, the physical mechanism or principle which\nenables this theorem in nature has not been understood. Here, we obtain\neffective canonical operators in the interacting many-body problem -- (i) the\nlocal electric field, which mediates interaction between particles, and\ncontributes to the potential energy; and (ii) the particle momenta, which\ncontribute to the kinetic energy. The commutation of these operators results in\nthe charge density distribution. Thus, quantum fluctuations of interacting\nmany-particle systems are constrained by charge density, providing a mechanism\nby which an external potential, by coupling to the charge density, tunes the\nquantum-mechanical many-body wavefunction. 
As an initial test, we obtain the\nfunctional form for the total energy of interacting many-particle systems, and in\nthe uniform density limit, find promising agreement with Quantum Monte Carlo\nsimulations.", "category": "physics_comp-ph" }, { "text": "Convective Viscous Cahn-Hilliard/Allen-Cahn Equation: Exact Solutions: Recently the combination of the well-known Cahn-Hilliard and Allen-Cahn\nequations was used to describe surface processes, such as simultaneous\nadsorption/desorption and surface diffusion. In the present paper we have\nconsidered the one-dimensional version of the Cahn-Hilliard/Allen-Cahn equation\ncomplemented with convective and viscous terms. Exact solutions are obtained\nand the conditions of their existence as well as the influence of applied field\nand additional dissipation are discussed.", "category": "physics_comp-ph" }, { "text": "Exact and efficient calculation of derivatives of Lagrange multipliers\n for molecular dynamic simulations of biological molecules: In the simulation of biological molecules, it is customary to impose\nconstraints on the fastest degrees of freedom to increase the time step. The\nevaluation of the involved constraint forces must be performed in an efficient\nmanner, for otherwise it would be a bottleneck in the calculations; for this\nreason, linearly-scaling calculation methods have become widely used. If\nintegrators of order higher than 2 (e.g. Gear predictor-corrector methods) are\nused to find the trajectories of atoms, the derivatives of the forces on atoms\nwith respect to time also need to be calculated, which includes the\nderivatives of constraint forces. In this letter we prove that such a calculation\ncan be performed analytically with linearly scaling numerical complexity\n(O(Nc), Nc being the number of constraints). 
This ensures the feasibility of\nconstrained molecular dynamics calculations with high-order integrators.", "category": "physics_comp-ph" }, { "text": "Massively Parallel Transport Sweeps on Meshes with Cyclic Dependencies: When solving the first-order form of the linear Boltzmann equation, a common\nmisconception is that the matrix-free computational method of ``sweeping the\nmesh\", used in conjunction with the Discrete Ordinates method, is too complex\nor does not scale well enough to be implemented in modern high performance\ncomputing codes. This has led to considerable efforts in the development of\nmatrix-based methods that are computationally expensive and is partly driven by\nthe requirements placed on modern spatial discretizations. In particular,\nmodern transport codes are required to support higher order elements, a concept\nthat invariably adds a lot of complexity to sweeps because of the introduction\nof cyclic dependencies with curved mesh cells. In this article we present\na comprehensive implementation of sweeping for a piecewise-linear DFEM spatial\ndiscretization, with particular focus on handling cyclic dependencies and\npossible extensions to higher order spatial discretizations. These methods are\nimplemented in a new C++ simulation framework called Chi-Tech ($\\chi{-}Tech$).\nWe present some typical simulation results with some performance aspects that\none can expect during real world simulations; we also present a scaling study\nto $>$100k processes where Chi-Tech maintains greater than 80\\% efficiency\nsolving a total of 87.7 trillion angular flux unknowns for a 116-group\nsimulation.", "category": "physics_comp-ph" }, { "text": "On Measurement and Computation: Inspired by the work of Feynman and Deutsch, we formally propose the theory of\nphysical computability and, accordingly, the physical complexity theory. To\nachieve this, a framework that can evaluate almost all forms of computation\nusing various physical mechanisms is discussed. 
Here, we focus on using it to\nreview the theory of Quantum Computation. As a preliminary study on more\ngeneral problems, some examples of other physical mechanisms are also given in\nthis paper.", "category": "physics_comp-ph" }, { "text": "Full Hydrodynamic Simulation of GaAs MESFETs: A finite difference upwind discretization scheme in two dimensions is\npresented in detail for the transient simulation of the highly coupled\nnon-linear partial differential equations of the full hydrodynamic model,\nthereby providing a practical engineering tool for improved charge carrier\ntransport simulations at high electric fields and frequencies. The\ndiscretization scheme preserves the conservation and transportive properties of\nthe equations. The hydrodynamic model is able to describe inertia effects which\nplay an increasing role in different fields of micro- and optoelectronics,\nwhere simplified charge transport models like the drift-diffusion model and the\nenergy balance model are no longer applicable. Results of extensive numerical\nsimulations are shown for a two-dimensional MESFET device. A comparison of the\nhydrodynamic model to the commonly used energy balance model is given and the\naccuracy of the results is discussed.", "category": "physics_comp-ph" }, { "text": "Solving for Micro- and Macro- Scale Electrostatic Configurations Using\n the Robin Hood Algorithm: We present a novel technique by which highly-segmented electrostatic\nconfigurations can be solved. The Robin Hood method is a matrix-inversion\nalgorithm optimized for solving high density boundary element method (BEM)\nproblems. We illustrate the capabilities of this solver by studying two\ndistinct geometry scales: (a) the electrostatic potential of a large volume\nbeta-detector and (b) the field enhancement present at the surface of electrode\nnano-structures. Geometries with elements numbering O(10^5) are easily\nmodeled and solved without loss of accuracy. 
The technique has recently been\nexpanded so as to include dielectrics and magnetic materials.", "category": "physics_comp-ph" }, { "text": "Magnetohydrodynamics with Physics Informed Neural Operators: The modeling of multi-scale and multi-physics complex systems typically\ninvolves the use of scientific software that can optimally leverage extreme\nscale computing. Despite major developments in recent years, these simulations\ncontinue to be computationally intensive and time consuming. Here we explore\nthe use of AI to accelerate the modeling of complex systems at a fraction of\nthe computational cost of classical methods, and present the first application\nof physics informed neural operators to model 2D incompressible\nmagnetohydrodynamics simulations. Our AI models incorporate tensor Fourier\nneural operators as their backbone, which we implemented with the TensorLY\npackage. Our results indicate that physics informed neural operators can\naccurately capture the physics of magnetohydrodynamics simulations that\ndescribe laminar flows with Reynolds numbers $Re\\leq250$. We also explore the\napplicability of our AI surrogates for turbulent flows, and discuss a variety\nof methodologies that may be incorporated in future work to create AI models\nthat provide a computationally efficient and high fidelity description of\nmagnetohydrodynamics simulations for a broad range of Reynolds numbers. The\nscientific software developed in this project is released with this manuscript.", "category": "physics_comp-ph" }, { "text": "A comparison between bottom-discontinuity numerical treatments in the DG\n framework: In this work, using a unified framework consisting of third-order accurate\ndiscontinuous Galerkin schemes, we perform a comparison between five different\nnumerical approaches to the free-surface shallow flow simulation on bottom\nsteps. 
Together with the study of the overall impact that such techniques have\non the numerical models, we highlight the role that the treatment of bottom\ndiscontinuities plays in the preservation of specific asymptotic conditions. In\nparticular, we consider three widespread approaches that perform well if the\nmotionless steady state has to be preserved and two approaches (one previously\nconceived by the first two authors and one original) which are also promising\nfor the preservation of a moving-water steady state. Several one-dimensional\ntest cases are used to verify the third-order accuracy of the models in\nsimulating an unsteady flow, the behavior of the models for a quiescent flow in\nthe cases of both continuous and discontinuous bottom, and the good resolution\nproperties of the schemes. Moreover, specific test cases are introduced to show\nthe behavior of the different approaches when a bottom step interacts with both\nsteady and unsteady moving flows.", "category": "physics_comp-ph" }, { "text": "The Butterfly Effect: Correlations Between Modeling in Nuclear-Particle\n Physics and Socioeconomic Factors: A scientometric analysis has been performed on selected physics journals to\nestimate the presence of simulation and modeling in physics literature in the\npast fifty years. Correlations between the observed trends and several social\nand economical factors have been evaluated.", "category": "physics_comp-ph" }, { "text": "Learning Generic Solutions for Multiphase Transport in Porous Media via\n the Flux Functions Operator: Traditional numerical schemes for simulating fluid flow and transport in\nporous media can be computationally expensive. Advances in machine learning for\nscientific computing have the potential to help speed up the simulation time in\nmany scientific and engineering fields. 
DeepONet has recently emerged as a\npowerful tool for accelerating the solution of partial differential equations\n(PDEs) by learning operators (mapping between function spaces) of PDEs. In this\nwork, we learn the mapping between the space of flux functions of the\nBuckley-Leverett PDE and the space of solutions (saturations). We use\nPhysics-Informed DeepONets (PI-DeepONets) to achieve this mapping without any\npaired input-output observations, except for a set of given initial or boundary\nconditions; ergo, eliminating the expensive data generation process. By\nleveraging the underlying physical laws via soft penalty constraints during\nmodel training, in a manner similar to Physics-Informed Neural Networks\n(PINNs), and a unique deep neural network architecture, the proposed\nPI-DeepONet model can predict the solution accurately given any type of flux\nfunction (concave, convex, or non-convex) while achieving up to four orders of\nmagnitude improvements in speed over traditional numerical solvers. Moreover,\nthe trained PI-DeepONet model demonstrates excellent generalization qualities,\nrendering it a promising tool for accelerating the solution of transport\nproblems in porous media.", "category": "physics_comp-ph" }, { "text": "Dimensionality Reduction and Reduced Order Modeling for Traveling Wave\n Physics: We develop an unsupervised machine learning algorithm for the automated\ndiscovery and identification of traveling waves in spatio-temporal systems\ngoverned by partial differential equations (PDEs). Our method uses sparse\nregression and subspace clustering to robustly identify translational\ninvariances that can be leveraged to build improved reduced order models\n(ROMs). Invariances, whether translational or rotational, are well known to\ncompromise the ability of ROMs to produce accurate and/or low-rank\nrepresentations of the spatio-temporal dynamics. 
However, by discovering\ntranslations in a principled way, data can be shifted into a coordinate system\nwhere quality, low-dimensional ROMs can be constructed. This approach can be\nused on either numerical or experimental data with or without knowledge of the\ngoverning equations. We demonstrate our method on a variety of PDEs of\nincreasing difficulty, taken from the field of fluid dynamics, showing the\nefficacy and robustness of the proposed approach.", "category": "physics_comp-ph" }, { "text": "The Biot-Darcy-Brinkman model of flow in deformable double porous media;\n homogenization and numerical modelling: In this paper we present the two-level homogenization of the flow in a\ndeformable double-porous structure described at two characteristic scales. The\nhigher level porosity associated with the mesoscopic structure is constituted\nby channels in a matrix made of a microporous material consisting of an elastic\nskeleton and pores saturated by a viscous fluid. The macroscopic model is\nderived by the homogenization of the flow in the heterogeneous structure\ncharacterized by two small parameters involved in the two-level asymptotic\nanalysis, whereby a scaling ansatz is adopted to respect the pore size\ndifferences. The first level upscaling of the fluid-structure interaction\nproblem yields a Biot continuum describing the mesoscopic matrix coupled with\nthe Stokes flow in the channels. The second step of the homogenization leads to\na macroscopic model involving three equations for displacements, the mesoscopic\nflow velocity and the micropore pressure. 
Due to interactions between the two\nporosities, the macroscopic flow is governed by a Darcy-Brinkman model\ncomprising two equations which are coupled with the overall equilibrium\nequation respecting the hierarchical structure of the two-phase medium.\nExpressions of the effective macroscopic parameters of the homogenized\ndouble-porosity continuum are derived, depending on the characteristic\nresponses of the mesoscopic structure. Some symmetry and reciprocity\nrelationships are shown and issues of boundary conditions are discussed. The\nmodel has been implemented in the finite element code SfePy which is\nwell-suited for computational homogenization. A numerical example of solving a\nnonstationary problem using a mixed finite element method is included.", "category": "physics_comp-ph" }, { "text": "The spectrum of non-centrosymmetrically layered spherical cavity\n resonator. I. The mode decomposition method: We develop a theoretical method for solving Maxwell's equations to obtain the\nfrequency spectra of inhomogeneous and asymmetric cavity resonators using a\ncouple of effective Debye-type potentials. The structure we study specifically\nis the layered spherical cavity resonator with a symmetrically or asymmetrically\ninserted inner dielectric sphere. The comparison of the exact numerical results\nobtained for the frequency spectrum of the layered cavity resonator with a\ncentrosymmetrically inserted sphere and the spectrum found from the suggested\ntheory reveals good agreement in the initial part of the frequency axis. The\ncoincidence accuracy depends on the number of trial resonant modes that we use\nwhen validating our method numerically.", "category": "physics_comp-ph" }, { "text": "The Droplet Formation-Dissolution Transition in Different Ensembles:\n Finite-Size Scaling from Two Perspectives: The formation and dissolution of a droplet is an important mechanism related\nto various nucleation phenomena. 
Here, we address the droplet\nformation-dissolution transition in a two-dimensional Lennard-Jones gas to\ndemonstrate a consistent finite-size scaling approach from two perspectives\nusing orthogonal control parameters. For the canonical ensemble, this means\nthat we fix the temperature while varying the density and vice versa. Using\nspecialised parallel multicanonical methods for both cases, we confirm\nanalytical predictions at fixed temperature (rigorously only proven for lattice\nsystems) and corresponding scaling predictions from expansions at fixed\ndensity. Importantly, our methodological approach provides us with reference\nquantities from the grand canonical ensemble that enter the analytical\npredictions. Our orthogonal finite-size scaling setup can be exploited for\ntheoretical and experimental investigations of general nucleation phenomena -\nif one identifies the corresponding reference ensemble and adapts the theory\naccordingly. In this case, our numerical approach can be readily translated to\nthe corresponding ensembles and thereby proves very useful for numerical\nstudies of equilibrium droplet formation, in general.", "category": "physics_comp-ph" }, { "text": "Modeling Heat Conduction with Two-Dissipative Variables: A\n Mechanism-Data Fusion Method: In this paper, we propose a mechanism-data fusion method (MDFM) for modeling\nheat conduction with two-dissipative variables. This method enjoys mathematical\nrigor from physical laws, adaptability from machine learning, and solvability\nfrom conventional numerical methods. Specifically, we use the\nconservation-dissipation formalism (CDF) to derive a system of first-order\nhyperbolic partial differential equations (PDEs) for heat conduction, which\nnaturally obeys the first and second laws of thermodynamics. 
Next, we train the\nunknown functions in this PDE system with deep neural networks; this involves a\n\"warm-up\" technique which prepares the connection of several time series.\nMoreover, we propose a novel method, the Inner-Step operation (ISO), to narrow\nthe gap from the discrete form to the continuous system. Extensive numerical\nexperiments are conducted to show that the proposed model can accurately predict\nheat conduction in the diffusive, hydrodynamic and ballistic regimes, and the model\ndisplays higher accuracy over a wider range of Knudsen numbers than the famous\nGuyer-Krumhansl (G-K) model.", "category": "physics_comp-ph" }, { "text": "Molecular propensity as a driver for explorative reactivity studies: Quantum chemical studies of reactivity involve calculations on a large number\nof molecular structures and comparison of their energies. Already the set-up of\nthese calculations limits the scope of the results that one will obtain,\nbecause several system-specific variables such as the charge and spin need to\nbe set prior to the calculation. For a reliable exploration of reaction\nmechanisms, a considerable number of calculations with varying global\nparameters must be taken into account, or important facts about the reactivity\nof the system under consideration can go undetected. For example, one could\nmiss crossings of potential energy surfaces for different spin states or might\nnot note that a molecule is prone to oxidation. 
Here, we introduce the concept\nof molecular propensity to account for the predisposition of a molecular system\nto react across different electronic states in certain nuclear configurations.\nWithin our real-time quantum chemistry framework, we developed an algorithm\nthat allows us to be alerted to such a propensity of a system under\nconsideration.", "category": "physics_comp-ph" }, { "text": "Monte Carlo Integration with Subtraction: This paper investigates a class of algorithms for numerical integration of a\nfunction in d dimensions over a compact domain by Monte Carlo methods. We\nconstruct a histogram approximation to the function using a partition of the\nintegration domain into a set of bins specified by some parameters. We then\nconsider two adaptations; the first is to subtract the histogram approximation,\nwhose integral we may easily evaluate explicitly, from the function and\nintegrate the difference using Monte Carlo; the second is to modify the bin\nparameters in order to make the variance of the Monte Carlo estimate of the\nintegral the same for all bins. This allows us to use Student's t-test as a\ntrigger for rebinning, which we claim is more stable than the \\chi-squared test\nthat is commonly used for this purpose. We provide a program that we have used\nto study the algorithm for the case where the histogram is represented as a\nproduct of one-dimensional histograms. 
We discuss the assumptions and\napproximations made, as well as give a pedagogical discussion of the myriad\nways in which the results of any such Monte Carlo integration program can be\nmisleading.", "category": "physics_comp-ph" }, { "text": "Solving inverse problems using conditional invertible neural networks: Inverse modeling for computing a high-dimensional spatially-varying property\nfield from indirect sparse and noisy observations is a challenging problem.\nThis is due to the complex physical system of interest often expressed in the\nform of multiscale PDEs, the high-dimensionality of the spatial property of\ninterest, and the incomplete and noisy nature of observations. To address these\nchallenges, we develop a model that maps the given observations to the unknown\ninput field in the form of a surrogate model. This inverse surrogate model will\nthen allow us to estimate the unknown input field for any given sparse and\nnoisy output observations. Here, the inverse mapping is limited to a broad\nprior distribution of the input field with which the surrogate model is\ntrained. In this work, we construct two- and three-dimensional inverse\nsurrogate models consisting of an invertible and a conditional neural network\ntrained in an end-to-end fashion with limited training data. The invertible\nnetwork is developed using a flow-based generative model. The developed inverse\nsurrogate model is then applied to an inversion task of a multiphase flow\nproblem where, given the pressure and saturation observations, the aim is to\nrecover a high-dimensional non-Gaussian permeability field where the two facies\nconsist of heterogeneous permeability and varying length-scales. 
For both the\ntwo- and three-dimensional surrogate models, the predicted sample realizations\nof the non-Gaussian permeability field are diverse, with the predictive mean\nbeing close to the ground truth, even when the model is trained with limited\ndata.", "category": "physics_comp-ph" }, { "text": "Binary interaction algorithms for the simulation of flocking and\n swarming dynamics: Microscopic models of flocking and swarming take into account large numbers of\ninteracting individuals. Numerical resolution of large flocks implies huge\ncomputational costs. Typically for $N$ interacting individuals we have a cost\nof $O(N^2)$. We tackle the problem numerically by considering approximated\nbinary interaction dynamics described by kinetic equations and simulating such\nequations by suitable stochastic methods. This approach permits us to compute\napproximate solutions as functions of a small scaling parameter $\\varepsilon$\nat a reduced complexity of $O(N)$ operations. Several numerical results show the\nefficiency of the algorithms proposed.", "category": "physics_comp-ph" }, { "text": "Largenet2: an object-oriented programming library for simulating large\n adaptive networks: The largenet2 C++ library provides an infrastructure for the simulation of\nlarge dynamic and adaptive networks with discrete node and link states. The\nlibrary is released as free software. It is available at\nhttp://rincedd.github.com/largenet2. Largenet2 is licensed under the Creative\nCommons Attribution-NonCommercial 3.0 Unported License.", "category": "physics_comp-ph" }, { "text": "High order local absorbing boundary conditions for acoustic waves in\n terms of farfield expansions: We devise a new high order local absorbing boundary condition (ABC) for\nradiating problems and scattering of time-harmonic acoustic waves from\nobstacles of arbitrary shape. 
By introducing an artificial boundary $S$\nenclosing the scatterer, the original unbounded domain $\\Omega$ is decomposed\ninto a bounded computational domain $\\Omega^{-}$ and an exterior unbounded\ndomain $\\Omega^{+}$. Then, we define interface conditions at the artificial\nboundary $S$, from truncated versions of the well-known Wilcox and Karp\nfarfield expansion representations of the exact solution in the exterior region\n$\\Omega^{+}$. As a result, we obtain a new local absorbing boundary condition\n(ABC) for a bounded problem on $\\Omega^{-}$, which effectively accounts for the\noutgoing behavior of the scattered field. In contrast to previously defined low\norder absorbing conditions, the order of the error induced by this ABC can\neasily match the order of the numerical method in $\\Omega^{-}$. We accomplish\nthis by simply adding as many terms as needed to the truncated farfield\nexpansions of Wilcox or Karp. The convergence of these expansions guarantees\nthat the order of approximation of the new ABC can be increased arbitrarily\nwithout having to enlarge the radius of the artificial boundary. We include\nnumerical results in two and three dimensions which demonstrate the improved\naccuracy and simplicity of this new formulation when compared to other\nabsorbing boundary conditions.", "category": "physics_comp-ph" }, { "text": "General relativistic resistive magnetohydrodynamics with robust\n primitive variable recovery for accretion disk simulations: Recent advances in black hole astrophysics, particularly the first visual\nevidence of a supermassive black hole at the center of the galaxy M87 by the\nEvent Horizon Telescope (EHT), and the detection of an orbiting \"hot spot\"\nnear the event horizon of Sgr A* in the Galactic center by the Gravity\nCollaboration, require the development of novel numerical methods to understand\nthe underlying plasma microphysics. 
Non-thermal emission related to such hot\nspots is conjectured to originate from plasmoids that form due to magnetic\nreconnection in thin current layers in the innermost accretion zone.\nResistivity plays a crucial role in current sheet formation, magnetic\nreconnection, and plasmoid growth in black hole accretion disks and jets. We\nincluded resistivity in the three-dimensional general-relativistic\nmagnetohydrodynamics (GRMHD) code BHAC and present the implementation of an\nImplicit-Explicit scheme to treat the stiff resistive source terms of the GRMHD\nequations. The algorithm is tested in combination with adaptive mesh refinement\nto resolve the resistive scales and a constrained transport method to keep the\nmagnetic field solenoidal. Several novel methods for primitive variable\nrecovery, a key part in relativistic magnetohydrodynamics codes, are presented\nand compared for accuracy, robustness, and efficiency. We propose a new\ninversion strategy that allows for resistive-GRMHD simulations of low\ngas-to-magnetic pressure ratio and highly magnetized regimes as applicable for\nblack hole accretion disks, jets, and neutron star magnetospheres. We apply the\nnew scheme to study the effect of resistivity on accreting black holes,\naccounting for dissipative effects such as reconnection.", "category": "physics_comp-ph" }, { "text": "Ensemble variational Monte Carlo for optimization of correlated excited\n state wave functions: Variational Monte Carlo methods have recently been applied to the calculation\nof excited states; however, it is still an open question what objective\nfunction is most effective. A promising approach is to optimize excited states\nusing a penalty to minimize overlap with lower eigenstates, which has the\ndrawback that states must be computed one at a time. We derive a general\nframework for constructing objective functions with minima at the lowest\n$N$ eigenstates of a many-body Hamiltonian. 
The objective function uses a\nweighted average of the energies and an overlap penalty, which must satisfy\nseveral conditions. We show this objective function has a minimum at the exact\neigenstates for a finite penalty, and provide a few strategies to minimize the\nobjective function. The method is demonstrated using ab initio variational\nMonte Carlo to calculate the degenerate first excited state of a CO molecule.", "category": "physics_comp-ph" }, { "text": "Can we find steady-state solutions to multiscale rarefied gas flows\n within dozens of iterations?: One of the central problems in the study of rarefied gas dynamics is to find\nthe steady-state solution of the Boltzmann equation quickly. When the Knudsen\nnumber is large, i.e. the system is highly rarefied, the conventional iteration\nscheme can lead to convergence within a few iterations. However, when the\nKnudsen number is small, i.e. the flow falls in the near-continuum regime,\nhundreds of thousands of iterations are needed, and yet the \"converged\" solutions\nare prone to be contaminated by accumulated error and large numerical\ndissipation. Recently, based on the gas kinetic models, the implicit unified\ngas kinetic scheme (UGKS) and its variants have significantly reduced the\niterations in the near-continuum flow regime, but the iteration count is still much\nhigher than for highly rarefied gas flows. In this paper, we put forward a general\nsynthetic iteration scheme (GSIS) to find the steady-state solutions of general\nrarefied gas flows within dozens of iterations at any Knudsen number. As the\nGSIS does not rely on the specific kinetic model/collision operator, it can be\nnaturally extended to quickly find converged solutions for mixture flows and\neven flows involving chemical reactions. 
These two advantages are also\nexpected to accelerate the slow convergence in simulations of near-continuum\nflows via the direct simulation Monte Carlo method and its low-variance\nversion.", "category": "physics_comp-ph" }, { "text": "Learning the constitutive relation of polymeric flows with memory: We develop a learning strategy to infer the constitutive relation for the\nstress of polymeric flows with memory. We make no assumptions regarding the\nfunctional form of the constitutive relations, except that they should be\nexpressible in differential form as a function of the local stress- and\nstrain-rate tensors. In particular, we use a Gaussian Process regression to\ninfer the constitutive relations from stress trajectories generated from\nsmall-scale (fixed strain-rate) microscopic polymer simulations. For\nsimplicity, a Hookean dumbbell representation is used as a microscopic model,\nbut the method itself can be generalized to incorporate more realistic\ndescriptions. The learned constitutive relation is then used to perform\nmacroscopic flow simulations, allowing us to update the stress distribution in\nthe fluid in a manner that accounts for the microscopic polymer dynamics. The\nresults using the learned constitutive relation are in excellent agreement with\nfull Multi-Scale Simulations, which directly couple micro/macro degrees of\nfreedom, as well as the exact analytical solution given by the Maxwell\nconstitutive relation. We are able to fully capture the history dependence of\nthe flow, as well as the elastic effects in the fluid. 
We expect the proposed\nlearning/simulation approach to be used not only to study the dynamics of\nentangled polymer flows, but also to study the complex dynamics of other Soft Matter\nsystems, which possess a similar hierarchy of length- and time-scales.", "category": "physics_comp-ph" }, { "text": "Modelling turbulence via numerical functional integration using Burgers'\n equation: We investigate the feasibility of modelling turbulence via numerical functional\nintegration. By transforming the Burgers' equation into a functional integral\nwe are able to calculate equal-time spatial correlation of system variables\nusing standard methods of multidimensional integration. In contrast to direct\nnumerical simulation, our method allows for simple parallelization of the\nproblem as the value of the integral within any region can be calculated\nseparately from others. Thus the calculations required for obtaining one\ncorrelation data set can be distributed to several supercomputers and/or the\ncloud simultaneously.\n We present the mathematical background of our method and its numerical\nimplementation. We are interested in a steady state system with isotropic and\nhomogeneous turbulence, for which we use a lattice version of the functional\nintegral used in the perturbative analysis of stochastic transport equations.\nThe numerical implementation is composed of a fast serial program for evaluating\nthe integral over a given volume and a parallel Python wrapper that divides the\nproblem into subvolumes and distributes the work among available processes. The\ncode is available at https://github.com/iljah/hdintegrator for anyone to\ndownload, use, study, modify and redistribute.\n We present velocity cross correlation for a 10x2 lattice in space and time\nrespectively, and analyse the computational resources required for the\nintegration. 
We also discuss potential improvements to the presented method.", "category": "physics_comp-ph" }, { "text": "Computing diffraction anomalies as nonlinear eigenvalue problems: When a plane electromagnetic wave impinges upon a diffraction grating or\nother periodic structures, reflected and transmitted waves propagate away from\nthe structure in different radiation channels. A diffraction anomaly occurs\nwhen the outgoing waves in one or more radiation channels vanish. Zero\nreflection, zero transmission and perfect absorption are important examples of\ndiffraction anomalies, and they are useful for manipulating electromagnetic\nwaves and light. Since diffraction anomalies appear only at specific\nfrequencies and/or wavevectors, and may require the tuning of structural or\nmaterial parameters, they are relatively difficult to find by standard\nnumerical methods. Iterative methods may be used, but good initial guesses are\nrequired. To determine all diffraction anomalies in a given frequency interval,\nit is necessary to repeatedly solve the diffraction problem for many\nfrequencies. In this paper, an efficient numerical method is developed for\ncomputing diffraction anomalies. The method relies on nonlinear eigenvalue\nformulations for scattering anomalies and solves the nonlinear eigenvalue\nproblems by a contour-integral method. Numerical examples involving periodic\narrays of cylinders are presented to illustrate the new method.", "category": "physics_comp-ph" }, { "text": "Improved guaranteed computable bounds on homogenized properties of\n periodic media by Fourier-Galerkin method with exact integration: Moulinec and Suquet introduced FFT-based homogenization in 1994, and twenty\nyears later, their approach is still effective for evaluating the homogenized\nproperties arising from the periodic cell problem. 
This paper builds on the\nauthor's (2013) variational reformulation approximated by trigonometric\npolynomials establishing two numerical schemes: Galerkin approximation (Ga) and\na version with numerical integration (GaNi). The latter approach, fully\nequivalent to the original Moulinec-Suquet algorithm, was used to evaluate\nguaranteed upper-lower bounds on homogenized coefficients incorporating a\nclosed-form double grid quadrature. Here, these concepts, based on the primal\nand the dual formulations, are employed for the Ga scheme. For the same\ncomputational effort, the Ga outperforms the GaNi with more accurate guaranteed\nbounds and more predictable numerical behaviors. The quadrature technique leading\nto block-sparse linear systems is extended here to materials defined via\nhigh-resolution images in a way which allows for effective treatment using the\nFFT. Memory demands are reduced by a reformulation of the double to the\noriginal grid scheme using FFT shifts. Minimization of the bounds during\niterations of conjugate gradients is effective, particularly when incorporating\na solution from a coarser grid. The methodology presented here for the scalar\nlinear elliptic problem could be extended to more complex frameworks.", "category": "physics_comp-ph" }, { "text": "Three-dimensional lattice Boltzmann models for solid-liquid phase change: Three-dimensional (3 D) multiple-relaxation-time (MRT) and\nsingle-relaxation-time (SRT) lattice Boltzmann (LB) models are proposed for the\nsolid-liquid phase change. The enthalpy conservation equation can be recovered\nfrom the present models. The reasonable relationship of the relaxation times in\nthe MRT model is discussed. Both one-dimensional (1 D) melting and\nsolidification with analytical solutions are respectively calculated by the SRT\nand MRT models for validation. Compared with the SRT model, the MRT one is more\naccurate in capturing the phase interface. 
The MRT model is also verified with\nother published two-dimensional (2 D) numerical results. The validations\nsuggest that the present MRT approach is qualified to simulate the 3 D\nsolid-liquid phase change process. Furthermore, the influences of Rayleigh\nnumber and Prandtl number on the 3 D melting are investigated.", "category": "physics_comp-ph" }, { "text": "Comparison of the LBE and DUGKS methods for DNS of decaying homogeneous\n isotropic turbulence: The main objective of this work is to perform a detailed comparison of the\nlattice Boltzmann equation (LBE) and the recently developed discrete unified\ngas-kinetic scheme (DUGKS) methods for direct numerical simulation (DNS) of the\ndecaying homogeneous isotropic turbulence (DHIT) in a periodic box. The flow\nfields and key statistical quantities computed by both methods are compared\nwith those from pseudo-spectral (PS) method. The results show that the LBE and\nDUGKS have almost the same accuracy when the flow field is well-resolved, and\nthat the LBE is less dissipative and is slightly more efficient than the DUGKS,\nbut the latter has a superior numerical stability, particularly for high\nReynolds number flows. Therefore, the DUGKS method can be viewed as a viable\ntool for DNS of turbulent flows. It should be emphasized that the main\nadvantage of the DUGKS when compared with the LBE method is its feasibility in\nadopting nonuniform meshes, which is critical for wall-bounded turbulent flows.\nThe present work provides a basis for further applications of DUGKS in studying\nthe physics of the turbulent flows.", "category": "physics_comp-ph" }, { "text": "UKRmol+: a suite for modelling of electronic processes in molecules\n interacting with electrons, positrons and photons using the R-matrix method: UKRmol+ is a new implementation of the UK R-matrix electron-molecule\nscattering code. 
Key features of the implementation are the use of quantum\nchemistry codes such as Molpro to provide target molecular orbitals; the\noptional use of mixed Gaussian -- B-spline basis functions to represent the\ncontinuum and improved configuration and Hamiltonian generation. The code is\ndescribed, and examples covering electron collisions from a range of targets,\npositron collisions and photoionisation are presented. The codes are freely\navailable as a tarball from Zenodo.", "category": "physics_comp-ph" }, { "text": "Solution of the Generalized Linear Boltzmann Equation for Transport in\n Multidimensional Stochastic Media: The generalized linear Boltzmann equation (GLBE) is a recently developed\nframework based on non-classical transport theory for modeling the expected\nvalue of particle flux in an arbitrary stochastic medium. Provided with a\nnon-classical cross-section for a given statistical description of a medium,\nany transport problem in that medium may be solved. Previous work has only\nconsidered one-dimensional media without finite boundary conditions and\ndiscrete binary mixtures of materials. In this work the solution approach for\nthe GLBE in multidimensional media with finite boundaries is outlined. The\ndiscrete ordinates method with an implicit discretization of the pathlength\nvariable is used to leverage sweeping methods for the transport operator. In\naddition, several convenient approximations for non-classical cross-sections\nare introduced. 
The solution approach is verified against random realizations\nof a Gaussian process medium in a square enclosure.", "category": "physics_comp-ph" }, { "text": "Critical exponents for the cloud-crystal phase transition of charged\n particles in a Paul Trap: It is well known that charged particles stored in a Paul trap, one of the\nmost versatile tools in atomic and molecular physics, may undergo a phase\ntransition from a disordered cloud state to a geometrically well-ordered\ncrystalline state (the Wigner crystal). In this paper we show that the average\nlifetime $\\bar\\tau_m$ of the metastable cloud state preceding the cloud\n$\\rightarrow$ crystal phase transition follows a power law, $\\bar\\tau_m \\sim\n(\\gamma-\\gamma_c)^{-\\beta}$, $\\gamma>\\gamma_c$, where $\\gamma_c$ is the\ncritical value of the damping constant $\\gamma$ at which the cloud\n$\\rightarrow$ crystal phase transition occurs. The critical exponent $\\beta$\ndepends on the trap control parameter $q$, but is independent of the number of\nparticles $N$ stored in the trap and the trap control parameter $a$, which\ndetermines the shape (oblate, prolate, or spherical) of the cloud. For\n$q=0.15,0.20$, and $0.25$, we find $\\beta=1.20\\pm 0.03$, $\\beta=1.61\\pm 0.09$,\nand $\\beta=2.38\\pm 0.12$, respectively. In addition we find that for given $a$\nand $q$, the critical value $\\gamma_c$ of the damping scales approximately like\n$\\gamma_c=C \\ln [ \\ln (N)] + D$ as a function of $N$, where $C$ and $D$ are\nconstants. Beyond their relevance for Wigner crystallization of nonneutral\nplasmas in Paul traps and mini storage rings, we conjecture that our results\nare also of relevance for the field of crystalline beams.", "category": "physics_comp-ph" }, { "text": "A new HLLD Riemann solver with Boris correction for reducing Alfv\u00e9n\n speed: A new Riemann solver is presented for the ideal magnetohydrodynamics (MHD)\nequations with the so-called Boris correction. 
The Boris correction is applied\nto reduce wave speeds, avoiding an extremely small timestep in MHD simulations.\nThe proposed Riemann solver, Boris-HLLD, is based on the HLLD solver. As done\nby the original HLLD solver, (1) the Boris-HLLD solver has four intermediate\nstates in the Riemann fan when left and right states are given, (2) it resolves\nthe contact discontinuity, Alfv\\'en waves, and fast waves, and (3) it satisfies\nall the jump conditions across shock waves and discontinuities except for slow\nshock waves. The results of a shock tube problem indicate that the scheme with\nthe Boris-HLLD solver captures contact discontinuities sharply and it exhibits\nshock waves without any overshoot when using the minmod limiter. The stability\ntests show that the scheme is stable when $|u| \\lesssim 0.5c$ for a low\nAlfv\\'en speed ($V_A \\lesssim c$), where $u$, $c$, and $V_A$ denote the gas\nvelocity, speed of light, and Alfv\\'en speed, respectively. For a high Alfv\\'en\nspeed ($V_A \\gtrsim c$), where the plasma beta is relatively low in many cases,\nthe stable region is large, $|u| \\lesssim (0.6-1) c$. We discuss the effect of\nthe Boris correction on physical quantities using several test problems. The\nBoris-HLLD scheme can be useful for problems with supersonic flows in which\nregions with a very low plasma beta appear in the computational domain.", "category": "physics_comp-ph" }, { "text": "Consistent forcing scheme in the cascaded lattice Boltzmann method: In this paper, we give a more pellucid derivation for the cascaded lattice\nBoltzmann method (CLBM) based on a general multiple-relaxation-time (MRT) frame\nthrough defining a shift matrix. When the shift matrix is a unit matrix, the\nCLBM degrades into an MRT LBM. Based on this, a consistent forcing scheme is\ndeveloped for the CLBM. 
The applicability of the non-slip rule, the\nsecond-order convergence rate in space and the property of isotropy for the\nconsistent forcing scheme are demonstrated through the simulation of several\ncanonical problems. Several other existing force schemes previously used in the\nCLBM are also examined. The study clarifies the relation between MRT LBM and\nCLBM under a general framework.", "category": "physics_comp-ph" }, { "text": "Anisotropic interfacial tension, contact angles, and line tensions: A\n graphics-processing-unit-based Monte Carlo study of the Ising model: As a generic example for crystals where the crystal-fluid interface tension\ndepends on the orientation of the interface relative to the crystal lattice\naxes, the nearest neighbor Ising model on the simple cubic lattice is studied\nover a wide temperature range, both above and below the roughening transition\ntemperature. Using a thin film geometry $L_x \\times L_y \\times L_z$ with\nperiodic boundary conditions along the z-axis and two free $L_x \\times L_y$\nsurfaces at which opposing surface fields $\\pm H_{1}$ act, under conditions of\npartial wetting, a single planar interface inclined under a contact angle\n$\\theta < \\pi/2$ relative to the yz-plane is stabilized. In the y-direction, a\ngeneralization of the antiperiodic boundary condition is used that maintains\nthe translational invariance in y-direction despite the inhomogeneity of the\nmagnetization distribution in this system. This geometry allows a simultaneous\nstudy of the angle-dependent interface tension, the contact angle, and the line\ntension (which depends on the contact angle, and on temperature). All these\nquantities are extracted from suitable thermodynamic integration procedures. 
In\norder to keep finite size effects as well as statistical errors small enough,\nrather large lattice sizes (of the order of 46 million sites) are found to be\nnecessary; the availability of a very efficient code implementation on graphics\nprocessing units (GPUs) was crucial for the feasibility of this study.", "category": "physics_comp-ph" }, { "text": "Iterative Calculation of Characteristic Modes Using Arbitrary Full-wave\n Solvers: An iterative algorithm is adopted to construct approximate representations of\nmatrices describing the scattering properties of arbitrary objects. The method\nis based on the implicit evaluation of scattering responses from iteratively\ngenerated excitations. The method does not require explicit knowledge of any\nsystem matrices (e.g., stiffness or impedance matrices) and is well-suited for\nuse with matrix-free and iterative full-wave solvers, such as FDTD, FEM, and\nMLFMA. The proposed method allows for significant speed-up compared to the\ndirect construction of a full transition matrix or scattering dyadic. The\nmethod is applied to the characteristic mode decomposition of arbitrarily\nshaped obstacles of arbitrary material distribution. Examples demonstrating the\nspeed-up and complexity of the algorithm are studied with several commercial\nsoftware packages.", "category": "physics_comp-ph" }, { "text": "Lagrange Discrete Ordinates: a new angular discretization for the three\n dimensional linear Boltzmann equation: The classical $S_n$ equations of Carlson and Lee have been a mainstay in\nmulti-dimensional radiation transport calculations. In this paper, an\nalternative to the $S_n$ equations, the \"Lagrange Discrete Ordinate\" (LDO)\nequations are derived. These equations are based on an interpolatory framework\nfor functions on the unit sphere in three dimensions. While the LDO equations\nretain the formal structure of the classical $S_n$ equations, they have a\nnumber of important differences. 
The LDO equations naturally allow the angular\nflux to be evaluated in directions other than those found in the quadrature\nset. To calculate the scattering source in the LDO equations, no spherical\nharmonic moments are needed--only values of the angular flux. Moreover, the LDO\nscattering source preserves the eigenstructure of the continuous scattering\noperator. The formal similarity of the LDO equations with the $S_n$ equations\nshould allow easy modification of mature 3D $S_n$ codes such as PARTISN or\nPENTRAN to solve the LDO equations. Numerical results are shown that\ndemonstrate the spectral convergence (in angle) of the LDO equations for smooth\nsolutions and the ability to mitigate ray effects by increasing the angular\nresolution of the LDO equations.", "category": "physics_comp-ph" }, { "text": "Efficient wavefunction propagation by minimizing accumulated action: This paper presents a new technique to calculate the evolution of a quantum\nwavefunction in a chosen spatial basis by minimizing the accumulated action.\nIntroduction of a finite temporal basis reduces the problem to a set of linear\nequations, while an appropriate choice of temporal basis set offers improved\nconvergence relative to methods based on matrix exponentiation for a class of\nphysically relevant problems.", "category": "physics_comp-ph" }, { "text": "HEP Software Foundation Community White Paper Working Group - Data\n Processing Frameworks: Data processing frameworks are an essential part of HEP experiments' software\nstacks. Frameworks provide a means by which code developers can undertake the\nessential tasks of physics data processing, accessing relevant inputs and\nstoring their outputs, in a coherent way without needing to know the details of\nother domains. Frameworks provide essential core services for developers and\nhelp deliver a configurable working application to the experiments' production\nsystems. 
Modern HEP processing frameworks are in the process of adapting to a\nnew computing landscape dominated by parallel processing and heterogeneity,\nwhich pose many questions regarding enhanced functionality and scaling that\nmust be faced without compromising the maintainability of the code. In this\npaper we identify a program of work that can help further clarify the key\nconcepts of frameworks for HEP and then spawn R&D activities that can focus the\ncommunity's efforts in the most efficient manner to address the challenges of\nthe upcoming experimental program.", "category": "physics_comp-ph" }, { "text": "$\u03bc$MECH Micromechanics Library: The paper presents the project of an open source C/C++ library of analytical\nsolutions to micromechanical fields within media with ellipsoidal\nheterogeneities. The solutions are based on Eshelby's stress-free, in general\npolynomial, eigenstrains and equivalent inclusion method. To some extent, the\ninteractions among inclusions in a non-dilute medium are taken into account by\nmeans of the self-compatibility algorithm. Moreover, the library is furnished\nwith a powerful I/O interface and conventional homogenization tools. Advantages\nand limitations of the implemented strategies are addressed through comparisons\nwith reference solutions by means of the Finite Element Method.", "category": "physics_comp-ph" }, { "text": "Adaptive two-regime method: application to front propagation: The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale)\nstochastic simulation of reaction-diffusion problems. It efficiently couples\ndetailed Brownian dynamics simulations with coarser lattice-based models. The\nATRM is a generalization of the previously developed Two-Regime Method [Flegg\net al, Journal of the Royal Society Interface, 2012] to multiscale problems\nwhich require a dynamic selection of regions where detailed Brownian dynamics\nsimulation is used. 
Typical applications include front propagation or\nspatio-temporal oscillations. In this paper, the ATRM is used for an in-depth\nstudy of front propagation in a stochastic reaction-diffusion system which has\nits mean-field model given in terms of the Fisher equation [Fisher, Annals of\nEugenics, 1937]. It exhibits a travelling reaction front which is sensitive to\nstochastic fluctuations at the leading edge of the wavefront. Previous studies\ninto stochastic effects on the Fisher wave propagation speed have focused on\nlattice-based models, but there has been limited progress using off-lattice\n(Brownian dynamics) models, which suffer due to their high computational cost,\nparticularly at the high molecular numbers that are necessary to approach the\nFisher mean-field model. By modelling only the wavefront itself with the\noff-lattice model, it is shown that the ATRM leads to the same Fisher wave\nresults as purely off-lattice models, but at a fraction of the computational\ncost. The error analysis of the ATRM is also presented for a morphogen gradient\nmodel.", "category": "physics_comp-ph" }, { "text": "Machine Learning in High Energy Physics Community White Paper: Machine learning has been applied to several problems in particle physics\nresearch, beginning with applications to high-level physics analysis in the\n1990s and 2000s, followed by an explosion of applications in particle and event\nidentification and reconstruction in the 2010s. In this document we discuss\npromising future research and development areas for machine learning in\nparticle physics. We detail a roadmap for their implementation, software and\nhardware resource requirements, collaborative initiatives with the data science\ncommunity, academia and industry, and training the particle physics community\nin data science. 
The main objective of the document is to connect and motivate\nthese areas of research and development with the physics drivers of the\nHigh-Luminosity Large Hadron Collider and future neutrino experiments and\nidentify the resource needs for their implementation. Additionally we identify\nareas where collaboration with external communities will be of great benefit.", "category": "physics_comp-ph" }, { "text": "Computing optimal interfacial structure of ordered phases: We propose a general framework of computing interfacial structure. If an\nordered phase is involved, the interfacial structure can be obtained by simply\nminimizing the free energy with compatible boundary conditions. The framework\nis applied to the Landau-Brazovskii model and works efficiently.", "category": "physics_comp-ph" }, { "text": "The HEP Software Foundation Community: The HEP Software Foundation was founded in 2014 to tackle common problems of\nsoftware development and sustainability for high-energy physics. In this paper\nwe outline the motivation for the founding of the organisation and give a brief\nhistory of its development. We describe how the organisation functions today\nand what challenges remain to be faced in the future.", "category": "physics_comp-ph" }, { "text": "Efficient implementation of the superposition of atomic potentials\n initial guess for electronic structure calculations in Gaussian basis sets: The superposition of atomic potentials (SAP) approach has recently been shown\nto be a simple and efficient way to initialize electronic structure\ncalculations [S. Lehtola, J. Chem. Theory Comput. 15, 1593 (2019)]. Here, we\nstudy the differences between effective potentials from fully numerical density\nfunctional and optimized effective potential calculations for fixed\nconfigurations. 
We find that the differences are small, overall, and choose\nexchange-only potentials at the local density approximation level of theory\ncomputed on top of Hartree-Fock densities as a good compromise. The differences\nbetween potentials arising from different atomic configurations are also found\nto be small at this level of theory.\n Furthermore, we discuss the efficient Gaussian-basis implementation of SAP\nvia error function fits to fully numerical atomic radial potentials. The guess\nobtained from the fitted potentials can be easily implemented in any\nGaussian-basis quantum chemistry code in terms of two-electron integrals. Fits\ncovering the whole periodic table from H to Og are reported for\nnon-relativistic as well as fully relativistic four-component calculations that\nhave been carried out with fully numerical approaches.", "category": "physics_comp-ph" }, { "text": "Modelling of transport phenomena in gases based on quantum scattering: Quantum interatomic scattering is implemented in the direct simulation\nMonte Carlo (DSMC) method applied to transport phenomena in rarefied gases. In\ncontrast to the traditional DSMC method based on classical scattering, the\nproposed implementation allows us to model flows of gases over the whole\ntemperature range beginning from 1 K up to any high temperature when no\nionization happens. To illustrate the new numerical approach, two helium isotopes $^3$He\nand $^4$He were considered in two canonical problems, namely, heat transfer\nbetween two planar surfaces and planar Couette flow. To solve these problems,\nthe ab initio potential for helium is used, but the proposed technique can be\nused with any intermolecular potential. The problems were solved over the\ntemperature range from 1 K to 3000 K and for two values of the rarefaction\nparameter, 1 and 10. The former corresponds to the transitional regime and the\nlatter describes the temperature jump and velocity slip regime. 
No influence of\nthe quantum effects was detected within the numerical error of 0.1 % for\ntemperatures of 300 K and higher. However, the quantum approach requires less\ncomputational effort than the classical one in this temperature range. For\ntemperatures lower than 300 K, the influence of the quantum effects exceeds the\nnumerical error and reaches 67 % at the temperature of 1 K.", "category": "physics_comp-ph" }, { "text": "A Discontinuous Galerkin method with a modified penalty flux for the\n propagation and scattering of acousto-elastic waves: We develop an approach for simulating acousto-elastic wave phenomena,\nincluding scattering from fluid-solid boundaries, where the solid is allowed to\nbe anisotropic, with the Discontinuous Galerkin method. We use a coupled\nfirst-order elastic strain-velocity, acoustic velocity-pressure formulation,\nand append penalty terms based on interior boundary continuity conditions to\nthe numerical (central) flux so that the consistency condition holds for the\ndiscretized Discontinuous Galerkin weak formulation. We incorporate the\nfluid-solid boundaries through these penalty terms and obtain a stable\nalgorithm. Our approach avoids the diagonalization into polarized wave\nconstituents such as in the approach based on solving elementwise Riemann\nproblems.", "category": "physics_comp-ph" }, { "text": "Explicit coupling of acoustic and elastic wave propagation in finite\n difference simulations: We present a mechanism to explicitly couple the finite-difference\ndiscretizations of 2D acoustic and isotropic elastic wave systems that are\nseparated by straight interfaces. Such coupled simulations allow the\napplication of the elastic model to geological regions that are of special\ninterest for seismic exploration studies (e.g., the areas surrounding salt\nbodies), while the computationally more tractable acoustic model is still\napplied in the background regions. 
Specifically, the acoustic wave system\nis expressed in terms of velocity and pressure while the elastic wave system is\nexpressed in terms of velocity and stress. Both systems are posed in\nfirst-order forms and discretized on staggered grids. Special variants of the\nstandard finite-difference operators, namely, operators that possess the\nsummation-by-parts property, are used for the approximation of spatial\nderivatives. Penalty terms, which are also referred to as the simultaneous\napproximation terms, are designed to weakly impose the elastic-acoustic\ninterface conditions in the finite-difference discretizations and couple the\nelastic and acoustic wave simulations together. With the presented mechanism,\nwe are able to perform the coupled elastic-acoustic wave simulations stably and\naccurately. Moreover, it is shown that the energy-conserving property in the\ncontinuous systems can be preserved in the discretization with carefully\ndesigned penalty terms.", "category": "physics_comp-ph" }, { "text": "Three-Dimensional Model for Electrospinning Processes in Controlled Gas\n Counterflow: We study the effects of a controlled gas flow on the dynamics of electrified\njets in the electrospinning process. The main idea is to model the air drag\neffects of the gas flow by using a non-linear Langevin-like approach. The model\nis employed to investigate the dynamics of electrified polymer jets at\ndifferent conditions of air drag force, showing that a controlled gas\ncounterflow can lead to a decrease of the average diameter of electrospun\nfibers, and potentially to an improvement of the quality of electrospun\nproducts. 
We probe the influence of air drag effects on the bending\ninstabilities of the jet and on its angular fluctuations during the process.\nThe insights provided by this study might prove useful for the design of future\nelectrospinning experiments and polymer nanofiber materials.", "category": "physics_comp-ph" }, { "text": "The Virtual Research Environment: towards a comprehensive analysis\n platform: The Virtual Research Environment is an analysis platform developed at CERN\nserving the needs of scientific communities involved in European Projects. Its\nscope is to facilitate the development of end-to-end physics workflows,\nproviding researchers with access to an infrastructure and to the digital\ncontent necessary to produce and preserve a scientific result in compliance\nwith FAIR principles. The platform's development is aimed at demonstrating how\nsciences spanning from High Energy Physics to Astrophysics could benefit from\nthe usage of common technologies, initially born to satisfy CERN's\nexabyte-scale data management needs. The Virtual Research Environment's main\ncomponents are (1) a federated distributed storage solution (the Data Lake),\nproviding functionalities for data injection and replication through a Data\nManagement framework (Rucio), (2) a computing cluster supplying the processing\npower to run full analyses with Reana, a re-analysis software, (3) a federated\nand reliable Authentication and Authorization layer and (4) an enhanced\nnotebook interface with containerised environments to hide the infrastructure's\ncomplexity from the user. 
The deployment of the Virtual Research Environment is\nopen-source and modular, in order to make it easily reproducible by partner\ninstitutions; it is publicly accessible and kept up to date by taking advantage\nof state-of-the-art IT infrastructure technologies.", "category": "physics_comp-ph" }, { "text": "The stochastic counterpart of conservation laws with heterogeneous\n conductivity fields: application to deterministic problems and uncertainty\n quantification: Conservation laws in the form of elliptic and parabolic partial differential\nequations (PDEs) are fundamental to the modeling of many problems such as heat\ntransfer and flow in porous media. Many such PDEs are stochastic due to the\npresence of uncertainty in the conductivity field. Based on the relation\nbetween stochastic diffusion processes and PDEs, Monte Carlo (MC) methods are\navailable to solve these PDEs. These methods are especially relevant for cases\nwhere we are interested in the solution in a small subset of the domain. The\nexisting MC methods based on the stochastic formulation require restrictively\nsmall time steps for high-variance conductivity fields. Moreover, in many\napplications the conductivity is piecewise constant and the existing methods\nare not readily applicable in these cases. Here we provide an algorithm to\nsolve one-dimensional elliptic problems that bypasses these two limitations.\nThe methodology is demonstrated using problems governed by deterministic and\nstochastic PDEs. It is shown that the method provides an efficient alternative\nto compute the statistical moments of the solution to a stochastic PDE at any\npoint in the domain. 
A variance reduction scheme is proposed for applying the\nmethod for efficient mean calculations.", "category": "physics_comp-ph" }, { "text": "Fluctuation-Induced Phenomena in Nanoscale Systems: Harnessing the Power\n of Noise: The famous Johnson-Nyquist formula relating noise current to conductance has\na microscopic generalization relating noise current density to microscopic\nconductivity, with corollary relations governing noise in the components of the\nelectromagnetic fields. These relations, known collectively in physics as\nfluctuation-dissipation relations, form the basis of the modern understanding\nof fluctuation-induced phenomena, a field of burgeoning importance in\nexperimental physics and nanotechnology. In this review, we survey recent\nprogress in computational techniques for modeling fluctuation-induced\nphenomena, focusing on two cases of particular interest: near-field radiative\nheat transfer and Casimir forces. In each case we review the basic physics of\nthe phenomenon, discuss semi-analytical and numerical algorithms for\ntheoretical analysis, and present recent predictions for novel phenomena in\ncomplex material and geometric configurations.", "category": "physics_comp-ph" }, { "text": "On the Courant-Friedrichs-Lewy condition for numerical solvers of the\n coagulation equation: Evolving the size distribution of solid aggregates challenges simulations of\nyoung stellar objects. Among other difficulties, generic formulae for stability\nconditions of explicit solvers provide severe constraints when integrating the\ncoagulation equation for astrophysical objects. 
Recent numerical experiments\nhave reported that these generic conditions may be much too stringent.\nBy analysing the coagulation equation in Laplace space, we explain why this\nis indeed the case and provide a novel stability condition which avoids time\nover-sampling.", "category": "physics_comp-ph" }, { "text": "Genarris: Random Generation of Molecular Crystal Structures and Fast\n Screening with a Harris Approximation: We present Genarris, a Python package that performs configuration space\nscreening for molecular crystals of rigid molecules by random sampling with\nphysical constraints. For fast energy evaluations, Genarris employs a Harris\napproximation, whereby the total density of a molecular crystal is constructed\nvia superposition of single molecule densities. Dispersion-inclusive density\nfunctional theory (DFT) is then used for the Harris density without performing\na self-consistency cycle. Genarris uses machine learning for clustering, based\non a relative coordinate descriptor (RCD) developed specifically for molecular\ncrystals, which is shown to be robust in identifying packing motif similarity.\nIn addition to random structure generation, Genarris offers three workflows\nbased on different sequences of successive clustering and selection steps: the\n\"Rigorous\" workflow is an exhaustive exploration of the potential energy\nlandscape, the \"Energy\" workflow produces a set of low-energy structures, and\nthe \"Diverse\" workflow produces a maximally diverse set of structures. The\nlatter is recommended for generating initial populations for genetic\nalgorithms. 
Here, the implementation of Genarris is reported and its\napplication is demonstrated for three test cases.", "category": "physics_comp-ph" }, { "text": "Efficient planning of peen-forming patterns via artificial neural\n networks: Robust automation of the shot peen forming process demands a closed-loop\nfeedback in which a suitable treatment pattern needs to be found in real-time\nfor each treatment iteration. In this work, we present a method for finding the\npeen-forming patterns, based on a neural network (NN), which learns the\nnonlinear function that relates a given target shape (input) to its optimal\npeening pattern (output), from data generated by finite element simulations.\nThe trained NN yields patterns with an average binary accuracy of 98.8\\% with\nrespect to the ground truth in microseconds.", "category": "physics_comp-ph" }, { "text": "Semi-Lagrangian lattice Boltzmann method for compressible flows: This work thoroughly investigates a semi-Lagrangian lattice Boltzmann (SLLBM)\nsolver for compressible flows. In contrast to other LBM for compressible flows,\nthe vertices are organized in cells, and interpolation polynomials up to fourth\norder are used to attain the off-vertex distribution function values. Differing\nfrom the recently introduced Particles on Demand (PoD) method, the method\noperates in a static, non-moving reference frame. Yet the SLLBM in the present\nformulation grants supersonic flows and exhibits a high degree of Galilean\ninvariance. The SLLBM solver allows for an independent time step size due to\nthe integration along characteristics and for the use of unusual velocity sets,\nlike the D2Q25, which is constructed by the roots of the fifth-order Hermite\npolynomial. The properties of the present model are shown in diverse example\nsimulations of a two-dimensional Taylor-Green vortex, a Sod shock tube, a\ntwo-dimensional Riemann problem and a shock-vortex interaction. 
It is shown\nthat the cell-based interpolation and the use of Gauss-Lobatto-Chebyshev\nsupport points allow for spatially high-order solutions and minimize the mass\nloss caused by the interpolation. Transformed grids in the shock-vortex\ninteraction show the general applicability to non-uniform grids.", "category": "physics_comp-ph" }, { "text": "Motifs in earthquake networks: Romania, Italy, United States of America,\n and Japan: We present a detailed description of seismic activity in Romania, Italy, and\nJapan, as well as the California seismic zone in the United States of America,\nbased on the statistical analysis of the underlying earthquake networks used to\nmodel the aforementioned zones. Our results on network connectivity and simple\nnetwork motifs allow for a complex description of seismic zones, while at the\nsame time reinforcing the current understanding of seismicity as a critical\nphenomenon. The reported distributions on node connectivity, three-, and\nfour-event motifs are consistent with power-law, i.e., scale-free,\ndistributions over large intervals and are robust across earthquake networks\nobtained from different discretizations of the seismic zones of interest. In\nour analysis of the distributions of node connectivity and simple motifs, we\ndistinguish between the global distribution and the power-law part of it with\nthe help of the maximum likelihood estimation (MLE) method and complementary\ncumulative distribution functions (CCDF). The main message is that the\ndistributions reported for the aforementioned seismic zones have large\npower-law components, extending over some orders of magnitude, independent of\ndiscretization. 
All the results were obtained using publicly available\ndatabases and open-source software, as well as a new toolbox available on\nGitHub, specifically designed to automatically analyze earthquake databases.", "category": "physics_comp-ph" }, { "text": "Collective mode mining from molecular dynamics simulations: a\n comparative approach: The evaluation of collective modes is fundamental in the analysis of\nmolecular dynamics simulations. Several methods are available to extract that\ninformation, i.e., normal mode analysis, principal component and spectral\nanalysis of trajectories, basically differing by the quantity considered as the\nnodal one (frequency, amplitude, or pattern of displacement) and leading to the\ndefinition of different kinds of collective excitations and physical spectral\nobservables. Different views converge in the harmonic regime and/or for\nhomo-atomic systems. However, for anharmonic and out-of-equilibrium dynamics,\ndifferent quantities bring different information and only their comparison can\ngive a complete view of the system behavior. To allow such a comparative\nanalysis, we review and compare the different approaches, applying them in\ndifferent combinations to two examples of physical relevance: graphene and\nfullerene C60.", "category": "physics_comp-ph" }, { "text": "A monolithic ALE Newton-Krylov solver with Multigrid-Richardson-Schwarz\n preconditioning for incompressible Fluid Structure Interaction: In this paper we study a monolithic Newton-Krylov solver with exact Jacobian\nfor the solution of incompressible FSI problems. A main focus of this work is\non the use of geometric multigrid preconditioners with modified Richardson\nsmoothers preconditioned by an additive Schwarz algorithm. The definition of\nthe subdomains in the Schwarz smoother is driven by the natural splitting\nbetween fluid and solid. 
The monolithic approach guarantees the automatic\nsatisfaction of the stress balance and the kinematic conditions across the\nfluid-solid interface. The enforcement of the incompressibility conditions both\nfor the fluid and for the solid parts is taken care of by using inf-sup stable\nfinite element pairs without stabilization terms. A suitable Arbitrary\nLagrangian Eulerian (ALE) operator is chosen in order to avoid mesh\nentanglement while solving for large displacements of the moving fluid domain.\nNumerical results of two- and three-dimensional benchmark tests with Newtonian\nfluids and nonlinear hyperelastic solids show a robust performance of our fully\nincompressible solver, especially for the more challenging\ndirect-to-steady-state problems.", "category": "physics_comp-ph" }, { "text": "Multigrid Renormalization: We combine the multigrid (MG) method with state-of-the-art concepts from the\nvariational formulation of the numerical renormalization group. The resulting\nMG renormalization (MGR) method is a natural generalization of the MG method\nfor solving partial differential equations. When the solution on a grid of $N$\npoints is sought, our MGR method has a computational cost scaling as\n$\\mathcal{O}(\\log(N))$, as opposed to $\\mathcal{O}(N)$ for the best standard MG\nmethod. Therefore MGR can exponentially speed up standard MG computations. To\nillustrate our method, we develop a novel algorithm for the ground state\ncomputation of the nonlinear Schr\\\"{o}dinger equation. Our algorithm acts\nvariationally on tensor products and updates the tensors one after another by\nsolving a local nonlinear optimization problem. We compare several different\nmethods for the nonlinear tensor update and find that the Newton method is the\nmost efficient as well as precise. 
The combination of MGR with our nonlinear\nground state algorithm produces accurate results for the nonlinear\nSchr\\\"{o}dinger equation on $N = 10^{18}$ grid points in three spatial\ndimensions.", "category": "physics_comp-ph" }, { "text": "Attenuating the fermion sign problem in path integral Monte Carlo\n simulations using the Bogoliubov inequality and thermodynamic integration: Accurate thermodynamic simulations of correlated fermions using path integral\nMonte Carlo (PIMC) methods are of paramount importance for many applications\nsuch as the description of ultracold atoms, electrons in quantum dots, and\nwarm-dense matter. The main obstacle is the fermion sign problem (FSP), which\nleads to an exponential increase in computation time both with increasing\nsystem size and with decreasing temperature. Very recently, Hirshberg et al.\n[J. Chem. Phys. 152, 171102 (2020)] have proposed to alleviate the FSP based on\nthe Bogoliubov inequality. In the present work, we extend this approach by\nadding a parameter that controls the perturbation, allowing for an\nextrapolation to the exact result. In this way, we can also use thermodynamic\nintegration to obtain an improved estimate of the fermionic energy. As a test\nsystem, we choose electrons in 2D and 3D quantum dots and find in some cases a\nspeed-up exceeding 10^6, as compared to standard PIMC, while retaining a\nrelative accuracy of $\\sim0.1\\%$. Our approach is quite general and can readily\nbe adapted to other simulation methods.", "category": "physics_comp-ph" }, { "text": "Numerical study of extreme mechanical force exerted by a turbulent flow\n on a bluff body by direct and rare-event sampling techniques: This study investigates, by means of numerical simulations, extreme\nmechanical force exerted by a turbulent flow impinging on a bluff body, and\nexamines the relevance of two distinct rare-event algorithms to efficiently\nsample these events. 
The drag experienced by a square obstacle placed in a\nturbulent channel flow (in two dimensions) is taken as a representative case\nstudy. Direct sampling shows that extreme fluctuations are closely related to\nthe presence of a strong vortex blocked in the near wake of the obstacle. This\nvortex is responsible for a significant pressure drop between the forebody and\nthe base of the obstacle, thus yielding a very high value of the drag. Two\nalgorithms are then considered to speed up the sampling of such flow scenarios,\nnamely the AMS and the GKTL algorithms. The general idea behind these\nalgorithms is to replace a long simulation by a set of much shorter ones,\nrunning in parallel, with dynamics that are replicated or pruned, according to\nsome specific rules designed to sample large-amplitude events more frequently.\nThese algorithms have been shown to be relevant for a wide range of problems in\nstatistical physics, computer science, and biochemistry. The present study is the\nfirst application to a fluid-structure interaction problem. Practical evidence\nis given that the fast sweeping time of turbulent fluid structures past the\nobstacle has a strong influence on the efficiency of the rare-event algorithm.\nWhile the AMS algorithm does not yield significant run-time savings as compared\nto direct sampling, the GKTL algorithm appears to be effective at sampling\nextreme fluctuations of the time-averaged drag very efficiently and estimating\nrelated statistics such as return times.
Our computations capture the two-scale character of the total\nfield and reveal how each edge of the sheet acts as a source of an SPP that may\ndominate the diffracted field. We use the finite element method to numerically\nimplement a variational formulation for a weak discontinuity of the tangential\nmagnetic field across a hypersurface. An adaptive, local mesh refinement\nstrategy based on a posteriori error estimators is applied to resolve the\npronounced two-scale character of wave propagation and radiation over the\nmetamaterial sheet. We demonstrate by numerical examples how a singular\ngeometry, e.g., sheets with sharp edges, and sharp spatial changes in the\nassociated surface conductivity may significantly influence surface plasmons in\nnanophotonics.", "category": "physics_comp-ph" }, { "text": "Time-dependent density functional theory for a unified description of\n ultrafast dynamics: pulsed light, electrons, and atoms in crystalline solids: We have developed a novel multiscale computational scheme to describe coupled\ndynamics of the light electromagnetic field with electrons and atoms in crystalline\nsolids, where first-principles molecular dynamics based on time-dependent\ndensity functional theory is used to describe the microscopic dynamics. The\nmethod is applicable to a wide range of phenomena in nonlinear and ultrafast optics. To\nshow the usefulness of the method, we apply it to a pump-probe measurement of\na coherent phonon in diamond, where a Raman amplification takes place during the\npropagation of the probe pulse.", "category": "physics_comp-ph" }, { "text": "Using the Energy probability distribution zeros to obtain the critical\n properties of the two-dimensional anisotropic Heisenberg model: In this paper we present a Monte Carlo study of the critical behavior of the\neasy-axis anisotropic Heisenberg spin model in two dimensions. 
Based on the\npartial knowledge of the zeros of the energy probability distribution, we\ndetermine with good precision the phase diagram of the model, obtaining the\ncritical temperature and exponents for several values of the anisotropy. Our\nresults indicate that the model is in the Ising universality class for any\nanisotropy.", "category": "physics_comp-ph" }, { "text": "Mesoscopic simulations at the physics-chemistry-biology interface: We discuss the Lattice Boltzmann-Particle Dynamics (LBPD) multiscale paradigm\nfor the simulation of complex states of flowing matter at the interface between\nPhysics, Chemistry and Biology. In particular, we describe current large-scale\nLBPD simulations of biopolymer translocation across cellular membranes,\nmolecular transport in ion channels and amyloid aggregation in cells. We also\nprovide prospects for future LBPD explorations in the direction of cellular\norganization, the direct simulation of full biological organelles, all the way\nup to physiological scales of potential relevance to future\nprecision-medicine applications, such as the accurate description of\nhomeostatic processes. It is argued that, with the advent of Exascale\ncomputing, the mesoscale physics approach advocated in this paper may come of\nage in the next decade and open up new exciting perspectives for physics-based\ncomputational medicine.", "category": "physics_comp-ph" }, { "text": "Electrical Double Layer Capacitance of Curved Graphite Electrodes: To improve the understanding of the relation between electrode curvature and\nenergy storage mechanisms, a systematic investigation of the correlation\nbetween convex and concave electrode surfaces and the differential capacitance\nof an electrochemical double layer capacitor using molecular dynamics\nsimulations is presented. Each electrode consists of three layers of curved\ngraphene sheets with a convex and concave surface to which the constant\npotential method was applied. 
The differential capacitance shows a fluctuating\nbehavior with respect to the curvature radius of the convex and concave areas\nof the electrode. The reasons identified for this are differences in the\ngeometric arrangement and solvation of the adsorbed ions as well as a steric\nhindrance prohibiting further charge accumulation. Because the total\ndifferential capacitance is calculated as a weighted average of contributions\nfrom concave and convex surfaces, the influence of individual curvatures on the\ntotal capacitance is significantly reduced for the total electrode surface.", "category": "physics_comp-ph" }, { "text": "The Finite Difference Time Domain (FDTD) Method to Determine Energies\n and Wave Functions of Two-Electron Quantum Dot: The finite difference time domain (FDTD) method has been successfully applied\nto obtain energies and wave functions for two electrons in a quantum dot\nmodeled by a three-dimensional harmonic potential. The FDTD method uses the\ntime-dependent Schr\\\"odinger equation (TDSE) in imaginary time. The TDSE is\nnumerically solved with an initial random wave function, and after enough\nsimulation time, the wave function converges to the ground state wave function.\nThe excited states are determined by using the same procedure for the ground\nstate with additional constraints that the wave function must be orthogonal\nto all lower-energy wave functions. The numerical results for energies and\nwave functions for different parameters of confinement potentials are given and\ncompared with published results using other numerical methods. It is shown that\nthe FDTD method gives accurate energies and wave functions.
Our benchmark problem is a\nsimplified diaper model as an exemplary liquid absorber. Our model has up to\nthree materials with vastly different properties, which in the reference\nconfiguration are arranged in parallel layers. Here, swelling is neglected and\nthe geometry of a swollen diaper is assumed and treated as a porous medium of\nhigh porosity. The imbibition process is then simulated by solving an\nunsaturated flow problem based on Richards' equation. Our aim is to maximize\nthe amount of absorbed liquid by redistributing the materials. To this end, a\ndensity-based multi-material topology optimization (based on the well-known\nSIMP model) is performed. A sensitivity analysis for the nonlinear transient\nproblem is provided, which enables the application of first-order optimization\nsolvers. We perform two- and three-material optimization and discuss several\nvariants of the problem setting. We present designs with up to 45% more\nabsorbed liquid compared to the reference configuration.", "category": "physics_comp-ph" }, { "text": "Formation energy of intrinsic defects in silicon from the\n Galitskii-Migdal formula: This work is devoted to formation energy calculations of intrinsic defects\nin silicon based on the GW method and the Galitskii-Migdal formula. Two\nmethods for calculating the electronic response function are applied. The first\none uses direct integration over frequency to determine the response function.\nThe diagonal form of the spectral function is the only assumption within the\nRPA framework, but the supercell calculations are very time-consuming.\nTherefore, we propose a method in which the response function is calculated\nin the plasmon pole approximation, and the GW contribution to the\nexchange-correlation energy is taken with a certain mixing constant. The value\nof the constant is found from the correspondence with experimental data. 
This\nmakes it possible to obtain an accuracy comparable to the first method at\nsignificantly lower computational costs. The described method is used to\ncalculate the formation energy of the neutral self-interstitial, vacancy and\ntwo divacancy structures in supercells of 214-217 silicon atoms.", "category": "physics_comp-ph" }, { "text": "On the Impact of Fluid Structure Interaction in Blood Flow Simulations:\n Stenotic Coronary Artery Benchmark: We study the impact of using fluid-structure interactions (FSI) to simulate\nblood flow in a large stenosed artery. We compare typical flow configurations\nusing Navier-Stokes in a rigid geometry setting to a fully coupled FSI model.\nThe relevance of vascular elasticity is investigated with respect to several\nquestions of clinical importance. Namely, we study the effect of using FSI on\nthe wall shear stress distribution, on the Fractional Flow Reserve and on the\ndamping effect of a stenosis on the pressure amplitude during the pulsatory\ncycle. The coupled problem is described in a monolithic variational formulation\nbased on Arbitrary Lagrangian Eulerian (ALE) coordinates. For comparison, we\nperform pure Navier-Stokes simulations on a prestressed geometry to give a good\nmatching of both configurations. A series of numerical simulations covering\nimportant hemodynamical factors is presented and discussed.", "category": "physics_comp-ph" }, { "text": "An entropy based thermalization scheme for hybrid simulations of Coulomb\n collisions: We formulate and test a hybrid fluid-Monte Carlo scheme for the treatment of\nelastic collisions in gases and plasmas. While our primary focus and\ndemonstrations of applicability are for moderately collisional plasmas, as\ndescribed by the Landau-Fokker-Planck equation, the method is expected to be\napplicable also to collision processes described by the Boltzmann equation.\nThis scheme is similar to the previously discussed velocity-based scheme [R.\nCaflisch et al.,
Multiscale Modeling & Simulation 7, 865 (2008)] and the\nscattering-angle-based scheme [A. M. Dimits et al., Bull. APS 55, no. 15 (2010,\nAbstract: XP9.00006)], but with a firmer theoretical basis and without the\ninherent limitation to the Landau-Fokker-Planck case. It gives a significant\nperformance improvement (e.g., error for a given computational effort) over the\nvelocity-based scheme. These features are achieved by assigning passive scalars\nto each simulated particle and tracking their evolution through collisions. The\nmethod permits a detailed error analysis that is confirmed by numerical\nresults. The tests performed are for the evolution from an anisotropic Maxwellian\nand a bump-on-tail distribution.", "category": "physics_comp-ph" }, { "text": "Solving Newton's Equations of Motion with Large Timesteps using\n Recurrent Neural Networks based Operators: Classical molecular dynamics simulations are based on solving Newton's\nequations of motion. Using a small timestep, numerical integrators such as\nVerlet generate trajectories of particles as solutions to Newton's equations.\nWe introduce operators derived using recurrent neural networks that accurately\nsolve Newton's equations utilizing sequences of past trajectory data, and\nproduce energy-conserving dynamics of particles using timesteps up to 4000\ntimes larger than the Verlet timestep. We demonstrate significant\nspeedup in many example problems including 3D systems of up to 16 particles.", "category": "physics_comp-ph" }, { "text": "Performance of the BGSDC integrator for computing fast ion trajectories\n in nuclear fusion reactors: Modelling neutral beam injection (NBI) in fusion reactors requires computing\nthe trajectories of large ensembles of particles. Slowing down times of up to\none second combined with nanosecond time steps make these simulations\ncomputationally very costly. 
This paper explores the performance of BGSDC, a\nnew numerical time stepping method, for tracking ions generated by NBI in the\nDIII-D and JET reactors. BGSDC is a high-order generalisation of the Boris\nmethod, combining it with spectral deferred corrections and the Generalized\nMinimal Residual method (GMRES). Without collision modelling, where numerical\ndrift can be quantified accurately, we find that BGSDC can deliver higher\nquality particle distributions than the standard Boris integrator at comparable\ncost, or comparable distributions at lower cost. With collision models,\nquantifying accuracy is difficult, but we show that BGSDC produces stable\ndistributions at larger time steps than Boris.", "category": "physics_comp-ph" }, { "text": "Exchange interactions of CaMnO3 in the bulk and at the surface: In this work, we present electronic and magnetic properties of CaMnO3 (CMO)\nas obtained from ab initio calculations. We identify the preferred magnetic\norder by means of density functional theory plus Hubbard U calculations and\nextract the effective exchange parameters (Jij's) using the magnetic force\ntheorem. We find that the effects of geometrical relaxation at the surface as\nwell as the change of crystal field are very strong and are able to influence\nthe lower energy magnetic configuration. In particular, our analysis reveals\nthat the exchange interaction between the Mn atoms belonging to the surface and\nthe subsurface layers is very sensitive to the structural changes. An earlier\nstudy [A. Filippetti and W.E. Pickett, Phys. Rev. Lett. 83, 4184 (1999)]\nsuggested that this coupling is ferromagnetic and gives rise to the spin flip\nprocess on the surface of CMO. In our work we confirm their finding for an\nunrelaxed geometry, but once the structural relaxations are taken into account,\nthis exchange coupling changes its sign. 
Thus, we suggest that the surface of\nCMO should have the same G-type antiferromagnetic order as in the bulk.\nFinally, we show that the suggested SF can be induced in the system by\nintroducing an excess of electrons.", "category": "physics_comp-ph" }, { "text": "Multi-core computation of transfer matrices for strip lattices in the\n Potts model: The transfer-matrix technique is a convenient way for studying strip lattices\nin the Potts model since the computational costs depend just on the periodic\npart of the lattice and not on the whole. However, even when the cost is\nreduced, the transfer-matrix technique is still an NP-hard problem since the\ntime T(|V|, |E|) needed to compute the matrix grows exponentially as a\nfunction of the graph width. In this work, we present a parallel\ntransfer-matrix implementation that scales performance under multi-core\narchitectures. The construction of the matrix is based on several repetitions\nof the deletion-contraction technique, allowing parallelism suitable to\nmulti-core machines. Our experimental results show that the multi-core\nimplementation achieves speedups of 3.7X with p = 4 processors and 5.7X with p\n= 8. The efficiency of the implementation lies between 60% and 95%, achieving\nthe best balance of speedup and efficiency at p = 4 processors for actual\nmulti-core architectures. 
The algorithm also takes advantage of the lattice\nsymmetry, making the transfer-matrix computation run up to 2X faster than\nits non-symmetric counterpart and use up to a quarter of the original space.", "category": "physics_comp-ph" }, { "text": "Simulating diffusion properties of solid-state electrolytes via a neural\n network potential: Performance and training scheme: The recently published DeePMD model\n(https://github.com/deepmodeling/deepmd-kit), based on a deep neural network\narchitecture, brings the hope of solving the time-scale issue which often\nprevents the application of first-principles molecular dynamics to physical\nsystems. With this contribution we assess the performance of the DeePMD\npotential on a real-life application and model diffusion of ions in solid-state\nelectrolytes. We consider as test cases the well-known Li10GeP2S12,\nLi7La3Zr2O12 and Na3Zr2Si2PO12. We develop and test a training protocol\nsuitable for the computation of diffusion coefficients, which is one of the key\nproperties to be optimized for battery applications, and we find good agreement\nwith previous computations. Our results show that the DeePMD model may be a\nsuccessful component of a framework to identify novel solid-state electrolytes.", "category": "physics_comp-ph" }, { "text": "Generic framework for data-race-free many-particle simulations on shared\n memory hardware: Recently, there has been much progress in the formulation and implementation\nof methods for generic many-particle simulations. These models, however,\ntypically either do not utilize shared memory hardware or do not guarantee\ndata-race freedom for arbitrary particle dynamics. Here, we present both an\nabstract formal model of particle dynamics and a corresponding domain-specific\nprogramming language that can guarantee data-race freedom. 
The design of both\nthe model and the language is heavily inspired by the Rust programming\nlanguage, which enables data-race-free general-purpose parallel computation. We\nalso present a method of preventing deadlocks within our model by a suitable\ngraph representation of a particle simulation. Finally, we demonstrate the\npracticability of our model on a number of common numerical primitives from\nmolecular dynamics.", "category": "physics_comp-ph" }, { "text": "Solving Schr\u00f6dinger Equation Using Tensor Neural Network: In this paper, we introduce a prototype machine learning method to solve\nthe many-body Schr\\\"{o}dinger equation by the tensor neural network. Based on\nthe tensor product structure, we can do the direct numerical integration by\nusing fixed quadrature points for the functions constructed by the tensor\nneural network within tolerable computational complexity. The corresponding\nmachine learning method is built for solving the many-body Schr\\\"{o}dinger\nequation. Some numerical examples are provided to validate the accuracy and\nefficiency of the proposed algorithms.", "category": "physics_comp-ph" }, { "text": "Simulating interfacial flows: a farewell to planes: Over the past decades, the volume-of-fluid (VOF) method has been the method\nof choice for simulating atomization processes, owing to its unique ability to\ndiscretely conserve mass. Current state-of-the-art VOF methods, however, rely\non the piecewise-linear interface calculation (PLIC) to represent the interface\nused when calculating advection fluxes. This renders the estimated curvature of\nthe transported interface zeroth-order accurate at best, adversely impacting\nthe simulation of surface-tension-driven flows.\n In the past few years, there have been several attempts at using\npiecewise-parabolic interface approximations instead of piecewise-linear ones\nfor computing advection fluxes, albeit all limited to two-dimensional cases or\nnot inherently mass conservative. 
In this contribution, we present our most\nrecent work on three-dimensional piecewise-parabolic interface reconstruction\nand apply it in the context of the VOF method. As a result of increasing the\norder of the interface representation, the reconstruction of the interface and\nthe estimation of its curvature now become a single step instead of two\nseparate ones. The performance of this new approach is assessed both in terms\nof accuracy and stability and compared to the classical PLIC-VOF approach on a\nrange of canonical test-cases and cases of surface-tension-driven\ninstabilities.", "category": "physics_comp-ph" }, { "text": "Non-local Potts model on random lattice and chromatic number of a plane: Statistical models are widely used for the investigation of complex system's\nbehavior. Most of the models considered in the literature are formulated on\nregular lattices with nearest-neighbor interactions. The models with non-local\ninteraction kernels have been less studied. In this article, we investigate an\nexample of such a model - the non-local q-color Potts model on a random d=2\nlattice. Only the same color spins at a unit distance (within some small margin\n$\\delta$) interact. We study the vacuum states of this model and present the\nresults of numerical simulations and discuss qualitative features of the\ncorresponding patterns. Conjectured relation with the chromatic number of a\nplane problem is discussed.", "category": "physics_comp-ph" }, { "text": "Efficiency of navigation in indexed networks: We investigate efficient methods for packets to navigate in complex networks.\nThe packets are assumed to have memory, but no previous knowledge of the graph.\nWe assume the graph to be indexed, i.e. every vertex is associated with a\nnumber (accessible to the packets) between one and the size of the graph. 
We\ntest different schemes to assign indices and utilize them in packet navigation.\nFour different network models with very different topological characteristics\nare used for testing the schemes. We find that one scheme outperforms the\nothers and has an efficiency close to the theoretical optimum. We discuss the\nuse of indexed-graph navigation in peer-to-peer networking and other\ndistributed information systems.", "category": "physics_comp-ph" }, { "text": "Bounding free energy difference with flow matching: This paper introduces a method for computing the Helmholtz free energy using\nthe flow matching technique. Unlike previous work that utilized flow-based\nmodels for variational free energy calculations, this method provides bounds\nfor free energy estimation based on targeted free energy perturbation, by\nperforming calculations on samples from both ends of the mapping. We\ndemonstrate applications of the present method by estimating the free energy of\nthe classical Coulomb gas in a harmonic trap.", "category": "physics_comp-ph" }, { "text": "Adaptive local basis set for Kohn-Sham density functional theory in a\n discontinuous Galerkin framework II: Force, vibration, and molecular dynamics\n calculations: Recently, we have proposed the adaptive local basis set for electronic\nstructure calculations based on Kohn-Sham density functional theory in a\npseudopotential framework. The adaptive local basis set is efficient and\nsystematically improvable for total energy calculations. In this paper, we\npresent the calculation of atomic forces, which can be used for a range of\napplications such as geometry optimization and molecular dynamics simulation.\nWe demonstrate that, under mild assumptions, the computation of atomic forces\ncan scale nearly linearly with the number of atoms in the system using the\nadaptive local basis set. 
We quantify the accuracy of the Hellmann-Feynman\nforces for a range of physical systems, benchmarked against converged planewave\ncalculations, and find that the adaptive local basis set is efficient for both\nforce and energy calculations, requiring at most a few tens of basis functions\nper atom to attain accuracy required in practice. Since the adaptive local\nbasis set has implicit dependence on atomic positions, Pulay forces are in\ngeneral nonzero. However, we find that the Pulay force is numerically small and\nsystematically decreasing with increasing basis completeness, so that the\nHellmann-Feynman force is sufficient for basis sizes of a few tens of basis\nfunctions per atom. We verify the accuracy of the computed forces in static\ncalculations of quasi-1D and 3D disordered Si systems, vibration calculation of\na quasi-1D Si system, and molecular dynamics calculations of H$_2$ and liquid\nAl-Si alloy systems, where we find excellent agreement with independent\nbenchmark results in literature.", "category": "physics_comp-ph" }, { "text": "Smooth-Wall Boundary Conditions for Energy-Dissipation Turbulence Models: It is shown that the smooth-wall boundary conditions specified for commonly\nused dissipation-based turbulence models are mathematically incorrect. It is\ndemonstrated that when these traditional wall boundary conditions are used, the\nresulting formulations allow either an infinite number of solutions or no\nsolution. Furthermore, these solutions do not enforce energy conservation and\nthey do not properly enforce the no-slip condition at a smooth surface. This is\ntrue for all dissipation-based turbulence models, including the k-{\\epsilon},\nk-{\\omega}, and k-{\\zeta} models. Physically correct wall boundary conditions\nmust force both k and its gradient to zero at a smooth wall. Enforcing these\ntwo boundary conditions on k is sufficient to determine a unique solution to\nthe coupled system of differential transport equations. 
There is no need to\nimpose any wall boundary condition on {\\epsilon}, {\\omega}, or {\\zeta} at a\nsmooth surface and it is incorrect to do so. The behavior of {\\epsilon},\n{\\omega}, or {\\zeta} approaching a smooth surface is that required to satisfy\nthe differential equations and force both k and its gradient to zero at the\nwall.", "category": "physics_comp-ph" }, { "text": "Evaluation of Constant Potential Method in Simulating Electric\n Double-Layer Capacitors: A major challenge in the molecular simulation of electric double layer\ncapacitors (EDLCs) is the choice of an appropriate model for the electrode.\nTypically, in such simulations the electrode surface is modeled using a uniform\nfixed charge on each of the electrode atoms, which ignores the electrode\nresponse to local charge fluctuations induced by charge fluctuations in the\nelectrolyte. In this work, we evaluate and compare this Fixed Charge Method\n(FCM) with the more realistic Constant Potential Method (CPM), [Reed, et al.,\nJ. Chem. Phys., 126, 084704 (2007)], in which the electrode charges fluctuate\nin order to maintain constant electric potential in each electrode. For this\ncomparison, we utilize a simplified LiClO$_4$-acetonitrile/graphite EDLC. At\nlow potential difference ($\\Delta\\Psi\\le 2V$), the two methods yield\nessentially identical results for ion and solvent density profiles; however,\nsignificant differences appear at higher $\\Delta\\Psi$. 
At $\\Delta\\Psi\\ge 4V$,\nthe CPM ion density profiles show significant enhancement (over FCM) of\n\"partially electrode solvated\" Li$^+$ ions very close to the electrode surface.\nThe ability of the CPM electrode to respond to local charge fluctuations in the\nelectrolyte is seen to significantly lower the energy (and barrier) for the\napproach of Li$^+$ ions to the electrode surface.", "category": "physics_comp-ph" }, { "text": "Linear Scaling Density Matrix Real Time TDDFT: Propagator Unitarity \\&\n Matrix Truncation: Real time, density matrix based, time dependent density functional theory\nproceeds through the propagation of the density matrix, as opposed to the\nKohn-Sham orbitals. It is possible to reduce the computational workload by\nimposing spatial cut-off radii on sparse matrices, and the propagation of the\ndensity matrix in this manner provides direct access to the optical response of\nvery large systems, which would be otherwise impractical to obtain using the\nstandard formulations of TDDFT. Following a brief summary of our\nimplementation, along with several benchmark tests illustrating the validity of\nthe method, we present an exploration of the factors affecting the accuracy of\nthe approach. In particular we investigate the effect of basis set size and\nmatrix truncation, the key approximation used in achieving linear scaling, on\nthe propagator unitarity and optical spectra. Finally we illustrate that, with\nan appropriate density matrix truncation range applied, the computational load\nscales linearly with the system size and discuss the limitations of the\napproach.", "category": "physics_comp-ph" }, { "text": "Deep learning density functionals for gradient descent optimization: Machine-learned regression models represent a promising tool to implement\naccurate and computationally affordable energy-density functionals to solve\nquantum many-body problems via density functional theory. 
However, while they\ncan easily be trained to accurately map ground-state density profiles to the\ncorresponding energies, their functional derivatives often turn out to be too\nnoisy, leading to instabilities in self-consistent iterations and in\ngradient-based searches of the ground-state density profile. We investigate how\nthese instabilities occur when standard deep neural networks are adopted as\nregression models, and we show how to avoid them using an ad-hoc convolutional\narchitecture featuring an inter-channel averaging layer. The testbed we\nconsider is a realistic model for noninteracting atoms in optical speckle\ndisorder. With the inter-channel average, accurate and systematically\nimprovable ground-state energies and density profiles are obtained via\ngradient-descent optimization, without instabilities or violations of the\nvariational principle.", "category": "physics_comp-ph" }, { "text": "On the drag and lift coefficients of ellipsoidal particles under\n rarefied flow conditions: The capability to simulate a two-way coupled interaction between a rarefied\ngas and an arbitrarily shaped colloidal particle is important for many practical\napplications, such as aerospace engineering, lung drug delivery and\nsemiconductor manufacturing. By means of numerical simulations based on the\nDirect Simulation Monte Carlo (DSMC) method, we investigate the influence of\nthe orientation of the particle and rarefaction on the drag and lift\ncoefficients, in the case of prolate and oblate ellipsoidal particles immersed\nin a uniform ambient flow. This is done by modelling the solid particles using\na cut-cell algorithm embedded within our DSMC solver. In this approach, the\nsurface of the particle is described by its analytical expression and the\nmicroscopic gas-solid interactions are computed exactly using a ray-tracing\ntechnique. 
The measured drag and lift coefficients are used to extend the\ncorrelations available in the continuum regime to the rarefied regime, focusing\non the transitional and free-molecular regimes. The functional forms for the\ncorrelations for the ellipsoidal particles are chosen as a generalisation from\nthe spherical case. We show that the fits over the data from numerical\nsimulations can be extended to regimes outside the simulated range of $Kn$ by\ntesting the obtained predictive model on values of $Kn$ that were not included\nin the fitting process, allowing us to achieve a higher precision compared\nwith existing predictive models from the literature. Finally, we underline the\nimportance of this work in providing new correlations for non-spherical\nparticles that can be used for point-particle Euler-Lagrangian simulations to\naddress the problem of contamination from finite-size particles in high-tech\nmechanical systems.", "category": "physics_comp-ph" }, { "text": "Validation of the GreenX library time-frequency component for efficient\n GW and RPA calculations: Electronic structure calculations based on many-body perturbation theory\n(e.g. GW or the random-phase approximation (RPA)) require function evaluations\nin the complex time and frequency domain, for example inhomogeneous Fourier\ntransforms or analytic continuation from the imaginary axis to the real axis.\nFor inhomogeneous Fourier transforms, the time-frequency component of the\nGreenX library provides time-frequency grids that can be utilized in\nlow-scaling RPA and GW implementations. In addition, the adoption of the\ncompact frequency grids provided by our library also reduces the computational\noverhead in RPA implementations with conventional scaling. In this work, we\npresent low-scaling GW and conventional RPA benchmark calculations using the\nGreenX grids with different codes (FHI-aims, CP2K and ABINIT) for molecules,\ntwo-dimensional materials and solids. 
Very small integration errors are\nobserved when using 30 time-frequency points for our test cases, namely\n$<10^{-8}$ eV/electron for the RPA correlation energies, and 10 meV for the GW\nquasiparticle energies.", "category": "physics_comp-ph" }, { "text": "Non-Intrusive Reduced-Order Modeling Using Uncertainty-Aware Deep Neural\n Networks and Proper Orthogonal Decomposition: Application to Flood Modeling: Deep Learning research is advancing at a fantastic rate, and there is much to\ngain from transferring this knowledge to older fields like Computational Fluid\nDynamics in practical engineering contexts. This work compares state-of-the-art\nmethods that address uncertainty quantification in Deep Neural Networks,\npushing forward the reduced-order modeling approach of Proper Orthogonal\nDecomposition-Neural Networks (POD-NN) with Deep Ensembles and Variational\nInference-based Bayesian Neural Networks on two-dimensional problems in space.\nThese are first tested on benchmark problems, and then applied to a real-life\napplication: flooding predictions in the Mille \\^Iles river in the Montreal,\nQuebec, Canada metropolitan area. Our setup involves a set of input parameters,\nwith a potentially noisy distribution, and accumulates the simulation data\nresulting from these parameters. The goal is to build a non-intrusive surrogate\nmodel that is able to know when it does not know, which is still an open\nresearch area in Neural Networks (and in AI in general). With the help of this\nmodel, probabilistic flooding maps are generated, aware of the model\nuncertainty. These insights on the unknown are also utilized for an uncertainty\npropagation task, allowing for flooded area predictions that are broader and\nsafer than those made with a regular uncertainty-uninformed surrogate model.\nOur study of the time-dependent and highly nonlinear case of a dam break is\nalso presented. 
Both the ensembles and the Bayesian approach lead to reliable\nresults for multiple smooth physical solutions, providing the correct warning\nwhen going out-of-distribution. However, the former, referred to as POD-EnsNN,\nproved much easier to implement and showed greater flexibility than the latter\nin the case of discontinuities, where standard algorithms may oscillate or fail\nto converge.", "category": "physics_comp-ph" }, { "text": "Positron scattering and annihilation from the hydrogen molecule at zero\n energy: The confined variational method is used to generate a basis of correlated\ngaussians to describe the interaction region wave function for positron\nscattering from the H$_2$ molecule. The scattering length was $\\approx -2.7$\n$a_0$ while the zero energy $Z_{\\rm eff}$ of 15.7 is compatible with\nexperimental values. The variation of the scattering length and $Z_{\\rm eff}$\nwith inter-nuclear distance was surprisingly rapid due to virtual state\nformation at $R \\approx 3.4$ $a_0$.", "category": "physics_comp-ph" }, { "text": "Fundamental parameters of QCD: The theory of strong interactions, QCD, is described in terms of a few\nparameters, namely the strong coupling constant alpha_s and the quark masses.\nWe show how these parameters can be determined reliably using computer\nsimulations of QCD on a space-time lattice, and by employing a finite-size\nscaling method, which allows to trace the energy dependence of alpha_s and\nquark masses over several orders of magnitude. We also discuss methods designed\nto reduce the effects of finite lattice spacing and address the issue of\ncomputer resources required.", "category": "physics_comp-ph" }, { "text": "Targeting high symmetry in structure predictions by biasing the\n potential energy surface: Ground state structures found in nature are in many cases of high symmetry.\nBut structure prediction methods typically render only a small fraction of high\nsymmetry structures. 
Especially for large crystalline unit cells there are many\nlow energy defect structures. For this reason, methods have been developed where\neither preferentially high symmetry structures are used as input or where the\nwhole structural search is done within a certain symmetry group. In both cases\nit is necessary to specify the correct symmetry group beforehand. However, it\ncannot in general be predicted which symmetry group leads\nto the ground state. For this reason, we introduce a potential energy biasing\nscheme that favors symmetry and where it is not necessary to specify any\nsymmetry group beforehand. On this biased potential energy surface, high\nsymmetry structures will be found much faster than on an unbiased surface and\nindependently of the symmetry group to which they belong. For our two test\ncases, a $C_{60}$ fullerene and bulk silicon carbide, we get speedups of 25\nand 63. In our data we also find a clear correlation between the similarity of\nthe atomic environments and the energy. In low energy structures all the atoms\nof a species tend to have similar environments.", "category": "physics_comp-ph" }, { "text": "A positivity-preserving Eulerian two-phase approach with thermal\n relaxation for compressible flows with a liquid and gases: A positivity-preserving fractional algorithm is presented for solving the\nfour-equation homogeneous relaxation model (HRM) with an arbitrary number of\nideal gases and a liquid governed by the stiffened gas equation of state. The\nfractional algorithm consists of a time step of the hyperbolic five-equation\nmodel by Allaire et al. and an algebraic numerical thermal relaxation step at\nan infinite relaxation rate. 
Interpolation and flux limiters are proposed for\nthe use of high-order Cartesian finite difference or finite volume schemes in a\ngeneral form such that the positivity of the partial densities and squared\nsound speed, as well as the boundedness of the volume fractions and mass\nfractions, are preserved with the algorithm. A conservative solution update for\nthe four-equation HRM is also guaranteed by the algorithm which is advantageous\nfor certain applications such as those with phase transition. The accuracy and\nrobustness of the algorithm with a high-order explicit finite difference\nweighted compact nonlinear scheme (WCNS) using the incremental-stencil weighted\nessentially non-oscillatory (WENO) interpolation, are demonstrated with various\nnumerical tests.", "category": "physics_comp-ph" }, { "text": "Woptic: optical conductivity with Wannier functions and adaptive k-mesh\n refinement: We present an algorithm for the adaptive tetrahedral integration over the\nBrillouin zone of crystalline materials, and apply it to compute the optical\nconductivity, dc conductivity, and thermopower. For these quantities, whose\ncontributions are often localized in small portions of the Brillouin zone,\nadaptive integration is especially relevant. Our implementation, the woptic\npackage, is tied into the wien2wannier framework and allows including a\nmany-body self energy, e.g. from dynamical mean-field theory (DMFT). Wannier\nfunctions and dipole matrix elements are computed with the DFT package Wien2k\nand Wannier90. 
For illustration, we show DFT results for fcc-Al and DMFT\nresults for the correlated metal SrVO$_3$.", "category": "physics_comp-ph" }, { "text": "Phase transition in a stochastic prime number generator: We introduce a stochastic algorithm that acts as a prime number generator.\nThe dynamics of such algorithm gives rise to a continuous phase transition\nwhich separates a phase where the algorithm is able to reduce a whole set of\nintegers into primes and a phase where the system reaches a frozen state with\nlow prime density. We present both numerical simulations and an analytical\napproach in terms of an annealed approximation, by means of which the data are\ncollapsed. A critical slowing down phenomenon is also outlined.", "category": "physics_comp-ph" }, { "text": "Harmonic analysis of random number generators: The spectral test of random number generators (R.R. Coveyou and R.D.\nMcPherson, 1967) is generalized. The sequence of random numbers is analyzed\nexplicitly, not just via their n-tupel distributions. We find that the mixed\nmultiplicative generator with power of two modulus does not pass the extended\ntest with an ideal result. Best qualities has a new generator with the\nrecursion formula X(k+1)=a*X(k)+c*int(k/2) mod 2^d. We discuss the choice of\nthe parameters a, c for very large moduli 2^d and present an implementation of\nthe suggested generator with d=256, a=2^128+2^64+2^32+62181, c=(2^160+1)*11463.", "category": "physics_comp-ph" }, { "text": "Machine learning enabled fast evaluation of dynamic aperture for storage\n ring accelerators: For any storage ring-based large-scale scientific facility, one of the most\nimportant performance parameters is the dynamic aperture (DA), which measures\nthe motion stability of charged particles in a global manner. To date,\nlong-term tracking-based simulation is regarded as the most reliable method to\ncalculate DA. 
However, numerical tracking may become a significant issue,\nespecially when lots of candidate designs of a storage ring need to be\nevaluated. In this paper, we present a novel machine learning-based method,\nwhich can reduce the computation cost of DA tracking by approximately one order\nof magnitude, while keeping sufficiently high evaluation accuracy. Moreover, we\ndemonstrate that this method is independent of concrete physical models of a\nstorage ring. This method has the potential to be applied to similar problems\nof identifying irregular motions in other complex dynamical systems.", "category": "physics_comp-ph" }, { "text": "Diffusion NMR in periodic media: efficient computation and spectral\n properties: The Bloch-Torrey equation governs the evolution of the transverse\nmagnetization in diffusion magnetic resonance imaging, where two mechanisms are\nat play: diffusion of spins (Laplacian term) and their precession in a magnetic\nfield gradient (imaginary potential term). In this paper, we study this\nequation in a periodic medium: a unit cell repeated over the nodes of a\nlattice. Although the gradient term of the equation is not invariant by lattice\ntranslations, the equation can be analyzed within a single unit cell by\nreplacing a continuous-time gradient profile by narrow pulses. In this\napproximation, the effects of precession and diffusion are separated and the\nproblem is reduced to the study of a sequence of diffusion equations with\npseudo-periodic boundary conditions. This representation allows for efficient\nnumerical computations as well as new theoretical insights into the formation\nof the signal in periodic media. In particular, we study the eigenmodes and\neigenvalues of the Bloch-Torrey operator. We show how the localization of\neigenmodes is related to branching points in the spectrum and we discuss low-\nand high-gradient asymptotic behaviors. 
The range of validity of the\napproximation is discussed; interestingly, the method turns out to be more\naccurate and efficient at high gradient, being thus an important complementary\ntool to conventional numerical methods that are most accurate at low gradients.", "category": "physics_comp-ph" }, { "text": "Quantifying biomolecular diffusion with a \"spherical cow\" model: The dynamics of biological polymers, including proteins, RNA, and DNA, occur\nin very high-dimensional spaces. Many naturally-occurring polymers can navigate\na vast phase space and rapidly find their lowest free energy (folded) state.\nThus, although the search process is stochastic, it is not completely random.\nInstead, it is best described in terms of diffusion along a downhill free\nenergy landscape. In this context, there have been many efforts to use\nsimplified representations of the energetics, for which the potential energy is\nchosen to be a relatively smooth function with a global minimum that corresponds\nto the folded state. That is, instead of including every type of physical\ninteraction, the broad characteristics of the landscape are encoded in\napproximate energy functions. We describe a particular class of models, called\nstructure-based models, that can be used to explore the diffusive properties of\nbiomolecular folding and conformational rearrangements. These energy functions\nmay be regarded as the \"spherical cow\" for modeling molecular biophysics. We\ndiscuss the physical principles underlying these models and provide an\nentry-level tutorial, which may be adapted for use in curricula for physics and\nnon-physics majors.", "category": "physics_comp-ph" }, { "text": "KiNet: A Deep Neural Network Representation of Chemical Kinetics: Deep learning is a potential approach to automatically develop kinetic models\nfrom experimental data. We propose a deep neural network model, KiNet, to\nrepresent chemical kinetics.
KiNet takes the current composition states and\npredicts the evolution of the states after a fixed time step. The long-period\nevolution of the states and their gradients with respect to model parameters can be\nefficiently obtained by recursively applying the KiNet model multiple times. To\naddress the challenges of the high-dimensional composition space and error\naccumulation in long-period prediction, the architecture of KiNet incorporates\nthe residual network model (ResNet), and the training employs the backpropagation\nthrough time (BPTT) approach to minimize multi-step prediction error. In\naddition, an approach for efficiently computing the gradient of the ignition\ndelay time (IDT) with respect to KiNet model parameters is proposed to train KiNet\nagainst the rich database of IDT from the literature, which could address the\nscarcity of time-resolved species measurements. KiNet is first trained and\ncompared with the simulated species profiles during the auto-ignition of H2/air\nmixtures. The obtained KiNet model can accurately predict the auto-ignition\nprocesses for various initial conditions that cover a wide range of pressures,\ntemperatures, and equivalence ratios. Then, we show that the gradient of IDT with respect to\nKiNet model parameters is parallel to the gradient of the temperature at the\nignition point. This correlation enables efficient computation of the gradient\nof IDT via backpropagation and is demonstrated as a feasible approach for\nfine-tuning KiNet against IDT. These demonstrations shall open up the\npossibility of building data-driven kinetic models autonomously.
Finally, the\ntrained KiNet could potentially be applied to kinetic model reduction and\nchemistry acceleration in turbulent combustion simulations.", "category": "physics_comp-ph" }, { "text": "Tetraquarks as Diquark Antidiquark Bound Systems: In this paper, we study four-body systems consisting of a diquark and an antidiquark,\nand we analyze the diquark-antidiquark pair in the framework of a two-body (pseudo-point)\nproblem. We solve the Lippmann-Schwinger equation numerically for charm\ndiquark-antidiquark systems and find the eigenvalues to calculate the binding\nenergies and masses of heavy tetraquarks with hidden charm. Our results are in\ngood agreement with theoretical and experimental data.", "category": "physics_comp-ph" }, { "text": "A fast synthetic iterative scheme for the stationary phonon Boltzmann\n transport equation: In this paper, a fast synthetic iterative scheme is developed to accelerate\nconvergence for the implicit DOM based on the stationary phonon BTE. The key\ninnovative point of the present scheme is the introduction of the macroscopic\nsynthetic diffusion equation for the temperature, which is obtained from the\nzero- and first-order moment equations of the phonon BTE. The synthetic\ndiffusion equation, which is asymptotically preserving to Fourier's heat\nconduction equation in the diffusive regime, contains a term related to\nFourier's law and a term determined by the second-order moment of the\ndistribution function that reflects the non-Fourier heat transfer. The\nmesoscopic kinetic equation and macroscopic diffusion equations are tightly\ncoupled together, because the diffusion equation provides the temperature for\nthe BTE, while the BTE provides the high-order moment to the diffusion equation\nto describe the non-Fourier heat transfer. This synthetic iterative scheme\nstrengthens the coupling of all phonons in the phase space to facilitate the\nfast convergence from the diffusive to ballistic regimes.
Typical numerical\ntests in one-, two-, and three-dimensional problems demonstrate that our scheme\ncan describe the multiscale heat transfer problems accurately and efficiently.\nFor all test cases convergence is reached within one hundred iteration steps,\nwhich is one to three orders of magnitude faster than the traditional implicit\nDOM in the near-diffusive regime.", "category": "physics_comp-ph" }, { "text": "Multidimensional tests of a finite-volume solver for MHD with a real-gas\n equation of state: This work considers two algorithms of a finite-volume solver for the MHD\nequations with a real-gas equation of state (EOS). Both algorithms use a\nmultistate form of the Harten-Lax-van Leer approximate Riemann solver as formulated\nfor MHD discontinuities. This solver is modified to use the generalized sound\nspeed from the real-gas EOS. Two methods are tested: EOS evaluation at cell\ncenters and at flux interfaces, where the former is more computationally\nefficient. A battery of 1D and 2D tests is employed: convergence of 1D and 2D\nlinearized waves, shock tube Riemann problems, a 2D nonlinear circularly\npolarized Alfv\\'en wave, and a 2D magneto-Rayleigh-Taylor instability test. The\ncell-centered EOS evaluation algorithm produces unresolvable thermodynamic\ninconsistencies in the intermediate states leading to spurious solutions, while\nthe flux-interface EOS evaluation algorithm robustly produces the correct\nsolution. The linearized wave tests show this inconsistency is associated with\nthe magnetosonic waves, and the magneto-Rayleigh-Taylor instability test\ndemonstrates simulation results where the spurious solution leads to an\nunphysical simulation.", "category": "physics_comp-ph" }, { "text": "XrdML, a new way to store (and exchange) X-ray powder diffraction\n measurement data: Recently, PANalytical introduced the XrdML file format as a new data platform\nfor powder diffraction experiments.
We will explain why an industrial standard\n(XML) was chosen and show the XML schema used to precisely describe the\ninstrumental and experimental conditions. This schema is used to validate the\ncontents of the XrdML files. Additionally, the integration of the XrdML files\nwith the MS Windows operating system and the MS Windows Explorer will be\ndemonstrated.", "category": "physics_comp-ph" }, { "text": "Implementation of BVD (boundary variation diminishing) algorithm in\n simulations of compressible multiphase flows: We present in this work a new reconstruction scheme, the so-called\nMUSCL-THINC-BVD scheme, to solve the five-equation model for interfacial two-phase\nflows. This scheme employs the traditional shock-capturing MUSCL\n(Monotone Upstream-centered Schemes for Conservation Law) scheme as well as the\ninterface-sharpening THINC (Tangent of Hyperbola for INterface Capturing)\nscheme as two building blocks of spatial reconstruction using the BVD (boundary\nvariation diminishing) principle that minimizes the variations (jumps) of the\nreconstructed variables at cell boundaries, and thus effectively reduces the\nnumerical dissipation in the solutions. The MUSCL-THINC-BVD scheme is\napplied to all state variables and the volume fraction, which ensures consistency\nbetween the volume fraction and the other physical variables. Benchmark tests\nare carried out to verify the capability of the present method in capturing the\nmaterial interface as a well-defined sharp jump in volume fraction, as well as\nsignificant improvement in solution quality.
The proposed scheme is a simple\nand effective method of practical significance for simulating compressible\ninterfacial multiphase flows.", "category": "physics_comp-ph" }, { "text": "Better, Faster Fermionic Neural Networks: The Fermionic Neural Network (FermiNet) is a recently-developed neural\nnetwork architecture that can be used as a wavefunction Ansatz for\nmany-electron systems, and has already demonstrated high accuracy on small\nsystems. Here we present several improvements to the FermiNet that allow us to\nset new records for speed and accuracy on challenging systems. We find that\nincreasing the size of the network is sufficient to reach chemical accuracy on\natoms as large as argon. Through a combination of implementing FermiNet in JAX\nand simplifying several parts of the network, we are able to reduce the number\nof GPU hours needed to train the FermiNet on large systems by an order of\nmagnitude. This enables us to run the FermiNet on the challenging transition of\nbicyclobutane to butadiene and compare against the PauliNet on the\nautomerization of cyclobutadiene, and we achieve results near the state of the\nart for both.", "category": "physics_comp-ph" }, { "text": "Non-Universality in Semi-Directed Barabasi-Albert Networks: In usual scale-free networks of Barabasi-Albert type, a newly added node\nselects randomly m neighbors from the already existing network nodes,\nproportionally to the number of links these had before. Then the number N(k) of\nnodes with k links each decays as 1/k^gamma where gamma=3 is universal, i.e.\nindependent of m. 
Now we use a limited directedness in the construction of the\nnetwork, as a result of which the exponent gamma decreases from 3 to 2 for\nincreasing m.", "category": "physics_comp-ph" }, { "text": "Accelerating the Convergence of Coupled Cluster Calculations of the\n Homogeneous Electron Gas Using Bayesian Ridge Regression: The homogeneous electron gas is a system which has many applications in\nchemistry and physics. However, its infinite nature makes studies at the\nmany-body level complicated due to long computational run times. Because it is\nsize extensive, coupled cluster theory is capable of studying the homogeneous\nelectron gas, but it still poses a large computational challenge as the time\nneeded for precise calculations increases in a polynomial manner with the\nnumber of particles and single-particle states. Consequently, achieving\nconvergence in energy calculations becomes challenging, if not prohibitive, due\nto long computational run times and high computational resource requirements.\nThis paper develops the sequential regression extrapolation (SRE) to predict\nthe coupled cluster energies of the homogeneous electron gas in the complete\nbasis limit using Bayesian ridge regression to make predictions from\ncalculations at truncated basis sizes. Using the SRE method, we were able to\npredict the coupled cluster doubles energies for the electron gas across a\nvariety of values of N and $r_s$, for a total of 70 predictions, with an\naverage error of $4.28\times10^{-4}$ Hartrees while saving 88.9 hours of\ncomputational time. The SRE method can accurately extrapolate electron gas\nenergies to the complete basis limit, saving both computational time and\nresources.
Additionally, the SRE is a general method that can be applied to a\nvariety of systems, many-body methods, and extrapolations.", "category": "physics_comp-ph" }, { "text": "PLUMED 2: New feathers for an old bird: Enhancing sampling and analyzing simulations are central issues in molecular\nsimulation. Recently, we introduced PLUMED, an open-source plug-in that\nprovides some of the most popular molecular dynamics (MD) codes with\nimplementations of a variety of different enhanced sampling algorithms and\ncollective variables (CVs). The rapid changes in this field, in particular new\ndirections in enhanced sampling and dimensionality reduction together with new\nhardware, require a code that is more flexible and more efficient. We\ntherefore present PLUMED 2 here - a complete rewrite of the code in an\nobject-oriented programming language (C++). This new version introduces greater\nflexibility and greater modularity, which both extends its core capabilities\nand makes it far easier to add new methods and CVs. It also has a simpler\ninterface with the MD engines and provides a single software library containing\nboth tools and core facilities.
Ultimately, the new code better serves the\never-growing community of users and contributors in coping with the new\nchallenges arising in the field.", "category": "physics_comp-ph" }, { "text": "Reproducing Kernel Functions: A general framework for Discrete Variable\n Representation: Since its introduction, the Discrete Variable Representation (DVR) basis set\nhas become an invaluable representation of state vectors and Hermitian\noperators in non-relativistic quantum dynamics and spectroscopy calculations.\nOn the other hand, reproducing kernel (positive definite) functions have long been\nwidely employed in a wide variety of disciplines: detection and\nestimation problems in signal processing; data analysis in statistics;\ngenerating observational models in machine learning; solving inverse problems\nin geophysics and tomography in general; and in quantum mechanics.\n In this article it is demonstrated that, starting with the axiomatic\ndefinition of DVR provided by Littlejohn [1], it is possible to show that the\nspace upon which the projection operator, defined in ref. [1], projects is a\nReproducing Kernel Hilbert Space (RKHS) whose associated reproducing kernel\nfunction can be used to generate DVR points and their corresponding DVR\nfunctions on any domain manifold (curved or not). It is illustrated how, with\nthis idea, one may be able to `neatly' address the long-standing challenge of\nbuilding multidimensional DVR basis functions defined on curved manifolds.", "category": "physics_comp-ph" }, { "text": "Code C# for chaos analysis of relativistic many-body systems with\n reactions: In this work we present a reactions module for \"Chaos Many-Body Engine\"\n(Grossu et al., 2010 [1]).
Following our goal of creating a customizable,\nobject-oriented code library, the list of all possible reactions, including the\ncorresponding properties (particle types, probability, cross-section, particle\nlifetimes, etc.), could be supplied as a parameter, using a specific XML input\nfile. Inspired by the Poincare section, we also propose the \"Clusterization\nmap\" as a new, intuitive analysis method of many-body systems. For\nexemplification, we implemented a numerical toy-model for nuclear relativistic\ncollisions at 4.5 A GeV/c (the SKM200 collaboration). An encouraging agreement\nwith experimental data was obtained for momentum, energy, rapidity, and angular\n{\\pi}- distributions.", "category": "physics_comp-ph" }, { "text": "Physics-Informed Extreme Theory of Functional Connections Applied to\n Data-Driven Parameters Discovery of Epidemiological Compartmental Models: In this work we apply a novel, accurate, fast, and robust physics-informed\nneural network framework for data-driven parameters discovery of problems\nmodeled via parametric ordinary differential equations (ODEs) called the\nExtreme Theory of Functional Connections (X-TFC). The proposed method merges\ntwo recently developed frameworks for solving problems involving parametric\nDEs: 1) the Theory of Functional Connections (TFC) and 2) the Physics-Informed\nNeural Networks (PINN). In particular, this work focuses on the capability of\nX-TFC in solving inverse problems to estimate the parameters governing the\nepidemiological compartmental models via a deterministic approach. The\nepidemiological compartmental models treated in this work are\nSusceptible-Infectious-Recovered (SIR),\nSusceptible-Exposed-Infectious-Recovered (SEIR), and\nSusceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS).
The results show\nthe low computational times, the high accuracy and effectiveness of the X-TFC\nmethod in performing data-driven parameters discovery of systems modeled via\nparametric ODEs using unperturbed and perturbed data.", "category": "physics_comp-ph" }, { "text": "Structure-Property Linkage in Shocked Multi-Material Flows Using A\n Level-Set Based Eulerian Image-To-Computation Framework: Morphology and dynamics at the meso-scale play crucial roles in the overall\nmacro- or system-scale flow of heterogeneous materials. In a multi-scale\nframework, closure models upscale unresolved sub-grid (meso-scale) physics and\ntherefore encapsulate structure-property (S-P) linkages to predict performance\nat the macro-scale. This work establishes a route to structure-property\nlinkage, proceeding all the way from imaged micro-structures to flow\ncomputations in one unified level set-based framework. Level sets are used to:\n1) Define embedded geometries via image segmentation; 2) Simulate the\ninteraction of sharp immersed boundaries with the flow field, and 3) Calculate\nmorphological metrics to quantify structure. Meso-scale dynamics are computed\nto calculate sub-grid properties, i.e. closure models for momentum and energy\nequations. The structure-property linkage is demonstrated for two types of\nmulti-material flows: interaction of shocks with a cloud of particles and\nreactive meso-mechanics of pressed energetic materials. We also present an\napproach to connect local morphological characteristics in a microstructure\ncontaining topologically complex features with the shock response of imaged\nsamples of such materials. 
This paves the way for using geometric machine\nlearning techniques to associate imaged morphologies with their properties.", "category": "physics_comp-ph" }, { "text": "Generalised and efficient wall boundary condition treatment in\n GPU-accelerated smoothed particle hydrodynamics: This paper presents a generalised and efficient wall boundary treatment in\nthe smoothed particle hydrodynamics (SPH) method for 3-D complex and arbitrary\ngeometries with single- and multi-phase flows to be executed on graphics\nprocessing units (GPUs). Using a force balance between the wall and fluid\nparticles with a novel penalty method, a pressure boundary condition is applied\non the wall dummy particles, which effectively prevents non-physical particle\npenetration into the wall boundaries even in highly violent impacts and\nmulti-phase flows with high density ratios. A new density reinitialisation\nscheme is also presented to enhance the accuracy. The proposed method is very\nsimple in comparison with previous wall boundary formulations on GPUs,\nrequires no additional memory caching, and is thus ideally suited for\nheterogeneous architectures of GPUs. The method is validated in various test\ncases involving violent single- and multi-phase flows in arbitrary geometries\nand demonstrates very good robustness, accuracy and performance. The new wall\nboundary condition treatment is able to improve on the high accuracy of its\nprevious version \\citep{ADAMI2012wall} even in complex 3-D and multi-phase\nproblems, while it is efficiently executable on GPUs with single-precision\nfloating-point arithmetic, which makes it suitable for a wide range of GPUs,\nincluding consumer graphics cards.
Therefore, the method is a reliable solution\nfor the long-standing challenge of the wall boundary condition in the SPH method\nfor a broad range of natural and industrial applications.", "category": "physics_comp-ph" }, { "text": "Subspace recursive Fermi-operator expansion strategies for large-scale\n DFT eigenvalue problems on HPC architectures: Quantum mechanical calculations for material modelling using Kohn-Sham\ndensity functional theory (DFT) involve the solution of a nonlinear eigenvalue\nproblem for the $N$ smallest eigenvector-eigenvalue pairs with $N$ proportional to\nthe number of electrons in the material system. These calculations are\ncomputationally demanding and have asymptotic cubic scaling complexity with the\nnumber of electrons. Large-scale matrix eigenvalue problems arising from the\ndiscretization of the Kohn-Sham DFT equations employing a systematically\nconvergent basis traditionally rely on iterative orthogonal projection methods,\nwhich are shown to be computationally efficient and scalable on massively\nparallel computing architectures. However, as the size of the material system\nincreases, these methods are known to incur dominant computational costs\nthrough the Rayleigh-Ritz projection step of the discretized Kohn-Sham\nHamiltonian matrix and the subsequent subspace diagonalization of the projected\nmatrix. This work explores the potential of polynomial expansion approaches\nbased on recursive Fermi-operator expansion as an alternative to the subspace\ndiagonalization of the projected Hamiltonian matrix to reduce the computational\ncost.
Subsequently, we perform a detailed comparison of various recursive\npolynomial expansion approaches to the traditional approach of explicit\ndiagonalization on both multi-node CPU and GPU architectures and assess their\nrelative performance in terms of accuracy, computational efficiency, scaling\nbehaviour and energy efficiency.", "category": "physics_comp-ph" }, { "text": "Development of a general-purpose machine-learning interatomic potential\n for aluminum by the physically-informed neural network method: Interatomic potentials constitute the key component of large-scale\natomistic simulations of materials. The recently proposed physically-informed\nneural network (PINN) method combines a high-dimensional regression implemented\nby an artificial neural network with a physics-based bond-order interatomic\npotential applicable to both metals and nonmetals. In this paper, we present a\nmodified version of the PINN method that accelerates the potential training\nprocess and further improves the transferability of PINN potentials to unknown\natomic environments. As an application, a modified PINN potential for Al has\nbeen developed by training on a large database of electronic structure\ncalculations. The potential reproduces the reference first-principles energies\nwithin 2.6 meV per atom and accurately predicts a wide spectrum of physical\nproperties of Al. Such properties include, but are not limited to, lattice\ndynamics, thermal expansion, energies of point and extended defects, the\nmelting temperature, the structure and dynamic properties of liquid Al, the\nsurface tensions of the liquid surface and the solid-liquid interface, and the\nnucleation and growth of a grain boundary crack.
Computational efficiency of\nPINN potentials is also discussed.", "category": "physics_comp-ph" }, { "text": "Multipole Expansions of Aggregate Charge: How Far to Go?: Aggregates immersed in a plasma or radiative environment will have charge\ndistributed over their extended surface. Previous studies have modeled the\naggregate charge using the monopole and dipole terms of a multipole expansion,\nwith results indicating that the dipole-dipole interactions play an important\nrole in increasing the aggregation rate and altering the morphology of the\nresultant aggregates. This study examines the effect that including the\nquadrupole terms has on the dynamics of aggregates interacting with each other\nand the confining electric fields in laboratory experiments. Results are\ncompared to modeling aggregates as a collection of point charges located at the\ncenter of each spherical monomer comprising the aggregate.", "category": "physics_comp-ph" }, { "text": "Exact microscopic theory of electromagnetic heat transfer between a\n dielectric sphere and plate: Near-field electromagnetic heat transfer holds great potential for the\nadvancement of nanotechnology. Whereas far-field electromagnetic heat transfer\nis constrained by Planck's blackbody limit, the increased density of states in\nthe near-field enhances heat transfer rates by orders of magnitude relative to\nthe conventional limit. Such enhancement opens new possibilities in numerous\napplications, including thermal-photo-voltaics, nano-patterning, and imaging.\nThe advancement in this area, however, has been hampered by the lack of\nrigorous theoretical treatment, especially for geometries that are of direct\nexperimental relevance. Here we introduce an efficient computational strategy,\nand present the first rigorous calculation of electromagnetic heat transfer in\na sphere-plate geometry, the only geometry where transfer rate beyond blackbody\nlimit has been quantitatively probed at room temperature. 
Our approach results\nin a definitive picture unifying various approximations previously used to\ntreat this problem, and provides new physical insights for designing\nexperiments aiming to explore enhanced thermal transfer.", "category": "physics_comp-ph" }, { "text": "Effect of lattice shrinking on the migration of water within zeolite LTA: Water adsorption within zeolites of the Linde Type A (LTA) structure plays an\nimportant role in processes of water removal from solvents. For this purpose,\nknowing in which adsorption sites water is preferably found is of interest. In\nthis paper, the distribution of water within LTA is investigated in several\naluminum-substituted frameworks ranging from a Si:Al ratio of 1 (maximum\nsubstitution, framework is hydrophilic) to a Si:Al ratio of 191 (almost pure\nsiliceous framework, it is hydrophobic). The counterion is sodium. In the\nhydrophobic framework, water enters the large {\\alpha}-cages, whereas in the\nmost hydrophilic frameworks, water preferentially enters the small $\\beta$-cages.\nFor frameworks with moderate aluminum substitution, $\\beta$-cages are populated\nfirst, but at intermediate pressures water favors $\\alpha$-cages instead.\nFramework composition and pressure therefore drive water molecules selectively\ntowards $\\alpha$- or $\\beta$-cages.", "category": "physics_comp-ph" }, { "text": "A method to compute absolute free energies or enthalpies of fluids: We propose a new method to compute the free energy or enthalpy of fluids or\ndisordered solids by computer simulation. The main idea is to construct a\nreference system by freezing one representative configuration, and then carry\nout a thermodynamic integration. We present a strategy and an algorithm that\nallow us to sample the thermodynamic integration path even in the case of\nliquids, despite the fact that the particles can diffuse freely through the\nsystem.
The method is described in detail and illustrated with applications to\nhard sphere fluids and solids with mobile defects.", "category": "physics_comp-ph" }, { "text": "Characteristic time scales for diffusion processes through layers and\n across interfaces: This paper presents a simple tool for characterising the timescale for\ncontinuum diffusion processes through layered heterogeneous media. This\nmathematical problem is motivated by several practical applications such as\nheat transport in composite materials, flow in layered aquifers and drug\ndiffusion through the layers of the skin. In such processes, the physical\nproperties of the medium vary across layers and internal boundary conditions\napply at the interfaces between adjacent layers. To characterise the timescale,\nwe use the concept of mean action time, which provides the mean timescale at\neach position in the medium by utilising the fact that the transition of the\ntransient solution of the underlying partial differential equation model, from\ninitial state to steady state, can be represented as a cumulative distribution\nfunction of time. Using this concept, we define the characteristic timescale\nfor a multilayer diffusion process as the maximum value of the mean action time\nacross the layered medium. For given initial conditions and internal and\nexternal boundary conditions, this approach leads to simple algebraic\nexpressions for characterising the timescale that depend on the physical and\ngeometrical properties of the medium, such as the diffusivities and lengths of\nthe layers. 
Numerical examples demonstrate that these expressions provide\nuseful insight into explaining how the parameters in the model affect the time\nit takes for a multilayer diffusion process to reach steady state.", "category": "physics_comp-ph" }, { "text": "Mechanism of the double heterostructure TiO2/ZnO/TiO2 for photocatalytic\n and photovoltaic applications: A theoretical study: Understanding the mechanism of the heterojunction is an important step\ntowards controllable and tunable interfaces for photocatalytic and photovoltaic-based\ndevices. To this aim, we propose a thorough study of a double\nheterostructure system consisting of two semiconductors with large band gaps,\nnamely, wurtzite ZnO and anatase TiO2. We demonstrate via first-principles\ncalculations two stable configurations of ZnO/TiO2 interfaces. Our structural\nanalysis provides key information on the nature of the complex interface and\nlattice distortions occurring when combining these materials. The study of the\nelectronic properties of the sandwich nanostructure TiO2/ZnO/TiO2 reveals that\nthe conduction band arises mainly from Ti 3d orbitals, while the valence band is\nmaintained by the O 2p states of ZnO, and that the trapped states within the gap region\nfrequent in the single heterostructure are substantially reduced in the double\ninterface system. Moreover, our work explains the origin of certain optical\ntransitions observed in the experimental studies. Unexpectedly, as a\nconsequence of different bond distortions, the results on the band alignments\nshow electron accumulation in the left shell of TiO2 rather than the right one.\nSuch behavior provides more choice for the sensitization and functionalization\nof TiO2 surfaces.", "category": "physics_comp-ph" }, { "text": "Microscale modelling of dielectrophoresis assembly processes: This work presents a microscale approach for simulating the dielectrophoresis\n(DEP) assembly of polarizable particles under an external electric field.
The\nmodel is shown to capture interesting dynamical and topological features, such\nas the formation of chains of particles and their incipient aggregation into\nhierarchical structures. A quantitative characterization in terms of the number\nand size of these structures is also discussed. This computational model could\nrepresent a viable numerical tool to study the mechanical properties of\nparticle-based hierarchical materials and suggest new strategies for enhancing\ntheir design and manufacture.", "category": "physics_comp-ph" }, { "text": "A Robust Multi-Scale Field-Only Formulation of Electromagnetic\n Scattering: We present a boundary integral formulation of electromagnetic scattering by\nhomogeneous bodies that are characterized by linear constitutive equations in\nthe frequency domain. By working with the Cartesian components of the electric,\nE and magnetic, H fields and with the scalar functions (r*E) and (r*H), the\nproblem is cast as solving a set of scalar Helmholtz equations for the field\ncomponents that are coupled by the usual electromagnetic boundary conditions at\nmaterial boundaries. This facilitates a direct solution for E and H rather than\nworking with surface currents as intermediate quantities in existing methods.\nConsequently, our formulation is free of the well-known numerical instability\nthat occurs in the zero frequency or long wavelength limit in traditional\nsurface integral solutions of Maxwell's equations and our numerical results\nconverge uniformly to the static results in the long wavelength limit.\nFurthermore, we use a formulation of the scalar Helmholtz equation that is\nexpressed as classically convergent integrals and does not require the\nevaluation of principal value integrals or any knowledge of the solid angle.\nTherefore, standard quadrature and higher order surface elements can readily be\nused to improve numerical precision. 
In addition, near and far field values can\nbe calculated with equal precision, and multiscale problems in which the\nscatterers possess characteristic length scales that are both large and small\nrelative to the wavelength can be easily accommodated. From this we obtain\nresults for the scattering and transmission of electromagnetic waves at\ndielectric boundaries that are valid for any ratio of the local surface\ncurvature to the wave number. This is a generalization of the familiar Fresnel\nformula and Snell's law, valid at planar dielectric boundaries, for the\nscattering and transmission of electromagnetic waves at surfaces of arbitrary\ncurvature.", "category": "physics_comp-ph" }, { "text": "Physics-Informed Machine Learning for Optical Modes in Composites: We demonstrate that embedding physics-driven constraints into the machine\nlearning process can dramatically improve the accuracy and generalizability of the\nresulting model. Physics-informed learning is illustrated by the example of the\nanalysis of optical modes propagating through a spatially periodic composite.\nThe approach presented can be readily utilized in other situations mapped onto\nan eigenvalue problem, a known bottleneck of computational electrodynamics.\nPhysics-informed learning can be used to improve machine-learning-driven\ndesign, optimization, and characterization, in particular in situations where\nexact solutions are scarce or slow to obtain.", "category": "physics_comp-ph" }, { "text": "Spurious currents suppression by accurate difference schemes in\n multiphase lattice Boltzmann method: Spurious currents, which are often observed near a curved interface in\nmultiphase simulations by diffuse interface methods, are unphysical phenomena\nthat usually damage the computational accuracy and stability. In this paper, the\norigin and suppression of spurious currents are investigated using the\nmultiphase lattice Boltzmann method driven by chemical potential.
Both the\ndifference error and the insufficient isotropy of the discrete gradient operator give\nrise to directional deviations of the nonideal force and thereby originate the\nspurious currents. Nevertheless, the high-order finite difference produces far\nmore accurate results than the high-order isotropic difference. We compare\nseveral finite difference schemes which have different formal accuracy and\nresolution. When a large proportional coefficient is used, the transition\nregion is narrow and steep, and the resolution of the finite difference indicates\nthe computational accuracy more exactly than the formal accuracy. On the\ncontrary, for a small proportional coefficient, the transition region is wide\nand gentle, and the formal accuracy of the finite difference indicates the\ncomputational accuracy better than the resolution. Furthermore, numerical\nsimulations show that the spurious currents calculated in the 3D situation are\nhighly consistent with those in 2D simulations; in particular, the two-phase\ncoexistence densities calculated by the high-order accuracy finite difference\nare in excellent agreement with the theoretical predictions of the Maxwell\nequal-area construction down to the reduced temperature 0.2.", "category": "physics_comp-ph" }, { "text": "The role of surface states in electrocatalyst-modified semiconductor\n photoelectrodes: Theory and simulations: In the last several years, there has been a wealth of studies to clarify the\nrole of thin layers of electrocatalysts on semiconducting photoelectrodes in\nthe efficiency of the oxygen evolution reaction (OER). It has been shown that\nthe addition of a thin oxide overlayer in many cases cathodically shifts the\npotential of photocurrent onset and/or increases the maximum photocurrent,\nleading to greater collection efficiencies beneficial for OER. However, the\norigin of this enhancement is not well understood.
Here, we present a model\nrelying on analytical expressions rather than differential equations to\ninvestigate the role of surface states in electrocatalyst-modified\nsemiconductor photoelectrodes. Without a catalyst overlayer, we find that if\nsurface states are screened, meaning charged surface states are electronically\nneutralized via nearby solution ions, no Helmholtz potential is generated and\nphotoelectrodes exhibit good performance. In contrast, if the surface states\nare unscreened, an additional Helmholtz potential forms, decreasing the amount\nof band bending and resulting in poor performance. In the presence of a\ncatalyst overlayer, there is a strong dependence on how the surface states\ninteract with the catalyst. Catalysts in series with surface states can\nincrease the effective rate of transfer from surface states to solution,\nleading to an increase in total current, while catalysts that act in parallel\nwith surface states can increase the open circuit voltage or photovoltage. Both\nseries and parallel catalyst effects operate in tandem in real devices, leading\nto an increase in current and/or photovoltage, depending on the relevant\nexchange currents. This model not only helps to understand the role of\nsurface states in charge transfer and, ultimately, efficiencies in\nphotoelectrochemical systems, but is also easy for other\nresearchers to apply.", "category": "physics_comp-ph" }, { "text": "Computing shortest paths in 2D and 3D memristive networks: Global optimisation problems in networks often require shortest path length\ncomputations to determine the most efficient route. The simplest and most\ncommon problem with a shortest path solution is perhaps that of a traditional\nlabyrinth or maze with a single entrance and exit. Many techniques and\nalgorithms have been derived to solve mazes, which often tend to be\ncomputationally demanding, especially as the size of the maze and the number of paths\nincrease.
In addition, they are not suitable for performing multiple shortest\npath computations in mazes with multiple entrance and exit points. It has been\nproposed that mazes can be solved using memristive networks, and in this paper we\nextend the idea to show how networks of memristive elements can be utilised to\nsolve multiple shortest paths in a single network. We also show simulations\nusing memristive circuit elements that demonstrate shortest path computations\nin both 2D and 3D networks, which could have potential applications in various\nfields.", "category": "physics_comp-ph" }, { "text": "Discrete unified gas kinetic scheme for multiscale heat transfer with\n arbitrary temperature difference: In this paper, a finite-volume discrete unified gas kinetic scheme (DUGKS)\nbased on the non-gray phonon transport model is developed for multiscale heat\ntransfer problems with arbitrary temperature difference. Under a large temperature\ndifference, the phonon Boltzmann transport equation (BTE) is essentially\nmultiscale, not only in the frequency space, but also in the spatial space. In\norder to realize the efficient coupling of the multiscale phonon transport, the\nphonon scattering and advection are coupled together in the present scheme in\nthe reconstruction of the distribution function at the cell interface.\nNewton's method is adopted to solve the nonlinear scattering term for the\nupdate of the temperature at both the cell center and interface. In addition,\nthe energy at the cell center is updated by a macroscopic equation instead of\ntaking the moment of the distribution function, which enhances the numerical\nconservation. Numerical results of the cross-plane heat transfer prove that the\npresent scheme can describe the multiscale heat transfer phenomena accurately\nwith arbitrary temperature differences in a wide range.
In the diffusive regime,\neven if the time step is larger than the relaxation time, the present scheme\ncan capture the transient thermal transport process accurately. Compared with\nthe behavior under small temperature differences, as the temperature difference\nincreases the temperature distribution varies quite differently, and the average\ntemperature in the domain increases in the ballistic regime but decreases in the\ndiffusive regime.", "category": "physics_comp-ph" }, { "text": "Physics-enhanced neural networks for equation-of-state calculations: Rapid access to accurate equation-of-state (EOS) data is crucial in the\nwarm-dense matter regime, as it is employed in various applications, such as\nproviding input for hydrodynamic codes to model inertial confinement fusion\nprocesses. In this study, we develop neural network models for predicting the\nEOS based on first-principles data. The first model utilizes basic physical\nproperties, while the second model incorporates more sophisticated physical\ninformation, using output from average-atom calculations as features.\nAverage-atom models are often noted for providing a reasonable balance of\naccuracy and speed; however, our comparison of average-atom models and\nhigher-fidelity calculations shows that more accurate models are required in\nthe warm-dense matter regime. Both the neural network models we propose,\nparticularly the physics-enhanced one, demonstrate significant potential as\naccurate and efficient methods for computing EOS data in warm-dense matter.", "category": "physics_comp-ph" }, { "text": "Steady-state parameter sensitivity in stochastic modeling via trajectory\n reweighting: Parameter sensitivity analysis is a powerful tool in the building and\nanalysis of biochemical network models. For stochastic simulations, parameter\nsensitivity analysis can be computationally expensive, requiring multiple\nsimulations for perturbed values of the parameters.
Here, we use trajectory\nreweighting to derive a method for computing sensitivity coefficients in\nstochastic simulations without explicitly perturbing the parameter values,\navoiding the need for repeated simulations. The method allows the simultaneous\ncomputation of multiple sensitivity coefficients. Our approach recovers results\noriginally obtained by application of the Girsanov measure transform in the\ngeneral theory of stochastic processes [A. Plyasunov and A. P. Arkin, J. Comp.\nPhys. 221, 724 (2007)]. We build on these results to show how the method can be\nused to compute steady-state sensitivity coefficients from a single simulation\nrun, and we present various efficiency improvements. For models of biochemical\nsignaling networks the method has a particularly simple implementation. We\ndemonstrate its application to a signaling network showing stochastic focussing\nand to a bistable genetic switch, and present exact results for models with\nlinear propensity functions.", "category": "physics_comp-ph" }, { "text": "A component-level co-rotational 3D continuum finite element framework\n for efficient flexible multibody analysis: This paper proposes a systematic and novel component-level co-rotational (CR)\nframework for upgrading existing 3D continuum finite elements to flexible\nmultibody analysis. Without using any model reduction techniques, high\nefficiency is achieved through sophisticated operations in both the modeling and\nnumerical implementation phases. In the modeling phase, as in conventional 3D\nnonlinear finite element analysis, the nodal absolute coordinates are used as the\nsystem generalized coordinates, so simple formulations of the inertia\nforce terms can be obtained. For the elastic force terms, inspired by the existing\nfloating frame of reference formulation (FFRF) and the conventional element-level\nCR formulation, a component-level CR modeling strategy is developed.
In combination with the Schur complement theory and by fully exploiting the nature of the\ncomponent-level CR modeling method, an extremely efficient procedure is\ndeveloped, which enables us to transform the linear equations arising from each\nNewton-Raphson iteration step into linear systems with a constant coefficient\nmatrix. The coefficient matrix thus can be pre-calculated and decomposed only\nonce, and at all the subsequent time steps only back substitutions are needed,\nwhich avoids frequently updating the Jacobian matrix and avoids directly\nsolving the large-scale linearized equation in each iteration. Multiple\nexamples are presented to demonstrate the performance of the proposed\nframework.", "category": "physics_comp-ph" }, { "text": "ROSE: A reduced-order scattering emulator for optical models: A new generation of phenomenological optical potentials requires robust\ncalibration and uncertainty quantification, motivating the use of Bayesian\nstatistical methods. These Bayesian methods usually require calculating\nobservables for thousands or even millions of parameter sets, making fast and\naccurate emulators highly desirable or even essential. Emulating scattering\nacross different energies or with interactions such as optical potentials is\nchallenging because of the non-affine parameter dependence, meaning the\nparameters do not all factorize from individual operators. Here we introduce\nand demonstrate the Reduced Order Scattering Emulator (ROSE) framework, a\nreduced basis emulator that can handle non-affine problems. ROSE is fully\nextensible and works within the publicly available BAND Framework software\nsuite for calibration, model mixing, and experimental design. As a\ndemonstration problem, we use ROSE to calibrate a realistic nucleon-target\nscattering model through the calculation of elastic cross sections.
This\nproblem shows the practical value of the ROSE framework for Bayesian\nuncertainty quantification with controlled trade-offs between emulator speed\nand accuracy as compared to high-fidelity solvers. Planned extensions of ROSE\nare discussed.", "category": "physics_comp-ph" }, { "text": "An Efficient Proper Orthogonal Decomposition based Reduced-Order Model: This paper presents a novel, more efficient proper orthogonal decomposition\n(POD) based reduced-order model (ROM) for compressible flows. In this POD model\nthe governing equations, i.e., the conservation of mass, momentum, and energy\nequations, were written using specific volume instead of density. This\nsubstitution allowed for the pre-computation of the coefficients of the system\nof ODEs that make up the reduced-order model. Several methods were employed to\nenhance the stability of the ODE solver: the penalty method to enforce boundary\nconditions, artificial dissipation, and a method that modifies the number of\nmodes used in the POD approximation. This new POD-based reduced-order model was\nvalidated for four cases at both on- and off-reference conditions: a\nquasi-one-dimensional nozzle, a two-dimensional channel, a three-dimensional\naxisymmetric nozzle, and a transonic fan. The speedup obtained by using the\nPOD-based ROM vs. the full-order model exceeded four orders of magnitude in all\ncases tested.", "category": "physics_comp-ph" }, { "text": "HEP Community White Paper on Software trigger and event reconstruction:\n Executive Summary: Realizing the physics programs of the planned and upgraded high-energy\nphysics (HEP) experiments over the next 10 years will require the HEP community\nto address a number of challenges in the area of software and computing.
For\nthis reason, the HEP software community has engaged in a planning process over\nthe past two years, with the objective of identifying and prioritizing the\nresearch and development required to enable the next generation of HEP\ndetectors to fulfill their full physics potential. The aim is to produce a\nCommunity White Paper which will describe the community strategy and a roadmap\nfor software and computing research and development in HEP for the 2020s. The\ntopics of event reconstruction and software triggers were considered by a joint\nworking group and are summarized together in this document.", "category": "physics_comp-ph" }, { "text": "Chaotic dynamics of piezoelectric mems based on maximal Lyapunov\n exponent and Smaller Alignment Index computations: We characterize the dynamical states of a piezoelectric\nmicroelectromechanical system (MEMS) using several numerical quantifiers,\nincluding the maximal Lyapunov exponent, the Poincare Surface of Section and a\nchaos detection method called the Smaller Alignment Index (SALI). The analysis\nmakes use of the MEMS Hamiltonian. We start our study by considering the case\nof a conservative piezoelectric MEMS model and describe the behavior of some\nrepresentative phase space orbits of the system. We show that the dynamics of\nthe piezoelectric MEMS becomes considerably more complex as the natural\nfrequency of the system's mechanical part decreases. This refers to the\nreduction of the stiffness of the piezoelectric transducer. Then, taking into\naccount the effects of damping and time-dependent forces on the piezoelectric\nMEMS, we derive the corresponding non-autonomous Hamiltonian and investigate\nits dynamical behavior. We find that the non-conservative system exhibits\nrich dynamics, which is strongly influenced by the values of the parameters\nthat govern the piezoelectric MEMS energy gain and loss.
Our results provide\nfurther evidence of the ability of the SALI to efficiently characterize the\nchaoticity of dynamical systems.", "category": "physics_comp-ph" }, { "text": "Flow-matching -- efficient coarse-graining of molecular dynamics without\n forces: Coarse-grained (CG) molecular simulations have become a standard tool to\nstudy molecular processes on time- and length-scales inaccessible to all-atom\nsimulations. Parameterizing CG force fields to match all-atom simulations has\nmainly relied on force-matching or relative entropy minimization, which require\nmany samples from costly simulations with all-atom or CG resolutions,\nrespectively. Here we present flow-matching, a new training method for CG force\nfields that combines the advantages of both methods by leveraging normalizing\nflows, a generative deep learning method. Flow-matching first trains a\nnormalizing flow to represent the CG probability density, which is equivalent\nto minimizing the relative entropy without requiring iterative CG simulations.\nSubsequently, the flow generates samples and forces according to the learned\ndistribution in order to train the desired CG free energy model via force\nmatching. Even without requiring forces from the all-atom simulations,\nflow-matching outperforms classical force-matching by an order of magnitude in\nterms of data efficiency, and produces CG models that can capture the folding\nand unfolding transitions of small proteins.", "category": "physics_comp-ph" }, { "text": "Mitigating Spatial Error in the iterative-Quasi-Monte Carlo (iQMC)\n Method for Neutron Transport Simulations with Linear Discontinuous Source\n Tilting and Effective Scattering and Fission Rate Tallies: The iterative Quasi-Monte Carlo (iQMC) method is a recently proposed method\nfor multigroup neutron transport simulations. iQMC can be viewed as a hybrid\nbetween deterministic iterative techniques, Monte Carlo simulation, and\nQuasi-Monte Carlo techniques.
iQMC has several algorithmic characteristics\nthat make it desirable for high performance computing environments, including an\n$O(N^{-1})$ convergence rate, a ray tracing transport sweep, and a highly\nparallelizable nature similar to analog Monte Carlo. While there are many\npotential advantages of using iQMC, there are also inherent disadvantages,\nnamely the spatial discretization error introduced by the use of a mesh\nacross the domain. This work introduces two significant modifications to iQMC\nto help reduce the spatial discretization error. The first is an effective\nsource transport sweep, whereby the source strength is updated on-the-fly via\nan additional tally. This version of the transport sweep is essentially\nagnostic to the mesh, material, and geometry. The second is the addition of a\nhistory-based linear discontinuous source tilting method. Traditionally, iQMC\nutilizes a piecewise-constant source in each cell of the mesh. However, through\nthe proposed source tilting technique, iQMC can utilize a piecewise-linear\nsource in each cell and reduce spatial error without refining the mesh.\nNumerical results are presented from the 2D C5G7 and Takeda-1 k-eigenvalue\nbenchmark problems. Results show that the history-based source tilting\nsignificantly reduces error in global tallies and the eigenvalue solution in\nboth benchmarks. Through the effective source transport sweep and linear source\ntilting, iQMC was able to converge the eigenvalue from the 2D C5G7 problem to\nless than $0.04\\%$ error on a uniform Cartesian mesh with only $204\\times204$\ncells.", "category": "physics_comp-ph" }, { "text": "Discovery of interpretable structural model errors by combining Bayesian\n sparse regression and data assimilation: A chaotic Kuramoto-Sivashinsky test\n case: Models of many engineering and natural systems are imperfect. The discrepancy\nbetween the mathematical representations of a true physical system and its\nimperfect model is called the model error.
These model errors can lead to\nsubstantial differences between the numerical solutions of the model and the\nstate of the system, particularly in systems involving nonlinear, multi-scale\nphenomena. Thus, there is increasing interest in reducing model errors,\nparticularly by leveraging the rapidly growing observational data to understand\ntheir physics and sources. Here, we introduce a framework named MEDIDA: Model\nError Discovery with Interpretability and Data Assimilation. MEDIDA only\nrequires a working numerical solver of the model and a small number of\nnoise-free or noisy sporadic observations of the system. In MEDIDA, first the\nmodel error is estimated from differences between the observed states and\nmodel-predicted states (the latter are obtained from a number of one-time-step\nnumerical integrations from the previous observed states). If observations are\nnoisy, a data assimilation (DA) technique such as the ensemble Kalman filter (EnKF)\nis employed to provide the analysis state of the system, which is then used to\nestimate the model error. Finally, an equation-discovery technique, here the\nrelevance vector machine (RVM), a sparsity-promoting Bayesian method, is used\nto identify an interpretable, parsimonious, and closed-form representation of\nthe model error. Using the chaotic Kuramoto-Sivashinsky (KS) system as the test\ncase, we demonstrate the excellent performance of MEDIDA in discovering\ndifferent types of structural/parametric model errors, representing different\ntypes of missing physics, using noise-free and noisy observations.", "category": "physics_comp-ph" }, { "text": "Hydration of NH$_4^+$ in Water: Bifurcated Hydrogen Bonding Structures\n and Fast Rotational Dynamics: Understanding the hydration and diffusion of ions in water at the molecular\nlevel is a topic of widespread importance.
The ammonium ion (NH$_4^+$) is an\nexemplar system that has received attention for decades because of its complex\nhydration structure and relevance in industry. Here we report a study of the\nhydration and the rotational diffusion of NH$_4^+$ in water using ab initio\nmolecular dynamics simulations and quantum Monte Carlo calculations. We find\nthat the hydration structure of NH$_4^+$ features bifurcated hydrogen bonds,\nwhich leads to a rotational mechanism involving the simultaneous switching of a\npair of bifurcated hydrogen bonds. The proposed hydration structure and\nrotational mechanism are supported by existing experimental measurements, and\nthey also help to rationalize the measured fast rotation of NH$_4^+$ in water.\nThis study highlights how subtle changes in the electronic structure of\nhydrogen bonds impact the hydration structure, which consequently affects the\ndynamics of ions and molecules in hydrogen bonded systems.", "category": "physics_comp-ph" }, { "text": "Molecular geometry and vibrational frequencies by parallel sampling: Quantum Monte Carlo is an efficient technique for finding the ground-state\nenergy and related properties of small molecules. A major challenge remains in\nthe accurate determination of a molecule's geometry, i.e. the optimal location of\nits individual nuclei and the frequencies of their vibration. The aim of this\narticle is to describe a simple technique to accurately establish such\nproperties. This is achieved by varying the trial function to accommodate\nchanging geometry, thereby removing a source of rather unpleasant singularities\nwhich arise when the trial function is fixed (the traditional approach).", "category": "physics_comp-ph" }, { "text": "Optical Properties of Graphene in Magnetic and Electric fields: Optical properties of graphene are explored by using the generalized\ntight-binding model.
The main features of the spectral structures, namely their form,\nfrequency, number and intensity, are greatly enriched by the complex\nrelationship among the interlayer atomic interactions, the magnetic\nquantization and the Coulomb potential energy. Absorption spectra have\nshoulders, asymmetric peaks and logarithmic peaks, coming from the band-edge\nstates of parabolic dispersions, the constant-energy loops and the saddle\npoints, respectively. The initial forbidden excitation region is only revealed\nin even-layer AA stacking systems. Optical gaps and special structures can be\ngenerated by an electric field. The delta-function-like structures in\nmagneto-optical spectra, which present the single, twin and double peaks, are\nassociated with the symmetric, asymmetric and splitting Landau-level energy\nspectra, respectively. The single peaks due to the non-tilted Dirac cones\nexhibit nearly uniform intensity. The AAB stacking possesses more\nabsorption structures, compared to the other stackings. The diverse\nmagneto-optical selection rules are mainly determined by the well-behaved,\nperturbed and undefined Landau modes. The frequent anti-crossings in the\nmagnetic- and electric-field-dependent energy spectra lead to an increase in the\nnumber of absorption peaks and reduced intensities. Some of the theoretical calculations\nare consistent with the experimental measurements, while the others require further\ndetailed examination.", "category": "physics_comp-ph" }, { "text": "Machine learning for prediction of extreme statistics in modulation\n instability: A central area of research in nonlinear science is the study of instabilities\nthat drive the emergence of extreme events. Unfortunately, experimental\ntechniques for measuring such phenomena often provide only partial\ncharacterization. For example, real-time studies of instabilities in nonlinear\nfibre optics frequently use only spectral data, precluding detailed predictions\nabout the associated temporal properties.
Here, we show how Machine Learning\ncan overcome this limitation by predicting statistics for the maximum intensity\nof temporal peaks in modulation instability based only on spectral\nmeasurements. Specifically, we train a neural network based Machine Learning\nmodel to correlate spectral and temporal properties of optical fibre modulation\ninstability using data from numerical simulations, and we then use this model\nto predict the temporal probability distribution based on high-dynamic range\nspectral data from experiments. These results open novel perspectives in all\nsystems exhibiting chaos and instability where direct time-domain observations\nare difficult.", "category": "physics_comp-ph" }, { "text": "Simulating fluids with a computer: Introduction and recent advances: In this article, I present recent methods for the numerical simulation of\nfluid dynamics and the associated computational algorithms. The goal of this\narticle is to explain how to model an incompressible fluid, and how to write a\ncomputer program that simulates it. I will start from Newton's laws \"$F = ma$\"\napplied to a bunch of particles, then show how Euler's equation can be deduced\nfrom them by \"taking a step backward\" and seeing the fluid as a continuum. Then\nI will show how to make a computer program. Incompressibility is one of the\nmain difficulties in writing a computer program that simulates a fluid. I will\nexplain how recent advances in computational mathematics result in a computer\nobject that can be used to represent a fluid and that naturally satisfies the\nincompressibility constraint.
Equipped with this representation, the algorithm\nthat simulates the fluid becomes extremely simple, and has been proved to\nconverge to the solution of the equation (by Gallouet and Merigot).", "category": "physics_comp-ph" }, { "text": "Relaxation, thermalization and Markovian dynamics of two spins coupled\n to a spin bath: It is shown that by fitting a Markovian quantum master equation to the\nnumerical solution of the time-dependent Schr\\\"odinger equation of a system of\ntwo spin-1/2 particles interacting with a bath of up to 34 spin-1/2 particles,\nthe former can describe the dynamics of the two-spin system rather well. The\nfitting procedure that yields this Markovian quantum master equation accounts\nfor all non-Markovian effects insofar as the general structure of this equation\nallows, and yields a description that is incompatible with the Lindblad\nequation.", "category": "physics_comp-ph" }, { "text": "METAGUI 3: a graphical user interface for choosing the collective\n variables in molecular dynamics simulations: Molecular dynamics (MD) simulations allow the exploration of the phase space\nof biopolymers through the integration of equations of motion of their\nconstituent atoms. The analysis of MD trajectories often relies on the choice\nof collective variables (CVs) along which the dynamics of the system is\nprojected. We developed a graphical user interface (GUI) for facilitating the\ninteractive choice of the appropriate CVs. The GUI allows: defining\ninteractively new CVs; partitioning the configurations into microstates\ncharacterized by similar values of the CVs; calculating the free energies of\nthe microstates for both unbiased and biased (metadynamics) simulations;\nclustering the microstates in kinetic basins; visualizing the free energy\nlandscape as a function of a subset of the CVs used for the analysis.
A simple\nmouse click allows one to quickly inspect structures corresponding to specific\npoints in the landscape.", "category": "physics_comp-ph" }, { "text": "Molecular Dynamics Simulations of Mutual Space-Charge Effect between\n Planar Field Emitters: Molecular dynamics simulations, with full Coulomb interaction and\nself-consistent field emission, are used to examine mutual space-charge\ninteractions between beams originating from several emitter areas, in a planar\ninfinite diode. The simulations allow observation of the trajectory of each\nindividual electron through the diode gap. Results show that when the\ncenter-to-center spacing between emitters is greater than half of the gap\nspacing, the emitters are essentially independent. For smaller spacings, the\nmutual space-charge effect increases rapidly and should not be discounted. A\nsimple qualitative explanation for this effect is given.", "category": "physics_comp-ph" }, { "text": "Mxdrfile: read and write Gromacs trajectories with Matlab: Progress in hardware, algorithms, and force fields is pushing the scope of\nmolecular dynamics (MD) simulations towards the length- and time scales of\ncomplex biochemical processes. This creates a need for developing advanced\nanalysis methods tailored to the specific questions at hand. We present\nmxdrfile, a set of fast routines for manipulating the binary xtc and trr\ntrajectory file formats of Gromacs, one of the most commonly used MD codes,\nwith Matlab, a powerful and versatile language for scientific computing. The\nunique ability to both read and write binary trajectory files makes it possible\nto leverage the broad capabilities of Matlab to speed up and simplify the\ndevelopment of complex analysis and visualization methods. We illustrate these\npossibilities by implementing an alignment method for buckled surfaces, and use\nit to briefly dissect the curvature-dependent composition of a buckled lipid\nbilayer.
The mxdrfile package, including the buckling example, is available as\nopen source at http://kaplajon.github.io/mxdrfile/ .", "category": "physics_comp-ph" }, { "text": "Smoothed particle hydrodynamics with adaptive spatial resolution\n (SPH-ASR) for free surface flows: A numerical method based on smoothed particle hydrodynamics with adaptive\nspatial resolution (SPH-ASR) was developed for simulating free surface flows.\nThis method can reduce the computational demands while maintaining the\nnumerical accuracy. In this method, the spatial resolution changes adaptively\naccording to the distance to the free surface by numerical particle splitting\nand merging. The particles are split for refinement when they are near the free\nsurface, while they are merged for coarsening when they are away from the free\nsurface. A search algorithm was implemented for identifying the particles at\nthe free surface. A particle shifting technique, considering variable smoothing\nlength, was introduced to improve the particle distribution. The presented\nSPH-ASR method was validated by simulating various free surface flows, and the\nresults were compared to those obtained using SPH with uniform spatial\nresolution (USR) and experimental data.", "category": "physics_comp-ph" }, { "text": "Reconstruction of bremsstrahlung spectra from attenuation data using\n generalized simulated annealing: Thorough knowledge of an X-ray beam spectrum is mandatory to assess the\nquality of its source device. Since the techniques to directly measure such\nspectra are expensive and laborious, the X-ray spectrum reconstruction using\nattenuation data has been a promising alternative. However, such reconstruction\ncorresponds mathematically to an inverse, nonlinear and ill-posed problem.\nTherefore, solving it requires the use of powerful optimization algorithms and\ngood regularization functions.
Here, we present a generalized simulated\nannealing algorithm combined with a suitable smoothing regularization function\nto solve the X-ray spectrum reconstruction inverse problem. We also propose an\napproach to set the initial acceptance and visitation temperatures and a\nstandardization of the objective function terms to automate the algorithm for\nhandling different spectral ranges. Numerical tests considering three\ndifferent reference spectra with their attenuation curves are presented. Results\nshow that the algorithm accurately retrieves the reference\nspectra shapes, corroborating the central importance of our regularization\nfunction and the performance improvement of the generalized simulated annealing\ncompared to its classical version.", "category": "physics_comp-ph" }, { "text": "An application of nonlinear supratransmission to the propagation of\n binary signals in weakly damped, mechanical systems of coupled oscillators: In the present article, we simulate the propagation of binary signals in\nsemi-infinite, mechanical chains of coupled oscillators harmonically driven at\nthe end, by making use of the recently discovered process of nonlinear\nsupratransmission. Our numerical results, which are based on a brand-new\ncomputational technique with energy-invariant properties, show an efficient\nand reliable transmission of information.", "category": "physics_comp-ph" }, { "text": "A Discontinuous Galerkin Method for Viscous Compressible Multifluids: We present a generalized discontinuous Galerkin method for a multicomponent\ncompressible barotropic Navier-Stokes system of equations. The system presented\nhas a functional viscosity nu which depends on the pressure p=p(rho,mu_i) of\nthe flow, with the density rho and the local concentration mu_i. High order\nRunge-Kutta time discretization techniques are employed, and different methods\nof dealing with arbitrary coupled boundary conditions are discussed.
Analysis\nof the energy consistency of the scheme is performed in addition to inspection\nof the relative error of the solution compared to exact analytic test cases.\nFinally, several examples, comparisons, generalizations and physical\napplications are presented.", "category": "physics_comp-ph" }, { "text": "Parallel 3-dim fast Fourier transforms with load balancing of the plane\n waves: The plane wave method is the most widely used method for solving the Kohn-Sham\nequations in first-principles materials science computations. In this procedure, the\nthree-dimensional (3-dim) trial wave functions' fast Fourier transform (FFT) is\na regular operation and one of the most demanding algorithms in terms of\nscalability on a parallel machine. We propose a new partitioning algorithm for\nthe 3-dim FFT grid to accomplish the trade-off between the communication\noverhead and load balancing of the plane waves. It is shown by qualitative\nanalysis and numerical results that our approach can scale plane wave\nfirst-principles calculations up to larger numbers of nodes.", "category": "physics_comp-ph" }, { "text": "Numerical Precision Effects on GPU Simulation of Massive Spatial Data,\n Based on the Modified Planar Rotator Model: The present research builds on a recently proposed spatial prediction method\nfor discretized two-dimensional data, based on a suitably modified planar\nrotator (MPR) spin model from statistical physics. This approach maps the\nmeasured data onto interacting spins and, exploiting spatial correlations\nbetween them, which are similar to those present in geostatistical data,\npredicts the data at unmeasured locations. Due to the short-range nature of the\nspin pair interactions in the MPR model, parallel implementation of the\nprediction algorithm on graphical processing units (GPUs) is a natural way of\nincreasing its efficiency.
In this work we study the effects of reduced\ncomputing precision as well as GPU-based hardware intrinsic functions on the\nspeedup and accuracy of the MPR-based prediction and explore which aspects of\nthe simulation can potentially benefit the most from the reduced precision. It\nis found that, particularly for massive data sets, a thoughtful precision\nsetting of the GPU implementation can significantly increase the computational\nefficiency, while incurring little to no degradation in the prediction\naccuracy.", "category": "physics_comp-ph" }, { "text": "Influence of rotational instability on the polarization structure of\n SrTiO3: The k-space polarization structure and its strain response in SrTiO3 with\nrotational instability are studied using a combination of first-principles\ndensity functional calculations, the modern theory of polarization, and an analytic\nWannier-function formulation. (1) As one outcome of this study, we rigorously\nprove, both numerically and analytically, that a folding effect exists in\nthe polarization structure. (2) After eliminating the folding effect, we find that\nthe polarization structure for SrTiO3 with rotational instability is still\nconsiderably different from that for non-rotational SrTiO3, revealing that\nthe polarization structure is sensitive to the structural distortion of oxygen-octahedra\nrotation and promises to be an effective tool for studying material properties.\n(3) Furthermore, from the polarization structure we determine the microscopic\nWannier-function interactions in SrTiO3.
These interactions are found to vary\nsignificantly with and without oxygen-octahedra rotation.", "category": "physics_comp-ph" }, { "text": "A Hybridizable Discontinuous Galerkin solver for the Grad-Shafranov\n equation: In axisymmetric fusion reactors, the equilibrium magnetic configuration can\nbe expressed in terms of the solution to a semi-linear elliptic equation known\nas the Grad-Shafranov equation, the solution of which determines the poloidal\ncomponent of the magnetic field. When the geometry of the confinement region is\nknown, the problem becomes an interior Dirichlet boundary value problem. We\npropose a high order solver based on the Hybridizable Discontinuous Galerkin\nmethod. The resulting algorithm (1) provides high order of convergence for the\nflux function and its gradient, (2) incorporates a novel method for handling\npiecewise smooth geometries by extension from polygonal meshes, (3) can handle\ngeometries with non-smooth boundaries and x-points, and (4) deals with the\nsemi-linearity through an accelerated two-grid fixed-point iteration. The\neffectiveness of the algorithm is verified with computations for cases where\nanalytic solutions are known on configurations similar to those of actual\ndevices (ITER with single null and double null divertor, NSTX, ASDEX upgrade,\nand Field Reversed Configurations).", "category": "physics_comp-ph" }, { "text": "ODEN: A Framework to Solve Ordinary Differential Equations using\n Artificial Neural Networks: We explore in detail a method to solve ordinary differential equations using\nfeedforward neural networks. We prove a specific loss function, which does not\nrequire knowledge of the exact solution, to be a suitable standard metric to\nevaluate neural networks' performance. Neural networks are shown to be\nproficient at approximating continuous solutions within their training domains.\nWe illustrate neural networks' ability to outperform traditional standard\nnumerical techniques. 
Training is thoroughly examined and three universal\nphases are found: (i) a prior tangent adjustment, (ii) a curvature fitting, and\n(iii) a fine-tuning stage. The main limitation of the method is the nontrivial\ntask of finding the appropriate neural network architecture and the choice of\nneural network hyperparameters for efficient optimization. However, we observe\nan optimal architecture that matches the complexity of the differential\nequation. A user-friendly and adaptable open-source code (ODE$\\mathcal{N}$) is\nprovided on GitHub.", "category": "physics_comp-ph" }, { "text": "Spectral-Domain Computation of Fields Radiated by Sources in\n Non-Birefringent Anisotropic Media: We derive the key expressions to robustly address the eigenfunction\nexpansion-based analysis of electromagnetic (EM) fields produced by current\nsources within planar non-birefringent anisotropic medium (NBAM) layers. In\nNBAM, the highly symmetric permeability and permittivity tensors can induce\ndirectionally-dependent, but polarization independent, propagation properties\nsupporting \"degenerate\" characteristic polarizations, i.e. four\nlinearly-independent eigenvectors associated with only two (rather than four)\nunique, non-defective eigenvalues. We first explain problems that can arise\nwhen the source(s) specifically reside within NBAM planar layers when using\ncanonical field expressions. To remedy these problems, we exhibit alternative\nspectral-domain field expressions, immune to such problems, that form the\nfoundation for a robust eigenfunction expansion-based analysis of time-harmonic\nEM radiation and scattering within such type of planar-layered media. 
Numerical\nresults demonstrate the high accuracy and stability achievable using this\nalgorithm.", "category": "physics_comp-ph" }, { "text": "Globally Conservative, Hybrid Self-Adjoint Angular Flux and\n Least-Squares Method Compatible with Void: In this paper, we derive a method for the second-order form of the transport\nequation that is both globally conservative and compatible with voids, using\nContinuous Finite Element Methods (CFEM). The main idea is to use the\nLeast-Squares (LS) form of the transport equation in the void regions and the\nSelf-Adjoint Angular Flux (SAAF) form elsewhere. While the SAAF formulation is\nglobally conservative, the LS formulation needs a correction in void. The price\nto pay for this fix is the loss of symmetry of the bilinear form. We first\nderive this Conservative LS (CLS) formulation in void. Second, we combine the\nSAAF and CLS forms and end up with a hybrid SAAF-CLS method, having the\ndesired properties. We show that extending the theory to near-void regions is a\nminor complication and can be done without affecting the global conservation of\nthe scheme. Being angular discretization agnostic, this method can be applied\nto both discrete ordinates (SN) and spherical harmonics (PN) methods. However,\nsince a globally conservative and void-compatible second-order form already\nexists for SN (Wang et al. 2014), but is believed to be new for PN, we focus\nmost of our attention on the latter angular discretization. We implement and\ntest our method in Rattlesnake within the Multiphysics Object Oriented\nSimulation Environment (MOOSE) framework.
Results comparing it to other methods\nare presented.", "category": "physics_comp-ph" }, { "text": "A common lines approach for ab-initio modeling of molecules with\n tetrahedral and octahedral symmetry: A main task in cryo-electron microscopy single particle reconstruction is to\nfind a three-dimensional model of a molecule given a set of its randomly\noriented and positioned noisy projection-images. In this work, we propose an\nalgorithm for ab-initio reconstruction for molecules with tetrahedral or\noctahedral symmetry. The algorithm exploits the multiple common lines between\neach pair of projection-images as well as self common lines within each image.\nIt is robust to noise in the input images as it integrates the information from\nall images at once. The applicability of the proposed algorithm is demonstrated\nusing experimental cryo-electron microscopy data.", "category": "physics_comp-ph" }, { "text": "Advanced Lanczos diagonalization for models of quantum disordered\n systems: An application of an effective numerical algorithm for solving eigenvalue\nproblems which arise in modelling electronic properties of quantum disordered\nsystems is considered. We study the electron states at the\nlocalization-delocalization transition induced by a random potential in the\nframework of the Anderson lattice model. The computation of the interior of the\nspectrum and corresponding wavefunctions for very sparse, Hermitian matrices of\nsizes exceeding 10^6 x 10^6 is performed by a Lanczos-type method especially\nmodified for investigating statistical properties of energy levels and\neigenfunction amplitudes.", "category": "physics_comp-ph" }, { "text": "The PUMAS library: The PUMAS library is a transport engine for muon and tau leptons in matter.\nIt can operate with a configurable level of detail, from a fast deterministic\nCSDA mode to a detailed Monte Carlo simulation. A peculiarity of PUMAS is that\nit is revertible, i.e.
it can run in forward or in backward mode. Thus, the\nPUMAS library is particularly well suited for muography applications. In the\npresent document, we provide a detailed description of PUMAS, of its physics\nand of its implementation.", "category": "physics_comp-ph" }, { "text": "As2S3, As2Se3 and As2Te3 nanosheets: Superstretchable semiconductors\n with anisotropic carrier mobilities and optical properties: In this work, density functional theory calculations were carried out to\nexplore the mechanical response, dynamical/thermal stability,\nelectronic/optical properties and photocatalytic features of monoclinic As2X3\n(X=S, Se and Te) nanosheets. Acquired phonon dispersions and ab-initio\nmolecular dynamics results confirm the stability of the studied nanomembranes.\nObservation of relatively weak interlayer interactions suggests that\nexfoliation techniques can potentially be employed to fabricate nanomembranes\nfrom their bulk counterparts. The studied nanosheets were found to show highly\nanisotropic mechanical properties. Notably, the new As2Te3 2D lattice predicted by\nthis study is found to exhibit unique superstretchability, which outperforms\nother 2D materials. In addition, our results on the basis of the HSE06 functional\nreveal an indirect semiconducting electronic nature for the monolayer to\nfew-layer and bulk structures of As2X3, in which a moderate decreasing trend in\nthe band-gap with increasing thickness can be established. The studied\nnanomaterials were found to show remarkably high and anisotropic carrier\nmobilities. Moreover, optical results show that these nanosheets can absorb\nvisible light. In particular, the valence and conduction band edge positions,\nhigh carrier mobilities and optical responses of As2Se3 nanosheets were found\nto be highly desirable for solar water splitting.
The comprehensive vision\nprovided by this study not only confirms the stability and highly attractive\nelectronic and optical characteristics of As2S3, As2Se3 and As2Te3 nanosheets,\nbut also offers new possibilities to design superstretchable nanodevices.", "category": "physics_comp-ph" }, { "text": "Lattice Boltzmann modeling of multiphase flows at large density ratio\n with an improved pseudopotential model: Owing to its conceptual simplicity and computational efficiency, the\npseudopotential multiphase lattice Boltzmann (LB) model has attracted\nsignificant attention since its emergence. In this work, we aim to extend the\npseudopotential LB model to simulate multiphase flows at large density ratio\nand relatively high Reynolds number. First, based on our recent work [Li et\nal., Phys. Rev. E. 86, 016709 (2012)], an improved forcing scheme is proposed\nfor the multiple-relaxation-time pseudopotential LB model in order to achieve\nthermodynamic consistency and large density ratio in the model. Next, through\ninvestigating the effects of the parameter a in the Carnahan-Starling equation\nof state, we find that the interface thickness is approximately proportional to\n1/sqrt(a). Using a smaller a will lead to a wider interface thickness, which\ncan reduce the spurious currents and enhance the numerical stability of the\npseudopotential model at large density ratio. Furthermore, it is found that a\nlower liquid viscosity can be gained in the pseudopotential model by increasing\nthe kinematic viscosity ratio between the vapor and liquid phases. The improved\npseudopotential LB model is numerically validated via simulations of a\nstationary droplet and droplet oscillation. Using the improved model as well as\nthe above treatments, numerical simulations of droplet splashing on a thin\nliquid film are conducted at a density ratio in excess of 500 with Reynolds\nnumbers ranging from 40 to 1000.
The dynamics of droplet splashing is correctly\nreproduced and the predicted spread radius is found to obey the power law\nreported in the literature.", "category": "physics_comp-ph" }, { "text": "Towards learning Lattice Boltzmann collision operators: In this work we explore the possibility of learning collision\noperators for the Lattice Boltzmann Method from data using a deep learning approach. We\ncompare a hierarchy of designs of the neural network (NN) collision operator\nand evaluate the performance of the resulting LBM method in reproducing the time\ndynamics of several canonical flows. In the current study, as a first attempt\nto address the learning problem, the data was generated by a single-relaxation-time\nBGK operator. We demonstrate that a vanilla NN architecture has very limited\naccuracy. On the other hand, by embedding physical properties, such as\nconservation laws and symmetries, it is possible to dramatically increase the\naccuracy by several orders of magnitude and correctly reproduce the short- and\nlong-time dynamics of standard fluid flows.", "category": "physics_comp-ph" }, { "text": "Numerical Methods for Flow in Fractured Porous Media: In this work we present the mathematical models for single-phase flow in\nfractured porous media. An overview of the most common approaches is\nconsidered, which includes continuous fracture models and discrete fracture\nmodels.
For the latter, we discuss strategies developed in the literature\nfor their numerical solution, mainly related to the geometrical relation between\nthe fracture and porous-media grids.", "category": "physics_comp-ph" }, { "text": "A computational method for modeling arbitrary junctions employing\n different surface integral equation formulations for three-dimensional\n scattering and radiation problems: This paper presents a new method, based on the well-known method of moments\n(MoM), for the numerical electromagnetic analysis of scattering and radiation\nfrom metallic or dielectric structures, or both structure types in the same\nsimulation, that are in contact with other metallic or dielectric structures.\nThe proposed method for solving the MoM junction problem consists of two\nseparate algorithms, one of which comprises a generalization for bodies in\ncontact of the surface integral equation (SIE) formulations. Unlike some other\npublished SIE generalizations in the field of computational electromagnetics,\nthis generalization does not require duplicating unknowns on the dielectric\nseparation surfaces. Additionally, this generalization is applicable to any\nordinary single-scatterer SIE formulation employed as a baseline. The other\nalgorithm deals with enforcing boundary conditions and Kirchhoff's Law,\nrelating the surface current flow across a junction edge. Two important\nfeatures inherent to this latter algorithm are a mathematically compact\ndescription in matrix form, and, importantly from a software engineering point\nof view, an easy implementation in existing MoM codes, which makes the debugging\nprocess more comprehensible.
A practical example involving a real grounded\nmonopole antenna for airplane-satellite communication is analyzed for\nvalidation purposes by comparing with precise measurements covering different\nelectrical sizes.", "category": "physics_comp-ph" }, { "text": "Parallel Algorithms for Successive Convolution: In this work, we consider alternative discretizations for PDEs which use\nexpansions involving integral operators to approximate spatial derivatives.\nThese constructions use explicit information within the integral terms, but\ntreat boundary data implicitly, which contributes to the overall speed of the\nmethod. This approach is provably unconditionally stable for linear problems\nand stability has been demonstrated experimentally for nonlinear problems.\nAdditionally, it is matrix-free in the sense that it is not necessary to invert\nlinear systems and iteration is not required for nonlinear terms. Moreover, the\nscheme employs a fast summation algorithm that yields a method with a\ncomputational complexity of $\\mathcal{O}(N)$, where $N$ is the number of mesh\npoints along a direction. While much work has been done to explore the theory\nbehind these methods, their practicality in large scale computing environments\nis a largely unexplored topic. In this work, we explore the performance of\nthese methods by developing a domain decomposition algorithm suitable for\ndistributed memory systems along with shared memory algorithms. As a first\npass, we derive an artificial CFL condition that enforces a nearest-neighbor\ncommunication pattern and briefly discuss possible generalizations. We also\nanalyze several approaches for implementing the parallel algorithms by\noptimizing predominant loop structures and maximizing data reuse. Using a\nhybrid design that employs MPI and Kokkos for the distributed and shared memory\ncomponents of the algorithms, respectively, we show that our methods are\nefficient and can sustain an update rate $> 1\\times10^8$ DOF/node/s. 
We provide\nresults that demonstrate the scalability and versatility of our algorithms\nusing several different PDE test problems, including a nonlinear example, which\nemploys an adaptive time-stepping rule.", "category": "physics_comp-ph" }, { "text": "KMCLib: A general framework for lattice kinetic Monte Carlo (KMC)\n simulations: KMCLib is a general framework for lattice kinetic Monte Carlo (KMC)\nsimulations. The program can handle simulations of the diffusion and reaction\nof millions of particles in one, two, or three dimensions, and is designed to\nbe easily extended and customized by the user to allow for the development of\ncomplex custom KMC models for specific systems without having to modify the\ncore functionality of the program. Analysis modules and on-the-fly elementary\nstep diffusion rate calculations can be implemented as plugins following a\nwell-defined API. The plugin modules are loosely coupled to the core KMCLib\nprogram via the Python scripting language. KMCLib is written as a Python module\nwith a backend C++ library. After initial compilation of the backend library\nKMCLib is used as a Python module; input to the program is given as a Python\nscript executed using a standard Python interpreter. We give a detailed\ndescription of the features and implementation of the code and demonstrate its\nscaling behavior and parallel performance with a simple one-dimensional A-B-C\nlattice KMC model and a more complex three-dimensional lattice KMC model of\noxygen-vacancy diffusion in a fluorite structured metal oxide. 
KMCLib can keep\ntrack of individual particle movements and includes tools for mean square\ndisplacement analysis, and is therefore particularly well suited for studying\ndiffusion processes at surfaces and in solids.", "category": "physics_comp-ph" }, { "text": "Pressure Correction in Density Functional Theory Calculations: First-principles calculations based on density functional theory have been\nwidely used in studies of the structural, thermoelastic, rheological, and\nelectronic properties of earth-forming materials. The exchange-correlation\nterm, however, is implemented based on various approximations, and this is\nbelieved to be the main reason for discrepancies between experiments and\ntheoretical predictions. In this work, by using periclase MgO as a prototype\nsystem, we examine the discrepancies in pressure and Kohn-Sham energy that are\ndue to the choice of the exchange-correlation functional. For instance, we\nchoose the local density approximation and the generalized gradient approximation. We\nperform extensive first-principles calculations at various temperatures and\nvolumes and find that the exchange-correlation-based discrepancies in Kohn-Sham\nenergy and pressure should be independent of temperature. This implies that the\nphysical quantities, such as the equation of state, heat capacity, and the\nGr\\\"{u}neisen parameter, estimated by a particular choice of\nexchange-correlation functional can easily be transformed into those estimated\nby another exchange-correlation functional. Our findings may be helpful in\nproviding useful constraints on mineral properties at deep Earth thermodynamic\nconditions.", "category": "physics_comp-ph" }, { "text": "Early Warning Signals for Bifurcations Embedded in High Dimensions: Recent work has highlighted the utility of methods for early warning signal\ndetection in dynamic systems approaching critical tipping thresholds.
Often\nthese tipping points resemble local bifurcations, whose low dimensional\ndynamics can play out on a manifold embedded in a much higher dimensional state\nspace. In many cases of practical relevance, the form of this embedding is\npoorly understood or entirely unknown. This paper explores how measurement of\nthe critical phenomena that generically precede such bifurcations can be used\nto make inferences about the properties of their embeddings, and, conversely,\nhow prior knowledge about the mechanism of bifurcation can robustify\npredictions of an oncoming tipping event. These modes of analysis are first\ndemonstrated on a simple fluid flow system undergoing a Hopf bifurcation. The\nsame approach is then applied to data associated with the West African monsoon\nshift, with results corroborated by existing models of the same system. This\nexample highlights the effectiveness of the methodology even when applied to\ncomplex climate data, and demonstrates how a well-resolved spatial structure\nassociated with the onset of atmospheric instability can be inferred purely\nfrom time series measurements.", "category": "physics_comp-ph" }, { "text": "Variational solutions for Resonances by a Finite-Difference Grid Method: We demonstrate that the finite difference grid method (FDM) can be simply\nmodified to satisfy the variational principle and enable calculations of both\nreal and complex poles of the scattering matrix. These complex poles are known\nas resonances and provide the energies and inverse lifetimes of the system\nunder study (e.g., molecules) in metastable states. This approach allows\nincorporating finite grid methods in the study of resonance phenomena in\nchemistry. Possible applications include the calculation of electronic\nautoionization resonances which occur when ionization takes place as the bond\nlengths of the molecule are varied. 
Alternatively, the method can be applied to\ncalculate nuclear predissociation resonances which are associated with\nactivated complexes with finite lifetimes.", "category": "physics_comp-ph" }, { "text": "Phase demodulation with iterative Hilbert transform embeddings: We propose an efficient method for demodulation of phase modulated signals\nvia iterated Hilbert transform embeddings. We show that while a usual approach\nbased on one application of the Hilbert transform provides only an\napproximation to a proper phase, with iterations the accuracy is essentially\nimproved, up to precision limited mainly by the discretization effects. We\ndemonstrate that the method is applicable to arbitrarily complex waveforms, and\nto modulations fast compared to the basic frequency. Furthermore, we develop a\nperturbative theory applicable to simple cosine waveforms, showing convergence\nof the technique.", "category": "physics_comp-ph" }, { "text": "Understanding Kernel Ridge Regression: Common behaviors from simple\n functions to density functionals: Accurate approximations to density functionals have recently been obtained\nvia machine learning (ML). By applying ML to a simple function of one variable\nwithout any random sampling, we extract the qualitative dependence of errors on\nhyperparameters. We find universal features of the behavior in extreme limits,\nincluding both very small and very large length scales, and the noise-free\nlimit. We show how such features arise in ML models of density functionals.", "category": "physics_comp-ph" }, { "text": "Simulation of 2D ballistic deposition of porous nanostructured\n thin-films: A \"two-dimensional ballistic deposition\" (2D-BD) code has been developed to\nstudy the geometric effects in ballistic deposition of thin-film growth.\nCircular discs are used as depositing specie to understand the shadowing\neffects during the evolution of a thin-film. 
We carried out the 2D-BD\nsimulations for angles of deposition $20^\circ$-$80^\circ$ in steps of $10^\circ$.\nStandard deviations of $1^\circ$, $2^\circ$, $4^\circ$, $6^\circ$ and $10^\circ$ are used for each\nangle of deposition with a disc size of $1.5~\AA$ to understand\ntheir effect on the microstructure of the thin-films. Angle of growth, porosity\nand surface roughness properties have been studied for the aforementioned\nangles of deposition and their standard deviations. Ballistic deposition\nsimulations with discs of different sizes have been carried out to\nunderstand the effect of size in ballistic deposition. The results from this\ncode are compared with the available theoretical and experimental results. The\ncode is used to simulate a collimated glancing angle deposition (C-GLAD)\nexperiment. We obtain a good qualitative match for various features of the\ndeposits.", "category": "physics_comp-ph" }, { "text": "Regularization of Complex Langevin Method: The complex Langevin method, a numerical method used to compute the ensemble\naverage with a complex partition function, often suffers from runaway\ninstability. We study the regularization of the complex Langevin method via\naugmenting the action with a stabilization term. Since the regularization\nintroduces biases to the numerical result, two approaches, named the 2R and 3R\nmethods, are introduced to recover the unbiased result. The 2R method\nsupplements the regularization with regression to estimate the unregularized\nensemble average, and the 3R method reduces the computational cost by coupling\nthe regularization with a reweighting strategy before regression. Both methods\ncan be generalized to the SU(n) theory and are assessed from several\nperspectives.
Several numerical experiments in lattice field theory are\ncarried out to show the effectiveness of our approaches.", "category": "physics_comp-ph" }, { "text": "A Theoretical Case Study of the Generalisation of Machine-learned\n Potentials: Machine-learned interatomic potentials (MLIPs) are typically trained on\ndatasets that encompass a restricted subset of possible input structures, which\npresents a potential challenge for their generalization to a broader range of\nsystems outside the training set. Nevertheless, MLIPs have demonstrated\nimpressive accuracy in predicting forces and energies in simulations involving\nintricate and complex structures. In this paper we aim to take steps towards\nrigorously explaining the excellent observed generalisation properties of\nMLIPs. Specifically, we offer a comprehensive theoretical and numerical\ninvestigation of the generalization of MLIPs in the context of dislocation\nsimulations. We quantify precisely how the accuracy of such simulations is\ndirectly determined by a few key factors: the size of the training structures,\nthe choice of training observations (e.g., energies, forces, virials), and the\nlevel of accuracy achieved in the fitting process. Notably, our study reveals\nthe crucial role of fitting virials in ensuring the consistency of MLIPs for\ndislocation simulations. Our series of careful numerical experiments,\nencompassing screw, edge, and mixed dislocations, supports existing best\npractices in the MLIPs literature but also provides new insights into the\ndesign of data sets and loss functions.", "category": "physics_comp-ph" }, { "text": "Modelling long-range interactions in multiscale simulations of\n ferromagnetic materials: Atomistic-continuum multiscale modelling is becoming an increasingly popular\ntool for simulating the behaviour of materials due to its computational\nefficiency and reliable accuracy.
In the case of ferromagnetic materials, the\natomistic approach handles the dynamics of spin magnetic moments of individual\natoms, while the continuum approximations operate with volume-averaged\nquantities, such as magnetisation. One of the challenges for multiscale models\nin relation to the physics of ferromagnets is the existence of long-range\ndipole-dipole interactions between spins. The aim of the present paper is to\ndemonstrate a way of including these interactions into existing\natomistic-continuum coupling methods based on the partitioned-domain and the\nupscaling strategies. This is achieved by modelling the demagnetising field\nexclusively at the continuum level and coupling it to both scales. Such an\napproach relies on the atomistic expression for the magnetisation field\nconverging to the continuum expression when the interatomic spacing approaches\nzero, which is demonstrated in this paper.", "category": "physics_comp-ph" }, { "text": "Fully-Tensorial Elastic-Wave Mode-Solver in FEniCS for Stimulated\n Brillouin Scattering Modeling: A framework for simulating the elastic-wave modes in waveguides, taking into\naccount the full tensorial nature of the stiffness tensor, is presented and\nimplemented in the open-source finite element solver, FEniCS. Various\napproximations of the elastic wave equation used in the stimulated Brillouin\nscattering literature are implemented and their validity and applicability are\ndiscussed. The elastic mode-solver is also coupled with an electromagnetic\ncounterpart to study the influence of elastic anisotropies on Brillouin gain.", "category": "physics_comp-ph" }, { "text": "Benchmark Computation of Morphological Complexity in the Functionalized\n Cahn-Hilliard Gradient Flow: Reductions of the self-consistent mean field theory model of amphiphilic\nmolecules in solvent lead to a singular family of functionalized Cahn-Hilliard\n(FCH) energies.
We modify the energy, removing singularities to stabilize the\ncomputation of the gradient flows, and develop a series of benchmark problems\nthat emulate the \"morphological complexity\" observed in experiments. These\nbenchmarks investigate the delicate balance between the rate of arrival of\namphiphilic materials onto an interface and a least energy mechanism to\naccommodate the arriving mass. The result is a trichotomy of responses in which\ntwo-dimensional interfaces grow by one of three mechanisms: regularized motion\nagainst curvature, pearling bifurcations, or curve-splitting directly into\nnetworks of interfaces. We evaluate a number of schemes that use second-order\nBDF2-type time stepping coupled with Fourier pseudo-spectral spatial\ndiscretization. The BDF2-type schemes are based either on a fully implicit time\ndiscretization with a preconditioned steepest descent (PSD) nonlinear solver or\non a linearly implicit time discretization based on the standard\nimplicit-explicit (IMEX) and the scalar auxiliary variable (SAV) approaches. We\nadd an exponential time differencing (ETD) scheme for comparison purposes. All\nschemes use adaptive time-stepping to achieve a fixed local truncation error\ntarget. Each scheme requires proper \"preconditioning\" to achieve robust\nperformance that can enhance efficiency by several orders of magnitude. The\nnonlinear PSD scheme achieves the smallest global discretization error at fixed\nlocal truncation error; however, the IMEX and SAV schemes are the most\ncomputationally efficient as measured by the number of FFT calls required to\nachieve a desired global error.", "category": "physics_comp-ph" }, { "text": "Conformational Control of Mechanical Networks: Understanding conformational change is crucial for programming and\ncontrolling the function of many mechanical systems such as allosteric enzymes\nand tunable metamaterials.
Of particular interest is the relationship between\nthe network topology or geometry and the specific motions observed under\ncontrolling perturbations. We study this relationship in mechanical networks of\n2-D and 3-D Maxwell frames composed of point masses connected by rigid rods\nrotating freely about the masses. We first develop simple principles that yield\nall bipartite network topologies and geometries that give rise to an\narbitrarily specified instantaneous and finitely deformable motion in the\nmasses as the sole non-rigid-body zero mode. We then extend these principles to\ncharacterize networks that simultaneously yield multiple specified zero modes,\nand create large networks by coupling individual modules. These principles are\nthen used to characterize and design networks with useful material (negative\nPoisson ratio) and mechanical (targeted allosteric response) functions.", "category": "physics_comp-ph" }, { "text": "Pebble bed pebble motion: Simulation and Applications: This dissertation presents a method for simulating the motion of the pebbles\nin a pebble bed reactor (PBR). A new mechanical motion simulator, PEBBLES,\nefficiently simulates the key elements of the motion of the pebbles in a PBR.\nThis model simulates gravitational force and contact forces, including kinetic\nand true static friction. It is used for a variety of tasks including simulation\nof the effect of earthquakes on a PBR, calculation of packing fractions, Dancoff\nfactors, pebble wear and the pebble force on the walls. The simulator includes a\nnew differential static friction model for the varied geometries of PBRs. A new\nstatic friction benchmark was devised by analytically solving the mechanics\nequations to determine the minimum pebble-to-pebble friction and\npebble-to-surface friction for a five-pebble pyramid.
This pyramid check, as\nwell as a comparison to the Janssen formula, was used to test the new static\nfriction equations.\n Because larger pebble bed simulations involve hundreds of thousands of\npebbles and long periods of time, PEBBLES runs on both shared memory and\ndistributed memory architectures. For the shared memory architecture, the\ncode uses a new O(n) lock-less parallel collision detection algorithm to\ndetermine which pebbles are likely to be in contact.\n The PEBBLES code provides new capabilities for understanding and optimizing\nPBRs. The PEBBLES code has provided the pebble motion data required to\ncalculate the motion of pebbles during a simulated earthquake. The PEBBLES code\nprovides the ability to determine the contact forces and the lengths of motion\nin contact. This information, combined with the proper wear coefficients, can be\nused to determine the dust production from mechanical wear. These new\ncapabilities enhance the understanding of PBRs, and the capabilities of the\ncode will allow future improvements in understanding.", "category": "physics_comp-ph" }, { "text": "Co-located diffuse approximation method for two dimensional\n incompressible channel flows: The main contribution of this paper is the formulation of a diffuse\napproximation method (DAM) for two-dimensional channel flows. The proposed\nmethod is based on the vorticity-streamfunction formulation. The DAM, which\nestimates derivatives of a scalar field, has the remarkable advantage of\nworking on discretization points (thus avoiding mesh generation). It has been\nshown that the DAM is much better than the finite element method for the\ncomputation of gradients [1-2]. In a previous paper [3], we have shown that it\ncan be used to solve laminar natural convection problems.
In this work, we discuss the\napplicability of this method to channel flows with a particular emphasis on the\nform of the weighting function.", "category": "physics_comp-ph" }, { "text": "A Truncation Error Estimation Scheme for the Finite Volume Method on\n Unstructured Meshes: This work is an attempt to develop an approximate scheme for estimating the\nvolume-based truncation errors in the finite volume analysis of laminar flows.\nThe volume-based truncation error is the net flow error across the faces of a\ncontrol volume. Unfortunately, truncation error is not a natural outcome of the\nfinite volume solution and needs to be estimated separately. Previous works in\nthe literature estimate truncation error using higher order interpolation\nschemes, higher order discretization schemes, or neglected terms in the\ndiscretization scheme. The first two approaches become complicated on general\nunstructured meshes and the third approach provides inaccurate results. This\nwork proposes a truncation error estimation scheme, which is based on the third\napproach, but provides more accurate results compared to the existing results\nin the literature. The potential application of such a truncation error\nestimation scheme is in mesh adaptation.", "category": "physics_comp-ph" }, { "text": "GENFIRE: A generalized Fourier iterative reconstruction algorithm for\n high-resolution 3D imaging: Tomography has made a radical impact on diverse fields ranging from the study\nof 3D atomic arrangements in matter to the study of human health in medicine.\nDespite its very diverse applications, the core of tomography remains the same,\nthat is, a mathematical method must be implemented to reconstruct the 3D\nstructure of an object from a number of 2D projections. In many scientific\napplications, however, the number of projections that can be measured is\nlimited due to geometric constraints, tolerable radiation dose and/or\nacquisition speed.
Thus it becomes an important problem to obtain the\nbest-possible reconstruction from a limited number of projections. Here, we\npresent the mathematical implementation of a tomographic algorithm, termed\nGENeralized Fourier Iterative REconstruction (GENFIRE). By iterating between\nreal and reciprocal space, GENFIRE searches for a global solution that is\nconcurrently consistent with the measured data and general physical\nconstraints. The algorithm requires minimal human intervention and also\nincorporates angular refinement to reduce the tilt angle error. We demonstrate\nthat GENFIRE can produce superior results relative to several other popular\ntomographic reconstruction techniques, both in numerical simulations and\nexperimentally, by reconstructing the 3D structure of a porous material and a\nfrozen-hydrated marine cyanobacterium. Equipped with a graphical user\ninterface, GENFIRE is freely available from our website and is expected to find\nbroad applications across different disciplines.", "category": "physics_comp-ph" }, { "text": "Multimode non-Hermitian framework for third harmonic generation in\n nonlinear photonic systems comprising 2D materials: Resonant structures in modern nanophotonics are non-Hermitian (leaky and\nlossy), and support quasinormal modes. Moreover, contemporary cavities\nfrequently include 2D materials to exploit and resonantly enhance their\nnonlinear properties or provide tunability. Such materials add further modeling\ncomplexity due to their infinitesimally thin nature and strong dispersion.\nHere, a formalism for efficiently analyzing third harmonic generation (THG) in\nnanoparticles and metasurfaces incorporating 2D materials is proposed. It is\nbased on numerically calculating the quasinormal modes of the nanostructure; it\nis general and does not make any prior assumptions regarding the number of\nresonances involved in the conversion process, in contrast to conventional\ncoupled-mode theory approaches in the literature.
The capabilities of the\nframework are showcased via two selected examples: a single scatterer and a\nperiodic metasurface incorporating graphene for its high third-order\nnonlinearity. In both cases, excellent agreement with full-wave nonlinear\nsimulations is obtained. The proposed framework may constitute an invaluable\ntool for gaining physical insight into the frequency generation process in\nnano-optic structures and providing guidelines for achieving drastically\nenhanced THG efficiency.", "category": "physics_comp-ph" }, { "text": "Constrained Pressure-Temperature Residual (CPTR) Preconditioner\n Performance for Large-Scale Thermal CO2 Injection Simulation: This work studies the performance of a novel preconditioner, designed for\nthermal reservoir simulation cases and recently introduced in Roy et al. (2020)\nand Cremon et al. (2020), on large-scale thermal CO2 injection cases. For\nCarbon Capture and Sequestration (CCS) projects, CO2 injected under\nsupercritical conditions is typically tens of degrees colder than the\nreservoir. Thermal effects can have a significant impact on the simulation\nresults, but they also add many challenges for the solvers. More specifically,\nthe usual combination of an iterative linear solver (such as GMRES) and the\nConstrained Pressure Residual (CPR) physics-based block-preconditioner is known\nto perform rather poorly or fail to converge when thermal effects play a\nsignificant role. The Constrained Pressure-Temperature Residual (CPTR)\npreconditioner retains the 2x2 block structure (elliptic/hyperbolic) of CPR but\nincludes the temperature in the elliptic subsystem. The elliptic subsystem is\nnow formed by two equations, and is dealt with by the system-solver of\nBoomerAMG (from the HYPRE library). Then a global smoother, ILU(0), is applied\nto the full system to handle the local, hyperbolic temperature fronts.
We\nimplemented CPTR in the multi-physics solver GEOS and present results on\nvarious large-scale thermal CCS simulation cases, including both Cartesian and\nfully unstructured meshes, up to tens of millions of degrees of freedom. The\nCPTR preconditioner drastically reduces the number of GMRES iterations and the\nruntime, with cases that timed out after 24h with CPR now requiring a few hours\nwith CPTR. We present strong scaling results using hundreds of CPU cores for\nmultiple cases, and show close to linear scaling. CPTR is also virtually\ninsensitive to the thermal Peclet number (which compares advection and\ndiffusion effects) and is suitable for any thermal regime.", "category": "physics_comp-ph" }, { "text": "Matter flow method for alleviating checkerboard oscillations in\n triangular mesh SGH Lagrangian simulation: When an SGH Lagrangian scheme based on a triangular mesh is used to simulate\ncompressible hydrodynamics, the stiffness of the triangular mesh readily leads\nto cell-to-cell spatial oscillation of physical quantities (also called\n\"checkerboard oscillation\"). A matter flow method is proposed to alleviate the\noscillation of physical quantities caused by this triangular stiffness. The\nbasic idea of the method is to attribute the stiffness of the triangles to the\nfact that the edges of a triangular mesh cannot bend, and to compensate for the\nmissing edge-bending motion by means of matter flow.
Three effects are considered in our matter flow method: (1) transport of\nthe mass, momentum and energy carried by the moving matter; (2) the work done\non the element, since the flow of matter changes the specific volume of the\ngrid element; (3) the effect of matter flow on the strain rate in the element.\nNumerical experiments show that the proposed matter flow method can effectively\nalleviate the spatial oscillation of physical quantities.", "category": "physics_comp-ph" }, { "text": "Application of the Multi-Peaked Analytically Extended Function to\n Representation of Some Measured Lightning Currents: A multi-peaked form of the analytically extended function (AEF) is used for\napproximation of lightning current waveforms in this paper. The AEF function's\nparameters are estimated using the Marquardt least-squares method (MLSM), and\nthe general procedure for fitting the $p$-peaked AEF function to a waveform\nwith an arbitrary (finite) number of peaks is briefly described. This framework\nis used for obtaining parameters of 2-peaked waveforms typically present when\nmeasuring first negative stroke currents. Advantages, disadvantages and\npossible improvements of the approach are also discussed.", "category": "physics_comp-ph" }, { "text": "Learning physical properties of liquid crystals with deep convolutional\n neural networks: Machine learning algorithms have been available since the 1990s, but it is\nonly more recently that they have come into use in the physical sciences.\nWhile these algorithms have already proven to be useful in uncovering new\nproperties of materials and in simplifying experimental protocols, their usage\nin liquid crystals research is still limited. This is surprising because\noptical imaging techniques are often applied in this line of research, and it\nis precisely with images that machine learning algorithms have achieved major\nbreakthroughs in recent years.
Here we use convolutional neural networks to\nprobe several properties of liquid crystals directly from their optical images\nand without using manual feature engineering. By optimizing simple\narchitectures, we find that convolutional neural networks can predict physical\nproperties of liquid crystals with exceptional accuracy. We show that these\ndeep neural networks identify liquid crystal phases and predict the order\nparameter of simulated nematic liquid crystals almost perfectly. We also show\nthat convolutional neural networks identify the pitch length of simulated\nsamples of cholesteric liquid crystals and the sample temperature of an\nexperimental liquid crystal with very high precision.", "category": "physics_comp-ph" }, { "text": "ADI type preconditioners for the steady state inhomogeneous Vlasov\n equation: The purpose of the current work is to find numerical solutions of the steady\nstate inhomogeneous Vlasov equation. This problem has a wide range of\napplications in the kinetic simulation of non-thermal plasmas. However, the\ndirect application of either time stepping schemes or iterative methods (such\nas Krylov based methods like GMRES or relaxation schemes) is computationally\nexpensive. In the former case the slowest timescale in the system forces us to\nperform a long time integration while in the latter case a large number of\niterations is required. In this paper we propose a preconditioner based on an\nADI type splitting method. This preconditioner is then combined with both GMRES\nand Richardson iteration. The resulting numerical schemes scale almost ideally\n(i.e. the computational effort is proportional to the number of grid points).\nNumerical simulations show that this can result in a speedup of close\nto two orders of magnitude (even for intermediate grid sizes) with respect to\nthe unpreconditioned case.
In addition, we discuss the characteristics of\nthese numerical methods and show the results for a number of numerical\nsimulations.", "category": "physics_comp-ph" }, { "text": "Modeling of Laser wakefield acceleration in Lorentz boosted frame using\n EM-PIC code with spectral solver: Simulating laser wakefield acceleration (LWFA) in a Lorentz boosted frame in\nwhich the plasma drifts towards the laser with $v_b$ can speed up the simulation\nby factors of $\gamma^2_b=(1-v^2_b/c^2)^{-1}$. In these simulations the\nrelativistic drifting plasma inevitably induces a high frequency numerical\ninstability that contaminates the physics of interest. Various approaches have\nbeen proposed to mitigate this instability. One approach is to solve Maxwell's\nequations in Fourier space (a spectral solver), as this has been shown to\nsuppress the fastest growing modes of this instability in simple test problems\nusing a simple low-pass, ring (in two dimensions), or shell (in three\ndimensions) filter in Fourier space. We describe the development of a fully\nparallelized, multi-dimensional, particle-in-cell code that uses a spectral\nsolver to solve Maxwell's equations and that includes the ability to launch a\nlaser using a moving antenna. This new EM-PIC code is called UPIC-EMMA and it\nis based on the components of the UCLA PIC framework (UPIC). We show that by\nusing UPIC-EMMA, LWFA simulations in boosted frames with arbitrary\n$\gamma_b$ can be conducted without the presence of the numerical instability.\nWe also compare the results of a few LWFA cases for several values of\n$\gamma_b$, including lab frame simulations using OSIRIS, an EM-PIC code with a\nfinite difference time domain (FDTD) Maxwell solver. These comparisons include\ncases in both linear and nonlinear regimes.
We also investigate some issues\nassociated with numerical dispersion in lab and boosted frame simulations and\nbetween FDTD and spectral solvers.", "category": "physics_comp-ph" }, { "text": "A Novel Averaging Technique for Discrete Entropy-Stable Dissipation\n Operators for Ideal MHD: Entropy stable schemes can be constructed with a specific choice of the\nnumerical flux function. First, an entropy conserving flux is constructed.\nSecond, an entropy stable dissipation term is added to this flux to guarantee\ndissipation of the discrete entropy. Present works in the field of entropy\nstable numerical schemes are concerned with thorough derivations of entropy\nconservative fluxes for ideal MHD. However, as we show in this work, if the\ndissipation operator is not constructed in a very specific way, it cannot lead\nto a generally stable numerical scheme.\n The two main findings presented in this paper are that the entropy conserving\nflux of Ismail & Roe can easily break down for certain initial conditions\ncommonly found in astrophysical simulations, and that special care must be\ntaken in the derivation of a discrete dissipation matrix for an entropy stable\nnumerical scheme to be robust.\n We present a convenient novel averaging procedure to evaluate the entropy\nJacobians of the ideal MHD and the compressible Euler equations that yields a\ndiscretization with favorable robustness properties.", "category": "physics_comp-ph" }, { "text": "Uncertainty quantification of tabulated supercritical thermodynamics for\n compressible Navier-Stokes solvers: Non-ideal state equations are needed to compute a growing number of\nengineering-relevant problems. The additional computational overhead from the\ncomplex thermodynamics accounts for a significant portion of the total\ncomputation, especially in the near-critical or transcritical thermodynamic\nregimes.
A compromise between computational speed and the accuracy of the\nthermodynamic property evaluations results in a propagation of the error from\nthe thermodynamics to the hydrodynamic computations. This work proposes a\nsystematic error quantification and computational cost estimate of the various\napproaches to equation of state computation for use in compressible\nNavier-Stokes solvers in the supercritical regime. We develop a parallelized,\nhigh-order, finite volume solver with a highly modular thermodynamic\nimplementation to compute the compressible equations in conservative form.\nThree tabular approaches are investigated: homogeneous tabulation, block\nstructured adaptive mesh refinement tabulation, and an n-dimensional Bézier\nsurface patch on an adaptive structured mesh. We define a set of standardized\nerror metrics and evaluate the thermodynamic error, table size and\ncomputational expense for each approach. We also present an uncertainty\nquantification methodology for tabular equations of state.", "category": "physics_comp-ph" }, { "text": "On the Self-Consistent Event Biasing Schemes for Monte Carlo Simulations\n of Nanoscale MOSFETs: Different techniques of event biasing have been implemented in the\nparticle-based Monte Carlo simulations of a 15nm n-channel MOSFET. The primary\ngoal is to achieve enhancement in the channel statistics and faster convergence\nin the calculation of terminal current. Enhancement algorithms are especially\nuseful when the device behavior is governed by rare events in the carrier\ntransport process. After presenting a brief overview of the Monte Carlo\ntechnique for solving the Boltzmann transport equation, the basic steps of\nderiving the approach in the presence of both the initial and the boundary\nconditions have been discussed. In the derivation, the linearity of the\ntransport problem has been utilized first, where Coulomb forces between the\ncarriers are initially neglected.
The generalization of the approach for\nHartree carriers has been established in the iterative procedure of coupling\nwith the Poisson equation. It is shown that the weight of the particles, as\nobtained by biasing of the Boltzmann equation, survives between the successive\nsteps of solving the Poisson equation.", "category": "physics_comp-ph" }, { "text": "Spectrum-splitting approach for Fermi-operator expansion in all-electron\n Kohn-Sham DFT calculations: We present a spectrum-splitting approach to conduct all-electron Kohn-Sham\ndensity functional theory (DFT) calculations by employing Fermi-operator\nexpansion of the Kohn-Sham Hamiltonian. The proposed approach splits the\nsubspace containing the occupied eigenspace into a core-subspace, spanned by\nthe core eigenfunctions, and its complement, the valence-subspace, and thereby\nenables an efficient computation of the Fermi-operator expansion by reducing\nthe expansion to the valence-subspace projected Kohn-Sham Hamiltonian. The key\nideas used in our approach are: (i) employ Chebyshev filtering to compute a\nsubspace containing the occupied states followed by a localization procedure to\ngenerate non-orthogonal localized functions spanning the Chebyshev-filtered\nsubspace; (ii) compute the Kohn-Sham Hamiltonian projected onto the\nvalence-subspace; (iii) employ Fermi-operator expansion in terms of the\nvalence-subspace projected Hamiltonian to compute the density matrix,\nelectron-density and band energy. We demonstrate the accuracy and performance\nof the method on benchmark materials systems involving silicon nano-clusters up\nto 1330 electrons, a single gold atom and a six-atom gold nano-cluster. The\nbenchmark studies on silicon nano-clusters revealed a staggering five-fold\nreduction in the Fermi-operator expansion polynomial degree by using the\nspectrum-splitting approach for accuracies in the ground-state energies of\n$\sim 10^{-4}\,\mathrm{Ha/atom}$ with respect to reference calculations.
Further,\nnumerical investigations on gold suggest that spectrum splitting is\nindispensable for achieving meaningful accuracies when employing Fermi-operator\nexpansion.", "category": "physics_comp-ph" }, { "text": "An O(N) Method for Rapidly Computing Periodic Potentials Using\n Accelerated Cartesian Expansions: The evaluation of long-range potentials in periodic, many-body systems arises\nas a necessary step in the numerical modeling of a multitude of interesting\nphysical problems. Direct evaluation of these potentials requires O(N^2)\noperations and O(N^2) storage, where N is the number of interacting bodies. In\nthis work, we present a method, which requires O(N) operations and O(N)\nstorage, for the evaluation of periodic Helmholtz, Coulomb, and Yukawa\npotentials with periodicity in 1-, 2-, and 3-dimensions, using the method of\nAccelerated Cartesian Expansions (ACE). We present all aspects necessary to\neffect this acceleration within the framework of ACE, including the necessary\ntranslation operators, and appropriately modifying the hierarchical\ncomputational algorithm. We also present several results that validate the\nefficacy of this method with respect to both error convergence and cost\nscaling, and derive error bounds for one exemplary potential.", "category": "physics_comp-ph" }, { "text": "Efficient Field-Only Surface Integral Equations for Electromagnetics: In a recent paper, Klaseboer et al. (IEEE Trans. Antennas Propag., vol. 65,\nno. 2, pp. 972-977, Feb. 2017) developed a surface integral formulation of\nelectromagnetics that does not require working with integral equations that\nhave singular kernels. Instead of solving for the induced surface currents, the\nmethod involves surface integral solutions for 4 coupled Helmholtz equations:\none for each Cartesian component of the electric field E plus one for the scalar\nfunction r*E on the surface of scatterers. Here we improve on this approach by\nadvancing a formulation due to Yuffa et al.
(IEEE Trans. Antennas Propag., vol.\n66, no. 10, pp. 5274-5281, Oct. 2018) that solves for E and its normal\nderivative. Apart from a 25% reduction in problem size, the normal derivative\nof the field is often of interest in micro-photonic applications.", "category": "physics_comp-ph" }, { "text": "Computation of the solid-liquid interfacial free energy in hard spheres\n by means of thermodynamic integration: We used a thermodynamic integration scheme, which is specifically designed\nfor disordered systems, to compute the interfacial free energy of the\nsolid-liquid interface in the hard-sphere model. We separated the bulk\ncontribution to the total free energy from the interface contribution,\nperformed a finite-size scaling analysis and obtained for the (100)-interface\n$\gamma=0.591(11)k_{B}T\sigma^{-2}$.", "category": "physics_comp-ph" }, { "text": "Dynamics of ferrofluidic flow in the Taylor-Couette system with a small\n aspect ratio: We investigate the fundamental nonlinear dynamics of ferrofluidic\nTaylor-Couette flow - flow confined between two concentric independently\nrotating cylinders - for a small aspect ratio, by solving the\nferrohydrodynamical equations and carrying out a systematic bifurcation\nanalysis. Without a magnetic field, we find steady flow patterns previously\nobserved with a simple fluid, such as those containing normal one- or\ntwo-vortex cells, as well as anomalous one-cell and twin-cell flow states.\nHowever, when a symmetry-breaking transverse magnetic field is present, all\nflow states exhibit a stimulated, finite two-fold mode. Various bifurcations\nbetween steady and unsteady states can occur, corresponding to the transitions\nbetween the two-cell and one-cell states. While unsteady, axially oscillating\nflow states can arise, we also detect the emergence of new unsteady flow\nstates.
In particular, we uncover two new\nstates: one contains only the azimuthally oscillating solution in the\nconfiguration of the twin-cell flow state, and the other is a rotating flow\nstate. Topologically, these flow states are a limit cycle and a quasiperiodic\nsolution on a two-torus, respectively. The emergence of new flow states, in\naddition to those observed with a classical fluid, indicates richer but\npotentially more controllable dynamics in ferrofluidic flows, as such flow\nstates depend on the external magnetic field.", "category": "physics_comp-ph" }, { "text": "RLEKF: An Optimizer for Deep Potential with Ab Initio Accuracy: It is imperative to accelerate the training of neural network force fields\nsuch as Deep Potential, which usually require thousands of images based on\nfirst-principles calculations and a couple of days to generate an accurate\npotential energy surface. To this end, we propose a novel optimizer named\nreorganized layer extended Kalman filtering (RLEKF), an optimized version of\nglobal extended Kalman filtering (GEKF) with a strategy of splitting big and\ngathering small layers to overcome the $O(N^2)$ computational cost of GEKF.\nThis strategy provides an approximation of the dense weights error covariance\nmatrix with a sparse diagonal block matrix for GEKF. We implement both RLEKF\nand the baseline Adam in our $\alpha$Dynamics package and numerical experiments\nare performed on 13 unbiased datasets. Overall, RLEKF converges faster with\nslightly better accuracy. For example, a test on a typical system, bulk copper,\nshows that RLEKF converges faster in both the number of training epochs\n($\times$11.67) and wall-clock time ($\times$1.19). Besides, we theoretically\nprove that the updates of the weights converge and thus guard against the\ngradient explosion problem. Experimental results verify that RLEKF is not\nsensitive to the initialization of weights.
RLEKF sheds light on other AI-for-science\napplications where training a large neural network (with tens of thousands of\nparameters) is a bottleneck.", "category": "physics_comp-ph" }, { "text": "Dynamically coupling the non-linear Stokes equations with the Shallow\n Ice Approximation in glaciology: Description and first applications of the\n ISCAL method: We propose and implement a new method, called the Ice Sheet Coupled\nApproximation Levels (ISCAL) method, for simulation of ice sheet flow in large\ndomains over long time intervals. The method couples the exact, full Stokes\n(FS) equations with the Shallow Ice Approximation (SIA). The part of the domain\nwhere SIA is applied is determined automatically and dynamically based on\nestimates of the modeling error. For a three-dimensional model problem where\nthe number of degrees of freedom is comparable to a real-world application,\nISCAL performs almost an order of magnitude faster with only a small reduction in\naccuracy compared to a monolithic FS. Furthermore, ISCAL is shown to be able to\ndetect rapid dynamic changes in the flow. Three different error estimations are\napplied and compared. Finally, ISCAL is applied to the Greenland Ice Sheet,\nproving ISCAL to be a potentially valuable tool for the ice sheet modeling\ncommunity.
To address\nthis, OpenQSEI is developed using a MATLAB platform and shared in an online\ncommunity setting for continual algorithmic improvement. In this article, we\ndescribe the mathematical background of QSEI and demonstrate the basic\nfunctionalities of OpenQSEI with examples.", "category": "physics_comp-ph" }, { "text": "Random Numbers in Scientific Computing: An Introduction: Random numbers play a crucial role in science and industry. Many numerical\nmethods require the use of random numbers, in particular the Monte Carlo\nmethod. Therefore it is of paramount importance to have efficient random number\ngenerators. The differences, advantages and disadvantages of true and pseudo\nrandom number generators are discussed with an emphasis on the intrinsic\ndetails of modern and fast pseudo random number generators. Furthermore,\nstandard tests to verify the quality of the random numbers produced by a given\ngenerator are outlined. Finally, standard scientific libraries with built-in\ngenerators are presented, as well as different approaches to generate\nnonuniform random numbers. Potential problems that one might encounter when\nusing large parallel machines are discussed.", "category": "physics_comp-ph" }, { "text": "Robust self-assembly of nonconvex shapes in 2D: We present fast simulation methods for the self-assembly of complex shapes in\ntwo dimensions. The shapes are modeled via a general boundary curve and\ninteract via a standard volume term promoting overlap and an interpenetration\npenalty. To efficiently realize the Gibbs measure on the space of possible\nconfigurations we employ the hybrid Monte Carlo algorithm together with a\ncareful use of signed distance functions for energy evaluation.\n Motivated by the self-assembly of identical coat proteins of the tobacco\nmosaic virus which assemble into a helical shell, we design a particular\nnonconvex 2D model shape and demonstrate its robust self-assembly into a unique\nfinal state. 
Our numerical experiments reveal two essential prerequisites for\nthis self-assembly process: blocking and matching (i.e., local repulsion and\nattraction) of different parts of the boundary; and nonconvexity and handedness\nof the shape.", "category": "physics_comp-ph" }, { "text": "A modified projection approach to line mixing: This paper presents a simple approach to combine the high-resolution\nnarrowband features of some desired isolated line models together with the far\nwing behavior of the projection based strong collision (SC) method to line\nmixing which was introduced by Bulanin, Dokuchaev, Tonkov and Filippov. The\nmethod can be viewed in terms of a small diagonal perturbation of the SC\nrelaxation matrix providing the required narrowband accuracy close to the line\ncenters at the same time as the SC line coupling transfer rates are retained\nand can be optimally scaled to thermalize the radiator after impact. The method\ncan conveniently be placed in the framework of the Boltzmann-Liouville\ntransport equation where a rigorous diagonalization of the line mixing problem\nrequires that molecular phase and velocity changes are assumed to be\nuncorrelated. A detailed analysis for the general Doppler case is given based\non the first order Rosenkranz approximation, and which also provides the\npossibility to incorporate quadratically speed dependent parameters. Exact\nsolutions for pure pressure broadening and explicit Rosenkranz approximations\nare given in the case with velocity independent parameters (line frequency,\nstrength, width and shift) which can readily be retrieved from databases such\nas HITRAN for a large number of species. 
Numerical examples including\ncomparisons to published measured data are provided in two specific cases\nconcerning the absorption of carbon dioxide in its infrared band of asymmetric\nstretching, as well as of atmospheric water vapor and oxygen in relevant\nmillimeter bands.", "category": "physics_comp-ph" }, { "text": "Data analysis with R in an experimental physics environment: A software package has been developed to bridge the R analysis model with the\nconceptual analysis environment typical of radiation physics experiments. The\nnew package has been used in the context of a project for the validation of\nsimulation models, where it has demonstrated its capability to satisfy typical\nrequirements pertinent to the problem domain.", "category": "physics_comp-ph" }, { "text": "CRKSPH - A Conservative Reproducing Kernel Smoothed Particle\n Hydrodynamics Scheme: We present a formulation of smoothed particle hydrodynamics (SPH) that\nutilizes a first-order consistent reproducing kernel, a smoothing function that\nexactly interpolates linear fields with particle tracers. Previous formulations\nusing reproducing kernel (RK) interpolation have had difficulties maintaining\nconservation of momentum due to the fact that the RK kernels are not, in general,\nspatially symmetric. Here, we utilize a reformulation of the fluid equations\nsuch that mass, linear momentum, and energy are all rigorously conserved\nwithout any assumption about kernel symmetries, while additionally maintaining\napproximate angular momentum conservation. Our approach starts from a\nrigorously consistent interpolation theory, where we derive the evolution\nequations to enforce the appropriate conservation properties, at the sacrifice\nof full consistency in the momentum equation.
Additionally, by exploiting the\nincreased accuracy of the RK method's gradient, we formulate a simple limiter\nfor the artificial viscosity that reduces the excess diffusion normally\nincurred by the ordinary SPH artificial viscosity. Collectively, we call our\nsuite of modifications to the traditional SPH scheme Conservative Reproducing\nKernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods\n(such as preserving Galilean invariance and manifest conservation of mass,\nmomentum, and energy) while improving on many of the shortcomings of SPH,\nparticularly the overly aggressive artificial viscosity and zeroth-order\ninaccuracy. We compare CRKSPH to two different modern SPH formulations\n(pressure based SPH and compatibly differenced SPH), demonstrating the\nadvantages of our new formulation when modeling fluid mixing, strong shock, and\nadiabatic phenomena.", "category": "physics_comp-ph" }, { "text": "Predicting the phase behaviors of superionic water at planetary\n conditions: Most water in the universe may be superionic, and its thermodynamic and\ntransport properties are crucial for planetary science but difficult to probe\nexperimentally or theoretically. We use machine learning and free energy\nmethods to overcome the limitations of quantum mechanical simulations, and\ncharacterize hydrogen diffusion, superionic transitions, and phase behaviors of\nwater at extreme conditions. We predict that a close-packed superionic phase\nwith mixed stacking is stable over a wide temperature and pressure range, while\na body-centered cubic phase is only thermodynamically stable in a small window\nbut is kinetically favored. 
Our phase boundaries, which are consistent with the\nexisting, albeit scarce, experimental observations, help resolve the fractions of\ninsulating ice, different superionic phases, and liquid water inside ice\ngiants.", "category": "physics_comp-ph" }, { "text": "Performing highly efficient Minima Hopping structure predictions using\n the Atomic Simulation Environment (ASE): In the dynamic field of materials science, the quest to find optimal\nstructures with low potential energy is of great significance. Over the past\ntwo decades, the minima hopping algorithm has emerged as a successful tool in\nthis pursuit. We present a robust, user-friendly and efficient implementation\nof the minima hopping algorithm as a Python library, thereby significantly\nenhancing global structure optimization simulations. Our implementation\nsignificantly accelerates the exploration of potential energy surfaces,\nleveraging an MPI parallelization scheme that allows for multi-level\nparallelization. In this scheme, multiple minima hopping processes run\nsimultaneously, communicating their findings to a single database and,\ntherefore, sharing information with each other about which parts of the\npotential energy surface have already been explored. Multiple features from\nseveral existing implementations are also included, such as variable-cell-shape\nmolecular dynamics and combined atomic-position and cell-geometry optimization\nfor bulk systems, as well as enhanced temperature feedback and fragmentation\nfixing for clusters.
Finally, this implementation takes\nadvantage of the Atomic Simulation Environment (ASE) Python library, allowing\nfor high flexibility regarding the underlying energy and force evaluation.", "category": "physics_comp-ph" }, { "text": "Application of the Space-Time Method to Stimulated Raman Adiabatic\n Passage on the Simple Harmonic Oscillator: The space-time method is applied to a model system, the simple harmonic\noscillator in a laser field, to simulate the Stimulated Raman Adiabatic Passage\n(STIRAP) process. The space-time method is a computational theory first\nintroduced by Weatherford et al. to solve time-dependent systems with one\nboundary value, and was applied to an electron spin system with an invariant\nHamiltonian [Journal of Molecular Structure {\\bf 592} 47]. The implementation in the\npresent work provides an efficient and general way to solve the time-dependent\nSchr{\\"o}dinger equation and can be applied to multi-state systems. The\nalgorithm for simulating simple-harmonic-oscillator STIRAP can be applied\nto solve STIRAP problems for complex systems.", "category": "physics_comp-ph" }, { "text": "A Boundary Thickening-based Direct Forcing Immersed Boundary Method for\n Fully Resolved Simulation of Particle-laden Flows: A boundary thickening-based direct forcing (BTDF) immersed boundary (IB)\nmethod is proposed for fully resolved simulation of incompressible viscous\nflows laden with finite-size particles. By slightly thickening the boundary,\nthe local communication between the Lagrangian points on the solid\nboundary and their neighboring fluid Eulerian grids is improved, based on an\nimplicit direct forcing (IDF) approach and a partition-of-unity condition for\nthe regularized delta function. This strategy yields a simple, yet much better\nimposition of the no-slip and no-penetration boundary conditions than the\nconventional direct forcing (DF) technique.
In particular, the present BTDF\nmethod can achieve a numerical accuracy comparable with other representative\nimproved methods, such as multi-direct forcing (MDF), implicit velocity\ncorrection (IVC) and the reproducing kernel particle method (RKPM), while its\ncomputation cost remains much lower and nearly equivalent to the conventional\nDF scheme. The dependence of the optimum thickness value of boundary thickening\non the form of the regularized delta functions is also revealed. By coupling\nthe lattice Boltzmann method (LBM) with BTDF-IB, the robustness of the present\nBTDF IB method is demonstrated using numerical simulations of creeping flow\n(Re=0.1), steady vortex separating flow (Re=40) and unsteady vortex shedding\nflow (Re=200) around a circular cylinder.", "category": "physics_comp-ph" }, { "text": "On the incompressibility of cylindrical origami patterns: The art and science of folding intricate three-dimensional structures out of\npaper has occupied artists, designers, engineers, and mathematicians for\ndecades, culminating in the design of deployable structures and mechanical\nmetamaterials. Here we investigate the axial compressibility of origami\ncylinders, i.e., cylindrical structures folded from rectangular sheets of\npaper. We prove, using geometric arguments, that a general fold pattern only\nallows for a finite number of \\emph{isometric} cylindrical embeddings.\nTherefore, compressibility of such structures requires either stretching the\nmaterial or deforming the folds. 
Our result considerably restricts the space of\nconstructions that must be searched when designing new types of origami-based\nrigid-foldable deployable structures and metamaterials.", "category": "physics_comp-ph" }, { "text": "Determination of thermal emission spectra maximizing thermophotovoltaic\n performance using a genetic algorithm: Optimal radiator thermal emission spectra maximizing thermophotovoltaic (TPV)\nconversion efficiency and output power density are determined when temperature\neffects in the cell are considered. To do this, a framework is designed in\nwhich a TPV model that accounts for radiative, electrical and thermal losses is\ncoupled with a genetic algorithm. The TPV device under study involves a\nspectrally selective radiator at a temperature of 2000 K, a gallium antimonide\ncell, and a cell thermal management system characterized by a fluid temperature\nand a heat transfer coefficient of 293 K and 600 Wm-2K-1. It is shown that a\nmaximum conversion efficiency of 38.8% is achievable with an emission spectrum\nthat has emissivity of unity between 0.719 eV and 0.763 eV and zero elsewhere.\nThis optimal spectrum is less than half of the width of those when thermal\nlosses are neglected. A maximum output power density of 41708 Wm-2 is\nachievable with a spectrum having emissivity values of unity between 0.684 eV\nand 1.082 eV and zero elsewhere when thermal losses are accounted for. 
These\nemission spectra are shown to greatly outperform blackbody and tungsten\nradiators, and could be obtained using artificial structures such as\nmetamaterials or photonic crystals.", "category": "physics_comp-ph" }, { "text": "MCNNTUNES: tuning Shower Monte Carlo generators with machine learning: The parameters tuning of event generators is a research topic characterized\nby complex choices: the generator response to parameter variations is difficult\nto obtain on a theoretical basis, and numerical methods are hardly tractable\ndue to the long computational times required by generators. Event generator\ntuning has been tackled by parametrisation-based techniques, with the most\nsuccessful one being a polynomial parametrisation. In this work, an\nimplementation of tuning procedures based on artificial neural networks is\nproposed. The implementation was tested with closure testing and experimental\nmeasurements from the ATLAS experiment at the Large Hadron Collider.", "category": "physics_comp-ph" }, { "text": "Strategic Plan for a Scientific Software Innovation Institute (S2I2) for\n High Energy Physics: The quest to understand the fundamental building blocks of nature and their\ninteractions is one of the oldest and most ambitious of human scientific\nendeavors. Facilities such as CERN's Large Hadron Collider (LHC) represent a\nhuge step forward in this quest. The discovery of the Higgs boson, the\nobservation of exceedingly rare decays of B mesons, and stringent constraints\non many viable theories of physics beyond the Standard Model (SM) demonstrate\nthe great scientific value of the LHC physics program. The next phase of this\nglobal scientific project will be the High-Luminosity LHC (HL-LHC) which will\ncollect data starting circa 2026 and continue into the 2030's. The primary\nscience goal is to search for physics beyond the SM and, should it be\ndiscovered, to study its details and implications. 
During the HL-LHC era, the\nATLAS and CMS experiments will record circa 10 times as much data from 100\ntimes as many collisions as in LHC Run 1. The NSF and the DOE are planning\nlarge investments in detector upgrades so the HL-LHC can operate in this\nhigh-rate environment. A commensurate investment in R&D for the software for\nacquiring, managing, processing and analyzing HL-LHC data will be critical to\nmaximize the return-on-investment in the upgraded accelerator and detectors.\nThe strategic plan presented in this report is the result of a\nconceptualization process carried out to explore how a potential Scientific\nSoftware Innovation Institute (S2I2) for High Energy Physics (HEP) can play a\nkey role in meeting HL-LHC challenges.", "category": "physics_comp-ph" }, { "text": "Web Portal for Photonic Technologies Using Grid Infrastructures: The modeling of physical processes is an integral part of scientific and\ntechnical research. In this area, the Extendible C++ Application in Quantum\nTechnologies (ECAQT) package provides numerical simulation and modeling of\ncomplex quantum systems in the presence of decoherence, with wide applications\nin photonics. It allows creating models of interacting complex systems and\nsimulates their time evolution with a number of available time-evolution\ndrivers. Physical simulations involving massive amounts of calculations are\noften executed on distributed computing infrastructures. It is often difficult\nfor non-expert users to use such computational infrastructures, or even to use\nadvanced libraries over the infrastructures, because these often require\nfamiliarity with middleware and tools, parallel programming techniques and\npackages. The P-GRADE Grid Portal is a Grid portal solution that allows users to\nmanage the whole life-cycle of executing a parallel application on the\ncomputing Grid infrastructures.
The article describes the functionality and the\nstructure of the web portal based on the ECAQT package.", "category": "physics_comp-ph" }, { "text": "Computer simulation of coherent interaction of charged particles and\n photons with crystalline solids at high energies: A Monte Carlo simulation code has been developed and tested for studying the\npassage of charged particle beams and radiation through crystalline matter\nat energies from tens of MeV up to hundreds of GeV. The developed Monte Carlo\ncode simulates electron, positron and photon showers in single crystals and\namorphous media. The Monte Carlo code tracks all the generations of charged\nparticles and photons through the aligned crystal by taking into account the\nparameters of the incoming beam, multiple scattering, energy loss, emission angles,\nthe transverse dimension of beams, and the linear polarization of produced photons.\n The simulation results are compared with the CERN-NA-59 experimental data.\nRealistic descriptions of the electron and photon beams and the physical\nprocesses within the silicon and germanium single crystals have been\nimplemented.", "category": "physics_comp-ph" }, { "text": "A New Scheme for Solving High-Order DG Discretizations of Thermal\n Radiative Transfer using the Variable Eddington Factor Method: We present a new approach for solving high-order thermal radiative transfer\n(TRT) using the Variable Eddington Factor (VEF) method (also known as\nquasidiffusion). Our approach leverages the VEF equations, which consist of the\nfirst and second moments of the $S_N$ transport equation, to more efficiently\ncompute the TRT solution for each time step. The scheme consists of two loops -\nan outer loop to converge the Eddington tensor and an inner loop to converge\nthe iteration between the temperature equation and the VEF system. By\nconverging the outer iteration, one obtains the fully implicit TRT solution for\nthe given time step with a relatively low number of transport sweeps.
However,\none could choose to perform exactly one outer iteration (and therefore exactly\none sweep) per time step, resulting in a semi-implicit scheme that is both\nhighly efficient and robust. Our results indicate that the error between the\none-sweep and fully implicit variants of our scheme may be small enough for\nconsideration in many problems of interest.", "category": "physics_comp-ph" }, { "text": "Direct numerical simulation of particulate flows with an overset grid\n method: We evaluate an efficient overset grid method for two-dimensional and\nthree-dimensional particulate flows for small numbers of particles at finite\nReynolds number. The rigid particles are discretised using moving overset grids\noverlaid on a Cartesian background grid. This allows for strongly-enforced\nboundary conditions and local grid refinement at particle surfaces, thereby\naccurately capturing the viscous boundary layer at modest computational cost.\nThe incompressible Navier--Stokes equations are solved with a fractional-step\nscheme which is second-order-accurate in space and time, while the fluid--solid\ncoupling is achieved with a partitioned approach including multiple\nsub-iterations to increase stability for light, rigid bodies. Through a series\nof benchmark studies we demonstrate the accuracy and efficiency of this\napproach compared to other boundary conformal and static grid methods in the\nliterature. In particular, we find that fully resolving boundary layers at\nparticle surfaces is crucial to obtain accurate solutions to many common test\ncases. With our approach we are able to compute accurate solutions using as\nlittle as one third the number of grid points as uniform grid computations in\nthe literature. 
A detailed convergence study shows a 13-fold decrease in CPU\ntime over a uniform grid test case whilst maintaining comparable solution\naccuracy.", "category": "physics_comp-ph" }, { "text": "Autonomous Efficient Experiment Design for Materials Discovery with\n Bayesian Model Averaging: The accelerated exploration of the materials space in order to identify\nconfigurations with optimal properties is an ongoing challenge. Current\nparadigms are typically centered around the idea of performing this exploration\nthrough high-throughput experimentation/computation. Such approaches, however,\ndo not account for the ever-present constraints on available resources.\nRecently, this problem has been addressed by framing materials discovery as an\noptimal experiment design problem. This work augments earlier efforts by putting\nforward a framework that efficiently explores the materials design space, not\nonly accounting for resource constraints but also incorporating the notion of\nmodel uncertainty. The resulting approach embeds Bayesian Model Averaging\nwithin Bayesian Optimization in order to realize a system capable of\nautonomously and adaptively learning not only the most promising regions in the\nmaterials space but also the models that most efficiently guide such\nexploration. The framework is demonstrated by efficiently exploring the MAX\nternary carbide/nitride space through Density Functional Theory (DFT)\ncalculations.", "category": "physics_comp-ph" }, { "text": "Adaptive resolution for multiphase smoothed particle hydrodynamics: The smoothed particle hydrodynamics (SPH) method has been increasingly used\nto study fluid problems in recent years, but its computational cost can be high\nif high resolution is required. In this study, an adaptive resolution method\nbased on SPH is developed for multiphase flow simulation. The numerical SPH\nparticles are refined or coarsened as needed, depending on the distance to the\ninterface.
In developing the criteria, reference particle spacing is defined\nfor each particle, and it changes dynamically with the location of the\ninterface. A variable smoothing length is used together with adaptive\nresolution. An improved algorithm for calculating the variable smoothing length\nis further developed to reduce numerical errors. The proposed adaptive\nresolution method is validated by five examples involving liquid drops impact\non dry or wet surfaces, water entry of a cylinder and dam break flow, with the\nconsideration of ambient gas. Different resolution levels are used in the\nsimulations. Numerical validations have proven that the present adaptive\nresolution method can accurately capture the dynamics of liquid-gas interface\nwith low computational costs. The present adaptive method can be incorporated\ninto other SPH-based methods for efficient fluid dynamics simulation.", "category": "physics_comp-ph" }, { "text": "A self-similarity principle for the computation of rare event\n probability: The probability of rare and extreme events is an important quantity for\ndesign purposes. However, computing the probability of rare events can be\nexpensive because only a few events, if any, can be observed. To this end, it\nis necessary to accelerate the observation of rare events using methods such as\nthe importance splitting technique, which is the main focus here. In this work,\nit is shown how a genealogical importance splitting technique can be made more\nefficient if one knows how the rare event occurs in terms of the mean path\nfollowed by the observables. Using Monte Carlo simulations, it is shown that\none can estimate this path using less rare paths. A self-similarity model is\nformulated and tested using an a priori and a posteriori analysis. The\nself-similarity principle is also tested on more complex systems including a\nturbulent combustion problem with $10^7$ degrees of freedom. 
While the\nself-similarity model is shown to not be strictly valid in general, it can\nstill provide a good approximation of the rare mean paths and is a promising\nroute for obtaining the statistics of rare events in chaotic high-dimensional\nsystems.", "category": "physics_comp-ph" }, { "text": "Code-Verification Techniques for the Method-of-Moments Implementation of\n the Combined-Field Integral Equation: Code verification plays an important role in establishing the credibility of\ncomputational simulations by assessing the correctness of the implementation of\nthe underlying numerical methods. In computational electromagnetics, the\nnumerical solution to integral equations incurs multiple interacting sources of\nnumerical error, as well as other challenges, which render traditional\ncode-verification approaches ineffective. In this paper, we provide approaches\nto separately measure the numerical errors arising from these different error\nsources for the method-of-moments implementation of the combined-field integral\nequation. We demonstrate the effectiveness of these approaches for cases with\nand without coding errors.", "category": "physics_comp-ph" }, { "text": "An extension to VORO++ for multithreaded computation of Voronoi cells: VORO++ is a software library written in C++ for computing the Voronoi\ntessellation, a technique in computational geometry that is widely used for\nanalyzing systems of particles. VORO++ was released in 2009 and is based on\ncomputing the Voronoi cell for each particle individually. Here, we take\nadvantage of modern computer hardware, and extend the original serial version\nto allow for multithreaded computation of Voronoi cells via the OpenMP\napplication programming interface. We test the performance of the code, and\ndemonstrate that we can achieve parallel efficiencies greater than 95% in many\ncases. 
The multithreaded extension follows standard OpenMP programming\nparadigms, allowing it to be incorporated into other programs. We provide an\nexample of this using the VoroTop software library, performing a multithreaded\nVoronoi cell topology analysis of up to 102.4 million particles.", "category": "physics_comp-ph" }, { "text": "Modeling of Supersonic Radiative Marshak waves using Simple Models and\n Advanced Simulations: We study the problem of radiative heat (Marshak) waves using advanced\napproximate approaches. Supersonic radiative Marshak waves that are propagating\ninto a material are radiation dominated (i.e. hydrodynamic motion is\nnegligible), and can be described by the Boltzmann equation. However, the exact\nthermal radiative transfer problem is a nontrivial one, and there still exists\na need for approximations that are simple to solve. The discontinuous\nasymptotic $P_1$ approximation, which is a combination of the asymptotic $P_1$\nand the discontinuous asymptotic diffusion approximations, was tested in\nprevious work via theoretical benchmarks. Here we analyze a fundamental and\ntypical experiment of a supersonic Marshak wave propagation in a low-density\n$\\mathrm{SiO_2}$ foam cylinder, embedded in gold walls. First, we offer a\nsimple analytic model, that grasps the main effects dominating the physical\nsystem. We find the physics governing the system to be dominated by a simple,\none-dimensional effect, based on the careful observation of the different\nradiation temperatures that are involved in the problem. The model is completed\nwith the main two-dimensional effect which is caused by the loss of energy to\nthe gold walls. 
Second, we examine the validity of the discontinuous asymptotic\n$P_1$ approximation, comparing it to exact simulations and finding good accuracy.\nSpecifically, the heat-front position as a function of time is reproduced\nperfectly in comparison with exact Boltzmann solutions.", "category": "physics_comp-ph" }, { "text": "Variational Time Integration Approach for Smoothed Particle\n Hydrodynamics Simulation of Fluids: Variational time integrators are derived in the context of discrete\nmechanical systems. In this area, the governing equations for the motion of the\nmechanical system are built following two steps: (a) postulating a discrete\naction; (b) computing the stationary value for the discrete action. The former\nis formulated by considering Lagrangian (or Hamiltonian) systems with the\ndiscrete action being constructed through numerical approximations of the\naction integral. The latter derives the discrete Euler-Lagrange equations whose\nsolutions give the variational time integrator. In this paper, we build\nvariational time integrators in the context of smoothed particle hydrodynamics\n(SPH). We start with a variational formulation of SPH for fluids. Then, we\napply the generalized midpoint rule, which depends on a parameter $\\alpha$, in\norder to generate the discrete action. Step (b) then yields a variational\ntime integration scheme that reduces to a known explicit one if\n$\\alpha\\in\\{0,1\\}$ but is implicit otherwise. Hence, we design a fixed-point\niterative method to approximate the solution and prove its convergence\ncondition. Moreover, we show that the obtained discrete Euler-Lagrange equations\npreserve linear momentum. In the experimental results, we consider artificial\nviscosity as well as boundary-interaction effects and simulate a dam-breaking\nsetup.
We compare the explicit and implicit SPH solutions and analyze momentum\nconservation of the dam-breaking simulations.", "category": "physics_comp-ph" }, { "text": "Introduction to molecular dynamics simulations: We provide an introduction to molecular dynamics simulations in the context\nof the Kob-Andersen model of a glass. We introduce a complete set of tools for\ndoing and analyzing the results of simulations at fixed NVE and NVT. The\nmodular format of the paper allows readers to select sections that meet their\nneeds. We start with an introduction to molecular dynamics independent of the\nprogramming language, followed by introductions to an implementation using\nPython and then the freely available open source software package LAMMPS. We\nalso describe analysis tools for the quick testing of the program during its\ndevelopment and compute the radial distribution function and the mean square\ndisplacement using both Python and LAMMPS.", "category": "physics_comp-ph" }, { "text": "Transfer Learning as a Method to Reproduce High-Fidelity NLTE Opacities\n in Simulations: Simulations of high-energy density physics often need non-local thermodynamic\nequilibrium (NLTE) opacity data. This data, however, is expensive to produce even at\nrelatively low fidelity. It is even more so at high fidelity, where the\nopacity calculations can contribute as much as ninety-five percent of the total\ncomputation time. Neural\nnetworks can be used to replace the standard calculations of low-fidelity data,\nand the neural networks can be trained to reproduce artificial, high-fidelity\nopacity spectra. In this work, it is demonstrated that a novel neural network\narchitecture trained to reproduce high-fidelity krypton spectra through\ntransfer learning can be used in simulations. 
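Transfer learning of the kind sketched in this abstract (pretrain on plentiful low-fidelity data, then adapt to scarce high-fidelity data) can be illustrated with a toy frozen-feature model; everything below, including the random-feature "network" and the regularization constants, is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen random-feature "network": a stand-in for pretrained lower layers.
W = rng.normal(scale=2.0, size=64)
b = rng.uniform(-3.0, 3.0, size=64)
features = lambda x: np.tanh(np.outer(x, W) + b)

x_lo = np.linspace(0.0, 1.0, 200)               # plentiful low-fidelity data
y_lo = np.sin(2.0 * np.pi * x_lo)               # cheap model
x_hi = np.linspace(0.0, 1.0, 10)                # scarce high-fidelity data
y_hi = np.sin(2.0 * np.pi * x_hi) + 0.3 * x_hi  # "expensive" correction

def fit_head(X, y, lam=1e-6):
    """Ridge regression for the output head on top of frozen features."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def fine_tune(X, y, w0, lam=1e-2):
    """Re-fit the head on few points, regularized toward pretrained weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ y + lam * w0)

w_lo = fit_head(features(x_lo), y_lo)           # "pretraining"
w_hi = fine_tune(features(x_hi), y_hi, w_lo)    # transfer to high fidelity

err_before = np.linalg.norm(features(x_hi) @ w_lo - y_hi)
err_after = np.linalg.norm(features(x_hi) @ w_hi - y_hi)
```

The fine-tuned head is guaranteed to fit the high-fidelity points at least as well as the pretrained one, since the pretrained weights are a feasible point of the regularized fit.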
Further, it is demonstrated that\nthis can be done with a relative error in the peak radiative\ntemperature of the hohlraum of approximately 1\% to 4\%, while achieving a 19.4x\nspeed-up.", "category": "physics_comp-ph" }, { "text": "Effects of Membrane Morphology on the Efficiency of Direct Contact\n Membrane Distillation: A computer simulation is used to predict the effects of membrane morphology\non the thermal efficiency of direct contact membrane distillation. The mass\ntransfer through the porous microstructure and the heat conduction through the\nmembrane are both related by the membrane morphology. The interrelated\ntortuosities of the porous structure and the solid phase influence the mass transfer\nand the thermal conductivity, respectively. The effects of varying the morphology\nare elucidated, and introducing a lattice structure that tailors the\nmorphology can significantly increase thermal efficiency. A three-layer system\nis also simulated, where the pore size in the middle layer can be increased\nwithout significantly increasing the risk of membrane pore wetting. Three-layer\nsystems that possess a lattice morphology are found to result in thermal\nefficiencies around 20\% higher than those of random morphologies.", "category": "physics_comp-ph" }, { "text": "Jump at the onset of saltation: We reveal a discontinuous transition in the saturated flux for aeolian\nsaltation by explicitly simulating particle motion in turbulent flow. The\ndiscontinuity is followed by a coexistence interval with two metastable\nsolutions. The modification of the wind profile due to momentum exchange\nexhibits a second maximum at high shear strength. 
The saturated flux depends on\nthe strength of the wind as $q_s=q_0+A(u_*-u_t)(u_*^2+u_t^2)$.", "category": "physics_comp-ph" }, { "text": "The Neutron Star Outer Crust Equation of State: A Machine Learning\n approach: Constructing the outer crust of a neutron star requires knowledge of\nthe Binding Energy (BE) of atomic nuclei. Although the BE of many\nnuclei is experimentally determined and can be obtained from the AME data\ntable, for the others we must rely on theoretical models. Many\nphysical theories exist to predict the BE, each with its own strengths and\nweaknesses. In this paper, we apply Machine Learning (ML) algorithms to the AME2016\ndata set to predict the Binding Energy of atomic nuclei. The novel feature of\nour work is that it is model independent. We do not assume or use any nuclear\nphysics model but use only ML algorithms directly on the AME2016 data set. Our\nresults are further refined by using another ML algorithm to train the errors\nof the first algorithm, and repeating this process iteratively. Our best\nalgorithm gives $\\sigma_{\\rm rms} \\approx 0.58$ MeV for Binding Energy on\nrandomized testing sets. This is comparable to all physics models or ML-improved\nphysics models studied in the literature to date. Using the predictions\nof our Machine Learning algorithm, we construct the outer crust equation of\nstate (EoS) of a neutron star and show that our model is comparable to existing\nmodels. This work also demonstrates the use of various ML algorithms and gives a\ndetailed analysis of how we arrived at our best algorithm. It will help the\nphysics community in understanding how to choose an ML algorithm suited\nto their data set. 
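The iterative error-refinement idea described above (train a second learner on the residuals of the first, and repeat) can be sketched with simple polynomial learners; the target function and the degrees are illustrative assumptions, not the paper's actual algorithms:

```python
import numpy as np

x = np.linspace(0.0, 4.0, 200)
y = np.exp(-x) + 0.05 * np.sin(8.0 * x)  # stand-in target curve

# First learner: a deliberately simple polynomial fit.
pred = np.polyval(np.polyfit(x, y, 3), x)
rms_first = np.sqrt(np.mean((y - pred) ** 2))

# Train further learners on the residuals of the current model, iteratively;
# each stage here uses a slightly richer polynomial basis.
for deg in (5, 7, 9):
    resid = y - pred
    pred = pred + np.polyval(np.polyfit(x, resid, deg), x)

rms_final = np.sqrt(np.mean((y - pred) ** 2))
```

Each stage fits only what the previous stages missed, so the combined prediction can only improve on the first learner's residual error.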
Our algorithms and best-fit model are also made\npublicly available for the use of the community.", "category": "physics_comp-ph" }, { "text": "Extension of the INFN Tier-1 on an HPC system: The INFN Tier-1 located at CNAF in Bologna (Italy) is a center of the WLCG\ne-Infrastructure, supporting the 4 major LHC collaborations and more than 30\nother INFN-related experiments. After multiple tests towards elastic expansion\nof CNAF compute power via Cloud resources (provided by Azure, Aruba and in the\nframework of the HNSciCloud project), and building on the experience gained\nwith the production quality extension of the Tier-1 farm on remote owned sites,\nthe CNAF team, in collaboration with experts from the ALICE, ATLAS, CMS, and\nLHCb experiments, has been working to put in production a solution of an\nintegrated HTC+HPC system with the PRACE CINECA center, located nearby Bologna.\nThis extension will be implemented on the Marconi A2 partition, equipped with\nIntel Knights Landing (KNL) processors. A number of technical challenges were\nfaced and solved in order to successfully run on low-RAM nodes, as well as to\novercome the closed environment (network, access, software distribution, ... )\nthat HPC systems deploy with respect to standard GRID sites. We show\npreliminary results from a large scale integration effort, using resources\nsecured via the successful PRACE grant N. 2018194658, for 30 million KNL core\nhours.", "category": "physics_comp-ph" }, { "text": "Developing Machine-Learned Potentials for Coarse-Grained Molecular\n Simulations: Challenges and Pitfalls: Coarse graining (CG) enables the investigation of molecular properties for\nlarger systems and at longer timescales than those attainable at\natomistic resolution. Machine learning techniques have recently been proposed\nto learn CG particle interactions, i.e. to develop CG force fields. 
Graph\nrepresentations of molecules and supervised training of a graph convolutional\nneural network architecture are used to learn the potential of mean force\nthrough a force-matching scheme. In this work, the force acting on each CG\nparticle is related to a learned representation of its local environment,\nknown as SchNet, constructed via continuous-filter\nconvolutions. We explore the application of SchNet models to obtain a CG\npotential for liquid benzene, investigating the effect of model architecture\nand hyperparameters on the thermodynamic, dynamical, and structural properties\nof the simulated CG systems, reporting and discussing challenges encountered\nand future directions envisioned.", "category": "physics_comp-ph" }, { "text": "Consistent thermodynamic derivative estimates for tabular equations of\n state: Numerical simulations of compressible fluid flows require an equation of\nstate (EOS) to relate the thermodynamic variables of density, internal energy,\ntemperature, and pressure. A valid EOS must satisfy the thermodynamic\nconditions of consistency (derivation from a free energy) and stability\n(positive sound speed squared). When phase transitions are significant, the EOS\nis complicated and can only be specified in a table. For tabular EOS's such as\nSESAME from Los Alamos National Laboratory, the consistency and stability\nconditions take the form of a differential equation relating the derivatives of\npressure and energy as functions of temperature and density, along with\npositivity constraints. Typical software interfaces to such tables based on\npolynomial or rational interpolants compute derivatives of pressure and energy\nand may enforce the stability conditions, but do not enforce the consistency\ncondition and its derivatives. We describe a new type of table interface based\non a constrained local least squares regression technique. 
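Equality-constrained least squares of this general flavor can be written via the KKT (bordered normal-equations) system; the toy constraint below is a stand-in for a thermodynamic consistency relation among fitted quantities, not the actual SESAME interface:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||_2 subject to C x = d by solving the KKT system
    [[A^T A, C^T], [C, 0]] [x; mu] = [A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Toy use: fit p(t) = x0 + x1*t + x2*t^2 to noisy samples while enforcing
# p(0) = 1 exactly (an illustrative constraint, not SESAME's).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * t - t ** 2 + 0.01 * rng.normal(size=t.size)
A = np.vander(t, 3, increasing=True)
x = constrained_lstsq(A, y, C=np.array([[1.0, 0.0, 0.0]]), d=np.array([1.0]))
```

The constraint is satisfied to round-off because it is imposed exactly through the Lagrange-multiplier block, while the remaining freedom is used to fit the data in the least-squares sense.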
It is applied to\nseveral SESAME EOS's, showing how the consistency condition can be satisfied to\nround-off while computing first and second derivatives with demonstrated\nsecond-order convergence. An improvement of 14 orders of magnitude over\nconventional derivatives is demonstrated, although the new method is apparently\ntwo orders of magnitude slower, because every evaluation requires\nsolving an 11-dimensional nonlinear system.", "category": "physics_comp-ph" }, { "text": "Multiscale QM/MM Molecular Dynamics Study on the First Steps of\n Guanine-Damage by Free Hydroxyl Radicals in Solution: Understanding the damage of DNA bases from hydrogen abstraction by free OH\nradicals is of particular importance to reveal the effect of hydroxyl radicals\nproduced by the secondary effect of radiation. Previous studies address the\nproblem with truncated DNA bases, as the ab initio quantum simulations required to\nstudy such electronic spin-dependent processes are computationally expensive.\nHere, for the first time, we employ a multiscale and hybrid\nQuantum-Mechanical-Molecular-Mechanical simulation to study the interaction of\nOH radicals with a guanine-deoxyribose-phosphate DNA molecular unit in the\npresence of water, where all the water molecules and the deoxyribose-phosphate\nfragment are treated with the simplistic classical Molecular-Mechanical scheme.\nOur result illustrates that the presence of water strongly alters the\nhydrogen-abstraction reaction, as the hydrogen bonding of OH radicals with water\nrestricts the relative orientation of the OH radicals with respect to the\nDNA base (here guanine). This results in an angular anisotropy in the\nchemical pathway and a lower efficiency in the hydrogen abstraction mechanisms\nthan previously anticipated for an identical system in vacuum. 
The method can\neasily be extended to single- and double-stranded DNA without any appreciable\ncomputational cost, as these molecular units can be treated in the classical\nsubsystem, as has been demonstrated here.", "category": "physics_comp-ph" }, { "text": "Finite element modeling of micropolar-based phononic crystals: The performance of a Cosserat/micropolar solid as a numerical vehicle to\nrepresent dispersive media is explored. The study is conducted using the finite\nelement method with emphasis on Hermiticity, positive definiteness, the principle\nof virtual work and Bloch-Floquet boundary conditions. The periodic boundary\nconditions are given for both translational and rotational degrees of freedom\nand for the associated force- and couple-traction vectors. Results in terms of\nband structures for different material cells and mechanical parameters are\nprovided.", "category": "physics_comp-ph" }, { "text": "Improved success rate and stability for phase retrieval by including\n randomized overrelaxation in the hybrid input output algorithm: In this paper, we study the success rate of the reconstruction of objects of\nfinite extent given the magnitudes of their Fourier transforms and their geometrical\nshapes. We demonstrate that the commonly used combination of the hybrid input-output\nand error-reduction algorithms is significantly outperformed by an\nextension of this algorithm based on randomized overrelaxation. In most cases,\nthis extension tremendously enhances the success rate of reconstructions for a\nfixed number of iterations as compared to reconstructions solely based on the\ntraditional algorithm. 
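A minimal sketch of the hybrid input-output / error-reduction combination on a toy object follows (the randomized-overrelaxation extension studied in the paper is not implemented here; the array sizes, feedback parameter beta, and iteration counts are illustrative):

```python
import numpy as np

def fourier_error(x, mag):
    return np.linalg.norm(np.abs(np.fft.fft2(x)) - mag)

def hio_er(mag, support, n_hio=200, n_er=50, beta=0.9, seed=0):
    """Hybrid input-output iterations followed by error-reduction clean-up."""
    rng = np.random.default_rng(seed)
    x = rng.random(support.shape) * support
    for _ in range(n_hio):
        xp = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(np.fft.fft2(x)))))
        ok = support & (xp >= 0.0)           # object-domain constraints hold
        x = np.where(ok, xp, x - beta * xp)  # HIO feedback elsewhere
    x = np.where(support & (x >= 0.0), x, 0.0)  # project into constraint set
    err_start = fourier_error(x, mag)
    for _ in range(n_er):                    # ER: zero out violating pixels
        xp = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(np.fft.fft2(x)))))
        x = np.where(support & (xp >= 0.0), xp, 0.0)
    return x, err_start, fourier_error(x, mag)

rng = np.random.default_rng(1)
obj = np.zeros((32, 32))
obj[12:20, 10:22] = rng.random((8, 12))      # toy nonnegative object
support = obj > 0.0
mag = np.abs(np.fft.fft2(obj))
recon, e0, e1 = hio_er(mag, support)
```

The ER phase is the classical "error reduction" alternating projection, whose Fourier-magnitude error is non-increasing once the iterate lies in the object-domain constraint set; HIO's feedback step trades that monotonicity for a much better escape from stagnation.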
The good scaling properties in terms of computational\ntime and memory requirements of the original algorithm are not influenced by\nthis extension.", "category": "physics_comp-ph" }, { "text": "Simulating disordered quantum systems via dense and sparse restricted\n Boltzmann machines: In recent years, generative artificial neural networks based on restricted\nBoltzmann machines (RBMs) have been successfully employed as accurate and\nflexible variational wave functions for clean quantum many-body systems. In\nthis article we explore their use in simulations of disordered quantum spin\nmodels. The standard dense RBM with all-to-all inter-layer connectivity is not\nparticularly appropriate for large disordered systems, since in such systems\none cannot exploit translational invariance to reduce the number of parameters\nto be optimized. To circumvent this problem, we implement sparse RBMs, whereby\nthe visible spins are connected only to a subset of local hidden neurons, thus\nreducing the number of parameters. We assess the performance of sparse RBMs as\na function of the range of the allowed connections, and compare it with that\nof dense RBMs. Benchmark results are provided for two sign-problem-free\nHamiltonians, namely pure and random quantum Ising chains. The RBM ansatzes are\ntrained using the unsupervised learning scheme based on projective quantum\nMonte Carlo (PQMC) algorithms. We find that the sparse connectivity facilitates\nthe training process and allows sparse RBMs to outperform their dense\ncounterparts. Furthermore, the use of sparse RBMs as guiding functions for PQMC\nsimulations allows us to perform PQMC simulations at a reduced computational\ncost, avoiding possible biases due to finite random-walker populations. We\nobtain unbiased predictions for the ground-state energies and the magnetization\nprofiles with fixed boundary conditions, at the ferromagnetic quantum critical\npoint. 
The magnetization profiles agree with the Fisher-de Gennes scaling\nrelation for conformally invariant systems, including the scaling dimension\npredicted by the renormalization-group analysis.", "category": "physics_comp-ph" }, { "text": "SIMEX: Simulation of Experiments at Advanced Light Sources: Realistic simulations of experiments at large scale photon facilities, such\nas optical laser laboratories, synchrotrons, and free electron lasers, are of\nvital importance for the successful preparation, execution, and analysis of\nthese experiments investigating ever more complex physical systems, e.g.\nbiomolecules, complex materials, and ultra-short lived states of highly excited\nmatter. Traditional photon science modelling takes into account only isolated\naspects of an experiment, such as the beam propagation, the photon-matter\ninteraction, or the scattering process, making idealized assumptions about the\nremaining parts, e.g.\\ the source spectrum, temporal structure and coherence\nproperties of the photon beam, or the detector response. In SIMEX, we have\nimplemented a platform for complete start-to-end simulations, following the\nradiation from the source, through the beam transport optics to the sample or\ntarget under investigation, its interaction with and scattering from the\nsample, and its registration in a photon detector, including a realistic model\nof the detector response to the radiation. Data analysis tools can be hooked up\nto the modelling pipeline easily. 
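The start-to-end idea described in the SIMEX abstract can be sketched as a simple composition of stage functions; the stage names and numbers below are hypothetical placeholders, not SIMEX's actual modules or API:

```python
def run_pipeline(state, stages):
    """Push a simulation 'state' through successive stages in order."""
    for stage in stages:
        state = stage(state)
    return state

# Hypothetical placeholder stages of a start-to-end photon simulation:
# source -> beam transport -> photon-matter interaction -> detector.
def source(state):      return {**state, "photons": 1.0e12}
def optics(state):      return {**state, "photons": state["photons"] * 0.5}
def interaction(state): return {**state, "scattered": state["photons"] * 1e-6}
def detector(state):    return {**state, "counts": round(state["scattered"] * 0.25)}

result = run_pipeline({}, [source, optics, interaction, detector])
```

Because each stage only consumes and augments the shared state, analysis tools can be "hooked up" by appending further callables to the same list.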
This allows researchers and facility\noperators to simulate their experiments and instruments in real-life scenarios,\nidentify promising and unattainable regions of the parameter space and\nultimately make better use of valuable beamtime.\n This work is licensed under the Creative Commons Attribution 3.0 Unported\nLicense: http://creativecommons.org/licenses/by/3.0/.", "category": "physics_comp-ph" }, { "text": "Noise limits in the assembly of diffraction data: We obtain an information theoretic criterion for the feasibility of\nassembling diffraction signals from noisy tomographs when the positions of the\ntomographs within the signal are unknown. For shot-noise limited data, the\nminimum number of detected photons per tomograph for successful assembly is\nmuch smaller than previously believed necessary, growing only logarithmically\nwith the number of resolution elements of the diffracting object. We also\ndemonstrate assembly up to the information theoretic limit with a\nconstraint-based algorithm.", "category": "physics_comp-ph" }, { "text": "A general-purpose element-based approach to compute dispersion relations\n in periodic materials with existing finite element codes: In most standard Finite Element (FE) codes it is not easy to calculate\ndispersion relations for periodic materials. Here we propose a new strategy to\ncalculate such dispersion relations with available FE codes using user element\nsubroutines. Typically, the Bloch boundary conditions are applied to the global\nassembled matrices of the structure through a transformation matrix or\nrow-and-column operations. Such a process is difficult to implement in standard\nFE codes since the user does not have access to the global matrices. In this\nwork, we apply those Bloch boundary conditions directly at the elemental level.\nThe proposed strategy can be easily implemented in any FE code. This strategy\ncan be used with either real or complex algebra solvers. 
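Applying Bloch boundary conditions at the element level can be illustrated on the simplest periodic medium, a 1D monatomic spring-mass chain, where the element-level reduction reproduces the known dispersion relation (a generic sketch, not the paper's formulation):

```python
import numpy as np

# One unit cell of a 1D monatomic spring-mass chain (spring constant K,
# mass m, lattice constant a) treated as a single 2-node finite element.
# Bloch periodicity is imposed at the element level by tying the right
# node to the left one through the phase factor e^{ika}.
K, m, a = 1.0, 1.0, 1.0
k_elem = K * np.array([[1.0, -1.0],
                       [-1.0, 1.0]])  # 2-node spring/bar stiffness

def bloch_frequency(k):
    T = np.array([[1.0], [np.exp(1j * k * a)]])   # u_right = e^{ika} u_left
    k_red = (T.conj().T @ k_elem @ T)[0, 0].real  # reduced 1x1 stiffness
    m_red = m                                     # one mass per unit cell
    return np.sqrt(max(k_red, 0.0) / m_red)

ks = np.linspace(0.0, np.pi, 50)
omegas = np.array([bloch_frequency(k) for k in ks])
```

For this chain the reduction $T^\dagger K_e T$ gives $K(2-2\cos ka)$, so the computed band matches the textbook dispersion $\omega(k)=2\sqrt{K/m}\,|\sin(ka/2)|$.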
It is general enough to\npermit any spatial dimension and any physical phenomenon involving periodic\nstructures. A detailed process of calculation and assembly of the elemental\nmatrices is shown. We verify our method with available analytical solutions and\nexternal numerical results, using different material models and unit cell\ngeometries.", "category": "physics_comp-ph" }, { "text": "The antiferromagnetic phase transition in the layered\n Cu$_{0.15}$Fe$_{0.85}$PS$_3$ semiconductor: experiment and DFT modelling: The experimental studies of the paramagnetic-antiferromagnetic phase\ntransition through M\\"{o}ssbauer spectroscopy and measurements of temperature\nand field dependencies of magnetic susceptibility in the layered\nCu$_{0.15}$Fe$_{0.85}$PS$_3$ crystal are presented. The peculiar behavior of\nthe magnetization-field dependence in the low-temperature region gives evidence\nof a weak ferromagnetism in the studied alloy. By ab initio simulation of\nelectronic and spin subsystems, in the framework of electron density functional\ntheory, the peculiarities of spin ordering at low temperature as well as\nchanges in interatomic interactions in the vicinity of the Cu substitutional\natoms are analyzed. The calculated components of the electric field gradient\ntensor and the asymmetry parameter for Fe ions are close to the values found from\nthe M\\"{o}ssbauer spectra. The Mulliken populations show that the main\ncontribution to the ferromagnetic spin density originates from $3d$-copper\nand $3p$-sulfur orbitals. 
The estimated total magnetic moment of the unit cell\n(8.543~emu/mol) is in reasonable agreement with the measured experimental value\nof $\\sim9$~emu/mol.", "category": "physics_comp-ph" }, { "text": "On the Generalization of DIRECTFN for Singular Integrals Over\n Quadrilateral Patches: A set of fully numerical algorithms for evaluating the four-dimensional\nsingular integrals arising from Galerkin surface integral equation methods over\nconforming quadrilateral meshes is presented. This work is an extension of\nDIRECTFN, which was recently developed for the case of triangular patches,\nutilizing in a same fashion a series of coordinate transformations together\nwith appropriate integration re-orderings. The resulting formulas consist of\nsufficiently smooth kernels and exhibit several favorable characteristics when\ncompared with the vast majority of the methods currently available. More\nspecifically, they can be applied---without modifications---to the following\nchallenging cases: 1) weakly and strongly singular kernels, 2) basis and\ntesting functions of arbitrary order, 3) planar and curvilinear patches, 4)\nproblem-specific Green functions (e.g. expressed in spectral integral form), 5)\nspectral convergence to machine precision. Finally, we show that the overall\nperformance of the fully numerical schemes can be further improved by a\njudicious choice of the integration order for each dimension.", "category": "physics_comp-ph" }, { "text": "Effect of adaptive cruise control systems on mixed traffic flow near an\n on-ramp: Mixed traffic flow consisting of vehicles equipped with adaptive cruise\ncontrol (ACC) and manually driven vehicles is analyzed using car-following\nsimulations. Unlike simulations that show suppression of jams due to increased\nstring stability, simulations of merging from an on-ramp onto a freeway have\nnot thus far demonstrated a substantial positive impact of ACC. 
In this paper\ncooperative merging is proposed to increase throughput and the distance\ntraveled in a fixed time (i.e., to reduce travel times). In such a system an ACC vehicle\nsenses not only the preceding vehicle in the same lane but also the vehicle\nimmediately in front in the opposite lane. Prior to reaching the merge region,\nthe ACC vehicle adjusts its velocity to ensure that a safe gap for merging is\nobtained. If on-ramp demand is moderate, partial implementation of cooperative\nmerging, in which only main-line ACC vehicles react to an on-ramp vehicle, is\neffective. Significant improvement in throughput (18%) and increases of up to 3 km\nin distance traveled in 500 s are found for 50% ACC mixed flow relative to the\nflow of all manual vehicles. For large demand, full implementation is required\nto reduce congestion.", "category": "physics_comp-ph" }, { "text": "Direct evidence of helium rain in Jupiter and Saturn: The immiscibility of the hydrogen-helium mixture under the temperature and\npressure conditions of planetary interiors is crucial for understanding the\nstructures of gas giant planets (e.g., Jupiter and Saturn). While experimental\nprobes at such extreme conditions are challenging, theoretical\nsimulation is heavily relied upon to unravel the mixing behavior of\nhydrogen and helium. Here we develop a method via a machine learning\naccelerated molecular dynamics simulation to quantify the physical separation\nof hydrogen and helium under the conditions of planetary interiors. The\nimmiscibility line achieved with the developed method yields substantially\nhigher demixing temperatures at pressures above 1.5 Mbar than earlier\ntheoretical data, but matches the experimental estimate better. Our results\nrevise the structures of Jupiter and Saturn, where H-He demixing takes place in\na large fraction of the interior radii, i.e., 27.5% in Jupiter and 48.3% in\nSaturn. 
This direct evidence of an H-He immiscible layer supports the formation\nof helium rain and explains the helium reduction in the atmospheres of Jupiter and\nSaturn.", "category": "physics_comp-ph" }, { "text": "CERNLIB status: We present a revived version of CERNLIB, the basis for the software ecosystems of\nmost of the pre-LHC HEP experiments. The efforts to consolidate CERNLIB are\npart of the activities of the Data Preservation for High Energy Physics\ncollaboration to preserve data and software of the past HEP experiments. The\npresented version is based on CERNLIB version 2006 with numerous patches made\nfor compatibility with modern compilers and operating systems. The code is\navailable in the CERN GitLab repository with all the development history\nstarting from the early 1990s. The updates also include a re-implementation of\nthe build system in CMake to ensure CERNLIB compliance with the current best\npractices and to increase the chances of preserving the code in a compilable\nstate for the decades to come. The revived CERNLIB project also includes\nupdated documentation, which we believe is a cornerstone for any preserved\nsoftware depending on it.", "category": "physics_comp-ph" }, { "text": "Pushing the limit of molecular dynamics with ab initio accuracy to 100\n million atoms with machine learning: For 35 years, {\\it ab initio} molecular dynamics (AIMD) has been the method\nof choice for modeling complex atomistic phenomena from first principles.\nHowever, most AIMD applications are limited by computational cost to systems\nwith thousands of atoms at most. We report that a machine learning-based\nsimulation protocol (Deep Potential Molecular Dynamics), while retaining {\\it\nab initio} accuracy, can simulate a more than 1-nanosecond-long trajectory of\nover 100 million atoms per day, using a highly optimized code (GPU DeePMD-kit)\non the Summit supercomputer. 
Our code can efficiently scale up to the entire\nSummit supercomputer, attaining $91$ PFLOPS in double precision ($45.5\\%$ of\nthe peak) and $162$/$275$ PFLOPS in mixed-single/half precision. This\nwork opens the door to simulating\nunprecedented size and time scales with {\\it ab initio} accuracy. It also poses\nnew challenges to next-generation supercomputers for a better integration of\nmachine learning and physical modeling.", "category": "physics_comp-ph" }, { "text": "Dynamic load balancing with enhanced shared-memory parallelism for\n particle-in-cell codes: Furthering our understanding of many of today's interesting problems in\nplasma physics---including plasma-based acceleration and magnetic reconnection\nwith pair production due to quantum electrodynamic effects---requires\nlarge-scale kinetic simulations using particle-in-cell (PIC) codes. However,\nthese simulations are extremely demanding, requiring that contemporary PIC\ncodes be designed to efficiently use a new fleet of exascale computing\narchitectures. To this end, the key issue of parallel load balance across\ncomputational nodes must be addressed. We discuss the implementation of dynamic\nload balancing by dividing the simulation space into many small, self-contained\nregions or "tiles," along with shared-memory (e.g., OpenMP) parallelism both\nover many tiles and within single tiles. The load balancing algorithm can be\nused with three different topologies, including two space-filling curves. We\ntested this implementation in the code OSIRIS and show low overhead and\nimproved scalability with OpenMP thread number on simulations with both uniform\nload and severe load imbalance. 
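Curve-ordered tile partitioning of the kind mentioned above can be sketched as follows; the Morton ordering, tile grid, and greedy prefix cuts are generic illustrations, not OSIRIS's actual algorithm:

```python
def morton(ix, iy, bits=8):
    """Interleave the bits of (ix, iy) into a Z-order (Morton) index."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

def partition(loads, n_workers):
    """Assign curve-ordered tiles to workers by cutting the prefix sum of
    the loads into n_workers roughly equal, contiguous pieces."""
    total = sum(loads)
    owner, prefix = [], 0.0
    for load in loads:
        owner.append(min(n_workers - 1, int(prefix * n_workers / total)))
        prefix += load
    return owner

# 8x8 grid of tiles with a computational hot spot in one corner.
tiles = sorted(((ix, iy) for ix in range(8) for iy in range(8)),
               key=lambda t: morton(*t))
loads = [100.0 if (ix < 2 and iy < 2) else 1.0 for ix, iy in tiles]
owners = partition(loads, 4)
```

Ordering the tiles along a space-filling curve keeps each worker's tiles spatially clustered, so rebalancing moves few tiles and communication stays local.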
Compared to other load-balancing techniques,\nour algorithm gives an order-of-magnitude improvement in parallel scalability for\nsimulations with severe load imbalance issues.", "category": "physics_comp-ph" }, { "text": "Suppression of Overfitting in Extraction of Spectral Data from Imaginary\n Frequency Green Function Using Maximum Entropy Method: Although the maximum entropy (maxEnt) method is currently the standard\nalgorithm for extracting real-frequency information from the imaginary-frequency\nGreen function, this method is beset by an overfitting problem, which\nmanifests itself as spurious spikes in the resultant spectral functions. To\naddress this issue, and motivated by the regularization techniques widely used\nin machine learning and statistics, here we propose to add one more\nregularization term to the original maxEnt loss function to suppress these\nredundant spikes. The essence of this extra regularization term is to demand\nthat the resultant spectral functions pay a price for being spiky. We\ntest our algorithm with both artificial and real data, and find that spurious\nspikes in the resultant spectral functions can be effectively suppressed by\nthis method.", "category": "physics_comp-ph" }, { "text": "New approach to Dynamical Monte Carlo Methods: application to an\n Epidemic Model: A new approach to Dynamical Monte Carlo Methods is introduced to simulate\nMarkovian processes. We apply this approach to formulate and study an epidemic\nGeneralized SIRS model. The results are in excellent agreement with the fourth-order\nRunge-Kutta method in a region of deterministic solution. We also\ndemonstrate that purely local interactions reproduce a Poissonian-like process\nat the mesoscopic level. 
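For reference, the deterministic benchmark mentioned above amounts to integrating the SIRS rate equations with a classical fourth-order Runge-Kutta step; the rate constants and initial condition below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def rk4_step(f, y, t, h):
    """Classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Hypothetical SIRS rates: beta (infection), gamma (recovery),
# rho (loss of immunity). Fractions S + I + R stay normalized to 1.
beta, gamma, rho = 0.5, 0.2, 0.05

def sirs(t, y):
    S, I, R = y
    return np.array([-beta * S * I + rho * R,
                     beta * S * I - gamma * I,
                     gamma * I - rho * R])

y = np.array([0.99, 0.01, 0.0])
for step in range(2000):
    y = rk4_step(sirs, y, step * 0.1, 0.1)
```

Because the right-hand sides sum to zero, the total population is a linear invariant that Runge-Kutta preserves up to round-off, which makes a convenient self-check for the stochastic counterpart.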
The simulations for this case are checked\nself-consistently using a stochastic version of the Euler Method.", "category": "physics_comp-ph" }, { "text": "Darwin-Vlasov Simulations of magnetized Plasmas: We present a new Vlasov code for collisionless plasmas in the nonrelativistic\nregime. A Darwin approximation is used for suppressing electromagnetic vacuum\nmodes. The spatial integration is based on an extension of the\nflux-conservative scheme, introduced by Filbet et al. [J. Comp. Phys. Vol. 172\n(2001) 166]. Performance and accuracy are demonstrated by comparing it to a\nstandard finite-difference scheme for two test cases, including a Harris sheet\nmagnetic reconnection scenario. This comparison suggests that the presented\nscheme is a promising alternative to finite-difference schemes.", "category": "physics_comp-ph" }, { "text": "Tiling Phosphorene: We present a scheme to categorize the structure of different layered\nphosphorene allotropes by mapping their non-planar atomic structure onto a\ntwo-color 2D triangular tiling pattern. In the buckled structure of a\nphosphorene monolayer, we assign atoms in "top" positions to dark tiles and\natoms in "bottom" positions to light tiles. Optimum $sp^3$ bonding is\nmaintained throughout the structure when each triangular tile is surrounded by\nthe same number $N$ of like-colored tiles, with $0{\\le}N{\\le}2$. Our ab initio\ndensity functional calculations indicate that both the relative stability and\nelectronic properties depend primarily on the structural index $N$. 
The\nproposed mapping approach may also be applied to phosphorene structures with\nnon-hexagonal rings and 2D quasicrystals with no translational symmetry, which\nwe predict to be nearly as stable as the hexagonal network.", "category": "physics_comp-ph" }, { "text": "A graphics processor-based intranuclear cascade and evaporation\n simulation: Monte Carlo simulations of the transport of protons in human tissue have been\ndeployed on graphics processing units (GPUs) with impressive results. To\nprovide a more complete treatment of non-elastic nuclear interactions in these\nsimulations, we developed a fast intranuclear cascade-evaporation simulation\nfor the GPU. This can be used to model non-elastic proton collisions on any\ntherapeutically relevant nuclei at incident energies between 20 and 250 MeV.\nPredictions are in good agreement with Geant4.9.6p2. It takes approximately 2 s\nto calculate $1\\times 10^6$ 200 MeV proton-$^{16}$O interactions on an NVIDIA\nGTX680 GPU. A speed-up factor of $\\sim$20 relative to one Intel i7-3820 core\nprocessor thread was achieved.", "category": "physics_comp-ph" }, { "text": "An Alternative to Stride-Based RNG for Monte Carlo Transport: The techniques used to generate pseudo-random numbers for Monte Carlo (MC)\napplications bear many implications for the quality and speed of that program's\nwork. As a random number generator (RNG) slows, the production of random\nnumbers begins to dominate runtime. As RNG output grows in correlation, the\nfinal product becomes less reliable.\n These difficulties are further compounded by the need for reproducibility and\nparallelism. For reproducibility, the numbers generated to determine any\noutcome must be the same each time a simulation is run. However, the\nconcurrency that comes with most parallelism introduces race conditions. 
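One way to sidestep such race conditions while keeping outcomes reproducible is to derive each stream's seed by hashing stable identifiers rather than striding a single global stream; the hash, identifiers, and constants below are an illustrative sketch of that general idea, not the paper's scheme:

```python
import hashlib
import numpy as np

def seed_for(history_id, event_index, base_seed=12345):
    """Derive a per-history RNG seed by hashing stable identifiers: the
    same (history, event) pair always yields the same seed, regardless of
    the order in which threads reach it."""
    msg = f"{base_seed}:{history_id}:{event_index}".encode()
    return int.from_bytes(hashlib.sha256(msg).digest()[:8], "little")

# Each particle history gets an independent, reproducible stream.
streams = [np.random.default_rng(seed_for(h, 0)) for h in range(4)]
draws = [stream.random() for stream in streams]
```

Because the seed depends only on the identifiers, no global counter needs to be synchronized across threads, and repeated or out-of-order scheduling reproduces identical streams.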
To\nhave both reproducibility and concurrency, separate RNG states must be tracked\nfor each independently schedulable unit of simulation, forming independent\nrandom number streams.\n We propose an alternative to the stride-based parallel LCG seeding approach\nthat scales more practically with increased concurrency and workload by\ngenerating seeds through hashing and allowing for repeated outputs. Data\ngathered from normality tests of tally results from simple MC transport\nbenchmark calculations indicates that the proposed hash-based RNG does not\nsignificantly affect the tally result normality property as compared to the\nconventional stride-based RNG.", "category": "physics_comp-ph" }, { "text": "A massively parallel semi-Lagrangian solver for the six-dimensional\n Vlasov-Poisson equation: This paper presents an optimized and scalable semi-Lagrangian solver for the\nVlasov-Poisson system in six-dimensional phase space. Grid-based solvers of the\nVlasov equation are known to give accurate results. At the same time, these\nsolvers are challenged by the curse of dimensionality, resulting in very high\nmemory requirements and requiring highly efficient parallelization\nschemes. In this paper, we consider the 6d Vlasov-Poisson problem discretized\nby a split-step semi-Lagrangian scheme, using successive 1d interpolations on\n1d stripes of the 6d domain. Two parallelization paradigms are compared, a\nremapping scheme and a classical domain decomposition approach applied to the\nfull 6d problem. From numerical experiments, the latter approach is found to be\nsuperior in the massively parallel case in various respects. We address the\nchallenge of artificial time step restrictions due to the decomposition of the\ndomain by introducing a blocked one-sided communication scheme for the purely\nelectrostatic case and a rotating mesh for the case with a constant magnetic\nfield.
In addition, we propose a pipelining scheme that makes it possible to hide the\ncosts of halo communication between neighboring processes efficiently behind\nuseful computation. Parallel scalability on up to 65k processes is demonstrated\nfor benchmark problems on a supercomputer.", "category": "physics_comp-ph" }, { "text": "Deep learning for diffusion in porous media: We adopt convolutional neural networks (CNN) to predict the basic properties\nof porous media. Two different media types are considered: one mimics\nsand packings, and the other mimics systems derived from the extracellular\nspace of biological tissues. The Lattice Boltzmann Method is used to obtain the\nlabeled data necessary for performing supervised learning. We distinguish two\ntasks. In the first, networks based on the analysis of the system's geometry\npredict porosity and effective diffusion coefficient. In the second, networks\nreconstruct the concentration map. In the first task, we propose two types of\nCNN models: the C-Net and the encoder part of the U-Net. Both networks are\nmodified by adding a self-normalization module [Graczyk \textit{et al.}, Sci\nRep 12, 10583 (2022)]. The models predict with reasonable accuracy but only\nwithin the data type they are trained on. For instance, the model trained on\nsand packings-like samples overshoots or undershoots for biological-like\nsamples. In the second task, we propose using the U-Net architecture. It\naccurately reconstructs the concentration fields. In contrast to the first\ntask, the network trained on one data type works well for the other. For\ninstance, the model trained on sand packings-like samples works perfectly on\nbiological-like samples.
Finally, for both types of data, we fit\nexponents in Archie's law to find the tortuosity used to describe the\ndependence of the effective diffusion on porosity.", "category": "physics_comp-ph" }, { "text": "Graph-based linear scaling electronic structure theory: We show how graph theory can be combined with quantum theory to calculate the\nelectronic structure of large complex systems. The graph formalism is general\nand applicable to a broad range of electronic structure methods and materials,\nincluding challenging systems such as biomolecules. The methodology combines\nwell-controlled accuracy, low computational cost, and natural low-communication\nparallelism. This combination addresses substantial shortcomings of linear\nscaling electronic structure theory, in particular with respect to\nquantum-based molecular dynamics simulations.", "category": "physics_comp-ph" }, { "text": "A simple numerical scheme for the 2D shallow-water system: This paper presents a simple numerical scheme for the two-dimensional\nShallow-Water Equations (SWEs). Inspired by the study of the numerical\napproximation of the one-dimensional SWEs by Audusse et al. (2015), this paper\nextends the problem from 1D to 2D while preserving the simplicity of application.\nThe new scheme is implemented into the code TELEMAC-2D [tel2d, 2014] and\nseveral tests are made to verify the scheme's ability under an equilibrium state\nat rest and different types of flow regime (i.e., fluvial regime, transcritical\nflow from fluvial to torrential regime, transcritical flow with a hydraulic\njump). A sensitivity analysis is conducted to examine the scheme's convergence.
In contrast to previous methods, we solve for the magnetization\ndynamics and the electric potential in a self-consistent fashion. This\ntreatment allows for an accurate description of magnetization-dependent\nresistance changes. Moreover, the presented algorithm describes both spin\naccumulation due to smooth magnetization transitions and due to material\ninterfaces as in multilayer structures. The model and its finite-element\nimplementation are validated by current-driven motion of a magnetic vortex\nstructure. In a second experiment, the resistivity of a magnetic multilayer\nstructure as a function of the tilting angle of the magnetization in the\ndifferent layers is investigated. Both examples show good agreement with\nreference simulations and experiments, respectively.", "category": "physics_comp-ph" }, { "text": "Numerical issues of the two-dimensional Dirac equation: The two-dimensional Dirac equation has been widely used in graphene physics,\nthe surface of topological insulators, and especially quantum scarring.\nAlthough a numerical approach to tackling an arbitrary confining problem was\nproposed several years ago, several fundamental issues must be thoroughly\nunderstood and solved. In this work, we identify and address these challenges\nand finally develop a complete method, validated by comparison with analytical\nresults.", "category": "physics_comp-ph" }, { "text": "Geant4 modeling of the bremsstrahlung converter optimal thickness for\n studying the radiation damage processes in organic dyes solutions: This work is dedicated to computer modeling of the parameters of a tungsten\nconverter for studying the processes of radiation damage during the interaction\nof ionizing radiation with solutions of organic dyes. Simulation was carried\nout in order to determine the optimal thickness of the converter under\npredetermined experimental conditions.
Experimental conditions include:\nthe energies and type of primary particles, radiation intensity, target dimensions,\nand the relative position of the radiation source and target. Experimental studies of\nthe processes of radiation damage occurring in solutions of organic dyes are\nplanned to be carried out using a linear electron accelerator. The tungsten converter is\nused to generate a flux of bremsstrahlung gamma rays. One modeling problem is\nthe determination of the converter thickness at which the flux of bremsstrahlung\ngamma rays will be maximal in front of the target. At the same time, the flux of\nelectrons and positrons in front of the target should be as low as possible.\nAnother important task of the work is to identify the possibility of\ndetermining the relative amount of radiation damage in the target material by\nthe Geant4 modeling method. Computational experiments were carried out for\nvarious values of the converter thickness - from 0 mm (converter is absent) to\n8 mm with a step of 1 mm. The developed program operates in a multithreaded\nmode. The G4EmStandardPhysics_option3 model of the PhysicsList was used in the\ncalculations. A detailed analysis of the obtained data has been performed. As a\nresult of the data analysis, the optimal value of the tungsten converter\nthickness was obtained.", "category": "physics_comp-ph" }, { "text": "A Fast Parallel Poisson Solver on Irregular Domains Applied to Beam\n Dynamic Simulations: We discuss the scalable parallel solution of the Poisson equation within a\nParticle-In-Cell (PIC) code for the simulation of electron beams in particle\naccelerators of irregular shape. The problem is discretized by Finite\nDifferences. Depending on the treatment of the Dirichlet boundary the resulting\nsystem of equations is symmetric or `mildly' nonsymmetric positive definite. In\nall cases, the system is solved by the preconditioned conjugate gradient\nalgorithm with smoothed aggregation (SA) based algebraic multigrid (AMG)\npreconditioning.
We investigate variants of the implementation of SA-AMG that\nlead to considerable improvements in the execution times. We demonstrate good\nscalability of the solver on a distributed-memory parallel computer with up to\n2048 processors. We also compare our SA-AMG PCG solver with an FFT-based solver\nthat is more commonly used for applications in beam dynamics.", "category": "physics_comp-ph" }, { "text": "FELIX-1.0: A finite element solver for the time dependent generator\n coordinate method with the Gaussian overlap approximation: We describe the software package FELIX that solves the equations of the\ntime-dependent generator coordinate method (TDGCM) in N-dimensions (N $\geq$ 1)\nunder the Gaussian overlap approximation. The numerical resolution is based on\nthe Galerkin finite element discretization of the collective space and the\nCrank-Nicolson scheme for time integration. The TDGCM solver is implemented\nentirely in C++. Several additional tools written in C++, Python or bash\nscripting language are also included for convenience. In this paper, the solver\nis tested with a series of benchmark calculations. We also demonstrate the\nability of our code to handle a realistic calculation of fission dynamics.", "category": "physics_comp-ph" }, { "text": "New Immersed Boundary Method with Irrotational Discrete Delta Vector for\n Droplet Simulations with Large Density ratio: The Immersed Boundary Method (IBM) is one of the popular one-fluid mixed\nEulerian-Lagrangian methods to simulate motion of droplets. While the treatment\nof a moving complex boundary is an extremely time consuming and formidable task\nin a traditional boundary-fitted fluid solver, the one-fluid methods provide a\nrelatively easier way to track moving interfaces on a fixed Cartesian grid\nsince the regeneration of a mesh system that conforms to the interface at every\ntime step can be avoided.
In the IBM, a series of connected Lagrangian markers\nare used to represent a fluid-fluid interface and the boundary condition is\nenforced by adding a forcing term to the Navier-Stokes equations. It is known\nthat the IBM suffers from two problems. One is spontaneous generation of unphysical\nkinetic energy, which is known as parasitic currents, and the other is spurious\nreconstruction of the interface. These two problems need to be solved for useful\nlong-time-scale simulations of droplets with high density ratio and large\nsurface tension. This work identifies the discrete delta function as the\ncause of the unphysical parasitic currents. Specifically, the irrotational\ncondition is not preserved when the common discrete delta function is used to\nspread the surface tension from Lagrangian markers to Cartesian grid cells. To\nsolve this problem, a new scheme that preserves the irrotational condition is\nproposed to remove the spurious currents. Furthermore, for a smooth\nreconstruction of an interface, a B-spline fitting by least squares is adopted\nto relocate the Lagrangian markers. The conventional and new interpolation\nschemes are implemented in a multigrid finite volume Direct Numerical\nSimulation (DNS) solver and are subjected to standard test cases. It is\nconfirmed that the unphysical parasitic currents are substantially reduced in\nthe new scheme and the droplet's surface fluctuations are eliminated.", "category": "physics_comp-ph" }, { "text": "A diffuse-interface lattice Boltzmann method for fluid-particle\n interaction problems: In this paper, a diffuse-interface lattice Boltzmann method (DI-LBM) is\ndeveloped for fluid-particle interaction problems. In this method, the sharp\ninterface between the fluid and solid is replaced by a thin but nonzero\nthickness transition region named diffuse interface, where the physical\nvariables vary continuously.
In order to describe the diffuse interface, we\nintroduce a smooth function, which is similar to the order parameter in the\nphase-field model or the volume fraction of the solid phase in the partially\nsaturated lattice Boltzmann method (PS-LBM). In addition, to depict the\nfluid-particle interaction more accurately, a modified force term is also\nproposed and included in the evolution equation of the DI-LBM. Some classical\nproblems are used to test the DI-LBM, and the results are in good agreement\nwith some available theoretical and numerical works. Finally, it is also found\nthat the DI-LBM is more efficient and accurate than the PS-LBM with the\nsuperposition model.", "category": "physics_comp-ph" }, { "text": "Relativistic hydrodynamics on graphics processing units: Hydrodynamics calculations have been successfully used in studies of the bulk\nproperties of the Quark-Gluon Plasma, particularly of elliptic flow and shear\nviscosity. However, there are areas (for instance event-by-event simulations\nfor flow fluctuations and higher-order flow harmonics studies) where further\nadvancement is hampered by the lack of an efficient and precise 3+1D~program. This\nproblem can be solved by using Graphics Processing Unit (GPU) computing, which\noffers an unprecedented increase in computing power compared to standard CPU\nsimulations. In this work, we present an implementation of 3+1D ideal\nhydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA\nframework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a~slope limiter\nand MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating)\nschemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth\nand seventh (WENO) order of accuracy. A third-order Runge-Kutta scheme was used\nfor integration in the time domain. Our implementation improves the performance\nby about 2~orders of magnitude compared to a~single-threaded program.
Tests of the algorithm on a 1+1D~shock tube and on 3+1D~simulations with ellipsoidal and\nHubble-like expansion are presented.", "category": "physics_comp-ph" }, { "text": "Massively parallel pixel-by-pixel nanophotonic optimization using a\n Green's function formalism: We introduce an efficient parallelization scheme to implement pixel-by-pixel\nnanophotonic optimization using a Green's function based formalism. The crucial\ninsight in our proposal is the reframing of the optimization algorithm as a\nlarge-scale data processing pipeline, which allows for the efficient\ndistribution of computational tasks across thousands of workers. We demonstrate\nthe utility of our implementation by exercising it to optimize a high numerical\naperture focusing metalens at problem sizes that would otherwise be far out of\nreach for the Green's function based method. Finally, we highlight the\nconnection to powerful ideas from reinforcement learning as a natural corollary\nof reinterpreting the nanophotonic inverse design problem as a graph traversal\nenabled by the pixel-by-pixel optimization paradigm.", "category": "physics_comp-ph" }, { "text": "TetraScatt model: Born approximation for the estimation of acoustic\n dispersion of fluid-like objects of arbitrary geometries: Modelling the acoustic scattering response due to penetrable objects of\narbitrary shapes, such as those of many marine organisms, can be\ncomputationally intensive, often requiring high-performance computing equipment\nwhen considering a completely general situation. However, when the physical\nproperties (sound speed and density) of the scatterer object under\nconsideration are similar to those of the surrounding medium, the Born\napproximation provides a computationally efficient way to calculate the\nscattering.
For simple geometrical shapes like spheres and spheroids, the\nacoustic scattering in the far field can be evaluated through the Born\napproximation recipe as a formula which has been historically employed to\npredict the response of weakly scattering organisms, such as zooplankton.\nMoreover, the Born approximation has been extended to bodies whose geometry can\nbe described as a collection of non-circular rings centred on a smooth curve.\nIn this work, we have developed a numerical approach to calculate the far-field\nbackscattering by arbitrary 3D objects under the Born approximation. The\nobject's geometry is represented by a volumetric mesh composed of tetrahedrons,\nand the computation is efficiently performed through analytical 3D integration,\nyielding a solution expressed in terms of elementary functions for each\ntetrahedron. The method's correctness has been successfully validated against\nbenchmark solutions. Additionally, we present acoustic scattering results for\nspecies with complex shapes. To enable other researchers to use and validate\nthe method, a computational package named tetrascatt, implemented in the R\nprogramming language, was developed and published in the CRAN (Comprehensive R\nArchive Network).", "category": "physics_comp-ph" }, { "text": "Hybrid functionals for periodic systems in the density functional\n tight-binding method: Screened range-separated hybrid (SRSH) functionals within generalized\nKohn-Sham density functional theory (GKS-DFT) have been shown to restore a\ngeneral $1/(r\varepsilon)$ asymptotic decay of the electrostatic interaction in\ndielectric environments. Major achievements of SRSH include an improved\ndescription of optical properties of solids and correct prediction of\npolarization-induced fundamental gap renormalization in molecular crystals.
The\ndensity functional tight-binding method (DFTB) is an approximate DFT method that\nbridges the gap between first principles methods and empirical electronic\nstructure schemes. While purely long-range corrected RSH functionals are already accessible\nwithin DFTB for molecular systems, this work generalizes the theoretical\nfoundation to also include screened range-separated hybrids, with conventional\npure hybrid functionals as a special case. The presented formulation and\nimplementation are also valid for periodic boundary conditions (PBC) beyond the\n$\Gamma$-point. To treat periodic Fock exchange and its integrable singularity\nin reciprocal space, we resort to techniques successfully employed by DFT, in\nparticular a truncated Coulomb operator and the minimum image convention.\nStarting from the first principles Hartree-Fock operator, we derive suitable\nexpressions for the DFTB method, using standard integral approximations and\ntheir efficient implementation in the DFTB+ software package. Convergence\nbehavior is investigated and demonstrated for the polyacene series as well as\ntwo- and three-dimensional materials. Benzene and pentacene molecular and\ncrystalline systems show the correct polarization-induced gap renormalization\nby SRSH-DFTB at heavily reduced computational cost compared to first principles\nmethods.
When incorporated into standard pseudo-analytical\nalgorithms, moreover, the proposed method is quite robust, as it is not limited\nby absorption, anisotropy, and/or eccentering profile of the cylindrical\ngeophysical formations, nor is it limited by the radiation frequency. These\nattributes of the proposed method are in contrast to past analysis methods for\ntilted-layer media that often place limitations on the source and medium\ncharacteristics. Through analytical derivations as well as a preliminary\nnumerical investigation, we analyze and discuss the method's strengths and\nlimitations.", "category": "physics_comp-ph" }, { "text": "Borophene hydride: a stiff 2D material with high thermal conductivity\n and attractive optical and electronic properties: Two-dimensional (2D) structures of boron atoms, so-called borophene, have\nrecently attracted remarkable attention. In a recent exciting experimental\nstudy, a hydrogenated borophene structure was realized. Motivated by this\nsuccess, we conducted extensive first-principles calculations to explore the\nmechanical, thermal conduction, electronic and optical responses of borophene\nhydride. The mechanical response of borophene hydride was found to be\nanisotropic: it can yield an elastic modulus of 131 N/m and a high\ntensile strength of 19.9 N/m along the armchair direction. Notably, it was\nshown that by applying mechanical loading the metallic electronic character of\nborophene hydride can be altered to direct band-gap semiconducting, which is very\nappealing for applications in nanoelectronics. The absorption edge of the\nimaginary part of the dielectric function was found to occur in the visible\nrange of light for parallel polarization. Finally, it was estimated that this\nnovel 2D structure at room temperature can exhibit high thermal\nconductivities of 335 W/mK and 293 W/mK along zigzag and armchair directions,\nrespectively.
Our study confirms that borophene hydride benefits from an outstanding\ncombination of interesting mechanical, electronic, optical and thermal\nconduction properties, promising for the design of novel nanodevices.", "category": "physics_comp-ph" }, { "text": "Efficiency of linked cell algorithms: The linked cell list algorithm is an essential part of molecular simulation\nsoftware, both molecular dynamics and Monte Carlo. Though it scales linearly\nwith the number of particles, there has been a constant interest in increasing\nits efficiency, because a large part of CPU time is spent to identify the\ninteracting particles. Several recent publications proposed improvements to the\nalgorithm and investigated their efficiency by applying them to particular\nsetups. In this publication we develop a general method to evaluate the\nefficiency of these algorithms, which is mostly independent of the parameters\nof the simulation, and test it for a number of linked cell list algorithms. We\nalso propose a combination of linked cell reordering and interaction sorting\nthat shows a good efficiency for a broad range of simulation setups.", "category": "physics_comp-ph" }, { "text": "Two-Dimensional Hydrogen Structure at Ultra-High Pressure: We introduce a novel method that combines the accuracy of Quantum Monte Carlo\nsimulations with ab-initio Molecular Dynamics, in the spirit of Car-Parrinello.\nThis method is then used for investigating the structure of a two-dimensional\nlayer of hydrogen at $T=0~\text{K}$ and high densities. We find that\nmetallization is to be expected at $r_s \approx 1.1$, with an estimated\npressure of $1.0\cdot10^3~a_0~\text{GPa}$, changing from a graphene molecular\nlattice to an atomic phase.", "category": "physics_comp-ph" }, { "text": "A unified Eulerian framework for multimaterial continuum mechanics: A framework for simulating the interactions between multiple different\ncontinua is presented.
Each constituent material is governed by the same set of\nequations, differing only in their equations of state and strain\ndissipation functions. The interfaces between any combination of fluids,\nsolids, and vacuum are handled by a new Riemann Ghost Fluid Method, which is\nagnostic to the type of material on either side (depending only on the desired\nboundary conditions).\n The Godunov-Peshkov-Romenski (GPR) model is used for modeling the continua\n(having recently been used to solve a range of problems involving Newtonian and\nnon-Newtonian fluids, and elastic and elastoplastic solids), and this study\nrepresents a novel approach for handling multimaterial problems under this\nmodel.\n The resulting framework is simple, yet capable of accurately reproducing a\nwide range of different physical scenarios. It is demonstrated here to\naccurately reproduce analytical results for known Riemann problems, and to\nproduce expected results in other cases, including some featuring heat\nconduction across interfaces, and impact-induced deformation and detonation of\ncombustible materials. The framework thus has the potential to streamline\ndevelopment of simulation software for scenarios involving multiple materials\nand phases of matter, by reducing the number of different systems of equations\nthat require solvers, and cutting down on the amount of theoretical work\nrequired to deal with the interfaces between materials.", "category": "physics_comp-ph" }, { "text": "Data-driven dynamical coarse-graining for condensed matter systems: Simulations of condensed matter systems often focus on the dynamics of a few\ndistinguished components but require integrating the dynamics of the full\nsystem. A prime example is a molecular dynamics simulation of a (macro)molecule\nin solution, where both the molecule(s) and the solvent dynamics need to be\nintegrated.
This renders the simulations computationally costly and often\nunfeasible for physically or biologically relevant time scales. Standard\ncoarse-graining approaches are capable of reproducing equilibrium distributions and\nstructural features but do not properly include the dynamics. In this work, we\ndevelop a stochastic data-driven coarse-graining method inspired by the\nMori-Zwanzig formalism. This formalism shows that macroscopic systems with a\nlarge number of degrees of freedom can in principle be well described by a\nsmall number of relevant variables plus additional noise and memory terms. Our\ncoarse-graining method consists of numerical integrators for the distinguished\ncomponents of the system, where the noise and interaction terms with other\nsystem components are substituted by a random variable sampled from a\ndata-driven model. Applying our methodology to three different systems -- a\ndistinguished particle under a harmonic potential and under a bistable\npotential; and a dimer with two metastable configurations -- we show that the\nresulting coarse-grained models are capable of reproducing not only the correct\nequilibrium distributions but also the dynamic behavior due to temporal\ncorrelations and memory effects. Our coarse-graining method requires data from\nfull-scale simulations to be parametrized, and can in principle be extended to\ndifferent types of models beyond Langevin dynamics.", "category": "physics_comp-ph" }, { "text": "Reconstruction of rough surfaces from a single receiver at grazing angle: The paper develops a method for recovering a one-dimensional rough surface\nprofile from the scattered wave field, using a single receiver and repeated\nmeasurements when the surface is moving with respect to the source and receiver.\nThis extends a previously introduced marching method utilizing low grazing\nangles, and addresses the key issue of the requirement for many simultaneous\nreceivers.
The algorithm recovers the surface height below the receiver point\nstep-by-step as the surface is moved, using the parabolic wave integral\nequations. Numerical examples of reconstructed surfaces demonstrate that the\nmethod is robust for both Dirichlet and Neumann boundary conditions, and with\nrespect to different roughness characteristics and to some degree of\nmeasurement noise.", "category": "physics_comp-ph" }, { "text": "Lecture Notes of Tensor Network Contractions: Tensor network (TN), a young mathematical tool of high vitality and great\npotential, has been undergoing extremely rapid developments in the last two\ndecades, gaining tremendous success in condensed matter physics, atomic\nphysics, quantum information science, statistical physics, and so on. In these\nlecture notes, we focus on the contraction algorithms of TN as well as some of\nthe applications to the simulations of quantum many-body systems. Starting from\nbasic concepts and definitions, we first explain the relations between TN and\nphysical problems, including the TN representations of classical partition\nfunctions, quantum many-body states (by matrix product state, tree TN, and\nprojected entangled pair state), time evolution simulations, etc. These\nproblems, which are challenging to solve, can be transformed into TN contraction\nproblems. We then present several paradigm algorithms based on the ideas of the\nnumerical renormalization group and/or boundary states, including density\nmatrix renormalization group, time-evolving block decimation,\ncoarse-graining/corner tensor renormalization group, and several distinguished\nvariational algorithms. Finally, we revisit the TN approaches from the\nperspective of multi-linear algebra (also known as tensor algebra or tensor\ndecompositions) and quantum simulation.
Despite the apparent differences in the\nideas and strategies of different TN algorithms, we aim at revealing the\nunderlying relations and resemblances in order to present a systematic picture\nto understand the TN contraction approaches.", "category": "physics_comp-ph" }, { "text": "Heat and Mass Transfer during Chemical Vapor Deposition on the Particle\n Surface Subjected to Nanosecond Laser Heating: A thermal model of chemical vapor deposition of titanium nitride (TiN) on the\nspherical particle surface under irradiation by a nanosecond laser pulse is\npresented in this paper. Heat and mass transfer on a single spherical metal\npowder particle surface subjected to temporal Gaussian heat flux is\ninvestigated analytically. The chemical reaction on the particle surface and\nthe mass transfer in the gas phase are also considered. The surface\ntemperature, thermal penetration depth, and deposited film thickness under\ndifferent laser fluence, pulse width, initial particle temperature, and\nparticle radius are investigated. The effect of total pressure in the reaction\nchamber on deposition rate is studied as well. The particle-level model\npresented in this paper is an important step toward the development of a multiscale\nmodel of LCVI.", "category": "physics_comp-ph" }, { "text": "General framework for E(3)-equivariant neural network representation of\n density functional theory Hamiltonian: The combination of deep learning and ab initio calculations has shown great\npromise in revolutionizing future scientific research, but how to design neural\nnetwork models incorporating a priori knowledge and symmetry requirements is a\nkey challenge. Here we propose an E(3)-equivariant deep-learning\nframework to represent the density functional theory (DFT) Hamiltonian as a\nfunction of material structure, which can naturally preserve the Euclidean\nsymmetry even in the presence of spin-orbit coupling.
Our DeepH-E3 method\nenables very efficient electronic-structure calculations at ab initio accuracy\nby learning from DFT data of small-sized structures, making routine study of\nlarge-scale supercells ($> 10^4$ atoms) feasible. Remarkably, the method can\nreach sub-meV prediction accuracy at high training efficiency, showing\nstate-of-the-art performance in our experiments. The work is not only of\ngeneral significance to deep-learning method development, but also creates new\nopportunities for materials research, such as building a Moir\'e-twisted\nmaterial database.", "category": "physics_comp-ph" }, { "text": "Minimal subspace rotation on the Stiefel manifold for stabilization and\n enhancement of projection-based reduced order models for the compressible\n Navier-Stokes equations: For a projection-based reduced order model (ROM) of a fluid flow to be stable\nand accurate, the dynamics of the truncated subspace must be taken into\naccount. This paper proposes an approach for stabilizing and enhancing\nprojection-based fluid ROMs in which truncated modes are accounted for a priori\nvia a minimal rotation of the projection subspace. Attention is focused on the\nfull non-linear compressible Navier-Stokes equations in specific volume form as\na step toward a more general formulation for problems with generic\nnon-linearities. Unlike traditional approaches, no empirical turbulence\nmodeling terms are required, and consistency between the ROM and the full order\nmodel from which the ROM is derived is maintained. Mathematically, the approach\nis formulated as a trace minimization problem on the Stiefel manifold.
The\nreproductive as well as predictive capabilities of the method are evaluated on\nseveral compressible flow problems, including a problem involving laminar flow\nover an airfoil with a high angle of attack, and a channel-driven cavity flow\nproblem.", "category": "physics_comp-ph" }, { "text": "Strategies to cure numerical shock instability in HLLEM Riemann solver: The HLLEM scheme is a popular contact and shear preserving approximate\nRiemann solver for cheap and accurate computation of high speed gasdynamical\nflows. Unfortunately this scheme is known to be plagued by various forms of\nnumerical shock instability. In this paper we present various strategies to\nsave the HLLEM scheme from developing such spurious solutions. A linear scale\nanalysis of its mass and interface-normal momentum flux discretizations reveal\nthat its antidiffusive terms, which are primarily responsible for resolution of\nlinear wavefields, are inadvertently activated along a normal shock front due\nto numerical perturbations. These erroneously activated terms counteract the\nfavourable damping mechanism provided by its inherent HLL-type diffusive terms\nand trigger the shock instability. To avoid this, two different strategies are\nproposed for discretization of these critical flux components in the vicinity\nof a shock: one that deals with increasing the magnitude of inherent HLL-type\ndissipation through careful manipulation of specific non-linear wave speed\nestimates while the other deals with reducing the magnitude of these critical\nantidiffusive terms. A linear perturbation analysis is performed to gauge the\neffectiveness of these cures and estimate von-Neumann type stability bounds on\nthe CFL number arising from their use. 
Results from classic numerical test\ncases show that both types of modified HLLEM schemes are able to provide\nexcellent shock-stable solutions while retaining commendable accuracy on\nshear-dominated viscous flows.", "category": "physics_comp-ph" }, { "text": "WENO-Wombat: Scalable Fifth-Order Constrained-Transport\n Magnetohydrodynamics for Astrophysical Applications: Due to the increase in computing power, high-order Eulerian schemes will\nlikely become instrumental for the simulations of turbulence and magnetic field\namplification in astrophysical fluids in the coming years. We present the\nimplementation of a fifth order weighted essentially non-oscillatory scheme for\nconstrained-transport magnetohydrodynamics into the code WOMBAT. We establish\nthe correctness of our implementation with an extensive number of tests. We\nfind that the fifth order scheme performs as accurately as a common second\norder scheme at half the resolution. We argue that for a given solution quality\nthe new scheme is more computationally efficient than lower order schemes in\nthree dimensions. We also establish the performance characteristics of the\nsolver in the WOMBAT framework. Our implementation fully vectorizes using\nflattened arrays in thread-local memory. It performs at about 0.6 million zones\nper second per node on Intel Broadwell. We present scaling tests of the code up\nto 98 thousand cores on the Cray XC40 machine \"Hazel Hen\", with a sustained\nperformance of about 5 percent of peak at scale.", "category": "physics_comp-ph" }, { "text": "Numerical Simulation of the Perrin-like Experiments: A simple model of the random Brownian walk of a spherical mesoscopic particle\nin viscous liquids is proposed. The model can be both solved analytically and\nsimulated numerically.
The analytic solution gives the known\nEinstein-Smoluchowski diffusion law $\langle r^{2} \rangle = Dt$, where the\ndiffusion constant $D$ is expressed by the mass and geometry of a particle, the\nviscosity of a liquid and the average effective time between consecutive\ncollisions of the tracked particle with liquid molecules. The latter allows one\nto simulate the Perrin experiment and to verify, in a detailed study, the\ninfluence of the statistics on the expected theoretical results. To avoid the\nproblem of small statistics causing departures from the diffusion law we\nintroduce in the second part of the paper the idea of so-called Artificially\nIncreased Statistics (AIS) and prove that within this method of experimental\ndata analysis one can confirm the diffusion law and get a good prediction for\nthe diffusion constant even if trajectories of just a few particles immersed in\na liquid are considered.", "category": "physics_comp-ph" }, { "text": "Optimal array of sand fences: Sand fences are widely applied to prevent soil erosion by wind in areas\naffected by desertification. Sand fences also provide a way to reduce the\nemission rate of dust particles, which is triggered mainly by the impacts of\nwind-blown sand grains onto the soil and affects the Earth's climate. Many\ndifferent types of fence have been designed and their effects on the sediment\ntransport dynamics studied for many years. However, the search for the\noptimal array of fences has remained largely an empirical task. In order to\nachieve maximal soil protection using the minimal amount of fence material, a\nquantitative understanding of the flow profile over the relief encompassing the\narea to be protected including all employed fences is required. Here we use\nComputational Fluid Dynamics to calculate the average turbulent airflow through\nan array of fences as a function of the porosity, spacing and height of the\nfences.
Specifically, we investigate the factors controlling the fraction of\nsoil area over which the basal average wind shear velocity drops below the\nthreshold for sand transport when the fences are applied. We introduce a cost\nfunction, given by the amount of material necessary to construct the fences. We\nfind that, for typical sand-moving wind velocities, the optimal fence height\n(which minimizes this cost function) is around $50\\,$cm, while using fences of\nheight around $1.25\\,$m leads to maximal cost.", "category": "physics_comp-ph" }, { "text": "Investigating the interplay between mechanisms of anomalous diffusion\n via fractional Brownian walks on a comb-like structure: The comb model is a simplified description for anomalous diffusion under\ngeometric constraints. It represents particles spreading out in a\ntwo-dimensional space where the motions in the x-direction are allowed only\nwhen the y coordinate of the particle is zero. Here, we propose an extension\nfor the comb model via Langevin-like equations driven by fractional Gaussian\nnoises (long-range correlated). By carrying out computer simulations, we show\nthat the correlations in the y-direction affect the diffusive behavior in the\nx-direction in a non-trivial fashion, resulting in a quite rich diffusive\nscenario characterized by usual, superdiffusive or subdiffusive scaling of\nsecond moment in the x-direction. We further show that the long-range\ncorrelations affect the probability distribution of the particle positions in\nthe x-direction, making their tails longer when noise in the y-direction is\npersistent and shorter for anti-persistent noise. 
Our model thus combines different\nmechanisms of anomalous diffusion (geometric constraints and long-range\ncorrelations), allows the study of their interplay, and may find direct\napplications for describing diffusion in complex systems such as living\ncells.", "category": "physics_comp-ph" }, { "text": "UGKWP method for polydisperse gas-solid particle multiphase flow: In this paper, a unified algorithm will be proposed for the study of\ngas-solid particle multiphase flow. The gas-kinetic scheme (GKS) is used to\nsimulate the continuum gas phase and the multiscale unified gas-kinetic\nwave-particle (UGKWP) method is developed for the multiple dispersed solid\nparticle phases. At the same time, the momentum and energy exchanges between\ngas-particle phases will be included under the GKS-UGKWP framework. For each\ndisperse solid particle phase, the decomposition of deterministic wave and\nstatistic particle in UGKWP is based on the local cell's Knudsen number. This\nis very significant for simulating dispersed particle phases at different\nKnudsen numbers due to the variation of physical properties in each individual\nparticle phase, such as the particle diameter, material density, and the\ncorresponding mass fraction inside each control volume. For the gas phase, the\nGKS is basically an Eulerian approach for the NS solution. Therefore, the\nGKS-UGKWP method for the gas-particle flow unifies the Eulerian-Eulerian (EE)\nand Eulerian-Lagrangian (EL) methods. An optimal strategy can be obtained for\nthe solid particle phase with the consideration of physical accuracy and\nnumerical efficiency. Two cases of gas-solid fluidization systems, i.e., one\ncirculating fluidized bed and one turbulent fluidized bed, are simulated. The\ntypical flow structures of the fluidized particles are captured, and the\ntime-averaged variables of the flow field agree well with the experimental\nmeasurements.
In addition, the shock particle-bed interaction is studied by the\nproposed GKS-UGKWP, which validates the method for polydisperse gas-particle\nsystems in the supersonic case, where the dynamic evolution process of the\nparticle cloud is investigated.", "category": "physics_comp-ph" }, { "text": "On the design of Monte-Carlo particle coagulation solver interface: a\n CPU/GPU Super-Droplet Method case study with PySDM: The Super-Droplet Method (SDM) is a probabilistic Monte-Carlo-type model of\nthe particle coagulation process, an alternative to the mean-field formulation\nof Smoluchowski. As an algorithm, SDM has linear computational complexity with\nrespect to the state vector length; the state vector length is constant\nthroughout the simulation, and most of the algorithm steps are readily\nparallelizable. This paper discusses the design and implementation of two\nnumber-crunching backends for SDM implemented in PySDM, a new free and\nopen-source Python package for simulating the dynamics of atmospheric aerosol,\ncloud and rain particles. The two backends share their application programming\ninterface (API) but leverage distinct parallelism paradigms, target different\nhardware, and are built on top of different lower-level routine sets. The first\noffers multi-threaded CPU computations and is based on Numba (using Numpy\narrays). The second offers GPU computations and is built on top of ThrustRTC\nand CURandRTC (and does not use Numpy arrays). In the paper, the API is\ndiscussed focusing on: data dependencies across steps, parallelisation\nopportunities, CPU and GPU implementation nuances, and algorithm workflow.\nExample simulations suitable for validating implementations of the API are\npresented.", "category": "physics_comp-ph" }, { "text": "Transition Jitter in Heat Assisted Magnetic Recording by Micromagnetic\n Simulation: In this paper we apply an extended Landau-Lifshitz equation, as introduced\nby Ba\v{n}as et al.
for the simulation of heat-assisted magnetic recording.\nThis equation has similarities with the Landau-Lifshitz-Bloch equation. The\nBa\\v{n}as equation is intended to be used in a continuum setting with sub-grain\ndiscretization by the finite-element method. Thus, local geometric features and\nnonuniform magnetic states during switching are taken into account. We\nimplement the Ba\\v{n}as model and test its capability for predicting the\nrecording performance in a realistic recording scenario. By performing\nrecording simulations on 100 media slabs with randomized granular structure and\nconsecutive read-back calculations, the write position shift and transition\njitter for bit lengths of 10 nm, 12 nm, and 20 nm are calculated.", "category": "physics_comp-ph" }, { "text": "Resonating Valence Bond Quantum Monte Carlo: Application to the ozone\n molecule: We study the potential energy surface of the ozone molecule by means of\nQuantum Monte Carlo simulations based on the resonating valence bond concept.\nThe trial wave function consists of an antisymmetrized geminal power arranged\nin a single determinant that is multiplied by a Jastrow correlation factor.\nWhereas the determinantal part incorporates static correlation effects, the\naugmented real-space correlation factor accounts for the dynamic electron\ncorrelation. The accuracy of this approach is demonstrated by computing the\npotential energy surface for the ozone molecule in three vibrational states:\nsymmetric, asymmetric and scissoring. We find that the employed wave function\nprovides a detailed description of rather strongly-correlated multi-reference\nsystems, which is in quantitative agreement with experiment.
Assumed to be localized,\nthese moire excitons could form ordered quantum dot arrays, paving the way for\nnovel optoelectronic and quantum information applications. Here we perform\nfirst principles simulations to shed light on moire excitons in twisted\nMoS2/WS2 heterostructures. We provide direct evidence of localized\ninterlayer moire excitons in vdW heterostructures. The moire potentials are\nmapped out based on spatial modulations of energy gaps. Nearly flat valence\nbands are observed in the heterostructures without magic angles. The dependence\nof spatial localization and binding energy of the moire excitons on the twist\nangle of the heterostructures is examined. We explore how an electric field can\nbe tuned to control the position, polarity, emission energy, and hybridization\nstrength of the moire excitons. We predict that alternating electric fields\ncould modulate the dipole moments of hybridized moire excitons and suppress\ntheir diffusion in moire lattices.", "category": "physics_comp-ph" }, { "text": "A non-empirical free volume viscosity model for alkane lubricants under\n severe pressures: Viscosities $\eta$ and diffusion coefficients $D_s$ of linear and branched\nalkanes at high pressures $P<0.7$ GPa and temperatures $T$=500-600 K are\ncalculated by equilibrium molecular dynamics. Combining Stokes-Einstein, free\nvolume and random walk concepts results in an accurate viscosity model\n$\eta(D_s(P,T))$ for the considered P and T. All model parameters (hydrodynamic\nradius, random walk step size and attempt frequency) are defined as microscopic\nensemble averages and extracted from EMD simulations, rendering\n$\eta(D_s(P,T))$ a parameter-free predictor for lubrication simulations.", "category": "physics_comp-ph" }, { "text": "Monographie sur le tol\u00e9rancement modal: In order to analyze the geometric quality of any surface we have defined a\nshape language that can be used in tolerancing and metrology software.
Modal\nparameters define a shape language allowing one to describe geometric\nvariations combining undulation, form, position, orientation and dimensions. It\ndefines a geometric basis that is easy to use, whether simply by an ordinary\nuser or in depth by an expert. The principal properties of this basis are the\nexhaustiveness and the metric of the parameters. We can use natural mode\nshapes, which can be modified by technological mode shapes.", "category": "physics_comp-ph" }, { "text": "Eulerian method for multiphase interactions of soft solid bodies in\n fluids: We introduce an Eulerian approach for problems involving one or more soft\nsolids immersed in a fluid, which permits mechanical interactions between all\nphases. The reference map variable is exploited to simulate finite-deformation\nconstitutive relations in the solid(s) on the same fixed grid as the fluid\nphase, which greatly simplifies the coupling between phases. Our coupling\nprocedure, a key contribution in the current work, is shown to be\ncomputationally faster and more stable than an earlier approach, and admits the\nability to simulate both fluid--solid and solid--solid interaction between\nsubmerged bodies. The interface treatment is demonstrated with multiple\nexamples involving a weakly compressible Navier--Stokes fluid interacting with\na neo-Hookean solid, and we verify the method's convergence. The solid contact\nmethod, which exploits distance measures already existing on the grid, is\ndemonstrated with two examples. A new, general routine for cross-interface\nextrapolation is introduced and used as part of the new interfacial treatment.", "category": "physics_comp-ph" }, { "text": "Hybrid Auxiliary Field Quantum Monte Carlo for Molecular Systems: We propose a quantum Monte Carlo approach to solve the many-body\nSchrodinger equation for the electronic ground state.
The method combines\noptimization from variational Monte Carlo and propagation from auxiliary field\nquantum Monte Carlo, in a way that significantly alleviates the sign problem.\nIn application to molecular systems, we obtain highly accurate results for\nconfigurations dominated by either dynamic or static electronic correlation.", "category": "physics_comp-ph" }, { "text": "LBsoft: a parallel open-source software for simulation of colloidal\n systems: We present LBsoft, an open-source software developed mainly to simulate the\nhydrodynamics of colloidal systems based on the concurrent coupling between\nlattice Boltzmann methods for the fluid and discrete particle dynamics for the\ncolloids. Such coupling has been developed before, but, to the best of our\nknowledge, no detailed discussion of the programming issues to be faced in\norder to attain an efficient implementation on parallel architectures has ever\nbeen presented to date. In this paper, we describe in detail the underlying\nmulti-scale models and their coupling procedure, along with a description of\nthe relevant input variables, to facilitate third-party usage. The code is\ndesigned to exploit parallel computing platforms, taking advantage also of the\nrecent AVX-512 instruction set. We focus on LBsoft structure, functionality,\nparallel implementation, performance and availability, so as to facilitate\naccess to this computational tool for the research community in the field. The\ncapabilities of LBsoft are highlighted for a number of prototypical case\nstudies, such as Pickering emulsions, bicontinuous systems, as well as an\noriginal study of the coarsening process in confined bijels under shear.", "category": "physics_comp-ph" }, { "text": "Implementation of a hybrid particle code with a PIC description in r-z\n and a gridless description in $\u03c6$ into OSIRIS: For many plasma physics problems, three-dimensional and kinetic effects are\nvery important.
However, such simulations are very computationally intensive.\nFortunately, there is a class of problems for which there is nearly azimuthal\nsymmetry and the dominant three-dimensional physics is captured by the\ninclusion of only a few azimuthal harmonics. Recently, it was proposed [A.\nLifschitz et al., J. Comp. Phys. 228 (5) (2009) 1803-1814] to model one such\nproblem, laser wakefield acceleration, by expanding the fields and currents in\nazimuthal harmonics and truncating the expansion after only the first harmonic.\nThe complex amplitudes of the fundamental and first harmonic for the fields\nwere solved on an r-z grid, and a procedure for calculating the complex current\namplitudes for each particle based on its motion in Cartesian geometry was\npresented using Marder's correction to maintain the validity of Gauss's law.\nIn this paper, we describe an implementation of this algorithm into OSIRIS\nusing a rigorous charge-conserving current deposition method to maintain the\nvalidity of Gauss's law. We show that this algorithm is a hybrid method which\nuses a particle-in-cell description in r-z and a gridless description in\n$\\phi$. We include the ability to keep an arbitrary number of harmonics and\nhigher order particle shapes. Examples for laser wakefield acceleration,\nplasma wakefield acceleration, and beam loading are also presented, and\ndirections for future work are discussed.", "category": "physics_comp-ph" }, { "text": "Modelling Meso-Scale Diffusion Processes in Stochastic Fluid\n Bio-Membranes: The space-time dynamics of rigid inhomogeneities (inclusions) free to move in\na randomly fluctuating fluid bio-membrane is derived and numerically simulated\nas a function of the membrane shape changes. Both vertically placed (embedded)\ninclusions and horizontally placed (surface) inclusions are considered.
The\nenergetics of the membrane, as a two-dimensional (2D) meso-scale continuum\nsheet, is described by the Canham-Helfrich Hamiltonian, with the membrane\nheight function treated as a stochastic process. The diffusion parameter of\nthis process acts as the link coupling the membrane shape fluctuations to the\nkinematics of the inclusions. The latter is described via an Ito stochastic\ndifferential equation. In addition to stochastic forces, the inclusions also\nexperience membrane-induced deterministic forces. Our aim is to simulate the\ndiffusion-driven aggregation of inclusions and show how the external inclusions\narrive at the sites of the embedded inclusions. The model has potential use in\nsuch emerging fields as the design of targeted drug delivery systems.", "category": "physics_comp-ph" }, { "text": "Characteristic boundary conditions for magnetohydrodynamic equations: In the present study, a characteristic-based boundary condition scheme is\ndeveloped for the compressible magnetohydrodynamic (MHD) equations in the\ngeneral curvilinear coordinate system, which is an extension of the\ncharacteristic boundary scheme for the Navier-Stokes equations. The\neigenstructure and the complete set of characteristic waves are derived for the\nideal MHD equations in general curvilinear coordinates $(\xi, \eta, \zeta)$.\nThe characteristic boundary conditions are derived and implemented in a\nhigh-order MHD solver where the sixth-order compact scheme is used for the\nspatial discretization. The fifth-order Weighted Essentially Non-Oscillatory\n(WENO) scheme is also employed for the spatial discretization of problems with\ndiscontinuities. In our MHD solver, the fourth-order Runge-Kutta scheme is\nutilized for time integration. The characteristic boundary scheme is first\nverified for the non-magnetic (i.e., $\mathbf{B}=\textbf{0}$) Sod shock tube\nproblem.
Then, various in-house test cases are designed to examine the derived\nMHD characteristic boundary scheme for three different types of boundaries:\nnon-reflecting inlet and outlet, solid wall, and single characteristic wave\ninjection. The numerical examples demonstrate the accuracy and robustness of\nthe MHD characteristic boundary scheme.", "category": "physics_comp-ph" }, { "text": "PDE-NetGen 1.0: from symbolic PDE representations of physical processes\n to trainable neural network representations: Bridging physics and deep learning is a topical challenge. While deep\nlearning frameworks open avenues in physical science, the design of\nphysically-consistent deep neural network architectures is an open issue. In\nthe spirit of physics-informed NNs, the PDE-NetGen package provides new means\nto automatically translate physical equations, given as PDEs, into neural\nnetwork architectures. PDE-NetGen combines symbolic calculus and a neural\nnetwork generator. The latter exploits NN-based implementations of PDE solvers\nusing Keras. With some knowledge of a problem, PDE-NetGen is a plug-and-play\ntool to generate physics-informed NN architectures. They provide\ncomputationally-efficient yet compact representations to address a variety of\nissues, including among others adjoint derivation, model calibration,\nforecasting, data assimilation as well as uncertainty quantification. As an\nillustration, the workflow is first presented for the 2D diffusion equation,\nthen applied to the data-driven and physics-informed identification of\nuncertainty dynamics for the Burgers equation.", "category": "physics_comp-ph" }, { "text": "Size reduction of complex networks preserving modularity: The ubiquity of modular structure in real-world complex networks has been the\nfocus of many attempts to understand the interplay between network\ntopology and functionality.
The best approaches to the identification of\nmodular structure are based on the optimization of a quality function known as\nmodularity. However, this optimization is a hard task, given that the problem\nis in the NP-hard class. Here we propose an exact method for reducing the size\nof weighted (directed and undirected) complex networks while keeping its\nmodularity invariant. This size reduction allows heuristic algorithms that\noptimize modularity to better explore the modularity landscape. We compare the\nmodularity obtained in several real complex networks by using the Extremal\nOptimization algorithm, before and after the size reduction, showing the\nimprovement obtained. We speculate that the proposed analytical size reduction\ncould be extended to an exact coarse graining of the network in the scope of\nreal-space renormalization.", "category": "physics_comp-ph" }, { "text": "Flexible Bond and Angle, FBA/epsilon model of water: We propose a new flexible force field for water. The model, in addition to\nthe Lennard-Jones and electrostatic parameters, includes the flexibility of the\nOH bonds and angles. The parameters are selected to give the experimental\nvalues of the density and dielectric constant of water at 1 bar and 240 K and\nthe dipole moment of minimum density. The FBA/epsilon model reproduces the\nexperimental values of the structural and thermodynamic properties and the\nphase behavior of water in a wide range of temperatures with better accuracy\nthan atomistic and other flexible models. We expect that this new approach\nwould be suitable for studying water solutions.", "category": "physics_comp-ph" }, { "text": "Application of artificial neural networks for rigid lattice kinetic\n Monte Carlo studies of Cu surface diffusion: Kinetic Monte Carlo (KMC) is a powerful method for the simulation of\ndiffusion processes in various systems.
The accuracy of the method, however, relies on\nthe level of detail used for the parameterization of the model. Migration\nbarriers are often used to describe diffusion on the atomic scale, but the full\nset of these barriers may easily become unmanageable in materials with\nincreased chemical complexity or a large number of defects. This work is a\nfeasibility study of applying a machine learning approach to Cu surface\ndiffusion. We train an artificial neural network on a subset of the large set\nof $2^{26}$ barriers needed to correctly describe the surface diffusion in Cu.\nOur KMC simulations using the obtained barrier predictor show sufficient\naccuracy in modelling processes on the low-index surfaces and display the\ncorrect thermodynamic stability of these surfaces.", "category": "physics_comp-ph" }, { "text": "A Discontinuous Galerkin method for three-dimensional elastic and\n poroelastic wave propagation: forward and adjoint problems: We develop a numerical solver for three-dimensional wave propagation in\ncoupled poroelastic-elastic media, based on a high-order discontinuous Galerkin\n(DG) method, with the Biot poroelastic wave equation formulated as a first\norder conservative velocity/strain hyperbolic system. To derive an upwind\nnumerical flux, we find an exact solution to the Riemann problem, including the\nporoelastic-elastic interface; we also consider attenuation mechanisms both in\nBiot's low- and high-frequency regimes. Using either a low-storage explicit or\nimplicit-explicit (IMEX) Runge-Kutta scheme, according to the stiffness of the\nproblem, we study the convergence properties of the proposed DG scheme and\nverify its numerical accuracy.
In the Biot low-frequency case, the wave can be\nhighly dissipative for small permeabilities; here, numerical errors associated\nwith the dissipation terms appear to dominate those arising from discretisation\nof the main hyperbolic system.\n We then implement the adjoint method for this formulation of Biot's equation.\nIn contrast with the usual second order formulation of the Biot equation, we\nare not dealing with a self-adjoint system but, with an appropriate inner\nproduct, the adjoint may be identified with a non-conservative velocity/stress\nformulation of the Biot equation. We derive dual fluxes for the adjoint and\npresent a simple but illuminating example of the application of the adjoint\nmethod.", "category": "physics_comp-ph" }, { "text": "The Adaptive Shift Method in Full Configuration Interaction Quantum\n Monte Carlo: Development and Applications: In a recent paper, we proposed the adaptive shift method for correcting the\nundersampling bias of the initiator-FCIQMC. The method allows faster\nconvergence to the FCI limit with the number of walkers than the normal\ninitiator method, particularly for large systems. In its application to\nstrongly correlated molecules, however, the method is prone to overshooting the\nFCI energy at intermediate walker numbers, with convergence to the FCI limit\nfrom below. In this paper, we present a solution to the overshooting problem in\nstrongly correlated molecules, as well as further accelerating convergence to\nthe FCI energy. This is achieved by offsetting the reference energy to a value\ntypically below the Hartree-Fock energy but above the exact energy. This\noffsetting procedure does not change the exactness property of the algorithm,\nnamely convergence to the exact FCI solution in the large-walker limit, but at\nits optimal value greatly accelerates convergence. There is no overhead cost\nassociated with this offsetting procedure, which therefore represents a pure\nand substantial computational gain.
We illustrate the behavior of this offset\nadaptive shift method by applying it to the N$_2$ molecule, the ozone molecule\nat three different geometries (equilibrium open minimum, a hypothetical ring\nminimum, and a transition state) in three basis sets (cc-pV$X$Z, $X$=D,T,Q),\nand the chromium dimer in the cc-pVDZ basis set, correlating 28 electrons in 76\norbitals. We show that in most cases the offset adaptive shift method converges\nmuch faster than both the normal initiator method and the original adaptive\nshift method.", "category": "physics_comp-ph" }, { "text": "A nonlocal operator method for solving partial differential equations: We propose a nonlocal operator method for solving partial differential\nequations (PDEs). The nonlocal operator is derived from the Taylor series\nexpansion of the unknown field, and can be regarded as the integral form\n\"equivalent\" to the differential form in the sense of nonlocal interaction. The\nvariation of a nonlocal operator is similar to the derivative of the shape\nfunction in meshless and finite element methods, thus circumventing the\ndifficulty of calculating shape functions and their derivatives. The nonlocal\noperator method is consistent with the variational principle and the weighted\nresidual method, based on which the residual and the tangent stiffness matrix\ncan be obtained with ease. The nonlocal operator method is equipped with an\nhourglass energy functional to satisfy the linear consistency of the field.\nHigher order nonlocal operators and higher order hourglass energy functionals\nare generalized. The functional based on the nonlocal operator converts the\nconstruction of the residual and stiffness matrix into a series of matrix\nmultiplications on the nonlocal operators. The nonlocal strong forms of\ndifferent functionals can be obtained easily via support and dual-support, two\nbasic concepts introduced in the paper.
Several numerical examples are\npresented to validate the method.", "category": "physics_comp-ph" }, { "text": "Turbulence Model Development based on a Novel Method Combining Gene\n Expression Programming with an Artificial Neural Network: Data-driven methods are widely used to develop physical models, but there\nstill exist limitations that affect their performance, generalizability and\nrobustness. By combining gene expression programming (GEP) with artificial\nneural network (ANN), we propose a novel method for symbolic regression called\nthe gene expression programming neural network (GEPNN). In this method,\ncandidate expressions generated by evolutionary algorithms are transformed\nbetween the GEP and ANN structures during training iterations, and efficient\nand robust convergence to accurate models is achieved by combining the GEP's\nglobal searching and the ANN's gradient optimization capabilities. In addition,\nsparsity-enhancing strategies have been introduced to GEPNN to improve the\ninterpretability of the trained models. The GEPNN method has been tested for\nfinding different physical laws, showing improved convergence to models with\nprecise coefficients. 
Furthermore, for large-eddy simulation of turbulence, the\nsubgrid-scale stress model trained by GEPNN significantly improves the\nprediction of turbulence statistics and flow structures over traditional\nmodels, showing advantages compared to the existing GEP and ANN methods in both\na priori and a posteriori tests.", "category": "physics_comp-ph" }, { "text": "Facilitating {\\it ab initio} configurational sampling of multicomponent\n solids using an on-lattice neural network model and active learning: We propose a scheme for {\\it ab initio} configurational sampling in\nmulticomponent crystalline solids using Behler-Parinello type neural network\npotentials (NNPs) in an unconventional way: the NNPs are trained to predict the\nenergies of relaxed structures from the perfect lattice with configurational\ndisorder instead of the usual way of training to predict energies as functions\nof continuous atom coordinates. An active learning scheme is employed to obtain\na training set containing configurations of thermodynamic relevance. This\nenables bypassing of the structural relaxation procedure which is necessary\nwhen applying conventional NNP approaches to the lattice configuration problem.\nThe idea is demonstrated on the calculation of the temperature dependence of\nthe degree of A/B site inversion in three spinel oxides, MgAl$_2$O$_4$,\nZnAl$_2$O$_4$, and MgGa$_2$O$_4$. The present scheme may serve as an\nalternative to cluster expansion for `difficult' systems, e.g., complex bulk or\ninterface systems with many components and sublattices that are relevant to\nmany technological applications today.", "category": "physics_comp-ph" }, { "text": "Magnetic quantization in multilayer graphenes: Essential properties of multilayer graphenes are diversified by the number of\nlayers and the stacking configurations. For an $N$-layer system, Landau levels\nare divided into $N$ groups, with each identified by a dominant sublattice\nassociated with the stacking configuration. 
We focus on the main\ncharacteristics of Landau levels, including the degeneracy, wave functions,\nquantum numbers, onset energies, field-dependent energy spectra,\nsemiconductor-metal transitions, and crossing patterns, which are reflected in\nmagneto-optical spectroscopy, scanning tunneling spectroscopy, and quantum\ntransport experiments. The Landau levels in AA-stacked graphene are responsible\nfor multiple Dirac cones, while in AB-stacked graphene the Dirac properties\ndepend on the number of graphene layers, and in ABC-stacked graphene the\nlow-lying levels are related to surface states. The Landau-level mixing leads\nto anticrossing patterns in the energy spectra, which are seen for intergroup\nLandau levels in AB-stacked graphene, while both intergroup and intragroup\nanticrossings are observed in ABC-stacked graphene.\nThe aforementioned magneto-electronic properties lead to diverse optical\nspectra, plasma spectra, and transport properties when the stacking order and\nthe number of layers are varied. The calculations are in agreement with optical\nand transport experiments, and novel features that have not yet been verified\nexperimentally are presented.", "category": "physics_comp-ph" }, { "text": "WENO interpolation-based and upwind-biased schemes with free-stream\n preservation: Based on the understanding of linear upwind schemes with flux\nsplitting to achieve free-stream preservation (Q. Li et al., Commun. Comput.\nPhys., 22 (2017) 64-94), a series of WENO interpolation-based and upwind-biased\nnonlinear schemes are proposed in this study. By engaging fluxes\nat midpoints, the nonlinearity of the schemes is introduced through WENO\ninterpolations, and upwind-biased features are acquired through the choice of\nthe dependent grid stencil. Regarding the third- and fifth-order versions, schemes\nwith one and two midpoints are devised and carefully tested.
With the\nintegration of the piecewise-polynomial mapping function methods (Q. Li et al.,\nCommun. Comput. Phys. 18 (2015) 1417-1444), the proposed schemes are found to\nachieve the designed orders and the free-stream preservation property. In the 1-D Sod\nand Shu-Osher problems, all schemes yield accurate predictions. In\n2-D cases, vortex preservation, supersonic inviscid flow around a cylinder at\nM=4, the Riemann problem and the shock-vortex interaction problem are tested. In each\nproblem, two types of grids are employed, i.e. uniform/smooth grids and\nrandomized/partially-randomized grids. On the latter, the shock wave and\ncomplex flow structures are located or partially located. All schemes complete the\ncomputations on uniform/smooth grids with satisfactory results. On randomized\ngrids, all schemes complete the computations and yield reasonable results, except\nthat the third-order scheme with two midpoints fails in the Riemann problem and\nthe shock-vortex interaction problem. Overall, the proposed schemes\ndemonstrate the capability to solve problems on low-quality grids, and\ntherefore indicate their potential in engineering applications.
Such non-local basis functions may\nbe efficiently found in negligible time with the recently introduced\nD{\\l}otko--Specogna (DS) algorithm.", "category": "physics_comp-ph" }, { "text": "Energy Loss of High-Energy Particles in Particle-in-Cell Simulation: When a charged particle moves through a plasma at a speed much higher than\nthe thermal velocity of the plasma, it is subjected to the force of the\nelectrostatic field induced in the plasma by itself and loses its energy. This\nprocess is well-known as the stopping power of a plasma. In this paper we show\nthat the same process works in particle-in-cell (PIC) simulations as well and\nthe energy loss rate of fast particles due to this process is mainly determined\nby the number of plasma electrons contained in the electron skin depth volume.\nHowever, since there are generally very few particles in that volume in PIC\nsimulations compared with real plasmas, the energy loss effect can be\nexaggerated significantly and can affect the results. Therefore, especially for\nthe simulations that investigate the particle acceleration processes, the\nnumber of particles used in the simulations should be chosen large enough to\navoid this artificial energy loss.", "category": "physics_comp-ph" }, { "text": "GenASiS Mathematics: Object-oriented manifolds, operations, and solvers\n for large-scale physics simulations: The large-scale computer simulation of a system of physical fields governed\nby partial differential equations requires some means of approximating the\nmathematical limit of continuity. For example, conservation laws are often\ntreated with a `finite-volume' approach in which space is partitioned into a\nlarge number of small `cells,' with fluxes through cell faces providing an\nintuitive discretization modeled on the mathematical definition of the\ndivergence operator. 
Here we describe and make available Fortran 2003 classes\nfurnishing extensible object-oriented implementations of simple meshes and the\nevolution of generic conserved currents thereon, along with individual `unit\ntest' programs and larger example problems demonstrating their use. These\nclasses inaugurate the Mathematics division of our developing astrophysics\nsimulation code GenASiS (General Astrophysical Simulation System), which will\nbe expanded over time to include additional meshing options, mathematical\noperations, solver types, and solver variations appropriate for many\nmultiphysics applications.", "category": "physics_comp-ph" }, { "text": "Lattice Boltzmann model for weakly compressible flows: We present an energy conserving lattice Boltzmann model based on a\ncrystallographic lattice for simulation of weakly compressible flows. The\ntheoretical requirements and the methodology to construct such a model are\ndiscussed. We demonstrate that the model recovers the isentropic sound speed in\naddition to the effects of viscous heating and heat flux dynamics. Several test\ncases for acoustics, thermal and thermoacoustic flows are simulated to show the\naccuracy of the proposed model.", "category": "physics_comp-ph" }, { "text": "Mutation++: MUlticomponent Thermodynamic And Transport properties for\n IONized gases in C++: The Mutation++ library provides accurate and efficient computation of\nphysicochemical properties associated with partially ionized gases in various\ndegrees of thermal nonequilibrium. With v1.0.0, users can compute thermodynamic\nand transport properties, multiphase linearly-constrained equilibria, chemical\nproduction rates, energy transfer rates, and gas-surface interactions. The\nframework is based on an object-oriented design in C++, allowing users to\nplug-and-play various models, algorithms, and data as necessary. 
Mutation++ is\navailable open-source under the GNU Lesser General Public License v3.0.", "category": "physics_comp-ph" }, { "text": "Micromechanics-based prediction of thermoelastic properties of high\n energy materials: High energy materials such as polymer bonded explosives are commonly used as\npropellants. These particulate composites contain explosive crystals suspended\nin a rubbery binder. However, the explosive nature of these materials limits\nthe determination of their mechanical properties by experimental means.\nTherefore micromechanics-based methods for the determination of the effective\nthermoelastic properties of polymer bonded explosives are investigated in this\nresearch. Polymer bonded explosives are two-component particulate composites\nwith high volume fractions of particles (volume fraction $>$ 90%) and high\nmodulus contrast (ratio of Young's modulus of particles to binder of\n5,000-10,000). Experimentally determined elastic moduli of one such material,\nPBX 9501, are used to validate the micromechanics methods examined in this\nresearch. The literature on micromechanics is reviewed; rigorous bounds on\neffective elastic properties and analytical methods for determining effective\nproperties are investigated in the context of PBX 9501. Since detailed\nnumerical simulations of PBXs are computationally expensive, simple numerical\nhomogenization techniques have been sought. Two such techniques explored in\nthis research are the Generalized Method of Cells and the Recursive Cell\nMethod. Effective properties calculated using these methods have been compared\nwith finite element analyses and experimental data.", "category": "physics_comp-ph" }, { "text": "A comparative evaluation of three volume rendering libraries for the\n visualization of sheared thermal convection: Oceans play a big role in the nature of our planet, about $ 70 \\% $ of our\nearth is covered by water. 
Strong currents transport warm water around\nthe world, making life possible and allowing us to harvest their power to produce\nenergy. Yet oceans also have a far deadlier side: floods and tsunamis can\nannihilate whole cities and destroy life in seconds. Because the ocean covers such\na large fraction of the earth's surface, the earth's climate system is closely\nlinked to ocean currents; gaining scientific insight into the underlying\nmechanisms and effects through simulations is therefore of high importance. Deep ocean\ncurrents can be simulated by means of wall-bounded turbulent flow simulations.\nTo support these very large scale numerical simulations and enable\nscientists to interpret their output, we deploy an interactive visualization\nframework to study sheared thermal convection. The visualizations are based on\nvolume rendering of the temperature field. To address the needs of\nsupercomputer users with different hardware and software resources, we evaluate\ndifferent volume rendering implementations supported in the ParaView\nenvironment: two GPU-based solutions with Kitware's native volume mapper or\nNVIDIA's IndeX library, and a CPU-only Intel OSPRay-based implementation.", "category": "physics_comp-ph" }, { "text": "An immersed-boundary method for compressible viscous flow and its\n application in gas-kinetic BGK scheme: An immersed-boundary (IB) method is proposed and applied in the gas-kinetic\nBGK scheme to simulate incompressible/compressible viscous flows with\nstationary/moving boundaries. In the present method the ghost-cell technique is\nadopted to fulfill the boundary condition on the immersed boundary. A novel\nidea, \"local boundary determination\", is put forward to identify the ghost cells,\neach of which may have several different ghost-cell constructions corresponding\nto different boundary segments, thus eliminating the singularity of the ghost\ncell.
Furthermore, the so-called \"fresh-cell\" problem arising when implementing the IB\nmethod in moving-boundary simulations is resolved by a simple extrapolation in\ntime. The method is first applied in the gas-kinetic BGK scheme to simulate\nTaylor-Couette flow, where the second-order spatial accuracy of the method\nis validated and the \"super-convergence\" of the BGK scheme is observed. Then\nthe test cases of supersonic flow around a stationary cylinder, incompressible\nflow around an oscillating cylinder and compressible flow around a moving\nairfoil are conducted to verify the capability of the present method in\nsimulating compressible flows and handling moving boundaries.
Different\nstrategies, along with their potential merits and liabilities, are discussed\nand commented on.", "category": "physics_comp-ph" }, { "text": "Boundary Variation Diminishing (BVD) reconstruction: a new approach to\n improve Godunov scheme: This paper presents a new approach, so-called boundary variation diminishing\n(BVD), for reconstructions that minimize the discontinuities (jumps) at cell\ninterfaces in Godunov type schemes. It is motivated by the observation that\ndiminishing the jump at the cell boundary might effectively reduce the\ndissipation in numerical flux. Different from the existing practices which seek\nhigh-order polynomials within mesh cells while assuming discontinuities being\nalways at the cell interfaces, we proposed a new strategy that combines a\nhigh-order polynomial-based interpolation and a jump-like reconstruction that\nallows a discontinuity being partly represented within the mesh cell rather\nthan at the interface. It is shown that new schemes of high fidelity for both\ncontinuous and discontinuous solutions can be devised by the BVD guideline with\nproperly-chosen candidate reconstruction schemes. Excellent numerical results\nhave been obtained for both scalar and Euler conservation laws with\nsubstantially improved solution quality in comparison with the existing\nmethods. This work provides a simple and accurate alternative of great\npractical significance to the current Godunov paradigm which overly pursues the\nsmoothness within mesh cell under the questionable premiss that discontinuities\nonly appear at cell interfaces.", "category": "physics_comp-ph" }, { "text": "Comparison of polynomial approximations to speed up planewave-based\n quantum Monte Carlo calculations: The computational cost of quantum Monte Carlo (QMC) calculations of realistic\nperiodic systems depends strongly on the method of storing and evaluating the\nmany-particle wave function. Previous work [A. J. Williamson et al., Phys. Rev.\nLett. 
87, 246406 (2001); D. Alf\\`e and M. J. Gillan, Phys. Rev. B 70, 161101\n(2004)] has demonstrated the reduction of the O(N^3) cost of evaluating the\nSlater determinant with planewaves to O(N^2) using localized basis functions.\nWe compare four polynomial approximations as basis functions -- interpolating\nLagrange polynomials, interpolating piecewise-polynomial-form (pp-) splines,\nand basis-form (B-) splines (interpolating and smoothing). All these basis\nfunctions provide a similar speedup relative to the planewave basis. The\npp-splines have eight times the memory requirement of the other methods. To\ntest the accuracy of the basis functions, we apply them to the ground state\nstructures of Si, Al, and MgO. The polynomial approximations differ in accuracy\nmost strongly for MgO, and smoothing B-splines most closely reproduce the\nplanewave value of the variational Monte Carlo energy. Using separate\napproximations for the Laplacian of the orbitals increases the accuracy\nsufficiently to justify the increased memory requirement, making smoothing\nB-splines, with a separate approximation for the Laplacian, the preferred choice\nfor approximating planewave-represented orbitals in QMC calculations.
This opens up the intriguing\npossibility to study a gamut of properties of light elements and potentially\nmaterial mixtures over a substantial part of the warm dense matter regime, with\ndirect relevance for astrophysics, material science, and inertial confinement\nfusion research.", "category": "physics_comp-ph" }, { "text": "Phase-field modeling of multivariant martensitic transformation at\n finite-strain: computational aspects and large-scale finite-element\n simulations: Large-scale 3D martensitic microstructure evolution problems are studied\nusing a finite-element discretization of a finite-strain phase-field model. The\nmodel admits an arbitrary crystallography of transformation and arbitrary\nelastic anisotropy of the phases, and incorporates Hencky-type elasticity, a\npenalty-regularized double-obstacle potential, and viscous dissipation. The\nfinite-element discretization of the model is performed in Firedrake and relies\non the PETSc solver library. The large systems of linear equations arising are\nefficiently solved using GMRES and a geometric multigrid preconditioner with a\ncarefully chosen relaxation. The modeling capabilities are illustrated through\na 3D simulation of the microstructure evolution in a pseudoelastic CuAlNi\nsingle crystal during nano-indentation, with all six orthorhombic martensite\nvariants taken into account. Robustness and a good parallel scaling performance\nhave been demonstrated, with the problem size reaching 150 million degrees of\nfreedom.", "category": "physics_comp-ph" }, { "text": "A unified gas-kinetic particle method for frequency-dependent radiative\n transfer equations with isotropic scattering process on unstructured mesh: In this paper, we extend the unified kinetic particle (UGKP) method to the\nfrequency-dependent radiative transfer equation with both absorption-emission\nand scattering processes. 
The extended UGKP method not only captures the\ndiffusion and free-transport limits, but also provides a smooth transition in\nphysical and frequency space in the regime between these two limits. The\nproposed scheme is asymptotic-preserving, regime-adaptive,\nand entropy-preserving, which makes it an accurate and efficient scheme for the\nsimulation of multiscale photon transport problems. The scheme is\nconstructed as a coupled evolution of the macroscopic energy equation and the\nmicroscopic radiant intensity equation, where the numerical flux in the macroscopic\nenergy equation and the closure in the microscopic radiant intensity equation are\nconstructed based on the integral solution. Both numerical dissipation and\ncomputational complexity are well controlled, especially in the optically thick\nregime. A 2D multi-threaded code on a general unstructured mesh has been\ndeveloped. Several numerical tests have been performed to verify the numerical\nscheme and code, covering a wide range of flow regimes. The numerical scheme\nand code that we developed are in high demand and widely applicable in\nhigh-energy-density engineering applications.
Our\nresults allow us to explain most of the experimental observations of changes in\nmobility and annihilation rates in the noble gases and liquids, as well as to make\npredictions for future experiments. Quantities which are currently inaccessible\nto experiment, such as positron mobilities, can be obtained from our theory.\nUnlike other theoretical approaches to localization, the outputs of our theory\ncan be applied in non-equilibrium transport simulations, and an extension to the\ndetermination of waiting-time distributions for localized states is\nstraightforward.
They allow the\nwall conductance ratio to be specified to any desired value based on the\nactual conductivities and length scales of the container and the wall\nthicknesses. One approach is constructed using a link-based formulation\ninvolving a weighted combination of the bounce-back and anti-bounce-back of the\ndistribution function for the magnetic field, while the other approach involves an\non-node moment-based implementation. Moreover, their extensions to representing\nmoving walls are also presented. Numerical validations of the boundary schemes\nfor body-force- or shear-driven MHD flows over a wide range of values of the\nwall conductance ratio, together with their second-order grid convergence, are\ndemonstrated.
Improvements to previous proposals are\npresented, and their performance is compared in the context of a real-space\ndensity functional theory code. Two basic methodologies are followed:\ncalculation of correction terms, and imposition of a cut-off to the Coulomb\npotential. We conclude that these methods can be safely applied to finite or\naperiodic systems with a reasonable control of speed and accuracy.", "category": "physics_comp-ph" }, { "text": "Full-Spin-Wave-Scaled Finite Element Stochastic Micromagnetism:\n Mesh-Independent FUSSS LLG Simulations of Ferromagnetic Resonance and\n Reversal: In this paper, we address the problem that standard stochastic\nLandau-Lifshitz-Gilbert (sLLG) simulations typically produce results that show\nunphysical mesh-size dependence. The root cause of this problem is that the\neffects of spin wave fluctuations are ignored in sLLG. We propose to represent\nthe effect of these fluctuations by a \"FUll-Spinwave-Scaled Stochastic LLG\", or\nFUSSS LLG method. In FUSSS LLG, the intrinsic parameters of the sLLG\nsimulations are first scaled by scaling factors that integrate out the spin\nwave fluctuations up to the mesh size, and the sLLG simulation is then\nperformed with these scaled parameters. We developed FUSSS LLG by studying the\nFerromagnetic Resonance (FMR) in Nd$_2$Fe$_{14}$B cubes. The nominal scaling\ngreatly reduced the mesh size dependence relative to sLLG. We further\ndiscovered that adjusting one scaling exponent by less than 10% delivered fully\nmesh-size-independent results for the FMR peak. We then performed three tests\nand validations of our FUSSS LLG with this modified scaling. 1) We studied the\nsame FMR but with magnetostatic fields included. 2) We simulated the total\nmagnetization of the Nd$_2$Fe$_{14}$B cube. 3) We studied the effective,\ntemperature- and sweeping rate-dependent coercive field of the cubes. 
In all\nthree cases we found that FUSSS LLG delivered essentially mesh-size-independent\nresults, which tracked the theoretical expectations better than unscaled sLLG.\nMotivated by these successful validations, we propose that FUSSS LLG provides\nmarked, qualitative progress towards accurate, high precision modeling of\nmicromagnetics in hard, permanent magnets.", "category": "physics_comp-ph" }, { "text": "PArallel, Robust, Interface Simulator (PARIS): Paris (PArallel, Robust, Interface Simulator) is a finite volume code for\nsimulations of immiscible multifluid or multiphase flows. It is based on the\n\"one-fluid\" formulation of the Navier-Stokes equations where different fluids\nare treated as one material with variable properties, and surface tension is\nadded as a singular interface force. The fluid equations are solved on a\nregular structured staggered grid using an explicit projection method with a\nfirst-order or second-order time integration scheme. The interface separating\nthe different fluids is tracked by a Front-Tracking (FT) method, where the\ninterface is represented by connected marker points, or by a Volume-of-Fluid\n(VOF) method, where the marker function is advected directly on the fixed grid.\nParis is written in Fortran95/2002 and parallelized using MPI and domain\ndecomposition. It is based on several earlier FT or VOF codes such as Ftc3D,\nSurfer or Gerris. These codes and similar ones, as well as Paris, have been\nused to simulate a wide range of multifluid and multiphase flows.", "category": "physics_comp-ph" }, { "text": "Divergence-Free Magnetohydrodynamics on Conformally Moving, Adaptive\n Meshes Using a Vector Potential Method: We present a new method for evolving the equations of magnetohydrodynamics\n(both Newtonian and relativistic) that is capable of maintaining a\ndivergence-free magnetic field ($\\nabla \\cdot \\mathbf{B} = 0$) on adaptively\nrefined, conformally moving meshes. 
The method relies on evolving the magnetic\nvector potential and then using it to reconstruct the magnetic fields. The\nadvantage of this approach is that the vector potential is not subject to a\nconstraint equation in the same way the magnetic field is, and so can be\nrefined and moved in a straightforward way. We test this new method against a\nwide array of problems from simple Alfven waves on a uniform grid to general\nrelativistic MHD simulations of black hole accretion on a nested,\nspherical-polar grid. We find that the code produces accurate results and in\nall cases maintains a divergence-free magnetic field to machine precision.", "category": "physics_comp-ph" }, { "text": "High Rayleigh number variational multiscale large eddy simulations of\n Rayleigh-B\u00e9nard Convection: The variational multiscale (VMS) formulation is used to develop\nresidual-based VMS large eddy simulation (LES) models for Rayleigh-B\\'{e}nard\nconvection. The resulting model is a mixed model that incorporates the VMS\nmodel and an eddy viscosity model. The Wall-Adapting Local Eddy-viscosity\n(WALE) model is used as the eddy viscosity model in this work. The new LES\nmodels were implemented in the finite element code Drekar. Simulations are\nperformed using continuous, piecewise linear finite elements. The simulations\nranged from $Ra = 10^6$ to $Ra = 10^{14}$ and were conducted at $Pr = 1$ and\n$Pr = 7$. Two domains were considered: a two-dimensional domain of aspect ratio\n2 with a fluid confined between two parallel plates and a three-dimensional\ncylinder of aspect ratio $1/4$. The Nusselt number from the VMS results is\ncompared against three dimensional direct numerical simulations and\nexperiments. 
In all cases, the VMS results are in good agreement with existing\nliterature.", "category": "physics_comp-ph" }, { "text": "Symbolic computation of the Hartree-Fock energy from a chiral EFT\n three-nucleon interaction at N$^2$LO: We present the first of a two-part Mathematica notebook collection that\nimplements a symbolic approach for the application of the density matrix\nexpansion (DME) to the Hartree-Fock (HF) energy from a chiral effective field\ntheory (EFT) three-nucleon interaction at N$^2$LO. The final output from the\nnotebooks is a Skyrme-like energy density functional that provides a\nquasi-local approximation to the nonlocal HF energy. In this paper, we discuss\nthe derivation of the HF energy and its simplification in terms of the\nscalar/vector-isoscalar/isovector parts of the one-body density matrix.\nFurthermore, a set of steps is described and illustrated on how to extend the\napproach to other three-nucleon interactions.", "category": "physics_comp-ph" }, { "text": "A Constrained Transport Method for the Solution of the Resistive\n Relativistic MHD Equations: We describe a novel Godunov-type numerical method for solving the equations\nof resistive relativistic magnetohydrodynamics. In the proposed approach, the\nspatial components of both magnetic and electric fields are located at zone\ninterfaces and are evolved using the constrained transport formalism. Direct\napplication of Stokes' theorem to Faraday's and Ampere's laws ensures that the\nresulting discretization is divergence-free for the magnetic field and\ncharge-conserving for the electric field. Hydrodynamic variables retain,\ninstead, the usual zone-centred representation commonly adopted in\nfinite-volume schemes. Temporal discretization is based on Runge-Kutta\nimplicit-explicit (IMEX) schemes in order to resolve the temporal scale\ndisparity introduced by the stiff source term in Ampere's law. 
The implicit\nstep is accomplished by means of an improved and more efficient Newton-Broyden\nmultidimensional root-finding algorithm. The explicit step relies on a\nmultidimensional Riemann solver to compute the line-averaged electric and\nmagnetic fields at zone edges and it employs a one-dimensional Riemann solver\nat zone interfaces to update zone-centred hydrodynamic quantities. For the\nlatter, we introduce a five-wave solver based on the frozen limit of the\nrelaxation system whereby the solution to the Riemann problem can be decomposed\ninto an outer Maxwell solver and an inner hydrodynamic solver. A number of\nnumerical benchmarks demonstrate that our method is superior in stability and\nrobustness to the more popular charge-conserving divergence cleaning approach\nwhere both primary electric and magnetic fields are zone-centered. In addition,\nthe employment of a less diffusive Riemann solver noticeably improves the\naccuracy of the computations.", "category": "physics_comp-ph" }, { "text": "U-net architectures for fast prediction of incompressible laminar flows: Machine learning is a popular tool that is being applied to many domains,\nfrom computer vision to natural language processing. It is not long ago that\nits use was extended to physics, but its capabilities remain to be accurately\ncontoured. In this paper, we are interested in the prediction of 2D velocity\nand pressure fields around arbitrary shapes in laminar flows using supervised\nneural networks. To this end, a dataset composed of random shapes is built\nusing Bezier curves, each shape being labeled with its pressure and velocity\nfields by solving Navier-Stokes equations using a CFD solver. 
Then, several\nU-net architectures are trained on the latter dataset, and their predictive\nefficiency is assessed on unseen shapes, using ad hoc error functions.", "category": "physics_comp-ph" }, { "text": "Reproducibility, accuracy and performance of the Feltor code and library\n on parallel computer architectures: Feltor is a modular and free scientific software package. It allows\ndeveloping platform independent code that runs on a variety of parallel\ncomputer architectures ranging from laptop CPUs to multi-GPU distributed memory\nsystems. Feltor consists of both a numerical library and a collection of\napplication codes built on top of the library. Its main target are two- and\nthree-dimensional drift- and gyro-fluid simulations with discontinuous Galerkin\nmethods as the main numerical discretization technique. We observe that\nnumerical simulations of a recently developed gyro-fluid model produce\nnon-deterministic results in parallel computations. First, we show how we\nrestore accuracy and bitwise reproducibility algorithmically and\nprogrammatically. In particular, we adopt an implementation of the exactly\nrounded dot product based on long accumulators, which avoids accuracy losses\nespecially in parallel applications. However, reproducibility and accuracy\nalone fail to indicate correct simulation behaviour. In fact, in the physical\nmodel slightly different initial conditions lead to vastly different end\nstates. This behaviour translates to its numerical representation. Pointwise\nconvergence, even in principle, becomes impossible for long simulation times.\nIn a second part, we explore important performance tuning considerations. We\nidentify latency and memory bandwidth as the main performance indicators of our\nroutines. Based on these, we propose a parallel performance model that predicts\nthe execution time of algorithms implemented in Feltor and test our model on a\nselection of parallel hardware architectures. 
We are able to predict the\nexecution time with a relative error of less than 25% for problem sizes between\n0.1 and 1000 MB. Finally, we find that the product of latency and bandwidth\ngives a minimum array size per compute node to achieve a scaling efficiency\nabove 50% (both strong and weak).", "category": "physics_comp-ph" }, { "text": "Green's function-based control-oriented modeling of electric field for\n dielectrophoresis: In this paper, we propose a novel approach to obtaining a reliable and simple\nmathematical model of a dielectrophoretic force for model-based feedback\nmicromanipulation. Any such model is expected to sufficiently accurately relate\nthe voltages (electric potentials) applied to the electrodes to the resulting\nforces exerted on microparticles at given locations in the workspace. This\nmodel also has to be computationally simple enough to be used in real time as\nrequired by model-based feedback control. Most existing models involve solving\ntwo- or three-dimensional mixed boundary value problems. As such, they are\nusually analytically intractable and have to be solved numerically instead. A\nnumerical solution is, however, infeasible in real time, hence such models are\nnot suitable for feedback control. We present a novel approximation of the\nboundary value data for which a closed-form analytical solution is feasible; we\nsolve a mixed boundary value problem numerically off-line only once, and based\non this solution we approximate the mixed boundary conditions by Dirichlet\nboundary conditions. This way we get an approximated boundary value problem\nallowing the application of the analytical framework of Green's functions. 
The closed-form\nanalytical solution thus obtained is amenable to real-time use and\nclosely matches the numerical solution of the original exact problem.", "category": "physics_comp-ph" }, { "text": "Learning molecular energies using localized graph kernels: Recent machine learning methods make it possible to model potential energy of\natomic configurations with chemical-level accuracy (as calculated from\nab-initio calculations) and at speeds suitable for molecular dynamics\nsimulation. Best performance is achieved when the known physical constraints\nare encoded in the machine learning models. For example, the atomic energy is\ninvariant under global translations and rotations, and it is also invariant to\npermutations of same-species atoms. Although simple to state, these symmetries\nare complicated to encode into machine learning algorithms. In this paper, we\npresent a machine learning approach based on graph theory that naturally\nincorporates translation, rotation, and permutation symmetries. Specifically,\nwe use a random walk graph kernel to measure the similarity of two adjacency\nmatrices, each of which represents a local atomic environment. This Graph\nApproximated Energy (GRAPE) approach is flexible and admits many possible\nextensions. We benchmark a simple version of GRAPE by predicting atomization\nenergies on a standard dataset of organic molecules.", "category": "physics_comp-ph" }, { "text": "Langevin theory of fluctuations in the discrete Boltzmann equation: The discrete Boltzmann equation for both the ideal and a non-ideal fluid is\nextended by adding Langevin noise terms in order to incorporate the effects of\nthermal fluctuations. After casting the fluctuating discrete Boltzmann equation\nin a form appropriate to the Onsager-Machlup theory of linear fluctuations, the\nstatistical properties of the noise are determined by invoking a\nfluctuation-dissipation theorem at the kinetic level. 
By integrating the\nfluctuating discrete Boltzmann equation, the fluctuating lattice Boltzmann\nequation is obtained, which provides an efficient way to solve the equations of\nfluctuating hydrodynamics for ideal and non-ideal fluids. Application of the\nframework to a generic force-based non-ideal fluid model leads to ideal\ngas-type thermal noise. Simulation results indicate proper thermalization of\nall degrees of freedom.", "category": "physics_comp-ph" }, { "text": "Ultra-large-scale electronic structure theory and numerical algorithm: This article is composed of two parts. In the first part (Sec. 1), the\nultra-large-scale electronic structure theory is reviewed for (i) its\nfundamental numerical algorithm and (ii) its role in nano-material science. The\nsecond part (Sec. 2) is devoted to the mathematical foundation of the\nlarge-scale electronic structure theory and its numerical aspects.", "category": "physics_comp-ph" }, { "text": "Kinetic energy densities based on the fourth order gradient expansion:\n performance in different classes of materials and improvement via machine\n learning: We study the performance of fourth-order gradient expansions of the kinetic\nenergy density (KED) in semi-local kinetic energy functionals depending on the\ndensity-dependent variables. The formal fourth-order expansion is convergent\nfor periodic systems and small molecules but does not improve over the\nsecond-order expansion (Thomas-Fermi term plus one-ninth of von Weizs\\\"acker\nterm). Linear fitting of the expansion coefficients somewhat improves on the\nformal expansion. The tuning of the fourth order expansion coefficients allows\nfor better reproducibility of the Kohn-Sham kinetic energy density than the tuning\nof the second-order expansion coefficients alone. 
We demonstrate that a much\nmore accurate match with the Kohn-Sham kinetic energy density can be achieved\nby using neural networks trained with the terms of the 4th order expansion as\ndensity-dependent variables. We obtain ultra-low fitting errors\nwithout overfitting. Small single-hidden-layer neural networks can provide good\naccuracy in separate KED fits of each compound, while for joint fitting of the KEDs\nof multiple compounds multiple hidden layers were required to achieve good fit\nquality. The critical issue of data distribution is highlighted. We also show\nthe critical role of pseudopotentials in the performance of the expansion:\nnumeric instabilities arise when some pseudopotentials cause a too rapid decay\nof the valence density at the nucleus.", "category": "physics_comp-ph" }, { "text": "Geometry and scaling of tangled vortex lines in three-dimensional random\n wave fields: The short- and long-scale behaviour of tangled wave vortices (nodal lines) in\nrandom three-dimensional wave fields is studied via computer experiment. The\nzero lines are tracked in numerical simulations of periodic superpositions of\nthree-dimensional complex plane waves. The probability distributions of local\ngeometric quantities such as curvature and torsion are compared to previous\nanalytical and new Monte Carlo results from the isotropic Gaussian random wave\nmodel. We further examine the scaling and self-similarity of tangled wave\nvortex lines individually and in the bulk, drawing comparisons with other\nphysical systems of tangled filaments.", "category": "physics_comp-ph" }, { "text": "Raman Spectra of Titanium Carbide MXene from Machine-Learning Force\n Field Molecular Dynamics: MXenes represent one of the largest classes of 2D materials, with promising\napplications in many fields and properties tunable by the surface group\ncomposition. 
Raman spectroscopy is expected to yield rich information about the\nsurface composition, but the interpretation of measured spectra has proven\nchallenging. The interpretation is usually done via comparison to simulated\nspectra, but there are large discrepancies between the experimental and earlier\nsimulated spectra. In this work, we develop a computational approach to\nsimulate Raman spectra of complex materials that combines machine-learning\nforce-field molecular dynamics and reconstruction of Raman tensors via\nprojection to pristine system modes. The approach can account for the effects\nof finite temperature, mixed surfaces, and disorder. We apply our approach to\nsimulate Raman spectra of titanium carbide MXene and show that all these\neffects must be included in order to properly reproduce the experimental\nspectra, in particular the broad features. We discuss the origin of the peaks\nand how they evolve with surface composition, which can then be used to\ninterpret experimental results.", "category": "physics_comp-ph" }, { "text": "A Novel Symmetric Four Dimensional Polytope Found Using Optimization\n Strategies Inspired by Thomson's Problem of Charges on a Sphere: Inspired by, and using, optimization methods derived from classical\nthree-dimensional electrostatics, we report a novel, beautifully symmetric\nfour-dimensional polytope with 80 vertices. We also describe how the\nmethod used to find this symmetric polytope, and related methods, can\npotentially be used to find good examples for the kissing and packing problems\nin D dimensions.", "category": "physics_comp-ph" }, { "text": "Comprehensive Molecular Representation from Equivariant Transformer: The tradeoff between precision and performance in molecular simulations can\nnowadays be addressed by machine-learned force fields (MLFF), which combine\n\\textit{ab initio} accuracy with force field numerical efficiency. 
In contrast to conventional force fields, however, incorporating the relevant electronic\ndegrees of freedom into MLFFs becomes important. Here, we implement an\nequivariant transformer that embeds molecular net charge and spin state without\nadditional neural network parameters. The model trained on a singlet/triplet\nnon-correlated \\ce{CH2} dataset can identify different spin states and shows\nstate-of-the-art extrapolation capability. Therein, self-attention sensibly\ncaptures non-local effects, which, as we show, can be finely tuned over the\nnetwork hyper-parameters. We indeed found that Softmax activation functions\nutilised in the self-attention mechanism of graph networks outperformed\nReLU-like functions in prediction accuracy. Increasing the attention\ntemperature from $\\tau = \\sqrt{d}$ to $\\sqrt{2d}$ further improved the\nextrapolation capability, indicating a weighty role of nonlocality.\nAdditionally, a weight initialisation method was proposed that sensibly\naccelerated the training process.", "category": "physics_comp-ph" }, { "text": "A Stochastic Finite Element Model for the Dynamics of Globular\n Macromolecules: We describe a novel coarse-grained simulation method for modelling the\ndynamics of globular macromolecules, such as proteins. The macromolecule is\ntreated as a continuum that is subject to thermal fluctuations. The model\nincludes a non-linear treatment of elasticity and viscosity with thermal noise\nthat is solved using finite element analysis. We have validated the method by\ndemonstrating that the model provides average kinetic and potential energies\nthat are in agreement with the classical equipartition theorem. In addition, we\nhave performed Fourier analysis on the simulation trajectories obtained for a\nseries of linear beams to confirm that the correct average energies are present\nin the first two Fourier bending modes. 
We have then used the new modelling\nmethod to simulate the thermal fluctuations of a representative protein over\n500ns timescales. Using reasonable parameters for the material properties, we\nhave demonstrated that the overall deformation of the biomolecule is consistent\nwith the results obtained for proteins in general from atomistic molecular\ndynamics simulations.", "category": "physics_comp-ph" }, { "text": "Convergence issues in derivatives of Monte Carlo null-collision integral\n formulations: a solution: When a Monte Carlo algorithm is used to evaluate a physical observable A, it\nis possible to slightly modify the algorithm so that it evaluates\nsimultaneously A and the derivatives $\\partial$ $\\varsigma$ A of A with respect\nto each problem-parameter $\\varsigma$. The principle is the following: Monte\nCarlo considers A as the expectation of a random variable, this expectation is\nan integral, this integral can be derivated as function of the\nproblem-parameter to give a new integral, and this new integral can in turn be\nevaluated using Monte Carlo. The two Monte Carlo computations (of A and\n$\\partial$ $\\varsigma$ A) are simultaneous when they make use of the same\nrandom samples, i.e. when the two integrals have the exact same structure. It\nwas proven theoretically that this was always possible, but nothing insures\nthat the two estimators have the same convergence properties: even when a large\nenough sample-size is used so that A is evaluated very accurately, the\nevaluation of $\\partial$ $\\varsigma$ A using the same sample can remain\ninaccurate. We discuss here such a pathological example: null-collision\nalgorithms are very successful when dealing with radiative transfer in\nheterogeneous media, but they are sources of convergence difficulties as soon\nas sensitivity-evaluations are considered. 
We analyse these\nconvergence difficulties theoretically and propose an alternative solution.", "category": "physics_comp-ph" }, { "text": "A method for solving systems of non-linear differential equations with\n moving singularities: We present a method for solving a class of initial valued, coupled,\nnon-linear differential equations with `moving singularities' subject to some\nsubsidiary conditions. We show that such singularities can be\nadequately treated by establishing certain `moving' jump conditions across\nthem. We show how a first integral of the differential equations, if available,\ncan also be used for checking the accuracy of the numerical solution.", "category": "physics_comp-ph" }, { "text": "Stochastic Runge-Kutta Software Package for Stochastic Differential\n Equations: By applying a technique for constructing stochastic models of multistep\nprocesses, a range of models implemented as self-consistent differential\nequations was obtained. These are partial differential equations (the master\nequation, the Fokker--Planck equation) and stochastic differential equations\n(the Langevin equation). However, analytical methods do not always allow these\nequations to be studied adequately. It is proposed to study these equations\nwith a combined analytical and numerical approach, in which the numerical part\nis realized within the framework of symbolic computation. Stochastic\nRunge--Kutta methods are recommended for the numerical study of stochastic\ndifferential equations in Langevin form. Under this approach, a software\npackage based on the symbolic computation system Sage is developed. For model\nverification, logarithmic walks and the two-dimensional Black--Scholes model\nare used. A stochastic \"predator--prey\" type model serves as an illustration. 
The utility of the combined\nnumerical-analytical approach is demonstrated.", "category": "physics_comp-ph" }, { "text": "An Alternative Method to Implement Contact Angle Boundary Condition on\n Immersed Surfaces for Phase-Field Simulations: In this paper, we propose an alternative approach to implement the contact\nangle boundary condition on immersed surfaces for phase-field simulations of\ntwo-phase flows using the Cahn-Hilliard equation on a Cartesian mesh. This\nsimple and effective method was inspired by previous works on the geometric\nformulation of the wetting boundary condition. In two dimensions, by making\nfull use of the hyperbolic tangent profile of the order parameter, we were able\nto obtain its unknown value at a ghost point from the information at only one\npoint in the fluid. This is in contrast with previous approaches using\ninterpolations involving several points. The special feature allows this method\nto be easily implemented on immersed surfaces (including curved ones) that cut\nthrough the grid lines. It is verified through the study of two examples: (1)\nthe shape of a drop on a circular cylinder with different contact angles; (2)\nthe spreading of a drop on an embedded inclined wall with a given contact\nangle.", "category": "physics_comp-ph" }, { "text": "A simple alteration of the peridynamics correspondence principle to\n eliminate zero-energy deformation: We look for an enhancement of the correspondence model of peridynamics with a\nview to eliminating the zero-energy deformation modes. Since the non-local\nintegral definition of the deformation gradient underlies the problem, we\ninitially look for a remedy by introducing a class of localizing corrections to\nthe integral. Since the strategy is found to afford only a reduction, and not\ncomplete elimination, of the oscillatory zero-energy deformation, we propose in\nthe sequel an alternative approach based on the notion of sub-horizons. 
A most\nuseful feature of the last proposal is that the setup, whilst providing the\nsolution with the necessary stability, deviates only marginally from the\noriginal correspondence formulation. We also undertake a set of numerical\nsimulations that attest to the remarkable efficacy of the sub-horizon based\nmethodology.", "category": "physics_comp-ph" }, { "text": "Estimating relative diffusion from 3D micro-CT images using CNNs: In the past several years, convolutional neural networks (CNNs) have proven\ntheir capability to predict characteristic quantities in porous media research\ndirectly from pore-space geometries. Due to the frequently observed significant\nreduction in computation time in comparison to classical computational methods,\nbulk parameter prediction via CNNs is especially compelling, e.g. for effective\ndiffusion. While the current literature is mainly focused on fully saturated\nporous media, the partially saturated case is also of high interest. Due to the\nqualitatively different and more complex geometries of the domain available for\ndiffusive transport present in this case, standard CNNs tend to lose robustness\nand accuracy with lower saturation rates. In this paper, we demonstrate the\nability of CNNs to perform predictions of relative diffusion directly from full\npore-space geometries. As such, our CNN conveniently fuses diffusion prediction\nand a well-established morphological model which describes phase distributions\nin partially saturated porous media.", "category": "physics_comp-ph" }, { "text": "A Deficiency Problem of the Least Squares Finite Element Method for\n Solving Radiative Transfer in Strongly Inhomogeneous Media: The accuracy and stability of the least squares finite element method (LSFEM)\nand the Galerkin finite element method (GFEM) for solving radiative transfer in\nhomogeneous and inhomogeneous media are studied theoretically via a frequency\ndomain technique. 
The theoretical result confirms the traditional understanding\nof the superior stability of the LSFEM as compared to the GFEM. However, it is\ndemonstrated numerically and proved theoretically that the LSFEM will suffer a\ndeficiency problem for solving radiative transfer in media with strong\ninhomogeneity. This deficiency problem of the LSFEM will cause a severe\naccuracy degradation, which compromises too much of the performance of the\nLSFEM and makes it not a good choice to solve radiative transfer in strongly\ninhomogeneous media. It is also theoretically proved that the LSFEM is\nequivalent to a second order form of radiative transfer equation discretized by\nthe central difference scheme.", "category": "physics_comp-ph" }, { "text": "Generalized network modeling of capillary-dominated two-phase flow: We present a generalized network model for simulating capillary-dominated\ntwo-phase flow through porous media at the pore scale. Three-dimensional images\nof the pore space are discretized using a generalized network -- described in a\ncompanion paper (https://doi.org/10.1103/PhysRevE.96.013312) -- that comprises\npores that are divided into smaller elements called half-throats and\nsubsequently into corners. Half-throats define the connectivity of the network\nat the coarsest level, connecting each pore to half-throats of its neighboring\npores from their narrower ends, while corners define the connectivity of pore\ncrevices. The corners are discretized at different levels for accurate\ncalculation of entry pressures, fluid volumes and flow conductivities that are\nobtained using direct simulation of flow on the underlying image. This paper\ndiscusses the two-phase flow model that is used to compute the averaged flow\nproperties of the generalized network, including relative permeability and\ncapillary pressure. 
We validate the model using direct finite-volume two-phase\nflow simulations on synthetic geometries, and then present a comparison of the\nmodel predictions with a conventional pore-network model and experimental\nmeasurements of relative permeability in the literature.", "category": "physics_comp-ph" }, { "text": "An adaptive Cartesian embedded boundary approach for fluid simulations\n of two- and three-dimensional low temperature plasma filaments in complex\n geometries: We review a scalable two- and three-dimensional computer code for\nlow-temperature plasma simulations in multi-material complex geometries. Our\napproach is based on embedded boundary (EB) finite volume discretizations of\nthe minimal fluid-plasma model on adaptive Cartesian grids, extended to also\naccount for charging of insulating surfaces. We discuss the spatial and\ntemporal discretization methods, and show that the resulting overall method is\nsecond order convergent, monotone, and conservative (for smooth solutions).\nWeak scalability with parallel efficiencies over 70\\% are demonstrated up to\n8192 cores and more than one billion cells. We then demonstrate the use of\nadaptive mesh refinement in multiple two- and three-dimensional simulation\nexamples at modest cores counts. The examples include two-dimensional\nsimulations of surface streamers along insulators with surface roughness; fully\nthree-dimensional simulations of filaments in experimentally realizable\npin-plane geometries, and three-dimensional simulations of positive plasma\ndischarges in multi-material complex geometries. The largest computational\nexample uses up to $800$ million mesh cells with billions of unknowns on $4096$\ncomputing cores. 
Our use of computer-aided design (CAD) and constructive solid\ngeometry (CSG) combined with capabilities for parallel computing offers\npossibilities for performing three-dimensional transient plasma-fluid\nsimulations, also in multi-material complex geometries at moderate pressures\nand comparatively large scale.", "category": "physics_comp-ph" }, { "text": "Metadynamics with Discriminants: a Tool for Understanding Chemistry: We introduce an extension of a recently published method\\cite{Mendels2018} to\nobtain low-dimensional collective variables for studying multiple states free\nenergy processes in chemical reactions. The only information needed is a\ncollection of simple statistics of the equilibrium properties of the reactants\nand product states. No information on the reaction mechanism has to be given.\nThe method allows studying a large variety of chemical reactivity problems\nincluding multiple reaction pathways, isomerization, stereo- and\nregiospecificity. We applied the method to two fundamental organic chemical\nreactions. First we study the \\ce{S_N2} nucleophilic substitution reaction of a\n\\ce{Cl} in \\ce{CH_2 Cl_2} leading to an understanding of the kinetic origin of\nthe chirality inversion in such processes. Subsequently, we tackle the problem\nof regioselectivity in the hydrobromination of propene revealing that the\nnature of empirical observations such as the Markovinikov's rules lies in the\nchemical kinetics rather than the thermodynamic stability of the products.", "category": "physics_comp-ph" }, { "text": "Minimax rational approximation of the Fermi-Dirac distribution: Accurate rational approximations of the Fermi-Dirac distribution are a useful\ncomponent in many numerical algorithms for electronic structure calculations.\nThe best known approximations use $O( \\log (\\beta \\Delta) \\log\n(\\epsilon^{-1}))$ poles to achieve an error tolerance $\\epsilon$ at temperature\n$\\beta^{-1}$ over an energy interval $\\Delta$. 
We apply minimax approximation\nto reduce the number of poles by a factor of four and replace $\\Delta$ with\n$\\Delta_{\\mathrm{occ}}$, the occupied energy interval. This is particularly\nbeneficial when $\\Delta \\gg \\Delta_{\\mathrm{occ}}$, such as in electronic\nstructure calculations that use a large basis set.", "category": "physics_comp-ph" }, { "text": "Physics-informed neural networks for solving forward and inverse flow\n problems via the Boltzmann-BGK formulation: In this study, we employ physics-informed neural networks (PINNs) to solve\nforward and inverse problems via the Boltzmann-BGK formulation (PINN-BGK),\nenabling PINNs to model flows in both the continuum and rarefied regimes. In\nparticular, the PINN-BGK is composed of three sub-networks, i.e., the first for\napproximating the equilibrium distribution function, the second for\napproximating the non-equilibrium distribution function, and the third one for\nencoding the Boltzmann-BGK equation as well as the corresponding\nboundary/initial conditions. By minimizing the residuals of the governing\nequations and the mismatch between the predicted and provided boundary/initial\nconditions, we can approximate the Boltzmann-BGK equation for both continuous\nand rarefied flows. For forward problems, the PINN-BGK is utilized to solve\nvarious benchmark flows given boundary/initial conditions, e.g., Kovasznay\nflow, Taylor-Green flow, cavity flow, and micro Couette flow for Knudsen number\nup to 5. For inverse problems, we focus on rarefied flows in which accurate\nboundary conditions are difficult to obtain. We employ the PINN-BGK to infer\nthe flow field in the entire computational domain given a limited number of\ninterior scattered measurements on the velocity with unknown boundary\nconditions. 
Results for the two-dimensional micro Couette and micro cavity\nflows with Knudsen numbers ranging from 0.1 to 10 indicate that the PINN-BGK\ncan infer the velocity field in the entire domain with good accuracy. Finally,\nwe also present some results on using transfer learning to accelerate the\ntraining process. Specifically, we can obtain a three-fold speedup compared to\nthe standard training process (e.g., Adam plus L-BFGS-B) for the\ntwo-dimensional flow problems considered in our work.", "category": "physics_comp-ph" }, { "text": "Penalty and auxiliary wave function methods for electronic Excitation in\n neural network variational Monte Carlo: This study explores the application of neural network variational Monte Carlo\n(NN-VMC) for the computation of low-lying excited states in molecular systems.\nOur focus lies on the implementation and evaluation of two distinct\nmethodologies, the penalty method and a novel modification of the auxiliary\nwave function (AW) method, within the framework of the FermiNet-based NN-VMC\npackage. Importantly, this specific application has not been previously\nreported. Our investigation advocates for the efficacy of the modified AW\nmethod, emphasizing its superior robustness when compared to the penalty\nmethod. This methodological advancement introduces a valuable tool for the\nscientific community, offering a distinctive approach to target low-lying\nexcited states. We anticipate that the modified AW method will garner interest\nwithin the research community, serving as a complementary and robust\nalternative to existing techniques. 
Moreover, this contribution enriches the\nongoing development of various neural network ans\u00e4tze, further expanding the\ntoolkit available for the accurate exploration of excited states in molecular\nsystems.", "category": "physics_comp-ph" }, { "text": "An Effective-Current Approach for Hall\u00e9n's Equation in Center-Fed\n Dipole Antennas with Finite Conductivity: We propose a remedy for the unphysical oscillations arising in the current\ndistribution of carbon nanotube and imperfectly conducting antennas\ncenter-driven by a delta-function generator when the approximate kernel is\nused. We do so by formulating an effective current, which was studied in detail\nin a 2011 and a 2013 paper for a perfectly conducting linear cylindrical\nantenna of infinite length, with application to the finite-length antenna. We\ndiscuss our results in connection with the perfectly conducting antenna,\nproviding perturbative corrections to the current distribution for a large\nconductance, as well as presenting a delta-sequence and the field of a Hertzian\ndipole for the effective current in the limit of vanishing conductance. To that\nend, we employ both analytical tools and numerical methods to compare with\nexperimental results.", "category": "physics_comp-ph" }, { "text": "What Determines the Yield Stress in Amorphous Solids?: A crucially important material parameter for all amorphous solids is the\nyield stress, which is the value of the stress for which the material yields to\nplastic flow when it is strained quasi-statically at zero temperature. It is\ndifficult in laboratory experiments to determine what parameters of the\ninter-particle potential affect the value of the yield stress. Here we use the\nversatility of numerical simulations to study the dependence of the yield\nstress on the parameters of the inter-particle potential. 
We find a very simple\ndependence on the fundamental scales which characterize the repulsive and\nattractive parts of the potential respectively, and offer a scaling theory that\ncollapses the data for widely different potentials and in different space\ndimensions.", "category": "physics_comp-ph" }, { "text": "On the extrapolation of perturbation series: We discuss certain special cases of algebraic approximants that are given as\nzeroes of so-called \"effective characteristic polynomials\" and their\ngeneralization to a multiseries setting. These approximants are useful for the\nconvergence acceleration or summation of quantum mechanical perturbation\nseries. Examples will be given and some properties will be discussed.", "category": "physics_comp-ph" }, { "text": "Predictive Reduced Order Modeling of Chaotic Multi-scale Problems Using\n Adaptively Sampled Projections: An adaptive projection-based reduced-order model (ROM) formulation is\npresented for model-order reduction of problems featuring chaotic and\nconvection-dominant physics. An efficient method is formulated to adapt the\nbasis at every time-step of the on-line execution to account for the unresolved\ndynamics. The adaptive ROM is formulated in a Least-Squares setting using a\nvariable transformation to promote stability and robustness. An efficient\nstrategy is developed to incorporate non-local information in the basis\nadaptation, significantly enhancing the predictive capabilities of the\nresulting ROMs. A detailed analysis of the computational complexity is\npresented, and validated. The adaptive ROM formulation is shown to require\nnegligible offline training and naturally enables both future-state and\nparametric predictions. 
The formulation is evaluated on representative reacting\nflow benchmark problems, demonstrating that the ROMs are capable of providing\nefficient and accurate predictions including those involving significant\nchanges in dynamics due to parametric variations, and transient phenomena. A\nkey contribution of this work is the development and demonstration of a\ncomprehensive ROM formulation that targets predictive capability in chaotic,\nmulti-scale, and transport-dominated problems.", "category": "physics_comp-ph" }, { "text": "Percolation study for the capillary ascent of a liquid through a\n granular soil: Capillary rise plays a crucial role in the construction of road embankments\nin flood zones, where hydrophobic compounds are added to the soil to suppress\nthe rising of water and avoid possible damage to the pavement. Water rises\nthrough liquid bridges, menisci and trimers, whose width and connectivity\ndepend on the maximal half-length {\\lambda} of the capillary bridges among\ngrains. Low values of {\\lambda} generate a disconnected structure, with small\nclusters everywhere. On the contrary, high values of {\\lambda} create a\npercolating cluster of trimers and enclosed volumes that form a natural path\nfor capillary rise. Here, we study the percolation transition of this geometric\nstructure as a function of {\\lambda} in a granular medium of monodisperse\nspheres in a random close packing. 
We determine both the percolating threshold {\\lambda}_{c} =\n(0.049 \\pm 0.004)R (with R the radius of the granular spheres), and the\ncritical exponent of the correlation length {\\nu} = (0.830 \\pm 0.051),\nsuggesting that the percolation transition falls into the universality class of\nordinary percolation.", "category": "physics_comp-ph" }, { "text": "Asymptotic-preserving gyrokinetic implicit particle-orbit integrator for\n arbitrary electromagnetic fields: We extend the asymptotic preserving and energy conserving time integrator for\ncharged-particle motion developed in [Ricketson & Chac\\'on, JCP, 2020] to\ninclude finite Larmor-radius (FLR) effects in the presence of electric-field\nlength-scales comparable to the particle gyro-radius (the gyro-kinetic limit).\nWe introduce two modifications to the earlier scheme. The first is the explicit\ngyro-averaging of the electric field at the half time-step, along with an\nanalogous modification to the current deposition, which we show preserves total\nenergy conservation in implicit PIC schemes. The number of gyrophase samples is\nchosen adaptively, ensuring proper averaging for large timesteps, and the\nrecovery of full-orbit dynamics in the small time-step limit. The second\nmodification is an alternating large and small time-step strategy that ensures\nthe particle trajectory samples gyrophases evenly. We show that this strategy\nrelaxes the time-step restrictions on the scheme, allowing even larger\nspeed-ups than previously achievable. We demonstrate the new method with\nseveral single-particle motion tests in a variety of electromagnetic field\nconfigurations featuring gyro-scale variation in the electric field. 
The\nresults demonstrate the advertised ability to capture FLR effects accurately\neven when significantly stepping over the gyration time-scale.", "category": "physics_comp-ph" }, { "text": "Robust chimera states in SQUID metamaterials with local interactions: We report on the emergence of robust multi-clustered chimera states in a\ndissipative-driven system of symmetrically and locally coupled identical SQUID\noscillators. The \"snake-like\" resonance curve of the single SQUID\n(Superconducting QUantum Interference Device) is the key to the formation of\nthe chimera states and is responsible for the extreme multistability exhibited\nby the coupled system that leads to attractor crowding at the geometrical\nresonance (inductive-capacitive) frequency. Until now, chimera states were\nmostly believed to exist for nonlocal coupling. Our findings provide\ntheoretical evidence that nearest neighbor interactions are indeed capable of\nsupporting such states in a wide parameter range. SQUID metamaterials are the\nsubject of intense experimental investigations and we are highly confident that\nthe complex dynamics demonstrated in this manuscript can be confirmed in the\nlaboratory.", "category": "physics_comp-ph" }, { "text": "Temperature expressions and ergodicity of the Nos\u00e9-Hoover\n deterministic schemes: Thermostats are dynamic equations used to model thermodynamic variables in\nmolecular dynamics. The applicability of thermostats is based on the ergodic\nhypothesis. The most commonly used thermostats are designed according to the\nNos\\'e-Hoover scheme, although it is known that it often violates ergodicity.\nHere, following a method from our recent study \\citep{SamoletovVasiev2017}, we\nhave extended the classic Nos\\'e-Hoover scheme with an additional temperature\ncontrol tool. However, as with the NH scheme, a single thermostat variable is\nused. 
In the present study we analyze the statistical properties of the\nmodified equations of motion with an emphasis on ergodicity. Simultaneous\nthermostatting of all phase variables with minimal extra computational costs is\nan advantage of the specific theoretical scheme presented here.", "category": "physics_comp-ph" }, { "text": "Geometric effect on near-field heat transfer analysis using efficient\ngraphene and nanotube models: Following the recent research enthusiasm on the effect of geometry on\nnear-field heat transfer (NFHT) enhancement, we present an analysis based on\nsimplified yet highly efficient graphene and nanotube models. Two geometries\nare considered: that of two parallel infinite \"graphene\" surfaces and that of a\none-dimensional infinite \"nanotube\" line in parallel with an infinite surface.\nDue to its symmetry, the former is in principle simpler to analyze and even so,\nearlier works suggested that the application of a full model in this problem\nstill demands heavy computations. Among other findings, our simplified\ncomputation - having successfully replicated the results of relevant earlier\nworks - suggests a sharper NFHT enhancement dependence on distance for the\nline-surface system, namely $J\\sim d^{-5.1}$ as compared to $J\\sim d^{-2.2}$\nfor the parallel surface. Such comparisons together with applications of our\nefficient approach would be important first steps in the attempt to find a\ngeneral rule describing the geometric dependence of NFHT.", "category": "physics_comp-ph" }, { "text": "Efficient Monte Carlo methods for simulating diffusion-reaction\n processes in complex systems: We briefly review the principles, mathematical bases, numerical shortcuts and\napplications of fast random walk (FRW) algorithms. 
This Monte Carlo technique\nallows one to simulate individual trajectories of diffusing particles in order\nto study various probabilistic characteristics (harmonic measure, first\npassage/exit time distribution, reaction rates, search times and strategies,\netc.) and to solve the related partial differential equations. The adaptive\ncharacter and flexibility of FRWs make them particularly efficient for\nsimulating diffusive processes in porous, multiscale, heterogeneous, disordered\nor irregularly-shaped media.", "category": "physics_comp-ph" }, { "text": "MOCSA: multiobjective optimization by conformational space annealing: We introduce a novel multiobjective optimization algorithm based on the\nconformational space annealing (CSA) algorithm, MOCSA. It has three\ncharacteristic features: (a) Dominance relationship and distance between\nsolutions in the objective space are used as the fitness measure, (b) update\nrules are based on the fitness as well as the distance between solutions in the\ndecision space and (c) it uses a constrained local minimizer. We have tested\nMOCSA on 12 test problems, consisting of the ZDT and DTLZ test suites.\nBenchmark results show that solutions obtained by MOCSA are closer to the\nPareto front and cover a wider range of the objective space than those by the\nelitist non-dominated sorting genetic algorithm (NSGA2).", "category": "physics_comp-ph" }, { "text": "A conservative discontinuous Galerkin scheme for the 2D incompressible\n Navier--Stokes equations: In this paper we consider a conservative discretization of the\ntwo-dimensional incompressible Navier--Stokes equations. We propose an\nextension of Arakawa's classical finite difference scheme for fluid flow in the\nvorticity-stream function formulation to a high order discontinuous Galerkin\napproximation. In addition, we show numerical simulations that demonstrate the\naccuracy of the scheme and verify the conservation properties, which are\nessential for long time integration. 
Furthermore, we discuss the massively\nparallel implementation on graphics processing units.", "category": "physics_comp-ph" }, { "text": "Multi-moment advection scheme in three dimensions for Vlasov simulations\n of magnetized plasma: We present an extension of the multi-moment advection scheme (Minoshima et\nal., 2011, J. Comput. Phys.) to the three-dimensional case, for full\nelectromagnetic Vlasov simulations of magnetized plasma. The scheme treats not\nonly point values of a profile but also its zeroth to second order piecewise\nmoments as dependent variables, and advances them on the basis of their\ngoverning equations. Similar to the two-dimensional scheme, the\nthree-dimensional scheme can accurately solve the solid body rotation problem\nof a gaussian profile with little numerical dispersion or diffusion. This is a\nvery important property for Vlasov simulations of magnetized plasma. We apply\nthe scheme to electromagnetic Vlasov simulations. Propagation of linear waves\nand nonlinear evolution of the electron temperature anisotropy instability are\nsuccessfully simulated with a good accuracy of the energy conservation.", "category": "physics_comp-ph" }, { "text": "Uncertainty quantification in Eulerian-Lagrangian simulations of\n (point-)particle-laden flows with data-driven and empirical forcing models: An uncertainty quantification framework is developed for Eulerian-Lagrangian\nmodels of particle-laden flows, where the fluid is modeled through a system of\npartial differential equations in the Eulerian frame and inertial particles are\ntraced as points in the Lagrangian frame. The source of uncertainty in such\nproblems is the particle forcing, which is determined empirically or\ncomputationally with high-fidelity methods (data-driven). The framework relies\non the averaging of the deterministic governing equations with the stochastic\nforcing and allows for an estimation of the first and second moment of the\nquantities of interest. 
Via comparison with Monte Carlo simulations, it is\ndemonstrated that the moment equations accurately predict the uncertainty for\nproblems whose Eulerian dynamics are governed by either the linear advection\nequation or the compressible Euler equations. In areas of singular particle\ninterfaces and shock singularities, significant uncertainty is generated. An\ninvestigation into the effect of the numerical methods shows that\nlow-dissipative higher-order methods are necessary to capture numerical\nsingularities (shock discontinuities, singular source terms, particle\nclustering) with low diffusion in the propagation of uncertainty.", "category": "physics_comp-ph" }, { "text": "Data-driven parameterization of the generalized Langevin equation: We present a data-driven approach to determine the memory kernel and random\nnoise in generalized Langevin equations. To facilitate practical\nimplementations, we parameterize the kernel function in the Laplace domain by a\nrational function, with coefficients directly linked to the equilibrium\nstatistics of the coarse-grain variables. We show that such an approximation\ncan be constructed to arbitrarily high order and the resulting generalized\nLangevin dynamics can be embedded in an extended stochastic model without\nexplicit memory. We demonstrate how to introduce the stochastic noise so that\nthe second fluctuation-dissipation theorem is exactly satisfied. Results from\nseveral numerical tests are presented to demonstrate the effectiveness of the\nproposed method.", "category": "physics_comp-ph" }, { "text": "How Large is the Elephant in the Density Functional Theory Room?: A recent paper compares density functional theory results for atomization\nenergies and dipole moments using a multi-wavelet based method with traditional\nGaussian basis set results, and concludes that Gaussian basis sets are\nproblematic for achieving high accuracy. 
We show that by a proper choice of\nGaussian basis sets they are capable of achieving essentially the same accuracy\nas the multi-wavelet approach, and identify a couple of possible problems in\nthe multi-wavelet calculations.", "category": "physics_comp-ph" }, { "text": "Simulating Soft-Sphere Margination in Arterioles and Venules: In this paper, we deploy a Lattice Boltzmann - Particle Dynamics (LBPD)\nmethod to dissect the transport properties within arterioles and venules.\nFirst, the numerical approach is applied to study the transport of Red Blood\nCells (RBC) through plasma and validated by means of comparison with the\nexperimental data in the seminal work by F{\\aa}hr{\\ae}us and Lindqvist. Then,\nthe presence of micro-scale, soft spheres within the blood flow is considered:\nthe evolution in time of the position of such spheres is studied, in order to\nhighlight the presence of possible \\textit{margination} effects. The results of\nthe simulations and the evaluation of the computational eff", "category": "physics_comp-ph" }, { "text": "A Coupled Two-relaxation-time Lattice Boltzmann-Volume Penalization\n method for Flows Past Obstacles: In this article, a coupled Two-relaxation-time Lattice Boltzmann-Volume\npenalization (TRT-LBM-VP) method is presented to simulate flows past obstacles.\nTwo relaxation times are used in the collision operator, of which one is\nrelated to the fluid viscosity and the other one is related to the numerical\nstability and accuracy. The volume penalization method is introduced into the\nTRT-LBM by an external forcing term. In the procedure of the TRT-LBM-VP, the\nprocesses of interpolating velocities on the boundary points and distributing\nthe force density to the Eulerian points are unneeded. When performing the\nTRT-LBM-VP at a given point, only the variables of that point are needed. As\na consequence, the TRT-LBM-VP can be conducted in parallel. 
A comparison of the cylindrical Couette flow solved by the TRT-LBM-VP with\nthat solved by the Single-relaxation-time LBM-VP (SRT-LBM-VP) shows that the\naccuracy of the TRT-LBM-VP is higher than that of the SRT-LBM-VP. Flows past a single\ncircular cylinder, a pair of cylinders in tandem and side-by-side arrangements,\ntwo counter-rotating cylinders and a NACA-0012 airfoil are chosen as numerical\nexperiments to verify the present method further. Good agreement between the\npresent results and those in the previous literature is achieved.", "category": "physics_comp-ph" }, { "text": "The Fermion Sign Problem in Path Integral Monte Carlo Simulations:\n Quantum Dots, Ultracold Atoms, and Warm Dense Matter: The ab initio thermodynamic simulation of correlated Fermi systems is of\ncentral importance for many applications, such as warm dense matter, electrons\nin quantum dots, and ultracold atoms. Unfortunately, path integral Monte Carlo\n(PIMC) simulations of fermions are severely restricted by the notorious fermion\nsign problem (FSP). In this work, we present a hands-on discussion of the FSP\nand investigate in detail its manifestation with respect to temperature, system\nsize, interaction-strength and -type, and the dimensionality of the system.\nMoreover, we analyze the probability distribution of fermionic expectation\nvalues, which can be non-Gaussian and fat-tailed when the FSP is severe. As a\npractical application, we consider electrons and dipolar atoms in a harmonic\nconfinement, and the uniform electron gas in the warm dense matter regime. In\naddition, we provide extensive PIMC data, which can be used as a reference for\nthe development of new methods and as a benchmark for approximations.", "category": "physics_comp-ph" }, { "text": "Modal analysis of electromagnetic resonators: user guide for the MAN\n program: All electromagnetic systems, in particular resonators or antennas, have\nresonances with finite lifetimes. 
The associated eigenstates, also called\nquasinormal modes, are essentially non-Hermitian and determine the optical\nresponses of the system. We introduce MAN (Modal Analysis of Nanoresonators), a\nsoftware package with many open scripts, which computes and normalizes the\nquasinormal modes of virtually any electromagnetic resonator, be it composed of\ndispersive, anisotropic, or non-reciprocal materials. MAN reconstructs the\nscattered field in the basis formed by the quasinormal modes of the resonator\nand provides a transparent interpretation of the physics. The software is\nimplemented in MATLAB and has been developed over the past ten years. MAN\nfeatures many toolboxes that illustrate how to use the software for various\nemblematic computations in low and high frequency regimes. A specific effort\nhas been devoted to interfacing the solver with the finite-element software\nCOMSOL Multiphysics. However, MAN can also be used with other frequency-domain\nnumerical solvers. This article introduces the program and summarizes the\nrelevant theoretical background. MAN includes a comprehensive set of classical\nmodels and toolboxes that can be downloaded from the web.", "category": "physics_comp-ph" }, { "text": "Mesoscopic modelling of epithelial tissues: Over the last two decades, scientific literature has been blooming with\nvarious means of simulating epithelial cell colonies. Each of these simulations\ncan be separated by their respective efficiency (expressed in terms of consumed\ncomputational resources), the number of cells/the size of tissues that can be\nsimulated, the time scale of the simulated dynamics and the coarse grained\nlevel of precision. Choosing the right algorithm for the simulation of\nepithelial cells and tissues is a compromise between each of these key\nelements. 
Irrespective of the method, each algorithm includes part, or all, of\nthe following features: short-range membrane-mediated attraction between cells,\nsoft-core repulsion between cells, cell proliferation, cell death, cell\nmotility, fluctuations, etc. We will first give a non-exhaustive overview of\ncommonly used modeling approaches for tissues at a mesoscopic level, giving a\nrough idea of the coarse-graining decisions made for every one of them. Then we\nwill dive into greater detail on how to implement a relaxation procedure\naccording to the Vertex Model, refreshing aspects of the theoretical\ngroundwork, describing required data structures and simulation steps and\npointing out details of the simulation that can present pitfalls to a\nfirst-time implementation of the model.", "category": "physics_comp-ph" }, { "text": "A Parallel Multi-Domain Solution Methodology Applied to Nonlinear\n Thermal Transport Problems in Nuclear Fuel Pins: This paper describes an efficient and nonlinearly consistent parallel\nsolution methodology for solving coupled nonlinear thermal transport problems\nthat occur in nuclear reactor applications over hundreds of individual 3D\nphysical subdomains. Efficiency is obtained by leveraging knowledge of the\nphysical domains, the physics on individual domains, and the couplings between\nthem for preconditioning within a Jacobian Free Newton Krylov method. Details\nof the computational infrastructure that enabled this work, namely the open\nsource Advanced Multi-Physics (AMP) package developed by the authors is\ndescribed. Details of verification and validation experiments, and parallel\nperformance analysis in weak and strong scaling studies demonstrating the\nachieved efficiency of the algorithm are presented. Furthermore, numerical\nexperiments demonstrate that the preconditioner developed is independent of the\nnumber of fuel subdomains in a fuel rod, which is particularly important when\nsimulating different types of fuel rods. 
Finally, we demonstrate the power of\nthe coupling methodology by considering problems with couplings between surface\nand volume physics and coupling of nonlinear thermal transport in fuel rods to\nan external radiation transport code.", "category": "physics_comp-ph" }, { "text": "Cell Size Effect on Computational Fluid Dynamics: The Limitation\n Principle for Flow Simulation: For theoretical gas dynamics, the flow regimes are classified according to\nthe Knudsen number. For computational fluid dynamics (CFD), the numerical flow\nfield is the projection of the physical flow field onto the discrete space and\ntime, which is related to the cell Knudsen number. The real representable flow\nregimes are controlled by these two parameters. According to the values of\nKnudsen number and cell Knudsen number, we study the classification of the\nnumerical flow regimes. In the process of mesh refinement, the numerical\nexperiments show the change of numerical flow regime from continuum, to\nnear-continuum, and to non-equilibrium one. The change of flow regime with\ndifferent cell resolution is the limitation principle for the numerical\nsimulation, which is the best a multiscale method can do. In other words, we\nshould have changeable numerical governing equations in different mesh size\nscale, and they are coupled with the traditional physical equations in\ndifferent scales, such as the Navier-Stokes and Boltzmann. Under the multiscale\nmodeling, a mesh refinement is a process in resolving the flow physics in\ndifferent scale. The verification and validation (V$\\&$V) need include the\nphysical modeling mechanism in the mesh refinement process. 
The traditional\nidea of mesh refinement for targeting a fixed partial differential equation\ncannot achieve the final goal of computation, which is to recover the flow\nphysics as truthfully as possible under the limitation of the cell resolution.", "category": "physics_comp-ph" }, { "text": "Spatial coupling of an explicit temporal adaptive integration scheme\n with an implicit time integration scheme: The Reynolds-Averaged Navier-Stokes equations and the Large-Eddy Simulation\nequations can be coupled using a transition function to switch from a set of\nequations applied in some areas of a domain to the other set in the other part\nof the domain. Following this idea, different time integration schemes can be\ncoupled. In this context, we developed a hybrid time integration scheme that\nspatially couples the explicit scheme of Heun and the implicit scheme of Crank\nand Nicolson using a dedicated transition function. This scheme is linearly\nstable and second-order accurate. In this paper, an extension of this hybrid\nscheme is introduced to deal with a temporal adaptive procedure. The idea is to\ntreat the time integration procedure with unstructured grids as it is performed\nwith Cartesian grids with local mesh refinement. Depending on its\ncharacteristic size, each mesh cell is assigned a rank. And for two cells from\ntwo consecutive ranks, the ratio of the associated time steps for time marching\nthe solutions is $2$. As a consequence, the cells with the lowest rank iterate\nmore than the other ones to reach the same physical time. In a finite-volume\ncontext, a key ingredient is to keep the conservation property for the\ninterfaces that separate two cells of different ranks. After introducing the\ndifferent schemes, the paper recalls briefly the coupling procedure, and\ndetails the extension to the temporal adaptive procedure. 
The new time\nintegration scheme is validated with the propagation of a 1D wave packet,\nSod's shock tube, and the transport of a two-dimensional vortex in a uniform\nflow.", "category": "physics_comp-ph" }, { "text": "Improvements to the Prototype Micro-Brittle Linear Elasticity Model of\n Peridynamics: This paper assesses the accuracy and convergence of the linear-elastic,\nbond-based Peridynamic model with brittle failure, known as the prototype\nmicro-brittle (PMB) model. We investigate the discrete equations of this model,\nsuitable for numerical implementation. It is shown that the widely used\ndiscretization approach incurs rather large errors. Motivated by this\nobservation, a correction is proposed, which significantly increases the\naccuracy by cancelling errors associated with the discretization. As an\nadditional result, we derive equations to treat the interactions between\ndifferently sized particles, i.e., a non-homogeneous discretization spacing.\nThis presents an important step forward for the applicability of the PMB model\nto complex geometries, where it is desired to model interesting parts with a\nfine resolution (small particle spacings) and other parts with a coarse\nresolution in order to gain numerical efficiency. Validation of the corrected\nPeridynamic model is performed by comparing longitudinal sound wave propagation\nvelocities with exact theoretical results. We find that the corrected approach\ncorrectly reproduces the sound wave velocity, while the original approach\nseverely overestimates this quantity. Additionally, we present simulations for\na crack growth problem which can be analytically solved within the framework of\nLinear Elastic Fracture Mechanics Theory. 
We find that the corrected\nPeridynamics model is capable of quantitatively reproducing crack initiation\nand propagation.", "category": "physics_comp-ph" }, { "text": "Application of the iterative approach to modal methods for the solution\n of Maxwell's equations: In this work we discuss the possibility of reducing the computational\ncomplexity of modal methods, i.e. methods based on eigenmode expansion, from\nthe third power to the second power of the number of eigenmodes. The proposed\napproach is based on calculating the eigenmodes part by part using a\nshift-and-invert iterative technique and on applying iterative linear solvers\nto compute the eigenmode expansion coefficients. As a practical implementation,\niterative modal methods based on polynomials and trigonometric functions, as\nwell as on a finite-difference scheme, are developed. Alternatives to the\nscattering matrix (S-matrix) technique, based on pure iterative or mixed\ndirect-iterative approaches that markedly reduce the number of required\nnumerical operations, are discussed. Additionally, the possibility of\ndiminishing the memory demand of the whole algorithm from the second to the\nfirst power of the number of modes by implementing the iterative approach is\ndemonstrated. This allows calculations with up to hundreds of thousands of\neigenmodes to be carried out without using a supercomputer.", "category": "physics_comp-ph" }, { "text": "A graph theoretic framework for representation, exploration and analysis\n on computed states of physical systems: A graph theoretic perspective is taken for a range of phenomena in continuum\nphysics in order to develop representations for analysis of large scale,\nhigh-fidelity solutions to these problems. Of interest are phenomena described\nby partial differential equations, with solutions being obtained by\ncomputation. 
The motivation is to gain insight that may otherwise be difficult\nto attain because of the high dimensionality of computed solutions. We consider\ngraph theoretic representations that are made possible by low-dimensional\nstates defined on the systems. These states are typically functionals of the\nhigh-dimensional solutions, and therefore retain important aspects of the\nhigh-fidelity information present in the original, computed solutions. Our\napproach is rooted in regarding each state as a vertex on a graph and\nidentifying edges via processes that are induced either by numerical solution\nstrategies or by the physics. Correspondences are drawn between the sampling\nof stationary states, or the time evolution of dynamic phenomena, and the\nanalytic machinery of graph theory. A collection of computations is examined in\nthis framework and new insights into them are presented through analysis of the\ncorresponding graphs.", "category": "physics_comp-ph" }, { "text": "The Strongly Coupled Electron Liquid: ab initio Path Integral Monte\n Carlo Simulations and Dielectric Theories: The strongly coupled electron liquid provides a unique opportunity to study\nthe complex interplay of strong coupling with quantum degeneracy effects and\nthermal excitations. To this end, we carry out extensive \\textit{ab initio}\npath integral Monte Carlo (PIMC) simulations to compute the static structure\nfactor, interaction energy, density response function, and the corresponding\nstatic local field correction in the range of $20\\leq r_s \\leq 100$ and\n$0.5\\leq \\theta\\leq 4$. We subsequently compare these data to several\ndielectric approximations, and find that different schemes are capable of\nreproducing different features of the PIMC results at certain parameters.\nMoreover, we provide a comprehensive data table of interaction energies and\ncompare those to two recent parametrizations of the exchange-correlation free\nenergy, where they are available. 
Finally, we briefly touch upon the\npossibility of a charge-density wave. The present study is complementary to\nprevious investigations of the uniform electron gas in the warm dense matter\nregime and, thus, further completes our current picture of this fundamental\nmodel system at finite temperature. All PIMC data are available online.", "category": "physics_comp-ph" }, { "text": "Manifold learning techniques and model reduction applied to dissipative\n PDEs: We link nonlinear manifold learning techniques for data analysis/compression\nwith model reduction techniques for evolution equations with time scale\nseparation. In particular, we demonstrate a \"nonlinear extension\" of the\nPOD-Galerkin approach to obtaining reduced dynamic models of dissipative\nevolution equations. The approach is illustrated through a reaction-diffusion\nPDE, and the performance of different simulators on the full and the reduced\nmodels is compared. We also discuss the relation of this nonlinear extension\nwith the so-called \"nonlinear Galerkin\" methods developed in the context of\nApproximate Inertial Manifolds.", "category": "physics_comp-ph" }, { "text": "Design and performance evaluations of generic programming techniques in\n an R&D prototype of Geant4 physics: An R&D project has recently been launched to investigate Geant4 architectural\ndesign in view of addressing new experimental issues in HEP and other related\nphysics disciplines. In the context of this project the use of generic\nprogramming techniques besides the conventional object-oriented approach is\ninvestigated. Software design features and preliminary results from a new\nprototype implementation of Geant4 electromagnetic physics are illustrated.\nPerformance evaluations are presented.
Issues related to quality assurance in\nGeant4 physics modelling are discussed.", "category": "physics_comp-ph" }, { "text": "Thermal conductivity of B-DNA: The thermal conductivity of B-form double-stranded DNA (dsDNA) of the\nDrew-Dickerson sequence d(CGCGAATTCGCG) is computed using classical Molecular\nDynamics (MD) simulations. In contrast to previous studies, which focus on a\nsimplified 1D model or a coarse-grained model of DNA to improve simulation\ntimes, full atomistic simulations are employed to understand the thermal\nconduction in B-DNA. Thermal conductivity at different temperatures from 100 to\n400 K is investigated using the Einstein Green-Kubo equilibrium and\nM\\"uller-Plathe non-equilibrium formalisms. The thermal conductivity of B-DNA\nat room temperature is found to be 1.5 W/m$\cdot$K in the equilibrium and 1.225\nW/m$\cdot$K in the non-equilibrium approach. In addition, the denaturation\nregime of B-DNA is obtained from the variation of thermal conductivity with\ntemperature. It is in agreement with previous works using the Peyrard-Bishop-Dauxois\n(PBD) model at a temperature of around 350 K. The quantum heat capacity\n($C_{vq}$) provides additional clues regarding the Debye and denaturation\ntemperatures of 12-bp B-DNA.", "category": "physics_comp-ph" }, { "text": "Generalized Lattice-Boltzmann Equation with Forcing Term for Computation\n of Wall-Bounded Turbulent Flows: We present a framework based on the generalized lattice-Boltzmann equation\nusing multiple relaxation times with a forcing term for eddy-capturing simulation\nof wall-bounded turbulent flows. Due to its flexibility in using disparate\nrelaxation times, the GLBE is well suited to maintaining numerical stability on\ncoarser grids and in obtaining improved solution fidelity of near-wall\nturbulent fluctuations.
The subgrid scale turbulence effects are represented by\nthe standard Smagorinsky eddy-viscosity model, which is modified by using the\nvan Driest wall-damping function for near wall effects. For simulation of a\nwider class of problems, we introduce general forcing terms in the natural\nmoment space of the GLBE. Expressions for the strain rate tensor used in the\nSGS model are derived in terms of the non-equilibrium moments of the GLBE to\ninclude such forcing terms. Variable resolutions are introduced into this\nextended GLBE framework through a conservative multiblock approach. The\napproach is assessed for two canonical flow problems bounded by walls, viz.,\nfully-developed turbulent channel flow at a shear or friction Reynolds number\n($\\mathrm{Re}$) of 183.6 based on the channel half-width and three-dimensional\n(3D) shear-driven flows in a cubical cavity at a $\\mathrm{Re}$ of 12,000 based\non the side length of the cavity. Comparisons of detailed computed near-wall\nturbulent flow structure, given in terms of various turbulence statistics, with\navailable data, including those from direct numerical simulations (DNS) and\nexperiments showed good agreement, with marked improvement in numerical\nstability characteristics.", "category": "physics_comp-ph" }, { "text": "Numerical solution of stochastic master equations using stochastic\n interacting wave functions: We develop a new approach for solving stochastic quantum master equations\nwith mixed initial states. First, we obtain that the solution of the\njump-diffusion stochastic master equation is represented by a mixture of pure\nstates satisfying a system of stochastic differential equations of\nSchr\\\"odinger type. Then, we design three exponential schemes for these coupled\nstochastic Schr\\\"odinger equations, which are driven by Brownian motions and\njump processes. Hence, we have constructed efficient numerical methods for the\nstochastic master equations based on quantum trajectories. 
The good performance\nof the new numerical integrators is illustrated by simulations of two quantum\nmeasurement processes.", "category": "physics_comp-ph" }, { "text": "Local Enhancement of lipid membrane permeability induced by irradiated\n gold nanoparticles: Photothermal therapies are based on the optical excitation of plasmonic\nnanoparticles in the biological environment. The effects of the irradiation on\nthe biological medium depend critically on the heat transfer process at the\nnanoparticle interface, on the temperature reached by the tissues as well as on\nthe spatial extent of temperature gradients. Unfortunately, both the\ntemperature and its biological effects are difficult to probe\nexperimentally at the molecular scale. Here, we approach this problem using\nnonequilibrium molecular dynamics simulations. We focus on photoporation, a\nphotothermal application based on the irradiation of gold nanoparticles by\nsingle, short-duration laser pulses. The nanoparticles, stably bound to cell\nmembranes, convert the radiation into heat, inducing transient changes of\nmembrane permeability. We make a quantitative prediction of the temperature\ngradient around the nanoparticle upon irradiation by typical experimental laser\nfluences. Water permeability is locally enhanced around the NP, in an annular\nregion that extends only a few nm from the nanoparticle interface.
We\ncorrelate the local enhancement of permeability at the NP-lipid interface to\nthe temperature inhomogeneities of the membrane and to the consequent\navailability of free volume pockets within the membrane core.", "category": "physics_comp-ph" }, { "text": "Dynamic properties and the roton mode attenuation in the liquid 3He: an\n ab initio study within the self-consistent method of moments: The dynamic structure factor and the eigenmodes of density fluctuations in\nthe uniform liquid $^3$He are studied using a novel non-perturbative approach.\nThis new version of the self-consistent method of moments invokes up to nine\nsum rules and other exact relations involving the spectral density, the\ntwo-parameter Shannon information entropy maximization procedure, and ab\ninitio path integral Monte Carlo (PIMC) simulations, which provide crucial,\nreliable input information on the system's static properties. A detailed analysis\nof the dispersion relations of collective excitations, the mode decrements and\nthe static structure factor (SSF) of $^3$He at the saturated vapor pressure is\nperformed. The results are compared to available experimental data~[1,2]. The\ntheory reveals a clear signature of the roton-like feature in the particle-hole\nsegment of the excitation spectrum with a significant reduction of the roton\ndecrement in the wavenumber range $1.3 A^{-1} \leq q\leq 2.2 A^{-1}$. The\nobserved roton mode remains a well defined collective excitation even in the\nparticle-hole band, where, however, it is strongly damped. Hence, the existence\nof the roton-like mode in the bulk liquid $^3$He is confirmed, as in other\nstrongly interacting quantum fluids~[3]. The phonon branch of the spectrum is\nalso studied, with reasonable agreement with the same experimental data being\nachieved.
The presented combined approach permits the production of ab initio data on\nthe system's dynamic characteristics in a wide range of physical parameters and\nfor other physical systems.", "category": "physics_comp-ph" }, { "text": "Three-dimensional honeycomb carbon: Junction line distortion and novel\n emergent fermions: Carbon enjoys a vast number of allotropic forms, each possessing unique\nproperties determined by the lattice structures and bonding characters. Here,\nbased on first-principles calculations, we propose a new three-dimensional\ncarbon allotrope--hC28. We show that hC28 possesses exceptional energetic,\ndynamical, thermal, and mechanical stability. It is energetically more stable\nthan most other synthesized or proposed carbon allotropes. The material has a\nrelatively small bulk modulus, but is thermally stable at temperatures as high\nas 2000 K. The structural, mechanical, x-ray diffraction, and electronic\nproperties are systematically investigated. In particular, we show that its\nlow-energy band structure hosts multiple unconventional emergent fermions,\nincluding the quadratic-contact-point fermions, the birefringent Dirac\nfermions, and the triple-point fermions. We construct effective models to\ncharacterize each kind of fermion. Our work not only discovers a new carbon\nallotropic form but also reveals remarkable mechanical and electronic\nproperties for this new material, which may pave the way towards both\nfundamental studies and practical applications.", "category": "physics_comp-ph" }, { "text": "Real-space formulation of the stress tensor for $\mathcal{O}(N)$ density\n functional theory: application to high temperature calculations: We present an accurate and efficient real-space formulation of the\nHellmann-Feynman stress tensor for $\mathcal{O}(N)$ Kohn-Sham density\nfunctional theory (DFT).
While applicable at any temperature, the formulation\nis most efficient at high temperature, where the Fermi-Dirac distribution\nbecomes smoother and the density matrix becomes correspondingly more localized. We\nfirst rewrite the orbital-dependent stress tensor for real-space DFT in terms\nof the density matrix, thereby making it amenable to $\mathcal{O}(N)$ methods.\nWe then describe its evaluation within the $\mathcal{O}(N)$ infinite-cell\nClenshaw-Curtis Spectral Quadrature (SQ) method, a technique that is applicable\nto metallic as well as insulating systems, is highly parallelizable, becomes\nincreasingly efficient with increasing temperature, and provides results\ncorresponding to the infinite crystal without the need for Brillouin zone\nintegration. We demonstrate systematic convergence of the resulting formulation\nwith respect to SQ parameters to exact diagonalization results, and show\nconvergence with respect to mesh size to established planewave results. We\nemploy the new formulation to compute the viscosity of hydrogen at a million\nkelvin from Kohn-Sham quantum molecular dynamics, where we find agreement with\nprevious, more approximate orbital-free density functional methods.", "category": "physics_comp-ph" }, { "text": "Coupled Cluster Greens function formulations based on the effective\n Hamiltonians: We demonstrate that the effective Hamiltonians obtained with the downfolding\nprocedure based on the double unitary coupled cluster (DUCC) ansatz can be used in\nthe context of the Greens function coupled cluster (GFCC) formalism to calculate\nspectral functions of molecular systems. This combined approach (DUCC-GFCC)\nprovides a significant reduction of numerical effort and good agreement with\nthe corresponding all-orbital GFCC methods in energy windows that are\nconsistent with the choice of active space.
These features are demonstrated on\nthe example of two benchmark systems: H2O and N2, where DUCC-GFCC calculations\nwere performed for active spaces of various sizes.", "category": "physics_comp-ph" }, { "text": "Application of Coarse Integration to Bacterial Chemotaxis: We have developed and implemented a numerical evolution scheme for a class of\nstochastic problems in which the temporal evolution occurs on widely-separated\ntime scales, and for which the slow evolution can be described in terms of a\nsmall number of moments of an underlying probability distribution. We\ndemonstrate this method via a numerical simulation of chemotaxis in a\npopulation of motile, independent bacteria swimming in a prescribed gradient of\na chemoattractant. The microscopic stochastic model, which is simulated using a\nMonte Carlo method, uses a simplified deterministic model for\nexcitation/adaptation in signal transduction, coupled to a realistic,\nstochastic description of the flagellar motor. We show that projective time\nintegration of ``coarse'' variables can be carried out on time scales long\ncompared to that of the microscopic dynamics. Our coarse description is based\non the spatial cell density distribution. Thus we are assuming that the system\n``closes'' on this variable so that it can be described on long time scales\nsolely by the spatial cell density. Computationally the variables are the\ncomponents of the density distribution expressed in terms of a few basis\nfunctions, given by the singular vectors of the spatial density distribution\nobtained from a sample Monte Carlo time evolution of the system. 
We present\nnumerical results and analysis of errors in support of the efficacy of this\ntime-integration scheme.", "category": "physics_comp-ph" }, { "text": "FFT-based Kronecker product approximation to micromagnetic long-range\n interactions: We derive a Kronecker product approximation for the micromagnetic long range\ninteractions in a collocation framework by means of separable sinc quadrature.\nEvaluation of this operator for structured tensors (Canonical format, Tucker\nformat, Tensor Trains) scales below linear in the volume size. Based on\nefficient usage of FFT for structured tensors, we are able to accelerate\ncomputations to quasi linear complexity in the number of collocation points\nused in one dimension. Quadratic convergence of the underlying collocation\nscheme as well as exponential convergence in the separation rank of the\napproximations is proved. Numerical experiments on accuracy and complexity\nconfirm the theoretical results.", "category": "physics_comp-ph" }, { "text": "An iterative deep learning procedure for determining electron scattering\n cross-sections from transport coefficients: We propose improvements to the Artificial Neural Network (ANN) method of\ndetermining electron scattering cross-sections from swarm data proposed by\ncoauthors. A limitation inherent to this problem, known as the inverse swarm\nproblem, is the non-unique nature of its solutions, particularly when there\nexists multiple cross-sections that each describe similar scattering processes.\nConsidering this, prior methods leveraged existing knowledge of a particular\ncross-section set to reduce the solution space of the problem. To reduce the\nneed for prior knowledge, we propose the following modifications to the ANN\nmethod. First, we propose a Multi-Branch ANN (MBANN) that assigns an\nindependent branch of hidden layers to each cross-section output. 
We show that\nin comparison with an equivalent conventional ANN, the MBANN architecture\nenables an efficient and physics informed feature map of each cross-section.\nAdditionally, we show that the MBANN solution can be improved upon by\nsuccessive networks that are each trained using perturbations of the previous\nregression. Crucially, the method requires much less input data and fewer\nrestrictive assumptions, and only assumes knowledge of energy loss thresholds\nand the number of cross-sections present.", "category": "physics_comp-ph" }, { "text": "Long range correction for multi-site Lennard-Jones models and planar\n interfaces: A slab based long range correction approach for multi-site Lennard-Jones\nmodels is presented for systems with a planar film geometry that is based on\nthe work by Janecek, J. Phys. Chem. B 110: 6264 (2006). It is efficient because\nit relies on a center-of-mass cutoff scheme and scales in terms of numerics\nalmost perfectly with the molecule number. For validation, a series of\nsimulations with the two-center Lennard-Jones model fluid, carbon dioxide and\ncyclohexane is carried out. The results of the present approach, a site-based\nlong range correction and simulations without any long range correction are\ncompared with respect to the saturated liquid density and the surface tension.\nThe present simulation results exhibit only a weak dependence on the cutoff\nradius, indicating a high accuracy of the implemented long range correction.", "category": "physics_comp-ph" }, { "text": "Vibrational mean free paths and thermal conductivity of amorphous\n silicon from non-equilibrium molecular dynamics simulations: The frequency-dependent mean free paths (MFPs) of vibrational heat carriers\nin amorphous silicon are predicted from the length dependence of the spectrally\ndecomposed heat current (SDHC) obtained from non-equilibrium molecular dynamics\nsimulations. 
The results suggest a (frequency)$^{-2}$ scaling of the\nroom-temperature MFPs below 5 THz. The MFPs exhibit a local maximum at a\nfrequency of 8 THz and fall below 1 nm at frequencies greater than 10 THz,\nindicating localized vibrations. The MFPs extracted from sub-10 nm system-size\nsimulations are used to predict the length-dependence of thermal conductivity\nup to system sizes of 100 nm and good agreement is found with separate\nmolecular dynamics simulations. Weighting the SDHC by the frequency-dependent\nquantum occupation function provides a simple and convenient method to account\nfor quantum statistics and provides reasonable agreement with the\nexperimentally-measured trend and magnitude.", "category": "physics_comp-ph" }, { "text": "Deep Learning Architecture Based Approach For 2D-Simulation of Microwave\n Plasma Interaction: This paper presents a convolutional neural network (CNN)-based deep learning\nmodel, inspired from UNet with series of encoder and decoder units with skip\nconnections, for the simulation of microwave-plasma interaction. The microwave\npropagation characteristics in complex plasma medium pertaining to\ntransmission, absorption and reflection primarily depends on the ratio of\nelectromagnetic (EM) wave frequency and electron plasma frequency, and the\nplasma density profile. The scattering of a plane EM wave with fixed frequency\n(1 GHz) and amplitude incident on a plasma medium with different gaussian\ndensity profiles (in the range of $1\\times 10^{17}-1\\times 10^{22}{m^{-3}}$)\nhave been considered. The training data associated with microwave-plasma\ninteraction has been generated using 2D-FDTD (Finite Difference Time Domain)\nbased simulations. The trained deep learning model is then used to reproduce\nthe scattered electric field values for the 1GHz incident microwave on\ndifferent plasma profiles with error margin of less than 2\\%. 
We propose a\ncomplete deep learning (DL) based pipeline to train, validate and evaluate the\nmodel. We compare the results of the network, using various metrics like the SSIM\nindex, average percent error and mean square error, with the physical data\nobtained from well-established FDTD based EM solvers. To the best of our\nknowledge, this is the first effort towards exploring a DL based approach for\nthe simulation of complex microwave-plasma interaction. The deep learning\ntechnique proposed in this work is significantly faster than\nexisting computational techniques, and can be used as a new, prospective and\nalternative computational approach for investigating microwave-plasma\ninteraction in a real-time scenario.", "category": "physics_comp-ph" }, { "text": "Self-consistent field theory based molecular dynamics with linear\n system-size scaling: We present an improved field-theoretic approach to the grand-canonical\npotential suitable for linear scaling molecular dynamics simulations using\nforces from self-consistent electronic structure calculations. It is based on\nan exact decomposition of the grand canonical potential for independent\nfermions and relies neither on the ability to localize the orbitals nor on\nthe Hamilton operator being well-conditioned. Hence, this scheme enables highly\naccurate all-electron linear scaling calculations even for metallic systems.\nThe inherent energy drift of Born-Oppenheimer molecular dynamics simulations,\narising from an incomplete convergence of the self-consistent field cycle, is\ncircumvented by means of a properly modified Langevin equation.
The predictive\npower of the present linear scaling \textit{ab-initio} molecular dynamics\napproach is illustrated using the example of liquid methane under extreme\nconditions.", "category": "physics_comp-ph" }, { "text": "Quasi-Helmholtz Decomposition, Gauss' Laws and Charge Conservation for\n Finite Element Particle-in-Cell: The development of particle-in-cell methods using finite element based methods\n(FEMs) has been a topic of renewed interest; this has largely been driven by\n(a) the ability of finite element methods to better model geometry, (b) better\nunderstanding of the function spaces that are necessary to represent all Maxwell\nquantities, and (c) more recently, the fundamental rubrics that should be\nobeyed in space and time so as to satisfy Gauss' laws and the equation of\ncontinuity. In that vein, methods have been developed recently that satisfy\nthese equations and are agnostic to time stepping methods. While this development\nis indeed a significant advance, it should be noted that implicit FEM transient\nsolvers support an underlying null space that corresponds to a gradient of a\nscalar potential $\nabla \Phi(\textbf{r})$ (or $t \nabla \Phi (\textbf{r})$ in\nthe case of wave equation solvers). While explicit schemes do not suffer from\nthis drawback, they are only conditionally stable, and their time step sizes are mesh\ndependent and very small. A way to overcome this bottleneck, and indeed\nsatisfy all four of Maxwell's equations, is to use a quasi-Helmholtz formulation on\na tessellation. In the re-formulation presented, we strictly satisfy the\nequation of continuity and Gauss' laws for both the electric and magnetic flux\ndensities.
Results demonstrating the efficacy of this scheme will be presented.", "category": "physics_comp-ph" }, { "text": "Relationship between low-discrepancy sequence and static solution to\n multi-bodies problem: The main interest of this paper is to study the relationship between the\nlow-discrepancy sequence and the static solution to the multi-bodies problem in\nhigh-dimensional space. An assumption that the static solution to the\nmulti-bodies problem is a low-discrepancy sequence is proposed. Considering the\nstatic solution to the multi-bodies problem corresponds to the minimum\npotential energy principle, we further assume that the distribution of the\nbodies is the most uniform when the potential energy is the smallest. To verify\nthe proposed assumptions, a dynamical evolutionary model (DEM) based on the\nminimum potential energy is established to find out the static solution. The\ncentral difference algorithm is adopted to solve the DEM and an evolutionary\niterative scheme is developed. The selection of the mass and the damping\ncoefficient to ensure the convergence of the evolutionary iteration is\ndiscussed in detail. Based on the DEM, the relationship between the potential\nenergy and the discrepancy during the evolutionary iteration process is\nstudied. It is found that there is a significant positive correlation between\nthem, which confirms the proposed assumptions. We also combine the DEM with the\nrestarting technique to generate a series of low-discrepancy sequences. These\nsequences are unbiased and perform better than other low-discrepancy sequences\nin terms of the discrepancy, the potential energy, integrating eight test\nfunctions and computing the statistical moments for two practical stochastic\nproblems. 
Numerical examples also show that the DEM can generate\nuniformly distributed sequences not only in cubes but also in non-cubes.", "category": "physics_comp-ph" }, { "text": "Boosting Material Modeling Using Game Tree Search: We demonstrate a heuristic optimization algorithm based on game tree\nsearch for multi-component materials design. The algorithm searches for the\nlargest spin polarization of seven-component Heusler alloys. The algorithm can\nfind the peaks quickly and is more robust against local optima than Bayesian\noptimization using the expected improvement or upper confidence\nbound approaches. We also investigate Heusler alloys including anti-site\ndisorder and show that\n[Fe$_{0.9}$Co$_{0.1}$]$_{2}$Cr$_{0.95}$Mn$_{0.05}$Si$_{0.3}$Ge$_{0.7}$ has the\npotential to be a highly spin-polarized material with robustness against\nanti-site disorder.", "category": "physics_comp-ph" }, { "text": "Simulation of Free Surface Compressible Flows Via a Two Fluid Model: The purpose of this communication is to discuss the simulation of a free\nsurface compressible flow between two fluids, typically air and water. We use a\ntwo fluid model with the same velocity, pressure and temperature for both\nphases. In such a numerical model, the free surface becomes a thin\nthree-dimensional zone. The present method has at least three advantages: (i) the\nfree-surface treatment is completely implicit; (ii) it can naturally handle\nwave breaking and other topological changes in the flow; (iii) one can easily\nvary the Equations of State (EOS) of each fluid (in principle, one can even\nconsider tabulated EOS).
Moreover, our model is unconditionally hyperbolic for\nreasonable EOS.", "category": "physics_comp-ph" }, { "text": "Ab initio studies of the ground and first excited states of the Sr-H$_2$\n and Yb-H$_2$ complexes: Accurate intermolecular potential-energy surfaces (IPESs) for the ground and\nfirst excited states of the Sr-H$_2$ and Yb-H$_2$ complexes were calculated.\nAfter an extensive methodological study, the CCSD(T) method with the\nDouglas-Kroll-Hess Hamiltonian and correlation-consistent basis sets of\ntriple-$\zeta$ quality, extended with two sets of diffuse functions and a set of\nmidbond functions, was chosen. The obtained ground-state IPESs are similar in\nboth complexes, being relatively isotropic with two minima and two transition\nstates (equivalent by symmetry). The global minima correspond to the collinear\ngeometries with $R=$ 5.45 and 5.10~{\AA} and energies of $-$27.7 and\n$-$31.7~cm$^{-1}$ for the Sr-H$_2$ and Yb-H$_2$ systems, respectively. The\ncalculated surfaces for the Sr($^3P$)-H$_2$ and Yb($^3P$)-H$_2$ states are\ndeeper and more anisotropic and they exhibit similar patterns within both\ncomplexes. The deepest surfaces, where the singly occupied \textit{p}-orbital\nof the metal atom is perpendicular to the intermolecular axis, are\ncharacterised by global minima of ca. $-$2053 and $-$2260~cm$^{-1}$ in the\nT-shape geometries at $R=$ 2.41 and 2.29~{\AA} for Sr-H$_2$ and Yb-H$_2$,\nrespectively. Additional calculations for the complexes of Sr and Yb with the\nHe atom revealed a similar, strong dependence of the interaction energy on the\norientation of the \textit{p}-orbital in the Sr($^3P$)-He and Yb($^3P$)-He\nstates.", "category": "physics_comp-ph" }, { "text": "Dynamical structure of entangled polymers simulated under shear flow: The non-linear response of entangled polymers to shear flow is complicated.\nIts current understanding is framed mainly as a rheological description in\nterms of the complex viscosity.
However, the full picture requires an\nassessment of the dynamical structure of individual polymer chains which give\nrise to the macroscopic observables. Here we shed new light on this problem,\nusing a computer simulation based on a blob model, extended to describe shear\nflow in polymer melts and semi-dilute solutions. We examine the diffusion and\nthe intermediate scattering spectra during a steady shear flow. The relaxation\ndynamics are found to speed up along the flow direction, but slow down along\nthe shear gradient direction. The third axis, vorticity, shows a slowdown at\nthe short scale of a tube, but reaches a net speedup at the large scale of the\nchain radius of gyration.", "category": "physics_comp-ph" }, { "text": "Predicting Critical Transitions in Multiscale Dynamical Systems Using\n Reservoir Computing: We study the problem of predicting rare critical transition events for a\nclass of slow-fast nonlinear dynamical systems. The state of the system of\ninterest is described by a slow process, whereas a faster process drives its\nevolution and induces critical transitions. By taking advantage of recent\nadvances in reservoir computing, we present a data-driven method to predict the\nfuture evolution of the state. We show that our method is capable of predicting\na critical transition event at least several numerical time steps in advance.\nWe demonstrate the success as well as the limitations of our method using\nnumerical experiments on three examples of systems, ranging from low\ndimensional to high dimensional. We discuss the mathematical and broader\nimplications of our results.", "category": "physics_comp-ph" }, { "text": "Machine learning materials physics: Deep neural networks trained on\n elastic free energy data from martensitic microstructures predict homogenized\n stress fields with high accuracy: We present an approach to numerical homogenization of the elastic response of\nmicrostructures. 
Our work uses deep neural network representations trained on\ndata obtained from direct numerical simulation (DNS) of martensitic phase\ntransformations. The microscopic model leading to the microstructures is based\non non-convex free energy density functions that give rise to martensitic\nvariants, and must be extended to gradient theories of elasticity at finite\nstrain. These strain gradients introduce interfacial energies as well as\ncoercify the model, enabling the admission of a large number of solutions, each\nhaving finely laminated microstructures. The numerical stiffness of these DNS\nsolutions and the fine scales of response make the data expensive to obtain,\nwhile also motivating the search for homogenized representations of their\nresponse for the purpose of engineering design. The high-dimensionality of the\nproblem is reduced by training deep neural networks (DNNs) on the effective\nresponse by using the scalar free energy density data. The novelty in our\napproach is that the trained DNNs also return high-fidelity representations of\nderivative data, specifically the stresses. This allows the recapitulation of\nthe classic hyperelastic response of continuum elasticity via the DNN\nrepresentation. Also included are detailed optimization studies over\nhyperparameters, and convergence with size of datasets.", "category": "physics_comp-ph" }, { "text": "Solving the transport equation by the use of 6D spectral methods in\n spherical geometry: We present a numerical method for handling the resolution of a general\ntransport equation for radiative particles, aimed at physical problems with a\ngeneral spherical geometry. 
Having in mind the computational time difficulties\nencountered in problems such as neutrino transport in astrophysical supernovae,\nwe present a scheme based on full spectral methods in 6D spherical coordinates.\nThis approach, known to be suited when the characteristic length of the\ndynamics is much smaller than the domain size, has the potential advantage of a\nglobal speedup with respect to usual finite difference schemes. An analysis of\nthe properties of the Liouville operator expressed in our coordinates is\nnecessary in order to handle correctly the numerical behaviour of the solution.\nThis reflects on a specific (spherical) geometry of the computational domain.\nThe numerical tests, performed under several different regimes for the\nequation, prove the robustness of the scheme; their performance also points to\nthe suitability of such an approach to large-scale computations involving\ntransport physics for massless radiative particles.", "category": "physics_comp-ph" }, { "text": "Optimized Field/Circuit Coupling for the Simulation of Quenches in\n Superconducting Magnets: In this paper, we propose an optimized field/circuit coupling approach for\nthe simulation of magnetothermal transients in superconducting magnets. The\napproach improves the convergence of the iterative coupling scheme between a\nmagnetothermal partial differential model and an electrical lumped-element\ncircuit. Such a multi-physics, multi-rate and multi-scale problem requires a\nconsistent formulation and a dedicated framework to tackle the challenging\ntransient effects occurring at both circuit and magnet level during normal\noperation and in case of faults. We derive an equivalent magnet model at the\ncircuit side for the linear and the non-linear settings and discuss the\nconvergence of the overall scheme in the framework of optimized Schwarz\nmethods. 
The efficiency of the developed approach is illustrated by a numerical\nexample of an accelerator dipole magnet with accompanying protection system.", "category": "physics_comp-ph" }, { "text": "Neural network representability of fully ionized plasma fluid model\n closures: The closure problem in fluid modeling is a well-known challenge to modelers\naiming to accurately describe their system of interest. Over many years,\nanalytic formulations in a wide range of regimes have been presented but a\npractical, generalized fluid closure for magnetized plasmas remains an elusive\ngoal. In this study, as a first step towards constructing a novel data based\napproach to this problem, we apply ever-maturing machine learning methods to\nassess the capability of neural network architectures to reproduce crucial\nphysics inherent in popular magnetized plasma closures. We find encouraging\nresults, indicating the applicability of neural networks to closure physics but\nalso arrive at recommendations on how one should choose appropriate network\narchitectures for given locality properties dictated by underlying physics of\nthe plasma.", "category": "physics_comp-ph" }, { "text": "Discovering Quantum Phase Transitions with Fermionic Neural Networks: Deep neural networks have been extremely successful as highly accurate wave\nfunction ans\\\"atze for variational Monte Carlo calculations of molecular ground\nstates. We present an extension of one such ansatz, FermiNet, to calculations\nof the ground states of periodic Hamiltonians, and study the homogeneous\nelectron gas. FermiNet calculations of the ground-state energies of small\nelectron gas systems are in excellent agreement with previous initiator full\nconfiguration interaction quantum Monte Carlo and diffusion Monte Carlo\ncalculations. 
We investigate the spin-polarized homogeneous electron gas and\ndemonstrate that the same neural network architecture is capable of accurately\nrepresenting both the delocalized Fermi liquid state and the localized Wigner\ncrystal state. The network is given no \\emph{a priori} knowledge that a phase\ntransition exists, but converges on the translationally invariant ground state\nat high density and spontaneously breaks the symmetry to produce the\ncrystalline ground state at low density.", "category": "physics_comp-ph" }, { "text": "Variable thermal transport in black, blue, and violet phosphorene from\n extensive atomistic simulations with a neuroevolution potential: Phosphorus has diverse chemical bonds and even in its two-dimensional form\nthere are three stable allotropes: black phosphorene (Black-P), blue\nphosphorene (Blue-P), and violet phosphorene (Violet-P). Due to the complexity\nof these structures, no efficient and accurate classical interatomic potential\nhas been developed for them. In this paper, we develop an efficient\nmachine-learned neuroevolution potential model for these allotropes and apply\nit to study thermal transport in them via extensive molecular dynamics (MD)\nsimulations. Based on the homogeneous nonequilibrium MD method, the thermal\nconductivities are predicted to be $12.5 \\pm 0.2$ (Black-P in armchair\ndirection), $78.4 \\pm 0.4$ (Black-P in zigzag direction), $128 \\pm 3$ (Blue-P),\nand $2.36 \\pm 0.05$ (Violet-P) $\\mathrm{Wm^{-1}K^{-1}}$. The underlying reasons\nfor the significantly different thermal conductivity values in these allotropes\nare unraveled through spectral decomposition, phonon eigenmodes, and phonon\nparticipation ratio. 
Under external tensile strain, the thermal conductivities in\nBlack-P and Violet-P remain finite, while that in Blue-P appears unbounded due to\nthe linearization of the flexural phonon dispersion, which increases the phonon\nmean free paths in the zero-frequency limit.", "category": "physics_comp-ph" }, { "text": "Fast GPU-based calculations in few-body quantum scattering: A principally novel approach towards solving the few-particle\n(many-dimensional) quantum scattering problems is described. The approach is\nbased on a complete discretization of the few-particle continuum and the usage of\nmassively parallel computations of integral kernels for scattering equations by\nmeans of GPU. The discretization for the continuous spectrum of a few-particle\nHamiltonian is realized with a projection of all scattering operators and wave\nfunctions onto the stationary wave-packet basis. Such a projection procedure\nleads to a replacement of singular multidimensional integral equations with\nlinear matrix ones having finite matrix elements. Different aspects of the\nemployment of multithread GPU computing for fast calculation of the matrix\nkernel of the equation are studied in detail. As a result, the fully realistic\nthree-body scattering problem above the break-up threshold is solved on an\nordinary desktop PC with a GPU in a rather small computational time.", "category": "physics_comp-ph" }, { "text": "Variational formulation for Wannier functions with entangled band\n structure: Wannier functions provide a localized representation of spectral subspaces of\nperiodic Hamiltonians, and play an important role in interpreting and\naccelerating Hartree-Fock and Kohn-Sham density functional theory calculations\nin quantum physics and chemistry. For systems with isolated band structure, the\nexistence of exponentially localized Wannier functions and numerical algorithms\nfor finding them are well studied. 
In contrast, for systems with entangled band\nstructure, Wannier functions must be generalized to span a subspace larger than\nthe spectral subspace of interest to achieve favorable spatial locality. In\nthis setting, little is known about the theoretical properties of these Wannier\nfunctions, and few algorithms can find them robustly. We develop a variational\nformulation to compute these generalized maximally localized Wannier functions.\nWhen paired with an initial guess based on the selected columns of the density\nmatrix (SCDM) method, our method can robustly find Wannier functions for\nsystems with entangled band structure. We formulate the problem as a\nconstrained nonlinear optimization problem, and show how the widely used\ndisentanglement procedure can be interpreted as a splitting method to\napproximately solve this problem. We demonstrate the performance of our method\nusing real materials including silicon, copper, and aluminum. To examine more\nprecisely the localization properties of Wannier functions, we study the free\nelectron gas in one and two dimensions, where we show that the\nmaximally-localized Wannier functions only decay algebraically. We also explain\nusing a one dimensional example how to modify them to obtain super-algebraic\ndecay.", "category": "physics_comp-ph" }, { "text": "Acceleration techniques for semiclassical Maxwell-Bloch systems: An\n application to discrete quantum dot ensembles: The solution to Maxwell-Bloch systems using an integral-equation-based\nframework has proven effective at capturing collective features of laser-driven\nand radiation-coupled quantum dots, such as light localization and\nmodifications of Rabi oscillations. Importantly, it enables observation of the\ndynamics of each quantum dot in large ensembles in a rigorous,\nerror-controlled, and self-consistent way without resorting to spatial\naveraging. 
Indeed, this approach has demonstrated convergence in ensembles\ncontaining up to $10^4$ interacting quantum dots. Scaling beyond $10^4$ quantum\ndots tests the limit of computational horsepower, however, due to the\n$\mathcal{O}(N_t N_s^2)$ scaling (where $N_t$ and $N_s$ denote the number of\ntemporal and spatial degrees of freedom). In this work, we present an algorithm\nthat reduces the cost of analysis to $\mathcal{O}(N_t N_s \log^2 N_s)$. While\nthe foundations of this approach rely on well-known\nparticle-particle/particle-mesh and adaptive integral methods, we add\nrefinements specific to transient systems and systems with multiple spatial and\ntemporal derivatives. Accordingly, we offer numerical results that validate the\naccuracy, effectiveness and utility of this approach in analyzing the dynamics\nof large ensembles of quantum dots.", "category": "physics_comp-ph" }, { "text": "Femtosecond Laser Processing of Germanium: An Ab Initio Molecular\n Dynamics Study: An ab initio molecular dynamics study of femtosecond laser processing of\ngermanium is presented in this paper. A method based on finite-temperature\ndensity functional theory is adopted to probe the structural change, thermal\nmotion of the atoms, the dynamic property of the velocity autocorrelation, and\nthe vibrational density of states. Starting from a cubic system at room\ntemperature (300 K) containing 64 germanium atoms with an ordered arrangement\nof 1.132 nm in each dimension, the femtosecond laser processing is simulated by\nimposing the Nosé-Hoover thermostat on the electronic subsystem for ~100 fs\nand continuing with a microcanonical ensemble simulation of ~200 fs. The\nsimulation results show solid, liquid and gas phases of germanium under\nadjusted intensities of the femtosecond laser irradiation. 
By analyzing their melting and dynamic properties, we find that the irradiated\ngermanium is clearly distinct from the usual germanium crystal.", "category": "physics_comp-ph" }, { "text": "Transforming the Lindblad Equation into a System of Linear Equations:\n Performance Optimization and Parallelization of an Algorithm: With their constantly increasing peak performance and memory capacity, modern\nsupercomputers offer new perspectives on numerical studies of open many-body\nquantum systems. These systems are often modeled by using Markovian quantum\nmaster equations describing the evolution of the system density operators. In\nthis paper we address master equations of the Lindblad form, which are a\npopular theoretical tool in quantum optics, cavity quantum electrodynamics, and\noptomechanics. By using the generalized Gell-Mann matrices as a basis, any\nLindblad equation can be transformed into a system of ordinary differential\nequations with real coefficients. This allows us to use standard\nhigh-performance parallel algorithms to integrate the equations and thus to\nemulate open quantum dynamics in a computationally efficient way. Recently we\npresented an implementation of the transform with the computational complexity\nscaling as $O(N^5 \log N)$ for dense Lindbladians and $O(N^3 \log N)$ for sparse\nones. However, infeasible memory costs remain a serious obstacle on the way to\nlarge models. 
Here we present a parallel cluster-based implementation of the\nalgorithm and demonstrate that it allows us to integrate a sparse Lindbladian\nmodel of dimension $N=2000$ and a dense random Lindbladian model of dimension\n$N=200$ by using $25$ nodes with $64$ GB RAM per node.", "category": "physics_comp-ph" }, { "text": "On Advantages of the Kelvin Mapping in Finite Element Implementations of\n Deformation Processes: Classical continuum mechanical theories operate on three-dimensional\nEuclidean space using scalar, vector, and tensor-valued quantities usually up\nto the order of four. For their numerical treatment, it is common practice to\ntransform the relations into a matrix-vector format. This transformation is\nusually performed using the so-called Voigt mapping. This mapping does not\npreserve tensor character, leaving significant room for error, as stress and\nstrain quantities follow from different mappings and thus have to be treated\ndifferently in certain mathematical operations. Despite its conceptual and\nnotational difficulties having been pointed out, the Voigt mapping remains the\nfoundation of most current finite element programmes. An alternative is the\nso-called Kelvin mapping, which has recently gained recognition in studies of\ntheoretical mechanics. This article is concerned with the benefits of the Kelvin\nmapping in numerical modelling tools such as finite element software. The\ndecisive difference to the Voigt mapping is that Kelvin's method preserves\ntensor character, and thus the numerical matrix notation directly corresponds\nto the original tensor notation. Further benefits in numerical implementations\nare that tensor norms are calculated identically without distinguishing stress\nor strain-type quantities, and tensor equations can be directly transformed into\nmatrix equations without additional considerations. 
The only implementational\nchanges are related to a scalar factor in certain finite element matrices and\nhence, harvesting the mentioned benefits comes at very little cost.", "category": "physics_comp-ph" }, { "text": "Physics-informed neural networks for inverse problems in nano-optics and\n metamaterials: In this paper we employ the emerging paradigm of physics-informed neural\nnetworks (PINNs) for the solution of representative inverse scattering problems\nin photonic metamaterials and nano-optics technologies. In particular, we\nsuccessfully apply mesh-free PINNs to the difficult task of retrieving the\neffective permittivity parameters of a number of finite-size scattering systems\nthat involve many interacting nanostructures as well as multi-component\nnanoparticles. Our methodology is fully validated by numerical simulations\nbased on the Finite Element Method (FEM). The development of physics-informed\ndeep learning techniques for inverse scattering can enable the design of novel\nfunctional nanostructures and significantly broaden the design space of\nmetamaterials by naturally accounting for radiation and finite-size effects\nbeyond the limitations of traditional effective medium theories.", "category": "physics_comp-ph" }, { "text": "Dual-support smoothed particle hydrodynamics for elastic mechanics: In the standard SPH method, the interaction between two particles might be\nnot pairwise when the support domain varies, which can result in a reduction of\naccuracy. To deal with this problem, a modified SPH approach is presented in\nthis paper. 
First of all, a Lagrangian kernel is introduced to eliminate\nspurious distortions of the domain of material stability, and the gradient is\ncorrected by a linear transformation so that linear completeness is satisfied.\nThen, concepts of support and dual-support are defined to deal with the\nunbalanced interactions between the particles with different support domains.\nSeveral benchmark problems in one, two and three dimensions are tested to\nverify the accuracy of the modified SPH model and highlight its advantages over\nthe standard SPH method through comparisons.", "category": "physics_comp-ph" }, { "text": "Lanczos Pseudospectral Propagation Method for Initial-Value Problems in\n Electrodynamics of Passive Media: Maxwell's equations for electrodynamics of dispersive and absorptive\n(passive) media are written in the form of the Schr\\\"odinger equation with a\nnon-Hermitian Hamiltonian. The Lanczos time-propagation scheme is modified to\ninclude non-Hermitian Hamiltonians and used, in combination with the Fourier\npseudospectral method, to solve the initial-value problem. The time-domain\nalgorithm developed is shown to be unconditionally stable. Variable time steps\nand/or variable computational costs per time step with error control are\npossible. The algorithm is applied to study transmission and reflection\nproperties of ionic crystal gratings with cylindric geometry in the infra-red\nrange.", "category": "physics_comp-ph" }, { "text": "Nuclear quantum effects in molecular dynamics simulations: To take into account nuclear quantum effects on the dynamics of atoms, the\npath integral molecular dynamics (PIMD) method used since 1980s is based on the\nformalism developed by R. P. Feynman. However, the huge computation time\nrequired for the PIMD reduces its range of applicability. Another drawback is\nthe requirement of additional techniques to access time correlation functions\n(ring polymer MD or centroid MD). 
We developed an alternative technique based\non a quantum thermal bath (QTB) which reduces the computation time by a factor\nof ~20. The QTB approach consists of a classical Langevin dynamics in which the\nwhite-noise random force is replaced by a Gaussian random force having the\npower spectral density given by the quantum fluctuation-dissipation theorem.\nThe method has yielded satisfactory results for weakly anharmonic systems: the\nquantum harmonic oscillator, the heat capacity of a MgO crystal, and isotope\neffects in $^7$LiH and $^7$LiD. Unfortunately, the QTB is subject to the problem of\nzero-point energy leakage (ZPEL) in highly anharmonic systems, which is\ninherent in the use of classical mechanics. Indeed, part of the energy of the\nhigh-frequency modes is transferred to the low-frequency modes, leading to a\nwrong energy distribution. We have shown that in order to reduce or even\neliminate ZPEL, it is sufficient to increase the value of the frictional\ncoefficient. Another way to solve the ZPEL problem is to combine the QTB and\nPIMD techniques. It requires the modification of the power spectral density of\nthe random force within the QTB. This combination can also be seen as a way to\nspeed up the PIMD.", "category": "physics_comp-ph" }, { "text": "Recent Extensions of the ZKCM Library for Parallel and Accurate MPS\n Simulation of Quantum Circuits: A C++ library ZKCM and its extension library ZKCM_QC have been developed\nsince 2011 for multiple-precision matrix computation and accurate\nmatrix-product-state (MPS) quantum circuit simulation, respectively. 
In this\nreport, recent progress in the extensions of these libraries is described,\nmainly concerning parallel processing with the OpenMP and CUDA frameworks.", "category": "physics_comp-ph" }, { "text": "On the derivatives of feed-forward neural networks: In this paper we present a C++ implementation of the analytic derivative of a\nfeed-forward neural network with respect to its free parameters for an\narbitrary architecture, known as back-propagation. We dubbed this code NNAD\n(Neural Network Analytic Derivatives) and interfaced it with the widely-used\nceres-solver minimiser to fit neural networks to pseudodata in two different\nleast-squares problems. The first is a direct fit of Legendre polynomials. The\nsecond is a somewhat more involved minimisation problem where the function to\nbe fitted takes part in an integral. Finally, using a consistent framework, we\nassess the efficiency of our analytic derivative formula as compared to\nnumerical and automatic differentiation as provided by ceres-solver. We thus\ndemonstrate the advantage of using NNAD in problems involving both deep and\nshallow neural networks.", "category": "physics_comp-ph" }, { "text": "A WENO-type slope-limiter for a family of piecewise polynomial methods: A new, high-order slope-limiting procedure for the Piecewise Parabolic Method\n(PPM) and the Piecewise Quartic Method (PQM) is described. Following a Weighted\nEssentially Non-Oscillatory (WENO)-type paradigm, the proposed slope-limiter\nseeks to reconstruct smooth, non-oscillatory piecewise polynomial profiles as a\nnon-linear combination of the natural and monotone-limited PPM and PQM\ninterpolants. Compared to existing monotone slope-limiting techniques, this new\nstrategy is designed to improve accuracy at smooth extrema, while controlling\nspurious oscillations in the neighbourhood of sharp features. 
Using the new\nslope-limited PPM and PQM interpolants, a high-order accurate\nArbitrary-Lagrangian-Eulerian framework for advection-dominated flows is\nconstructed, and its effectiveness is examined using a series of one- and\ntwo-dimensional benchmark cases. It is shown that the new WENO-type\nslope-limiting techniques offer a significant improvement in accuracy compared\nto existing strategies, allowing the PPM- and PQM-based schemes to achieve\nfully third- and fifth-order accurate convergence, respectively, for\nsufficiently smooth problems.", "category": "physics_comp-ph" }, { "text": "Benchmarking of a preliminary MFiX-Exa code: MFiX-Exa is a new code being actively developed at Lawrence Berkeley National\nLaboratory and the National Energy Technology Laboratory as part of the U.S.\nDepartment of Energy's Exascale Computing Project. The starting point for the\nMFiX-Exa code development was the extraction of basic computational fluid\ndynamic (CFD) and discrete element method (DEM) capabilities from the existing\nMFiX-DEM code, which was refactored into an AMReX code architecture, herein\nreferred to as the preliminary MFiX-Exa code. Although drastic changes to the\ncodebase will be required to produce an exascale-capable application,\nbenchmarking of the originating code helps to establish a valid starting point\nfor future development. In this work, four benchmark cases are considered, each\ncorresponding to experimental data sets with a history of CFD-DEM validation. We\nfind that the preliminary MFiX-Exa code compares favorably with classic\nMFiX-DEM simulation predictions for three slugging/bubbling fluidized beds and\none spout-fluid bed. Comparison to experimental data, which comprise several\nmeasurement techniques including particle tracking velocimetry, positron\nemission particle tracking and magnetic resonance imaging, is also acceptable\n(within the accuracy expected from previous CFD-DEM benchmarking and validation\nexercises). 
The work concludes with an overview of planned developmental\nwork and potential benchmark cases to validate new MFiX-Exa capabilities.", "category": "physics_comp-ph" }, { "text": "Construction of SO(5)>SO(3) spherical harmonics and Clebsch-Gordan\n coefficients: The SO(5)>SO(3) spherical harmonics form a natural basis for expansion of\nnuclear collective model angular wave functions. They underlie the\nrecently-proposed algebraic method for diagonalization of the nuclear\ncollective model Hamiltonian in an SU(1,1)xSO(5) basis. We present a computer\ncode for explicit construction of the SO(5)>SO(3) spherical harmonics and use\nthem to compute the Clebsch-Gordan coefficients needed for collective model\ncalculations in an SO(3)-coupled basis. With these Clebsch-Gordan coefficients\nit becomes possible to compute the matrix elements of collective model\nobservables by purely algebraic methods.", "category": "physics_comp-ph" }, { "text": "Progress, challenges and perspectives of computational studies on glassy\n superionic conductors for solid-state batteries: Sulfide-based glasses and glass-ceramics showing high ionic conductivities\nand excellent mechanical properties are considered promising solid-state\nelectrolytes. Nowadays, computational materials techniques, with the advantage\nof low research cost, are widely utilized for the understanding, effective\nscreening and discovery of battery materials. In consideration of the rising\nimportance and contributions of computational studies of glassy SSE materials,\nthis work summarizes the common computational methods utilized for studying\namorphous inorganic materials, reviews the recent progress in computational\ninvestigations of lithium and sodium sulfide-type glasses for solid-state\nbatteries, and outlines our understanding of the challenges and future\nperspectives. 
This review should facilitate and\naccelerate future computational screening and the discovery of more glassy-state\nSSE materials for solid-state batteries.", "category": "physics_comp-ph" }, { "text": "Geometric Random Inner Products: A New Family of Tests for Random Number\n Generators: We present a new computational scheme, GRIP (Geometric Random Inner\nProducts), for testing the quality of random number generators. The GRIP\nformalism utilizes geometric probability techniques to calculate the average\nscalar products of random vectors generated in geometric objects, such as\ncircles and spheres. We show that these average scalar products define a family\nof geometric constants which can be used to evaluate the quality of random\nnumber generators. We explicitly apply the GRIP tests to several random number\ngenerators frequently used in Monte Carlo simulations, and demonstrate a new\nstatistical property for good random number generators.", "category": "physics_comp-ph" }, { "text": "Any Data, Any Time, Anywhere: Global Data Access for Science: Data access is key to science driven by distributed high-throughput computing\n(DHTC), an essential technology for many major research projects such as High\nEnergy Physics (HEP) experiments. However, achieving efficient data access\nbecomes quite difficult when many independent storage sites are involved\nbecause users are burdened with learning the intricacies of accessing each\nsystem and keeping careful track of data location. We present an alternate\napproach: the Any Data, Any Time, Anywhere infrastructure. Combining several\nexisting software products, AAA presents a global, unified view of storage\nsystems - a \"data federation,\" a global filesystem for software delivery, and a\nworkflow management system. 
We present how one HEP experiment, the Compact Muon\nSolenoid (CMS), is utilizing the AAA infrastructure and some simple performance\nmetrics.", "category": "physics_comp-ph" }, { "text": "On-the-fly machine learned force fields for the study of warm dense\n matter: application to diffusion and viscosity of CH: We develop a framework for on-the-fly machine learned force field (MLFF)\nmolecular dynamics (MD) simulations of warm dense matter (WDM). In particular,\nwe employ an MLFF scheme based on the kernel method and Bayesian linear\nregression, with the training data generated from Kohn-Sham density functional\ntheory (DFT) using the Gauss Spectral Quadrature method, within which we\ncalculate energies, atomic forces, and stresses. We verify the accuracy of the\nformalism by comparing the predicted properties of warm dense carbon with\nrecent Kohn-Sham DFT results in the literature. In so doing, we demonstrate\nthat ab initio MD simulations of WDM can be accelerated by up to three orders\nof magnitude, while retaining ab initio accuracy. We apply this framework to\ncalculate the diffusion coefficients and shear viscosity of CH at a density of\n1 g/cm$^3$ and temperatures in the range of 75,000 to 750,000 K. We find that\nthe self- and inter-diffusion coefficients as well as the viscosity obey a\npower law with temperature, and that the diffusion coefficient results suggest\na weak coupling between C and H in CH. In addition, we find agreement within\nstandard deviation with previous results for C and CH but disagreement for H,\ndemonstrating the need for ab initio calculations as presented here.", "category": "physics_comp-ph" }, { "text": "Densest ternary sphere packings: We present our exhaustive exploration of the densest ternary sphere packings\n(DTSPs) for 45 radius ratios and 237 kinds of compositions, which is a packing\nproblem of three kinds of hard spheres with different radii, under periodic\nboundary conditions by a random structure searching method. 
To efficiently\nexplore DTSPs we further develop the searching method based on the piling-up\nand iterative balance methods [Koshoji et al., Phys. Rev. E 103, 023307\n(2021)]. The unbiased exploration identifies 38 diverse putative DTSPs\nappearing on the phase diagrams, 37 of which are discovered in this study. The\nstructural trend of the DTSPs changes depending especially on the radius of the\nsmall spheres. When the radius of the small spheres is relatively small, the\nstructures of many DTSPs can be understood as derivatives of the densest binary\nsphere packings (DBSPs), while characteristic structures specific to the\nternary system emerge as the radius of the small spheres becomes larger. In\naddition to the DTSPs, we reveal many semi-DTSPs (SDTSPs), which are obtained by\nexcluding DBSPs in the calculation of the phase diagrams, and investigate the\ncorrespondence of the DTSPs and SDTSPs with real crystals based on the space\ngroup, showing a considerable correspondence of SDTSPs having high symmetries\nwith real crystals, including $\mathrm{Cu}_2 \mathrm{GaSr}$ and $\mathrm{ThCr}_2\n\mathrm{Si}_2$ structures. Our study suggests that the diverse structures of\nDBSPs, DTSPs, and SDTSPs can be effectively used as structural prototypes for\nsearching for complex crystal structures.", "category": "physics_comp-ph" }, { "text": "Genetic algorithms for the numerical solution of variational problems\n without analytic trial functions: A coding of functions that allows a genetic algorithm to minimize functionals\nwithout analytic trial functions is presented and implemented for solving\nnumerically some instances of variational problems from physics.", "category": "physics_comp-ph" }, { "text": "Quasinormal mode solvers for resonators with dispersive materials: Optical resonators are widely used in modern photonics. 
Their spectral\nresponse and temporal dynamics are fundamentally driven by their natural\nresonances, the so-called quasinormal modes (QNMs), with complex frequencies.\nFor optical resonators made of dispersive materials, the QNM computation\nrequires solving a nonlinear eigenvalue problem. This raises a difficulty that\nis only scarcely documented in the literature. We review our recent efforts in\nimplementing efficient and accurate QNM solvers for computing and normalizing\nthe QNMs of micro- and nano-resonators made of highly dispersive materials. We\nbenchmark several methods for three geometries, a two-dimensional plasmonic\ncrystal, a two-dimensional metal grating, and a three-dimensional nanopatch\nantenna on a metal substrate, with the perspective of elaborating standards for\nthe computation of resonance modes.", "category": "physics_comp-ph" }, { "text": "Architectural improvements and technological enhancements for the\n APEnet+ interconnect system: The APEnet+ board delivers a point-to-point, low-latency, 3D torus network\ninterface card. In this paper we describe the latest generation of the APEnet\nNIC, APEnet v5, integrated in a PCIe Gen3 board based on a state-of-the-art, 28 nm\nAltera Stratix V FPGA. The NIC features a network architecture designed\nfollowing the Remote DMA paradigm and tailored to tightly bind the computing\npower of modern GPUs to the communication fabric. For the APEnet v5 board we\nshow characterizing figures such as the achieved bandwidth and BER, obtained by\nexploiting new high-performance Altera transceivers and PCIe Gen3 compliancy.", "category": "physics_comp-ph" }, { "text": "A GPU-based Hydrodynamic Simulator with Boid Interactions: We present a hydrodynamic simulation system using the GPU compute shaders of\nDirectX for simulating virtual agent behaviors and navigation inside a smoothed\nparticle hydrodynamical (SPH) fluid environment with real-time water mesh\nsurface reconstruction. 
The current SPH literature includes interactions\nbetween SPH and heterogeneous meshes but seldom involves interactions between\nSPH and virtual boid agents. The contribution of the system lies in the\ncombination of the parallel smoothed particle hydrodynamics model with the\ndistributed boid model of virtual agents to enable agents to interact with\nfluids. The agents based on the boid algorithm influence the motion of SPH\nfluid particles, and the forces from the SPH algorithm affect the movement of\nthe boids. To enable realistic fluid rendering and simulation in a\nparticle-based system, it is essential to construct a mesh from the particle\nattributes. Our system also contributes to the surface reconstruction aspect of\nthe pipeline, in which we performed a set of experiments with the parallel\nmarching cubes algorithm per frame for constructing the mesh from the fluid\nparticles in a real-time compute and memory-intensive application, producing a\nwide range of triangle configurations. We also demonstrate that our system is\nversatile enough for reinforced robotic agents instead of boid agents to\ninteract with the fluid environment for underwater navigation and remote\ncontrol engineering purposes.", "category": "physics_comp-ph" }, { "text": "Numerical phase reduction beyond the first order approximation: We develop a numerical approach to reconstruct the phase dynamics of driven\nor coupled self-sustained oscillators. Employing a simple algorithm for\ncomputation of the phase of a perturbed system, we construct numerically the\nequation for the evolution of the phase. Our simulations demonstrate that the\ndescription of the dynamics solely by phase variables can be valid for rather\nstrong coupling strengths and large deviations from the limit cycle. Coupling\nfunctions depend crucially on the coupling and are generally non-decomposable\nin phase response and forcing terms. 
We also discuss limitations of the\napproach.", "category": "physics_comp-ph" }, { "text": "Viscoroute 2.0: a tool for the simulation of moving load effects on\n asphalt pavement: As shown by strains measured on full scale experimental aircraft structures,\ntraffic of slow-moving multiple loads leads to asymmetric transverse strains\nthat can be higher than longitudinal strains at the bottom of asphalt pavement\nlayers. To analyze this effect, a model and a software tool called ViscoRoute\nhave been developed. In these tools, the structure is represented by a\nmultilayered half-space, the thermo-viscoelastic behaviour of asphalt layers is\naccounted for by the Huet-Sayegh rheological law, and loads are assumed to move\nat constant speed. First, the paper presents a comparison of results obtained\nwith ViscoRoute to results stemming from the specialized literature. For thick\nasphalt pavement and several configurations of moving loads, further ViscoRoute\nsimulations confirm that it is necessary to incorporate viscoelastic effects in\nthe modelling to predict the pavement behaviour well and to anticipate possible\ndamage in the structure.", "category": "physics_comp-ph" }, { "text": "Code C# for chaos analysis of relativistic many-body systems: This work presents a new Microsoft Visual C# .NET code library, conceived as\na general object oriented solution for chaos analysis of three-dimensional,\nrelativistic many-body systems. In this context, we implemented the Lyapunov\nexponent and the "fragmentation level" (defined using graph theory and the\nShannon entropy). Inspired by existing studies on billiard nuclear models and\nclusters of galaxies, we tried to apply the virial theorem to a simplified\nmany-body system composed of nucleons.
A possible application of the \"virial\ncoefficient\" to the stability analysis of chaotic systems is also discussed.", "category": "physics_comp-ph" }, { "text": "Piecewise Diffusion Synthetic Acceleration Scheme for Neutron Transport\n Simulations in Diffusive Media: The method of discrete ordinates ($S_N$) is a popular choice for the solution\nof the neutron transport equation. It is however well known that it suffers\nfrom slow convergence of the scattering source in optically thick and diffusive\nmedia, such as pressurized water nuclear reactors (PWR). In reactor physics\napplications, the $S_N$ method is thus often accompanied by an acceleration\nalgorithm, such as the Diffusion Synthetic Acceleration (DSA). With the recent\nincrease in computational power, whole core transport calculations have become\na reasonable objective. It however requires using large computers and\nparallelizing the transport solver. Due to the elliptic nature of the DSA\noperator, its parallelization is not straightforward. In this paper, we present\nan acceleration operator derived from the DSA, but defined in a piecewise way\nsuch that its parallel implementation is straightforward. We show that, for\noptically thick enough media, this Piecewise Diffusion Synthetic Acceleration\n(PDSA) preserves the good properties of the DSA. This conclusion is supported\nby numerical experiments.", "category": "physics_comp-ph" }, { "text": "Physics-Constrained Bayesian Neural Network for Fluid Flow\n Reconstruction with Sparse and Noisy Data: In many applications, flow measurements are usually sparse and possibly\nnoisy. The reconstruction of a high-resolution flow field from limited and\nimperfect flow information is significant yet challenging. 
In this work, we\npropose an innovative physics-constrained Bayesian deep learning approach to\nreconstruct flow fields from sparse, noisy velocity data, where equation-based\nconstraints are imposed through the likelihood function and the uncertainty of\nthe reconstructed flow can be estimated. Specifically, a Bayesian deep neural\nnetwork is trained on sparse measurement data to capture the flow field. In the\nmeantime, the violation of physical laws is penalized on a large number of\nspatiotemporal points where measurements are not available. A non-parametric\nvariational inference approach is applied to enable efficient\nphysics-constrained Bayesian learning. Several test cases on idealized vascular\nflows with synthetic measurement data are studied to demonstrate the merit of\nthe proposed method.", "category": "physics_comp-ph" }, { "text": "Imaging Mechanism for Hyperspectral Scanning Probe Microscopy via\n Gaussian Process Modelling: We investigate the ability to reconstruct and derive spatial structure from\nsparsely sampled 3D piezoresponse force microscopy data, captured using the\nband-excitation (BE) technique, via Gaussian Process (GP) methods. Even for\nweakly informative priors, GP methods allow unambiguous determination of the\ncharacteristic length scales of the imaging process both in the spatial and\nfrequency domains. We further show that the BE data set tends to be\noversampled, with ~30% of the original data set sufficient for high-quality\nreconstruction, potentially enabling faster BE imaging. Finally, we discuss how\nthe GP can be used for automated experimentation in SPM, by combining GP\nregression with non-rectangular scans.
The full code for GP regression applied to hyperspectral\ndata is available at https://git.io/JePGr.", "category": "physics_comp-ph" }, { "text": "An initial investigation of the performance of GPU-based swept\n time-space decomposition: Simulations of physical phenomena are essential to the expedient design of\nprecision components in aerospace and other high-tech industries. These\nphenomena are often described by mathematical models involving partial\ndifferential equations (PDEs) without exact solutions. Modern design problems\nrequire simulations with a level of resolution that is difficult to achieve in\na reasonable amount of time even in effectively parallelized solvers. Though\nthe scale of the problem relative to available computing power is the greatest\nimpediment to accelerating these applications, significant performance gains\ncan be achieved through careful attention to the details of memory accesses.\nParallelized PDE solvers are subject to a trade-off in memory management: store\nthe solution for each timestep in abundant, global memory with high access\ncosts or in a limited, private memory with low access costs that must be passed\nbetween nodes. The GPU implementation of swept time-space decomposition\npresented here mitigates this dilemma by using private (shared) memory,\navoiding internode communication, and overwriting unnecessary values. It shows\nsignificant improvement in the execution time of the PDE solvers in one\ndimension achieving speedups of 6-2x for large and small problem sizes\nrespectively compared to naive GPU versions and 7-300x compared to parallel CPU\nversions.", "category": "physics_comp-ph" }, { "text": "Ubermag: Towards more effective micromagnetic workflows: Computational micromagnetics has become an essential tool in academia and\nindustry to support fundamental research and the design and development of\ndevices. 
Consequently, computational micromagnetics is widely used in the\ncommunity, and the fraction of time researchers spend performing computational\nstudies is growing. We focus on reducing this time by improving the interface\nbetween the numerical simulation and the researcher. We have designed and\ndeveloped a human-centred research environment called Ubermag. With Ubermag,\nscientists can control an existing micromagnetic simulation package, such as\nOOMMF, from Jupyter notebooks. The complete simulation workflow, including\ndefinition, execution, and data analysis of simulation runs, can be performed\nwithin the same notebook environment. Numerical libraries, co-developed by the\ncomputational and data science community, can immediately be used for\nmicromagnetic data analysis within this Python-based environment. By design, it\nis possible to extend Ubermag to drive other micromagnetic packages from the\nsame environment.", "category": "physics_comp-ph" }, { "text": "Targeting GPUs with OpenMP Directives on Summit: A Simple and Effective\n Fortran Experience: We use OpenMP to target hardware accelerators (GPUs) on Summit, a newly\ndeployed supercomputer at the Oak Ridge Leadership Computing Facility (OLCF),\ndemonstrating simplified access to GPU devices for users of our astrophysics\ncode GenASiS and useful speedup on a sample fluid dynamics problem. We modify\nour workhorse class for data storage to include members and methods that\nsignificantly streamline the persistent allocation of and association to GPU\nmemory. Users offload computational kernels with OpenMP target directives that\nare rather similar to constructs already familiar from multi-core\nparallelization. In this initial example we ask, \"With a given number of Summit\nnodes, how fast can we compute with and without GPUs?\", and find total wall\ntime speedups of $\\sim 12\\mathrm{X}$. We also find reasonable weak scaling up\nto 8000 GPUs (1334 Summit nodes). 
We make available the source code from this\nwork at https://github.com/GenASiS/GenASiS_Basics.", "category": "physics_comp-ph" }, { "text": "Accelerating Least Squares Imaging Using Deep Learning Techniques: Wave equation techniques have been an integral part of geophysical imaging\nworkflows to investigate the Earth's subsurface. Least-squares reverse time\nmigration (LSRTM) is a linearized inversion problem that iteratively minimizes\na misfit functional as a function of the model perturbation. The success of the\ninversion largely depends on our ability to handle large systems of equations\ngiven the massive computation costs. The size of the system almost\nexponentially increases with the demand for higher resolution images in\ncomplicated subsurface media. We propose an unsupervised deep learning approach\nthat leverages the existing physics-based models and machine learning\noptimizers to achieve more accurate and cheaper solutions. We compare different\noptimizers and demonstrate their efficacy in mitigating imaging artifacts.\nFurther, minimizing the Huber loss with mini-batch gradients and Adam optimizer\nis not only less memory-intensive but is also more robust. Our empirical\nresults on synthetic, densely sampled datasets suggest faster convergence to an\naccurate LSRTM result than a traditional approach.", "category": "physics_comp-ph" }, { "text": "Kinetics of Hexagonal Cylinders to Face-centered Cubic Spheres\n Transition of Triblock Copolymer in Selective Solvent: Brownian Dynamics\n Simulation: The kinetics of the transformation from the hexagonal packed cylinder (HEX)\nphase to the face-centered-cubic (FCC) phase was simulated using Brownian\nDynamics for an ABA triblock copolymer in a selective solvent for the A block.\nThe kinetics was obtained by instantaneously changing either the temperature of\nthe system or the well-depth of the Lennard-Jones potential. Detailed analysis\nshowed that the transformation occurred via a rippling mechanism. 
The\nsimulation results indicated that the order-order transformation (OOT) was a\nnucleation and growth process when the temperature of the system instantly\njumped from 0.8 to 0.5. The time evolution of the structure factor obtained by\nFourier Transformation showed that the peak intensities of the HEX and FCC\nphases could be fit well by an Avrami equation.", "category": "physics_comp-ph" }, { "text": "Use of groundwater lifetime expectancy for the performance assessment of\n a deep geologic waste repository: 1. Theory, illustrations, and implications: Long-term solutions for the disposal of toxic wastes usually involve\nisolation of the wastes in a deep subsurface geologic environment. In the case\nof spent nuclear fuel, if radionuclide leakage occurs from the engineered\nbarrier, the geological medium represents the ultimate barrier that is relied\nupon to ensure safety. Consequently, an evaluation of radionuclide travel times\nfrom a repository to the biosphere is critically important in a performance\nassessment analysis. In this study, we develop a travel time framework based on\nthe concept of groundwater lifetime expectancy as a safety indicator. Lifetime\nexpectancy characterizes the time that radionuclides will spend in the\nsubsurface after their release from the repository and prior to discharging\ninto the biosphere. The probability density function of lifetime expectancy is\ncomputed throughout the host rock by solving the backward-in-time solute\ntransport adjoint equation subject to a properly posed set of boundary\nconditions. It can then be used to define optimal repository locations. The\nrisk associated with selected sites can be evaluated by simulating an\nappropriate contaminant release history. 
The utility of the method is\nillustrated by means of analytical and numerical examples, which focus on the\neffect of fracture networks on the uncertainty of evaluated lifetime\nexpectancy.", "category": "physics_comp-ph" }, { "text": "Numerical path integral approach to quantum dynamics and stationary\n quantum states: The applicability of the Feynman path integral approach to numerical\nsimulations of quantum dynamics in the real-time domain is examined. Coherent\nquantum dynamics is demonstrated with one-dimensional test cases (quantum dot\nmodels), and the performance of the Trotter kernel as compared with the exact\nkernels is tested. A novel approach for finding the ground state and other\nstationary states is presented. It is based on incoherent propagation in real\ntime. For both approaches the Monte Carlo grid and sampling are tested and\ncompared with regular grids and sampling. We assess the numerical prerequisites\nfor all of the above.", "category": "physics_comp-ph" }, { "text": "Diverse quantization phenomena in layered materials: The diverse quantization phenomena in 2D condensed-matter systems, arising\nfrom a uniform perpendicular magnetic field and the geometry-created lattice\nsymmetries, are the focus of this book. They cover the diversified\nmagneto-electronic properties, the various magneto-optical selection rules, the\nunusual quantum Hall conductivities, and the single- and many-particle\nmagneto-Coulomb excitations. The rich and unique behaviors are clearly revealed\nin few-layer graphene systems with the distinct stacking configurations, the\nstacking-modulated structures, and the silicon-doped lattices, bilayer\nsilicene/germanene systems with the bottom-top and bottom-bottom buckling\nstructures, monolayer and bilayer phosphorene systems, and quantum topological\ninsulators.
The generalized tight-binding model, the static and dynamic Kubo\nformulas, and the random-phase approximation are developed or modified to\nthoroughly explore the fundamental properties and propose concise physical\npictures. The different high-resolution experimental measurements are discussed\nin detail, and they are consistent with the theoretical predictions.", "category": "physics_comp-ph" }, { "text": "Addressing the gas kinetics Boltzmann equation with branching-path\n statistics: This article proposes a new statistical numerical method to address gas\nkinetics problems obeying the Boltzmann equation. This method is inspired by\nsome Monte-Carlo algorithms used in linear transport physics, where virtual\nparticles are followed backwards in time along their paths. The non-linear\ncharacter of gas kinetics translates, in the numerical simulations presented\nhere, into branchings of the virtual particle paths. The resulting algorithms\nhave displayed two noticeable qualities in the tests presented here: (1) They\ninvolve no mesh. (2) They allow one to easily compute the gas density at\nrarefied places of the phase space, for example at high kinetic energy.", "category": "physics_comp-ph" }, { "text": "Multi-Moment Advection scheme for Vlasov simulations: We present a new numerical scheme for solving the advection equation and its\napplication to the Vlasov simulation. The scheme treats not only point values\nof a profile but also its zeroth- to second-order piecewise moments as\ndependent variables, and advances them on the basis of their governing\nequations. We have developed one- and two-dimensional schemes and show that\nthey provide quite accurate solutions compared to other existing schemes with\nthe same memory usage. The two-dimensional scheme can solve the solid body\nrotation problem of a Gaussian profile with little numerical diffusion. This is\na very important property for Vlasov simulations of magnetized plasma.
The application of the\nscheme to the electromagnetic Vlasov simulation of collisionless shock waves is\npresented as a benchmark test.", "category": "physics_comp-ph" }, { "text": "Solving the acoustic VTI wave equation using physics-informed neural\n networks: Frequency-domain wavefield solutions corresponding to the anisotropic\nacoustic wave equations can be used to describe the anisotropic nature of the\nearth. To solve a frequency-domain wave equation, we often need to invert the\nimpedance matrix. This results in a dramatic increase in computational cost as\nthe model size increases. The challenge is even bigger for anisotropic media,\nwhere the impedance matrix is far more complex. To address this issue, we use\nthe emerging paradigm of physics-informed neural networks (PINNs) to obtain\nwavefield solutions for an acoustic wave equation for transversely isotropic\n(TI) media with a vertical axis of symmetry (VTI). PINNs utilize the concept of\nautomatic differentiation to calculate partial derivatives. Thus, we use the\nwave equation as a loss function to train a neural network to provide\nfunctional solutions to the acoustic VTI wave equation. Instead of predicting\nthe pressure wavefields directly, we solve for the scattered pressure\nwavefields to avoid dealing with the point-source singularity. We use the\nspatial coordinates as input data to the network, which outputs the real and\nimaginary parts of the scattered wavefields and an auxiliary function. After\ntraining a deep neural network (NN), we can evaluate the wavefield at any point\nin space instantly using this trained NN. We demonstrate these features on a\nsimple anomaly model and a layered model.
Additional tests on a modified 3D\nOverthrust model and a model with irregular topography also show the\neffectiveness of the proposed method.", "category": "physics_comp-ph" }, { "text": "Numerical solutions of an unsteady 2-D incompressible flow with heat and\n mass transfer at low, moderate, and high Reynolds numbers: In this paper, we have proposed a modified Marker-And-Cell (MAC) method to\ninvestigate the problem of an unsteady 2-D incompressible flow with heat and\nmass transfer at low, moderate, and high Reynolds numbers with no-slip and slip\nboundary conditions. We have used this method to solve the governing equations\nalong with the boundary conditions and thereby to compute the flow variables,\nviz. $u$-velocity, $v$-velocity, $P$, $T$, and $C$. We have used the staggered\ngrid approach of this method to discretize the governing equations of the\nproblem. A modified MAC algorithm was proposed and used to compute the\nnumerical solutions of the flow variables for Reynolds numbers $Re = 10$, 500,\nand 50,000 in consonance with low, moderate, and high Reynolds numbers. We have\nalso used Prandtl $(Pr)$ and Schmidt $(Sc)$ numbers appropriate to the physical\nproblem considered. We have executed this modified MAC algorithm with the aid\nof a computer program developed and run in a C compiler. We have also computed\nnumerical solutions of the local Nusselt $(Nu)$ and Sherwood $(Sh)$ numbers\nalong the horizontal line through the geometric center at low, moderate, and\nhigh Reynolds numbers for fixed $Pr = 6.62$ and $Sc = 340$ for two grid systems\nat time $t = 0.0001s$. Our numerical solutions for the $u$ and $v$ velocities\nalong the vertical and horizontal lines through the geometric center of the\nsquare cavity for $Re = 100$ have been compared with benchmark solutions\navailable in the literature and have been found to be in good agreement.
The present numerical results indicate that, as we move along the horizontal\nline through the geometric center of the domain, the heat and mass transfer\ndecreases up to the geometric center and then increases symmetrically.", "category": "physics_comp-ph" }, { "text": "On the Spurious Interior Resonance Modes of Time Domain Integral\n Equations for Analyzing Acoustic Scattering from Penetrable Objects: The interior resonance problem of time domain integral equations (TDIEs)\nformulated to analyze acoustic field interactions on penetrable objects is\ninvestigated. Two types of TDIEs are considered: The first equation, which is\ntermed the time domain potential integral equation (TDPIE) (with the velocity\npotential and its normal derivative as unknowns), suffers from the interior\nresonance problem, i.e., its solution is replete with spurious modes that are\nexcited at the resonance frequencies of the acoustic cavity in the shape of the\nscatterer. Numerical experiments demonstrate that, unlike the frequency-domain\nintegral equations, the amplitude of these modes in the time domain could be\nsuppressed to a level that does not significantly affect the solution. The\nsecond equation is obtained by linearly combining the TDPIE with its normal\nderivative. Weights of the combination are carefully selected to enable the\nnumerical computation of the singular integrals. The solution of this equation,\nwhich is termed the time domain combined potential integral equation (TDCPIE),\ndoes not involve any spurious interior resonance modes.", "category": "physics_comp-ph" }, { "text": "An improved non-reflecting outlet boundary condition for\n weakly-compressible SPH: Implementation of an outlet boundary condition is challenging in the context\nof the weakly-compressible Smoothed Particle Hydrodynamics method. We perform a\nsystematic numerical study of several of the available techniques for the\noutlet boundary condition.
We propose a new hybrid approach that combines a\ncharacteristics-based method with a simpler frozen-particle (do-nothing)\ntechnique to accurately satisfy the outlet boundary condition in the context of\nwind-tunnel-like simulations. In addition, we suggest some improvements to the\ndo-nothing approach. We introduce a new suite of test problems that make it\npossible to compare these techniques carefully. We then simulate the flow past\na backward-facing step and a circular cylinder. The proposed method allows us\nto obtain accurate results with an order of magnitude fewer particles than\nthose presented in recent research. We provide a completely open source\nimplementation and a reproducible manuscript.", "category": "physics_comp-ph" }, { "text": "Multilevel Monte Carlo methods for the Grad-Shafranov free boundary\n problem: The equilibrium configuration of a plasma in an axially symmetric reactor is\ndescribed mathematically by a free boundary problem associated with the\ncelebrated Grad--Shafranov equation. The presence of uncertainty in the model\nparameters introduces the need to quantify the variability in the predictions.\nThis is often done by computing a large number of model solutions on a\ncomputational grid for an ensemble of parameter values and then obtaining\nestimates for the statistical properties of solutions. In this study, we\nexplore the savings that can be obtained using multilevel Monte Carlo methods,\nwhich reduce costs by performing the bulk of the computations on a sequence of\nspatial grids that are coarser than the one that would typically be used for a\nsimple Monte Carlo simulation. We examine this approach using both a set of\nuniformly refined grids and a set of adaptively refined grids guided by a\ndiscrete error estimator. Numerical experiments show that multilevel methods\ndramatically reduce the cost of simulation, with cost reductions typically on\nthe order of 60 or more and possibly as large as 200.
Adaptive gridding results\nin more accurate computation of geometric quantities such as x-points\nassociated with the model.", "category": "physics_comp-ph" }, { "text": "GPU accelerated population annealing algorithm: Population annealing is a promising recent approach for Monte Carlo\nsimulations in statistical physics, in particular for the simulation of systems\nwith complex free-energy landscapes. It is a hybrid method, combining\nimportance sampling through Markov chains with elements of sequential Monte\nCarlo in the form of population control. While it appears to provide\nalgorithmic capabilities for the simulation of such systems that are roughly\ncomparable to those of more established approaches such as parallel tempering,\nit is intrinsically much more suitable for massively parallel computing. Here,\nwe tap into this structural advantage and present a highly optimized\nimplementation of the population annealing algorithm on GPUs that promises\nspeed-ups of several orders of magnitude as compared to a serial implementation\non CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising\nmodel, it should be easily adapted for simulations of other spin models,\nincluding disordered systems. Our code includes implementations of some\nadvanced algorithmic features that have only recently been suggested, namely\nthe automatic adaptation of temperature steps and a multi-histogram analysis of\nthe data at different temperatures.", "category": "physics_comp-ph" }, { "text": "GemNet: Universal Directional Graph Neural Networks for Molecules: Effectively predicting molecular interactions has the potential to accelerate\nmolecular dynamics by multiple orders of magnitude and thus revolutionize\nchemical simulations. Graph neural networks (GNNs) have recently shown great\nsuccesses for this task, overtaking classical methods based on fixed molecular\nkernels. 
However, they still appear very limited from a theoretical\nperspective, since regular GNNs cannot distinguish certain types of graphs. In\nthis work we close this gap between theory and practice. We show that GNNs with\ndirected edge embeddings and two-hop message passing are indeed universal\napproximators for predictions that are invariant to translation, and\nequivariant to permutation and rotation. We then leverage these insights and\nmultiple structural improvements to propose the geometric message passing\nneural network (GemNet). We demonstrate the benefits of the proposed changes in\nmultiple ablation studies. GemNet outperforms previous models on the COLL,\nMD17, and OC20 datasets by 34%, 41%, and 20%, respectively, and performs\nespecially well on the most challenging molecules. Our implementation is\navailable online.", "category": "physics_comp-ph" }, { "text": "Convergence of Artificial Intelligence and High Performance Computing on\n NSF-supported Cyberinfrastructure: Significant investments to upgrade and construct large-scale scientific\nfacilities demand commensurate investments in R&D to design algorithms and\ncomputing approaches to enable scientific and engineering breakthroughs in the\nbig data era. Innovative Artificial Intelligence (AI) applications have powered\ntransformational solutions for big data challenges in industry and technology\nthat now drive a multi-billion dollar industry, and which play an ever\nincreasing role shaping human social patterns. As AI continues to evolve into a\ncomputing paradigm endowed with statistical and mathematical rigor, it has\nbecome apparent that single-GPU solutions for training, validation, and testing\nare no longer sufficient for computational grand challenges brought about by\nscientific facilities that produce data at a rate and volume that outstrip the\ncomputing capabilities of available cyberinfrastructure platforms. 
This\nrealization has been driving the confluence of AI and high performance\ncomputing (HPC) to reduce time-to-insight and to enable a systematic study of\ndomain-inspired AI architectures and optimization schemes for data-driven\ndiscovery. In this article we present a summary of recent developments in this\nfield, and describe specific advances that the authors of this article are\nspearheading to accelerate and streamline the use of HPC platforms to design\nand apply accelerated AI algorithms in academia and industry.", "category": "physics_comp-ph" }, { "text": "A Multilevel Method for Many-Electron Schr\u00f6dinger Equations Based on\n the Atomic Cluster Expansion: The atomic cluster expansion (ACE) (Drautz, 2019) yields a highly efficient\nand interpretable parameterisation of symmetric polynomials that has achieved\ngreat success in modelling properties of many-particle systems. In the present\nwork we extend the practical applicability of the ACE framework to the\ncomputation of many-electron wave functions. To that end, we develop a\ncustomized variational Monte-Carlo algorithm that exploits the sparsity and\nhierarchical properties of ACE wave functions. We demonstrate the feasibility\non a range of proof-of-concept applications to one-dimensional systems.", "category": "physics_comp-ph" }, { "text": "An efficient four-way coupled lattice Boltzmann - discrete element\n method for fully resolved simulations of particle-laden flows: A four-way coupling scheme for the direct numerical simulation of\nparticle-laden flows is developed and analyzed. It employs a novel adaptive\nmulti-relaxation time lattice Boltzmann method to simulate the fluid phase\nefficiently. The momentum exchange method is used to couple the fluid and the\nparticulate phase. The particle interactions in the normal and tangential\ndirections are accounted for by a discrete element method using linear contact\nforces.
All\nparameters of the scheme are studied and evaluated in detail, and precise\nguidelines for their choice are developed. The development is based on several\ncarefully selected calibration and validation tests of increasing physical\ncomplexity. It is found that a well-calibrated lubrication model is crucial to\nobtain the correct trajectories of a sphere colliding with a plane wall in a\nviscous fluid. For adequately resolving the collision dynamics it is found that\nthe collision time must be stretched appropriately. The complete set of tests\nestablishes a validation pipeline that can be universally applied to other\nfluid-particle coupling schemes, providing a systematic methodology that can\nguide future developments.", "category": "physics_comp-ph" }, { "text": "GPU Accelerated Simulation of Channeling Radiation of Relativistic\n Particles: In this paper we describe and demonstrate a C++ code written to determine the\ntrajectory of particles traversing oriented single crystals and a CUDA code\nwritten to evaluate the radiation spectra from charged particles with arbitrary\ntrajectories. The CUDA/C++ code can evaluate both classical and quantum\nmechanical radiation spectra for spin 0 and 1/2 particles. We include multiple\nCoulomb scattering and energy loss due to radiation emission, which produces\nradiation spectra in agreement with experimental spectra for both positrons and\nelectrons. We also demonstrate how GPUs can be used to speed up calculations by\nseveral orders of magnitude. This will allow research groups with limited\nfunding or sparse access to supercomputers to perform numerical calculations as\nif they had access to a supercomputer. We show that one Titan V GPU can replace\nup to 100 36-core Xeon CPUs running in parallel.
We also show that choosing a GPU for a\nspecific job can have a great impact on performance, as some GPUs have\nbetter double precision performance than others.", "category": "physics_comp-ph" }, { "text": "Fluid-structure interaction with $H(\text{div})$-conforming finite\n elements: In this paper a novel application of the (high-order)\n$H(\text{div})$-conforming Hybrid Discontinuous Galerkin finite element method\nfor monolithic fluid-structure interaction (FSI) is presented. The Arbitrary\nLagrangian Eulerian (ALE) description is derived for $H(\text{div})$-conforming\nfinite elements including the Piola transformation, yielding exactly\ndivergence-free fluid velocity solutions. The arising method is demonstrated by\nmeans of the benchmark problems proposed by Turek and Hron [50]. With\nhp-refinement strategies, singularities and boundary layers are overcome,\nleading to optimal spatial convergence rates.", "category": "physics_comp-ph" }, { "text": "Enabling Large-Scale Condensed-Phase Hybrid Density Functional Theory\n Based $Ab$ $Initio$ Molecular Dynamics I: Theory, Algorithm, and Performance: By including a fraction of exact exchange (EXX), hybrid functionals reduce\nthe self-interaction error in semi-local density functional theory (DFT), and\nthereby furnish a more accurate and reliable description of the electronic\nstructure in systems throughout biology, chemistry, physics, and materials\nscience. However, the high computational cost associated with the evaluation of\nall required EXX quantities has limited the applicability of hybrid DFT in the\ntreatment of large molecules and complex condensed-phase materials. To overcome\nthis limitation, we have devised a linear-scaling yet formally exact approach\nthat utilizes a local representation of the occupied orbitals (e.g., maximally\nlocalized Wannier functions, MLWFs) to exploit the sparsity in the real-space\nevaluation of the quantum mechanical exchange interaction in finite-gap\nsystems.
In this work, we present a detailed description of the theoretical and\nalgorithmic advances required to perform MLWF-based ab initio molecular\ndynamics (AIMD) simulations of large-scale condensed-phase systems at the\nhybrid DFT level. We provide a comprehensive description of the exx algorithm,\nwhich is currently implemented in the Quantum ESPRESSO program and employs a\nhybrid MPI/OpenMP parallelization scheme to efficiently utilize\nhigh-performance computing (HPC) resources. This is followed by a critical\nassessment of the accuracy and parallel performance of this approach when\nperforming AIMD simulations of liquid water in the canonical ensemble. With\naccess to HPC resources, we demonstrate that exx enables hybrid DFT-based AIMD\nsimulations of condensed-phase systems containing 500-1000 atoms with a\nwalltime cost that is comparable to semi-local DFT. In doing so, exx takes us\ncloser to routinely performing AIMD simulations of large-scale condensed-phase\nsystems for sufficiently long timescales at the hybrid DFT level of theory.", "category": "physics_comp-ph" }, { "text": "Numerical Simulation of Oscillating Multiphase Heat Transfer in Parallel\n plates using Pseudopotential Multiple-Relaxation-Time Lattice Boltzmann\n Method: Multiphase flows frequently occur in many important engineering and\nscientific applications, but modeling such flows is a rather challenging task\ndue to the complex interfacial dynamics between different phases, all the more\nso if the flow is oscillating in porous media. Using humid air as the working\nfluid in thermoacoustic refrigerators is one research focus for improving\nthermoacoustic performance, but a side effect is the condensation of humid air\nin the thermal stack. Due to the small spacing of the thermal stack and the\nneed to explore the detailed condensation process in oscillating flow, a\nmesoscale numerical approach needs to be developed.
Over the\ndecades, several types of Lattice Boltzmann (LB) models for multiphase flows\nhave been developed under different physical pictures, for example the\ncolor-gradient model, the Shan-Chen model, the nonideal pressure tensor model\nand the HSD model. In the current study, a pseudopotential\nMultiple-Relaxation-Time (MRT) LBM simulation was utilized to simulate the\nincompressible oscillating flow and condensation in parallel plates. In the\ninitial stage of condensation, the oscillating flow helps accumulate the\nsaturated vapor at the exit regions, and the velocity vector of the saturated\nvapor clearly showed the flow over the droplets. It was also concluded that if\nthe condensate can be removed from the parallel plates, the oscillating flow\nand condensation will continuously feed the cold surface to form more water\ndroplets. The effect of wettability on condensation was discussed, and it\nturned out that with increasing wettability, the saturated water vapor\ncondensed more readily on the cold walls, and the distance between each pair of\ndroplets was also strongly affected by the wettability.", "category": "physics_comp-ph" }, { "text": "An adaptive planewave method for electronic structure calculations: We propose an adaptive planewave method for eigenvalue problems in electronic\nstructure calculations. The method combines a priori convergence rates and\naccurate a posteriori error estimates into an effective way of updating the\nenergy cut-off for planewave discretizations, for both linear and nonlinear\neigenvalue problems. The method is error controllable for linear eigenvalue\nproblems in the sense that, for a given required accuracy, an energy cut-off\nfor which the solution matches the target accuracy can be reached efficiently.\nFurther, the method is particularly promising for nonlinear eigenvalue problems\nin electronic structure calculations, as it should reduce the cost of early\niterations in self-consistent algorithms.
We present some numerical experiments\nfor both linear and nonlinear eigenvalue problems. In particular, we provide\nelectronic structure calculations for some insulating and metallic systems\nsimulated with Kohn--Sham density functional theory (DFT) and the projector\naugmented wave (PAW) method, illustrating the efficiency and potential of the\nalgorithm.", "category": "physics_comp-ph" }, { "text": "Warming or cooling from a random walk process in the temperature: A simple 3-parameter random walk model for monthly fluctuations $\triangle T$\nof a temperature $T$ is introduced. Applied to a time range of 170 years, the\ntemperature fluctuations of the model produce, in about 14\% of the runs,\nwarming that exceeds the observed global warming of the earth surface\ntemperature from 1850 to 2019. On the other hand, there is a 50\% likelihood\nthat a run of our model results in cooling. If a similar random walk process\ncan be used as an effective model for fluctuations of the global earth surface\ntemperature, effects due to internal and external forcing could be considerably\nover- or underestimated.", "category": "physics_comp-ph" }, { "text": "On Preconditioning Electromagnetic Integral Equations in the High\n Frequency Regime via Helmholtz Operators and quasi-Helmholtz Projectors: Fast and accurate resolution of electromagnetic problems via the boundary\nelement method (BEM) is oftentimes challenged by conditioning issues occurring\nin three distinct regimes: (i) when the frequency decreases and the\ndiscretization density remains constant, (ii) when the frequency is kept\nconstant while the discretization is refined and (iii) when the frequency\nincreases along with the discretization density. While satisfactory remedies to\nthe problems arising in regimes (i) and (ii), respectively based on Helmholtz\ndecompositions and Calder\'on-like techniques, have been presented, the last\nregime is still challenging.
In fact, this last regime is plagued by both spurious resonances\nand ill-conditioning; the former can be tackled via combined field strategies\nand is not the topic of this work. In this contribution, new symmetric scalar\nand vectorial electric-type formulations that remain well-conditioned in all of\nthe aforementioned regimes and that do not require barycentric discretization\nof the dense electromagnetic potential operators are presented, along with a\nspherical harmonics analysis illustrating their key properties.", "category": "physics_comp-ph" }, { "text": "Boundary conditions for the solution of the 3-dimensional Poisson\n equation in open metallic enclosures: Numerical solution of the Poisson equation in metallic enclosures, open at\none or more ends, is important in many practical situations such as High Power\nMicrowave (HPM) or photo-cathode devices. It requires the imposition of a\nsuitable boundary condition at the open end. In this paper, methods for solving\nthe Poisson equation are investigated for various charge densities and aspect\nratios of the open ends. It is found that a mixture of second-order and\nthird-order local asymptotic boundary conditions (ABC) is best suited for large\naspect ratios, while a proposed non-local matching method, based on the\nsolution of the Laplace equation, scores well when the aspect ratio is near\nunity for all charge density variations, including ones where the centre of\ncharge is close to an open end or the charge density is non-localized.
The two methods\ncomplement each other and can be used in electrostatic calculations where the\ncomputational domain needs to be terminated at the open boundaries of the\nmetallic enclosure.", "category": "physics_comp-ph" }, { "text": "On the Role of Atomic Binding Forces and Warm-Dense-Matter Physics in\n the Modeling of mJ-Class Laser-Induced Surface Ablation: Ultrafast laser heating of electrons on a metal surface breaks the pressure\nequilibrium within the material, thus initiating ablation. The stasis of a\nroom-temperature metal results from a balance between repulsive and attractive\nbinding pressures. We calculate this with a choice of Equation of State (EOS),\nwhose applicability in the Warm-Dense-Matter regime is varied. Hydrodynamic\nmodeling of surface ablation in this regime involves calculation of the\nelectrostatic and thermal forces implied by the EOS, and therefore the physics\noutlining the evolution of the net inter-atomic binding (negative pressure)\nduring rapid heating is of interest. In particular, we discuss the\nThomas-Fermi-Dirac-Weizsacker model and the Averaged Atom model, and their\nbinding pressures as compared to the more commonly used models. A fully\nnonlinear hydrodynamic code with a pressure-sourced electrostatic field solver\nis then implemented to simulate the ablation process, and the ablation depths\nare compared with known measurements, showing good agreement. Results also show\nthat re-condensation of a previously melted layer significantly reduces the\noverall ablated depth of copper for laser fluences between 10 and 30 J/cm^2,\nfurther explaining a well-known trend observed in experiments in this regime.
A\ntransition from electrostatic to pressure-driven ablation is observed with\nincreasing laser fluence.", "category": "physics_comp-ph" }, { "text": "Electron-hole spectra created by adsorption on metals from\n density-functional theory: Non-adiabaticity in adsorption on metal surfaces gives rise to a number of\nmeasurable effects, such as chemicurrents and exo-electron emission. Here we\npresent a quantitative theory of chemicurrents on the basis of ground-state\ndensity-functional theory (DFT) calculations of the effective electronic\npotential and the Kohn-Sham band structure. Excitation probabilities are\ncalculated both for electron-hole pairs and for electrons and holes separately\nfrom first-order time-dependent perturbation theory. This is accomplished by\nevaluating the matrix elements (between Kohn-Sham states) of the rate of change\nof the effective electronic potential between subsequent (static) DFT\ncalculations. Our approach is related to the theory of electronic friction, but\nallows for direct access to the excitation spectra. The method is applied to\nadsorption of atomic hydrogen isotopes on the Al(111) surface. The results are\ncompatible with the available experimental data (for noble metal surfaces); in\nparticular, the observed isotope effect in H versus D adsorption is described\nby the present theory. Moreover, the results are in qualitative agreement with\ncomputationally elaborate calculations of the full dynamics within\ntime-dependent density-functional theory, with the notable exception of effects\ndue to the spin dynamics.
Being a perturbational approach, the method proposed\nhere is simple enough to be applied to a wide class of adsorbates and surfaces,\nwhile at the same time allowing us to extract system-specific information.", "category": "physics_comp-ph" }, { "text": "Accurate and efficient computation of the Boltzmann equation for\n Kramer's problem: In this work, a novel synthetic iteration scheme (SIS) is developed for the\nlinearized Boltzmann equation (LBE) to find solutions to Kramer's problem\naccurately and efficiently: the velocity distribution function is first solved\nby the conventional iterative scheme, then it is modified such that in each\niteration i) the flow velocity is guided by an ordinary differential equation\nthat is asymptotic-preserving at the Navier-Stokes limit and ii) the shear\nstress is equal to the average shear stress. Based on the\nBhatnagar-Gross-Krook model, the SIS is assessed to be efficient and accurate.\nThen we investigate Kramer's problem for gases\ninteracting through the inverse power-law, shielded Coulomb, and Lennard-Jones\npotentials, subject to diffuse-specular and Cercignani-Lampis gas-surface\nboundary conditions. When the tangential momentum accommodation coefficient\n(TMAC) is not larger than one, the Knudsen layer function is strongly affected\nby the potential, where its value and width increase with the effective\nviscosity index of gas molecules. Moreover, the Knudsen layer function exhibits\nsimilarities among different values of TMAC when the intermolecular potential\nis fixed. For the Cercignani-Lampis boundary condition with TMAC larger than\none, both the viscous slip coefficient and Knudsen layer function are affected\nby the intermolecular potential, especially when the "backward" scattering\nlimit is approached.
With the asymptotic theory by Jiang and Luo for the singular\nbehavior of the velocity gradient in the vicinity of the solid surface, we find\nthat the whole Knudsen layer function can be well fitted by a power series.", "category": "physics_comp-ph" }, { "text": "3D periodic dielectric composite homogenization based on the Generalized\n Source Method: The article presents a new Fourier space method for rigorous optical\nsimulation of 3D periodic dielectric structures. The method relies upon\nrigorous solution of Maxwell's equations in complex composite structures by the\nGeneralized Source Method. Extremely fast GPU-enabled calculations enable an\nefficient search for eigenmodes in 3D periodic complex structures on the basis\nof the rigorously obtained resonant electromagnetic response. The method is\napplied to the homogenization problem, demonstrating retrieval of the complete\nanisotropic dielectric tensor.", "category": "physics_comp-ph" }, { "text": "TaylUR, an arbitrary-order diagonal automatic differentiation package\n for Fortran 95: We present TaylUR, a Fortran 95 module to automatically compute the numerical\nvalues of a complex-valued function's derivatives w.r.t. several variables up\nto an arbitrary order in each variable, but excluding mixed derivatives.\nArithmetic operators and Fortran intrinsics are overloaded to act correctly on\nobjects of defined type "taylor", which encodes a function along with its first\nfew derivatives w.r.t. the user-defined independent variables. Derivatives of\nproducts and composite functions are computed using Leibniz's rule and Faa di\nBruno's formula.
TaylUR makes heavy use of operator overloading and other\nobject-oriented Fortran 95 features.", "category": "physics_comp-ph" }, { "text": "Evaluation of Surrogate Models for Multi-fin Flapping Propulsion Systems: The aim of this study is to develop surrogate models for quick, accurate\nprediction of thrust forces generated through flapping fin propulsion for given\noperating conditions and fin geometries. Different network architectures and\nconfigurations are explored to model the training data separately for the lead\nfin and rear fin of a tandem fin setup. We progressively improve the data\nrepresentation of the input parameter space for model predictions. The models\nare tested on three unseen fin geometries and the predictions validated with\ncomputational fluid dynamics (CFD) data. Finally, the orders-of-magnitude gains\nin computational performance of these surrogate models over experimental and\nCFD runs, together with the corresponding tradeoff in accuracy, are discussed\nwithin the context of this tandem fin configuration.", "category": "physics_comp-ph" }, { "text": "Non-perturbative heterogeneous mean-field approach to epidemic spreading\n in complex networks: For roughly a decade, network science has focused, among other topics, on the\nproblem of how the spreading of diseases depends on structural patterns. Here,\nwe contribute to further advancing our understanding of epidemic spreading\nprocesses by proposing a non-perturbative formulation of the heterogeneous mean\nfield approach that has been commonly used in the physics literature to deal\nwith this kind of spreading phenomenon. The non-perturbative equations we\npropose make no assumption about the proximity of the system to the epidemic\nthreshold, nor any linear approximation of the dynamics.
In particular, we\nfirst develop a probabilistic description at the node level of the epidemic\npropagation for the so-called susceptible-infected-susceptible family of\nmodels, and then derive the corresponding heterogeneous mean-field\napproach. We propose to use the full extension of the approach instead of\npruning the expansion to first order, which leads to a non-perturbative\nformulation that can be solved by fixed point iteration, and used reliably far\nfrom the epidemic threshold to assess the prevalence of the epidemic. Our\nresults are in close agreement with Monte Carlo simulations, thus enhancing the\npredictive power of the classical heterogeneous mean field approach, while\nproviding a more effective framework in terms of computational\ntime.", "category": "physics_comp-ph" }, { "text": "Self-consistent assessment of Englert-Schwinger model on atomic\n properties: Our manuscript investigates a self-consistent solution of the statistical\natom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES\nmodel) and benchmarks it against atomic Kohn-Sham calculations and two\norbital-free models of the Thomas-Fermi-Dirac (TFD)-$\lambda$vW family. Results\nshow that the ES model generally offers the same accuracy as the well-known\nTFD-$\frac{1}{5}$vW model; however, the ES model corrects the failure of the\nPauli potential in the near-nucleus region. We also point to the inability to\ndescribe low-$Z$ atoms as the foremost concern in improving the present model.
It is shown\nthat, in contrast to previous methods, the introduced approach conserves the\nrigidity of molecules automatically, without any additional transformations. A\ncomparison of various techniques with respect to numerical stability is\nmade.", "category": "physics_comp-ph" }, { "text": "A 2D self-organized percolation model for capillary impregnation: A two-dimensional extension of the Self-organized Gradient Percolation (SGP)\nmethod initially developed for one-dimensional simulations is proposed. The\ninitialization in the two directions is taken to be the analytic solution of\nthe 2D (homogeneous) diffusion equation. The evolution of the saturation front\nis assumed to be the evolution of both standard deviations in each direction.\nThe validation of the implementation is done by comparisons between SGP and\nfinite element results.", "category": "physics_comp-ph" }, { "text": "Symmetry-breaking-induced multifunctionalities of two-dimensional\n chromium-based materials for nanoelectronics and clean energy conversion: Structural symmetry-breaking that could lead to exotic physical properties\nplays a crucial role in determining the functions of a system, especially for\ntwo-dimensional (2D) materials. Here we demonstrate that multiple\nfunctionalities of 2D chromium-based materials could be achieved by breaking\ninversion symmetry, replacing Y atoms in one face of pristine CrY (Y=P, As,\nSb) monolayers with N atoms, i.e., forming Janus Cr2NY monolayers. The\nfunctionalities include spin-gapless behavior, a very low work function,\ninduced carrier doping and catalytic activity, which are predominantly ascribed\nto the large intrinsic dipole of Janus Cr2NY monolayers, giving them great\npotential in various applications.
Specifically, Cr2NSb is found to be a spin-gapless\nsemiconductor, Cr2NP and Cr2NHPF could simultaneously induce n- and p-type\ncarrier doping for two graphene sheets with different concentrations (forming\nan intrinsic vertical p-n junction), and Cr2NY exhibits excellent\nelectrocatalytic hydrogen evolution activity, even superior to the Pt\nbenchmark. The results confirm that breaking symmetry is a promising approach\nfor the rational design of multifunctional 2D materials.", "category": "physics_comp-ph" }, { "text": "Coercing Machine Learning to Output Physically Accurate Results: Many machine/deep learning artificial neural networks are trained to simply\nbe interpolation functions that map input variables to output values\ninterpolated from the training data in a linear/nonlinear fashion. Even when\nthe input/output pairs of the training data are physically accurate (e.g. the\nresults of an experiment or numerical simulation), interpolated quantities can\ndeviate quite far from being physically accurate. Although one could project\nthe output of a network into a physically feasible region, such a postprocess\nis not captured by the energy function minimized when training the network;\nthus, the final projected result could incorrectly deviate quite far from the\ntraining data. We propose folding any such projection or postprocess directly\ninto the network so that the final result is correctly compared to the training\ndata by the energy function. Although we propose a general approach, we\nillustrate its efficacy on a specific convolutional neural network that takes\nin human pose parameters (joint rotations) and outputs a prediction of vertex\npositions representing a triangulated cloth mesh.
While the original network\noutputs vertex positions with erroneously high stretching and compression\nenergies, the new network trained with our physics prior remedies these issues,\nproducing highly improved results.", "category": "physics_comp-ph" }, { "text": "Protein folding analysis using features obtained by persistent homology: Understanding the protein folding process is an outstanding issue in\nbiophysics; recent developments in molecular dynamics simulation have provided\ninsights into this phenomenon. However, the many degrees of freedom of atomic\nmotion hinder the understanding of this process. In this study, we applied\npersistent homology, an emerging method for analyzing topological features in a\ndataset, to reveal protein folding dynamics. We developed a new method to\ncharacterize protein structure based on persistent homology and applied this\nmethod to molecular dynamics simulations of chignolin. Using principal\ncomponent analysis or non-negative matrix factorization, our analysis method\nrevealed two stable states and one saddle state, corresponding to the native,\nmisfolded, and transition states, respectively. We also identified an unfolded\nstate with slow dynamics in the reduced space. Our method serves as a promising\ntool for understanding the protein folding process.