halid: 01758100
lang: en
domain: [ "phys.grqc" ]
timestamp: 2024/03/05 22:32:16
year: 2018
url: https://hal.science/hal-01758100/file/1803.05015.pdf
Fabio D'ambrosio email: [email protected] Carlo Rovelli email: [email protected] How information crosses Schwarzschild's central singularity How Information Crosses Schwarzschild's Central Singularity Fabio D'Ambrosio * and Carlo Rovelli † CPT, Aix-Marseille Université, Université de Toulon, CNRS, Case 907, F-13288 Marseille, France. (Dated: March 15, 2018) We study the natural extension of spacetime across Schwarzschild's central singularity and the behavior of the geodesics crossing it. Locality implies that this extension is independent from the future fate of black holes. We argue that this extension is the natural → 0 limit of the effective quantum geometry inside a black hole, and show that the central region contains causal diamonds with area satisfying Bousso's bound for an entropy that can be as large as Hawking's radiation entropy. This result sheds light on the possibility that Hawking radiation is purified by information crossing the internal singularity. I. NON-RIEMANNIAN EXTENSION Einstein cautioned repeatedly against giving excessive weight to the fact that the gravitational field determines a (pseudo-) Riemannian geometry [START_REF] Lehmkuhl | Why Einstein did not believe that general relativity geometrizes gravity[END_REF]. He regarded this fact as a convenient mathematical feature and a tool to connect the theory to the geometry of Newton's and Minkowski's spaces [START_REF] Einstein | Geometrie und Erfahrung[END_REF], but the essential point about g µν is not that it describes gravitation as a manifestation of a Riemannian geometry; it is that it provides a relativistic field theoretical description of gravitation [START_REF] Einstein | The Meaning of Relativity[END_REF]. Well behaved solutions of the field equations might thus be physically relevant even when they fail to define a geometry which is -strictly speaking-a Riemannian manifold. This consideration is relevant for understanding the interior of black holes. There is no Riemannian manifold extending the Schwarzschild metric beyond the central singularity where the Schwarzschild radius vanishes: r s = 0. There is indeed abundant mathematical literature about the inextensibility in this sense and the related geodesic incompleteness of the Schwarzschild spacetime (see [START_REF] Hawking | The Large Scale Structure of Space-Time[END_REF][START_REF] Kriele | Spacetime: Foundations of General Relativity and Differential Geometry[END_REF][START_REF] Sbierski | The C 0 -inextendibility of the Schwarzschild spacetime and the spacelike diameter in Lorentzian Geometry[END_REF] for instance). But there is a smooth solution of the equations that continues across r s = 0. It defines a metric geometry that is Riemannian almost everywhere, with curvature invariants diverging on a low dimensional surface. The metric geometry defined by this extension continues the interior of the black hole across r s = 0 into the geometry of the interior of a white hole. This possibility was noticed by several authors over the past decades. To the best of our knowledge it was first reported by Synge in the fifties [START_REF] Synge | The Gravitational Field of a Particle[END_REF] and rediscovered by Peeters, Schweigert and van Holten in the nineties [START_REF] Peeters | Extended geometry of black holes[END_REF]. A similar observation has recently been made in the context of cosmology in [START_REF] Koslowski | Through the Big Bang[END_REF]. Here we study this extension and all geodesics that cross r s = 0. 
This geometry can be seen as the ℏ → 0 limit of an effective metric determined by quantum gravity. On physical grounds we expect what happens near $r_s = 0$ to be affected by quantum effects, because curvature reaches the Planck scale in this region. Notice that quantum gravity is expected to render what happens at distances smaller than the Planck length physically irrelevant [START_REF] Rovelli | Discreteness of area and volume in quantum gravity[END_REF]; therefore curvature singularities on low dimensional surfaces are likely to be physically meaningless anyway. The possibility of a quantum transition across $r_s = 0$ has indeed been explored by many authors, see for instance [START_REF] Modesto | Loop quantum black hole[END_REF][START_REF] Modesto | Semiclassical loop quantum black hole[END_REF][START_REF] Hossenfelder | A Model for non-singular black hole collapse and evaporation[END_REF]. Quantum gravity is also expected to bound curvature [START_REF] Hossenfelder | A Model for non-singular black hole collapse and evaporation[END_REF][START_REF] Narlikar | High energy radiation from white holes[END_REF][START_REF] Frolov | Quantum Gravity removes classical Singularities and shortens the Life of Black Holes[END_REF][START_REF] Frolov | Spherically Symmetric Collapse in Quantum Gravity[END_REF][START_REF] Stephens | Black hole evaporation without information loss[END_REF][START_REF] Modesto | Disappearance of black hole singularity in quantum gravity[END_REF][START_REF] Mazur | Gravitational vacuum condensate stars[END_REF][START_REF] Ashtekar | Black hole evaporation: A Paradigm[END_REF][START_REF] Balasubramanian | Information Recovery From Black Holes[END_REF][START_REF] Hayward | Formation and evaporation of regular black holes[END_REF][START_REF] Hossenfelder | Conservative solutions to the black hole information problem[END_REF][START_REF] Frolov | Information loss problem and a 'black hole' model with a closed apparent horizon[END_REF][START_REF] Rovelli | Evidence for Maximal Acceleration and Singularity Resolution in Covariant Loop Quantum Gravity[END_REF][START_REF] Bardeen | Black hole evaporation without an event horizon[END_REF][START_REF] Giddings | Black holes and massive remnants[END_REF][START_REF] Giddings | Quantum emission from two-dimensional black holes[END_REF]. If we assume that the curvature of the effective metric is bounded at the Planck scale, the central singularity is crossed by a regular (pseudo-)Riemannian metric without singular regions. Below we write an explicit ansatz for such an effective metric. The quantum bound on the curvature determines the size $l$ of its minimal surface (the "Planck star", where the geometry bounces) to be of order $l \sim m^{1/3}$ in Planck units [START_REF] Rovelli | Planck stars[END_REF]. We show that the central region of a black hole contains causal diamonds whose equators have large area. For a black hole of initial mass $m$, evaporating in a time $\sim m^3$, this area can be as large as

$$A \sim 2\pi\sqrt{2ml}\; m^3 \gg 16\pi m^2. \qquad (1)$$

According to Bousso's covariant bound [START_REF] Bousso | A Covariant entropy conjecture[END_REF], this region of spacetime is sufficiently large to contain an entropy of the same order as the entropy of Hawking radiation. This result supports the idea that Hawking radiation is purified by information that crosses the central singularity when a black hole quantum tunnels into a white hole [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF].
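A rough numerical illustration of these scales may be useful (a sketch only; it assumes Planck units G = c = ℏ = 1, so that l_Pl = m_Pl = 1, and takes the order-of-magnitude relation l ∼ m^{1/3} at face value):

```python
import math

# Sketch in Planck units (G = c = hbar = 1): the bounce radius of the
# "Planck star" scales as l ~ m**(1/3), to be compared with the Planck
# length (= 1 here) and with the Schwarzschild radius 2m.

def bounce_radius(m_planck_units):
    """Order-of-magnitude Planck-star radius, l ~ m^(1/3)."""
    return m_planck_units ** (1.0 / 3.0)

if __name__ == "__main__":
    m = 1e38                      # roughly one solar mass in Planck masses
    l = bounce_radius(m)
    print(f"l   ~ {l:.2e}  Planck lengths (~ {l * 1.6e-35:.2e} m)")
    print(f"2m  ~ {2 * m:.2e}  Planck lengths")
    print(f"so l_Pl << l << 2m: {1.0 < l < 2 * m}")
```

For a stellar-mass hole the bounce radius is many orders of magnitude above the Planck length, yet utterly negligible compared to the horizon scale.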
Figure 1 represents the standard Carter-Penrose conformal diagram of a star that collapses in classical General Relativity, disregarding any quantum effects. We pick a generic point P inside the hole and we are interested in its future, in particular in what happens past the upper line of the figure, which is the central Schwarzschild $r_s = 0$ singularity. It is important to notice that this region is causally disconnected from the region indicated as B in the conformal diagram, which is the region relevant for the long term future of the black hole. Region B is going to be substantially affected by Hawking evaporation, the possible final disappearance of the black hole, and the like. We study all this elsewhere [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF]. But none of this concerns what happens in the future of P near the singularity, because this is causally disconnected from B.

II. THE A REGION INSIDE A BLACK HOLE

We call the local transition that we study here, unaffected by the long term behavior of the hole, "region A". To study this region, let us write the metric explicitly. The interior of a Schwarzschild black hole is spherically symmetric and homogeneous in a third spatial direction, which we coordinatize with a space-like coordinate x. (This is the Schwarzschild coordinate $t_s$, which becomes space-like inside the horizon.) Therefore it can be foliated by space-like surfaces that each have the geometry of a 3d cylinder: a sphere times the real line. By spherical symmetry and homogeneity along the x coordinate, the gravitational field $g_{\mu\nu}(\tau, x, \theta, \phi)$ can be written in the form

$$ds^2 = g_{\tau\tau}(\tau)\, d\tau^2 - g_{xx}(\tau)\, dx^2 - g_{\theta\theta}(\tau)\, d\Omega^2, \qquad (2)$$

where $d\Omega^2 = d\theta^2 + \sin^2\theta\, d\phi^2$ is the metric of the unit sphere. The coordinates $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi[$ are standard coordinates on the sphere. The coordinate $x \in\, ]x_{min}, x_{max}[$ runs along an arbitrary finite portion of the cylinder's axis, and $\tau$ is a temporal coordinate, whose range we will explore in studying the dynamics. Inserting this field in the Einstein equations we find the solution

$$g_{\tau\tau}(\tau) = \frac{4\tau^4}{2m-\tau^2}, \qquad g_{xx}(\tau) = \frac{2m-\tau^2}{\tau^2}, \qquad g_{\theta\theta}(\tau) = \tau^4.$$

The value $\tau = 0$ locates where the cylinder's radius shrinks to zero. The corresponding line element is

$$ds^2 = \frac{4\tau^4}{2m-\tau^2}\, d\tau^2 - \frac{2m-\tau^2}{\tau^2}\, dx^2 - \tau^4\, d\Omega^2. \qquad (3)$$

The region $-\sqrt{2m} < \tau < 0$ is precisely the standard interior of a black hole, namely region II of the Kruskal extension of the Schwarzschild solution. This can be seen by going to the usual Schwarzschild coordinates

$$t_s = x \quad \text{and} \quad r_s = \tau^2, \qquad (4)$$

[FIG. 2: Interior of black hole with (space-like) constant τ (or constant Schwarzschild radius) surfaces, bounded by τ = −√(2m), τ = 0 and τ = √(2m).]

which puts the metric in the usual Schwarzschild form

$$ds^2 = \left(1 - \frac{2m}{r_s}\right) dt_s^2 - \left(1 - \frac{2m}{r_s}\right)^{-1} dr_s^2 - r_s^2\, d\Omega^2. \qquad (5)$$

This line element, as is well known, solves the Einstein equations also in the region $r_s < 2m$, where it describes the black hole interior. As $\tau$ flows from $-\sqrt{2m}$ to zero, the Schwarzschild radius shrinks from the horizon to the central singularity. The resulting geometry is depicted in Figure 2, for the full range $x \in\, ]-\infty, +\infty[$. The divergence at $\tau = 0$ is the central black hole singularity at $r_s = 0$. But notice the following. Differential equations can develop fake singularities because they are formulated in inconvenient variables. For instance, a solution of the equation $y\ddot y - 2\dot y^2 - y^2 = 0$ is $y(t) = 1/\sin t$, which diverges at $t = 0$.
However, by simply defining x = 1/y, the differential equation turns into the familiar $\ddot x = -x$, whose solution $x = \sin t$ is regular across $t = 0$. The same can be done for the black hole interior. Let us change variables from the three variables $g_{\tau\tau}$, $g_{xx}$ and $g_{\theta\theta}$ to the three variables a, b and N defined by [START_REF] Kenmoku | de Broglie-Bohm interpretation for the wave function of quantum black holes[END_REF]

$$g_{\tau\tau} = N^2\,\frac{a}{b}, \qquad g_{xx} = \frac{b}{a}, \qquad g_{\theta\theta} = a^2. \qquad (6)$$

This is a change of dynamical (configuration) variables, not to be confused with a coordinate transformation, namely with a change of the independent parameters $(\tau, x, \theta, \phi)$. Inserting these new variables into the first order action of General Relativity yields

$$S = \frac{v}{4G}\int d\tau\left(N - \frac{\dot a\,\dot b}{N}\right), \qquad (7)$$

where $v = \int_{x_{min}}^{x_{max}} dx$ and G is Newton's constant. The equations of motion of this action are

$$\dot a\,\dot b + N^2 = 0, \qquad \frac{d}{d\tau}\!\left(\frac{\dot a}{N}\right) = 0, \qquad \frac{d}{d\tau}\!\left(\frac{\dot b}{N}\right) = 0. \qquad (8)$$

They are solved in particular by

$$a(\tau) = \tau^2, \qquad b(\tau) = 2m - \tau^2, \qquad N^2(\tau) = 4\,a(\tau). \qquad (9)$$

This gives precisely the solution (3), namely the black hole interior. So far, we have only done a consistent change of variables in a dynamical system. But now it is evident from equation (9) that the solution can be continued past $\tau = 0$ without any loss of regularity. Expressed in terms of these variables, the gravitational field evolves regularly past the central singularity of a black hole, to positive values of $\tau$. For positive values of $\tau$ the geometry determined by this solution of the gravitational field equations is simply the time reversal of the black hole interior, namely a white hole interior, joined to the black hole across the singularity, as depicted in Figure 3. The geometry defined in this way is given by the line element (3), where the coordinate $\tau$ covers the full range $-\sqrt{2m} < \tau < \sqrt{2m}$. For positive and for negative $\tau$ this line element defines a Ricci flat pseudo-Riemannian geometry. Not so for $\tau = 0$ where, for instance, the scalar $K^2 \sim R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$, constructed by squaring the Riemann tensor, diverges as

$$K(\tau) \sim \frac{m}{\tau^6}. \qquad (10)$$

Because of this divergence, this spacetime is not a Riemannian manifold. However, it is still a metric manifold and it can be approximated with arbitrary precision by a genuine (pseudo-)Riemannian manifold. More precisely, we can view the metric (3) as a "distributional Riemannian geometry", in the following sense. We say that a distributional Riemannian geometry ds on a manifold is the assignment of a length $L[\gamma]$ to any curve on the manifold, such that there is a one-parameter family of Riemannian geometries $ds_l$ with $\lim_{l\to 0}\int_\gamma ds_l = L[\gamma]$. The metric (3) is a distributional geometry in this sense. In Section IV we give an explicit example of a one-parameter family of Riemannian metrics $ds_l$ converging to the metric (3), and we argue that $ds_l$ can have a direct physical interpretation in quantum gravity. Before this, in the next section we study the geodesics that cross the singularity for the line element (3).

III. GEODESICS CROSSING $r_s = 0$

We study the geodesics of the metric described above using the relativistic Hamilton-Jacobi formalism. An advantage of this method is that it does not require us to think in terms of evolution of the coordinates as functions of an unphysical parameter; rather, it gives us directly the physical worldline in terms of coordinates as functions of one another.
It gives us directly a gauge invariant expression for the geodesic. The relativistic Hamilton-Jacobi approach requires us to find a three-parameter family of solutions to the Hamilton-Jacobi equation g µν ∂S ∂x µ ∂S ∂x ν = ε, (11) where S(x µ , P a ) is Hamilton's principal function. The three parameters P a , a = 1, 2, 3, are integration constants and ε = 1 for massive particles (time-like geodesics) while ε = 0 for massless particles (null geodesics). The geodesics are directly found by imposing ∂S(x µ , P a ) ∂P a -Q a = 0, ( 12 ) where Q a are the other three integration constants. Due to the spherical symmetry of the Schwarzschild spacetime, angular momentum is conserved and the motions are planar. Without loss of generality we can choose spherical coordinates such that the motions lie in the equatorial plane θ = π 2 . This effectively reduces the problem to two dimensions. In the θ = π 2 plane, the metric becomes ds 2 = 4τ 4 2m -τ 2 dτ 2 - 2m -τ 2 τ 2 dx 2 -τ 4 dφ 2 , (13) and the Hamilton-Jacobi equation reads 2m -τ 2 4 ∂S ∂τ 2 - τ 6 2m -τ 2 ∂S ∂x 2 - ∂S ∂φ 2 = τ 4 ε. Due to spherical symmetry we only need a two-parameter family of solutions. This is easy to write: S =P x + Lφ -2 ετ 4 + L 2 + P 2 τ 6 2m -τ 2 dτ √ 2m -τ 2 . It is parametrized by angular momentum L and the conserved charge P conjugate to the cyclic variable x. Using [START_REF] Modesto | Semiclassical loop quantum black hole[END_REF] we have then the following expressions for the geodesics x(τ ) = x 0 + 2P τ 6 (2m -τ 2 ) 3 2 ετ 4 + L 2 + P 2 τ 6 2m-τ 2 dτ, φ(τ ) = φ 0 + 2L √ 2m -τ 2 ετ 4 + L 2 + P 2 τ 6 2m-τ 2 dτ. ( 14 ) These give the geodesic motions. Notice that the equations of motion are well defined in τ = 0 since the integrands are finite. In what follows we will first uncover the physical meaning of the conserved charge P and then solve the integrals explicitly for time-like and null geodesics under different assumptions on the conserved charges P and L. A. The physical Meaning of S(x µ , Pa), P and L Hamilton's principal function for a particle on a fixed background has a transparent physical meaning: It is equal to the particle's proper time along a given trajectory. To see this in full generality, we consider the particle's Lagrangian L (q µ , qµ ) = g µν (q) qµ qν [START_REF] Frolov | Quantum Gravity removes classical Singularities and shortens the Life of Black Holes[END_REF] in configuration space variables q µ , µ = 1, . . . n. Trajectories q µ = q µ (λ) are assumed to be arbitrarily parametrized by λ and the dot indicates a derivative with respect to λ. As is well known, a Legendre transformation which trades the n velocities qµ for the n momenta p µ leaves us with the vanishing Hamiltonian H(q µ , p µ ) = p µ qµ -L(q µ , p µ ) = 0. ( 16 ) A consequent canonical transformation then leads to the Hamilton-Jacobi equation H q µ , ∂S ∂q µ = 0, (17) which is solved by S = S(q µ , P a ) with ∂S ∂q a = P a = const. for a = 1, . . . , k < n. The particle's phase space is now coordinatized by the n generalized coordinates q µ , the k constants P a and the nk momenta ∂S ∂q i with k < i ≤ n. For simplicity we denote the momenta collectively as p µ := (P a , ∂S ∂q i ). 
It then follows that dS(q µ , P a ) = ∂S ∂q µ dq µ = p µ dq µ = P a dq a + ∂S ∂q i dq i , [START_REF] Modesto | Disappearance of black hole singularity in quantum gravity[END_REF] which can be integrated along a geodesic with start and end point q µ 0 and q µ , respectively, to yield S(q µ , P a ) = q µ q µ 0 p µ dq µ = P a (q aq a 0 ) + q i q i 0 ∂S ∂ qi dq i . (19) This general expression is of the same form as the explicit solution found in the previous section. But notice that since the vanishing Hamiltonian implies p µ qµ = L(q µ , p µ ), the one-form dS can equivalently be written as dS(q µ , P a ) = p µ dq µ = p µ qµ dλ = L(q µ , p µ )dλ. [START_REF] Ashtekar | Black hole evaporation: A Paradigm[END_REF] Integrating this one-form along the same geodesic as before yields S(q µ , P a ) = q µ q µ 0 p µ dq µ = λ λ0 L(q µ , qµ )d λ = λ λ0 g µν (q) qµ qν d λ. ( 21 ) That is: Hamilton's principal function is equal to the particle's proper time along a given geodesic. This equivalence simplifies the interpretation of the conserved charges P and L. On the right hand side of [START_REF] Balasubramanian | Information Recovery From Black Holes[END_REF] we have the standard action for a particle on a fixed background g µν . This action is invariant under variations of the Schwarzschild coordinates t s and φ in the r > 2m region, which gives rise to two conserved charges. More precisely, there are two Killing vector fields, V = ∂ ts and W = ∂ φ , and the conserved charges can be written as E = g tsts V ts ṫs and L = g φφ W φ φ. ( 22 ) To call L angular momentum requires no further justification while E is found to coincide with the special relativistic notion of energy when r s → ∞. As the conserved charges are given in a manifestly coordinate independent form and we know of many gauges which extend smoothly across the horizon we reach the following conclusion. Particle trajectories in the outside region are labelled by E and L and a particle crossing the horizon from the outside continues on one of the inside geodesics discussed in this article, which are labelled by P and L. We can thus identify P with the energy E. The sign of P determines whether the geodesic is moving towards decreasing or increasing x. If we join the horizons τ = ± √ 2m to two complete Kruskal spacetimes (see Figure 4), time-like geodesics incoming from the lower region III and emerging in the upper region I have positive P , and P can be identified with the conventional energy E at r s → ∞ in this region. E is negative for the time-like geodesics moving in the opposite direction. I I III III P = E < 0 τ = 0 x = +∞ x = -∞ P = E > 0 τ = √ 2 m τ = √ 2 m τ = - √ 2 m τ = - √ 2 m FIG. 4: Time-like geodesics with E > 0 originate from the lower region III and extend into the upper region I. Geodesics with E < 0 move from the lower right to the top left. B. L = 0 We first consider the case of a null particle (a photon) falling into the black hole with vanishing angular momentum: L = 0. The general motions [START_REF] Narlikar | High energy radiation from white holes[END_REF] reduce to the simpler form x(τ ) = x 0 ± 2 |τ | 3 2m -τ 2 dτ φ(τ ) = φ 0 . (23) The signs derive from the sign of P and correspond to the null geodesics coming from the left or from the right (see also Figure 4). The integral gives x(τ ) = x 0 ∓ s τ τ 2 + 2m log 1 - τ 2 2m , (24) with s τ := sign τ for notational convenience. This solution is regular for all τ ∈ ] -√ 2m, √ 2m [. 
These null geodesics start at $x = \mp\infty$ and end at $x = \pm\infty$, while intersecting the surface $\tau = 0$ at $x = x_0$. See the blue line in Figure 5. The equations of motion for time-like geodesics with zero angular momentum,

$$x(\tau) = x_0 + \int \frac{2\tau^4 E}{(2m-\tau^2)^{3/2}\sqrt{1 + \frac{\tau^2 E^2}{2m-\tau^2}}}\; d\tau, \qquad \phi(\tau) = \phi_0, \qquad (25)$$

can also be integrated explicitly, yielding the solution

$$x(\tau) = x_0 + 4m\,\mathrm{arctanh}\!\left(\frac{E\tau}{\sqrt{2m+(E^2-1)\tau^2}}\right) + \frac{2m\,(3-2E^2)\,E}{(E^2-1)^{3/2}}\,\mathrm{arsinh}\!\left(\sqrt{\frac{E^2-1}{2m}}\,\tau\right) - \frac{E\tau\sqrt{(E^2-1)\left(2m+(E^2-1)\tau^2\right)}}{(E^2-1)^{3/2}}. \qquad (26)$$

As in the null case, the solution is well-defined at $\tau = 0$. What seems to be more worrisome is the parameter range $|E| \leq 1$: for $|E| \to 1$ the solution (26) seems to be divergent and for $|E| < 1$ some terms become complex. However, we show in Appendix A that the imaginary terms cancel, rendering (26) real also in the parameter range $|E| < 1$. Moreover, we show that the $|E| \to 1$ limit exists and is given by

$$x(\tau) = x_0 \pm \left(4m\,\mathrm{arctanh}\frac{\tau}{\sqrt{2m}} - \frac{2\tau^3}{3\sqrt{2m}} - 2\sqrt{2m}\,\tau\right). \qquad (27)$$

We also prove in Appendix A that in the $|E| \to \infty$ limit the solution (26) converges to the null solution (24),

$$\lim_{|E|\to\infty} x(\tau) = x_0 \mp s_\tau\left(\tau^2 + 2m\log\left(1 - \frac{\tau^2}{2m}\right)\right), \qquad (28)$$

which is exactly what one would expect intuitively.

C. E = 0

Under the assumption of vanishing E and arbitrary $L \in \mathbb{R}\setminus\{0\}$, the equations of motion for null geodesics read

$$x(\tau) = x_0, \qquad \phi(\tau) = \phi_0 \pm \int \frac{2\, d\tau}{\sqrt{2m-\tau^2}}, \qquad (29)$$

where now the sign is determined by the sign of L. The above integral is elementary and yields

$$\phi(\tau) = \phi_0 \pm 2\arctan\frac{\tau}{\sqrt{2m-\tau^2}}. \qquad (30)$$

We see that this solution is regular at $\tau = 0$ as well. Moreover, the limit $|\tau| \to \sqrt{2m}$ exists and is found to be

$$\lim_{|\tau|\to\sqrt{2m}} \phi(\tau) = \phi_0 \pm s_\tau\,\pi. \qquad (31)$$

This means that in the interval $]-\sqrt{2m}, \sqrt{2m}[$ the angular change is $2\pi$. Interestingly, the trivial solution $x(\tau) = x_0$ is not as innocuous as it appears at first sight. A generic $x(\tau) = x_0$ curve is not a straight line at 45° in a Carter-Penrose diagram. Rather, it looks like the red curve in Figure 5, which describes a time-like E = L = 0 geodesic. The only $x(\tau) = x_0$ lines which are null are obtained by sending $x_0 \to \pm\infty$. We are therefore led to conclude that E = 0 null geodesics are confined to the horizons. The E = 0 equations of motion for time-like geodesics turn out not to be integrable in closed analytical form. It is nevertheless possible to integrate them numerically without running into any difficulties.

D. The general Case

For the general case with $L \neq 0$ and $E \neq 0$ it is not possible to write down closed analytic solutions to the equations of motion (14). But it is still possible to solve the equations numerically, and to understand the behavior of geodesics in a neighborhood of $\tau = 0$ by Taylor expanding the integrands of (14). This expansion results in the approximate solutions

$$x(\tau) = x_0 + \frac{E}{\sqrt{2m}\,L}\left[\frac{\tau^6}{m} + \frac{3\tau^8}{4m^2} + \frac{(15L^2 - 16m^2\varepsilon)\,\tau^{10}}{32\,L^2 m^3} + O(\tau^{11})\right]$$
$$\phi(\tau) = \phi_0 + \frac{1}{\sqrt{2m}}\left[2\tau + \frac{\tau^3}{6m} + \frac{\tau^5}{80}\left(\frac{3}{m^2} - \frac{16\varepsilon}{L^2}\right) + \frac{\left(5L^2 - 16m^2(2E^2+\varepsilon)\right)\tau^7}{448\,L^2 m^3} + O(\tau^8)\right]. \qquad (32)$$

We observe that both solutions are well-behaved as $\tau \to 0$ and that there is no problem in crossing the singularity. Moreover, we observe that in both solutions the terms containing $\varepsilon$, the parameter distinguishing between time-like and null geodesics, are highly suppressed as $\tau \to 0$. This implies that massive particles approach the behavior of photons the closer they get to the singularity.
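The geodesic integrals can also be evaluated directly by quadrature. The following is a small numerical sketch (parameter values are arbitrary and not taken from the paper; numpy and scipy are assumed) that integrates (14) across τ = 0 and checks the L = 0 null case against the closed form (24):

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch (arbitrary parameters): evaluate the geodesic integral for
# x(tau) in Eq. (14) across tau = 0, and check the L = 0 null case against the
# closed form (24).

m = 20.0          # mass, geometric units

def dx_dtau(tau, E, L, eps):
    """Integrand of x(tau) in Eq. (14), with the conserved charge P written as E."""
    f = 2.0 * m - tau**2
    root = np.sqrt(eps * tau**4 + L**2 + E**2 * tau**6 / f)
    return 2.0 * E * tau**6 / (f**1.5 * root)

def x_numeric(tau, E, L, eps, tau0=0.0, x0=0.0):
    """x(tau) obtained by quadrature, starting from x(tau0) = x0."""
    val, _ = quad(dx_dtau, tau0, tau, args=(E, L, eps), limit=200)
    return x0 + val

def x_null_closed_form(tau, x0=0.0):
    """Closed form (24) for the P > 0, L = 0 null geodesic, normalized to x(0) = x0."""
    return x0 - np.sign(tau) * (tau**2 + 2.0 * m * np.log(1.0 - tau**2 / (2.0 * m)))

if __name__ == "__main__":
    # Null, L = 0: quadrature vs. closed form on both sides of the singularity.
    for tau in (-4.0, -1.0, 1.0, 4.0):
        num = x_numeric(tau, E=1.0, L=1e-9, eps=0.0)   # tiny L avoids 0/0 at tau = 0
        exact = x_null_closed_form(tau)
        print(f"tau={tau:+.1f}:  numeric={num: .6f}  closed form={exact: .6f}")

    # A generic time-like geodesic (eps = 1) also crosses tau = 0 with finite x.
    print("time-like, E=7, L=-2, from tau=-3 to +3:",
          x_numeric(3.0, E=7.0, L=-2.0, eps=1.0, tau0=-3.0))
```

Both branches remain finite through τ = 0, in agreement with the discussion above.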
Notice also the contrast to special relativity: in special relativity, an infinite amount of energy is required to accelerate a massive particle to the speed of light. Hence, only in the limit $|E| \to \infty$ does a time-like geodesic approach the behavior of a null geodesic. In the case of a time-like geodesic crossing the Schwarzschild singularity, nothing of the sort is required: energy is conserved along every time-like geodesic, and every $E \neq 0$ time-like geodesic crosses the $r_s = 0$ singularity while approaching the behavior of a null geodesic, as described by the approximate solution (32). For completeness' sake we present a sample solution in Figure 6, obtained by numerical integration.

The real world is quantum mechanical. The gravitational field is a quantum field and undergoes quantum fluctuations at small scales. In the real world, therefore, the spacetime metric cannot be everywhere sharp. A spacetime metric ds can still be defined in terms of the effective gravitational field, namely the expectation value of $g_{\mu\nu}$ on a quantum state. In general, ds will deviate from the Einstein equation in the vicinity of the classical singularity, because quantum effects are expected to become strong here and the classical equations of motion are expected to fail; the deviations from an exact solution of the Einstein field equations are parametrized by ℏ. A simple ansatz for ds can be obtained replacing $a(\tau) = \tau^2$ in (9) by

$$a(\tau) = \tau^2 + l, \qquad (33)$$

where $l \ll m$ is a constant depending on ℏ in a manner that we shall fix soon. This defines the line element

$$ds_l^2 = \frac{4(\tau^2+l)^2}{2m-\tau^2}\, d\tau^2 - \frac{2m-\tau^2}{\tau^2+l}\, dx^2 - (\tau^2+l)^2\, d\Omega^2. \qquad (34)$$

This line element defines a genuine pseudo-Riemannian space, with no divergences and no singularities. The curvature is bounded (see Figure 7). In fact, up to terms of order $O(l/m)$ we can easily compute

$$K^2(\tau) = \frac{9l^2 + 96\,l\tau^2 + 48\,\tau^4}{(l+\tau^2)^8}\; m^2, \qquad (35)$$

which has the finite maximum value

$$K^2(0) = \frac{9m^2}{l^6}. \qquad (36)$$

In this geometry the cylindric tube does not reach zero size but bounces at a small finite radius l. The Ricci tensor vanishes up to terms of order $O(l/m)$.

[FIG. 7: The bounded curvature scalar (35).]

The essential point we emphasize in this article is that the ℏ → 0 limit of the effective quantum geometry ds is the geometry (3), depicted in Figure 3, and not just its lower half, namely region II of the Kruskal extension. That is: not a spacetime that ends at a singularity, but rather a spacetime that crosses the singularity. The physical relevance of the classical theory is to describe the geometry at scales larger than the Planck scale, and the proper description of the geometry (34) at scales much larger than l is a classical spacetime that continues across the central singularity, as described in the first part of this article. We can estimate the value of the parameter l from the requirement that the curvature is bounded at the Planck scale; we obtain (restoring physical units)

$$l \sim l_{Pl}\left(\frac{m}{m_{Pl}}\right)^{1/3}, \qquad (37)$$

where $l_{Pl}$ and $m_{Pl}$ are the Planck length and the Planck mass. Notice that the bounce away from $r_s = 0$ is not at the Planck length, but at a larger scale, defining a "Planck star" [START_REF] Rovelli | Planck stars[END_REF]. Consider the proper time of a worldline of constant x going all the way from $\tau = -\sqrt{2m}$ to $\tau = +\sqrt{2m}$, crossing $\tau = 0$.
Its proper time is

$$T = \int_{-\sqrt{2m}}^{\sqrt{2m}} d\tau\; \sqrt{\frac{4(\tau^2+l)^2}{2m-\tau^2}} = 2\pi\,(m+l). \qquad (38)$$

In the limit in which l can be disregarded with respect to m, a particle following this worldline goes from the Schwarzschild horizon to $\tau = 0$ in a proper time $\pi m$, as predicted by the standard theory, but then continues for another proper time lapse $\pi m$ to the white hole Schwarzschild horizon on the other side of $\tau = 0$. In the next section, we study an important aspect of the geometry of the effective metric (34).

A. Causal Diamonds crossing $r_s = 0$ and their Entropy

The recent article [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF] discusses a solution to the black hole information paradox where quantum gravity effects spark a transition of a black hole into a white hole. The black hole horizon is then a trapped horizon but not an event horizon, and information that fell into the black hole crosses the transition region and emerges from the white hole. While the full geometry considered in [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF] is far more complicated than the geometry considered here, the transition across the A region is the same. A tentative estimate of the transition probability per unit time for the black-to-white hole tunneling has been computed from covariant loop quantum gravity in [START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF] to be proportional to $e^{-(m/m_{Pl})^2}$, where m is the mass of the hole at transition time. This makes the transition probable at the end of Hawking evaporation, when $m \to m_{Pl}$. The full evaporation time is $\sim m_o^3$, where $m_o$ is the initial mass of the hole. During the evaporation, the interior volume of the black hole grows, reaching a volume of order $\sim m_o^4$ [START_REF] Christodoulou | How big is a black hole?[END_REF][START_REF] Bengtsson | Black holes: Their large interiors[END_REF][START_REF] Ong | Never Judge a Black Hole by Its Area[END_REF][START_REF] Wang | Maximal volume behind horizons without curvature singularity[END_REF][START_REF] Christodoulou | Volume inside old black holes[END_REF]. The quantum transition gives rise to a white hole with small horizon area and large interior volume. Remnants in the form of geometries with a small throat and a long tail were called "cornucopions" by Banks et al. in [START_REF] Banks | Are horned particles the climax of Hawking evaporation?[END_REF] and studied in [START_REF] Giddings | Dynamics of extremal black holes[END_REF][START_REF] Banks | Black hole remnants and the information puzzle[END_REF][START_REF] Giddings | Constraints on black hole remnants[END_REF][START_REF] Banks | Lectures on black holes and information loss[END_REF]. What was realized in [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF] is that objects of this kind are precisely predicted by conventional classical General Relativity (white holes with a horizon small enough to be stable) and are the natural result of the quantum tunneling that ends the life of the black hole. The large interior volume can encode a substantial amount of information, despite the smallness of the horizon area. This information is slowly released from the long-lived Planck-mass white hole, purifying the Hawking radiation emitted during the evaporation.
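As a quick consistency check of Eq. (38) above (a sketch; scipy is assumed and the values of m and l are arbitrary), one can integrate the proper time along a constant-x worldline of the effective metric numerically and compare with 2π(m + l):

```python
import numpy as np
from scipy.integrate import quad

# Sketch: numerical check of Eq. (38). Along a worldline of constant x, theta, phi
# in the effective metric (34), the proper time element is
#   dT = sqrt( 4 (tau^2 + l)^2 / (2m - tau^2) ) dtau,
# and the total proper time from tau = -sqrt(2m) to +sqrt(2m) should equal 2*pi*(m + l).

m, l = 10.0, 0.1          # arbitrary illustrative values with l << m

def dT_dtau(tau):
    return 2.0 * (tau**2 + l) / np.sqrt(2.0 * m - tau**2)

T_numeric, _ = quad(dT_dtau, -np.sqrt(2.0 * m), np.sqrt(2.0 * m), limit=200)
print(T_numeric, 2.0 * np.pi * (m + l))   # the two numbers should agree
```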
For this scenario to be consistent, the transition region must be large enough to carry the relevant amount of information. In [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF], an estimate of that amount was given in terms of the interior volume of a preferred foliation. Here we give a stronger argument, that avoids the non covariance of the choice of the foliation, and is based on Bousso's covariant entropy bound [START_REF] Bousso | A Covariant entropy conjecture[END_REF]. Bousso's conjecture states that the entropy S on a light-sheet L orthogonal to any two-dimensional surface B satisfies S(L) ≤ A(B)/4 , where A is the area of the surface B. Here we show that in the crossing region there are closed 2d surfaces with large area satisfying the conditions of Bousso's entropy bound for a large enough entropy to purify the Hawking radiation. More precisely, we study the causal diamond defined by two points at opposite sides of the minimal r s surface: a spacetime point p = (-τ p , x p , φ p , π 2 ) in the black hole interior (i.e. 0 < τ p < √ 2m) and a spacetime point p = (τ p , x p , φ p , π 2 ) in the white hole interior. As p lies in p's future, the future light cone of p intersects with the past light cone of p and hence gives rise to a causal spacetime diamond. In this case, the surface B is given by the intersection of the future and past light cone of p and p while L is the boundary of the causal diamond. The future light cone I + of p can be defined as the union of all future null geodesics emerging from that point. Geodesics are labelled by L and E and conservation of angular momentum implies that we can always choose coordinates such that the motion lies in a θ = const. plane. More precisely, there is always a rotation we can perform to achieve this and therefore it suffices to study in detail the θ = π 2 section of I + to reconstruct the whole light cone. We can formally write I + (p) θ= π 2 = L∈R E∈R (x(τ ), φ(τ )), (39) where the functions x(τ ) and φ(τ ) are explicitly given by x(τ ) = x p + τ -τp 2E τ 6 (2m -τ 2 ) 3 2 L 2 + E 2 τ 6 2m-τ 2 dτ φ(τ ) = φ p + τ -τp 2L √ 2m -τ 2 L 2 + E 2 τ 6 2m-τ 2 dτ , (40) where -τ p ≤ τ < √ 2m ensures that the geodesics pass through p and extend into its future. However, different choices of L and E can correspond to the same geodesic and hence there is a lot of redundancy in the above definition of the light cone. To get rid of this redundancy we rewrite x(τ ) and φ(τ ) as x(τ ) = x p + τ -τp 2λτ 6 (2m -τ 2 ) 3 2 1 + λ 2 τ 6 2m-τ 2 dτ φ(τ ) = φ p + τ -τp 2 sign L √ 2m -τ 2 1 + λ 2 τ 6 2m-τ 2 dτ . (41) These equations are obtained from (40) by pulling L out of the square root and defining the new parameter λ := E |L| . The advantage is that now it is obvious that all geodesics where E and L have a fixed ratio λ and where L has the same sign describe the same geodesic. Also, instead of having to build the union over the two continuous parameters E and L to define the light cone we only need to take the union over the continuous parameter λ and the discrete values of sign L. I + (p) θ= π 2 = λ∈R sign L=±1 λ=±∞ sign L=0 (x(τ ), φ(τ )). (42) The past light cone I -of p is defined in an analogous manner, the only difference being the replacement of -τ p with τ p and the interchange of the integration boundaries in [START_REF] Banks | Black hole remnants and the information puzzle[END_REF]. 
Due to the symmetrical setup, the intersection surface $B := I^+(p) \cap I^-(p')$ lies on the $\tau = 0$ hypersurface, and the shape of its cross section is determined by (41) by setting $\tau = 0$ and performing the integrals for all values of $\lambda \in \mathbb{R}$. This gives two parametric curves in the x-φ plane, one for sign L = −1 and another one for sign L = +1. They are joined together by the special points $\lambda = \pm\infty$ with sign L = 0. Incidentally, these two points simply correspond to the solution discussed in subsection III B and are explicitly given by $(\phi_p,\; x_p \pm \tau_p^2 \pm 2m\log(1-\tau_p^2/2m))$. There are two other special points we can easily locate in the x-φ plane: $\lambda = 0$ with sign L = ±1 corresponds to the solution discussed in subsection III C, and we get $(\phi_p \pm 2\arctan(\tau_p/\sqrt{2m-\tau_p^2}\,),\; x_p)$. These four special cases determine the ranges over which φ and x change, and since we wish to maximize the surface of intersection, we should maximize these ranges. This is achieved by assuming $\tau_p$ to be close to $\sqrt{2m}$, i.e. $\tau_p = \sqrt{2m} - \epsilon$ with small ε. The range of x is then given by $[-2m\log(\sqrt{m}/\sqrt{2}),\; 2m\log(\sqrt{m}/\sqrt{2})]$ and the range of φ is to very good approximation $[-\pi, \pi]$. All the other points on the two curves can be determined by numerically evaluating the integrals (41) for a large range of λ's. Figure 8 illustrates the result of such a numerical evaluation. The intersection of geodesics lying in other θ = const. planes with the $\tau = 0$ hypersurface leads to the same elongated sort of rectangle as depicted in Figure 8. The intersection area can therefore be approximated using the regularized metric (34), integrated over $[x_{min}, x_{max}] \times [\theta_{min}, \theta_{max}] = [-2m\log(\sqrt{m}/\sqrt{2}),\; 2m\log(\sqrt{m}/\sqrt{2})] \times [0, \pi]$ for both choices of sign L = ±1 and neglecting the φ contribution:

$$A(B) \approx 8\pi m\,\sqrt{2ml}\;\log\frac{\sqrt{m}}{\sqrt{2}}. \qquad (43)$$

This area can be made bigger and bigger by taking $\tau_p$ closer to the horizon, but it cannot be made arbitrarily big. The reason is that we can only trust our computations as long as quantum gravity effects are negligible, i.e. as long as we are in region A of Figure 1. The finite extent $\Delta x = x_{max} - x_{min}$ of region A has been linked to the lifetime $\tau_{bh} \sim m^3 \sim \Delta x$ of the black hole [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF] and yields a finite maximal area of

$$A(B) \sim 2\pi\sqrt{2ml}\; m^3 \gg 16\pi m^2. \qquad (44)$$

This result is consistent with the argument given in [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF].

V. CONCLUSION

Imagine our technology is so advanced that we can build a spaceship surviving Planckian pressure, and we decide to enter the recently found 17 billion solar mass supermassive black hole in the galaxy NGC 1277 [START_REF] Van Den | An over-massive black hole in the compact lenticular galaxy NGC 1277[END_REF]. We of course enter the horizon without any particular bump and start descending. What happens next? Current physical knowledge is insufficient to answer this question. But the question is well posed in principle and should have a correct answer. One possibility is that the world ends at $\tau = 0$. But there is another possibility, which may sound more plausible. Things can traverse the $\tau = 0$ surface and find themselves in the metric of an expanding white hole. The results of this paper make this possibility more plausible. Whether or not this portion of spacetime is going to be connected to the region outside the black hole depends on the physics of the region B of Figure 1, which requires a more specific use of quantum gravity. This is discussed elsewhere [START_REF] Bianchi | White Holes as Remnants: A Surprising Scenario for the End of a Black Hole[END_REF].
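Returning to the area estimates (43)-(44), a quick order-of-magnitude comparison may help (a sketch in Planck units; it uses l ∼ m^{1/3} from Eq. (37) and estimates the Hawking-radiation entropy by the initial Bekenstein-Hawking entropy ∼ 4πm², which is an assumption used only to set the scale):

```python
import math

# Order-of-magnitude sketch (Planck units; l ~ m**(1/3) as in Eq. (37)):
# compare the Bousso bound A(B)/4 from the maximal diamond area (44) with the
# Hawking-radiation entropy, estimated here by the initial horizon entropy.

def max_diamond_area(m):
    l = m ** (1.0 / 3.0)
    return 2.0 * math.pi * math.sqrt(2.0 * m * l) * m ** 3      # Eq. (44)

def bekenstein_hawking_entropy(m):
    return 4.0 * math.pi * m ** 2                                # A_horizon / 4

if __name__ == "__main__":
    m = 1e38                                  # ~ one solar mass in Planck units
    bound = max_diamond_area(m) / 4.0         # Bousso bound on the diamond
    s_rad = bekenstein_hawking_entropy(m)     # Hawking-radiation entropy scale
    print(f"A(B)/4      ~ {bound:.2e}")
    print(f"S_radiation ~ {s_rad:.2e}")
    print(f"bound exceeds radiation entropy: {bound > s_rad}")
```

For any macroscopic mass the Bousso bound on the diamond is many orders of magnitude above the radiation entropy scale, consistent with the claim in the text.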
FIG. 1: The conformal diagram of the spacetime of a collapsing star predicted by classical GR. The star is light grey, the horizon is dotted, the $r_s = 0$ singularity is the upper thick line.

FIG. 3: The interior transition across the A region.

FIG. 5: Illustration of null (blue line) and time-like (green curve with E < 0) geodesics with L = 0. The geodesics start in the black hole region (lower part of the diamond), cross the singularity, and continue into the white hole region (top part of the diamond).

FIG. 6: Integration of (14) for x(τ) and φ(τ), with m = 20, L = −2, E = 7. Blue curve: null solution. Green curve: time-like solution.

FIG. 8: Numerical evaluation of the intersection of (41) with the τ = 0 hypersurface, for m = 200 and $\sqrt{2m} - \tau_p = 10^{-11}$.

Acknowledgements. We thank Pierre Martin-Dussaud, Tommaso De Lorenzo and Alejandro Perez for many important exchanges. CR thanks Tom Banks for pointing out the importance of studying causal diamonds inside the black hole.

Appendix A: Various Limiting Cases

Here we show that the solution (26) is real in the parameter range $|E| < 1$, despite the presence of complex terms. Moreover, we show that the limits $|E| \to 1$ and $|E| \to \infty$ exist and are given by equations (27) and (28), respectively. To verify that the imaginary part of (26) vanishes, we observe that the argument of the arctanh function is real and well defined for all values of the parameter $E \in \mathbb{R}$, since τ is restricted to the interval $I := \,]-\sqrt{2m}, \sqrt{2m}[$. We therefore do not need to worry about it. The last term of (26) has, under the assumptions $|E| < 1$ and $\tau \in I$, a purely imaginary numerator and a purely imaginary denominator. It is therefore, as a whole, a real term. The argument of the arsinh function, on the other hand, is purely imaginary. Using the identity for the arsinh of a purely imaginary argument, we deduce the form of the middle term. Since the Arg-function is real and the term in front of the arsinh is purely imaginary, we find that the middle term of (26) is real, too. This shows that the solution (26) is real for $|E| < 1$. The simplest way to verify the validity of equation (27) is to start from (25) and set $|E| = 1$.
This results in an elementary integral which indeed yields (27). That this is the same as taking the $|E| \to 1$ limit of equation (26) follows from the fact that the integrand of (25) converges uniformly to the $|E| = 1$ integrand (A4): defining the corresponding one-parameter family of functions, the uniform convergence allows us to exchange the limit and the integration, and the $|E| \to 1$ limit of the solution (26) follows suit. This is precisely the anticipated result (27). The $|E| \to \infty$ limit of equation (26) can be obtained in a similar manner. To this end, we define the functions $g_n$ and $g$ (A9), and we recognize the second function to be the integrand of (23), i.e. the integrand of the null equation of motion with L = 0. Moreover, one verifies easily that $g_n \to g$ uniformly on I. We can therefore again exchange limit and integration, from which we find that the solution (26) converges to (28), which is precisely the result anticipated there. We conclude that (26) is real valued and well-defined for all parameter values $E \in \mathbb{R}\setminus\{-1, 1\}$, and that the limits $|E| \to 1$ and $|E| \to \infty$ exist and are given by equations (27) and (28), respectively.
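A quick numerical cross-check of the |E| = 1 limit (a sketch; m is an arbitrary value and scipy is assumed): integrating the |E| = 1 integrand of (25) should reproduce the closed form (27).

```python
import numpy as np
from scipy.integrate import quad

# Sketch: numerical check of the |E| = 1 limit (27). Setting E = 1 in the
# time-like, L = 0 integrand of (25) and integrating from 0 should reproduce
#   4m arctanh(tau/sqrt(2m)) - 2 tau^3 / (3 sqrt(2m)) - 2 sqrt(2m) tau.

m = 5.0

def integrand(tau, E=1.0):
    f = 2.0 * m - tau**2
    return 2.0 * tau**4 * E / (f**1.5 * np.sqrt(1.0 + tau**2 * E**2 / f))

def closed_form_E1(tau):
    a = np.sqrt(2.0 * m)
    return 4.0 * m * np.arctanh(tau / a) - 2.0 * tau**3 / (3.0 * a) - 2.0 * a * tau

for tau in (0.5, 1.5, 2.5, 3.0):
    numeric, _ = quad(integrand, 0.0, tau)
    print(f"tau={tau}:  numeric={numeric:.6f}  closed form (27)={closed_form_E1(tau):.6f}")
```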
halid: 01724838
lang: en
domain: [ "phys.grqc" ]
timestamp: 2024/03/05 22:32:16
year: 2018
url: https://hal.science/hal-01724838/file/1802.04264.pdf
Eugenio Bianchi email: [email protected] Marios Christodoulou email: [email protected] Fabio D'Ambrosio email: [email protected] Carlo Rovelli email: [email protected] Hal M Haggard email: [email protected] White Holes as Remnants: A Surprising Scenario for the End of a Black Hole Quantum tunneling of a black hole into a white hole provides a model for the full life cycle of a black hole. The white hole acts as a long-lived remnant, solving the black-hole information paradox. The remnant solution of the paradox has long been viewed with suspicion, mostly because remnants seemed to be such exotic objects. We point out that (i) established physics includes objects with precisely the required properties for remnants: white holes with small masses but large finite interiors; (ii) non-perturbative quantum gravity indicates that a black hole tunnels precisely into such a white hole, at the end of its evaporation. We address the objections to the existence of white-hole remnants, discuss their stability, and show how the notions of entropy relevant in this context allow them to evade several no-go arguments. A black hole's formation, evaporation, tunneling to a white hole, and final slow decay form a unitary process that does not violate any known physics.

I. INTRODUCTION

The conventional description of black hole evaporation is based on quantum field theory on curved spacetime, with the back-reaction on the geometry taken into account via a mean-field approximation [START_REF] Hawking | Black hole explosions?[END_REF]. The approximation breaks down before evaporation brings the black hole mass down to the Planck mass ($m_{Pl} = \sqrt{\hbar c / G}$ ∼ the mass of a 1/2-centimeter hair). To figure out what happens next we need quantum gravity. A quantum-gravitational process that disrupts a black hole was studied in [START_REF] Rovelli | Planck stars[END_REF][START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF][START_REF] Lorenzo | Improved black hole fireworks: Asymmetric black-hole-to-white-hole tunneling scenario[END_REF][START_REF] Christodoulou | Planck star tunneling time: An astrophysically relevant observable from background-free quantum gravity[END_REF][START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF]. It is a conventional quantum tunneling, where classical equations (here the Einstein equations) are violated for a brief interval.
This alters the causal structure predicted by classical general relativity [START_REF] Frolov | Quantum Gravity removes Classical Singularities and Shortens the Life of Black Holes[END_REF][START_REF] Frolov | Spherically symmetric collapse in quantum gravity[END_REF][START_REF] Stephens | Black hole evaporation without information loss[END_REF][START_REF] Modesto | Disappearance of the black hole singularity in loop quantum gravity[END_REF][START_REF] Modesto | Evaporating loop quantum black hole[END_REF][START_REF] Mazur | Gravitational vacuum condensate stars[END_REF][START_REF] Ashtekar | Black hole evaporation: A paradigm[END_REF][START_REF] Balasubramanian | Information Recovery From Black Holes[END_REF][START_REF] Hayward | Formation and Evaporation of Nonsingular Black Holes[END_REF][START_REF] Hossenfelder | A model for non-singular black hole collapse and evaporation[END_REF][START_REF] Hossenfelder | Conservative solutions to the black hole information problem[END_REF][START_REF] Frolov | Information loss problem and a "black hole" model with a closed apparent horizon[END_REF][START_REF] Gambini | A scenario for black hole evaporation on a quantum Geometry[END_REF][START_REF] Gambini | Quantum shells in a quantum space-time[END_REF][START_REF] Bardeen | Black hole evaporation without an event horizon[END_REF][START_REF] Giddings | Quantum emission from two-dimensional black holes[END_REF], by modifying the dynamics of the local apparent horizon. As a result, the apparent horizon fails to evolve into an event horizon. Crucially, the black hole does not just 'disappear': it tunnels into a white hole [START_REF] Narlikar | High energy radiation from white holes[END_REF][START_REF] Hájíček | Singularity avoidance by collapsing shells in quantum gravity[END_REF][START_REF] Ambrus | Quantum superposition principle and gravitational collapse: Scattering times for spherical shells[END_REF][START_REF] Olmedo | From black holes to white holes: a quantum gravitational, symmetric bounce[END_REF] (from the outside, an object very similar to a black hole), which can then leak out the information trapped inside. The likely end of a black hole is therefore not to suddenly pop out of existence, but to tunnel to a white hole, which can then slowly emit whatever is inside and disappear, possibly only after a long time [START_REF] Aharonov | The unitarity puzzle and Planck mass stable particles[END_REF][START_REF] Giddings | Black holes and massive remnants[END_REF][START_REF] Callan | Evanescent black holes[END_REF][START_REF] Giddings | Constraints on black hole remnants[END_REF][START_REF] Preskill | Do Black Holes Destroy Information?[END_REF][START_REF] Banks | Classical and quantum production of cornucopions at energies below 1018 GeV[END_REF][START_REF] Banks | Lectures on black holes and information loss[END_REF][START_REF] Ashtekar | Information is Not Lost in the Evaporation of 2-dimensional Black Holes[END_REF][START_REF] Ashtekar | Evaporation of 2-Dimensional Black Holes[END_REF][START_REF] Ashtekar | Surprises in the Evaporation of 2-Dimensional Black Holes[END_REF][START_REF] Rama | Remarks on Black Hole Evolution a la Firewalls and Fuzzballs[END_REF][START_REF] Almheiri | An Uneventful Horizon in Two Dimensions[END_REF][START_REF] Chen | Black Hole Remnants and the Information Loss Paradox[END_REF][START_REF] Malafarina | Classical collapse to black holes and quantum bounces: A review[END_REF]. 
The tunneling probability may be small for a macroscopic black hole, but becomes large toward the end of the evaporation. This is because it increases as the mass decreases. Specifically, it will be suppressed at most by the standard tunneling factor

$$p \sim e^{-S_E/\hbar}, \qquad (1)$$

where $S_E$ is the Euclidean action for the process. This can be estimated on dimensional grounds for a stationary black hole of mass m to be $S_E \sim Gm^2/c$, giving

$$p \sim e^{-(m/m_{Pl})^2}, \qquad (2)$$

which becomes of order unity towards the end of the evaporation, when $m \to m_{Pl}$. A more detailed derivation is in [START_REF] Christodoulou | Planck star tunneling time: An astrophysically relevant observable from background-free quantum gravity[END_REF][START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF]. As the black hole shrinks towards the end of its evaporation, the probability to tunnel into a white hole is no longer suppressed. The transition gives rise to a long-lived white hole with Planck size horizon and very large but finite interior. Remnants in the form of geometries with a small throat and a long tail were called "cornucopions" by Banks et al. in [START_REF] Banks | Are horned particles the end point of Hawking evaporation?[END_REF] and studied in [START_REF] Banks | Lectures on black holes and information loss[END_REF][START_REF] Giddings | Dynamics of extremal black holes[END_REF][START_REF] Banks | Black hole remnants and the information puzzle[END_REF][START_REF] Giddings | Constraints on black hole remnants[END_REF]. As far as we are aware, the connection to the conventional white holes of general relativity was never made. This scenario offers a resolution of the information-loss paradox. Since there is an apparent horizon but no event horizon, a black hole can trap information for a long time, releasing it after the transition to a white hole. If we have a quantum field evolving on a black hole background metric and we call S its (renormalized) entanglement entropy across the horizon, then consistency requires the metric to satisfy non-trivial conditions: (a) The remnant has to store information with entropy $S \sim m_o^2/\hbar$ (we adopt units G = c = 1, while keeping ℏ explicit), where $m_o$ is the initial mass of the hole, before evaporation [START_REF] Marolf | The Black Hole information problem: past, present, and future[END_REF]. This is needed to purify Hawking radiation. (b) Because of its small mass, the remnant can release the inside information only slowly; hence it must be long-lived. Unitarity and energy considerations impose that its lifetime be equal to or larger than $\tau_R \sim m_o^4/\hbar^{3/2}$ [START_REF] Preskill | Do Black Holes Destroy Information?[END_REF][START_REF] Bianchi | Entanglement entropy production in gravitational collapse: covariant regularization and solvable models[END_REF]. (c) The metric has to be stable under perturbations, so as to guarantee that information can be released [START_REF] Lorenzo | Improved black hole fireworks: Asymmetric black-hole-to-white-hole tunneling scenario[END_REF][START_REF] Frolov | Black Hole Physics: Basic Concepts and New Developments[END_REF][START_REF] Barrab\'es | Death of white holes[END_REF][START_REF] Ori | Death of cosmological white holes[END_REF]. In this paper we show that under simple assumptions the effective metric that describes standard black hole evaporation followed by a transition to a Planck-mass white hole satisfies precisely these conditions. This result shows that this scenario is consistent with known physics and does not violate unitarity.
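A tiny numerical illustration of Eq. (2) (a sketch; Planck units and order-of-magnitude values only) shows how abruptly the suppression disappears near the Planck mass:

```python
import math

# Sketch (Planck units): the tunneling factor p ~ exp(-(m/m_Pl)^2) of Eq. (2)
# is utterly negligible for a macroscopic hole and becomes O(1) as m -> m_Pl.

for m_over_mpl in (1e38, 1e3, 10.0, 2.0, 1.0):
    exponent = -(m_over_mpl ** 2)
    # print log10(p) to avoid underflow for macroscopic masses
    print(f"m/m_Pl = {m_over_mpl:>8.4g}   log10(p) ~ {exponent / math.log(10):.3g}")
```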
One reason this scenario may not have been recognised earlier is because of some prejudices (including against white holes), which we discuss below. But the scenario presented here turns out to be consistent with general expectations held both in the AdS/CFT community (see for instance [START_REF] Engelhardt | Holographic consequences of a no transmission principle[END_REF][START_REF] Fitzpatrick | On information loss in AdS3/CFT2[END_REF]) and in the quantum gravity community (see for instance the 'paradigm' [START_REF] Ashtekar | Black hole evaporation: A paradigm[END_REF]).

II. THE INTERNAL GEOMETRY BEFORE QUANTUM GRAVITY BECOMES RELEVANT

We begin by studying the geometry before any quantum gravitational effect becomes relevant. The standard classical conformal diagram of a black hole formed by collapsing matter is depicted in Figure 1, for the case of spherical symmetry. Classical general relativity becomes insufficient when either (a) curvature becomes sufficiently large, or (b) sufficient time has elapsed. The two corresponding regions, A and B, where we expect classical general relativity to fail, are depicted in the figure. Consider the geometry before these regions, namely on a Cauchy surface Σ that crosses the horizon at some (advanced) time v after the collapse. See Figure 1. We are interested in particular in the geometry of the portion $\Sigma_i$ of Σ which is inside the horizon. Lack of focus on this interior geometry is, in our opinion, one of the sources of the current confusion. Notice that we are here fully in the expected domain of validity of established physics. The interior Cauchy surface can be conveniently fixed as follows. First, observe that a (2d, spacelike) sphere S in (4d) Minkowski space determines a preferred (3d) ball $\Sigma_i$ bounded by S: the one sitting on the same linear subspace (simultaneity surface) as S or, equivalently, the one with maximum volume. (Deformations from linearity in Minkowski space decrease the volume.) The first characterisation, linearity, makes no sense in a curved space, but the second, extremized volume, does. Following [START_REF] Christodoulou | How big is a black hole?[END_REF], we use this characterization to fix $\Sigma_i$, which, incidentally, provides an invariant definition of the "volume inside S". Large interior volumes and their possible role in the information paradox have also been considered in [START_REF] Stanford | Complexity and shock wave geometries[END_REF][START_REF] Perez | No firewalls in quantum gravity: the role of discreteness of quantum geometry in resolving the information loss paradox[END_REF][START_REF] Ori | Firewall or smooth horizon?[END_REF][START_REF] Ashtekar | The Issue of Information Loss: Current Status[END_REF][START_REF] Susskind | Black Holes and Complexity Classes[END_REF]. The interior is essentially a very long tube. As time passes, the radius of the tube shrinks, while its length increases, see Figure 2. It is shown in [START_REF] Christodoulou | How big is a black hole?[END_REF][START_REF] Bengtsson | Black holes: Their large interiors[END_REF][START_REF] Ong | Never Judge a Black Hole by Its Area[END_REF][START_REF] Wang | Maximal volume behind horizons without curvature singularity[END_REF] that for large time v the volume of $\Sigma_i$ is proportional to the time from collapse:

$$V \sim 3\sqrt{3}\; m_o^2\, v. \qquad (3)$$
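To get a feel for the numbers in Eq. (3) (a sketch in Planck units; the mass and the elapsed time are illustrative choices, not values from the paper):

```python
import math

# Sketch (Planck units, G = c = hbar = 1): the interior volume V ~ 3*sqrt(3)*m_o^2*v
# of Eq. (3) for a stellar-mass hole after a Hubble-scale advanced time v.

m_o = 1e38                    # ~ one solar mass in Planck masses
v = 8e60                      # ~ the age of the universe in Planck times
V_planck = 3.0 * math.sqrt(3.0) * m_o**2 * v         # in Planck volumes
V_m3 = V_planck * (1.6e-35) ** 3                      # rough conversion to m^3

print(f"V ~ {V_planck:.2e} Planck volumes ~ {V_m3:.2e} m^3")
```

The interior volume of an old stellar-mass hole is enormous, even though its horizon radius is only a few kilometers.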
(3) Christodoulou and De Lorenzo have shown [START_REF] Christodoulou | Volume inside old black holes[END_REF] that this picture is not changed by Hawking evaporation: toward the end of the evaporation the area of the (apparent) horizon of the black hole has shrunk substantially, but the length of the interior tube keeps growing linearly with time elapsed from the collapse. This can be huge for a black hole that started out as macroscopic (m o m P l ), even if the horizon area and mass have become small. The key point is that (3) still hold, with m o being the initial mass of the hole [START_REF] Christodoulou | Volume inside old black holes[END_REF], see also [START_REF] Ong | The Persistence of the Large Volumes in Black Holes[END_REF]. The essential fact that is often neglected, generating confusion, is that an old black hole that has evaporated down to mass m has the same exterior geometry as a young black hole with the same mass, but not the same interior: an old, largely evaporated hole has an interior vastly bigger than a young black hole with the same mass. This is conventional physics. To understand the end of a black hole's evaporation, it is important to distinguish the phenomena concerning along collapsing shell, ing their normal derivatives undetermined. Thus there seems to be plenty of room for patching in a nonsingular vacuum solution. To obtain some feeling for the motion of the collapsing shell we have made the fairly arbitrary assumption that a fi(r)- (2.13) This gives us a single first-order ordinary differential equation for R (r). The solution so obtained behaves like R(r)=Q+e r', as ~~oo. We can then use this solution to check that the other coefficient functions, to leading order, are well behaved for all finite values of ~. We can continue this procedure perturbatively, to verify that the coefficients in the expansion in powers of n are smooth functions of ~. Of course, this demonstration of a smooth perturbation expansion around the shell, does not guarantee the existence of an everywhere smooth solu-tion. We continue to search for a sensible ansatz that will enable us to demonstrate explicitly the existence of a smooth collapsing solution, but we feel confident that such a solution exists. The collapsing solution that we have described, begins as a dimple on Aat space. At any finite time after its for- mation, it will have the geometry shown in Fig. 2. We will refer to such an object as a finite volume cornu- copion. It is a solution of the field equations that is static over most of space. The time dependence occurs only in the tip of the horn. FIG. side the shell, but this is inconsistent with the field equa- tions. Similarly, an attempt to keep the three-dimensionally conformally Oat form of the metric, with conformal factor tied to the dilaton, is inconsistent. We have not been able to come up with a natural ansatz. Nonetheless, we believe that smooth solutions exist. There are many smooth solutions of the vacuum field equations restricted to a manifold with the topology of a hemi-three-sphere cross time. Our matching conditions fix only the values of the metric functions and dilaton along the timelike world line of the collapsing shell, leav-ing their normal derivatives undetermined. Thus there seems to be plenty of room for patching in a nonsingular vacuum solution. 
The quantum gravitational effects in regions A and B are distinct, and confusing them is a source of misunderstanding. Notice that a generic spacetime region in A is spacelike separated and in general very distant from region B. By locality, there is no reason to expect these two regions to influence one another. The quantum gravitational physical processes happening in these two regions must be considered separately. III. THE A REGION: TRANSITIONING ACROSS THE SINGULARITY To study the A region, let us focus on an arbitrary finite portion of the collapsing interior tube. As we approach the singularity, the Schwarzschild radius r s , which is a temporal coordinate inside the hole, decreases and the curvature increases. When the curvature approaches Planckian values, the classical approximation becomes unreliable. Quantum gravity effects are expected to bound the curvature [8-11, 13-19, 22-24, 27, 29, 64, 65]. Let us see what a bound on the curvature can yield. Following [START_REF] Rovelli | How Information Crosses Schwarzschild's Central Singularity[END_REF], consider the line element $ds^2 = -\frac{4(\tau^2+l)^2}{2m-\tau^2}\,d\tau^2 + \frac{2m-\tau^2}{\tau^2+l}\,dx^2 + (\tau^2+l)^2\, d\Omega^2$, (4) where $l \ll m$. This line element defines a genuine Riemannian spacetime, with no divergences and no singularities. Curvature is bounded. For instance, the Kretschmann invariant $K \equiv R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ is easily computed to be $K(\tau) \approx \frac{9\,l^2 - 24\,l\,\tau^2 + 48\,\tau^4}{(l+\tau^2)^8}\, m^2$ (5) in the large mass limit, which has the finite maximum $K(0) \approx \frac{9\, m^2}{l^6}$. (6) For all the values of τ where $l \ll \tau^2 < 2m$ the line element is well approximated by taking l = 0, which gives $ds^2 = -\frac{4\tau^4}{2m-\tau^2}\,d\tau^2 + \frac{2m-\tau^2}{\tau^2}\,dx^2 + \tau^4\, d\Omega^2$. (7) For τ < 0, this is the Schwarzschild metric inside the black hole, as can be readily seen going to Schwarzschild coordinates $t_s = x$ and $r_s = \tau^2$. (8) For τ > 0, this is the Schwarzschild metric inside a white hole. Thus the metric (4) represents a continuous transition of the geometry of a black hole into the geometry of a white hole, across a region of Planckian, but bounded, curvature. Geometrically, the τ = constant (space-like) surfaces foliate the interior of the black hole. Each of these surfaces has the topology S 2 × R, namely it is a long cylinder. As time passes, the radial size of the cylinder shrinks while the axis of the cylinder gets stretched. Around τ = 0 the cylinder reaches a minimal size, and then smoothly bounces back and starts increasing its radial size and shrinking its length.
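For the reader's convenience, the identification of (7) with the Schwarzschild interior under the substitution (8) can be spelled out explicitly; this is a standard one-line change of coordinates, written here in our notation.

```latex
% With t_s = x and r_s = tau^2, one has dr_s = 2 tau dtau, i.e. dtau^2 = dr_s^2/(4 r_s), so
\begin{align}
 -\frac{4\tau^{4}}{2m-\tau^{2}}\, d\tau^{2}
   &= -\frac{4 r_s^{2}}{2m-r_s}\,\frac{dr_s^{2}}{4 r_s}
    = -\Big(\frac{2m}{r_s}-1\Big)^{-1} dr_s^{2}\,, \\
 \frac{2m-\tau^{2}}{\tau^{2}}\, dx^{2}
   &= \Big(\frac{2m}{r_s}-1\Big)\, dt_s^{2}\,,
\end{align}
% and (7) becomes the interior (r_s < 2m) Schwarzschild line element
\begin{equation}
 ds^{2} = \Big(\frac{2m}{r_s}-1\Big) dt_s^{2}
        - \Big(\frac{2m}{r_s}-1\Big)^{-1} dr_s^{2} + r_s^{2}\, d\Omega^{2}\,,
\end{equation}
% in which r_s is the timelike coordinate; tau < 0 covers the black hole interior
% and tau > 0 the white hole interior, as stated in the text.
```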
The cylinder never reaches zero size but bounces at a small finite radius l. The Ricci tensor vanishes up to terms O(l/m). The resulting geometry is depicted in Figure 3. The region around τ = 0 is the smoothing of the central black hole singularity at r s = 0. This geometry can be given a simple physical interpretation. General relativity is not reliable at high curvature, because of quantum gravity. Therefore the "prediction" of the singularity by the classical theory has no ground. High curvature induces quantum particle creation, including gravitons, and these can have an effective energy momentum tensor that back-reacts on the classical geometry, modifying its evolution. Since the energy momentum tensor of these quantum particles can violate energy conditions (Hawking radiation does), the evolution is not constrained by Penrose's singularity theorem. Equivalently, we can assume that the expectation value of the gravitational field satisfies modified effective equations, including quantum corrections, that alter the classical evolution. The expected scale of the correction is the Planck scale. As long as $l \ll m$ the correction to the classical theory is negligible in all regions of small curvature; as we approach the high-curvature region the curvature is suppressed with respect to the classical evolution, and the geometry continues smoothly past τ = 0. One may be tempted to take l to be Planckian, $l_{Pl} = \sqrt{\hbar G/c^3} \sim \sqrt{\hbar}$, but this would be wrong. The value of l can be estimated from the requirement that the curvature is bounded at the Planck scale, $K(0) \sim 1/\hbar^2$. Using this in Eq. (6) gives $l \sim (m\,\hbar)^{1/3}$, (9) or, restoring for a moment physical units, $l \sim l_{Pl}\,(m/m_{Pl})^{1/3}$, (10) which is much larger than the Planck length when $m \gg m_{Pl}$ [START_REF] Rovelli | Planck stars[END_REF]. The three-geometry inside the hole at the transition time is $ds_3^2 = \frac{2m}{l}\,dx^2 + l^2\, d\Omega^2$. (11) The volume of the "Planck star" [START_REF] Rovelli | Planck stars[END_REF], namely of the minimal-radius surface, is $V = 4\pi l^2 \sqrt{\frac{2m}{l}}\,(x_{max} - x_{min})$. (12) The range of x is determined by the lifetime of the hole from the collapse to the onset of region B, as $x = t_s$. If region B is at the end of the Hawking evaporation, then $(x_{max} - x_{min}) \sim m^3/\hbar$ and, from Eq. (9), $l \sim (m\,\hbar)^{1/3}$, leading to an internal volume at crossover that scales as $V \sim m^4/\sqrt{\hbar}$. (13) We observe that in the classical limit the interior volume diverges, but quantum effects make it finite. The l → 0 limit of the line element (4) defines a metric space which is a Riemannian manifold almost everywhere and which can be taken as a solution of the Einstein equations that is not everywhere a Riemannian manifold [START_REF] Rovelli | How Information Crosses Schwarzschild's Central Singularity[END_REF]. Geodesics of this solution crossing the singularity are studied in [START_REF] Rovelli | How Information Crosses Schwarzschild's Central Singularity[END_REF]: they are well behaved at τ = 0 and they cross the singularity in a finite proper time. The possibility of this natural continuation of the Einstein equations across the central singularity of the Schwarzschild metric has been noticed repeatedly by many authors.
To the best of our knowledge it was first noticed by Synge in the fifties [START_REF] Synge | The Gravitational Field of a Particle[END_REF] and rediscovered by Peeters, Schweigert and van Holten in the nineties [START_REF] Peeters | Extended geometry of black holes[END_REF]. A similar observation has recently been made in the context of cosmology in [START_REF] Koslowski | Through the Big Bang[END_REF]. As we shall see in the next section, what the $\hbar \to 0$ limit does is to confine the transition inside an event horizon, making it invisible from the exterior. Reciprocally, the effect of turning on $\hbar$ is to de-confine the interior of the hole. IV. THE TRANSITION AND THE GLOBAL STRUCTURE The physics of the B region concerns gravitational quantum phenomena that can happen around the horizon after a sufficiently long time. The Hawking radiation provides the upper bound $\sim m_o^3/\hbar$ for this time. After this time the classical theory does not work anymore. Before studying the details of the B region, let us consider what we have so far. The spacetime diagram utilized to discuss the black hole evaporation is often drawn as in the left panel of Figure 4. What happens in the circular shaded region? What physics determines it? This diagram rests on an unphysical assumption: that the Hawking process proceeds beyond the point where the curvature at the horizon reaches the Planck scale, and pinches off the large interior of the black hole from the rest of spacetime. This assumption uses quantum field theory on curved spacetimes beyond its regime of validity. Without a physical mechanism for the pinching off, this scenario is unrealistic. More realistic spacetime diagrams for the possible formation and full evaporation of a black hole abound in the literature [8-11, 13-19, 22-24, 29], and they are all similar. In particular, it is shown in [START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF][START_REF] Lorenzo | Improved black hole fireworks: Asymmetric black-hole-to-white-hole tunneling scenario[END_REF] that the spacetime represented in the right panel of Figure 4 can be an exact solution of the Einstein equations, except for the two regions A and B, but including regions within the horizons. If the quantum effects in the region A are simply the crossing described in the previous section, this determines the geometry of the region past it, and shows that the entire problem of the end of a black hole reduces to the quantum transition in the region B. The important point is that there are two regions inside horizons: one below and one above the central singularity. That is, the black hole does not simply pop out of existence: it tunnels into a region that is screened inside an (anti-trapping) horizon. Since it is anti-trapped, this region is actually the interior of a white hole. Thus, black holes die by tunneling into white holes. Unlike in the case of the left panel of Figure 4, running the time evolution backwards now makes sense: the central singularity is screened by a horizon ('time-reversed cosmic censorship') and the overall backward evolution behaves qualitatively (not necessarily quantitatively, as initial conditions may differ) like the time-forward one. Since we have the explicit metric across the central singularity, we know the features of the resulting white hole.
The main consequence is that its interior is what results from the transition described in the above section: namely a white hole born possibly with a small horizon area, but in any case with a very large interior volume, inherited from the black hole that generated it. If the original black hole is an old hole that started out with a large mass m o , then its interior is a very long tube. Continuity of the size of the tube in the transition across the singularity results in a white hole formed by the bounce, which initially also consists of a very long interior tube, as in Figure 5. Subsequent evolution shortens it (because the time evolution of a white hole is the time reversal of that of a black hole), but this process can take a long time. Remarkably, this process results in a white hole that has a small Planckian mass and a long life determined by how old the parent black hole was. In other words, the outcome of the end of a black hole evaporation is a long-lived remnant. The time scales of the process can be labelled as in Figure 5. We call v o the advanced time of the collapse, v - and v + the advanced times of the onset and end of the quantum transition, u o the retarded time of the final disappearance of the white hole, and u - and u + the retarded times of the onset and end of the quantum transition. The black hole lifetime is $\tau_{bh} = v_- - v_o$. (14) The white hole lifetime is $\tau_{wh} = u_o - u_+$. (15) We assume that the duration of the quantum transition of the B region satisfies $u_+ - u_- = v_+ - v_- \equiv \Delta\tau$. Disregarding Hawking evaporation, a metric describing this process outside the B region can be written explicitly by cutting and pasting the extended Schwarzschild solution, following [START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF]. This is illustrated in Figure 6: two Kruskal spacetimes are glued across the singularity as described in the previous section and the shaded region is the metric of the portion of spacetime outside a collapsing shell (here chosen to be null). While the location of the A region is determined by the classical theory, the location of the B region, instead, is determined by quantum theory. The B process is indeed a typical quantum tunneling process: it has a long lifetime. A priori, the value of τ bh is determined probabilistically by quantum theory. As in conventional tunneling, in a stationary situation (when the horizon area varies slowly), we expect the probability p per unit time for the tunneling to happen to be time independent. This implies that the normalised probability P(t) that the tunneling happens between times t and t + dt is governed by $dP(t)/dt = -p\,P(t)$, namely $P(t) = \frac{1}{\tau_{bh}}\, e^{-t/\tau_{bh}}$, (16) which is normalised ($\int_0^\infty P(t)\,dt = 1$) and where τ bh satisfies $\tau_{bh} = 1/p$. (17) We note parenthetically that the quantum spread in the lifetime can be a source of apparent unitarity violation, for the following reason. In conventional nuclear decay, a tunneling phenomenon, the quantum indetermination in the decay time is of the same order as the lifetime. The unitary evolution of the state of a particle trapped in the nucleus is such that the state slowly leaks out, spreading over a vast region. A Geiger counter has a small probability of detecting the particle at any given time. Once the detection happens, there is an apparent violation of unitarity.
(In the Copenhagen language the Geiger counter measures the state, causing it to collapse, losing information. In the Many Worlds language, the state splits into a continuum of branches that decohere and the information of a single branch is less than the initial total information.) In either case, the evolution of the quantum state from the nucleus to a given Geiger counter detection is not unitary; unitarity is recovered by taking into account the full spread of different detection times. The same must be true for the tunneling that disrupts the black hole. If tunneling happens at a time t, unitarity can only be recovered by taking into account the full quantum spread of the tunneling time, which is to say, by summing over different future geometries. The quantum state is actually given by a quantum superposition of a continuum of spacetimes as in Figure 5, each with a different value of v - and v + . We shall not further pursue here the analysis of this apparent source of unitarity violation, but we indicate it for future reference. V. THE B REGION: THE HORIZON AT THE TRANSITION The geometry surrounding the transition in the B region is depicted in detail in Figure 7. The metric of the entire neighbourhood of the B region is an extended Schwarzschild metric. It can therefore be written in null Kruskal coordinates $ds^2 = -\frac{32 m^3}{r}\, e^{-\frac{r}{2m}}\, du\, dv + r^2\, d\Omega^2$, (18) where $\left(1 - \frac{r}{2m}\right) e^{\frac{r}{2m}} = uv$. (19) On the two horizons we have respectively v = 0 and u = 0, and separate regions where u and v have different signs as in the right panel of Figure 7. Notice the rapid change of the value of the radius across the B region, which yields a rapid variation of the metric components in (18). To fix the region B, we need to specify more precisely its boundary, which we have not done so far. It is possible to do so by identifying it with the diamond (in the 2d diagram) defined by two points $P_+$ and $P_-$ with coordinates $v_\pm, u_\pm$, both outside the horizon, at the same radius r P , and at opposite timelike distance from the bounce time, see Figure 8. The same radius r P implies $v_+ u_+ = v_- u_- \equiv \left(1 - \frac{r_P}{2m}\right) e^{\frac{r_P}{2m}}$. (20) The same time from the horizon implies that the light lines $u = u_-$ and $v = v_+$ cross on $t_s = 0$, or $u + v = 0$, hence $u_- = -v_+$. (21) This crossing point is the outermost reach of the quantum region, with radius r m determined by $v_+ u_- \equiv \left(1 - \frac{r_m}{2m}\right) e^{\frac{r_m}{2m}}$. (22) The region is then entirely specified by two parameters. We can take them to be r P and $\Delta\tau = v_+ - v_- \sim u_+ - u_-$. The first characterizes the radius at which the quantum transition starts; the second, its duration. (Strictly speaking, we could also have $v_+ - v_-$ and $u_+ - u_-$ of different orders of magnitude, but we do not explore this possibility here.) There are indications about both metric scales in the literature. In [START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF][START_REF] Haggard | Quantum Gravity Effects around Sagittarius A*[END_REF], arguments were given for $r_P \sim \frac{7}{3}\, m$.
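As a side remark, the Kruskal relation (19) can be inverted in closed form with the Lambert W function, which is convenient, for example, for plotting the constant-radius surfaces shown in the left panel of Figure 7. The following minimal Python sketch is ours and purely illustrative (it assumes SciPy is available; the function and variable names are not from the paper):

```python
# Invert the Kruskal relation (1 - r/2m) * exp(r/2m) = u*v for r.
# Rearranging gives (r/2m - 1) * exp(r/2m - 1) = -u*v/e, so
#   r = 2m * (1 + W(-u*v/e)),
# with W the principal branch of the Lambert W function (valid for r >= 0).
from math import e
from scipy.special import lambertw

def kruskal_radius(u, v, m=1.0):
    """Schwarzschild radius r at Kruskal coordinates (u, v), in units G = c = 1."""
    w = lambertw(-u * v / e, 0)  # principal branch; real as long as -u*v/e >= -1/e
    return 2.0 * m * (1.0 + w.real)

if __name__ == "__main__":
    print(kruskal_radius(0.0, 0.3))   # on a horizon: r = 2m
    print(kruskal_radius(0.2, 0.2))   # u*v > 0: inside,  r < 2m
    print(kruskal_radius(-0.2, 0.3))  # u*v < 0: outside, r > 2m
```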
Following [START_REF] Christodoulou | Planck star tunneling time: An astrophysically relevant observable from background-free quantum gravity[END_REF], the duration of the transition has been called "crossing time" and computed by Christodoulou and D'Ambrosio in [START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF][START_REF] Christodoulou | Geometry Transition in Covariant Loop Quantum Gravity[END_REF] using Loop Quantum Gravity: the result is $\Delta\tau \sim m$, which can be taken as a confirmation of earlier results [START_REF] Ambrus | Quantum superposition principle and gravitational collapse: Scattering times for spherical shells[END_REF][START_REF] Barceló | The lifetime problem of evaporating black holes: mutiny or resignation[END_REF][START_REF] Barceló | Black holes turn white fast, otherwise stay black: no half measures[END_REF] obtained with other methods. The two crucial remaining parameters are the black hole and the white hole lifetimes, τ bh and τ wh . The result in [START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF] indicates also that p, the probability of tunneling per unit time, is suppressed exponentially by a factor $e^{-m^2/\hbar}$. Here m is not the initial mass m o of the black hole at the time of its formation; rather, it is the mass of the black hole at the decay time. This is in accord with the semiclassical estimate that tunneling is suppressed as in (1) and (2). As mentioned in the introduction, because of Hawking evaporation, the mass of the black hole shrinks to Planckian values in a time of order $m_o^3/\hbar$, where the probability density becomes of order unity, giving $\tau_{bh} \sim m_o^3/\hbar$ (23) and $\Delta\tau \sim \sqrt{\hbar}$. (24) We conclude that region B has a Planckian size. We notice parenthetically that the value of p above is at odds with the arguments given in [START_REF] Haggard | Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling[END_REF] for a shorter lifetime $\tau_{bh} \sim m_o^2/\sqrt{\hbar}$. This might be because the analysis in [START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF] captures the dynamics of only a few of the relevant degrees of freedom, but we do not consider this possibility here. The entire range of possibilities for the black to white transition lifetime, $m_o^2/\sqrt{\hbar} \le \tau_{bh} \le m_o^3/\hbar$, may have phenomenological consequences, which have been explored in [START_REF] Barrau | Planck star phenomenology[END_REF][START_REF] Barrau | Fast radio bursts and white hole signals[END_REF][START_REF] Barrau | Phenomenology of bouncing black holes in quantum gravity: a closer look[END_REF][START_REF] Barrau | Bouncing black holes in quantum gravity and the Fermi gamma-ray excess[END_REF][START_REF] Rovelli | Planck stars as observational probes of quantum gravity[END_REF]. (On hypothetical white hole observations see also [START_REF] Retter | The revival of white holes as Small Bangs[END_REF]). VI. INTERIOR VOLUME AND PURIFICATION TIME Consider a quantum field living on the background geometry described above. Near the black hole horizon there is production of Hawking radiation. Its backreaction on the geometry gradually decreases the area of the horizon. This, in turn, increases the transition probability to a white hole.
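The $m_o^3/\hbar$ scale used here is the standard Hawking evaporation estimate; we sketch it for the reader's convenience, in our notation, with all order-one factors dropped and units G = c = 1.

```latex
% Standard estimate of the evaporation time (order-one factors dropped, G = c = 1).
% Hawking temperature: T_H ~ hbar/m.  Horizon area: A ~ m^2.
% Thermal luminosity: dm/dv ~ -A T_H^4 / hbar^3 ~ -hbar/m^2.  Integrating,
\begin{equation}
  m^{2}\, dm \sim -\hbar\, dv
  \qquad\Longrightarrow\qquad
  v_{\rm evap} \sim \frac{m_o^{3}}{\hbar}\,,
\end{equation}
% which is the bound quoted in the text and used for tau_bh in Eq. (23).
```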
After a time $\tau_{bh} \sim m_o^3/\hbar$, the area of the black hole reaches the Planckian scale $A_{bh}(\text{final}) \sim \hbar$, and the transition probability becomes of order unity. The volume of the transition surface is huge. To compute it with precision, we should compute the back-reaction of the inside component of the Hawking radiation, which gradually decreases the value of m as the coordinate x increases. Intuitively, the inside components of the Hawking pairs fall toward the singularity, decreasing m. Since most of the decrease is at the end of the process, we may approximate the full interior of the hole with that of a Schwarzschild solution of mass m o , and the first order estimate of the inside volume should not be affected by this process. Thus we may assume that the volume at the transition has the same order as the one derived in Eq. (13), namely $V_{bh}(\text{final}) \sim \sqrt{\hbar}\, m_o\, \tau_{bh} \sim m_o^4/\sqrt{\hbar}$. (25) Using the same logic in the future of the transition, we approximate the inside metric of the white hole with that of a Schwarzschild solution of Planckian mass, since in the future of the singularity the metric is again of Kruskal type, but now for a white hole of Planckian mass. The last parameter to estimate is the lifetime $\tau_{wh} = u_o - u_+$ of the white hole produced by the transition. To do so, we can assume that the internal volume is conserved in the quantum transition. The volume of the region of Planckian curvature inside the white hole horizon is then $V_{wh}(u) \sim l^2 \sqrt{\frac{m}{l}}\; \tau_{wh}$, (26) where now $l \sim m \sim \sqrt{\hbar}$, and therefore $V_{wh}(\text{initial}) \sim \hbar\, \tau_{wh}$. (27) Gluing the geometry on the past of the singularity to the geometry on the future side requires that the two volumes match, namely that (26) matches (13), and this gives $\tau_{wh} \sim m_o^4/\hbar^{3/2}$. (28) This shows that the Planck-mass white hole is a long-lived remnant [START_REF] Christodoulou | Volume inside old black holes[END_REF]. With these results, we can address the black hole information paradox. The Hawking radiation reaches future infinity before $u_-$, and is described by a mixed state with an entropy of order $m_o^2/\hbar$. This must be purified by correlations with field excitations inside the hole. In spite of the smallness of the mass of the hole, the large internal volume (25) is sufficient to host these excitations [START_REF] Rovelli | Black holes have more states than those giving the Bekenstein-Hawking entropy: a simple argument[END_REF]. This addresses the requirement (a) of the introduction, namely that there is a large information capacity. To release this entropy, the remnant must be long-lived. During this time, any internal information that was trapped by the black hole horizon can leak out. Intuitively, the interior member of a Hawking pair can now escape and purify the exterior quantum state. The long lifetime of the white hole allows this information to escape in the form of very low frequency particles, thus respecting bounds on the maximal entropy contained in a given volume with given energy. The lower bound on the remnant lifetime imposed by unitarity and energy considerations is $\tau_R \sim m_o^4/\hbar^{3/2}$ [START_REF] Preskill | Do Black Holes Destroy Information?[END_REF][START_REF] Marolf | The Black Hole information problem: past, present, and future[END_REF][START_REF] Bianchi | Entanglement entropy production in gravitational collapse: covariant regularization and solvable models[END_REF], and this is precisely the white hole lifetime (28) deduced above; hence we see that white holes satisfy the requirement (b) of the introduction.
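Note also that (23) and (28) imply $\tau_{wh}/\tau_{bh} \sim m_o/m_{Pl}$, so the white hole phase is enormously longer than the already long evaporation phase. To convey the orders of magnitude, the following short Python script (ours, purely illustrative; it only uses the order-of-magnitude relations quoted above, with all O(1) factors set to one) evaluates the relevant scales for a stellar-mass black hole.

```python
# Order-of-magnitude scales for a stellar-mass black hole, in Planck units
# (hbar = G = c = 1, so m_Pl = l_Pl = t_Pl = 1). All O(1) factors are set to 1,
# so the outputs are only indicative.
M_SUN_PLANCK = 9.1e37        # one solar mass in Planck masses (approximate)
T_PLANCK_S   = 5.4e-44       # Planck time in seconds (approximate)
L_PLANCK_M   = 1.6e-35       # Planck length in meters (approximate)

m_o = M_SUN_PLANCK

l       = m_o ** (1.0 / 3.0)   # bounce radius,        Eq. (9):  l ~ (m hbar)^(1/3)
tau_bh  = m_o ** 3             # evaporation time,     Eq. (23): tau_bh ~ m_o^3 / hbar
V_cross = m_o ** 4             # interior volume,      Eq. (13): V ~ m_o^4 / sqrt(hbar)
tau_wh  = m_o ** 4             # white hole lifetime,  Eq. (28): tau_wh ~ m_o^4 / hbar^(3/2)

print(f"bounce radius l      ~ {l * L_PLANCK_M:.1e} m")
print(f"black hole lifetime  ~ {tau_bh * T_PLANCK_S / 3.15e7:.1e} yr")
print(f"interior volume      ~ {V_cross * L_PLANCK_M**3:.1e} m^3")
print(f"white hole lifetime  ~ {tau_wh * T_PLANCK_S / 3.15e7:.1e} yr")
print(f"tau_wh / tau_bh      ~ {tau_wh / tau_bh:.1e}  (= m_o / m_Pl)")
```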
Therefore white holes realize precisely the long-lived remnant scenario for the end of the black hole evaporation that was conjectured and discussed mostly in the 1990's [START_REF] Giddings | Black holes and massive remnants[END_REF][START_REF] Giddings | Constraints on black hole remnants[END_REF][START_REF] Banks | Classical and quantum production of cornucopions at energies below 1018 GeV[END_REF][START_REF] Banks | Lectures on black holes and information loss[END_REF][START_REF] Banks | Are horned particles the end point of Hawking evaporation?[END_REF][START_REF] Giddings | Dynamics of extremal black holes[END_REF][START_REF] Banks | Black hole remnants and the information puzzle[END_REF][START_REF] Giddings | Constraints on black hole remnants[END_REF]. The last issue we should discuss is stability. Generically, white holes are known to be unstable under perturbations (see for instance Chapter 15 in [START_REF] Frolov | Black Hole Physics: Basic Concepts and New Developments[END_REF] and references therein). The instability arises because short-wavelength modes are exponentially blue-shifted along the white hole horizon. In the present case, however, we have a Planck-size white hole. To run this instability argument for a Planckian white hole, it is necessary to consider trans-Planckian perturbations. Assuming that no trans-Planckian perturbations exist, there are no instabilities to be considered. This addresses the requirement (c). Alternatively: a white hole is unstable because it may re-collapse into a black hole with similar mass; therefore a Planck-size white hole can at most re-collapse into a Planck-size black hole; but this has probability of order unity to tunnel back into a white hole in a Planck time. Therefore the proposed scenario addresses the consistency requirements (a), (b), and (c) for the solution of the information-loss paradox and provides an effective geometry for the end-point of black hole evaporation: a long-lived Planck-mass white hole. VII. ON WHITE HOLES Notice that from the outside, a white hole is indistinguishable from a black hole. This is obvious from the existence of the Kruskal spacetime, where the same region of spacetime (region I) describes both the exterior of a black hole and the exterior of a white hole. For $r_s > 2m$, the conventional Schwarzschild line element describes equally well a black hole exterior and a white hole exterior. The difference is only in what happens at r = 2m. The only locally salient difference between a white and a black hole is that if we add some generic perturbation or matter on a given constant-t s surface, in (the Schwarzschild coordinate description of) a black hole we see matter falling towards the center and accumulating around the horizon, while in (the Schwarzschild coordinate description of) a white hole we see matter that was accumulated around the horizon in the past, moving away from the center. Therefore the distinction is only one of "naturalness" of initial conditions: a black hole has "special" boundary conditions in the future, a white hole has "special" boundary conditions in the past. This difference can be described physically also as follows: if we look at a black hole (for instance when the Event Horizon Telescope [START_REF] Doeleman | Imaging an Event Horizon: submm-VLBI of a Super Massive Black Hole[END_REF] examines Sagittarius A*), we see a black disk.
This means that generic initial conditions on past null infinity give rise, on future null infinity, to a black spot with minimal incoming radiation: a "special" configuration in the future sky. By time-reversal symmetry, the opposite is true for a white hole; generic initial conditions on future null infinity require a black spot with minimal incoming radiation from past null infinity: a "special" configuration in the past. We close this section by briefly discussing the "no transmission principle" considered by Engelhardt and Horowitz in [START_REF] Engelhardt | Holographic consequences of a no transmission principle[END_REF]. By assuming "holographic" unitarity at infinity and observing that consequently information cannot leak out from the spacetime enclosed by a single asymptotic region, these authors rule out a number of potential scenarios, including the possibility of resolving generic singularities inside black holes. Remarkably, the scenario described here circumvents the no transmission principle and permits singularity resolution in the bulk: the reason is that this singularity is confined in a finite spacetime region and does not alter the global causal structure. VIII. ON REMNANTS The long-lived remnant scenario provides a satisfactory solution to the black-hole information paradox. The main reason it was largely discarded was that remnants appeared to be exotic objects, extraneous to known physics. Here we have shown that they are not: white holes are well known solutions of the Einstein equations and they provide a concrete model for long-lived remnants. Two other arguments made long-lived remnants unpopular: Page's version of the information paradox; and the fact that if remnants existed they would easily be produced in accelerators. Neither of these arguments applies to the long-lived remnant scenario of this paper. We discuss them below. In its interactions with its surroundings, a black hole with horizon area A behaves thermally as a system with entropy $S_{bh} = A/4\hbar$. This fact is supported by a large number of convincing arguments and continues to hold for the dynamical horizons we consider here. The Bekenstein-Hawking entropy provides a good notion of entropy that satisfies Bekenstein's generalized second law, in the approximation in which we can treat the horizon as an event horizon. In the white hole remnant scenario this is a good approximation for a long time, but fails at the Planck scale when the black hole transitions to a white hole. Let us assume for the moment that these facts imply the following hypothesis (see for instance [START_REF] Marolf | The Black Hole information problem: past, present, and future[END_REF]): (H) The total number of available states for a quantum system living on the internal spatial slice Σ i of Figure 1 is $N_{bh} = e^{S_{bh}} = e^{A/4\hbar}$. Then, as noticed by Page [START_REF] Page | Information in black hole radiation[END_REF], we immediately have an information paradox regardless of what happens at the end of the evaporation. The reason is that the entropy of the Hawking radiation grows with time. It is natural to interpret this entropy as correlation entropy with the Hawking quanta that have fallen inside the hole, but for this to happen there must be a sufficient number of available states inside the hole. If hypothesis (H) above is true, then this cannot be, because as the area of the horizon decreases with time, the number of available internal states decreases and becomes insufficient to purify the Hawking radiation.
The time at which the entropy of the Hawking radiation surpasses the Bekenstein-Hawking entropy $A/4\hbar$ is known as the Page time. This has led many to hypothesize that the Hawking radiation is already purifying itself by the Page time: a consequence of this idea is the firewall scenario [START_REF] Almheiri | Black holes: Complementarity or firewalls?[END_REF]. The hypothesis (H) does not apply to the white-hole remnants. As argued in [START_REF] Rovelli | Black holes have more states than those giving the Bekenstein-Hawking entropy: a simple argument[END_REF], growing interior volumes together with the existence of local observables imply that the number of internal states grows with time instead of decreasing as stated in (H). This is not in contradiction with the fact that a black hole behaves thermally in its interactions with its surroundings as a system with entropy $S = A/4\hbar$. The reason is that "entropy" is not an absolute concept and the notion of entropy must be qualified. Any definition of "entropy" relies on a coarse graining, namely on ignoring some variables: these could be microscopic variables, as in the statistical mechanical notion of entropy, or the variables of a subsystem over which we trace, as in the von Neumann entropy. The Bekenstein-Hawking entropy correctly describes the thermal interactions of the hole with its surroundings, because the boundary is an outgoing null surface and $S_{bh}$ counts the number of states that can be distinguished from the exterior; but this is not the number of states that can be distinguished by local quantum field operators on Σ i [START_REF] Rovelli | Black holes have more states than those giving the Bekenstein-Hawking entropy: a simple argument[END_REF]. See also [START_REF] Giddings | Statistical physics of black holes as quantum-mechanical systems[END_REF]. Therefore there is no reason for the Hawking radiation to purify itself by the Page time. This point has been stressed by Unruh and Wald in their discussion of the evaporation process on the spacetime pictured in the left panel of Figure 4, see e.g. [START_REF] Unruh | Information Loss[END_REF]. Our scenario differs from Unruh and Wald's in that the white hole transition allows the Hawking partners that fell into the black hole to emerge later and purify the state. They emerge slowly, over a time of order $m_o^4/\hbar^{3/2}$, in a manner consistent with the long life of the white hole established here. The second standard argument against remnants is that, if they existed, it would be easy to produce them. This argument assumes that a remnant has a small boundary area and little energy, but can have a very large number of states. The large number of states would contribute a large phase-space volume factor in any scattering process, making the production of these objects in scattering processes highly probable. Actually, since in principle these remnants could have an arbitrarily large number of states, their phase-space volume factor would be infinite, and hence they would be produced spontaneously everywhere. This argument does not apply to white holes. The reason is that a white hole is screened by an anti-trapping horizon: the only way to produce it is through quantum gravity tunneling from a black hole! Moreover, to produce a Planck-mass white hole with a large interior volume, we must first produce a large black hole and let it evaporate for a long time. Therefore the threshold to access the full phase-space volume of white holes is high.
A related argument is in [START_REF] Banks | Classical and quantum production of cornucopions at energies below 1018 GeV[END_REF], based on the fact that an infinite production rate is prevented by locality. In [START_REF] Giddings | Constraints on black hole remnants[END_REF] Giddings questions this point by treating remnants as particles of an effective field theory; such a field theory, however, may be a good approximation of a highly non-local structure like a large white hole only in the regime where the large number of internal states is not seen. See also [START_REF] Banks | Lectures on black holes and information loss[END_REF]. IX. CONCLUSION As a black hole evaporates, the probability to tunnel into a white hole increases. The suppression factor for this tunneling process is of order $e^{-m^2/m_{Pl}^2}$. Before reaching sub-Planckian size, the probability ceases to be suppressed and the black hole tunnels into a white hole. Old black holes have a large volume. Quantum gravitational tunneling results in a Planck-mass white hole that also has a large interior volume. The white hole is long-lived because it takes a while for its finite, but large, interior to become visible from infinity. The geometry outside the black to white hole transition is described by a single asymptotically-flat spacetime. The Einstein equations are violated in two regions: the Planck-curvature region A, for which we have given an effective metric that smooths out the singularity; and the tunneling region B, whose size and decay probability can be computed [START_REF] Christodoulou | Characteristic Time Scales for the Geometry Transition of a Black Hole to a White Hole from Spinfoams[END_REF]. These ingredients combine to give a white hole remnant scenario. This scenario provides a way to address the information problem. We distinguish two ways of encoding information, the first associated with the small area of the horizon and the second associated with the remnant's interior. The Bekenstein-Hawking entropy $S_{bh} = A/4\hbar$ is encoded on the horizon and counts states that can only be distinguished from outside. On the other hand, a white hole resulting from a quantum gravity transition has a large volume that is available to encode substantial information even when the horizon area is small. The white hole scenario's apparent horizon, in contrast to an event horizon, allows information to be released. The long-lived white hole releases this information slowly and purifies the Hawking radiation emitted during evaporation. Quantum gravity resolves the information problem. -CR thanks Ted Jacobson, Steve Giddings, Gary Horowitz, Steve Carlip, and Claus Kiefer for very useful exchanges during the preparation of this work. EB and HMH thank Tommaso De Lorenzo for discussion of time scales. EB thanks Abhay Ashtekar for discussion of remnants. HMH thanks the CPT for warm hospitality and support, Bard College for extended support to visit the CPT with students, and the Perimeter Institute for Theoretical Physics for generous sabbatical support. MC acknowledges support from the SM Center for Space, Time and the Quantum and the Leventis Educational Grants Scheme. This work is supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. FIG. 1. Conformal diagram of a classical black hole. The dashed line is the horizon.
The dotted line is a Cauchy surface Σ. In regions A and B we expect (distinct) quantum gravitational effects and classical GR is unreliable. FIG. 2. The interior geometry of an old black hole: a very long thin tube, whose length increases and whose radius decreases with time. Notice it is finite, unlike the Einstein-Rosen bridge. FIG. 3. The transition across the A region. FIG. 4. Left: A commonly drawn diagram for black hole evaporation that we argue against. Right: A black-to-white hole transition. The dashed lines are the horizons. FIG. 5. Black hole bounce, with a sketch of the inside geometries, before and after the quantum-gravitational transition. FIG. 6. Left: Two Kruskal spacetimes are glued at the singularity. The grey region is the metric of a black to white hole transition outside the collapsing and the exploding null shells. Right: The corresponding regions in the physical spacetime. FIG. 7. The B region. Left: Surfaces of equal Schwarzschild radius are depicted. Right: The signs of the null Kruskal coordinates around B. FIG. 8. The B transition region.
01772222
en
[ "info", "shs.edu" ]
2024/03/05 22:32:16
2015
https://hal.science/hal-01772222/file/OEGlobal2015_Teachers_Time_Is_Valuable.pdf
Camila Morais Canellas email: [email protected] Colin De La Higuera Teacher's Time is Valuable 1 Keywords: OER adoption, teachers' time, technology, pedagogy, workload Consistent adoption of Open Educational Resources (OER) depends on a number of factors. These may correspond to business, legal, policy, academic or technology issues. Among the vast number of elements making deployment harder, one is often underestimated: an increase in the demand on teachers' time. Some solutions to motivate and/or reward teachers who adopt OER have been proposed over the years and used successfully among the first adopters. Unfortunately, such solutions are not always possible in several contexts. In this paper, we propose that a key concern of research on OER technologies should be saving teachers' time when creating pedagogical resources. Introduction When deploying Open Educational Resources (OER) initiatives it is often the case that the project stalls because of having failed to foresee that the excellent teacher, whose work the OER is supposed to be based upon, has neither the time nor the energy to brush 2 her slides, to reengineer her course, to add the meta-data, to make herself available to answer the forum or prepare the additional material without which the courseware is not what the promoters were expecting. A number of studies have already pointed out that the lack of teacher involvement is an important (if not the single most important) issue when starting the adoption of OER. In certain contexts, the institution uses an incentive-based model in order to encourage the teachers to accept the extra workload. In under-resourced systems, or in places where newcomers to these questions are failing to understand the different issues, this may not be possible. In this work, we analyse the problem and the proposed alternatives, and argue in favour of deploying technology in a way that enhances the learning (and teaching) experience, whilst avoiding burdening the teacher with a heavier workload. Problem Statement 2.1. A List of Issues for OER Projects regarding the adoption of OER face a number of issues related to its attributes and decision points, as described in the framework proposed by [START_REF] Stacey | A Dialogue on Open Educational Resources and Social Authoring Models[END_REF]. The issues are grouped under the following categories: business, legal, policy, academic, technology. Moreover, in many countries the law has more or less broad definitions of fair use and educational use. Such a circumstance often means that teachers are used to incorporating in their pedagogical resources some materials of which they are not the authors, but which they believe they can use in a pedagogical environment. Nonetheless, when creating/adapting this material to be published as OER, this "exception" no longer applies and they face the need to brush the material 3 . Frequently, some of these key attributes (or the combination of them) may end up being more time-consuming than expected, especially regarding teachers' time. It was also noted that the question of effort is a crucial one when teachers' perceptions of OER production were surveyed: teachers do not perceive OER as a means to save time (Masterman, Wild, White, & Manton, 2011, p. 138).
About Teacher's Time Therefore, the issues related to the adoption of OER and the fact of having her work made public will have consequences for the work of the teacher: • She will receive extra pressure because her material is going to be made public -perhaps due to the fact that her course is going to be filmed; • The teacher is expected to help render her material impeccable. In some cases she will be asked to sign agreements she does not really understand (copyright, image forms); • The teacher may be asked to enrich her contents by providing additional texts, updating her lectures, adding a forum, some quizzes and exams, in order to better adapt the material to a larger public. The above has also been noted by several authors, including in the work by [START_REF] Masterman | The impact of OER on teaching and learning in UK universities: implications for Learning Design[END_REF] already mentioned. In fact, "time" is mentioned as the most significant barrier to adopting OER by two thirds of the respondents of an OECD questionnaire (Hylén, 2005, p. 4). Why Is This Not Always Taken into Account? We can conclude that the process of adopting OER is not easy, automatic or effortless. Still, why is it usually seen as such? Possibly, the common confusion between free and open has a role in this misconception. The result of this work, the resource itself, may be open and free, but the work of producing it is not necessarily so. Another probable reason lies in the view that, being seen as technology, OER are supposed to save money and time. When a clear policy for adoption of OER is absent, it is difficult for the decision makers to accept spending more money on this. Extra Funding, Direct Rewards to the Teacher Sometimes the institution is conscious that there is extra effort, that it needs this effort (because the institution has a visibility strategy, typically), and it is prepared to reward the teacher directly. The following types of incentives have been identified by [START_REF] West | OER Incentive Models Summaries[END_REF]: large lump sum stipends, incremented stipends, gifts instead of money, pay for creation of OER, sabbaticals and other existing institutional incentive plans. It should be noted that, in institutions that chose these incentives to get the ball rolling a few years ago, the tendency is now that enough motivation has been generated and these are no longer needed. Promotion of Teaching Quality In other cases, the prize may be deferred: the institution will attempt to promote quality of teaching as it promotes quality of research. These promotion efforts are visible: the teacher will know that making the extra effort will result in some form of prestige or of a deferred prize. The establishment of a credible academic reward system that includes OER was pointed out as one of the most important policy issues leading to large-scale adoption in an Australian study (Hylén, 2005, p. 6). Adding Support to the Teacher's Tasks In this case, rather than rewarding or recognizing the work done by the teacher, the organization encourages her work by creating services that will support her. Staff specialized in the legal questions regarding openness, or grants that can be used to accomplish the task, are some examples. This not only can save the teacher's time but also demonstrates the importance of the work for the institution.
Convincing the Staff that Practices Have Changed and Adaptation is Needed Finally, the argument used by decision makers can be that no intervention is required, as progress is just the natural course of history, and that teachers should adapt to new technologies just because it is their job to do so. Following this view, one should not regard the creation of OER as something exceptional, worth a specific prize. This point of view is defended by two very different types of decision makers: those who no longer teach, and those who are currently promoting the way teaching is (or should be) done and are possibly over-enthusiastic and sometimes fail to see that other teachers are not always so keen or have less time. The Role of Technology Arguments in favour of technology are often that it makes life simpler. Yet in the case of teaching, technology has been introduced to provide the teacher with tools she did not have before, allowing her to do her job much better, provided she uses these tools. However, it has often represented an increased cost in terms of time. Accordingly, it is possible to distinguish two types of technologies proposed to teachers: 1. Time-saving technologies: as an example provided by [START_REF] Cuban | Techno-Reformers and Classroom Teachers[END_REF], mimeograph machines and projectors were quickly and widely adopted by teachers around the world. 2. Time-consuming technologies: computers and television, also cited by [START_REF] Cuban | Techno-Reformers and Classroom Teachers[END_REF], represent technologies that may enhance the quality and the possibilities of teachers' practices, but they usually demanded more time and were not adopted in classrooms as widely as time-saving technologies. In fact, research has shown that having technology available does not mean that teachers will adopt it. The same occurs regarding OER adoption (Schuwer, Kreijns, & Vermeulen, 2014, p. 95). Two Alternatives We see two alternatives to the problem of teacher's time when adopting OER: the introduction of (or the publicity given to) time-saving technologies and/or a scenario where the teacher can do better with less work, or at least with as much work as before. Some examples: 1. Time-saving technology: a typical, heavily time-consuming task for the teacher is the production of metadata. This has been identified, for instance, as a blocking point in the Wikiwijs project [START_REF] Schuwer | Wikiwijs: An unexpected journey and the lessons learned towards OER[END_REF]. As a recurring theme of sessions at past OCW Conferences, the importance and difficulty of the question is clear. On the other hand, natural language processing (NLP) research is able, today, to provide automatic transcriptions for videos, summarizations, and identification of named entities 4 . 2. A pedagogical scenario using technology to improve the results, without adding much time to the teacher's agenda, is currently being studied at the University of Nantes. In the Informatics for beginners course, the teachers have to face a number of problems: • A large number of students, and hence of groups to be run in parallel; • The necessity to propose innovative courses, not just one that can be taught by the different instructors.
The choice which has been made is that of a first part including lectures on a wide range of topics (bio-informatics, natural language processing, distributed algorithms, cryptology and social networks); and a second part consisting of a programming course which makes use of the general topics addressed in the previous lectures. One of the problems is that a lot of information is passed to the students, and they need time to digest it. It has therefore been proposed to record the classes and make the videos available as OER. In order to make things more interesting, we considered pedagogical scenarios allowing the students to interact better with this material. However, in most of the envisaged scenarios, the teacher's cooperation was needed and the ideas were too demanding of her time, for which we did not have any support. We are therefore proposing to do the following: • The teacher proposes 8 to 10 quiz questions. It should be said that she has to do this anyway for the examination; • These quizzes are added to the video and synchronized with the moment the topic is addressed by the teacher; • The students, when viewing the video, can not only answer a quiz (and know if they were right -self-assessment), but also rate the quiz and even propose better quizzes (learn by teaching); • In order to avoid overloading the video with quizzes, the system automatically updates the list of the best quizzes proposed so far, following the evaluations made by the students. In this way, the intervention of the teacher is kept as small as possible: she is not asked to do more than she would do anyhow, and we hope to end up with better material. Conclusion Whereas technology has usually been deployed to answer needs identified by the teacher, and often by over-enthusiastic pedagogues identifying ever more complex and interesting learning scenarios, we feel that (at least part of) the technology should be introduced in order to help the teacher do better at an (almost) identical cost in terms of time. Technology is now being identified as a key factor for success in open education, as shown by: • The creation of the UNESCO Chair on Open Technologies for Open Educational Resources and Open Learning 5 ; • The Opening up Slovenia initiative 6 : initiatives to help promote open education in Slovenia are heavily backed by research; • The creation of partnerships based on research in tools based on machine learning, as suggested by the Knowledge 4 All Foundation 7 . We believe and expect new ideas and tools in this direction in the future. This work has received French government support granted to the COMIN Labs excellence laboratory and managed by the National Research Agency in the "Investing for the Futures" program ANR-JO-LABX-07-0J. Brushing the slides consists in making them free of any material inconsistent with the chosen licence. The Usual Solutions Overall, the effort the teacher has to make, once she has agreed to adopt OER, is quite substantial. On the other hand, what will the employer, or the team running the open education scheme, propose to leverage this effort? 3 Cf. http://www.jisc.ac.uk/publications/programmerelated/2013/Openeducationalresources.aspx, section Open Licensing and http://poerup.referata.com/wiki/France#Copyright_in_education. There are a number of conferences on these topics.
The webpage of the Association for Computational Linguistics is a good starting point: http://www.aclweb.org/ http://www.ouslovenia.net/project/unesco-chair-slovenia/ http://www.ouslovenia.net/ http://www.k4all.org/
01714251
en
[ "phys.grqc", "phys" ]
2024/03/05 22:32:16
2018
https://hal.science/hal-01714251/file/1802.02382.pdf
Carlo Rovelli Baptiste Le Biha Space and Time in Loop Quantum Gravity I. INTRODUCTION Newton's success sharpened our understanding of the nature of space and time in the XVII century. Einstein's special and general relativity improved this understanding in the XX century. Quantum gravity is expected to take a step further, deepening our understanding of space and time, by grasping the implications for space and time of the quantum nature of the physical world. The best way to see what happens to space and time when their quantum traits cannot be disregarded is to look at how this actually happens in a concrete theory of quantum gravity. Loop Quantum Gravity (LQG) [START_REF] Rovelli | Quantum Gravity[END_REF][START_REF] Rovelli | Covariant Loop Quantum Gravity[END_REF][START_REF] Thiemann | Modern Canonical Quantum General Relativity[END_REF][START_REF] Ashtekar | Introduction to Loop Quantum Gravity[END_REF][START_REF] Gambini | Loops, Knots, Gauge Theories and Quantum Gravity[END_REF][START_REF] Gambini | Introduction to loop quantum gravity[END_REF][START_REF] Perez | The Spin-Foam Approach to Quantum Gravity[END_REF] is among the few current theories sufficiently developed to provide a complete and clear-cut answer to this question. Here I discuss the role(s) that space and time play in LQG and the version of these notions required to make sense of a quantum gravitational world. For a detailed discussion, see the first part of the book [START_REF] Rovelli | Quantum Gravity[END_REF]. A brief summary of the structure of LQG is given in the Appendix, for the reader unfamiliar with this theory. II. SPACE Confusion about the nature of space -even more so for time-originates from failing to recognise that these are stratified, multi-layered concepts. They are charged with a multiplicity of attributes and there is no agreement on a terminology to designate spacial or temporal notions lacking some of these attributes. When we say 'space' or 'time' we indicate different things in different contexts. The only route to clarify the role of space and time in quantum gravity is to ask what we mean in general when we say 'space' or 'time' [START_REF] Van Fraassen | An introduction to the philosophy if time and space[END_REF]. There are distinct answers to this question; each defines a different notion of 'space' or 'time'. Let's disentangle them. I start with space, and move to time, which is more complex, later on. Relational space: 'Space' is the relation we use when we locate things. We talk about space when we ask "Where is Andorra?" and answer "Between Spain and France". Location is established in relation to something else (Andorra is located by Spain and France). Used in this sense 'space' is a relation between things. It does not require metric connotations. It is the notion of space Aristoteles refers to in his Physics, Descartes founds on 'contiguity', and so on. In mathematics it is studied by topology. This is a very general notion of space, equally present in ancient, Cartesian, Newtonian, and relativistic physics. This notion of space is equally present in LQG. In LQG, in fact, we can say that something is in a certain location with respect to something else. A particle can be at the same location as a certain quantum of gravity. We can also say that two quanta are adjacent. The network of adjacency of the elementary quanta of the gravitational field is captured by the graph of a spin network (see Appendix). The links of the graph are the elementary adjacency relations.
Spin networks describe relative spacial arrangements of dynamical entities: the elementary quanta. Newtonian space: In the XVII century, in the Principia, Newton introduced a distinction between two notions of space [START_REF] Newton | Scholium to the Definitions in Philosophiae Naturalis Principia Mathematica[END_REF]. The first, which he called the "common" one, is the one illustrated in the previous item. The second, which he called the "true" one, is what has been later called Newtonian space. Newtonian space is not a relation between objects: it is assumed by Newton to exist also in the absence of objects. It is an entity with no dynamics, with a metric structure: that of a 3d Euclidean manifold. It is postulated by Newton on the basis of suggestions from ancient Democritean physics, and is essential for his theoretical construction. 1 Special relativity modifies this ontology only marginally, merging Newtonian space and time into Minkowski's spacetime. In quantum gravity, Minkowski spacetime and hence Newtonian space appear only as an approximations, as we shall see below. They have no role at all in the foundation of the theory. General relativistic space: Our understanding of the actual physical nature of Newtonian space (and Minkowski spacetime) underwent a radical sharpening with the discovery of General Relativity (GR). The empirical success of GR -slowly cumulated for a century and recently booming-adds much credibility to the effectiveness of this step. What GR shows is that Newtonian space is indeed an entity as Newton postulated, but is not nondynamical as Newton assumed. It is a dynamical entity, very much akin to the electromagnetic field: a gravitational field. Therefore in GR there are two distinct spacial notions. The first is the simple fact that dynamical entities (all entities in the theory are dynamical) are localized with respect to one another ("This black hole is inside this globular cluster"). The second is a left-over habit from Newtonian logic: the habit of calling 'space' (or 'spacetime') one particular dynamical entity: the gravitational field. There is nothing wrong in doing so, provided that the substantial difference between these three notions of space (order of localization, Newtonian non-dynamical space, gravitational field) is clear. LQG treats space (in this sense) precisely as GR does: a dynamical entity that behaves as Newtonian space in a certain approximation. However, in LQG this dynamical entity has the usual additional properties of quantum entities. These are three: (i) Granularity. The quantum electromagnetic field has granular properties: photons. For the same reason, the quantum gravitational field has granular properties: the elementary quanta represented by the nodes of a spin network. Photon states form a basis in the Hilbert state of quantum electromagnetism like spin network states form a basis in the Hilbert space of LQG. (ii) Indeterminism. The dynamics of the 'quanta of space' (like that of photons) is probabilistic. (iii) Relationalism. Quantum gravity inherits all features of quantum mechanics including the weirdest. Quantum theory (in its most common interpretation) describes interactions among systems where properties become actual. So happens in LQG to the gravitational field: the theory describes how it interacts with other systems (and with itself) and how its properties become actual in interactions. More on this after we discuss time. III. 
TIME The case of time is parallel to that of space, but with some additional levels of complexity [START_REF] Callender | The Oxford handbook of philosophy of time[END_REF]. Relational time: 'Time' is the relation we use when we locate events. We are talking about time when we ask "When shall we meet?" and answer "In three days". Location of events is given with respect to something else. (We shall meet after three sunrises.) Used in this sense, time is a relation between events. This is the notion Aristotle refers to in his Physics, 2 and so on. It is a very general notion of time, equally present in ancient, Cartesian, Newtonian, and relativistic physics. When used in this wide sense, 'time' is definitely present in LQG. In LQG we can say that something happens when something else happens. For instance, a particle is emitted when two quanta of gravity join. Also, we can say that two events are temporally adjacent. A network of temporal adjacency of elementary processes of the gravitational field is captured by the spinfoams (see Appendix). Newtonian time: In the Principia, Newton distinguished two notions of time. The first, which he called the "common" one, is the one in the previous item. The second, which he called the "true" one, is what has later been called Newtonian time. Newtonian time is assumed to be "flowing uniformly", even when nothing happens, with no influence from events, and to have a metric structure: we can say when two time intervals have equal duration. Special relativity modifies the Newtonian ontology only marginally, merging Newtonian space and time into Minkowski spacetime. In LQG, (Minkowski spacetime and hence) Newtonian time appears only as an approximation. It has no role at all in the foundation of the theory. General relativistic time: What GR has shown is that Newtonian time is indeed (part of) an entity as Newton postulated, but this entity is not nondynamical as Newton assumed. Rather, it is an aspect of a dynamical field, the gravitational field. What the reading T of a common clock tracks, for instance, is a function of the gravitational field g_{\mu\nu}: T = \int \sqrt{g_{\mu\nu}\, dx^\mu dx^\nu}. (1) In GR, therefore, there are two distinct kinds of temporal notions. The first is the simple fact that all events are localized with respect to one another ("This gravitational wave was emitted when the two neutron stars merged", "The binary pulsar emits seven hundred pulses during an orbit"). The second is a left-over habit from Newtonian logic: the habit of calling 'time' (in 'spacetime') aspects of one specific dynamical entity: the gravitational field. Again, there is nothing wrong in doing so, provided that the difference between these three notions of time (relative order of events, Newtonian non-dynamical time, the gravitational field) is clear. 2 The famous definition is: Time is ἀριθμὸς κινήσεως κατὰ τὸ πρότερον καὶ ὕστερον, "the number of change with respect to before and after" (Physics, IV, 219 b 2; see also 232 b 22-23) [START_REF]Physics[END_REF]. LQG treats time (in this sense) as GR does: there is no preferred clock time, but many clock times measured by different clocks. In addition, however, clock times undergo standard quantum fluctuations like any other dynamical variable. There can be quantum superpositions between different values of the same clock time variable T. Our common intuition about time is profoundly marked by natural phenomena that are not generally present in fundamental physics.
Unless we disentangle these from the aspects of time described above, confusion reigns (I have extensively discussed the multiple aspects of temporality in the recent book [START_REF] Rovelli | The Order of Time[END_REF]). These fall into two classes: Irreversible time: When dealing with many degrees of freedom we recur to statistical and thermodynamical notions. In an environment with an entropy gradient there are irreversible phenomena. The existence of traces of the past versus the absence of traces of the future, or the apparent asymmetry of causation and agency, are consequences of the entropy gradient (of what else?). Our common intuition about time is profoundly marked by these phenomena. We do not know why was entropy as low in the past universe [START_REF] Earman | The Past Hypothesis: Not Even False[END_REF]. (A possibility is that this is a perspectival effect due to the way the physical system to which we belong couples with the rest of the universe [START_REF] Rovelli | Is Time's Arrow Perspectival?[END_REF].) Whatever the origin of the entropic gradient, it is a fact that all irreversible phenomena of our experience can be traced to (some version) of it [START_REF] Reichenbach | The philosophy of space and time[END_REF][START_REF] Albert | Time and Change[END_REF][START_REF] Price | Time's Arrow[END_REF]. This has nothing to do with the role of time in classical or quantum mechanics, in relativistic physics or in quantum gravity. There is no compelling reason to confuse these phenomena with issues of time in quantum gravity. Accordingly, nothing refers to 'causation, 'irreversibility' or similar, in LQG. LQG describes physical happening, the way it happens, its probabilistic relations, the microphysics, not the statistics of many degrees of freedom, entropy gradients or related irreversible phenomena. To address these, and understand the source of the the features that make a time variable 'special', we need a general covariant quantum statistical mechanics. Key steps in this direction exist (see [START_REF] Connes | Von Neumann algebra automorphisms and time thermodynamics relation in general covariant quantum theories[END_REF][START_REF] Rovelli | Statistical mechanics of gravity and the thermodynamical origin of time[END_REF] on thermal time, and [START_REF] Chirco | Statistical mechanics of reparametrization-invariant systems. It takes three to tango[END_REF] and references therein) but are incomplete. They have no direct bearing on LQG. Experiential time: The second class of phenomena that profoundly affects our intuition of time are those following from the fact that our brain is a machine that (because of the entropy gradient) remembers the past and works constantly to antici-pate the future [START_REF] Buonomano | Your Brain Is A Time Machine: The Nueroscience and Physics of Time[END_REF]. This working of our brain gives us a distinctive feeling about time: this is the feeling we call "flow", or the "clearing" that is is our experiential time [START_REF] Heidegger | Gesamtausgabe[END_REF]. This depends on the working of our brain, not on fundamental physics [START_REF] James | The Principles of Psychology[END_REF]. It is a mistake to search something pertaining to our feelings uniquely in fundamental physics. It would be like asking fundamental physics to directly justify the fact that a red frequency is more vivid to our eyes than a green one: a question asked the wrong chapter of science. Accordingly, nothing refers to "flowing", "passage" or the similar in LQG. 
LQG describes physical happening [START_REF] Dorato | Rovelli' s relational quantum mechanics, monism and quantum becoming[END_REF], the way they happen, their probabilistic relations, not idiosyncrasies of our brain (or our culture [START_REF] Everett | Don't Sleep, There Are Snakes[END_REF]). IV. PRESENTISM OR BLOCK UNIVERSE? A FALSE ALTERNATIVE. An ongoing discussion on the nature of time is framed as an alternative between presentism and block universe (or eternalism). This is a false alternative. Let me get rid of this confusion before continuing. Presentism is the idea of identifying what is real with what is present now, everywhere in the universe. Special relativity and GR make clear that an objective notion of 'present' defined all over the universe is not in the physical world. Hence there can be no objective universal distinction between past, present and future. Presentism is seriously questioned by this discovery, because to hold it we have to base it on a notion of present that lacks observable ground, and this is unpalatable. A common response states that (i) we must therefore identify what is real with the ensemble of all events of the universe, including past and future ones [START_REF] Putnam | Time and Physical Geometry[END_REF], and (ii) this implies that, since future and past are equally real, the passage of time is illusory, and there is no becoming in nature [START_REF] Mctaggart | The Unreality of Time[END_REF]. The argument is wrong. (i) is just a grammatical choice about how we decide to use the ambiguous adjective "real", it has no content [START_REF] Austin | Sense and Sensibilia[END_REF][START_REF] Quine | On What There Is[END_REF]. (ii) is mistaken because it treats time too rigidly, failing to realise that time can behave differently from our experience, and still deserve to be called time. The absence of a preferred objective present does not imply that temporality and becoming are illusions. Events happen, and this we call 'becoming', but their temporal relations form a structure richer than we previously thought. We have to adapt our notion of becoming to this discovery, not discard it. There are temporal relations, but these are local and not global; more precisely, there is a temporal ordering but it is a partial ordering, and not a complete one. The universe is an ensemble of processes that happen, and these are not organised in a unique global order. In the classical theory, they are organised in a nontrivial geom-etry. In the quantum theory, in possibly more complex patterns. The expression "real now here" can still be used to denote an ensemble of events that sit on the portion of a common simultaneity surface for a group of observers in slow relative motion; the region it pertains to must be small enough for the effects of the finite speed of light to be smaller than the available time resolution. When these conditions are not met, the expression "real now" simply makes no sense. Therefore the discovery of relativity does not imply that becoming or temporality are meaningless or illusory: it implies that they behave in a more subtle manner than in our pre-relativistic intuition. The best language for describing the universe remains a language of happening and becoming, not a language of being. Even more so when we fold quantum theory in. LQG describes reality in terms of processes. The amplitudes of the theory determine probabilities for processes to happen. This is a language of becoming, not being. In a process, variables change value. 
The quantum states of the theory encode the possible sets of values that are transformed into each other in processes. In simple words, the now is replaced by the here and now, not by a frozen eternity. Temporality, in the sense of becoming, is at the root of the language of LQG. But in LQG there is no preferred time variable, as I discuss in the next section. V. "ABSENCE OF TIME" AND RELATIVE EVOLUTION: TIME IS NOT FROZEN What is missing in LQG is not becoming. It is a (preferred) time variable. Let me start by reviewing the (different) roles of the coordinates in Newtonian physics and GR. Newtonian space is a 3d Euclidean space and Newtonian time is a uniform 1d metric line. Euclidean space admits families of Cartesian coordinates X and the time line carries a natural (affine) metric coordinate T. These quantities are tracked by standard rods and clocks. Rods and clocks are not strictly needed for localisation in time and space, because anything can be used for relative localisation, but they are convenient in the presence of a rigid background metric structure such as the Newtonian or the special relativistic one. Rods and clocks are also useful in GR, but far less central. Einstein relied on rods and clocks in the early days of the theory, but later realized that this was a mistake and repeatedly de-emphasized their role at the foundation of his theory. In fact, he cautioned against giving excessive weight to the fact that the gravitational field defines a geometry [START_REF] Lehmkuhl | Why Einstein did not believe that general relativity geometrizes gravity[END_REF]. He regarded this fact as a convenient mathematical feature and a useful tool to connect the theory to the geometry of Newtonian space [START_REF] Einstein | Geometrie und Erfahrung[END_REF], but the essential point about GR is not that it describes gravitation as a manifestation of a Riemannian spacetime geometry; it is that it provides a field theoretical description of gravitation [START_REF] Einstein | The Meaning of Relativity[END_REF]. GR's general coordinates x, t are devoid of metrical meaning, unrelated to rods and clocks, and arbitrarily assigned to events. This is imposed by the fact that the dynamics of rods and clocks is determined by their interaction with the gravitational field. Therefore the general relativistic coordinates do not have the direct physical interpretation of Newtonian and special relativistic coordinates. To compare the theory to reality we have to find coordinate-invariant quantities. This generates some technical complication but is never particularly hard in realistic applications. But the relativistic t coordinate should not be confused with intuitive time, nor with clock time. Clock time is computed in the theory by the proper time (1) along a worldline. The reason is that this quantity counts, say, the oscillations of a mechanism following the worldline. Contrary to what is often wrongly stated, this is not a postulate of the theory: it is a consequence of the equations of motion of the mechanism. Given two events in spacetime, the clock time separation between them depends on the worldline of the clock. Therefore there is no single meaning to the time separation between two events. This does not make the notion of time inconsistent: it reveals it to be richer than our naive intuition. It is a fact that two clocks separated and then brought back together in general do not indicate the same time (a minimal numerical illustration is sketched below).
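The following toy computation is an illustration added to this text, not part of the original paper. It shows the point in the flat-spacetime special case of (1), with c = 1 and invented numbers: two clocks travel between the same pair of events along different worldlines and accumulate different proper times τ = ∫ √(1 − v²) dt.

```python
# Toy illustration (flat spacetime, c = 1, invented numbers): two clocks
# between the same pair of events.  Clock A stays at rest; clock B moves out
# at speed V and comes back.  Proper time along a worldline is the flat-space
# special case of the formula T = ∫ sqrt(g_mu_nu dx^mu dx^nu) quoted above.

from math import sqrt

T_TOTAL = 10.0   # coordinate time between the two meeting events
V = 0.8          # speed of the travelling clock

def proper_time(velocity_profile, t_total, steps=100_000):
    dt = t_total / steps
    return sum(sqrt(1.0 - velocity_profile(i * dt) ** 2) * dt for i in range(steps))

clock_a = proper_time(lambda t: 0.0, T_TOTAL)                           # 10.0
clock_b = proper_time(lambda t: V if t < T_TOTAL / 2 else -V, T_TOTAL)  # 6.0

print(clock_a, clock_b)   # the two reunited clocks do not agree
```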
Accord of clocks is an approximate phenomenon, due to the peculiar environment in which we conduct our usual business. Due to the discrepancy between clocks, it makes no sense to interpret dynamics as evolution with respect to one particular clock, as Newton wanted. 3 Accordingly, the dynamics of GR is not expressed in terms of evolution in a single clock time variable; it is expressed in terms of relative evolution between observable quantities (a detailed discussion is in Chapter 3 of [START_REF] Rovelli | Quantum Gravity[END_REF]). This fact makes it possible to get rid of the t variable altogether, and to express the dynamical evolution directly in terms of the relative evolution of dynamical variables (Chapter 3 of [START_REF] Rovelli | Quantum Gravity[END_REF]). Thus, special clocks or preferred spatial or temporal variables are not needed in relativistic physics. A formulation of classical GR that does not employ the time variable t at all is the Hamilton-Jacobi formulation [START_REF] Peres | On Cauchy's problem in General Relativity[END_REF]. It is expressed uniquely in terms of the three-metric q_{ab} of a spacelike surface and is defined by the two equations D_a \frac{\delta S[q]}{\delta q_{ab}} = 0, \qquad G_{abcd}\,\frac{\delta S[q]}{\delta q_{ab}}\frac{\delta S[q]}{\delta q_{cd}} + \det q \; R[q] = 0, (2) where G_{abcd} = q_{ac} q_{bd} + q_{ad} q_{bc} - q_{ab} q_{cd} and R is the Ricci scalar of q. Notice the absence of any temporal coordinate t. In principle, knowing the solutions of these equations is equivalent to solving the Einstein equations. Here S[q] is the Hamilton-Jacobi function of GR. When q is the 3-metric of the boundary of a compact region R of an Einstein space, S[q] can be taken to be the action of a solution of the field equations in this region. It is the quantity connected to the LQG amplitudes as in (A1). Absence of a time variable does not mean that "time is frozen" or that the theory does not describe dynamics, as unfortunately is still heard. Equations (2) indeed provide an equivalent formulation of standard GR and can describe the solar system dynamics, black holes, gravitational waves and any other dynamical process, where things become, without any need of an independent t variable. In these phenomena many physical variables change together, and no preferred clock or parameter is needed to track change. The same happens in LQG. The quantum versions of (2) formally determine the transition amplitudes between quantum states of the gravitational field. These can be coupled to matter and clocks. Variables change together and no preferred clock variable is used in the theory. It is in this weak sense that it is sometimes said that "time does not exist" at the fundamental level in quantum gravity. This expression means that there is no time variable in the fundamental equations. It does not mean that there is no change in nature. The theory is indeed formulated in terms of probability amplitudes for processes. VI. QUANTUM THEORY WITHOUT SCHRÖDINGER EQUATION Quantum mechanics requires some cosmetic adaptations in order to deal with the way general relativistic physics treats becoming. General relativistic physics describes becoming as the evolution of variables that change together; any of them can be used to track change. No preferred time variable is singled out. Quantum mechanics, instead, is commonly formulated in terms of a preferred independent clock variable T. Evolution in T is expressed either in the form of the Schrödinger equation, i\hbar\,\frac{\partial\psi}{\partial T} = H\psi, (3) or as a dynamical equation for the variables, \frac{dA}{dT} = \frac{i}{\hbar}\,[H, A], (4) where H is the Hamiltonian operator (a minimal numerical sketch of this standard clock-time evolution is given below).
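As a concrete reference point for equations (3) and (4), here is a minimal textbook sketch, added as an illustration and not taken from the paper: a spin-1/2 with the illustrative Hamiltonian H = (ω/2)σ_z, evolved with respect to the external clock variable T (ℏ = 1 in the code). It shows the standard pattern that the next paragraph contrasts with relativistic relative evolution.

```python
# Textbook sketch of evolution with respect to a preferred clock variable T
# (hbar = 1, invented parameters): a spin-1/2 with H = (omega/2) sigma_z.
# The state obeys the Schroedinger equation (3); the observable obeys the
# Heisenberg equation (4).

import numpy as np

omega = 1.0
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)      # eigenstate of sigma_x

def sigma_x_expectation(T):
    # U = exp(-i H T) for H = (omega/2) sigma_z, written out explicitly
    U = np.diag([np.exp(-0.5j * omega * T), np.exp(0.5j * omega * T)])
    A_T = U.conj().T @ sigma_x @ U                       # Heisenberg-picture A(T)
    return np.real(psi0.conj() @ A_T @ psi0)

for T in (0.0, np.pi / 2, np.pi):
    print(T, sigma_x_expectation(T))                     # cos(omega T): 1, ~0, -1
```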
Neither of these equations is adapted to describing relativistic relative evolution. The extension of quantum theory to relativistic evolution is, however, not very hard, and has been developed by many authors, starting from Dirac. See for instance Chapter 5 of [START_REF] Rovelli | Quantum Gravity[END_REF], or [START_REF] Rovelli | Covariant Loop Quantum Gravity[END_REF], or, from a slightly different perspective, the extensive work of Jim Hartle [START_REF] Hartle | Spacetime quantum mechanics and the quantum mechanics of spacetime[END_REF] on this topic. Like classical mechanics, quantum mechanics can be phrased as a theory of the probabilistic relations between the values of variables evolving together, rather than of variables evolving with respect to a single time parameter. The Schrödinger equation is then replaced by a Wheeler-DeWitt equation, C\psi = 0, (5) or by a dynamical equation for the variables, [A, C] = 0, (6) for a suitable Wheeler-DeWitt operator C. Again: these equations do not mean that time is frozen or that there is no dynamics. They mean that the dynamics is expressed as joint evolution between variables, rather than as evolution with respect to a single special variable. Formally: the \hbar \to 0 limit of equation (5) is the second equation in (2); given boundary values, (5) is formally solved by the transition amplitudes W; these can be expressed as a path integral over fields in the region, and in the \hbar \to 0 limit W \sim e^{iS/\hbar}, where S is a solution to (2). These are formal manipulations. LQG provides finite and well-defined expressions for W, at any order in a truncation in the number of degrees of freedom. Quantum theory does not describe how things 'are'. It describes quantum events that happen when systems interact [START_REF] Rovelli | Space is blue and birds fly through it[END_REF]. We mentally separate a 'quantum system', for a certain time interval, from the rest of the world, and describe the way this interacts with its surroundings. This peculiar conceptual structure at the foundations of quantum theory takes a surprising twist in quantum gravity. In quantum gravity we identify the process of the 'quantum system' with a finite spacetime region. This yields a remarkable dictionary between the relational structure of quantum theory and the relational structure of relativistic spacetime: quantum transition ↔ 4d spacetime region; initial and final states ↔ 3d boundaries; interaction ('measurement') ↔ contiguity. Thus the quantum states of LQG sit naturally on 3d boundaries of 4d regions (see Figure 1) [START_REF] Oeckl | A 'general boundary' formulation for quantum mechanics and quantum gravity[END_REF]. The quantum amplitudes are associated with what happens inside the regions. Intuitively, they can be understood as a path integral over all possible internal geometries, at fixed boundary data. For each set of boundary data, the theory gives an amplitude that determines the probability for this process to happen, with respect to other processes. Remarkably: the net of quantum interactions between systems is the same thing as the net of adjacent spacetime regions. 2. Spacetime is a name given to the gravitational field in classical GR. In LQG there is a gravitational field, but it is not a continuous metric manifold. It is a quantum field with the usual quantum properties of discreteness, indeterminism and quantum relationality. 3.
Space and time can refer to preferred variables used to locate things or to track change, in particular reading of meters and clocks. In LQG, rods and clocks and their (quantum) behaviour can in principle be described, but play no role in the foundation of the theory. The equations of the theory do not have preferred spacial or temporal variables. 4. Thermal, causal, "flowing" aspects of temporality are ground on chapters of science distinct from the elementary quantum mechanics of reality. They may involve thermal time, perspectival phenomena, statistics, brain structures, or else. 5. The universe described by quantum gravity is not flowing along a single time variable, nor organised into a smooth Einsteinian geometry. It is a network of quantum processes, related to one another, each of which obeys probabilistic laws that the theory captures. The net of quantum interactions between systems is identified with the net of adjacent spacetime regions. These are the roles of space and time in Loop Quantum Gravity. Much confusion about these notions in quantum gravity is confusion between these different meanings of space and time. Appendix A: Loop Quantum Gravity in a nutshell As any quantum theory, LQG can be defined by a Hilbert space, an algebra of operators and a family of transition amplitudes. The Hilbert space H of the theory admits a basis called the spin network basis, whose states |Γ, j l , v n are labelled by a (abstract, combinatorial) graph Γ, a discrete quantum number j l for each link l of the graph, and a discrete quantum number v n for each node n of the graph. The nodes of the graph are interpreted as elementary 'quanta of gravity' or 'quanta of space', whose adjacency is determined by the links, see Figure 2. These quanta do not live on some space: rather, they themselves build up physical space. The volume of these quanta is discrete and determined by v n . The area of the surfaces separating two nodes is also discrete, and determined by j l . The elementary quanta of space do not have a sharp metrical geometry (volume and areas are not sufficient to determine geometry), but in the limit of large quantum numbers there are states in H that approximate 3d geometries arbitrarily well, in the same sense in which linear combinations of photon states approximate a classical electromagnetic field. The spin network states are eigenstates of operators A l and V l in the operator algebra of the theory, respectively associated to nodes and links of the graph. In the classical limit these operators become functions of the Einstein's gravitational field g µν , determined by the standard relativistic formulas for area and volume. For instance, V (R) = R √ det q, for the volume of a 3d spacial region R, where q is the 3-metric induced on R. In the covariant formalism (see [START_REF] Rovelli | Covariant Loop Quantum Gravity[END_REF]), transition amplitudes are defined order by order in a truncation on the number of degrees of freedom. At each order, a transition amplitude is determined by a spinfoam: a combinatorial structure C defined by elementary faces joining on edges in turn joining on vertices (in turn, labeled by quantum numbers on faces and edges), as in Figure 3. A spinfoam can be viewed as the Feynman graph of a history of a spin network; equivalently, as a (dual) dis-crete 4d geometry: a vertex corresponds to an elementary 4d region, an elementary process. The boundary of a spinfoam is a spin network. 
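As a purely illustrative companion to this description (added here, not part of the paper), a spinfoam can be stored as a two-complex with spins on its faces; in the standard correspondence, edges of the foam that reach the boundary become the nodes of the boundary spin network, and boundary faces become its links. All names and labels below are invented toy data.

```python
# Toy data structure (invented labels): a spinfoam as a two-complex whose faces
# carry spins j_f.  Edges that reach the boundary become nodes of the boundary
# spin network; faces that reach the boundary become its links.

spinfoam = {
    "vertices": ["v1"],                               # elementary 4d processes
    "edges":    {"e1": "boundary", "e2": "boundary", "e3": "boundary"},
    "faces":    {"f1": {"edges": ("e1", "e2"), "spin": 0.5},
                 "f2": {"edges": ("e2", "e3"), "spin": 1.0},
                 "f3": {"edges": ("e3", "e1"), "spin": 1.5}},
}

def boundary_spin_network(foam):
    nodes = [e for e, kind in foam["edges"].items() if kind == "boundary"]
    links = [(f["edges"], f["spin"]) for f in foam["faces"].values()]
    return nodes, links

print(boundary_spin_network(spinfoam))
# (['e1', 'e2', 'e3'], [(('e1', 'e2'), 0.5), (('e2', 'e3'), 1.0), (('e3', 'e1'), 1.5)])
```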
The theory associates an amplitude W_C(Γ, j_l, v_n) (a complex number) to spinfoams. These are ultraviolet finite. Several theorems relate them to the action (more precisely, the Hamilton function S) of GR, in the limit of large quantum numbers. This is the expected formal relation between the quantum dynamics, expressed in terms of transition amplitudes W, and its classical limit, expressed in terms of the action S: W \sim e^{iS/\hbar}, (A1) where W and S are both functions of the boundary data. This concludes the sketch of the formal structure of (covariant) LQG. Notice that nowhere in the basic equations of the theory does a time coordinate t or a space coordinate x show up. Figure 1. A compact spacetime region is identified with a quantum transition. The states of LQG sit on its boundary. Figure 2. The graph of a spin network and an intuitive image of the quanta of space it represents. Figure 3. Spinfoam: the time evolution of a spin network. 1 During the XIX century, certain awkward aspects of this Newtonian hypostasis led to the development of the notion of 'physical reference system': the idea that Newtonian space captures the properties of preferred systems of bodies not subject to forces. This is correct, but it already presupposes the essential ingredient: a fixed metric space, permitting one to locate things with respect to distant reference bodies. Thus the notion of reference system does not add much to the novelty of the Newtonian ontology. 3 Given two clocks that measure different time intervals between two events, it makes no sense to ask which of the two is the 'true time': the theory simply allows us to compute the way each changes with respect to the other.
00845520
en
[ "phys.grqc", "phys.qphy" ]
2024/03/05 22:32:16
2017
https://hal.science/hal-00845520/file/1306.5206.pdf
Eugenio Bianchi Hal M Haggard Carlo Rovelli The boundary is mixed We show that Oeckl's boundary formalism incorporates quantum statistical mechanics naturally, and we formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, and surmise that local gravitational processes are indivisibly statistical with no possible quantal versus probabilistic distinction. I. INTRODUCTION Quantum field theory and quantum statistical mechanics provide a framework within which most of current fundamental physics can be understood. In their usual formulation, however, they are not at ease in dealing with gravitational physics. The difficulty stems from general covariance and the peculiar way in which general relativistic theories deal with time evolution. A quantum statistical theory including gravity requires a generalized formulation of quantum and statistical mechanics. A key tool in this direction which has proved effective in quantum gravity, is Oeckl's idea [START_REF] Oeckl | A 'general boundary' formulation for quantum mechanics and quantum gravity[END_REF][START_REF] Oeckl | General boundary quantum field theory: Foundations and probability interpretation[END_REF] of using a boundary formalism, reviewed below. This formalism combines the advantages of an S-matrix transition-amplitude language with the possibility of defining the theory without referring to asymptotic regions. It is a language adapted to general covariant theories, where "bulk" observables are notoriously tricky, because it can treat dependent and independent variables on the same footing. This formalism allows a general covariant definition of transition amplitudes, n-point functions and in particular the graviton propagator [START_REF] Rovelli | Graviton propagator from backgroundindependent quantum gravity[END_REF][START_REF] Bianchi | Graviton propagator in loop quantum gravity[END_REF]. These are defined on compact spacetime regions-the dependence on the boundary metric data makes general covariance explicit and circumvents the difficulties (e.g. [START_REF] Arkani-Hamed | A Measure of de Sitter entropy and eternal inflation[END_REF]) usually associated to the definition of these quantities in a general covariant theory. In the boundary formalism, the focus is moved from "states", which describe a system at some given time, to "processes", which describe what happens to a local system during a finite time-span. For a conventional nonrelativistic system, the quantum space of the processes, B (for "boundary"), is simply the tensor product of the initial and final Hilbert state spaces. Tensor states in B represent processes with given initial and final states. What about the vectors in B that are not of the tensor form? Remarkably, it turns out that mixed statistical quantum states are naturally represented by these non-tensor states [START_REF] Bianchi | Talk at the 2012 Marcel Grossmann meeting[END_REF]. Here we formalize this observation, showing how statistical expectation values are expressed in this language. This opens the way to a systematic treatment of general-covariant quantum statistical mechanics, a problem still wide open. 
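Anticipating the explicit construction of Section II below, the claim that non-tensor boundary vectors represent mixed states can be checked numerically in the smallest possible example. The sketch below is an illustration added to this text, with an invented Hamiltonian and ℏ = 1; W_t, ρ_t and the correlation formula are coded directly from the definitions given later in the paper, reading vectors of H_0 ⊗ H_t^* as operators on H.

```python
# Minimal numerical check (invented toy data): for a qubit, a mixed state rho
# corresponds to a non-tensor boundary vector rho_t in B_t = H_0 (x) H_t^*,
# it satisfies W_t(rho_t) = 1, and the boundary expression W_t((B (x) A) rho_t)
# reproduces the ordinary correlation Tr[ e^{iHt} A e^{-iHt} B rho ].

import numpy as np

H = np.diag([0.0, 1.0])                      # illustrative Hamiltonian
t = 0.7
U = np.diag(np.exp(-1j * np.diag(H) * t))    # e^{-iHt}

rho = np.diag([0.3, 0.7]).astype(complex)    # mixed statistical state, Tr rho = 1
rho_t = rho @ U.conj().T                     # sum_n c_n |n><n| e^{iHt}

def W_t(X):
    # reading boundary vectors as operators, W_t(X) = Tr[ e^{-iHt} X ]
    return np.trace(U @ X)

A = np.array([[0, 1], [1, 0]], dtype=complex)    # observable on the final slice
B = np.array([[0, 1], [1, 0]], dtype=complex)    # observable on the initial slice

lhs = W_t(B @ rho_t @ A)                         # boundary-formalism correlation
rhs = np.trace(U.conj().T @ A @ U @ B @ rho)     # standard <A(t) B(0)>
print(np.isclose(W_t(rho_t), 1.0), np.isclose(lhs, rhs))   # True True
```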
The structure of this paper is as follows: In Section II, we start from conventional non-relativistic mechanics and move "upward" towards more covariance: we construct the formal structures that define the boundary formalism, characterize physical states and operators, define the dynamics through amplitudes, and show how statistical states and equilibrium states can be treated. In Section III, we adapt the boundary formalism to a general covariant language by including the independent evolution parameter (the "time" partial observable) in the configuration space. This is the step that permits the generalization to general covariant systems. Once these structures are clear, in Section IV we take them as fundamental, and show that they retain their meaning also in the more general cases where the system is genuinely general relativistic. In Section V we apply the formalism to the Unruh effect and in Section VI we draw some tentative conclusions regarding quantum gravity. These point towards the idea that any local gravitational process is statistical. II. NON-RELATIVISTIC FORMALISM A. Mechanics Consider a Hamiltonian system with configuration space C. Call x ∈ C a generic point in C. The corresponding quantum system is defined by a Hilbert space H and a Hamiltonian operator H. We indicate by A, B, ... ∈ A the self-adjoint operators representing observables. In the Schrödinger representation, which diagonalizes the configuration variables, a state ψ is represented by the function ψ(x) = ⟨x|ψ⟩, where |x⟩ is a (possibly generalized) eigenvector of a family of observables that coordinatizes C (we use the Dirac notation also for generalized states, as Dirac did). States evolve in time by ψ_t = e^{-iHt} ψ_0. For convenience we call H_t the Hilbert space isomorphic to H, thought of as the space of states at time t. Fix a time t and consider the non-relativistic boundary space B_t = H_0 ⊗ H_t^*, (1) where the star indicates the dual space. This space can be interpreted as the space of all (kinematical) processes. The state Ψ = ψ ⊗ φ^* ∈ B_t represents the process that takes the initial state ψ into the final state φ in a time t. For instance, if ψ and φ are eigenstates of operators corresponding to given eigenvalues, then Ψ represents a process where these eigenvalues have been measured at the initial and final times. In the Schrödinger representation, vectors in B_t have the form ψ(x, x′) = ⟨x, x′|ψ⟩. The state |x, x′⟩ ≡ |x⟩ ⊗ ⟨x′| represents the process that takes the system from x to x′ in a time t. The interpretation of the states in B_t which are not of the tensor form is our main concern in this paper and is discussed below. There are two notable structures on the space B_t. (a) A linear function W_t on B_t, which completely codes the dynamics. This is defined by its action W_t(ψ ⊗ φ^*) := ⟨φ|e^{-iHt}|ψ⟩ (2) on tensor states, and extended by linearity to the entire space. This function codes the dynamics because its value on any tensor state ψ ⊗ φ^* gives the probability amplitude of the corresponding process that transforms the state ψ into the state φ. Notice that the expression of W_t in the Schrödinger basis reads W_t(x, x′) = ⟨x′|e^{-iHt}|x⟩, (3) which is precisely the Schrödinger-equation propagator; it can be represented formally as a Feynman path integral from x to x′ in a time t, and, of course, it codes the dynamics of the theory. (b) There is a nonlinear map σ that sends H into B_t, given by σ : ψ ↦ ψ ⊗ (e^{-iHt} ψ)^*.
(4) Boundary states in the image of σ represent processes that have probability amplitude equal to one, as can be easily verified using [START_REF] Oeckl | General boundary quantum field theory: Foundations and probability interpretation[END_REF] and ( 4). The process σ(ψ) is the one induced by the initial state ψ. In general, we shall call any vector Ψ ∈ B t that satisfies W t (Ψ) = 1 (5) a "physical boundary state." These are the basic structures of the boundary formalism in the case of a non-relativistic system. B. Statistical mechanics The last equation of the previous section is linear, hence a linear combination of solutions is also a solution. But linear combinations of tensor states are not tensor states. What do the solutions of (5) which are not of the tensor form represent? Consider a statistical state ρ. By this we mean here a trace class operator in H that can be mixed or pure. An operator in H is naturally identified with a vector in H ⊗ H * , of course. In particular, let |n be an orthogonal basis that diagonalizes ρ, then ρ = n c n |n n|. (6) The corresponding element in B 0 is ρ = n c n |n ⊗ n| (7) and we will from now on identify the two quantities. That is, below we often write states in H ⊗ H * , as operators in H. The numbers c n in (6) are the statistical weights. They satisfy n c n = 1 (8) because of the trace condition on ρ, which expresses the fact that probabilities add up to one. Thus the state ρ can be seen as an element of B 0 . Consider the corresponding element of B t , defined by ρ t := n c n |n n|e iHt . (9) It is immediate to see that W t (ρ t ) = 1. ( 10 ) Therefore we have found the physical meaning of the other (normalized) solutions of [START_REF] Arkani-Hamed | A Measure of de Sitter entropy and eternal inflation[END_REF]. They represent statistical states. Notice that these are expressed as vectors in the boundary Hilbert space B t . (See also [START_REF] Oeckl | A positive formalism for quantum theory in the general boundary formulation[END_REF].) The expectation value of the observable A in the statistical state ρ is A = Tr[Aρ], (11) the correlation between two observables is AB = Tr[ABρ], (12) and the time dependent correlation is A(t)B(0) = Tr[e iHt Ae -iHt Bρ], (13) of which the two previous expressions are special cases. These quantities can be expressed in the simple form A(t)B(0) = W t ( (B ⊗ A) ρ t ) (14) because W t ((B ⊗ A)ρ t ) = Tr[e -iHt Bρ t A] = Tr[e -iHt Bρe iHt A], here the placement of ρ t within the trace reflects the fact that its left factor is in the initial space and its right factor is in the final space (and A does not need a dagger because it is self-adjoint). Therefore the boundary formalism permits a direct reformulation of quantum statistical mechanics in terms of general boundary states, boundary operators and the W t amplitude. Consider states of Gibbs's form ρ = N e -βH . The corresponding state in B t is ρ t = N n e -βEn e iEnt |n n| = N e iH(t+iβ) ( 15 ) where |n is the energy eigenbasis and N = N (β), determined by the normalization, is the inverse of the partition function. A straightforward calculation shows that for these states the correlations ( 14) satisfy the KMS condition A(t)B(0) = B(-t -iβ)A(0) (16) which is the mark of an equilibrium state. Thus Gibbs states are the equilibrium states. C. 
L1 and L2 norms: physical states and pure states The two classes of solutions illustrated in the previous two subsections (pure states and statistical states) exhaust all solutions of the physical boundary state condition when B t decomposes as a tensor product of two Hilbert spaces: B t = H 0 ⊗ H * t . (17) This can be shown as follows. Consider an orthonormal basis |n in H 0 . Due to the unitarity of the time evolution, the vectors (e -iHt |n ) * form a basis of H * t . Therefore any state in B t can be written in the form Ψ = nn c nn |n ⊗ (e -iHt |n ) * . ( 18 ) The physical states satisfy W |Ψ = nn c nn n |e iHt e -iHt |n = n c nn = 1, ( 19 ) therefore they correspond precisely to the operators ρ = nn c nn |n n | ( 20 ) in H 0 , satisfying the condition Trρ = 1 [START_REF] Wightman | Quantum field theory in terms of vacuum expectation values[END_REF] which is to say: they are the statistical states. In particular, they are pure states if they are projection operators, ρ 2 = ρ. Observe that in general a statistical state in B t is not a normalized state in this space. Rather, its L 2 norm satisfies |Ψ| 2 = nn |c nn | 2 ≤ 1 (22) where the equality holds only if the state is pure. This is easy to see in a basis that diagonalizes ρ, because the trace condition implies that all eigenvalues are equal or smaller than 1 and sum to 1. Thus there is a simple characterization of physical states and pure states: the first have the "L 1 " norm [START_REF] Streater | PCT, Spin and Statistics, and All That[END_REF] equal to unity. The second have also the "L 2 " norm |Ψ| 2 equal to unity. III. RELATIVISTIC FORMALISM A. Relativistic mechanics Let us now take a step towards the relativistic formalism where the time variable is treated on the same footing as the configuration variables. With this aim, consider again the same system as before and define the extended configuration space E = C × R. Call (x, t) ∈ E a generic point in E. Let Γ ex = T * E be the corresponding extended phase space and C = p t + H the Hamiltonian constraint, where p t is the momentum conjugate to t. The corresponding quantum system is characterized by the extended Hilbert space K and a Wheeler-deWitt operator C [START_REF] Rovelli | Quantum Gravity[END_REF]. Indicate by A, B, ... ∈ A the self-adjoint operators representing partial observables [START_REF] Rovelli | Partial observables[END_REF] defined in K. In the Schrödinger representation that diagonalizes extended configuration variables, states are given by functions ψ(x, t) = x, t|ψ . The physical states are the solutions of the Wheeler-deWitt equation Cψ = 0, which here is just the Schrödinger equation. Physical states are the (generalized) vectors ψ(x, t) in K that are solutions of the Schrödinger equation. The space H formed by the physical states that are solutions of the Schrödinger equation is clearly in oneto-one correspondence with the space H 0 of the states at time t = 0. Therefore there is a linear map that sends H 0 into (a suitable completion of) K, simply defined by sending the state ψ(x) into the solution ψ(x, t) of the Schrödinger equation such that ψ(x, 0) = ψ(x). Vice versa, there is a (generalized) projection P from (a dense subspace of) K to H, that sends a state ψ(x, t) to a solution of the Schrödinger equation. This can be formally obtained from the spectral decomposition of C, or, more simply, by (P ψ)(x, t) = dx dt W (t-t ) (x, x ) ψ(x , t ). ( 23 ) Now, without fixing a time, the relativistic boundary state space is defined by B = K ⊗ K * . 
( 24 ) Notice the absence of the t-label subscript. In the Schrödinger representation, vectors in B have the form ψ(x, t, x , t ) = x, t, x , t |ψ . This space can again be interpreted as the space of all (kinematical) processes, where now the boundary measurement of the clock time t is treated on the same footing as the other partial observables. Thus for instance |x, t, x , t ≡ |x, t ⊗ x , t | represents the process that takes the system from the configuration x at time t to the configuration x at time t . The two structures considered above simplify on the space B. (a) The dynamics is completely coded by a linear function W (no t label!) on B. This is defined extending by linearity W (φ * ⊗ ψ) := φ|P |ψ . ( 25 ) Its expression in the Schrödinger basis reads W (x, t, x , t ) = x, t|P |x , t = x|e iH(t-t ) |x , (26) which is once again nothing but the Schrödingerequation propagator, now seen as a function of initial and final extended configuration variables. The variable t is not treated as an independent evolution parameter, but rather is treated on equal footing with the other partial observables. The operator P can still be represented as a suitable Feynman path integral in the extended configuration space, from the point (x, t) to the point (x , t ). (b) Second, there is again a nonlinear map σ that sends K into B, now simply given by σ : ψ → ψ ⊗ ψ * . ( 27 ) States in the image of this map are "physical", namely represent processes that have probability amplitude equal to one, only if ψ satisfies the Schrödinger equation. In this case, a straightforward calculation verifies that W (Ψ) = 1. ( 28 ) As before, we call "physical" any state in B solving this equation. B. Relativistic statistical mechanics As before, linear combinations of physical states represent statistical states. A general relativistic statistical state is a statistical superposition of solutions of the equations of motion [START_REF] Rovelli | Statistical mechanics of gravity and the thermodynamical origin of time[END_REF]. 1 Consider again the state (6) in this 1 A concrete example is illustrated in [START_REF] Rovelli | The Statistical state of the universe[END_REF]. language: if ψ n is the full time-dependent solution of the Schrödinger equation corresponding to the initial state |n , we can now represente the state (6) in B simply by ρ = n c n ψ n ψ * n . (29) Explicitly, in the Schrödinger basis ρ(x, t, x , t ) = n c n ψ n (x, t) ψ n (x , t ). ( 30 ) The equilibrium statistical state at inverse temperature β is given by ρ(x, t, x , t ) = N n e iEn(t-t +iβ) ψ n (x) ψ n (x ). = N e iH(t-t +iβ) . ( ) 31 where ψ n (x) are the energy eigenfunctions. The correlation functions between partial observables are now given simply by AB = W ((A ⊗ B) ρ). ( 32 ) Notice the complete absence of the time label t in the formalism. Any temporal dependence is folded into the boundary data. (However, see the next section for a generalization of the KMS property and equilibrium.) This completes the construction of the boundary formalism for a relativistic system. We now have at our disposal the full language and we can "throw away the ladder," keep only the structure constructed, and extend it to far more arbitrary systems, including relativistic gravity. IV. GENERAL BOUNDARY We now generalize the boundary formalism to genuinely (general) relativistic systems that do not have a non-relativistic formulation. A quantum system is defined by the triple (B, A, W ). 
The Hilbert space B is interpreted as the boundary state space, not necessarily of the tensor form. A is an algebra of self-adjoint operators on B. The elements A, B, ... ∈ A represent partial observables, namely quantities to which we can imagine associating measurement apparatuses, but whose outcome is not necessarily predictable (think for instance of a clock). The linear map W on B defines the dynamics. Vectors Ψ ∈ B represent processes. If Ψ is an eigenstate of the operator A ∈ A with eigenvalue a, it represents a process where the corresponding boundary observable has value a. The quantity W (Ψ) = W |Ψ (33) is the amplitude of the process. Its modulus square (suitably normalized) determines the relative probability of distinct processes [START_REF] Rovelli | Quantum Gravity[END_REF]. A physical process is a vector in B that has amplitude equal to one, namely satisfies W |Ψ = 1. ( 34 ) The expectation value of an operator A ∈ A on a physical process Ψ is A = W |A|Ψ . ( 35 ) If a tensor structure in B is not given, then there is no a priori distinction between pure and mixed states. The distinction between quantum incertitude and statistical incertitude acquires meaning only if we can distinguish past and future parts of the boundary [START_REF] Smolin | On the nature of quantum fluctuations and their relation to gravitation and the principle of inertia[END_REF][START_REF] Smolin | Quantum gravity and the statistical interpretation of quantum mechanics[END_REF]. So far, there is no notion of time flow in the theory. The theory predicts correlations between boundary observables. However, as pointed out in [START_REF] Connes | Von Neumann algebra automorphisms and time thermodynamics relation in general covariant quantum theories[END_REF], a generic state Ψ on the algebra of local observables of a region defines a flow α τ on the observable algebra by the Tomita theorem [START_REF] Connes | Von Neumann algebra automorphisms and time thermodynamics relation in general covariant quantum theories[END_REF], and the state Ψ satisfies the KMS condition for this flow A(τ )B(0) = B(-τ -iβ)A(0) , (36) where A(τ ) = α τ (A). It will be interesting to compare the flow generated in this manner with the flow generated by a statistical state within the boundary Hilbert space. If a flow is given a priori, the KMS states for this flow are equilibrium states for this flow. In a general relativistic theory including gravity, no flow is given a priori, but we can still distinguish physical equilibrium states as follows: an equilibrium state is a state that defines a mean geometry and whose Tomita flow is given by a timelike Killing vector of this geometry: see [START_REF] Rovelli | General relativistic statistical mechanics[END_REF]. V. UNRUH EFFECT As an example application of the formalism, we describe the Unruh effect [START_REF] Unruh | Notes on black hole evaporation[END_REF] in this language. Other treatments with a focus on the general boundary formalism are [START_REF] Colosi | The Unruh Effect in General Boundary Quantum Field Theory[END_REF][START_REF] Banisch | Vacuum states on timelike hypersurfaces in quantum field theory[END_REF]. Consider a partition of Minkowski space into two regions M and M separated by the two surfaces Σ 0 : {t = 0, x ≥ 0}, Σ η : {t = ηx, x ≥ 0}. ( 37 ) The region M is a wedge of angular opening η and M is its complement (Figure 1). 
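The thermal character that the following paragraphs derive field-theoretically can be previewed in a single-mode toy model. This is a standard textbook construction added here as an illustration, not a computation from the paper: for one Rindler mode of frequency ω, the Minkowski vacuum restricted to the left/right wedge pair is a two-mode squeezed state, and tracing out the left wedge leaves a Gibbs distribution at the Unruh temperature T = a/2π (ℏ = c = k_B = 1; the numbers are illustrative).

```python
# Toy single-mode preview of the wedge thermality (standard textbook fact):
# |0> ~ sum_n exp(-pi n omega / a) |n>_L |n>_R  for one Rindler mode.
# Tracing out the left wedge gives p_n ~ exp(-2 pi n omega / a), i.e. a Gibbs
# state at the Unruh temperature T = a / (2 pi).

import numpy as np

a, omega, nmax = 2.0, 1.0, 60                       # illustrative values

c = np.exp(-np.pi * omega * np.arange(nmax) / a)    # squeezed-vacuum coefficients
c /= np.linalg.norm(c)                              # normalize the restricted vacuum

p = c ** 2                                          # eigenvalues of the reduced state
T_from_ratio = omega / np.log(p[0] / p[1])          # Gibbs ratio p_{n+1}/p_n = e^{-omega/T}
print(T_from_ratio, a / (2 * np.pi))                # both ~ 0.3183
```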
Consider a Lorentz invariant quantum field theory on Minkowski space, say satisfying the Wigtmam axioms [START_REF] Streater | PCT, Spin and Statistics, and All That[END_REF]: in particular, energy is positive-definite and there is a single Poincaré-invariant state, the vacuum |0 . How is the vacuum described in the boundary language? In general, a boundary state φ b on ∂M = Σ = Σ 0 ∪ Σ η is a vector in the Hilbert space B = H 0 ⊗ H * η , where H 0 and H η are Hilbert spaces associated to the states on Σ 0 FIG. 1. The wedge M in Minkowski space. and Σ η respectively. The conventional Hilbert space H associated to the t = 0 surface is the tensor product of two Hilbert spaces H = H L ⊗ H R that describe the degrees of freedom to the left or right of the origin. We can identify H R and H 0 since they carry the same observables: the field operators on Σ 0 . Because the theory is Lorentz-invariant, H carries a representation of the Lorentz group. The self-adjoint boost generator K in the t, x plane does not mix the two factors H L and H R . If we call k its eigenvalues and |k, α L,R , its eigenstates in the two factors with α labeling the distinct degenerate levels of k, then it is a well known result [START_REF] Bisognano | On the Duality Condition for Quantum Fields[END_REF] that 0|k, α L = e -πk k, α| R (38) which we can write in the form |0 = dk dα e -πk |k, α L ⊗ |k, α R . (39) Tracing over H L gives the density matrix in H R ρ 0 = Tr L |0 0| = e -2πK (40) which determines the result of any vacuum measurement, and therefore any measurment [START_REF] Wightman | Quantum field theory in terms of vacuum expectation values[END_REF], performed on Σ 0 . The evolution operator W η in the angle η, associated to the wedge, sends Σ 0 to Σ η and is W η = e -iηK . (41) These two quantities give immediately the boundary expression of the vacuum on Σ: ρ η = ρ 0 e iηK = e i(η+2πi)K (42) This is the vacuum in the boundary formalism. It is a KMS state at temperature 1/2π with respect to the flow generated by K in η. For an observer moving with constant proper acceleration a along the hyperboloid of points with constant distance from the origin, this flow is proportional to proper time s s = η/a. (43) And therefore the vacuum is a KMS state, namely a thermal state, at the Unruh temperature (restoring ) T = a 2π . (44) This is the manner in which the Unruh effect is naturally described in the boundary language. Notice that no reference to accelerated observers or special basis in Hilbert space is needed to identify the thermal character of the vacuum on the η-wedge. An interesting remark is that the expectation values of operators on Σ can be equally computed using the region M which is complementary to the wedge M . Let us first do this for η = 0. In this case, the insertion of the empty region M cannot alter the value of the observables, and therefore it is reasonable to take the boundary state we associate to it to be the unit operator. ρ = 1l (45) And therefore ρη = e -iηK . (46) For consistency, we have then that the evolution operator associated to M must be Wη = e i(η+2πi)K . Therefore the evolution operator and the boundary state simply swap their roles when going from a region to its complement. 2 Notice that there exists a geometrical transformation that rotates Σ 0 into Σ η , obtained by rotating it clockwise, rather than anti clockwise. This rotation is not implemented by a proper Lorentz transformation, because the Lorentz group rotates Σ 0 at most only up to the light cone t = -x. 
But it can nevertheless be realized by extending a Lorentz transformation x = cosh(η)x + sinh(η)t t = sinh(η)x + cosh(η)t (48) to a complex parameter iη x = cosh(iη)x + sinh(iη)t = cos(η)x + i sin(η)t t = sinh(iη)x + cosh(iη)t = i sin(η)x + cos(η)t. (49) For a small η = , this transformation rotates the positive x axis infinitesimally into the complex t plane. The Lorentz group acts on the expectation values of the theory, and in particular on the expectation values of products of its local observables. Since the n-point functions 2 This can be intuitively understood in terms of path integrals: the evolution operator is the path integral on the interior of a spacetime region, at fixed boundary values; the boundary state can be viewed as the path integral on the exterior of the region. In the case under consideration, the vacuum is singled out by the boundary values of the field at infinity. For a detailed discussion, see [22]. of a quantum field theory where the energy is positive can be continued analytically for complex times (Theorem 3.5, pg. 114 in [START_REF] Streater | PCT, Spin and Statistics, and All That[END_REF]), this action is well defined on expectation values. In particular, we can rotate (t, x) infinitesimally into the complex t plane, and then rotate around the real t, x plane, passing below the light cone x = ±t in complex space. In other words, by adding a small complex rotation into imaginary time, we can rotate a space-like half-line into a timelike one [START_REF] Gibbons | Action integrals and partition functions in quantum gravity[END_REF][START_REF] Bianchi | Entropy of Non-Extremal Black Holes from Loop Gravity[END_REF]. A full rotation is implemented by U (2πi), giving (47). Finally, observe that the vacuum is the unique Poincaré invariant state in the theory. This implies that if a state is Poincaré invariant then it is thermal at the Unruh temperature on the boundary of the wedge. This is clearly a reflection of correlations with physics beyond the edge of the wedge. Since vacuum expectation values determine all local measurable quantum-field-theory observables, this implies that the boundary state is unavoidably mixed. In essence the available field operators are insufficient to purify the state. This can be seen physically as follows: in principle, we can project the state onto a pure state on Σ 0 , breaking Poincaré invariance by singling out the origin, but to do so we need a complete measurement of field values for x > 0 and therefore an infinite number of measurements, which would move the state out of its folium [START_REF] Haag | Local Quantum Physics[END_REF]. We continue these considerations in the next section. VI. RELATION WITH GRAVITY AND THERMALITY OF GRAVITATIONAL STATES So far, gravity has played no direct role in our considerations. The construction above, however, is motivated by general relativity, because the boundary formalism is not needed as long as we deal with a quantum field theory on a fixed geometry, but becomes crucial in quantum gravity, where it allows us to circumvent the difficulties raised by diffeomorphism invariance in the quantum context. In quantum gravity we can study probability amplitudes for local processes by associating boundary states to a finite portion of spacetime, and including the quantum dynamics of spacetime itself in the process. Therefore the boundary state includes the information about the geometry of the region itself. 
The general structure of the statistical mechanics of relativistic quantum geometry has been explored in [START_REF] Rovelli | General relativistic statistical mechanics[END_REF], where equilibrium states are characterized as those whose Tomita flow is a Killing vector of the mean geometry. Until now, however, it has not been possible to identify statistical states within the general boundary formalism, so this strategy was not available in the more covariant context. With a boundary notion of statistical states it becomes available. It becomes possible, in particular, to check whether given boundary data admit a mean geometry that interpolates them.

In quantum gravity we are interested in spacelike boundary states on which initial and final data can be given; a typical spacetime region therefore has the lens shape depicted in Figure 2. The past and future components of the boundary meet on wedge-like two-dimensional "corner" regions. Suppose now that a quantum version of the equivalence principle holds, so that the local physics at the corner is locally Lorentz invariant. Then the result of the previous section indicates that the boundary state of the lens region is mixed. Any such boundary state in quantum gravity is a mixed state. (Other arguments for the thermality of local spacetime processes are given in [START_REF] Martinetti | Diamonds's temperature: Unruh effect for bounded trajectories and thermal time hypothesis[END_REF].) The dynamics at the corner is governed by the corner terms of the action [START_REF] Carlip | The Off-shell black hole[END_REF][START_REF] Bianchi | Horizon energy as the boost boundary term in general relativity and loop gravity[END_REF], which can indeed be seen as responsible for the thermalization [START_REF] Massar | How the change in horizon area drives black hole evaporation[END_REF][START_REF] Jacobson | Horizon entropy[END_REF].

Up to this point we have emphasized the mixed-state character of the boundary states in order to make a clear connection with the standard quantum formalism. Note, however, that from the perspective of the fully covariant general boundary formalism (see section IV) there is always a single boundary Hilbert space B, which can be made bipartite in many different ways. From this point of view it is more natural to call these boundary states nonseparable: local gravitational states are entangled states. This was first appreciated in the context of the examples treated in [22], which was an inspiration for the present work. Recently, Bianchi and Myers have conjectured that in a theory of quantum gravity, for any sufficiently large region corresponding to a smooth background spacetime, the entanglement entropy between the degrees of freedom describing the given region and those describing its complement is given by the Bekenstein-Hawking entropy [START_REF] Bianchi | On the Architecture of Spacetime Geometry[END_REF]. The Bianchi-Myers conjecture and the considerations above result in a compelling picture supporting a quantum version of the equivalence principle. Both the mixing of the state near a corner and the Bianchi-Myers conjecture can be seen as manifestations of the fact that by restricting our interest to a finite spatial region we are tracing over its correlations with the exterior, and we are therefore necessarily dealing with a state which is not pure.
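As a toy illustration of the statement that such boundary states are mixed, the following sketch (not from the paper; the discretization of the boost spectrum is an artifact of the toy model) builds a thermal state of the form e^{-2πK}, as in (40), on a handful of boost eigenvalues and compares its von Neumann entropy with that of a pure state.

```python
import numpy as np

# Toy model: truncate the boost generator K to a few eigenvalues k (arbitrary units).
k = np.linspace(0.0, 2.0, 9)

# Thermal weights of the reduced vacuum, as in (40): rho ~ exp(-2 pi K).
w = np.exp(-2 * np.pi * k)
p = w / w.sum()                      # normalized spectrum of the density matrix

def von_neumann_entropy(probs):
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log(probs)))

S_thermal = von_neumann_entropy(p)
S_pure = von_neumann_entropy(np.array([1.0] + [0.0] * (len(k) - 1)))

print(f"entropy of the truncated thermal state: {S_thermal:.3f} nats")
print(f"entropy of a pure state:                {S_pure:.3f} nats")
```

The thermal spectrum always gives a strictly positive entropy, while any pure state gives zero; the point of the discussion above is that for a boundary state of a lens-shaped region the former situation is unavoidable.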
If, as we expect, the boundary formalism is crucial for extracting physical amplitudes from quantum gravity, all this appears to imply that the notion of pure state is irrelevant in local quantum gravitational physics, and therefore that statistical fluctuations cannot be disentangled from quantum fluctuations in quantum gravity [START_REF] Smolin | On the nature of quantum fluctuations and their relation to gravitation and the principle of inertia[END_REF][START_REF] Smolin | Quantum gravity and the statistical interpretation of quantum mechanics[END_REF].

FIG. 2. Lens-shaped spacetime region with spacelike boundaries and corners (filled circles).

EB acknowledges support from a Banting Postdoctoral Fellowship from NSERC. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. HMH acknowledges support from the National Science Foundation (NSF) International Research Fellowship Program (IRFP) under Grant No. OISE-1159218.
01718721
en
[ "info.info-rb", "info.info-ts", "info.info-au", "info.info-im" ]
2024/03/05 22:32:16
2018
https://theses.hal.science/tel-01718721v3/file/PATLAN_ROSALES_Pedro_Alfonso.pdf
Keywords: Medical robotics, ultrasound elastography, visual servoing, haptic feedback Optical flow RF Radio frequency ROI Region of interest SAD Sum of absolute difference SSD Sum of squared differences US to always improve the quality of my work. I also appreciate his charisma that was never missing. My most sincere thanks to Jonathan Vappou and Pierre Vieyres for the valuable review and comments to improve my manuscript. I would like to thank the members of my thesis committee Marie-Aude Vitrani, Pierre Janin and Eric Marchand for their insightful comments and encouragement during my defense. Special thanks to François Chaumette for accepting me in Lagadic team (now Rainbow team) and for his support during my doctorate. I also would like to express my gratitude to Paolo Robuffo Giordano and wish him the best as the head of Rainbow team. I am also very thankful to Fabien for his support and wise advices with the robotic platform. I also appreciate the support of Hélène with all the administrative processes during my doctorate. I warmly thank my fellow labmates in Lagadic/Ranbow team. I am grateful to Hadrien and Souriya who helped me to improve the writing of the abstract in french of this manuscript. Through these three years and several months in Rennes I had the opportunity to meet excellent mates who shared with me invaluable moments. I will always remember the past fellows: Bertrand, Mani, Le, Riccardo, Giovanni, Pierre, Lucas, Suman, Aly San and Vishnu. I want to thank you to Fabrizio, Usman, Agniva, Ide-Flore, Firas, Lesley, Jason, Marco Cognetti, Marco Agravi, Bryan, Rahf, Marc, Nöel and Quentin for all the good moments. This work has been inspired and supported by many people in my life, some of them are physically far but always in my mind. My sincere gratitude to Juan Gabriel Aviña who always encouraged me to pursue my doctorate. I am also grateful with Adan Salazar and Horacio Rostro for their support in my quest of a doctorate. Agradezco a toda mi familia i en México, especialmente a mis padres Isidro and Ruth por su constante apoyo durante toda mi vida. A mis suegros Refugio y Bertha por el apoyo que siempre me han brindado. A mi hermano José Alberto un caluroso agradecimiento porque siempre ésta para mi en todo momento y porque su visita me recargó de energía. A mi hermana y sobrinas por su apoyo durante estos años lejos de casa. Agradecimientos especiales para mis cuñados y amigos María del Rayo, Mauricio y Felipe. Desde que los conozco, sus comentarios, amistad y cariño siempre fueron y han sido una gran fuente de energía en mi vida. A todo el resto de mi familia por todo el apoyo que me han brindado. ii RÉSUMÉ EN FRANÇAIS La robotique médicale est apparue dans les années 1980 avec pour objectif de fournir aux médecins de nouveaux outils facilitant le traitement des patients. Un premier dispositif médical robotisé fut présenté en 1983. Il s'agit d'Arthrobot, un robot dédié à la chirurgie orthopédique que le chirurgien utilisait principalement pour des tâches fatigantes (par exemple, maintenir une partie du corps du patient à la même position pendant une longue période) et qu'il pouvait contrôler par commande vocale. Ce fut une innovation majeure dans le domaine des interventions chirurgicales assistées par la robotique. Suite au succès d'Arthrobot, d'autres applications analogues virent le jour tirant partie de la précision et de la dextérité des dispositifs robotisés. 
En 1985, le robot industriel PUMA 560 fut utilisé comme un outil de positionnement d'aiguille, appliqué à la biopsie du cerveau sous imagerie tomodensitométrique (TDM) [START_REF] Kwoh | A robot with improved absolute positioning accuracy for ct guided stereotactic brain surgery[END_REF]. En 1992, Think Surgical Inc introduisit ROBODOC, un assistant robotique pour l'arthroplastie de la hanche [START_REF] Taylor | An image-directed robotic system for precise orthopaedic surgery[END_REF]. A l'aide d'images TDM et durant l'intervention, le robot repérait la position de trois broches insérées dans la hanche du patient par le chirurgien, puis assurait la planification et la bonne exécution de tâches élémentaires. Ce robot fut le premier de son genre à être utilisé sur des patients. Voyant le guidage par imagerie comme un allié majeur de la robotique médicale, chercheurs et industriels proposèrent de nouvelles solutions reposant sur davantage de modalités d'imagerie médicale. Avec le temps, l'imagerie médicale devint incontournable dans la plupart des interventions chirurgicales, celle-ci permettant au médecin de visualiser l'intérieur du corps humain sans avoir recours à une incision (imagerie non-invasive). Encore aujourd'hui, les modalités les plus notables sont la tomodensitométrie (TDM), l'imagerie par résonance magnétique (IRM) et l'imagerie échographique qui utilise les ultrasons (US). L'IRM est une technologie ayant recours à des champs magnétiques puissants pour produire une image des organes. L'acquisition d'une telle image est longue et nécessite des équipements encombrants, rendant l'IRM inadaptée pour la commande par vision des robots médicaux en temps réel. Malgré tout, des dispositifs robotisés ont été conçus iii pour être utilisés simultanément avec un scanner IRM, par exemple pour faciliter la manipulation d'aiguilles de biopsie [START_REF] Su | Real-time mri-guided needle placement robot with integrated fiber optic force sensing[END_REF]. L'imagerie échographie, quant à elle, s'appuie sur la propagation du son dans le corps humain pour construire des images des organes. Un de ses avantages majeurs par rapport aux autres modalités est son encombrement réduit, puisqu'elle nécessite uniquement l'utilisation d'une station (pouvant être portable) et d'une sonde échographique. Elle permet également d'acquérir des images en temps réel. Pour ces deux raisons, l'imagerie par ultrasons est la modalité la plus utilisée pour le guidage de robots médicaux. Elle fut pour la première fois utilisée à cette fin en 1999 [START_REF] Salcudean | A robot system for medical ultrasound[END_REF]. Afin de faciliter l'examen de l'artère carotide, un système robotique fut conçu pour déplacer automatiquement une sonde à ultrasons dans le but de maintenir la section de l'artère dans le plan de coupe de la sonde. Ce fut une innovation majeure dans le contrôle robotique par la vision (asservissement visuel). Dès lors, l'asservissement visuel par imagerie échographique suscita un vif intérêt dans le monde scientifique [START_REF] Azizian | Visual servoing in medical robotics: a survey. part II: tomographic imaging modalities -techniques and applications[END_REF]. Depuis, de nouvelles modalités d'imagerie ont vu le jour, issues de l'étude des informations obtenues à partir des modalités d'imagerie désormais classiques (US, IRM, TDM,. . . ). 
Parmi elles, l'élastographie permet d'acquérir des informations sur la rigidité d'un tissu, offrant alors à l'utilisateur de nouvelles données de diagnostic, tout en restant non-invasive. Ce concept fut approfondi durant les trois dernières décennies, en particulier pour la détection de tumeurs du sein [START_REF] Goddi | Breast elastography: A literature review[END_REF], de fibroses hépatiques à différents stades [START_REF] Barr | Elastography assessment of liver fibrosis: Society of radiologists in ultrasound consensus conference statement[END_REF] ou de cancer de la prostate [START_REF] Correas | Ultrasound elastography of the prostate: State of the art[END_REF]. Cependant, à ce jour la procédure permettant d'obtenir des images d'élastographie est réalisée manuellement. Elle nécessite une formation spécifique ainsi qu'une certaine expérience du praticien. Afin d'aider ce dernier, la robotique pourrait être utilisée à des fins d'assistance. Motivations La palpation manuelle est une technique médicale de diagnostic pratiquée depuis des siècles. Lorsqu'il a recours à cette approche, le praticien utilise le toucher pour évaluer la raideur des tissus du patient. Des changements de raideur peuvent alors être interprétés comme des signes d'une éventuelle maladie. Dans la pratique, en plus d'être non-invasive et simple à appliquer, cette méthode ne nécessite aucun équipement. En revanche, elle ne fournit que des informations qualitatives au praticien et nécessite une grande rigueur d'exécution. En effet, la raideur des tissus environnants peut influer sur le résultat et surtout le médecin n'a accès qu'aux tissus à portée de ses mains. Malgré cela, la palpation reste utilisée pour le diagnostic de nombreuses maladies. Par exemple, certaines maladies du foie, telles que la cirrhose ou l'hépatite, peuvent être détectées en repérant une fibrose. Certains types de cancers, tels que le cancer du sein, de la prostate iv ou de la thyroïde, peuvent être identifiés par la palpation. La détection précoce du cancer est cruciale pour accroître les chances de succès du traitement. Cependant, les techniques les plus couramment utilisées pour la détection du cancer, comme la biopsie ou la mastographie, peuvent perturber le confort du patient, le conduisant bien souvent à retarder les examens nécessaires. Par conséquent, il est primordial de proposer des outils de diagnostic indolores et non-invasifs. L'élastographie ultrasonore est une méthode d'imagerie tactile utilisant les ondes ultrasonores pour mesurer la raideur des tissus. Cette approche permet de palier les défauts de la palpation manuelle, puisqu'elle fournit au praticien des informations importantes et précises. L'élastographie est le plus souvent manuelle et implique d'imposer un mouvement répétitif au tissu. Pour un tissu donné, la carte de raideur peut varier si la pression appliquée à ce dernier n'est pas régulière. Ainsi, il sera difficile, voire impossible, de reproduire des résultats d'élastographie d'un examen à un autre, en particulier s'il ne s'agit pas du même médecin ou si plusieurs interventions successives sont réalisées sur le même tissu. Ces inconvénients mettent en évidence la nécessité d'une méthode capable de fournir des informations plus fiables. Dans cette optique, la robotique est une alternative intéressante à l'intervention entièrement manuelle, de par la capacité des robots à effectuer des tâches répétitives avec un degré de précision constant et élevé. 
Cela permettrait, entre autres, de pouvoir produire les mêmes cartes d'élasticité d'un examen à un autre. En conclusion, la mise en oeuvre d'un système robotisé capable de mesurer la raideur d'un tissu pourrait apporter une aide considérable au diagnostic par palpation manuelle, conduisant par la suite à une meilleure prise en charge du patient. 1. Développement d'un système robotisé pour réaliser le mouvement de palpation des tissus de manière répétitive. Pour cela, la conception d'un système de commande robotique utilisant un transducteur à ultrasons conventionnel pour appliquer le mouvement est requise. Objectifs de la thèse 2. Estimation quantitative de l'élasticité du tissu en temps réel par l'intermédiaire d'un processus élastographique rapide. [START_REF] Azizian | Visual servoing in medical robotics: a survey. part II: tomographic imaging modalities -techniques and applications[END_REF]. Utilisation de l'information précédente pour commander le robot afin d'assister le praticien durant l'intervention. 4. Télé-opération de la sonde à ultrasons pour explorer le tissu d'intérêt à distance. Contributions Durant cette thèse, plusieurs contributions ont été proposées dans le domaine de l'assistance médicale robotisée. Elles sont présentées ci-dessous : • Une nouvelle méthodologie pour utiliser la carte de déformation du tissu comme entrée d'un schéma de contrôle par asservissement visuel. Cette contribution inclut l'estimation en temps réel de la carte d'élasticité des tissus et l'extraction des caractéristiques visuelles requise pour la commande d'un robot porteur d'une sonde ultrasonore. Cette méthodologie est dérivée pour les cas d'échographie 2D et 3D. • Un système complet de palpation automatique fournissant une élastographie ultrasonore quantitative. Le schéma de contrôle, proposé dans le présent manuscrit, est composé de trois tâches robotiques hiérarchiques collaborant les unes avec les autres. Elles ont pour objectif de constamment afficher la carte d'élasticité d'une région d'intérêt donnée. Dans l'ordre des priorités les tâches proposeés sont : l'application d'un mouvement de palpation par une commande par retour d'effort, le centrage automatique d'une cible correspondant à un tissu rigide dans l'image par asservissement visuel, l'orientation automatique de la sonde ultrasonore afin d'observer la cible avec différents angles de vue. Cette contribution a été publiée dans deux articles de deux conférences internationales en robotique vi de premier rang : IROS 2016 [START_REF] Patlan-Rosales | Automatic palpation for quantitative ultrasound elastography by visual servoing and force control[END_REF] (IEEE / RSJ International Conference on Intelligent Robots and Systems) et ICRA 2017 [START_REF] Patlan-Rosales | A robotic control framework for 3-D quantitative ultrasound elastography[END_REF] (IEEE International Conference on Robotics and Automation). • Une nouvelle méthode d'estimation de la carte des contraintes tissulaires. Cette approche est basée sur l'estimation des déplacements d'une région d'intérêt, au sein d'une image échographique. Ceux-ci sont calculés à l'aide des paramètres géométriques caractérisant un système de recalage d'image déformable. • Une nouvelle approche d'estimation par échographie 2D de la carte des déformations d'une région d'intérêt donnée soumise à des perturbations de type mouvements physiologiques. 
La compensation de mouvement repose sur un asservissement visuel dense qui déplace la sonde de façon à annuler le mouvement relatif entre la sonde et le tissu mobile. Ainsi, une estimation robuste de l'élasticité des tissus peut être réalisée sur des structures en déplacement. Cette contribution a été publiée dans un article de la conférence internationale IROS 2017 [START_REF] Patlan-Rosales | Strain estimation of moving tissue based on automatic motion compensation by ultrasound visual servoing[END_REF] (IEEE / RSJ International Conference on Intelligent Robots and Systems). • Un système haptique basé sur l'élastogramme des tissus. Ce dernier retourne à l'utilisateur la sensation de l'élasticité des tissus, tout en permettant à l'utilisateur de déplacer la position de la région d'intérêt à analyser dans l'image ultrasonore. • Un dispositif associant la télé-opération de la sonde échographique au système haptique présenté précédemment. Structure de la thèse Le présent manuscrit de thèse est organisé comme suit : Le chapitre 1 introduit les concepts de base et l'état de l'art de l'élastographie ultrasonore. Il est divisé en trois sections. La première décrit succinctement la théorie physique de la formation du faisceau d'ultrasons, puis explique la reconstruction géométrique de l'image ultrasonore. La deuxième présente les concepts élémentaires de l'élastographie, ainsi qu'un état de l'art. Ce dernier détaille plus particulièrement les contributions existantes dans le domaine de l'élastographie ultrasonore. La dernière section rappelle les principes de l'asservissement visuel qui seront utilisés dans les chapitres suivants. Le chapitre 2 présente le système de palpation robotique que nous proposons pour réaliser une élastographie ultrasonore quantitative. Une partie traite du protocole expérimental envisagé dans le cadre de la thèse. Elle détaille notamment l'ensemble des vii équipements utilisés, mais aussi les outils de travail nécessaires à la mise en oeuvre de ce dispositif robotique. Ensuite, trois sections principales décrivent les tâches robotiques proposées pour concevoir le système de palpation robotique dédié à l'élastographie ultrasonore. Ces tâches sont au nombre de trois. La première correspond à un contrôle du mouvement d'oscillation de la sonde ultrasonore par une commande en effort. La deuxième consiste à extraire et utiliser les paramètres d'élasticité du tissu dans une commande par asservissement visuel permettant de maintenir une visibilité optimale du tissu analysé. Quant à la troisième, elle a pour objectif d'orienter automatiquement la sonde à ultrasons selon un angle fourni par l'utilisateur. Etant donné que ces trois tâches sont couplées, une approche hiérarchique est présentée afin de pouvoir les utiliser en combinaison dans le système de palpation proposé. Les résultats expérimentaux obtenus avec une sonde échographique 2D ou 3D sont décrits à la fin du chapitre. Le chapitre 3 présente une méthode de compensation de mouvement robuste, utilisée pour estimer la carte des déformations d'un tissu en déplacement. Ce système complète et améliore celui décrit au chapitre 2. Ce chapitre comprend cinq sections. La première décrit des travaux liés au suivi visuel de tissus déformables. La deuxième présente un aperçu des modèles de suivi visuel, testés avec des images échographiques réelles. Ensuite, une présentation détaillée de notre dispositif de suivi visuel dense est effectuée. 
Cette section traite également de l'utilisation du modèle de suivi pour estimer la carte des déformations du tissu en mouvement. La section 4 présente le schéma de contrôle utilisé pour compenser une perturbation, de type mouvement physiologique, par l'intermédiaire d'une sonde échographique 2D actionnée par un robot. Dans cette partie, la compensation du mouvement est basée sur un asservissement visuel dense. La dernière section introduit puis discute les résultats expérimentaux obtenus durant des expérimentations réalisées sur un fantôme simulant des tissus en mouvement. Le chapitre 4 propose un système haptique basé sur la carte des contraintes du tissu, et restituant à l'utilisateur la sensation de l'élasticité des tissus. Ce chapitre est divisé en quatre sections. La première décrit les concepts de base du retour haptique jusqu'à aboutir au type de retour haptique utilisé dans ce chapitre. La deuxième introduit le modèle permettant de transformer la carte de contraintes en retour haptique. Elle comprend la description d'un système de télé-opération chargé d'assister l'utilisateur dans l'exploration des tissus avec une sonde à ultrasons. La combinaison de la télé-opération et du contrôle de la force oscillatoire est présentée dans la même section. La quatrième section détaille le protocole expérimental ainsi que les résultats obtenus avec le dispositif haptique. Elle approfondit également l'implémentation du système haptique. Puis, une conclusion est proposée pour clore ce chapitre. Bibliography. 139 xiv LIST OF FIGURES INTRODUCTION Medical robotics emerged in the 1980s with the aim to provide the physicians with new tools to extend their ability to treat patients. In 1983, arthrobot was the first robotic assistant used for orthopedic surgery. This robot was voice-commanded by the surgeon to assist in tiring tasks (e.g, holding a limb at the same position for long periods). Arthrobot was a major breakthrough at the time, offering a wide perspective of robotic assisted procedures in medicine. The advantages of accuracy and dexterity of a robot brought more applications in medical procedures since the success of arthrobot. In 1985 the industrial robot PUMA 560 was used as a positioning device to orient a needle for biopsy of the brain [START_REF] Kwoh | A robot with improved absolute positioning accuracy for ct guided stereotactic brain surgery[END_REF]. The target was identified using computed tomography (CT) imaging. Afterwards, many applications in medical robotics were developed, for example the RO-BODOC system (Think Surgical, Inc.) [START_REF] Taylor | An image-directed robotic system for precise orthopaedic surgery[END_REF] in 1992, which assisted a surgeon in a total hip arthroplasty procedure. This system was the first on its kind used on humans. It employs the position of three pins implanted in the hip by the surgeon, locating them in CT images for planning and performing most of the tasks involved in the orthopedic procedure. Since image-guidance offers extensive opportunities for medical robotics, more medical imaging modalities began to be employed in this field. Currently, medical imaging has become essential in most medical procedures, providing examiners with the ability to see through the body without having to incise it (non-invasive). Along CT imaging, magnetic resonance imaging (MRI) and ultrasound imaging (US) have been the most frequently used modalities in medical robotics. 
MRI is a technology that uses strong magnetic fields to reconstruct images of the organs in the body. The acquisition of one image takes time and requires big sized equipment, making this modality incompatible for real-time visual control of medical robots. However, robots have been designed to be used in MRI scanner in order to assist in different tasks as for example the manipulation of a biopsy needle [START_REF] Su | Real-time mri-guided needle placement robot with integrated fiber optic force sensing[END_REF]. On the other hand, ultrasound (US) is a modality based on the propagation of the sound inside the body to generate images of the organs. The equipment used to perform ultrasound imaging is small compared to the other modalities. In addition, US has real-time acquisition capability, which INTRODUCTION makes it the most used technology for the image-guidance of medical robots. One of the first medical robots lead by ultrasound imaging was introduced in 1999 [START_REF] Salcudean | A robot system for medical ultrasound[END_REF]. This robotic system was designed to assist in the examination of the carotid artery by automatically moving an ultrasound probe such that the artery section was always visible in the ultrasound image. This system was the first one to use vision-based robot control (visual servoing) with ultrasound images. Since then, the application of ultrasound with visual servoing has gained great interest [START_REF] Azizian | Visual servoing in medical robotics: a survey. part II: tomographic imaging modalities -techniques and applications[END_REF]. New imaging modalities, such as elastography, have emerged of the analysis of the information obtained from the classical imaging modalities (ultrasound, MRI, CT, etc.). Elastography introduces the promising concept of obtaining quantitative values of the stiffness of a tissue. The measurement of the stiffness can provide the examiner with more tools for diagnosis of diseases avoiding invasive approaches. This concept has been explored during the past three decades in medicine for the diagnosis of breast tumors [START_REF] Goddi | Breast elastography: A literature review[END_REF], liver fibrosis at different stages [START_REF] Barr | Elastography assessment of liver fibrosis: Society of radiologists in ultrasound consensus conference statement[END_REF] and prostate cancer [START_REF] Correas | Ultrasound elastography of the prostate: State of the art[END_REF]. However, the elastography process is currently performed manually, requiring high experience and training of the examiner. To overcome this issue, robotic systems can be used to assist in this medical procedure. Motivations Manual palpation is a medical procedure that has been used in diagnosis for centuries, in which the stiffness of the tissue of a patient is felt with the examiner's hands. It allows to recognize changes on the stiffness of the tissue, indicating a possible disease. This practice is non invasive, simple in concept and needs no equipment. However, it requires great expertise and has significant constraints: it provides only qualitative information, it can be affected by the surrounding tissue and it is limited to the tissues within the reach of the examiner's hands. Palpation is used in the diagnosis of a wide range of diseases. Some illnesses of the liver, such as cirrhosis and hepatitis can be diagnosed by detecting fibrosis. Certain types of cancer, such as breast, prostate and thyroid cancer can be first identified by palpation. 
The early detection of cancer is fundamental in increasing the probabilities of successful treatment. However, the procedures most commonly performed for cancer detection, such as biopsies and mastographies, present significant drawbacks regarding the comfort of the patient, which can provoke for such patient to delay necessary examinations. Therefore, the development of tools for diagnosing which are mostly painless and non-invasive is of the utmost importance. Ultrasound elastography is a tactile imaging method that can measure the stiffness INTRODUCTION of tissue using ultrasound waves. Consequently, it can overcome the limitations of manual palpation, providing important and precise information. Elastography is a process commonly performed by hand, which requires a repetitive motion applied to the tissue. The generation of a stiffness map of a tissue can variate if the pressure applied on the tissue is not regular. As a consequence, the reproducibility of the results in elastography can be affected if several examinations are performed on the same tissue, specially if it is done by different examiners. This shortcoming suggests the need for an innovative method capable of providing more reliable information. On that account, a robotic system is capable of performing a repetitive task with the same pressure, which makes it a great option in the assistance for elastography process. Moreover, if the elastography is well performed, the output elasticity map of the tissue can be reproduced. Finally, the implementation of a robotic system capable of measuring the stiffness of a tissue in a patient can far extend the capabilities of diagnosis by manual palpation, creating a new tool to aid an examiner in improving the treatment of patients. The main goal of the thesis is to provide a general robotic control framework to assist an examiner in the elastography process. A general overview of the robotic framework we propose is illustrated by the block diagram presented in Figure 2 and described as follows: Goal of the thesis 1. Development of a robotic system to perform the repetitive palpation motion on tissues, which is always needed in classic ultrasound elastography process. This goal requires the design of a robotic controller that applies motion with a conventional ultrasound transducer. INTRODUCTION 2. Estimation of quantitative elastic information of the tissue in real time by the development of a fast elastography process. 3. Use of the elastic information of the tissue to perform robotic tasks, which can assist the examiner during the elastography process. 4. Teleoperation of the ultrasound probe to remotely explore the tissue. Contributions This dissertation presents several contributions to medical robotic assistance, which are listed as follows: • A new methodology to use the strain map of the tissue as input of a visual servoing control scheme. This contribution includes the real-time estimation of the strain map, and the extraction of visual features required for the image-based control. This methodology is developed for the cases of 2D and 3D ultrasound information. • A complete system performing automatic palpation and providing quantitative ultrasound elastography. The proposed control process is composed of three hierarchical robotic tasks collaborating with each other. These tasks are proposed with the goal of always obtaining the strain map visibility of a region of interest. 
According to their priority, the three tasks are palpation motion by force control, automatic centering of a stiff tissue target by visual servoing and the orientation of the probe for tissue exploration. This contribution was published in two articles in the proceedings of the two major international conferences in robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) [START_REF] Patlan-Rosales | Automatic palpation for quantitative ultrasound elastography by visual servoing and force control[END_REF] and IEEE International conference on Robotics and Automation (ICRA) [START_REF] Patlan-Rosales | A robotic control framework for 3-D quantitative ultrasound elastography[END_REF]. • A new method to estimate the tissue strain map. This approach is based on the estimation of the motion displacements of a region of interest inside an ultrasound image. The displacements are computed through the geometric parameters involved in a deformable registration system. • A new approach to estimate the strain map in a region of interest under motion perturbation using a 2D ultrasound probe. The motion compensation is based on a dense visual servoing approach that actuates the 2D ultrasound probe such that the perturbation motion is canceled. As a result, a robust estimation of the tissue elasticity can be performed on moving tissues. This contribution was published in an article in the proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) [START_REF] Patlan-Rosales | Strain estimation of moving tissue based on automatic motion compensation by ultrasound visual servoing[END_REF]. INTRODUCTION • A haptic system using the tissue elastogram that allows to the user to "feel" the elasticity of the tissue while changing the position of the region of interest in the ultrasound image. • The combination of teleoperation of the ultrasound probe with the haptic system described in the previous contribution. Structure of the thesis The manuscript of this thesis is organized as follows: Chapter 1 introduces the basic concepts and state of the art of ultrasound elastography. The chapter is divided in three main sections. First section starts with a brief description of the physics of the ultrasound beam forming, and it ends with the geometry of the ultrasound image. Second section presents the elementary concepts of elastography and a state of the art focusing particularly on ultrasound elastography. Last section recalls the visual servoing principle, which is used along the next chapters. Chapter 2 details the robotic palpation system we propose to perform quantitative ultrasound elastography. This chapter introduces the experimental setup considered in this thesis, detailing all the equipments and the workflow used in the implementation of the robotic system. Afterwards, three main sections describe the three proposed robotic tasks in the design of the robotic palpation system for ultrasound elastography. The first task corresponds to an oscillatory force control of the ultrasound probe required for the elastography process. The second task involves the extraction and use of geometric parameters of a stiff tissue that is then automatically centered in the field of view of an ultrasound probe so the stiff tissue is always visible. The third task automatically adjusts the orientation of the ultrasound probe to a desired angle introduced by the user. 
As these three tasks are coupled, a hierarchical approach is presented to combine them into the proposed palpation system. Experimental results obtained by using either a 2D or a 3D ultrasound probe are presented at the end of the chapter. Chapter 3 presents a robust motion compensation process used to estimate the strain map of a moving tissue. This functionality complements and improves the system presented in chapter 2. This chapter is divided in five main sections. First section describes some works related to the visual tracking of deformable tissues. Second section presents an overview of the visual tracking models tested with real ultrasound images. Then, we detail our proposed dense visual tracking system. This section also explains INTRODUCTION the use of the tracking model to estimate the strain map of the moving tissue. Afterwards, section four presents the control system used to compensate the motion using a 2D ultrasound probe actuated by a robot. In this section, the motion compensation is based on a dense visual servoing process. The last section presents and discusses the experimental results obtained from experiments performed with a phantom simulating moving tissues. Chapter 4 proposes a haptic system based on the strain map of the tissue to provide the user with the feeling of the tissue elasticity. This chapter is divided in four sections. The first section describes the basic concepts of haptic feedback leading to the type of haptic feedback used in this chapter. The second section presents the model of the transformation from strain map to haptic force feedback. This section includes the description of a teleoperation system, which helps the user in the exploration of the tissues with an ultrasound probe. The fusion of the teleoperation with the oscillatory force control is presented in the same section. Section four shows the experimental setup and results of the haptic system. This section also describes the implementation details of the haptic system followed by the conclusion of the chapter. Chapter 5 delivers the general conclusions of this thesis and proposes several shortterm and long-term perspectives of this work. CHAPTER 1 ULTRASOUND ELASTOGRAPHY AND BASIC PRINCIPLES OF VISUAL SERVOING The quest of building efficient systems to assist in medical procedures has been increasing in the last decades. Medical imaging technologies as ultrasound imaging, magnetic resonance imaging (MRI), X-ray radiography and tomography have been used by physicians to facilitate the diagnosis of illnesses and their treatment, often through complex medical procedures. However, the images produced by every one of these technologies are vastly different in terms of content, appearance and resolution and their analysis needs training and experience. When searching for malign soft tissue in a subject, medical image analysis can be exploited for visual localization, however, the elasticity properties of the studied tissue can provide more precise and valuable information. Elastography has been developed with the aim of finding the elastic parameters of a tissue to help in the detection of tumors and other malign bodies by their stiffness. This technology has been implemented using ultrasound imaging and MRI. Since the ultrasound imaging modality offers the capabilities of real-time and portability, ultrasound elastography is a promising technique to be used in building a robotic-assisted system as the one presented in this dissertation. 
This chapter presents the basic principles of ultrasound imaging in section 1.1, from beam forming to the classic B-mode image. A state of the art of elastography is then introduced in section 1.2, focusing mainly on the techniques used for ultrasound elastography. In section 1.3, several robotic systems involving elastography are presented. The chapter finally recalls, in section 1.4, the principle of visual servoing, which is needed for a better understanding of the systems proposed in the next chapters.

1.1 Ultrasound imaging

In medical applications, ultrasound imaging has become one of the most widespread modalities, owing to significant advantages over other technologies: no radiation, low cost and portability, to mention a few. The basic principle of US imaging consists in sending several pulses of ultrasound into the body and waiting for the echoes to return. The echoes are then processed to produce an image of the internal structures of the tissue. The ultrasound pulses are mechanical waves created by a vibrating object and propagated by a medium. The energy of these waves traveling through the body is attenuated, scattered and reflected, producing echoes.

An ultrasound wave can be represented as a repetitive pattern of high and low amplitudes. The distance between two peaks is the wavelength λ, which characterizes the repetition of the wave and is defined as

λ = c / f    (1.1)

where c and f are the speed and the frequency of the sound. The speed of sound in soft tissue is commonly assumed to be constant and equal to 1540 m s⁻¹. The frequency of sound f, as mentioned before, is in the range between 2 and 40 MHz. Ultrasound waves are generally produced by a piezoelectric transducer driven with electrical pulses sent from an ultrasound machine. The piezoelectric transducer also receives the echoes after they have traveled through the tissue and converts them into electrical pulses for the ultrasound machine.

The propagation speed of ultrasound in a homogeneous tissue depends on two properties of the considered tissue, the bulk modulus B and the density ρ:

c = √(B/ρ) .    (1.2)

The propagation speed therefore depends on the material of the medium in which the wave travels: it is 1540 m s⁻¹ in soft tissue and around 4000 m s⁻¹ in bone.

Tissue is composed of different materials that influence the propagation of the ultrasound waves. This propagation is described by Snell's law, which relates the directions of incidence, reflection and transmission of the ultrasound wave at the interface between two different materials. Figure 1.1 shows two media with sound propagation speeds c_1 and c_2 respectively. Snell's law states that if the sound wave reaches the interface with a non-zero incidence angle θ_i, it is reflected in the first medium and transmitted into the second medium with angles θ_r and θ_t such that

sin θ_i / c_1 = sin θ_r / c_1 = sin θ_t / c_2 .    (1.3)

In practice, reflections of the sound wave do not occur only at tissue boundaries. Tissues are inhomogeneous, and the resulting local deviations of density also contribute to the reflected wave. This is referred to as scatter reflection, which is represented as a collection of point scatterers retransmitting the incident sound wave in all directions.
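Before turning to scattering and attenuation, relations (1.1)-(1.3) can be made concrete with a minimal numerical sketch. It is written in plain Python/NumPy; the frequency, bulk modulus and density values are illustrative assumptions, not measurements from this thesis.

```python
import numpy as np

# Speed of sound and a typical probe frequency (values quoted in the text above).
c_soft = 1540.0      # m/s, soft tissue
c_bone = 4000.0      # m/s, bone (approximate)
f = 5.0e6            # Hz, a mid-range imaging frequency (assumed)

# (1.1) wavelength in soft tissue
wavelength = c_soft / f
print(f"wavelength in soft tissue at {f/1e6:.0f} MHz: {wavelength*1e3:.3f} mm")

# (1.2) propagation speed from bulk modulus B and density rho
# Order-of-magnitude soft-tissue values (assumed for illustration).
B = 2.2e9            # Pa
rho = 1000.0         # kg/m^3
print(f"c = sqrt(B/rho) = {np.sqrt(B / rho):.0f} m/s")

# (1.3) Snell's law: transmission angle at a soft-tissue/bone interface
theta_i = np.deg2rad(10.0)                       # incidence angle
sin_theta_t = np.sin(theta_i) * c_bone / c_soft  # sin(theta_t)/c2 = sin(theta_i)/c1
if abs(sin_theta_t) <= 1.0:
    print(f"transmission angle: {np.rad2deg(np.arcsin(sin_theta_t)):.1f} deg")
else:
    print("total internal reflection: no transmitted wave")
```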
Whether a scatterer perturbs the wave significantly depends on its size relative to the wavelength: if the scatterer is smaller than the wavelength of the induced sound wave, the interference it produces is minimal; otherwise the interference is important and can cause occlusions.

The ultrasound wave propagation is also affected by the traveled distance: the wave energy is reduced by the scatterers and by the absorption of the medium. This energy loss is characterized by the attenuation coefficient α (expressed in decibels), which depends on the frequency of the sound wave. For soft tissue, the attenuation is usually between 0.3 and 0.6 dB/cm/MHz.

Ultrasound beam formation

The generation and detection of ultrasound waves are performed through a piezoelectric crystal, which vibrates when an electric field is applied and generates an electric signal when it is subject to a mechanical vibration. The crystals are embedded in a transducer, usually an array of 128 elements, used as a transmitter and as a detector at the same time. Commonly, every piezoelectric crystal is driven with a sinusoidal electric signal, resulting in the emission of an ultrasound wave with a given frequency. The energy of the ultrasound wave reflected by the tissues and coming back to the crystal is modified by the propagation factors discussed above. The transducer toggles from the transmitter state to the receiver state every time a pulse is emitted. The echo received by the transducer is captured as a function of time by the ultrasound station. This detected signal is often called the radio frequency (RF) signal, because its frequency range corresponds to that of radio waves in the electromagnetic spectrum (see Figure 1.3). An RF signal is described by the expression

y(t) = A(t) cos(ω_r t + φ(t))    (1.4)

where ω_r is the carrier frequency, φ is the phase and A is the amplitude of the RF signal (also known as the envelope of the RF signal). The RF signal is written here in analytic form; in practice it is recorded using a sampling frequency f_s. The frequency f_s follows the Nyquist criterion

f_s > 2 f_max ,    (1.5)

where f_max is the maximum frequency of the RF signal y(t). For typical commercial ultrasound transducers, the sampling frequency f_s is set between 20 and 40 MHz. In our work, we use the beamformed RF data provided by the ultrasound machine. The amplitude of the RF signal is commonly represented by a 16-bit integer.

The high-frequency information is removed by envelope detection (see Figure 1.4). This process is computed as

Y_env(t) = √( y(t)² + H(y)(t)² ) ,    (1.6)

where H(y)(t) is the Hilbert transform of the RF signal y(t), defined as

H(y)(t) = -(1/π) lim_{ς→0} ∫_ς^∞ [ y(t + τ) - y(t - τ) ] / τ dτ .    (1.7)

Envelope detection is also known as demodulation of the RF signal. In practice, the envelope of the RF signal is obtained in the Fourier domain as

F(H(y))(ω) = σ_H(ω) · F(y)(ω) ,    (1.8)

where F denotes the Fourier transform and σ_H is the sign function

σ_H(ω) = i for ω < 0,   0 for ω = 0,   -i for ω > 0.    (1.9)

The envelope detection of the RF signal is used in the ultrasound image reconstruction.
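The demodulation step (1.6)-(1.9) is conveniently expressed through the analytic signal, which SciPy computes directly. The following sketch applies it to a synthetic RF line; the carrier frequency, sampling rate and Gaussian modulation are invented for illustration and do not correspond to a specific probe.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic RF line: a Gaussian-modulated carrier (illustrative values only).
fs = 40e6                       # sampling frequency, within the 20-40 MHz range quoted above
fc = 5e6                        # carrier (transducer) frequency
t = np.arange(0, 20e-6, 1/fs)   # 20 microseconds of signal
true_env = np.exp(-((t - 10e-6) ** 2) / (2 * (1.5e-6) ** 2))    # A(t)
rf = true_env * np.cos(2 * np.pi * fc * t)                      # y(t) = A(t) cos(w_r t)

# Envelope detection, eq. (1.6): |y + i H(y)| = sqrt(y^2 + H(y)^2).
# scipy.signal.hilbert returns the analytic signal y + i H(y).
analytic = hilbert(rf)
envelope = np.abs(analytic)

# The detected envelope should match the modulation A(t) away from the borders.
err = np.max(np.abs(envelope[100:-100] - true_env[100:-100]))
print(f"max envelope error (interior samples): {err:.3e}")

# 8-bit gray-level mapping used for display (before log compression).
gray = np.uint8(255 * envelope / envelope.max())
```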
Then, for a transducer scanning different lines (number of scan lines SL), a brightness mode (b-mode) image is obtained as shown in Figure 1.5a. In Figure 1.5a we can only observe specular reflections in the brightest areas. However, reflections due to the scatterers are barely visible due to the large difference of amplitude between the reflections. This issue can be easily addressed by performing a logarithmic compression of every scan line obtained from the transducer as Y l og (t ) = A log(Y env (t )) + β (1.10) where A is the amplification parameter and β is the linear gain parameter. This process enhances the image contrast as shown in the example presented in Figure 1.5b. Ultrasound image formation also depends on the geometry of the ultrasound transducer, more commonly known as ultrasound probe. There exist different ultrasound probe shapes, however two of the most common shapes are linear and convex probes. The linear probes are mostly employed for vascular imaging and the convex probes are used in abdominal imaging. In the following, we explain the relation between the geometry of a 2D ultrasound probe and the image formation. There are several parameters to consider in the reconstruction of an image from the geometry of the probe. For example, in case of a linear probe the transducer elements are co-linear to the extreme of the probe's surface and the scan lines are parallel to each other (see Figure 1.6). In the case of a convex probe, the transducer elements are positioned along the arc of the surface of the probe and the direction of the scan lines is therefore normal to this curved surface (Figure 1.7). The geometric parameters of a linear probe are shown in Figure 1.6. Every point inside the RF array is represented by (i , j ), where i and j are the indexes of the scan line and the sample, respectively. Using those parameters, the metric coordinates (x, y) of any point located in the RF array at coordinates (i , j ) can be computed by x = α L • (i -i 0 ) (1.11) y = α A • j (1.12) where α A and α L are the axial and lateral resolutions, respectively. α L is the distance between two consecutive elements of the transducer, usually a measure given by the manufacturer. α A is the distance value between two adjacent samples in one scan line, and it is defined as α A = c f s (1.13) In the case of a convex probe, the geometry of the probe is given in Figure 1. y = r cos(θ), (1.15) where r is the distance of the point to the origin of F p and θ is the angle with respect to y-axis. The RF scan lines obtained from the convex probe are stored in a rectangular array also called RF array. The origin of this array is at the top-left corner as shown in Figure 1.7 (bottom-right). The distance r is computed using the radius of the ultrasound probe r p and the distance between the origin and the j sample as, r = r p + α A j (1.16) where α A is the distance between two adjacent samples for every scan line i . This value is obtained as in Equation (1.13). To obtain the angle θ for the i -th scan line, first we need to know the angular distance between two consecutive scan lines α θ . This angular distance can be computed as α θ = α L r p (1.17) where α L is the separation between two contiguous elements of the transducer. Therefore, the angular field of view of the convex probe is Θ = SLα θ (1.18) where SL is the number of scan lines recorded (typical values are 128 and 192). The limits of the angular field of view correspond to θ min = -Θ 2 and θ max = Θ 2 . 
Returning to the convex probe, the angle θ of the i-th scan line is then defined as

θ = θ_min + i α_θ .    (1.19)

With these relations (Equations 1.14 to 1.19) we can associate the (i, j) indexes of each sample stored in the RF array with its metric coordinates (x, y). If envelope detection is applied to the scan lines, the reconstructed image, indexed by the memory coordinates (i, j), is known as the pre-scan image (see Figure 1.8a). In a pre-scan image, however, the geometric structures of the tissue scanned by a convex probe appear distorted, since the geometry of the probe is not taken into account. The rectified image, which accounts for the geometry of the field of view of the ultrasound probe, is called the post-scan image, as shown in Figure 1.8b.

In the case of a convex probe, a point with metric Cartesian coordinates (x, y) is mapped to the pixel coordinates (u, v) of the post-scan image through the relations

u = [ x - (r_p + SN α_A) sin(θ_min) ] / s    (1.20)
v = [ y - r_p cos(θ_min) ] / s    (1.21)

where SN is the number of samples in a scan line and s is the scaling factor of the image, i.e. the size of a pixel in meters. After every sample has been mapped into the post-scan image, an image interpolation is applied to fill the missing pixel intensities between the captured scan lines.

1.2 Elastography: state-of-the-art

Many diseases cause changes in the mechanical properties of tissues. Current imaging devices such as computed tomography (CT), ultrasound (US) and magnetic resonance imaging (MRI) are not directly capable of measuring these mechanical properties. This information can however be obtained with elastography imaging techniques, which consist in applying an internal or external compression on the tissues and measuring the resulting strain distribution from the image. This strain distribution is related to the tissue elasticity and yields a strain image of the underlying tissues. This section provides a state of the art of elastography, starting from the principle of elastography. An overview of elastography applied to different medical imaging modalities is then presented, following the organization shown in Figure 1.9. Since this thesis concerns ultrasound elastography, the state of the art mainly focuses on techniques dedicated to the ultrasound modality, presented in section 1.2.5.

Figure 1.9: Organization of the state-of-the-art of elastography. The number at the bottom of every block is the subsection number where the topic is described.

1.2.1 Elastography principle

The mechanical properties of an organ or tissue provide essential information for diagnosis in medicine. For example, a tumor or diseased tissue can be detected by its stiffness, generally perceived by palpation, a physical examination performed by applying pressure with the hand or fingers on the surface of the body. This method is however limited by the accessibility of the tissue of interest to the examiner, and it only provides qualitative information that can be distorted by the surrounding tissues. Elastography is one of the existing approaches able to overcome these issues. The principle of elastography is to apply an external compression on the tissues and to measure the resulting displacements using medical imaging, in order to estimate a quantitative image of the strains (also known as elastogram or strain map). In practice, the elastogram is derived from the analysis of the pre- and post-compression states of the tissue.
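Before moving on to strain estimation, the convex-probe relations (1.14)-(1.21) introduced above can be tied together in a minimal nearest-neighbour scan-conversion sketch. The probe radius, pitch and pixel size used below are assumptions for illustration, and the interpolation step mentioned in the text is replaced by a simple nearest-neighbour lookup.

```python
import numpy as np

def scan_convert(prescan, r_p, alpha_A, alpha_theta, theta_min, s):
    """Nearest-neighbour scan conversion of a convex-probe pre-scan image.

    prescan: (SL, SN) array indexed by (scan line i, sample j). Follows the
    geometry of eqs. (1.14)-(1.21): a sample (i, j) sits at
    r = r_p + alpha_A * j, theta = theta_min + i * alpha_theta,
    x = r sin(theta), y = r cos(theta).
    """
    SL, SN = prescan.shape
    theta_max = theta_min + SL * alpha_theta
    r_max = r_p + SN * alpha_A

    # Bounding box of the field of view in metric coordinates.
    x_min, x_max = r_max * np.sin(theta_min), r_max * np.sin(theta_max)
    y_min, y_max = r_p * np.cos(theta_min), r_max
    W, H = int((x_max - x_min) / s), int((y_max - y_min) / s)

    postscan = np.zeros((H, W), dtype=prescan.dtype)
    v, u = np.mgrid[0:H, 0:W]
    x = x_min + u * s                    # invert the pixel mapping of (1.20)-(1.21)
    y = y_min + v * s
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(x, y)             # angle measured from the y-axis

    j = np.round((r - r_p) / alpha_A).astype(int)                 # invert (1.16)
    i = np.round((theta - theta_min) / alpha_theta).astype(int)   # invert (1.19)
    inside = (0 <= i) & (i < SL) & (0 <= j) & (j < SN)
    postscan[inside] = prescan[i[inside], j[inside]]
    return postscan

# Illustrative parameters (assumed): 60 mm probe radius, 128 lines, 0.3 mm pitch.
r_p, alpha_L, SL, SN = 60e-3, 0.3e-3, 128, 2000
alpha_A = 1540.0 / 40e6                  # eq. (1.13)
alpha_theta = alpha_L / r_p              # eq. (1.17)
theta_min = -SL * alpha_theta / 2        # eq. (1.18) and the +-Theta/2 limits
img = scan_convert(np.random.rand(SL, SN), r_p, alpha_A, alpha_theta, theta_min, s=0.2e-3)
print(img.shape)
```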
The solution proposed by Ophir et al. [START_REF] Ophir | Elastography: A quantitative method for imaging the elasticity of biological tissues[END_REF] uses sound waves generated by a piezoelectric transducer array (usually RF signals) and models every wave as a succession of springs (see Fig. 1.10a). If an axial force is applied to the succession of springs (see Fig. 1.10b), the length of each spring changes according to Hooke's law:

F = Σ_i k_i Δl_i ,    (1.22)

where F is the force applied to the succession of springs, in newtons (N), k_i is the stiffness of the i-th spring (N m⁻¹) and Δl_i is the deformation of that spring, in meters (m). The strain value (dimensionless) of each i-th spring is defined as

ε_i = Δl_i / l_i .    (1.23)

Fig. 1.10b illustrates this scheme, where l_i is the initial length of the i-th spring, l'_i is its length after a stress is applied and Δl_i = l'_i - l_i is the difference of lengths. The relation between the change of length Δl_i and the strain value ε_i is illustrated in Figure 1.11.

This spring analogy of Hooke's law is then adapted to the echo signals: the change of length Δl becomes the time delay Δt between the pre- and post-compression signals. In practice, the time delay Δt is usually obtained from a cross-correlation analysis between pre- and post-compression segments of the RF signal. However, the amplitudes of the RF signal change between the different compression states of the tissue, which can lead to a false estimation of Δt when using cross-correlation analysis. Equation (1.23) shows that an optimal estimation of Δt is essential to compute the best approximation of the strain value ε. Other ways to estimate Δt are phase zero estimation (PZE) [START_REF] Pesavento | A time-efficient and accurate strain estimation concept for ultrasonic elastography using iterative phase zero estimation[END_REF], axial velocity estimation (AVE) [START_REF] Loupas | An axial velocity estimator for ultrasound blood flow imaging, based on a full evaluation of the doppler equation by means of a twodimensional autocorrelation approach[END_REF] and optical flow (OF) [START_REF] Pan | A two-step optical flow method for strain estimation in elastography: Simulation and phantom study[END_REF]. PZE computes the displacement Δt from the estimation of the zero phase between the pre- and post-compression RF signals represented in the Fourier domain. AVE and OF both compute the velocity between the two RF signals: AVE uses the Doppler effect, while OF relies on a first-order Taylor approximation.

The tissue elasticity varies among the different materials inside the body. Young's modulus E describes the tendency of a material to deform when a stress is applied to it:

E = σ / ε    (1.24)

where σ is the stress applied to the material, measured in pascals (Pa). Young's modulus is also expressed in Pa, since the strain ε is dimensionless. The strain value can be obtained with the spring analogy of Hooke's law; however, the estimation of the Young's modulus requires knowledge of the applied stress, which cannot be measured when the pre- and post-compression process is performed manually.

An alternative way to obtain the Young's modulus is the use of shear waves, which are elastic waves generated by an external actuator (vibration), by natural physiological stress (e.g. breathing) or by acoustic radiation force impulses (ARFI) [START_REF] Sandrin | Shear modulus imaging with 2-D transient elastography[END_REF].
In this approach, local elasticity is estimated from the phase of the displacement rather than from its amplitude. The propagation of a shear wave is related to the shear elastic modulus by

ρ ∂²u⃗/∂t² = G Δu⃗    (1.25)

where ρ is the density of the medium, u⃗ is the displacement vector and G (in kPa) is the shear modulus of the medium [START_REF] Vappou | Magnetic resonance-and ultrasound imaging-based elasticity imaging methods: A review[END_REF]. The shear wave velocity is thus related to the shear modulus through

c_s = √(G/ρ) .    (1.26)

As shown by Equation (1.26), the estimation of the local shear wave velocity c_s is required to compute the shear modulus G. The average shear wave velocity is calculated from the phase shift of the displacement between two locations and from the distance between them.

The existing elastography approaches mainly concern modalities such as magnetic resonance elastography (MRE), computed tomography elastography (CTE), optical coherence tomographic elastography (OCTE) and ultrasound elastography (USE), which are described in the next sections.

1.2.2 Magnetic resonance elastography (MRE)

MRE is a non-invasive medical imaging technique that reconstructs the stiffness of the tissue by imaging the propagation of shear waves with MRI. MRE assesses the mechanical properties of the tissue quantitatively [START_REF] Mariappan | Magnetic resonance elastography: A review[END_REF][START_REF] Muthupillai | Magnetic resonance elastography by direct visualization of propagating acoustic strain waves[END_REF]. The technology is becoming available as an upgrade of conventional MRI scanners. MRE has proven to be beneficial as a clinical tool for the diagnosis of diseases such as hepatic fibrosis, which increases the stiffness of liver tissue [START_REF] Venkatesh | Magnetic resonance elastography of liver: technique, analysis, and clinical applications[END_REF][START_REF] Venkatesh | MR elastography of liver tumors: preliminary results[END_REF] (see Figure 1.12). To obtain an MRE image, shear waves with frequencies up to 500 Hz are first induced in the tissue by an external actuator. The shear waves are then imaged inside the body using an MRI acquisition. Finally, the imaged shear waves are processed to generate quantitative images of the stiffness of the tissue. MRE does not have real-time capability, since the MRI acquisition takes at least 1 s to generate the images of the shear waves. However, MRE provides a quantitative stiffness over all the scanned tissue. One important application of MRE is the measurement of the stiffness of in-vivo brain tissue, as presented in [START_REF] Vappou | Assessment of in vivo and post-mortem mechanical behavior of brain tissue using magnetic resonance elastography[END_REF].
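Returning for a moment to relation (1.26), the shear modulus follows directly from a measured shear-wave speed. The numbers below are plausible orders of magnitude, not values from this work, and the final line uses the standard near-incompressibility assumption (Poisson ratio close to 0.5) to convert the shear modulus into a Young's modulus.

```python
# Minimal illustration of eq. (1.26) inverted: G = rho * c_s^2.
rho = 1000.0          # kg/m^3, soft-tissue density (assumed)
c_s = 2.5             # m/s, a plausible shear-wave speed in healthy liver (assumed)

G = rho * c_s ** 2    # shear modulus, from c_s = sqrt(G / rho)
print(f"G = {G / 1e3:.1f} kPa")

# For nearly incompressible soft tissue (nu ~ 0.5), E = 2 G (1 + nu) ~ 3 G.
E = 3.0 * G
print(f"E ~ {E / 1e3:.1f} kPa")
```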
The second way to implement CTE is to evaluate the stiffness of different tissues using a tactile sensor. Those tissues are imaged with CT and then segmented in the images (automatically or manually). The intensities in the image and the stiffness measured with the tactile sensor are then related through a curve-fitting process. Afterwards, the CTE image is reconstructed to provide the quantitative stiffness of the tissue in the CT image [START_REF] Sasaki | CT elastography: A pilot study via a new endoscopic tactile sensor[END_REF]. The major drawback of CTE is the radiation exposure of the patient. However, CTE offers different information, which sometimes cannot be observed with other imaging modalities.

Optical coherence tomographic elastography (OCTE)

OCTE is a modality based on optical coherence tomography (OCT), a medical imaging technique that uses laser light and reaches micrometer resolution. OCT is based on low-coherence interferometry, typically employing near-infrared light; this long-wavelength light penetrates into the scattering medium. The information that OCT provides has high resolution. In 1998, Schmitt [START_REF] Schmitt | OCT elastography: imaging microscopic deformation andstrain of tissue[END_REF] used OCT information to obtain strain maps of micro tissues (e.g., the skin of the finger). More recently, in 2013, Sampson et al. [START_REF] Sampson | Optical elastography probes mechanical properties of tissue at high resolution[END_REF] improved the differentiation of tissue pathologies, such as cancer or atherosclerosis, based on Schmitt's work.

Ultrasound elastography (USE)

USE is a modality that has been in development for about 25 years. The principle of elastography presented in section 1.2.1 is based on ultrasound imaging. USE was developed to reconstruct an image representing the elasticity of the structures observed in the field of view of an ultrasound probe. This technique has been applied using different approaches to measure elastic parameters. These approaches can be divided into two categories according to the measurement of the displacements: quasi-static and dynamic. We show in Figure 1.13 four of the existing approaches: quasi-static, remote palpation, transient and supersonic. For all the mentioned approaches, an overview of related works is provided in the following sections.

In the quasi-static approach, the time-delay between a pre-compression RF segment s_pre and a post-compression RF segment s_post is estimated by cross-correlation:

R(n) = (s_pre ⋆ s_post)(n) = Σ_{m=1}^{M} s*_pre(m) s_post(m + n), (1.27)

where ⋆ is the cross-correlation operator and * denotes the conjugate of the function. n is the sample displacement (also known as the lag) within a certain range (usually [-N, N]). The output of the correlation R(n) is a signal of length (2N - 1), where the amplitude at every n-th displacement corresponds to the measured similarity value. The optimal displacement Δt between the two RF signal segments can then be estimated by

Δt = arg max_n R(n). (1.28)

The optimal displacement Δt occurs when the two signals are the most similar. An illustrative example of the cross-correlation is shown in Figure 1.14. We can observe that RF signals are high-frequency signals, which can cause the repetition of some segments. This repetition can produce a wrong estimation of the optimal displacement Δt when using N >> M. The quasi-static ultrasound elastography approach offers a solution to this issue.
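Before presenting it in detail, the time-delay estimation of Equations (1.27)-(1.28) can be illustrated with the following minimal sketch (Python/NumPy, purely illustrative; the segment length, lag range and synthetic speckle-like signal are assumptions, and the conjugate in Equation (1.27) is trivial for real-valued RF samples):

```python
import numpy as np

def estimate_lag(s_pre, s_post, max_lag):
    """Integer lag (in samples) maximizing the cross-correlation R(n)
    between two RF segments, cf. Eqs. (1.27)-(1.28)."""
    lags = np.arange(-max_lag, max_lag + 1)
    # R(n) = sum_m s_pre(m) * s_post(m + n), evaluated for each candidate lag
    R = np.array([np.sum(s_pre * np.roll(s_post, -n)) for n in lags])
    return lags[np.argmax(R)]

# Illustrative example: a synthetic speckle-like segment shifted by 12 samples
# between the pre- and post-compression states (not real RF data).
rng = np.random.default_rng(0)
s_pre = rng.standard_normal(512)
s_post = np.roll(s_pre, 12)

print(estimate_lag(s_pre, s_post, max_lag=32))   # expected: 12
```

In practice the optimal lag is converted into a time-delay through the sampling frequency (Δt = n / f_s), and restricting max_lag to a small search range, as the quasi-static approach allows, reduces both the computational cost and the risk of locking onto a wrong correlation peak.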
The quasi-static approach consists in applying a small compression to the tissue, usually about 1% to 2% of the tissue length [START_REF] Ophir | Elastography: A quantitative method for imaging the elasticity of biological tissues[END_REF]. This quasi-static compression improves the estimation of the displacement by limiting the search range of the cross-correlation analysis, while reducing the computational cost. Once the displacement is estimated, the strain value of the tissue can be computed. As explained in the principle of elastography described in section 1.2.1, the strain value can be computed through Equation (1.23). However, if the strain value is calculated for every segment of the RF signal, it can cause abrupt changes in the strain profile, as illustrated in Figure 1.11. A better strain profile can be obtained by computing the strain value for all the samples in each RF signal segment, but this increases the computational cost. An approach based on least-squares (LSQ) was proposed to avoid the computation of the displacement for every sample [START_REF] Kallel | A least-squares strain estimator for elastography[END_REF]. Let us consider the samples of an RF signal segment as a vector n ∈ [n_i, n_f], with n_i and n_f the initial and final samples of the segment, respectively. A vector of displacements Δt can then be defined as

Δt = an + b, (1.29)

where a and b are the slope and offset values of the segment, respectively. Equation (1.29) can be rearranged as

[Δt(n_i); Δt(n_i + 1); ...; Δt(n_f)] = [n_i 1; n_i + 1 1; ...; n_f 1] [a; b], (1.30)

which can also be written in short form as

Δt = A [a; b]. (1.31)

We can observe from Equation (1.23) that the relation between the displacement Δt and the strain value ε can be expressed as

Δt = εn, (1.32)

leading to redefine a = ε and b = 0. Therefore, Equation (1.31) can be inverted in the least-squares sense to obtain ε from the displacements of the segment:

[ε; 0] = (A^T A)^{-1} A^T Δt. (1.33)

Obtaining the strain values by LSQ as presented in Equation (1.33) still requires the computation of the displacement for all the samples in the segment n. However, Kallel and Ophir [START_REF] Kallel | A least-squares strain estimator for elastography[END_REF] showed that the first row of (A^T A)^{-1} A^T reduces to a vector g(k),

g(k) = ξ(k) ([1, 2, ..., k] - ((k + 1)/2) [1, 1, ..., 1]), (1.34)

where ξ(k) = 12 / (k(k² - 1)).

The methodology presented for the quasi-static ultrasound elastography is the basis of the strain estimation for one RF line. Moreover, this approach can easily be adapted to 2D and 3D ultrasound information [START_REF] Rao | Correlation analysis of three-dimensional strain imaging using ultrasound two-dimensional array transducers[END_REF][START_REF] Shi | Two-dimensional multi-level strain estimation for discontinuous tissue[END_REF]. Since motion estimation is essential for optimal strain measurements, the improvement of this estimation in elastography has been studied in multiple works, like the one presented in [START_REF] Pan | A two-step optical flow method for strain estimation in elastography: Simulation and phantom study[END_REF], where a method based on a two-step optical flow measure is proposed to estimate the tissue strain map.
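A hedged numerical illustration of the least-squares strain estimator of Equations (1.29)-(1.34) is given below (Python/NumPy; the window length and the synthetic displacement values are assumptions). It checks that the closed-form weight vector g(k) gives the same slope as the generic pseudo-inverse solution:

```python
import numpy as np

def lsq_strain(delta_t):
    """Least-squares slope of a displacement segment (Eq. 1.33):
    [a, b] = (A^T A)^{-1} A^T delta_t, with strain = a."""
    k = delta_t.size
    n = np.arange(1, k + 1)
    A = np.column_stack([n, np.ones(k)])
    a, b = np.linalg.lstsq(A, delta_t, rcond=None)[0]
    return a

def kallel_ophir_weights(k):
    """Closed-form first row of (A^T A)^{-1} A^T (Eq. 1.34):
    g(k) = xi(k) * ([1..k] - (k+1)/2), with xi(k) = 12 / (k (k^2 - 1))."""
    n = np.arange(1, k + 1)
    xi = 12.0 / (k * (k**2 - 1))
    return xi * (n - (k + 1) / 2.0)

# Illustrative check: displacements growing linearly with depth
# (slope = strain = 0.01), corrupted by a little noise.
rng = np.random.default_rng(1)
k = 9
depth = np.arange(1, k + 1)
delta_t = 0.01 * depth + 0.0005 * rng.standard_normal(k)

print(lsq_strain(delta_t))                # ~0.01
print(kallel_ophir_weights(k) @ delta_t)  # same value, without any matrix inversion
```

The same kernel reappears in Chapter 2 as the strain filter h(n) of Equation (2.15), where it is convolved with the axial displacement map.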
This technique has been tested in vivo in the breast of myocardial stiffness [START_REF] Konofagou | Myocardial elastography -a feasibility study in vivo[END_REF][START_REF] Varghese | Ultrasonic imaging of myocardial strain using cardiac elastography[END_REF]. Remote palpation There are more ways to exert force on tissue than the mechanical compression presented in the quasi-static approach. Remote palpation introduces the alternative of using ultrasound energy to remotely excite the tissue [START_REF] Nightingale | On the feasibility of remote palpation using acoustic radiation force[END_REF]. In this method, acoustic radiation force impulse (ARFI) is used to locally displace the tissue in order to reveal its mechanical properties. ARFI is created by focusing a high-intensity ultrasound signal on a small region generating a local force in the direction of the propagation of the ultrasound wave. This force is proportional to the power absorbed by the medium at the focal region: F = 2υI c (1.35) where υ is the absorption coefficient, I is the ultrasound intensity and c is the longitudinal wave speed in the medium. This force generates a displacement in the tissue (typically a few micrometers). This phenomenon is the result of the latency of soft tissue in responding to the excitation making the response out of phase [START_REF] Fahey | Acoustic radiation force impulse imaging of thermally-and chemically-induced lesions in soft tissues: preliminary ex vivo results[END_REF][START_REF] Nightingale | Acoustic radiation force impulse imaging: in vivo demonstration of clinical feasibility[END_REF]. The same ultrasound transducer used for imaging can generate ARFI excitation. In the conventional method of ARFI strain imaging, the transducer excites multiple locations in the tissue at a constant depth [START_REF] Fahey | Acoustic radiation force impulse imaging of thermally-and chemically-induced lesions in soft tissues: preliminary ex vivo results[END_REF]. These locations are spread along the lateral direction. Regular ultrasound then captures the axial tissue motion before and after excitation for each lateral position to form the strain image. It is also possible to monitor temporal response of the tissue after excitation by firing a series of ultrasonic imaging signals. However, the sequence of firing limits the frame rate of ARFI imaging and the size of the region of interest. Another limitation is the imaging depth, which is limited in a focused region [START_REF] Zhai | Acoustic radiation force impulse imaging of human prostates: Initial in vivo demonstration[END_REF]. The first in-vivo results of ARFI imaging were presented in [START_REF] Nightingale | Acoustic radiation force impulse imaging: in vivo demonstration of clinical feasibility[END_REF] and demonstrated the feasibility of ARFI imaging for clinical applications. In [START_REF] Fahey | Acoustic radiation force impulse imaging of thermally-and chemically-induced lesions in soft tissues: preliminary ex vivo results[END_REF], thermally-and chemicallyinduced lesions were imaged in ex-vivo tissues. 
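As a purely illustrative numerical reading of Equation (1.35), with assumed textbook-order values for the absorption coefficient, focal intensity and sound speed (none of them reported in this work), the radiation body force can be evaluated as follows:

```python
# Illustrative evaluation of F = 2*u*I/c (Eq. 1.35).
# Assumed example values (not taken from this thesis):
#   absorption coefficient u ~ 40 Np/m, focal intensity I ~ 1000 W/cm^2,
#   longitudinal sound speed c ~ 1540 m/s.
u_abs = 40.0          # Np/m
I = 1000.0 * 1e4      # W/cm^2 converted to W/m^2
c = 1540.0            # m/s

F = 2.0 * u_abs * I / c   # radiation force per unit volume, in N/m^3
print(F)                  # ~5.2e5 N/m^3 at the focus
```

Applied during a fraction of a millisecond, a body force of this order produces the few-micrometer displacements mentioned above.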
In addition, ARFI has been employed for other applications such as assessment of breast lesions [START_REF] Sharma | Acoustic radiation force impulse imaging of in vivo breast masses[END_REF], detection of prostate cancer [START_REF] Zhai | Acoustic radiation force impulse imaging of human prostates: Initial in vivo demonstration[END_REF] and for delineation of Radiofrequency ablation (RFA) [START_REF] Fahey | Acoustic radiation force impulse imaging of myocardial radiofrequency ablation: initial in vivo results[END_REF]. 1.2. ELASTOGRAPHY: STATE-OF-THE-ART Transient elastography Transient elastography or pulsed elastography is a technique where an ultrasound transducer probe is mounted on the axis of a vibrator (see Figure 1.15). Vibrations of mild amplitude and low frequency (arround 50 Hz) are transmitted from the vibrator to the tissues via the transducer. This induces an elastic shear wave that propagates through the tissues. In the meantime, pulse-echo ultrasound acquisitions allow the propagation of the shear wave to be followed and its velocity to be measured. The stiffness of the tissue is directly related to the velocity of the shear wave propagation: the stiffer the tissue, the faster the shear wave is propagated [START_REF] Sandrin | Transient elastography: a new noninvasive method for assessment of hepatic fibrosis[END_REF]. Supersonic ultrasound elastography Supersonic ultrasound elastography or supersonic shear imaging (SSI) is the most recent technique for ultrasound elastography. This approach uses the same principle as remote palpation to generate mechanical vibrations using ARFI. However, supersonic ultrasound elastography generates the radiation shear waves, allowing to compute an 1.2. ELASTOGRAPHY: STATE-OF-THE-ART elasticity map of the tissue up to 5000 frames/s [START_REF] Bercoff | Supersonic shear imaging: a new technique for soft tissue elasticity mapping[END_REF]. The generation of the supersonic ultrasound elastography requires special equipment for fast generation of the ARFI. After the generation of the pulses with ARFI at supersonic speeds, the system switches to ultrasound imaging, requiring a fast acquisition and processing of the ultrasound information too. The processing of the ultrasound data is based on a cross-correlation analysis that estimates the axial displacement map required to compute the strain map of the tissue. SSI has been used for the assessment of liver fibrosis [START_REF] Chen | Supersonic shearwave elastography in the assessment of liver fibrosis for postoperative patients with biliary atresia[END_REF] with promising results as a noninvasive technique for evaluate liver fibrosis in children. Regarding to breast lesions, SSI has demonstrated the capacity to be considered as a tool for breast cancer diagnosis [START_REF] Athanasiou | Breast lesions: Quantitative elastography with supersonic shear imaging-preliminary results[END_REF]. Comparison of the ultrasound elastography approaches The main approaches of ultrasound elastography can be compared as shown in Table 1.1. In this Table, the elasticity of the tissue can be estimated applying compression to the tissue in three ways: using quasi-static compression, with external vibrator or through ARFI. We also present the comparison of the advantages and limitations for every approach. 
[Table 1.1: Approach | Stress type | Stress source | Advantages | Limitations, comparing the quasi-static, transient/vibration-based and ARFI-based ultrasound elastography approaches.]

The advantages and limitations of the ultrasound elastography approaches presented in Table 1.1 offer us a perspective to select the most convenient method for our robotic system. In order to avoid special equipment (besides the robot), we opt for the quasi-static approach, which is compatible with all ultrasound probes. The main limitation of this approach is the necessity of an operator holding the ultrasound probe to perform the quasi-static compression. However, this limitation can be overcome by having a robot perform this task with precision. The following section presents a brief overview of medical robotic systems involving ultrasound elastography.

Robotic-assisted systems for elastography

Very few investigations have been undertaken regarding the use of ultrasound elastography in robot-assisted procedures. These works are related to the field of minimally invasive surgery (laparoscopy). For example, a snake-like robot was presented in [START_REF] Sen | Enabling technologies for natural orifice transluminal endoscopic surgery (N.O.T.E.S) using robotically guided elasticity imaging[END_REF], where a micro ultrasound probe attached at the distal part of the robot was used to find hard lesions by palpation motion. This system controls an 11 degrees-of-freedom (DOF) robot in order to perform three types of motion: coarse positioning, fine positioning and palpation motion. The system was tested on a prostate phantom containing stiff regions, and the performance of the palpation system is displayed in Figure 1.18. The da Vinci surgical robot (Intuitive Surgical Inc.) has been used to obtain elastic information of a tissue of interest by controlling the motion of a laparoscopic 2D ultrasound probe [START_REF] Billings | System for robot-assisted real-time laparoscopic ultrasound elastography[END_REF]. This robot-assisted system applies a palpation motion that is mixed with the teleoperated motion of a laparoscopic ultrasound probe. It therefore allows the elastogram of the tissue to be obtained while the surgeon teleoperates the ultrasound probe with the da Vinci robot (see Figure 1.19). In a similar framework [START_REF] Schneider | Remote ultrasound palpation for robotic interventions using absolute elastography[END_REF], a mechanical vibrator placed on the skin of the patient was used to replace the palpation performed through the controlled motion of an ultrasound probe. In ultrasound remote palpation, a robotic system was built to control the contact forces between an ultrasound probe and the tissue, and ARFI was then applied to obtain a measure of the elasticity of the tissue [START_REF] Bell | Force-controlled ultrasound robot for consistent tissue pre-loading: Implications for acoustic radiation force elasticity imaging[END_REF]. The block diagram of this approach is shown in Figure 1.20, where a proportional-integral-derivative (PID) controller was designed to regulate the probe-tissue contact force (Figure 1.20: Force control scheme of the robotic system presented in [START_REF] Bell | Force-controlled ultrasound robot for consistent tissue pre-loading: Implications for acoustic radiation force elasticity imaging[END_REF] for applying the ARFI required for the tissue elasticity measurement). The works mentioned in this section introduced robotic systems to assist in the elastography process. However, none of these robotic systems used the information of the
elastogram as part of the robotic controller, which is one of the main contributions of this thesis and will be presented in Chapter 2. The following section briefly recalls the principle of visual servoing, since it will be used throughout this thesis work.

Visual servoing principle

A brief overview of visual servoing is presented here to explain the basic concepts of this control technique. The reader can find a more detailed explanation of visual servoing in the two-part tutorial presented by François Chaumette and Seth Hutchinson [START_REF] Chaumette | Visual servo control, part I: Basic approaches[END_REF][START_REF] Chaumette | Visual servo control, part II: Advanced approaches[END_REF]. The aim of visual servoing is to regulate to zero an error defined from a vector of visual features s(t) and its desired value s*, as

e(t) = s(t) - s*. (1.36)

Two main configurations related to the visual sensor position are used in visual servoing: the eye-in-hand and eye-to-hand configurations (see Figure 1.22). In the eye-in-hand configuration, the visual sensor is placed on the robot, so its motion is guided by the robot. Alternatively, in the eye-to-hand configuration, the visual sensor is at a fixed remote location and observes the moving robot's end-effector interacting with the scene.

Interaction matrix

The configuration of any robotic mechanism depends on the position of its joints q(t) ∈ R^p at time t, with p the number of joints (see Figure 1.23). The number of joints p is also known as the number of degrees of freedom (DOF) of the robot. Since we will consider a robotic arm holding an ultrasound probe, the following expressions are recalled for the eye-in-hand visual servoing configuration. The pose of the visual sensor r(q, t) can therefore be linked to the joint positions using the forward kinematic model of the robot. In the design of the control scheme, the variation of the robot's end-effector pose ṙ(t) is related to the time variation of the features ṡ(t). This relation is defined as

ṡ = (∂s/∂r) v + ∂s/∂t, (1.37)

where v is the velocity screw vector of the end-effector obtained from the time variation of r, and ∂s/∂t is the variation of the features of the environment through time. This means that in a static environment ∂s/∂t = 0. The remaining term ∂s/∂r is called the interaction matrix, of size k × 6, also defined as

L_s = ∂s/∂r. (1.38)

The interaction matrix links the velocity screw vector of the visual sensor, v, to the variation of the visual features ṡ(t) as

ṡ = L_s v. (1.39)

We consider here the case where the feature variation is only due to the robot displacement, meaning that ∂s/∂t = 0.

Control law

The relation between the time variation of the error ė and the sensor velocity v can be computed using equations (1.36) and (1.39) as follows:

ė = L_s v. (1.40)

Since the goal of visual servoing is to minimize e, the variation of the error is usually specified as an exponential decrease,

ė = -λe, (1.41)

where λ > 0 is the gain of the control law. This gain can be set as a constant value or as a variable (e.g., an adaptive gain) depending on the current error value [START_REF] Kermorgant | Dealing with constraints in sensor-based robot control[END_REF], such as

λ(‖e‖) = (λ_0 - λ_∞) exp(-(λ'_0 / (λ_0 - λ_∞)) ‖e‖) + λ_∞, (1.42)

where λ_0 = λ(0) and λ_∞ are the gains for the smallest and highest values of ‖e‖, respectively, and λ'_0 is the slope of the gain at ‖e‖ = 0. We can now compute the velocity control law to be applied to the robot by using equations (1.40) and (1.41), to obtain

v = -λ L_s^+ e, (1.43)
where L_s^+ ∈ R^{6×k} is the Moore-Penrose pseudoinverse of L_s, defined by L_s^+ = (L_s^T L_s)^{-1} L_s^T when L_s is of full rank. However, if L_s is square (k = 6) and det(L_s) ≠ 0, then it is possible to invert L_s, giving the velocity control law v = -λ L_s^{-1} e. We should notice that the interaction matrix L_s cannot be known perfectly on a real system, and an approximated value L̂_s is usually considered. To ensure the asymptotic stability of the system using L̂_s, the condition L_s L̂_s^+ > 0 must hold, as demonstrated in [START_REF] Chaumette | Visual servo control, part I: Basic approaches[END_REF]. Therefore, the control law becomes

v = -λ L̂_s^+ e. (1.44)

Conclusion

This chapter has introduced the principles of ultrasound imaging, ultrasound elastography and visual servoing. First, we explained the process used to reconstruct an ultrasound image from the acquired information on the ultrasound propagation through the tissue. This introduction provides the reader with the basic concepts of medical ultrasound imaging, which are important for the understanding of this thesis. The main focus of this chapter is elastography, presented in Section 1.2.1, where the most widely used approaches for this process were introduced. Magnetic resonance elastography (MRE), computed tomography elastography (CTE) and optical coherence tomography elastography (OCTE) require expensive and large equipment. On the other hand, ultrasound elastography (USE) requires small and less expensive equipment which is already present in most medical facilities. The different methods used in USE were also presented in Section 1.2.5, where the principle and state of the art of every approach were provided. The comparison of these techniques was presented in Section 1.2.5.5, which led us to select the classic quasi-static approach for our robotic framework that will be detailed in the following chapters. The quasi-static approach was chosen due to its compatibility with most ultrasound systems, requiring no additional devices nor special ultrasound transducers. All the concepts presented in this chapter are widely used in the next chapters. Chapter 2 will present a novel approach to build a robotic-assisted elastography system. This approach not only helps in the generation of the tissue elasticity map, but it also exploits the elastic information in a visual servoing task.

CHAPTER 2 AUTOMATIC PALPATION FOR ULTRASOUND ELASTOGRAPHY

In the previous chapter we presented the background of elastography and its first usage in robotic applications. Strain information has been used to localize malignant tissue based on its elasticity, which is not possible with common b-mode ultrasound images. This chapter focuses on the development of a quasi-static elastography approach that ensures compatibility with widespread conventional medical ultrasound systems. The major contribution presented in this chapter is the development of a robotic assistant palpation system that autonomously provides tissue elasticity information based on a quasi-static elastography estimation process. Our assistant robotic system is composed of a robotic arm that holds and controls an ultrasound probe. Its goal consists in continuously applying a palpation motion on the tissues while maintaining the visibility of a stiff tissue of interest in the ultrasound image. To achieve these assistance functionalities, we propose to perform three hierarchical robotic tasks that collaborate together. Before detailing these tasks, the generic visual-servoing structure of Equation (1.44), on which each of them builds, is illustrated below.
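As a minimal, hedged sketch (written in Python/NumPy purely for illustration; the system described in this thesis is implemented in C++ with ViSP), the snippet below performs one iteration of the law v = -λ L̂_s^+ e together with the adaptive gain of Equation (1.42). The interaction matrix, feature values and gain settings are arbitrary example values, not quantities taken from this work.

```python
import numpy as np

def adaptive_gain(err_norm, lam0=4.0, lam_inf=0.4, lam0_slope=30.0):
    """Adaptive gain of Eq. (1.42): lam0 when the error is zero, lam_inf far away."""
    return (lam0 - lam_inf) * np.exp(-(lam0_slope / (lam0 - lam_inf)) * err_norm) + lam_inf

def velocity_command(L_hat, s, s_star):
    """One visual-servoing iteration: v = -lambda * pinv(L_hat) @ e  (Eq. 1.44)."""
    e = s - s_star                         # error, Eq. (1.36)
    lam = adaptive_gain(np.linalg.norm(e))
    return -lam * np.linalg.pinv(L_hat) @ e

# Arbitrary example: k = 2 features controlled with a 6-DOF velocity screw.
L_hat = np.array([[-1.0, 0.0,  0.0, 0.0,  0.0, 0.02],
                  [ 0.0, 0.0, -1.0, 0.01, 0.0, 0.0 ]])
s, s_star = np.array([0.03, -0.01]), np.zeros(2)

v = velocity_command(L_hat, s, s_star)     # [vx, vy, vz, wx, wy, wz]
print(v)
```

Each of the three tasks below follows this same pattern; only the feature vector, its interaction matrix and the desired behaviour of the error change.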
The first task consists in automatically applying a periodical compression motion to the tissues with the ultrasound probe in order to obtain the pre-and post-compression states of the tissues. The secondary task is based on a visual servoing control scheme that uses directly the strain information as visual feature to automatically maintain a selected tissue of interest in the field of view of the ultrasound probe. To the best of our knowledge, it is the first time the elastography strain information is used as input of a robot controller. The third proposed task is the automatic orientation of the ultrasound probe, allowing the user to explore the surrounding area of the target tissue. This chapter is structured as follows. In Section 2.1, the experimental setup used for 2.1. EXPERIMENTAL SETUP the experiments presented in this chapter and Chapter 3 is first detailed. Then, in Section 2.2, the process to obtain the elastogram by applying periodical soft tissue deformation by a force control is described. Section 2.3 develops the visual servoing approach we propose to automatically align the tissue of interest with the center of the FOV of the ultrasound probe. The automatic orientation of the ultrasound probe is presented in Section 2.4, and the fusion of the three control tasks is detailed in Section 2.5. Experimental results obtained with the robotic system are presented in Section 2.1 for the use of 2D and 3D ultrasound probes interacting with different kinds of phantoms. Experimental setup This section presents the setup and the equipment used for the experiments. First, we explain the workflow presented in Figure 2.1, and then we present all the components used to build the setup. In Figure 2.1, we illustrate the connections between the elements of our experimental setup. We have a robot equipped with a force/torque sensor and an ultrasound probe attached to the end-effector. Force sensor data is sent to a workstation, where all the algorithms are implemented. The robot is connected bidirectionally to the workstation to send its status and to receive motion velocity commands. Ultrasound equipment We used the ultrasound station SonixTOUCH (BK Ultrasound, MA) as shown in Figure 2.3a. This diagnostic ultrasound system is packed with an ultrasound research The probe used for the experiments is the 4DC7-3/40 (see Figure 2.3b) which is a convex 3D probe. This probe has a motor to orientate a 2D curvilinear transducer array. The motor, with a radius of 2.72 cm, makes the probe suitable to acquire 2D and 3D ultrasonic data. The curvilinear array of the transducer has a frequency range between 7 and 3 MHz, a focal depth range between 5 and 24 cm, and an image field of view of 78°. The motor sweeping maximum angle range is 75°. Phantoms Automatic palpation We define as "autonomous palpation" the robotic task and image processing needed to automatically compute a strain map. It is performed in real-time by applying periodic compression motion to the tissue with an ultrasound probe attached to the end-effector of the 6-DOF Viper manipulator robot. The Figure 2.5 shows the location of the Cartesian frames we considered in this study, where the force sensor and the robot end-effector are positioned at frames F f and F e , respectively. We define two frames attached to the mechanical part holding the ultrasound probe that is plugged to the robot end-effector: first, the frame at the gravity center of mass F g , second, the frame at the probe first contact point F cp . 
We define one frame F U S at the center of the image acquired by the ultrasound probe. All these frames are defined using the metric system (meter and radian) and we consider in the following of this manuscript that both intrinsic and extrinsic parameters of the ultrasound probe have been calibrated using a method like the ones presented in [START_REF] Lasso | Plus: Opensource toolkit for ultrasound-guided intervention systems[END_REF]. Force control In order to obtain the pre-and post-compression states of the tissue, we propose to apply a varying force along the axial direction (y cp ) of the ultrasound probe at the contact frame F cp by using a force control scheme. We denote the velocity screw vector of the probe at the frame F cp , as v = [v x v y v z ω x ω y ω z ] ⊤ . The first three components of v correspond to the translational velocities, and the last three elements to the angular velocities. The 6-axes force/torque sensor provides a force tensor measurement H f f in the force sensor frame F f . To measure the interaction force between the probe and the tissue expressed in the contact frame F cp , we have to consider the probe mass m p in order to compensate the gravity force tensor H g g = [0 0 9.81m p 0 0 0] ⊤ defined at F g . The force tensor applied to the tissue can then be expressed in the frame F cp as follows: H cp cp = F cp f H f f -F f g H g g (2.1) where F f g and F cp f are force twist transformation matrices from the gravity frame F g to the frame F f and from the frame F f to the frame F cp , respectively. The force twist transformation matrix is used to transform the force/torque vector expressed at a frame F b into a frame F a and it is defined by a 6×6 matrix: Since our goal is to control only the force component along the y-axis (axial direction) of the probe, we define the feature vector to be regulated as F a b = R a b 0 3×3 t a b × R a b R a b (2.2) s f = [0 1 0 0 0 0] H cp cp . (2.3) In order to apply a continuous compression motion, we propose to implement the following desired varying force that is based on a sinusoidal function: F d (k) = ∆ F 2 sin (4k -T )π 2T + 1 + F 0 , (2.4) where k is the discrete time and ∆ F is the amplitude of the sinusoidal function as shown in Figure 2.6. T is the period of the desired force signal expressed in sample time and F 0 is the initial desired force value. In order to apply this varying force along the y-axis, we define the desired force as s * f = F d (k) and the force error to minimize as e f = s f -s * f . An exponential decrease of e f is achieved by imposing the desired error variation of the error such as ė * f = -λ f e f with λ f being the force control gain. To generate a velocity control law that minimizes e f , we need to express the interaction matrix L f which relates the variation of the force feature to the probe velocity tensor such as ṡ f = L f v. In this work we consider an approximation of the interaction matrix L f = [0 K 0 0 0 0], where K is a coarse estimation of the contact stiffness between the probe and the tissue. The force control law is then obtained by applying the following velocity to the ultrasound probe: v f = L + f ( ė * f + ṡ * f ), (2.5) where the operator "+" represents the Moore-Penrose pseudo-inverse defined as L + f = (L ⊤ f L f ) -1 L ⊤ f when L f is full rank. ṡ * f (k) = ds * f (k) dk is the differential of the desired force variation. 
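The palpation force task can be sketched as follows (Python/NumPy, illustrative only; the actual controller runs in C++ at 200 Hz, and the stiffness estimate, gain and period below are assumed values). It generates the desired sinusoidal force of Equation (2.4) and converts the force error into an axial probe velocity using Equation (2.5) with the feed-forward term dropped, a simplification discussed next:

```python
import numpy as np

# Assumed example parameters (not the experimental values):
F0, dF, T = 5.0, 2.0, 200     # initial force [N], amplitude [N], period [samples]
K = 300.0                     # coarse contact stiffness estimate [N/m]
lam_f = 0.8                   # force control gain

def desired_force(k):
    """Sinusoidal desired force of Eq. (2.4)."""
    return 0.5 * dF * (np.sin((4 * k - T) * np.pi / (2 * T)) + 1.0) + F0

def axial_velocity(f_measured, k):
    """Probe velocity from the force error: v_f = L_f^+ * (-lam_f * e_f),
    with L_f = [0 K 0 0 0 0], so the pseudo-inverse acts as 1/K on the y-axis."""
    e_f = f_measured - desired_force(k)
    v = np.zeros(6)
    v[1] = -(lam_f / K) * e_f     # translation along the probe axial direction
    return v

# One illustrative control step: measured force of 4.2 N at sample k = 50.
print(desired_force(50), axial_velocity(4.2, 50))
```

The resulting velocity, expressed at the contact frame, is then transformed to the end-effector frame and converted into joint velocities as described below.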
If we analyze Equation (2.5), the term ṡ * f (k) can be neglected in this work, since our goal is just to obtain a sinusoidal variation of the force. Simulated outputs of the force controller are presented in Figure 2.7 to show the temporal evolution of the resulting force with and without considering the term ṡ * f (k) in the control law. Since a perfect phase between the desired and measured force is not necessary for the palpation motion task, we simplified the control law as: v f = L + f ė * f . (2.6) The velocity v f is applied in the contact point frame F cp . However, in practice the velocity must be expressed at the robot's end-effector frame F e to be applied. Thus, the velocity at F e is formulated as follows, where V e cp is a 6×6 velocity twist transformation matrix defined in Equation (2.8). v e = V e cp v f (2.7) V e cp = R e cp t e cp × R e cp 0 3×3 R e cp (2.8) The velocity v e can finally be applied using the robot's velocity kinematics model as follows, q = J + e v e (2.9) where q is the join velocity control vector and J e is the Jacobian of the robot estimated through its kinematics. The remaining of the manuscript assumes that Equations (2.7)-(2.9) are always applied after expressing any control law at the frame F cp . Elastogram estimation Force control gives us the mechanical compression required for the elastography. The process that estimates the elastogram is illustrated in The elastogram is generated using a method based on motion estimation and strain filtering. First, we detail the motion estimation process for the 2D and 3D cases, and then the filter used to estimate the strain map. 2D Motion estimation The essential part of the elastography method is the displacement estimation of the RF signals from pre-to post-compression states. This can be computed using motion estimation. We propose to use a subpixel motion estimation approach [START_REF] Chan | Subpixel motion estimation without interpolation[END_REF] with the purpose of achieving real time elastography imaging capability. Motion estimation is divided in two steps: integer displacement estimation and sub-displacement estimation. Integer displacement estimation is obtained with the block matching algorithm (BMA). Figure 2.9 shows the parameters used for this approach. Let us define two arrays of RF signals, the RF ROI in pre-compression as f (i , j ) and post-compression as g (i , j ), where i is the scan-line index and j is the sample index of the RF scan line. BMA 2.2. AUTOMATIC PALPATION divides the RF frame in blocks of M × N size (B 1 and B 2 for f (i , j ) and g (i , j ) respectively). Then, the displacement for each block in the current frame f is estimated with respect to the next one, g . The search of the best match is performed over a region of size (2N -1)×(2M -1) (as the one shown in Figure 2.9 in orange). The search region size can be changed to optimize the computational cost. The best match of the i -th block over the search region can be found using common similarity measures (MSE, SAD, ZNCC, etc.). In our case, the sum of absolute differences (SAD) was selected due to its low computational cost. Therefore, the minimization is computed as follow, (u 0 , v 0 ) = arg min 1 M N M -1 m=0 N -1 n=0 |B 1 (m + u 0 , n + v 0 ) -B 2 (m, n)| (2.10) where B 1 ∈ f and B 2 ∈ g are blocks (matrices) of M ×N size. The u 0 and v 0 are the integer displacements corresponding to lateral and axial displacements, respectively. 
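A minimal sketch of this integer block-matching step (Eq. 2.10) is given below, written in Python/NumPy for clarity (the thesis implementation is in C++); the block size, search range and the synthetic speckle frames are illustrative assumptions:

```python
import numpy as np

def sad_block_match(f_pre, g_post, top, left, M, N, search):
    """Integer displacement (u0, v0) of the MxN block of f_pre at (top, left),
    found by minimizing the sum of absolute differences (SAD, Eq. 2.10)
    over a +/- search window in g_post."""
    block = f_pre[top:top + M, left:left + N]
    best, best_uv = np.inf, (0, 0)
    for u0 in range(-search, search + 1):          # lateral candidate shift
        for v0 in range(-search, search + 1):      # axial candidate shift
            r, c = top + u0, left + v0
            if r < 0 or c < 0 or r + M > g_post.shape[0] or c + N > g_post.shape[1]:
                continue
            cand = g_post[r:r + M, c:c + N]
            sad = np.mean(np.abs(block - cand))
            if sad < best:
                best, best_uv = sad, (u0, v0)
    return best_uv

# Illustrative check on synthetic speckle shifted axially by 3 samples.
rng = np.random.default_rng(2)
f_pre = rng.standard_normal((128, 64))
g_post = np.roll(f_pre, shift=3, axis=1)
print(sad_block_match(f_pre, g_post, top=40, left=20, M=16, N=16, search=5))  # (0, 3)
```

The sub-sample refinement described next adds an optical-flow correction (δ_u, δ_v) on top of this integer estimate.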
Unfortunately, BMA only estimates integer displacements, and we need to estimate the sub-displacements to obtain an accurate displacement map. Approaches for sub-displacement estimation are based on parabolic interpolation, cosine interpolation, optical flow (OF) or splines, to name a few. In our work, we use OF as the basis for sub-displacement estimation. The OF between the blocks B_1 and B_2 is estimated by solving the following linear system:

[ Σ_{m,n} (∂B_1/∂i)²           Σ_{m,n} (∂B_1/∂i)(∂B_1/∂j) ;
  Σ_{m,n} (∂B_1/∂i)(∂B_1/∂j)   Σ_{m,n} (∂B_1/∂j)²          ] [δ_u ; δ_v] = [ Σ_{m,n} (B_2 - B_1)(∂B_1/∂i) ; Σ_{m,n} (B_2 - B_1)(∂B_1/∂j) ]. (2.11)

Once we have obtained the sub-displacements (δ_u, δ_v), we can compute more accurate displacements as

u = (i_c - u_0) + δ_u, (2.12)
v = (j_c - v_0) + δ_v. (2.13)

After obtaining the displacements for every block, their axial and lateral components are stored in two arrays V_0(i, j) and U_0(i, j), respectively. The block overlapping can be tuned to achieve good results. For example, a real-time displacement map was obtained using zero overlapping in [START_REF] Zhou | A motion estimation refinement framework for real-time tissue axial strain estimation with freehand ultrasound[END_REF]. This produced good results when a 2% compression between the pre- and post-compression states was applied. In our case, we consider a 25% overlapping to obtain a finer displacement map while allowing real-time processing capability. The displacement maps, U_0(i, j) and V_0(i, j) for the lateral and axial displacements respectively, are the outputs of this process (see Figure 2.11). Since these maps have a different size than the RF ROI due to the overlapping, we apply a bilinear interpolation to obtain the full-sized maps U(i, j) and V(i, j) of the considered ROI.

Strain filtering

Strain values depend directly on the motion estimation. The basis of the strain tensors is the partial derivative of the directional displacement map with respect to the axis of the tensor component. In the specific case of elastography, the compression is performed in the axial direction. The axial strain values can then be computed as

ε_jj = ∂v/∂j, (2.14)

where ∂v/∂j indicates the variation of the axial displacement v along the j-direction. Additionally, to obtain a better quality elastogram, we use the LSQ strain filter proposed by Kallel and Ophir [START_REF] Kallel | A least-squares strain estimator for elastography[END_REF]:

h(n) = ξ(n) ([1, 2, ..., n] - ((n + 1)/2) [1, 1, ..., 1]), (2.15)

where ξ(n) = 12 / (n(n² - 1)) and n is the number of samples in the interval Δj, as shown in Figure 2.12. As demonstrated in [START_REF] Kallel | A least-squares strain estimator for elastography[END_REF], the convolution of this filter h(n) with the axial component of the motion estimation V(i, j) generates a smooth strain map ε(i, j). The strain map ε(i, j) provides the elastic information inside the ROI. However, the strain is a measure that depends on the constant stress applied to the tissue. This stress is applied with the ultrasound probe through the force controller presented in Section 2.2.1.

Evaluation of the estimated elastogram using a ground truth from finite element model (FEM) simulation

The process to obtain the elastogram should be evaluated against a ground truth model. Therefore, we propose to evaluate the strain map estimation based on finite element analysis (FEA). The physical phenomenon of strain is expressed using partial differential equations.
Solving these equations for any shape using analytical formulation is challenging. However, FEA is a numerical methodology to approximate the solution to these partial differential equations. The principle of FEA is to divide a rigid body into finite elements using a mesh and compute a solution for every element. The accuracy of the solution improves with the increment of the number of elements, but the computational cost also increases. FEA is used in many engineering applications to design and test mechanical structures under several boundary conditions. There is a wide variety of software for FEA, from open-source to paid license. In our case, we use COMSOL Multiphysics 5.0 (COMSOL, Inc.) to obtain the ground truth of the strain map. To estimate the elastogram as in section 2.2.2, we need the pre-and postcompressed RF frames. We obtain those RF frames using an ultrasound simulator called Field II [START_REF] Jensen | Computer phantoms for simulating ultrasound B-mode and CFM images[END_REF]. This software can simulate the data acquired by a virtual ultrasound system based on an accurate ultrasound wave propagation model. We first defined a virtual stiff tissue target by randomly positioning scatterers in a geometrical mesh that represents a virtual organ at the pre-compression state as illustrated in Figure 2.13. Then we simulated tissue deformation by applying displacements computed from the FEA to the Once the models are defined for the FEM, we need to apply the compression. In our work, we fixed the edge of the base and define a Neumann boundary condition, which specifies the normal derivative of the function on a surface. A 2% compression was applied and we obtained the results reported in Figure 2.16 with the image of axial displacements presented in the first row and the strain image in the second row. The value of the displacements d ∈ R 2 obtained by FEM are then applied to the scatterers position. We define the scatterers position at pre-compression frame as p s , and the scatterers position at post-compression as p ′ s . We use each frame as input in FIELD II to obtain its RF frames. Figure 2.17 shows the output of the three models described previously, where the pre-and post-compression frames are displayed in the first and second row, respectively. The output is shown in b-mode. The output of FIELD II gives the RF data, which is the input to the processes described in Section 2.2.2.1 and Section 2.2.2.2. These processes were implemented in Qt with C++, and the library used to compute the FFT (Fast Fourier Transform) in the Extension for 3D elastogram estimation We have defined how to compute the elastogram in 2D. Now, in this section, we use an ultrasound probe able to obtain volumetric data. For this case, the ultrasound probe motor is enabled to move, with a specific angle step, as opposed to the 2D case where the motor position is fixed. In this case, one volume of RF signals can be seen as a set of N f RF frames in 2D. These frames are acquired during one directional sweep of the motor as shown in Figure 2.19. If we estimate the 3D elastogram with the same principle as the 2D case, then we need to wait for the acquisition of two complete RF volumes, one for each state: preand post-compression. However, the computational cost of the motion estimation in process. First, we store the RF volume for one state V r (pre-or post-compress). 
Afterwards, the 2D elastograms are computed online for every motor position in the current motor sweep, where we also grab the frames in the RF volume V c . At the end of the current motor sweep we obtain the complete 3D elastogram, defined as, V s (i , j , k) = ε k (i , j ) (2.16) where i and j are the indexes of the ultrasound scanline and the sample in the scanline, respectively. ε k is the strain map in 2-D estimated at the k-frame pair of V r and V c in the interval [k 0 , k n-1 ] with k 0 and k n-1 as the indexes of the initial and final frame pair in the volumes V r and V c , respectively. Automatic centering of a stiff tissue in ultrasound image In the last section, we presented how to estimate the elastogram by automatically moving an ultrasound probe with a robot. Now, in this section, we define a robotic task to automatically align the center of the probe with the stiffest tissue in a ROI. This process can assist the examiner by always maintaining the visibility of the target tissue during a medical procedure. We propose to use visual servo control for this robotic task by considering visual features extracted from the estimated strain image directly as inputs of the control scheme. In order to automatically center a stiff object at the middle of the full image by visual servoing, we propose to isolate the biggest rigid region in the elastogram and use its barycenter coordinates as the visual features. The method we propose to extract these features consists first in segmenting the biggest stiff region from the elastogram and then computing the coordinates of its centroid as described next. Stiff tissue segmentation First, we propose to generate an image I g (i , j ) by filtering the strain map with a Gaussian function as: I g (i , j ) = e -ε(i , j ) 2 e ε 2 max (2.17) 53 AUTOMATIC CENTERING OF A STIFF TISSUE IN ULTRASOUND IMAGE where ε max = max(|ε(i , j )|) and I g (i , j ) ∈ [0, 1]. The aim of this filter is to enhance the intensity of the rigid objects and to decrease the intensity of the rest of the area. The segmentation is obtained by applying a binarization on the image I g (i , j ) such as, I u (i , j ) = 1 if I g (i , j ) ≥ Γ 0 otherwise (2.18) where Γ is the threshold value. In practice, the value of Γ = 0.5 was used since it provides optimal results. A similar approach can be used for the segmentation of a 3D strain volume. In this case, we generate first a filtered volume V g (i , j , k) ∈ [0, 1] as, V g (i , j , k) = e -V s (i , j ,k) 2 e V 2 smax , (2.19) where V s max = max(|V s |). The segmentation of the volume V g is then computed as, V u (i , j , k) = 1 if V g (i , j , k) ≥ Γ 0 otherwise (2.20) Centroid estimation Once we have isolated all the stiff regions, we need to know which is the biggest one by labeling all the regions. For this process, we can use a connected components algorithm [START_REF] Samet | Efficient component labeling of images of arbitrary dimension represented by linear bintrees[END_REF], which is a graph theory-based approach to label different regions. In Figure 2.21, we show an example of the labeling of different regions. We represent a binary image with a tree (e.g., a quadtree for a 2D case) to search the adjacent nodes and classify them in one region. Every node is defined as a pixel in the 2D case, or as a voxel in the 3D case. 
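The segmentation and centroid extraction described in this section can be sketched as follows (Python with NumPy and SciPy, illustrative only; the thesis implementation is in C++/ViSP). The synthetic strain map is an assumption, and the exact exponential remapping used for Equation (2.17) is taken here as exp(-(ε/ε_max)²), which is one plausible reading of that equation:

```python
import numpy as np
from scipy import ndimage

def stiff_centroid(strain, gamma=0.5):
    """Centroid (i_c, j_c) of the largest stiff region of a 2D strain map."""
    # Remap the strain so that low-strain (stiff) areas get values close to 1
    # (assumed form of Eq. 2.17).
    eps_max = np.max(np.abs(strain))
    I_g = np.exp(-(strain / eps_max) ** 2)
    # Binarization, Eq. (2.18)
    I_u = I_g >= gamma
    # Connected-component labeling and largest-region selection
    labels, n = ndimage.label(I_u)
    if n == 0:
        return None
    sizes = ndimage.sum(I_u, labels, index=range(1, n + 1))
    mask = labels == (1 + int(np.argmax(sizes)))
    # Centroid from zeroth and first order moments, Eqs. (2.22)-(2.24)
    i_idx, j_idx = np.nonzero(mask)
    A = mask.sum()                              # M00
    return i_idx.sum() / A, j_idx.sum() / A     # (M10/M00, M01/M00)

# Illustrative strain map: homogeneous strain with a low-strain (stiff) disc.
ii, jj = np.mgrid[0:128, 0:128]
strain = 0.02 * np.ones((128, 128))
strain[(ii - 80) ** 2 + (jj - 50) ** 2 < 15 ** 2] = 0.002

print(stiff_centroid(strain))    # approximately (80.0, 50.0)
```

In 3D the same pipeline applies voxel-wise, following Equations (2.19)-(2.20) and (2.25)-(2.29).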
The biggest region is the region having the bigger area (or volume in the 3D case) of the labeled regions, and its centroid is computed from the image moments as proposed in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. The general definition of the image-based moments in 2D is given by where the integration is done using all the elements in the RF plane Ω. The area A and the centroid (i c , j c ) are computed as follows, M mn = Ω I u (i , j )i m j n d i d j , (2.21) A = M 00 (2.22) i c = M 10 M 00 (2.23) j c = M 01 M 00 (2.24) In a similar way for the 3D case, the moments in 3D are defined by, M mno = Ψ V u (i , j , k)i m j n k o d xd yd z, (2.25) and the integration is done using all the elements in the RF volume Ψ. The volume ν and the centroid (i c , j c , k c ) are estimated as, ν = M 000 (2.26) i c = M 100 M 000 (2.27) j c = M 010 M 000 (2.28) k c = M 001 M 000 (2.29) The area (volume for 3D case) of every labeled region is used to find the biggest one. Then, the centroid of the biggest region is obtained through the image moments as previously defined. Figure 2.22 shows the different steps of the centroid computation in a 2D case. c x = -α L i r 0 + i c - w p 2 (2.30) c y = α A j r 0 + j c - h p 2 (2.31) where w p in RF lines and h p in RF samples are the width and height of the RF frame. The coordinate (i r 0 , j r 0 ) is the initial point (top-left corner) of the ROI with respect to the RF frame. The constants α L and α A are the scale values to express RF in metric units for RF lines and samples, respectively. We should mention that α L and α A were defined in Section 1.1.2. Due to the geometry of the convex probe, the centroid (c x , c y ) is distorted, and it requires scan conversion to the real metric centroid (x c , y c ). This scan conversion is estimated through Equations (1.14) and (1.15). To center the ultrasound probe with the stiffest tissue centroid, we need to minimize x c to zero. In a 2D case, to achieve this robotic task we are controlling in-plane motions. This means that we can control only 3-DOF of the ultrasound probe, v x , v y and ω z . Then, we define the measure of the lateral component of the centroid as our measures in the control loop, s t = x c , and our desired centroid position as s * t = 0 corresponding to the horizontal component of the center of the ultrasound image. The relation of the variation of s t with respect to the ultrasound probe velocity v is defined as, ṡt = L t v (2.32) where L t is the interaction matrix that relates the probe velocity and the measure variation. The value of L t can be found through the Varignon's formula using the relation between the centroid variation with respect to the probe's velocity as,    ṡt ẏc 0    =    ẋc ẏc 0    = -    v x v y v z    -    ω x ω y ω z    ×    x c y c 0    . ( 2 .33) As we mentioned before, we want to control only 3-DOF (v x , v y and ω z ). Then, the remaining velocities are zero, and we compute the interaction matrix L t = [-1 0 0 0 0 y c ]. The goal of this control task is to minimize the error e t = s t -s * t with an exponential decrease of e t . Therefore, the desired variation of the error is defined as ė * t = -λ t e t with λ t being the centering control gain. 
Then, the control law for automatic horizontal centering is defined as, 57 AUTOMATIC CENTERING OF A STIFF TISSUE IN ULTRASOUND IMAGE v t = v = L + t ė * t (2.34) Automatic centering in 3D Figure 2.24: Axes for the frame F p (frame of the ultrasound probe). Lateral and top views of the probe in contact with a virtual human torso are shown in left and right images, respectively. A target, in green, is placed inside of the torso to relate the two views. Similar to automatic centering in 2D, the goal is to center the stiffest object on the plane X -Z of the full volume (see Figure 2.24). First, we need to convert the value of the centroid in RF units to the metric coordinates. To do this, as we use a convex ultrasound probe, we perform a scan conversion of each point inside the RF volume to the Cartesian coordinates (see Figure 2.25), s(i , j , k) → p(X , Y , Z ), in order to obtain the metric location with respect to a Cartesian frame. We define the scan conversion using the ultrasound probe parameters as described in [START_REF] Lee | Intensity-based visual servoing for non-rigid motion compensation of soft tissue structures due to physiological motion using 4D ultrasound[END_REF]. In our case, RF data is considered instead of prescan images. We recall the scan conversion formulation as, The quasi-spherical coordinates are computed in function of the RF coordinates as, r = v s f s j + r p (2.38) φ = -0.5α l (N f -1) + α l i (2.39) θ = -0.5η(N f -1 -2k) (2.40) where v s is the speed of the sound (1540 m/s), f s is the sampled frequency (40 MHz), α l is the angle between neighboring scanlines and η is the angle of the field of view (FOV) of the motor in the ultrasound probe for a motor angular step. All these parameters are given by the ultrasound system and probe specifications. Using the scan conversion defined above, we can compute the metric value of any point in the RF volume with respect to the probe frame F p . We calculate the centroid inside of a VOI, which means that the centroid coordinate in the full RF volume is expressed as, i cm = i r 0 + i c + w p 2 (2.41) j cm = j r 0 + j c + h p 2 (2.42) k cm = k r 0 + k c + d p 2 (2.43) where (i r 0 , j r 0 , k r 0 ) is the initial point of the VOI (top-left-front corner) in RF units. w p , h p and d p are the dimensions of the VOI in RF units. Then, using equations (2.35) to (2.40) for (i cm , j cm , k cm ), we obtain (X c , Y c , Z c ) in metric units. Keeping a target in the FOV of the volume of analysis is a task which requires the displacement of the ultrasound probe on the X -Z plane (see Figure 2.24). The centroid of the target is defined as (X c , Y c , Z c ), but in this case we only use the values of X c and Z c . This means that the centroid's coordinates with respect to F p are the same with respect to the frame F cp . Then, similar to the 2D case, we define a visual feature as s t = PROBE ORIENTATION [X c Z c ] ⊤ and the desired feature vector to reach the centering of the object of interest in the probe FOV is directly s * t = 0 2×1 . The error is defined as e t = s t -s * t . Similar to the 2D case, an exponential decrease of the error can be obtained by defining the desired error variation as ė * t = -λ t e t with λ t being the target-probe centering control gain. Using the Varignon's formula, we determine the relation between the probe velocity v and the variation of the retained features as, Ẋc Żc = -1 0 0 0 -Z c Y c 0 0 -1 -Y c X c 0 v. 
(2.44) The Equation (2.44) can be written as ṡt = L t v, where L t is the interaction matrix related to s t . Then, the control law for the target-probe centering can be expressed as v t = v = L + t ė * t (2.45) Probe orientation Probe orientation is the third proposed task in our approach. This task offers to the user the capability to explore the surrounding area of the target tissue with an automatic orientation of the ultrasound probe. In the following, we detail this robotic task in both 2D and 3D cases. 2D Probe orientation CHAPTER 2. AUTOMATIC PALPATION FOR ULTRASOUND ELASTOGRAPHY The aim of this task is to automatically orient the probe to a desired angle s * θ (in the image plane) from the current angle of the probe s θ = θθ i ni t . θ i ni t is the angle of the initial probe orientation and θ is the angle measured during the probe orientation control (see Figure 2.26). Both angles are obtained by the odometry measures of the robot. The variation of the angle feature s θ due to the probe velocity is defined as: ṡθ = L θ v (2.46) where L θ = 0 0 0 0 0 -1 is the interaction matrix related to the variation of s θ . The angle error is defined as e θ = s θ -s * θ , and the desired angle error variation as ė * θ = -λ θ e θ with λ θ being the probe orientation control gain. Therefore, the control law for the orientation of the probe is defined as, The error to be minimized is defined as e θ = s θ -s * θ , and the desired exponential error decrease can be achieved by the desired error variation expressed as ė * θ = -λ θ e θ where λ θ is the orientation control gain. v θ = L + θ ė * θ (2.47) 3D Probe orientation As in the previous 2D case, we determine the interaction matrix that relates the feature vector variation, ṡθ , with the probe's velocity v as, L θ =    0 3×3 -1 0 0 0 -1 0 0 0 -1    (2.48) The control law for the probe orientation is then provided by, v θ = L + θ ė * θ (2.49) Control fusion The system that we propose requires the three control tasks presented in this chapter. However, if we analyze the interaction matrices, we can observe a coupling between the automatic centering and the probe orientation controls. This means that these two tasks are disturbing each other. We can deal with this through the redundancy control framework [START_REF] Siciliano | A general framework for managing multiple tasks in highly redundant robotic systems[END_REF], where a hierarchical method for the i -th control task ( ėi , L i ) is proposed as, v 0 =0 v i =v i -1 + (L i P i -1 ) + ( ėi -L i v i -1 ) (2. 50) where P i -1 is the projection operator onto the null-space of (L 1 , . . . , L i -1 ), and it is defined as, P 0 =I P i =P i -1 -L + i L i (2.51) This formulation allows us to establish the control tasks priorities, giving to the ith task a lower priority with respect to the previous i -1 task so it does not disturb it. Considering this hierarchical approach, we assign to the palpation task by force control e f the highest priority, since it is needed for the elastogram estimation process. Then, the automatic horizontal centering e t and the automatic probe orientation e θ are set as the second and third priorities, respectively. We can express these tasks using the redundancy control framework. First we designated the task e f with the highest priority such that, v 1 = v f = L + f ė f . 
(2.52) Then, the projector of v 1 on the next task is defined as, P 1 = I 6 -L + f L f =            1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1            . ( 2 .53) We can observe in P 1 that the first task, force control, constrains the next tasks on the translation along the y-axis. Then, projecting the matrix L t onto the null space of L f (ker(L f )) we obtain, L t P 1 = L t . (2.54) This means that the first and second tasks are decoupled, and the product L t v f = 0. Then, the secondary task can be defined as, v 2 = L + t ėt , (2.55) where L + t is the Moore-Penrose pseudoinverse of L t that we analytically calculated for the 2D and 3D cases as, 2D case: L + t = C 1 -1 0 0 0 0 Y c ⊤ , (2.56) 3D case: L + t = C 1 C 2 -(X 2 c +Y 2 c +1) 0 -(X c Z c ) -(X c Y c Z c ) -Z c (Y 2 c +1) Y 2 c (X 2 c +Y 2 c +1) -(X c Z c ) 0 -(Y 2 c +Z 2 c +1) -Y c (Y 2 c +Z 2 c +1) X c (Y 2 c +1) X c Y c Z c ⊤ , (2.57) where C 1 = 1 1+Y 2 c and C 2 = 1 X 2 c +Y 2 c +Z 2 c +1 . The projector of this task on ker(L f ) and ker(L t ) can be defined for both cases as, P 2 = P 1 -L + t L t , (2.58) producing the projectors that constrain the motion for the third task on the x-and ytranslations for the 2D case, and for the 3D case x-, y-and z-translations. The third task, probe orientation, can be expressed as, 2.5. CONTROL FUSION v 3 = (L θ P 2 ) + ( ėθ -L θ (v f + v 2 )). (2.59) Finally, we obtain the control law that fuses the three hierarchical tasks with the following expression: v = v 1 + v 2 + v 3 . (2.60) The behavior of this control law can be compared with another approach, where there are not task priorities, which can be defined by a simple interaction matrices stacking as: 2D case: v =    -1 0 0 0 0 Y c 0 K 0 0 0 0 0 0 0 0 0 -1    +    ėt ė f ėθ    (2.61) 3D case: v =            -1 0 0 0 -Z c Y c 0 K 0 0 0 0 0 0 -1 -Y c X c 0 0 0 0 -1 0 0 0 0 0 0 -1 0 0 0 0 0 0 -1            +       ėt x ė f ėt z ėθ       (2.62) In the 3D case, ėt x and ėt z are the first and second row of ėt , respectively. We can observe Equation (2.61) (2D case) that the first and third rows of the interaction matrices stacking, representing the second and third tasks, respectively, are disturbing each other. Similarly for the 3D case in Equation (2.62), we can notice this disturbance between the same tasks, the second task and third tasks represented in the first row and third to sixth rows, respectively. This issue represents a coupling between the second and third task which can cause instability in the control law when using equations (2.61) and (2.62). However, we can see that the approach presented in Equation (2.60) can deal with the coupling of the second and third tasks offering us stability in the control system. In order to demonstrate the difference between the control laws presented with and without the hierarchical approach, we performed a simulation with the initial and desired features shown in Table 2.3. This table presents the parameters of the controller for the 2D and 3D cases. The initial values are set for all the parameters. However, we changed at t =2s the desired probe orientation in order to observe the behavior of the control laws due to the modification of the third task. The evolution of the probe velocities is shown in Figure 2.28. Modality v x v y v z ω x ω y ω z Figure 2 .28: Plots of the probe control velocities. First and second rows are for the 2D and 3D cases, respectively. 
The performance of the three tasks are displayed using a global interaction matrix at the left, and using the redundancy control framework at the right. The gray strip highlights the time segment where we can observe the difference between the two approaches. EXPERIMENTAL RESULTS We can observe in Figure 2.28 the differences between the two approaches that occur during the time periods that are highlighted with the gray strips. As expected, the hierarchical control framework allows to obtain a perfect decoupling of the three tasks. On the other hand, the use of the interaction matrices stacking, presented in equations (2.61) and (2.62), shows the coupling between the second and third tasks as we expected. For this reason, we will retain the hierarchical approach in the next of our work. Experimental results We present here the experimental results obtained by applying the control framework that combines the three robotic tasks. We consider both the 2D and 3D cases that correspond respectively to the use of a 2D or 3D ultrasound probe. In the first part of this section, we present the results and performance obtained for the 2D case by considering in the setup the abdominal phantom that was presented in Figure 2.4a. Then, in the second part of this section, we present the results obtained with the implementation of our control approach to the 3D case by the use of a 3D US probe interacting with a homemade phantom. For all experiments, the acquisition of RF data was implemented using a server-client TCP/IP communication in a local network. We used as server the SonixTouch ultrasound scanner, and as client a Linux workstation (Intel Xeon CPU @2.1 GHz) that performs all the imaging process, control law computation and communication with the robot. The RF data from the server is sent to the client at the rate of 24 FPS (frames per second). In the 3D case, each 3D image (volume) is composed of 31 RF 2D frames which result in a volume acquisition rate of 0.77 VPS (volumes per second). On the client side, we developed a multi-thread software application in C++ based on ViSP [START_REF] Marchand | ViSP for visual servoing: a generic software platform with a wide class of robot control skills[END_REF], VTK [START_REF] Schroeder | Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 4th Edition[END_REF] and Qt [START_REF] Eng | Qt5 C++ GUI Programming Cookbook[END_REF] libraries. It provides a graphical user interface (GUI) to activate and supervise the proposed functionalities of the elastography robotic system. The ViSP library was used to perform all the computation process related to the elastogram estimation, control law, and robot communication. The elastogram display and application GUI were implemented using VTK and QT. The details of the implementation are presented for every case. Results of the 2D case implementation The multi-thread software application presented in Figure 2.29 was developed for the implementation of the 2D case. The RF frame is created as a shared pointer 1 , which is shared with three threads (RFtoBMode, Acquisition and Elastography). The RF frame is updated every time the Acquisition thread acquires the RF data from the client-server communication, previously described. Then, the RF frame is converted to b-mode image to be displayed in the main thread containing a GUI. The same RF frame is stored by the Elastography thread such that the first RF frame is defined as the pre-compress and the next one as the post-compress state. 
After the second frame is sent to the Elastography thread, each incoming RF frame is defined as the post-compression state, and the previous post-compression RF frame is shifted to the pre-compression state. In this thread the elastogram is computed once a ROI is selected by the user in the GUI. The elastogram in the ROI is then overlaid on the b-mode image. The robot is activated in the GUI to perform the palpation motion, and the automatic centering of the stiffest tissue in the center of the FOV of the US probe is also enabled through the GUI. The centroid required for this task is computed in the main thread using the current elastogram. The processing time of the elastography algorithm was 20 ms, corresponding to 50 FPS, over a ROI of 50% of the RF frame size. It is therefore compatible with the time constraint of a real-time robotic control scheme.

1 Address of the shared memory that allows data exchange between the different threads.

Experimental results using a training abdominal phantom

For the next experiments we used the setup presented in Section 2.1 with the 3D convex ultrasound probe in 2D imaging mode. The force control law was performed at a higher frequency than the other tasks (200 Hz). The visual control was performed with the same period as the image capture (24 FPS). The experiments were performed on the ABDFAN ultrasound examination training model (see Section 2.1) simulating the abdomen of a patient. The phantom manufacturer specifications note the presence of lesions and tumors. Experiments were performed by selecting a ROI including hepatic lesions and pancreatic tumors. In the experiments, the probe was initially positioned above the phantom, without contact, and oriented with an initial angle θ_init and initial force F_0 = 0. Then, to demonstrate the efficiency of the general control law (2.60), we set a desired sinusoidal force signal with F_0 = 5 N and ∆F = 2 N. We selected ∆F = 2 N because it is the minimum force variation required to compute an elastogram, which was found by applying different forces in the finite element analysis. The lowest relative error compared with the ground truth computed by the FEA was 5.3%, which shows that our elastogram is well estimated. The automatic horizontal centering of the ROI is activated once the user selects this area in the graphical interface. We performed several experiments and present here the results obtained from one of them. A set of five desired angles for the probe orientation task is considered: θ_0 = θ_init − 10°, θ_1 = θ_0 + 5°, θ_2 = θ_1 + 5°, θ_3 = θ_2 + 5° and θ_4 = θ_3 + 5°.

Elastogram quality improvement

To improve the elastogram quality, we propose to align and average the different elastograms obtained for all probe orientations. This alignment is performed by a warping function that applies a translation based on the relative centroid position and the relative image rotation between each elastogram of the object of interest (blue region in the images of the third row of Figure 2.31, where dark blue is the lowest strain and dark red is the highest strain). Once we obtain the warped elastograms, we average them to produce a single strain image. Further, in elastography, the contrast-to-noise ratio (CNR_e) is expressed as

CNR_e = 2(µ_s − µ_b)² / (σ_s² + σ_b²) (2.63)

where µ_s and µ_b represent the mean strain values of the stiff and background tissues, and σ_s², σ_b² denote the corresponding strain variances.
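The following short C++ sketch illustrates how CNR_e of Equation (2.63) can be evaluated on a strain image given binary masks for the stiff lesion and the background. It is an illustration only: the strain values and masks below are synthetic, whereas in practice the strain image is the estimated elastogram and the masks come from the selected regions.

```cpp
// Minimal sketch of the CNR_e evaluation of Equation (2.63) on a strain image.
// Illustration only: the strain values and the lesion/background masks below
// are synthetic placeholders.
#include <iostream>
#include <vector>

struct Stats { double mean = 0.0, var = 0.0; };

// Mean and variance of the strain samples selected by a binary mask.
static Stats maskedStats(const std::vector<double>& strain,
                         const std::vector<bool>& mask) {
  Stats s; std::size_t n = 0;
  for (std::size_t i = 0; i < strain.size(); ++i)
    if (mask[i]) { s.mean += strain[i]; ++n; }
  if (n == 0) return s;
  s.mean /= static_cast<double>(n);
  for (std::size_t i = 0; i < strain.size(); ++i)
    if (mask[i]) s.var += (strain[i] - s.mean) * (strain[i] - s.mean);
  s.var /= static_cast<double>(n);
  return s;
}

// CNR_e = 2 (mu_s - mu_b)^2 / (sigma_s^2 + sigma_b^2)   (2.63)
static double cnr(const std::vector<double>& strain,
                  const std::vector<bool>& stiffMask,
                  const std::vector<bool>& backgroundMask) {
  const Stats s = maskedStats(strain, stiffMask);
  const Stats b = maskedStats(strain, backgroundMask);
  return 2.0 * (s.mean - b.mean) * (s.mean - b.mean) / (s.var + b.var);
}

int main() {
  // Toy 1D "strain image": low strain (stiff lesion) in the middle.
  std::vector<double> strain = {0.020, 0.021, 0.019, 0.005, 0.006,
                                0.005, 0.020, 0.022, 0.021, 0.019};
  std::vector<bool> lesion   = {false, false, false, true, true,
                                true,  false, false, false, false};
  std::vector<bool> backgr   = {true, true, true, false, false,
                                false, true, true, true, true};
  std::cout << "CNR_e = " << cnr(strain, lesion, backgr) << std::endl;
  return 0;
}
```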
The value of the CNR e allows to make a decision on accepting or rejecting the presence of a lesion as presented in [START_REF] Varghese | An analysis of elastographic contrast-to-noise ratio[END_REF], and a higher level of CNR e suggests better ability to detect the lesion. Therefore, to evaluate the elastogram quality in our experiment, we compute the CNR e in the strain images for each probe orientation and for their mean as shown in Table 2.4. The highest CNR e is obtained for the image of the mean of the elastograms as expected. Results of the 3D case implementation For the implementation of the 3D process, we developed a multi-thread software application as described in Figure 2.34. A shared pointer related to the memory of the RF volume is continuously updated by the acquisition thread (frame by frame). This shared pointer is read by the RFtoBMode thread (process in charge of converting the RF volume to b-mode volume) and the Elastography thread (process to compute the 3D elastogram in a VOI) once a volume is completed. The Display object, in the main thread, contains the functions to display three orthogonal planes (sagittal, axial, coronal) of the volume (see Figure 2.33) using VTK library (Visualization Toolkit [START_REF] Schroeder | Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 4th Edition[END_REF]). This object also allows the user to select the VOI by displacing the planes to the desired position. We perform a set of experiments on a homemade gelatin phantom containing two duck gizzards (see Figure 2.35), and we present the evolution of one experiment, which is the base for every experiment performed. We set the values of F mi n = 1.5 N, F max = 2.5 N and λ f = 0.002. F mi n and F max were estimated empirically for the gelatin phantom, and they can be adapted for any other kinds of tissue. In this implementation, for the second and third tasks, we used adaptive gains (see Equation (1.42)). The parameters in the adaptive gains in the control law of Equation 2.60 are set as λ t (0 ) = 0.1, λ t (∞) = 0.03 , λt (0) = 0.3, λ θ (0) = 1.5, λ θ (∞) = 0.2 and λθ (0) = 2.3. The experiment begins with an initial probe position where a stiff object of interest is located in the 3D US probe FOV (red point in Figure 2. 35-left). Then, the automatic palpation task with the robot is activated to perform the compression of the tissue. Next, four points delimiting the VOI are chosen using the developed GUI and displayed by small yellow spheres as shown in Figure 2.33. Once the VOI is selected, the 3D elastogram is estimated for every pair of RF volumes. The centroid of the 3D elastogram is computed as we previously described in section 2.3, and it is sent to the automatic centering control task. The probe orientation is always active, and the user can change the desired orientation of the probe any time through the GUI. We show in Figure 2.36 the plots of the evolution of the probe velocities and error of the three tasks for one experiment. We can observe at the beginning of the experiment that the only active velocity is the v y (force control). At time t ≃ 23s the system is paused to select the VOI, and at time t ≃ 72s the process continues. Then, the center of mass of the biggest stiff tissue is computed and the velocities v x and v z applied by the visual Conclusion We have proposed a new approach for automatic palpation in this chapter. We based our methodology on three hierarchical tasks. 
The main task, the compression motion based on the force variation, is required every moment for the elastography system. Then, the secondary task, ultrasound probe centering with a target tissue, performs the visual servoing-based approach to automatically center the probe's FOV with the biggest stiff tissue in a ROI or VOI. The third task, the probe orientation, is used to explore the tissue targeted with the secondary task from different orientations of the probe. In the 2D case, our system was also used to improve the quality of the elastogram. It is based on the elastograms captured at different probe orientations using the third task. This third task will be revisited in chapter 4, where the probe orientation will be controlled with a haptic interface. The control system of the 2D case was designed to control the 3-DOF of the ultrasound probe corresponding to the in-plane motions. We have also presented an extended approach for the use of a 3D US probe allowing our system to perform realtime 3D quantitative ultrasound elastography. It is based on a control scheme similar to the 2D case but controlling the 6-DOF of a 3D motorized ultrasound probe in order to consider the out-of-plane motions. The experimental results demonstrated the feasibility of the proposed concept. The experiments were performed on static tissues which is not the case with the human body. However, next chapter will present an extended dense approach to deal with the physiological motions and also with the control of the out-of-plane motions with a 2D probe. CHAPTER 2. AUTOMATIC PALPATION FOR ULTRASOUND ELASTOGRAPHY RELATED WORKS to the tracking of deformable tissues. Then, the general principle of the visual dense tracking approach is presented in Section 3.2. In the same section, deformable tracking models and their relation with the strain map are presented. Afterwards, in Section 3.3, the motion compensation of the ultrasound probe is introduced. A dense visual servoing based on ultrasound b-mode images is elaborated to compensate the in-plane and outof-plane motions of a targeted tissue of interest. Experimental results obtained on a moving abdominal phantom are presented and discussed in Section 3.4. Related works Image tracking has gained interest over the past 20 years in the medical imageprocessing field. Image tracking is the process of aligning two or more images. Such images can originate from a single imaging modality or from different modalities; they can be taken from different patients to study the same organ, tissue or structure; or they can be obtained from an acquisition through time, where temporal structural changes are analyzed. Image tracking can extract valuable information that can be spread on two or more images. Defining the transformation model that best aligns the structures or tissues of interest present in the images is of the utmost importance. Deformable transformations are capable of managing significant changes of biological structures. Accordingly, deformable image tracking is a fundamental task in medical image processing. Currently, there is a wide variety of techniques developed for medical imaging tracking as presented in [START_REF] Alam | Evaluation of medical image registration techniques based on nature and domain of the transformation[END_REF]. We briefly describe few works related to the tracking method of deformable tissues that will be presented in the next section. 
In heart surgery, the surface of a beating heart was tracked in stereoscopic images using a thin-plate spline (TPS) deformation model [START_REF] Richa | Three-dimensional motion tracking for beating heart surgery using a thin-plate spline deformable model[END_REF] (see Figure 3.1). Similar methodology was presented to track the deformation applied to a soft tissue CHAPTER 3. ROBUST MOTION COMPENSATION phantom from 3D ultrasound images [START_REF] Lee | Intensity-based visual servoing for non-rigid motion compensation of soft tissue structures due to physiological motion using 4D ultrasound[END_REF]. In this work, a 3D non-rigid tracking algorithm based on a thin-plate spline 3D deformation model was considered and a visual servoing scheme was designed to automatically move the ultrasound probe to compensate the rigid motion components of the tissue. Recently, an approach based on a massspring-damper model combined with the dense information contained in a sequence of ultrasound 3D images was proposed to track the deformation of liver tissues [START_REF] Royer | Realtime target tracking of soft tissues in 3d ultrasound images based on robust visual information and mechanical simulation[END_REF]. However, none of these works were applied to obtain the strain information of a moving tissue which is the goal of our work presented in this chapter. Dense visual tracking In Chapter 2, a geometrical-feature-based visual tracking approach was presented, where the centroid of the stiffest tissue in a selected ROI was extracted from the elastogram using a segmentation algorithm. Unlike this previous approach, we propose here to use directly the appearance of the b-mode image to perform a non-rigid visual tracking of the ROI containing the deformable tissue of interest. In addition, the elastogram is estimated exploiting the output of the visual tracking process. The use of intensities, colors and textures instead of geometrical features has been proposed in several works. Visual tracking based on image pixel intensities was introduced by Lucas and Kanade [START_REF] Lucas | An iterative image registration technique with an application to stereo vision[END_REF] as an approach for image matching (registration) in stereo vision. Through the years, image registration has been extended in several approaches. For example in [START_REF] Irani | A unified approach to moving object detection in 2D and 3D scenes[END_REF], a technique for moving object detection was presented. A two-view approach for moving objects detection was introduced in [START_REF] Schindler | Two-view multibody structure-and-motion with outliers through model selection[END_REF]. The common factor of these works that are referenced as dense registration techniques is the use of image templates as visual features. One of these approaches is the photometric visual tracking technique that considers the intensity of all pixels in the image registration process [START_REF] Hager | Efficient region tracking with parametric models of geometry and illumination[END_REF]. However, these dense registration approaches are computationally expensive due to the large quantity of pixels to match. Nevertheless, an efficient optimization for direct image registration has been presented in [START_REF] Benhimane | Real-time image-based tracking of planes using efficient second-order minimization[END_REF][START_REF] Dame | Second order optimization of mutual information for real-time image registration[END_REF]. 
Then, the process to match the pixel position of I c with I t is defined as: p = arg min p N -1 k=0 E (I t (x k ), I c (w (x k , p))) (3.1) where w (x, p) warps the image point coordinates x using the transformation parameters p and E is a similarity function. Similarity measures The similarity function E is the cost function to be optimized by finding the parameters p and it is an essential part of the visual tracking process. E measures the similarity or dissimilarity between two images. We present two similarity metrics that we consider in this thesis. Sum of squared difference (SSD) The sum of squared differences (SSD) measures the difference between the pixel intensities of a template image I t and a current image I c as, SSD(I c , I t ) = N -1 k=0 I t (x k ) -I c (w (x k , p)) 2 (3.2) This function is simple and computationally efficient, which makes it a widely used function in image registration [START_REF] Shi | Good features to track[END_REF]. However, SSD function lacks of robustness to intensity changes and occlusions in the image. These are common occurrences in ultrasound images when the probe is moving as illustrated in Figure 3.2. Sum of conditional variance (SCV) To deal with issues as the ones presented in Figure 3.2, a more robust similarity measure is required. In our work we propose to use the sum of conditional variance (SCV) [START_REF] Richa | Visual tracking using the sum of conditional variance[END_REF], which is robust to global illumination changes and that is expressed as follows: SC V (p) = N -1 k=0 Ît (x k ) -I c (w (x k , p)) 2 (3.3) where Ît is the image intensity adaptation of the template relative to the image intensity conditions in the current warped image I c (w (x, p)). The image template adaptation is performed through the expectation operator E as Ît = E(I c (w (x, p))|I t (x)). Thus, the adaptation of every gray level for the reference I t is, Ît (x) = L-1 i =0 i p I t I c (i , j ) p I t ( j ) (3.4) where L is the maximum gray level of the template image I t and current image I c . p I t is the probability density function of I t and p I t I c is the joint probability density function of I t and I c . These functions are computed as follows: p I t I c (i , j ) = p I t I c (I c (w (x, p)) = i , I t (x) = j ) (3.5) = 1 N N -1 k=0 δ(I c (w (x k , p)) -i )δ(I t (x k ) -j ) p I t (i ) = L-1 j =0 p I t I c (i , j ) (3.6) where δ is a Dirac delta function such as δ(u) = 1 ⇔ u = 0. Optimization The selection of the SCV as a similarity function replaces E in Equation (3.1) as follows: p = arg min p N -1 k=0 Ît (x k ) -I c (w (x k , p)) 2 (3.7) The variation of the intensity values of Ît (x) with respect to the coordinates x is nonlinear. Therefore, the Equation (3.7) is a nonlinear optimization, and we can solve it using some iterative strategies. Nonlinear optimization iteratively updates the values of the parameters p until convergence. 
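Before discussing the optimization strategies, the following minimal C++ sketch shows how the SCV of Equations (3.3)-(3.6) can be evaluated: the template intensities are remapped through the joint intensity distribution of the template and the warped current image, and the squared differences are summed. It is an illustration only: both images are assumed to be plain 8-bit intensity vectors of equal size, i.e. I_c is taken as already warped onto the template grid.

```cpp
// Minimal sketch of the SCV adaptation of Equations (3.3)-(3.6).
// Illustration only: images are 8-bit intensity vectors of equal size.
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

static constexpr int L = 256; // number of gray levels

double scv(const std::vector<std::uint8_t>& It,   // template I_t
           const std::vector<std::uint8_t>& IcW)  // warped current I_c(w(x,p))
{
  const std::size_t N = It.size();

  // Joint distribution p_{ItIc}(i, j), Equation (3.5): i = current level,
  // j = template level.
  std::vector<std::array<double, L>> joint(L, std::array<double, L>{});
  for (std::size_t k = 0; k < N; ++k)
    joint[IcW[k]][It[k]] += 1.0 / static_cast<double>(N);

  // Adapted template levels Ihat_t(j) = sum_i i p(i, j) / p(j),
  // Equations (3.4) and (3.6).
  std::array<double, L> adapted{};
  for (int j = 0; j < L; ++j) {
    double pj = 0.0, num = 0.0;
    for (int i = 0; i < L; ++i) { pj += joint[i][j]; num += i * joint[i][j]; }
    adapted[j] = (pj > 0.0) ? num / pj : static_cast<double>(j);
  }

  // SCV(p) = sum_k (Ihat_t(x_k) - I_c(w(x_k, p)))^2, Equation (3.3).
  double sum = 0.0;
  for (std::size_t k = 0; k < N; ++k) {
    const double d = adapted[It[k]] - static_cast<double>(IcW[k]);
    sum += d * d;
  }
  return sum;
}

int main() {
  // Current image = template plus a global intensity offset: the SCV is zero
  // because the adaptation absorbs the illumination change, unlike the SSD.
  std::vector<std::uint8_t> It  = {10, 20, 30, 40, 50, 60, 70, 80};
  std::vector<std::uint8_t> IcW = {40, 50, 60, 70, 80, 90, 100, 110};
  std::cout << "SCV = " << scv(It, IcW) << std::endl;
  return 0;
}
```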
A good performance of the optimization depends on DENSE VISUAL TRACKING Strategy Warp parameters increment Update rule Forward additional [START_REF] Lucas | An iterative image registration technique with an application to stereo vision[END_REF] ∆p l = arg min ∆p l N -1 k=0 I c l (w (x k , p l + ∆p l )) -Ît (x k ) 2 p l +1 = p l + ∆p l Direct com- positional ∆p l = arg min ∆p l N -1 k=0 I c l (w (w (x k , ∆p l ), p l )) -Ît (x k ) 2 w (x, p l +1 ) = w (w (x, ∆p l ), p l ) Inverse com- positional ∆p l = arg min ∆p l N -1 k=0 I t (w (x k , ∆p l )) -Îc l (w (x k , p l )) 2 w (x, p l +1 ) = w (w -1 (x, ∆p l ), p l ) Table 3.1: Nonlinear optimization strategies. ∆p l is the increment of the parameters at the iteration l . the initialization of parameters. Table 3.1 presents three of the most used strategies to solve the Equation (3.7). Table 3.2 summarizes the advantages and drawbacks of the different optimization strategies presented in Table 3.1. The iterative process of every strategy ends when SC V is minimum or when l (iteration index) has reached N i t (maximum iteration number). In our work, we select the inverse compositional approach due to the advantages of efficiency with respect to the other strategies. This strategy helps us to converge in less number of iterations. Since the nonlinear optimization is solved using an increment CHAPTER 3. ROBUST MOTION COMPENSATION ∆p l , we perform the first order Taylor expansion of I t (w (x k , ∆p l )) as: Strategy I t (w (x k , ∆p l )) = I t (w (x k , 0)) + ∇I t ∂w ∂∆p l ∆p l , (3.8) where ∇I t ∈ R 1×2 is the image gradient of I t defined as ∇I t = ∂I t ∂x ∂I t ∂y . We assume that w (x k , 0) is the identity warp [START_REF] Baker | Lucas-kanade 20 years on: A unifying framework[END_REF], such that w (x k , 0) = x k . Therefore, the SC V (∆p l ) can be defined as: SC V (∆p l ) = N -1 k=0 I t (x k ) + ∇I t ∂w ∂∆p l ∆p l -Îc l (w (x k , p l )) 2 (3.9) The goal of every l -iteration is to minimize the value of SC V (∆p l ), which can be achieved by nullifying the gradient of SC V with respect to ∆p l as: ∂SC V (∆p l ) ∂∆p l = 2 N -1 k=0 ∇I t ∂w ∂∆p l ⊤ I t (x k ) + ∇I t ∂w ∂∆p l ∆p l -Îc l (w (x k , p l )) = 0 (3.10) We can obtain the change of the parameters for every iteration from Equation (3.10) as follows, ∆p l = -J(∆p l ) + Îc l (w (x, p l )) -I t (x) , (3.11) where J(∆p l ) ∈ R N ×N p , defined in Equation (3.12), is the Jacobian matrix with N p as the number of parameters in the warp function w . Îc l (w (x, p l )) and I t (x) are row vectors of N -elements containing every pixel in the images Îc l and I t , respectively. J(∆p l ) = ∇I t ∂w ∂∆p l (3.12) We have defined how to solve iteratively a dense visual tracking problem as presented in the Equation (3.7). This solution is invariant to global illumination changes due to the robustness of the SCV similarity metric. Next, we define the warp function w and how to select it according to the complexity of the image registration problem. Warp transformation The warp function R 2 → R 2 : x ′ = w (x, Rigid transformations Geometric rigid transformations preserve the distance between points. These transformations are based on Euclidean geometry and include rotations and translations. A rigid transformation in 2D can be defined as a homogeneous transformation matrix T ∈ SE (2) 1 : T(ψ, t x , t y ) = R 2×2 (ψ) t 2×1 (t x , t y ) 0 1×2 1 (3.13) where R ∈ SO(2) and t ∈ R2 are the rotation matrix (2 × 2 size) and the translation vector (2 × 1 size) respectively. 
These two elements of the rigid transformation matrix are formally defined as: R(ψ) = cos(ψ) -sin(ψ) sin(ψ) cos(ψ) (3.14) t(t x , t y ) = t x t y (3.15) The rigid transformation, defined in Equation (3.13), is applied to any pixel with coordinates x = [x y] ⊤ to obtain the new pixel coordinates: x ′ 1 = T(ψ, t x , t y ) x 1 (3.16) This leads us to obtain the warp function w to compute the transformed coordinates x ′ = [x ′ y ′ ] ⊤ as, x ′ = w (x, p) = x cos(ψ) -y sin(ψ) + t x x sin(ψ) + y cos(ψ) + t y (3.17) where p = [ψ t x t y ] ⊤ are the parameters of the warp function. CHAPTER 3. ROBUST MOTION COMPENSATION Once we have defined the function w , we can estimate the Jacobian defined in Equation (3.12) as: J(∆p l ) = ∇I t -x sin(ψ) -y cos(ψ) 1 0 x cos(ψ) -y sin(ψ) 0 1 (3.18) where J(∆p l ) ∈ R N ×3 is the Jacobian that estimates the parameters variation in the iterative process to solve the image registration problem (Equation (3.11)) with a rigid transformation. In order to evaluate the performance of every transformation model presented in this chapter, we implemented the visual tracking algorithm in C++ in a Linux notebook (Intel i7 CPU @2.1 GHz). For this evaluation, the convergence conditions are the maximum number of iterations N i t set to 50 or the minimum SC V value set to 1×10 -6 . Then, we acquired a sequence of 500 ultrasound b-mode images during the application of the palpation motion task, presented in Section 2.2, and the presence of lateral in-plane motion also applied with the ultrasound probe. Due to palpation motion, deformations are produced along the image sequence. We tested the rigid transformation with this image sequence using a ROI as shown in Figure 3.4a delineated in green color. This ROI was tracked through the image sequence until the last image as shown in Figure 3.4. The performance of the dense rigid tracking was evaluated with the image absolute error, e abs = e ⊤ e N , (3.19) where e = Îc (w (x, p)) -I t (x). The algorithm converged after 29 iterations and the value of e abs reached 9.70 × 10 -7 . It is well known that the palpation motion introduces deformations that are non-rigid, therefore, it is likely that a rigid transformation will not be sufficient for this visual tracking problem. Non-rigid transformations The compressions required in the elastography generate deformations in the ultrasound images which cannot be approximated using rigid transformations. Therefore, it is necessary to introduce non-rigid transformations as warp functions. These transformations are commonly used when the image template is distorted or deformed. Moreover, nonrigid transformations are classified as linear and non-linear. Linear transformations are used for image distortion, which can come from the change in the normal of a planar object in an image sequence. Image distortions also appear when a planar object is viewed from a different point of view in relation to the camera perspective. The most wellknown linear transformations are the affine and the projective transformations. However, the projective transformation is not suitable for the ultrasound images, since the image is not reconstructed from a perspective model as images from a camera. Nonlinear transformations are used when deformation is applied to the image template in an image sequence. For example, in an image sequence where an elastic material is deformed by an external force, a non-linear transformation should be defined to track the image template through the sequence. 
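To illustrate the iterative registration of Equations (3.11), (3.12), (3.17) and (3.18), the following C++/Eigen sketch recovers a rigid warp between a synthetic template and a shifted copy of it. It is an illustration only: for brevity it uses the plain SSD residual instead of the SCV adaptation, the forward-additive update of Table 3.1 instead of the inverse-compositional one retained in the thesis, and a synthetic Gaussian blob in place of an ultrasound ROI.

```cpp
// Minimal sketch of a Gauss-Newton registration step with the rigid warp of
// Equation (3.17) and the Jacobian of Equation (3.18).  Illustration only:
// synthetic images, SSD residual, forward-additive update.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>

static constexpr int W = 40, H = 40;
using Image = std::vector<double>;

static double bilinear(const Image& I, double x, double y) {
  x = std::min(std::max(x, 0.0), W - 1.001);
  y = std::min(std::max(y, 0.0), H - 1.001);
  const int x0 = static_cast<int>(x), y0 = static_cast<int>(y);
  const double ax = x - x0, ay = y - y0;
  auto at = [&](int u, int v) { return I[v * W + u]; };
  return (1 - ax) * (1 - ay) * at(x0, y0) + ax * (1 - ay) * at(x0 + 1, y0) +
         (1 - ax) * ay * at(x0, y0 + 1) + ax * ay * at(x0 + 1, y0 + 1);
}

// Gaussian blob centred at (cx, cy): stands in for the ultrasound ROI content.
static Image blob(double cx, double cy) {
  Image I(W * H);
  for (int v = 0; v < H; ++v)
    for (int u = 0; u < W; ++u)
      I[v * W + u] =
          std::exp(-((u - cx) * (u - cx) + (v - cy) * (v - cy)) / 30.0);
  return I;
}

int main() {
  const Image It = blob(20.0, 20.0);   // template I_t
  const Image Ic = blob(21.5, 19.0);   // current image: shifted template
  Eigen::Vector3d p(0.0, 0.0, 0.0);    // p = (psi, t_x, t_y), Equation (3.17)

  for (int iter = 0; iter < 30; ++iter) {
    const double c = std::cos(p(0)), s = std::sin(p(0));
    Eigen::Matrix3d JtJ = Eigen::Matrix3d::Zero();
    Eigen::Vector3d Jtr = Eigen::Vector3d::Zero();
    for (int v = 2; v < H - 2; ++v)
      for (int u = 2; u < W - 2; ++u) {
        // Rigid warp w(x, p), Equation (3.17), and intensity residual.
        const double xw = u * c - v * s + p(1);
        const double yw = u * s + v * c + p(2);
        const double r = bilinear(Ic, xw, yw) - It[v * W + u];
        // Template gradient (central differences) and Jacobian row, Eq. (3.18).
        const double gx = 0.5 * (It[v * W + u + 1] - It[v * W + u - 1]);
        const double gy = 0.5 * (It[(v + 1) * W + u] - It[(v - 1) * W + u]);
        Eigen::Vector3d J;
        J << gx * (-u * s - v * c) + gy * (u * c - v * s), gx, gy;
        JtJ += J * J.transpose();
        Jtr += J * r;
      }
    Eigen::Vector3d dp = JtJ.ldlt().solve(Jtr);   // Equation (3.11)
    dp = -dp;
    p += dp;
    if (dp.norm() < 1e-6) break;
  }
  // Expected roughly psi ~ 0, t_x ~ 1.5, t_y ~ -1.0 for this synthetic pair.
  std::cout << "estimated (psi, tx, ty) = " << p.transpose() << std::endl;
  return 0;
}
```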
Next, we describe the most common linear transformation, the affine transform. After, we detail the free-form deformation (FFD) and the TPS as non-linear transformations for image deformation. For every transformation, the warping function and the Jacobian (required for the visual tracking process) are formulated. In order to present a comparative of the image registration process, we use the same ultrasound image sequence presented for the rigid tracking case. This image sequence was acquired with our experimental setup by applying deformation on the phantom with the ultrasound probe. Affine The main characteristic of an affine transformation is that it preserves parallels lines in the image after being applied. This transformation combines the rigid transformation motions, along with scale and shear in a set of six parameters p. Basically, four parameters of p modify rotation, scale and shear of a pixel coordinate x = [x y] ⊤ . The remaining two parameters are directly related to the translation of x. An affine transformation can be define as a matrix as follows, A(p) = p 0 p 1 p 2 p 3 p 4 p 5 (3.20) where p = [p 0 p 1 p 2 p 3 p 4 p 5 ] ⊤ is the vector containing the parameters of the affine transformation with N p = 6. The affine transformation A is applied to any pixel coordinate x to obtain the transformed coordinate x ′ . If we use the augmented version of A expressed in homogeneous coordinates, we can then compute the transformed coordinate as: x ′ 1 =    p 0 p 1 p 2 p 3 p 4 p 5 0 0 1    x 1 . (3.21) The warp function w that maps x → x ′ using the affine transformation is then expressed as, x ′ = w (x, p) = xp 0 + y p 1 + p 2 xp 3 + y p 4 + p 5 (3.22) in order to be adapted to the image registration process. The resulted Jacobian of the Equation (3.12) can be then expressed as: J(∆p l ) = ∇I t x y 1 0 0 0 0 0 0 x y 1 (3.23) In the dense affine registration process, the Jacobian J(∆p l ) ∈ R N ×6 allows to estimate the parameters variation in the iterative process to solve the image registration (Equation (3.11)). We show in Figure 3.5 an example of the performance of the dense registration process using the affine model. For comparative purposes of the warp functions performance, we use the same image sequence as the one used for rigid registration. In this case, the image absolute error after 31 iterations was 9.23 × 10 -7 which is better than the error obtained with the rigid transformation. Free-Form deformation Free-Form deformation (FFD) is a common technique in computer graphics and animation design. The main concept of FFD relies on the use of hierarchical transformations to deform an object [START_REF] Barr | Global and local deformations of solid primitives[END_REF]. The transformations include twisting, bending, tapering and stretching of the object. A most generalized approach for FFD was presented in [START_REF] Sederberg | Free-form deformation of solid geometric models[END_REF] allowing to apply global and local deformations to surfaces of any degree (e.g., plane, quadric or parametric). This generalization was performed by using Bernstein polynomials to design the spline functions. In the context of image registration, FFD has been used to match breast MRI images where deformations are present [START_REF] Rueckert | Nonrigid registration using free-form deformations: application to breast mr images[END_REF]. 
More recently, image matching with FFD has also been improved in terms of computational cost [START_REF] Brunet | Feature-driven direct non-rigid image registration[END_REF] using an intuitive feature-driven framework. FFD is one of the most common transformation models in medical imaging, where a rectangular grid of N_p = N_px × N_py control points is placed on the template image. The displacement of the control points deforms the image using products of univariate splines. The deformation of the image using FFD is obtained by applying the warping function:

w(x, p) = Σ_{j=1}^{N_py} Σ_{i=1}^{N_px} p_k B_i(x) B_j(y) (3.24)

where p_k ∈ R^{1×2} is the k-th control point, with index number k = (j−1)N_px + i, and B_i is the basis function of the cubic B-splines:

B_i(x) = { B_1(x) = x̄³/6                        if x ∈ [k_i, k_i+1]
           B_2(x) = (−3x̄³ + 3x̄² + 3x̄ + 1)/6     if x ∈ [k_i+1, k_i+2]
           B_3(x) = (3x̄³ − 6x̄² + 4)/6           if x ∈ [k_i+2, k_i+3]
           B_4(x) = (−x̄³ + 3x̄² − 3x̄ + 1)/6      if x ∈ [k_i+3, k_i+4]
           0                                     otherwise } (3.25)

with x̄ = (x − k_l)/δ for x ∈ [k_l, k_l+1] and δ = ∥k_l+1 − k_l∥, where k_l is the l-th knot of the control grid. A generalization of Equation (3.24) can be expressed as

w(x, p) = ψ^⊤ P (3.26)

where ψ ∈ R^{N_p} and P ∈ R^{N_p×2} are the vectors of the basis functions and of the control points respectively:

ψ^⊤ = [B_1(x)B_1(y) . . . B_{N_px}(x)B_1(y) . . . B_{N_px}(x)B_{N_py}(y)] (3.27)
P = [p_1 . . . p_{N_p}] (3.28)

One big advantage of Equation (3.26) is that the vector of basis functions can be precomputed for the pixel coordinate x. This reduces the computational cost of the warping function: the deformation of the current image then depends only on the variation of the parameters p. The computational cost nevertheless increases with a large number of control points. As for the previous transformations, we define the Jacobian of the warp function required for the image registration process. It is expressed as

J_FFD = ∂w/∂∆p = [ψ^⊤  0_{1×N_p} ; 0_{1×N_p}  ψ^⊤] (3.29)

The Jacobian J_FFD ∈ R^{2×2N_p} allows us to solve iteratively the image registration process of Equation (3.11) using the FFD warp function. We show in Figure 3.6 the performance of the dense visual tracking system using the FFD transformation applied to the same image sequence as the previous transformations. In this case a 5×5 grid of control points was used. The image absolute error after 50 iterations was 4.69×10⁻⁷, which is a better result than with the previous transformation models. This improvement is due to the FFD transformation fitting the image template better to the deformation present in the image.

Thin-plate splines

This nonlinear transformation was suggested for image registration [START_REF] Goshtasby | Registration of images with geometric distortions[END_REF], and it is based on the analogy of how a thin metallic plate is deflected by normal forces applied at discrete points. Thin-plate splines (TPS) have also been used for dense image tracking in [START_REF] Delabarre | Dense non-rigid visual tracking with a robust similarity function[END_REF] and [START_REF] Brunet | Feature-driven direct non-rigid image registration[END_REF]. The TPS warping function is a combination of an affine transformation and deformation parameters (control points):

w(x, p) = [a_0 a_1 ; a_3 a_4] x + [a_2 ; a_5] + Σ_{k=0}^{N_c−1} [κ_x^k ; κ_y^k] φ(d(x, c_k)) (3.30)

where N_c is the number of control points c, and κ_x^k and κ_y^k are the weights of the k-th control point along the x and y axes respectively.
These weights represent the force amplitude applied at the control point position. φ is the thin-plate kernel defined as, φ(x) = x 2 l og (x) 2 and (3.31) d (x , y) is the euclidean distance between the points x and y. The parameter vector of the warping function p of dimension 2N c +6 is expressed as: p ⊤ = a 0 a 1 a 2 a 3 a 4 a 5 κ ⊤ x κ ⊤ y (3.32) where the first six parameters are the parameters of the affine transformation. κ x and κ y are vectors (N c elements) containing the weights κ k x and κ k y respectively: κ ⊤ x = κ 0 x . . . κ N c -1 x (3.33) κ ⊤ y = κ 0 y . . . κ N c -1 y (3.34) In the registration process, the positions of the control points c are initially distributed in an equidistant grid inside the image. Then, the values of the forces applied at every control point are changed to match a deformed image with a template image. We illustrate in Figure 3.7 how the TPS warp function can be adapted to deform the template image. Image registration with TPS is commonly performed by changing the parameters p to adjust the current image, in an image sequence, with the image template. The optimization requires the Jacobian of w , as we shown in the previous transformations, which can be obtained as: ∂w ∂∆p = J A J κ (3.35) where: J A = x y 1 0 0 0 0 0 0 x y 1 ∈ R 2×6 , J κ = φ 0 • • • 0 0 • • • 0 φ ∈ R 2×2N c , φ = φ(d (x, c 0 )) • • • φ(d (x, c N c -1 )) , x and y are the pixel coordinates in the image I t . The Jacobian J(∆p l ) ∈ R N ×(6+2N c ) can be obtained through Equation (3.12). This Jacobian is employed to solve the image registration system of the Equation (3.11) using the TPS transformation. We show in Figure 3.8 the performance of this process using the same example as the one tested with the previous transformations. This figure shows the results of the dense tracking process using the TPS transformation with 5×5 control points. The image absolute error after 21 iterations was 1.33 × 10 -7 . This result is better than the result obtained by the FFD transformation. In addition, TPS reaches the convergence in the image registration process in less number of iterations than the FFD image registration. The performance of the registration process using the different warp functions provides a perspective to select the best function for our application. Due to the mechanism of the elastography using the compression of the tissue, the ultrasound image tends to present deformation. The non-linear warp functions perform better than the other functions under deformations. Therefore, from our performance comparison study of the different tested approaches, summarized in Table 3.3, we propose to choose the TPS as the warp function in our visual tracking system due to its faster convergence and minimum absolute error. The computational time is also a major factor considered in our decision, since our system requires real-time visual tracking capability. Table 3.3: Performance evaluation of the transformations in the visual tracking system. Strain estimation based on optical flow Strain estimation is a process that depends on the motion estimation of the elements contained in the ROI to generate an elastogram. From the TPS registration, we can obtain the displacement maps U (x, y) and V (x, y) (lateral and axial directions respectively). Let us define x ′ = w (x, p) as the corresponding coordinates of x after the tissue deformation. For every x we have a displacement vector D(x) = x ′ -x. 
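The following short C++ sketch evaluates the TPS warp of Equation (3.30) at one pixel and returns the resulting displacement D(x). It is an illustration only: the 3×3 control-point grid, the affine parameters and the weights are arbitrary, and the kernel is written as the standard thin-plate function φ(r) = r² log(r) in place of Equation (3.31).

```cpp
// Minimal sketch of evaluating the TPS warp of Equation (3.30) at a pixel:
// an affine part plus a weighted sum of the thin-plate kernel evaluated at the
// distances to the control points.  Illustration only: grid, affine parameters
// and weights are arbitrary placeholders.
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y; };

static double tpsKernel(double r) {
  return (r > 0.0) ? r * r * std::log(r) : 0.0;
}

// w(x, p): affine coefficients a0..a5 plus per-control-point weights (kx, ky).
static Point tpsWarp(const Point& x,
                     const double a[6],
                     const std::vector<Point>& c,        // control points
                     const std::vector<Point>& kappa) {  // weights (kx, ky)
  Point out{a[0] * x.x + a[1] * x.y + a[2],
            a[3] * x.x + a[4] * x.y + a[5]};
  for (std::size_t k = 0; k < c.size(); ++k) {
    const double r = std::hypot(x.x - c[k].x, x.y - c[k].y);
    const double phi = tpsKernel(r);
    out.x += kappa[k].x * phi;
    out.y += kappa[k].y * phi;
  }
  return out;
}

int main() {
  // 3x3 control-point grid over a 100x100 ROI, identity affine part.
  std::vector<Point> c, kappa;
  for (int j = 0; j < 3; ++j)
    for (int i = 0; i < 3; ++i) {
      c.push_back({i * 50.0, j * 50.0});
      kappa.push_back({0.0, 0.0});
    }
  kappa[4] = {0.0, 2e-3};                 // push the centre control point axially
  const double a[6] = {1, 0, 0, 0, 1, 0};

  const Point x{40.0, 55.0};
  const Point xw = tpsWarp(x, a, c, kappa);
  // The displacement D(x) = x' - x feeds the U and V maps used for the strain.
  std::cout << "D(x) = (" << xw.x - x.x << ", " << xw.y - x.y << ")\n";
  return 0;
}
```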
Then, we can obtain U (x) and V (x) as the lateral and axial components of the displacement vector D(x) (see Figure 3.9). Assuming that we have at least a grid of 3 × 3 control points, then the elastogram ε(x, y) can be computed as in Section 2.2.2.2 by convolving a least-squared (LSQ) strain filter with the axial displacement map V (x, y). Therefore, the elastogram ε(x, y) can be computed using the information from the deformable registration. Motion compensation We have developed a method to track a template image (in our case the ROI) when the tissue is deformed. Since the elastography process requires the compression of the tissue, deformable registration is essential to track the ROI. However, some of the physiological motions cause the ROI to go outside of the image plane when using a 2D probe. This leads to a failure of the image tracking. Therefore, to solve this issue, we propose to use a control system that uses prior information of parallel images planes to the plane containing the ROI. This control system helps us to fully control the 6-DOF of the ultrasound probe in order to always maintain the visibility of the ROI. Preserving the position of the ROI stable even when the tissue is moving is essential for the right estimation of an elastogram. Moving tissue can cause motions not only inplane but also out-of-plane as emphasized in [START_REF] Krupa | Real-time tissue tracking with b-mode ultrasound using speckle and visual servoing[END_REF] (see Figure 3.11). This tissue motion can be compensated by controlling the motion of a 2D US probe using the intensitybased ultrasound visual servoing presented in [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF][START_REF] Nadeau | Intensity-based ultrasound visual servoing: Modeling and validation with 2-D and 3-D probes[END_REF]. 6-DOF motion compensation by dense visual servoing We propose to use a similar visual servoing approach to the one presented in [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF] to control a robotic arm holding the probe in order to automatically compensate the relative non-axial motions between the probe and the moving tissue of interest to analyze. Nonaxial motions correspond to one lateral and one rotational motions in the US plane and one lateral and two rotational motions out of the US plane (see Figure 3.11). In [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF], a visual servoing method that uses the intensity information of the pixels inside a ROI has demonstrated the feasibility to control the 6 DOF of a 2D ultrasound probe for compensating both in-plane and out-of-plane rigid motions. However, in this previous work the tissue was assumed rigid without considering deformation due to internal physiological motion or the presence of mechanical compression. To deal with the soft tissue deformations, we propose to improve the method of [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF] by using the non-rigid motion estimation algorithm presented in Section 3.2.5.3 We briefly recall the principle of the ultrasound dense visual servoing approach [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF]. The aim is to control the probe velocity v expressed in the frame F cp (see Figure 2.5). 
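Before detailing this control scheme, the least-squares (LSQ) strain filter mentioned above for the elastogram computation can be summarized with a short sketch: the axial strain at each sample is the slope of a line fitted to the axial displacement V over a small window, which amounts to convolving V with a fixed LSQ kernel. It is an illustration only: a 1D column of a synthetic displacement map and an arbitrary window length are used.

```cpp
// Minimal sketch of the LSQ strain filter applied to one column of the axial
// displacement map V.  Illustration only: synthetic uniform-strain data.
#include <iostream>
#include <vector>

// Strain = dV/dy estimated by linear regression over 2*half+1 samples.
static std::vector<double> lsqStrain(const std::vector<double>& V, int half) {
  const int n = static_cast<int>(V.size());
  std::vector<double> eps(n, 0.0);
  double denom = 0.0;
  for (int j = -half; j <= half; ++j) denom += j * j;   // sum of j^2
  for (int i = half; i < n - half; ++i) {
    double num = 0.0;
    for (int j = -half; j <= half; ++j) num += j * V[i + j];
    eps[i] = num / denom;                               // LSQ slope
  }
  return eps;
}

int main() {
  // Synthetic axial displacement: uniform 1% strain, i.e. V grows linearly.
  std::vector<double> V(64);
  for (std::size_t i = 0; i < V.size(); ++i)
    V[i] = 0.01 * static_cast<double>(i);
  const std::vector<double> eps = lsqStrain(V, 3);
  std::cout << "strain at mid-depth = " << eps[32] << std::endl; // ~0.01
  return 0;
}
```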
The visual features vector, s, used in this control scheme contains directly the intensities of the pixels inside a ROI such as: s I = (I 1,1 , . . . , I M ,N ) (3.36) where I u,v is the intensity in gray level for the 2D pixel coordinates (u, v) in the US image. The interaction matrix L I u,v ∈ R 1×6 that relates the variation of the pixel intensity to the probe velocity v, such that I u,v = L I u,v v, is given by: L I u,v = ∇I x ∇I y ∇I z y∇I z -x∇I z (x∇I y -y∇I x ) (3.37) where ∇I u,v = [∇I x ∇I y ∇I z ] corresponds to the 3D image gradient associated with the pixel (u, v). The three components, ∇I x = ∂I u,v ∂x , ∇I y = ∂I u,v ∂y and ∇I z = ∂I u,v ∂z are obtained with 3D derivative filters (see Figure 3.12), as performed in [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF], applied to a thin volume composed of 5 parallel slices captured by moving the probe during an initial procedure before launching the visual servoing. The values of x and y are the metric coordinates of the pixel (u, v) in the image obtained from the intrinsic parameters of the probe: x y = s x (u -u cp ) s y (v -v cp ) (3.38) as e s = s w I -s * I , and we establish the desired visual error variation as ė * s = -λ s e s with λ s being the visual control gain. Unlike [START_REF] Nadeau | Intensity-based direct visual servoing of an ultrasound probe[END_REF], here s w I = I c (w (x, p)) is the current image warped with the TPS warping function using the current parameters p. This major improvement allows the visual servoing approach to be robust to the presence of the non-rigid motion induced by the tissue deformation. The desired pixel intensities vector corresponds to the intensities value of the pixels contained in the ROI of the initial image I t , s * I = (I t 1,1 , . . . , I t m,n ). Then, the control law applied to the probe for performing the automatic motion compensation is provided by: v s = L + s ė * s . (3.40) Control fusion In order to fuse the automatic motion compensation by visual servoing and the force control (presented in Chapter 2 Section 2.2.1), we can define a control law for the probe velocity v using the redundancy control framework (presented in Chapter 2 Section 2.5). We set the force control law as the highest priority task, remaining as in Equation (2.6). Then, the secondary task that corresponds to the visual servoing can be expressed as: v s = (L s P f ) + (ė * s -L s v f ) (3.41) where P f = I-L + f L f is the projector operator onto the null space of L f . I is the identity matrix of size 6. Finally, the general control law that allows to control the 6-DOF of the 2D probe is given from (2.6) and (3.41) as: v = v f + v s (3.42) This control fusion allows to control the 6-DOF of the ultrasound probe in order to automatically compensate the motions and keep the ROI always visible. In the next section we present the experimental results of the proposed approach that makes possible the estimation of the elastogram of a moving tissue. Experimental results We present the results obtained with the same experimental setup proposed in Chapter 2. The images from the scanner were sent to the workstation at a rate of 40 FPS EXPERIMENTAL RESULTS (frames per second). Force control was performed at higher frequency (200 Hz). 
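As a complement to the control scheme of Equations (3.36)-(3.40), the following C++/Eigen sketch builds one interaction-matrix row per pixel from the 3D image gradient and the metric pixel coordinates, and then applies the pseudoinverse control law. It is an illustration only: the gradients, coordinates and intensity errors are synthetic placeholders, and the fusion with the force task of Equations (3.41)-(3.42) is omitted here (it follows the same redundancy framework illustrated in Chapter 2).

```cpp
// Minimal sketch of the intensity-based servoing of Equations (3.36)-(3.40).
// Illustration only: synthetic pixel samples stand in for the ROI.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

struct PixelSample {
  double x, y;        // metric coordinates, Equation (3.38)
  double gx, gy, gz;  // 3D image gradient at the pixel
  double error;       // I_c(w(x,p)) - I_t(x)
};

int main() {
  // A handful of synthetic samples standing in for the ROI pixels.
  std::vector<PixelSample> roi = {
      {0.010, 0.005, 12.0, -3.0, 1.5, 4.0},
      {-0.008, 0.012, -7.0, 9.0, -2.0, -6.0},
      {0.002, -0.015, 3.5, 14.0, 0.8, 2.5},
      {0.014, 0.009, -11.0, 5.0, -1.2, -3.0},
      {-0.012, -0.004, 6.0, -8.0, 2.2, 5.5},
      {0.006, 0.016, 9.0, 2.0, -0.5, -1.0}};

  Eigen::MatrixXd Ls(roi.size(), 6);
  Eigen::VectorXd es(roi.size());
  for (std::size_t k = 0; k < roi.size(); ++k) {
    const PixelSample& p = roi[k];
    // Row L_{I_{u,v}} of Equation (3.37).
    Ls.row(k) << p.gx, p.gy, p.gz,
                 p.y * p.gz, -p.x * p.gz, p.x * p.gy - p.y * p.gx;
    es(k) = p.error;
  }

  const double lambda = 0.5;                      // visual control gain
  const Eigen::VectorXd edotStar = -lambda * es;  // desired error variation
  // v_s = L_s^+ edot*, Equation (3.40).
  const Eigen::VectorXd vs =
      Ls.completeOrthogonalDecomposition().pseudoInverse() * edotStar;
  std::cout << "probe velocity v_s = " << vs.transpose() << std::endl;
  return 0;
}
```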
We developed a C++ software with a graphical user interface (GUI), and we used ViSP [START_REF] Marchand | ViSP for visual servoing: a generic software platform with a wide class of robot control skills[END_REF] for the communication with the robot. The experiments were performed on the ABDFAN ultrasound phantom. Now, we describe the complete process for one experiment. Initially, the probe was positioned above the phantom without contact. Then, through the GUI, we enabled the force control without oscillation (F 0 =5 N and ∆ F =0 N), and we can see that the measured force value reaches the desired force in the strip (light gray background) of the plot in the that the cyst is very visible as opposed to the elastogram presented in Figure 3.15b where a perturbation motion was applied to the phantom without activating the automatic compensation by visual servoing. It is clear that in this case the elastogram estimation is perturbed and can not provide any useful information. Figure 3.15c shows the obtained elastogram when the phantom is moving and the automatic motion compensation by visual servoing is activated. This last test demonstrates the efficiency of our approach since the cyst is very visible and similar to the case where the phantom was motionless. Conclusion Physiological motions are always present in real tissue making it necessary to consider motion compensation in the design of our robotic-assisted system for elastography. We have encountered several challenges to obtain the elastogram of a moving tissue. First, since the elastogram computation depends on the axial motion estimation between the pre-and post-compression states, a large lateral motion causes wrong measurements in the elasticity of the tissue. The process presented in Chapter 2 which includes a block matching algorithm (BMA) and an optical flow (OF) algorithm for motion estimation can deal with small lateral motions introduced to the tissue. However, it does not consider large motion perturbations in the tissue when using a 2D probe, neither it can deal with out-of-plane motions. On the other hand, the system using a 3D probe, also presented in Chapter 2, takes into account in-and out-of-plane motions in the robotic centering task. However, the slow acquisition rate of the volumetric information makes the system fail when fast motions are induced to the tissue. These reasons lead us to the design of a robust approach for motion compensation. In this chapter, we use the 2D ultrasound probe to reach a faster acquisition rate than with the 3D probe, in order to be reactive to fast motion introduced to the tissue. Contrary to the approach presented in Chapter 2, we consider the b-mode image for our visual tracking process instead of the RF signals. This makes the process two times faster, since the visual tracking does not require two acquisitions of RF data. The visual tracking system presented here was tested with several dense approaches considering an image template as our ROI to estimate the elastogram. Rigid and non-rigid transformations were implemented to evaluate the performance of our visual tracking system. This also demonstrated that the sum of conditional variance (SCV) is robust to intensity changes usually observed in ultrasound images. The thin-plate splines (TPS) transformation was selected due to its fast performance with respect to the other transformations evaluated (rotation-translation, affine and free-form deformation transformations). 
TPS transformation uses control points placed inside of the ROI and then computes the displacement of the points due to the deformation of the tissue reflected in the ultrasound image. The use of the control point displacements that are estimated by the proposed non-rigid dense visual tracking allows us to avoid the axial motion estimation from the pre-and post-compressed RF signal. However, the palpation motion task is still required to generate a slight deformation of the tissue along the axial direction. In addition, our approach uses the b-mode image ROI in a dense visual servoing approach that automatically moves the ultrasound probe to compensate the in-plane and out-ofplane motions of a moving tissue using the 2D ultrasound probe. This process considers the information of five parallel images captured at the initial position of the probe in order to estimate the 3D image gradient that is needed in the control law for compensating both the in-plane and out-of-plane tissue motions. Preliminary ex-vivo results have demonstrated the feasibility to estimate the strain map of a moving tissue. CHAPTER 4 TELEOPERATION AND HAPTIC FEEDBACK BASED ON ELASTOGRAM The term teleoperation refers to a process performed at a distance. It comprises a robotic system where a human operator controls a remote robot. Teleoperation has been used in applications where close manual operation is hazardous or where access is limited (e.g., nuclear waste manipulation, underwater exploration). The first teleoperation system was designed to handle nuclear material in 1940 [START_REF] Vertut | Teleoperation and robotics[END_REF]. Since the 1990s the use of teleoperation for medical purposes began to appear along the concept of computed-assisted surgery. The first teleoperated system related to this field was the ZEUS surgical robot (Computer Motion, Inc.) developed in 1995 [START_REF] Kumar | Telesurgery[END_REF]. This system comprises three robotic arms mounted on a table, one holding an endoscope which provides a view of the internal operating field, and the others holding surgical instruments. The robotic arms are controlled by the surgeon through a console. Currently, ZEUS is discontinued and the da Vinci surgical system (Intuitive Surgical, Inc.) is the most widely used robotic system for telesurgery in hospitals. The da Vinci surgical system provides to the surgeon a console that renders force feedback and 3D vision of the internal operating field. The current version of the da Vinci system is equipped with four robotic arms with one holding the camera and the others actuating the surgical instruments. Force feedback is provided to the surgeon via the joysticks of the console when the instruments are in contact with the tissues. Usually, the force feedback is associated with the name of haptic feedback and its rendering is performed at a higher frequency than the visual feedback in order to provide a responsive interface to the user [START_REF] Cavusoglu | Multirate simulation for high fidelity haptic interaction with deformable objects in virtual environments[END_REF]. Such haptic feedback allows the surgeon to increase his perception of the scene and it is therefore a functionality of great interest to facilitate the execution of the intervention. For the same reason, we propose in this chapter to provide a haptic feedback functionality to our robotic palpation system. HAPTIC FEEDBACK We develop hereafter an approach that will provide a force feedback to the examiner from the estimated strain map. 
Indeed, our objective is to give the examiner the abilities of feeling the rigidity of a tissue while visualizing its elastogram with the use of a haptic device. First, the basic concepts of haptic feedback are introduced in Section 4.1. In Section 4.2, the design of the system that translates the elastogram into force feedback is presented. Afterwards, in the same section, a method to remotely control the ultrasound probe held by the robot using a haptic device is described. In Section 4.3, results obtained from experiments performed on the abdominal phantom are presented. Finally, Section 4.4 concludes this chapter. Haptic feedback The sense of touch is used to perceive and explore the world, helping us to identify objects or warning us when touching something dangerous. There are two types of force feedback when making physical contact with an object, kinesthetic and tactile. The kinesthetic feedback is the perception of the internal status of the body. On the other hand, tactile feedback is the response that allows us to feel the material or texture of any object. The combination of these two feedbacks facilitates the adjustment in the configuration of the body to interact with the object. For example, the configuration adopted with a hand to hold something lightweight such as a cotton ball is not the same as the configuration to hold a ceramic mug. In robotics, the haptic feedback is commonly associated with the two types of force feedback used to explore an object by hand. A wide variety of devices have been designed to emulate the kinesthetic and tactile force feedbacks with the aim of feeling virtual objects. Those devices can be easily identified by their structure: the kinesthetic haptic devices are usually grounded while the tactile ones are wearable (see Figure 4.1). Currently, the most common haptic devices are the kinesthetic kind, which can be classified by their configuration: manipulandum, grasp, and exoskeleton (see Figure 4.2). The manipulandum configuration involves all grounded devices with 3 to 6 DOF. Grasp configuration concerns the devices simulating grasping interaction at the user's hand. The exoskeleton configuration is associated with devices adapted to the user's body, providing forces at the joints. In this thesis, we use the Virtuose 6D (Haption S.A.) shown in Figure 4.3. This device is a kinesthetic haptic device with a manipulandum configuration (referred on the next sections as haptic device) with 6 DOF. It also has the capability of applying a 6-DOF force feedback to the user, three translational forces (maximum force of 31N) and three The impedance type devices are the most common force feedback devices (see Figure 4.4). The input of the impedance type device is the motion applied by the user to the kinesthetic haptic device and the output is a force feedback. On the other hand, the admittance type devices have as input the force applied by the user to the kinesthetic haptic device and, as output, the user feels motion (see Figure 4.5). Admittance type devices are not as common as the impedance ones. In our case, the Virtuose 6D haptic device can be configured as both types (impedance or admittance), but we adopted the impedance configuration in order to be compatible with most of the similar haptic devices. ELASTOGRAM In the next section, we present the design of our robotic system to provide haptic force feedback estimated from an elastogram acquired in real-time. 
The system allows the teleoperation of a 6-DOF robot actuating an ultrasound probe (as shown in previous chapters) with a haptic device that is also used to provide force feedback rendered from the elastogram.

Haptic force feedback from elastogram

The use of a haptic device allows us to move a virtual probe within a virtual representation of the environment. The location of the virtual probe can be modified by physically manipulating the haptic device, generating a force feedback. To compute the output force, the algorithm transforms the motion into a force value. For example, if the virtual probe is positioned at the middle of an empty container or box, the force feedback would be null. However, if the virtual probe is displaced to the location of any wall of the container, then the force would be estimated based on the strength of its material. This example gives an idea of how to translate the location of the virtual probe into haptic force feedback. We consider the virtual probe as a virtual region located at the center of the ROI in the ultrasound image. The virtual probe moves according to the haptic device motion, implying that the ROI is also moving. Next, we present the development of a new approach to estimate the force feedback based on the location of the virtual probe. We also present a teleoperation system to position the ultrasound probe according to the motion of the haptic device. Figure 4.6 shows a block diagram of the proposed robotic system that will be developed in the following sections. Two operational modes are presented in this block diagram: impedance haptic control and teleoperation control. The haptic control mode uses the elastogram in a ROI of the ultrasound image to estimate the force feedback that will be applied to the haptic device, as we will describe in Section 4.2.1. The teleoperation control mode applies the motion introduced to the handler of the haptic device to the ultrasound probe, as we will describe in Section 4.2.3.

Force estimation from elastogram

The diagram of the proposed process to estimate the force based on the elastogram is shown in Figure 4.7. The elastogram E of the ROI is first weighted by a Gaussian mask G_m whose components are normalized as

G_m(i, j) = g_m(i, j) / max(G_m) (4.1)

where max(G_m) is a constant value representing the maximum component value of G_m, and

g_m(i, j) = exp(−(i²/(2σ_x²) + j²/(2σ_y²))) / (2πσ_x σ_y) (4.2)

where σ_x and σ_y are the standard deviations for the lateral and axial directions, respectively. In our case, these values are typically set as σ_x = N/4 and σ_y = M/4, aiming to obtain a Gaussian distribution inside a rounded area. The center of the rounded area is located at the center of G_m. Following the diagram of Figure 4.7, the filtering of the elastogram is performed using the expression

E_f = E • G_m (4.3)

where • is the Hadamard product operator and E_f is the resulting elastogram after filtering. Next, the average scalar strain value of E_f, µ_ε, is computed and then used to estimate the displacement ∆x of a virtual spring after being compressed with a force F (see Figure 4.7). Based on the classic definition of mechanical strain, we can obtain the displacement of the spring as

∆x = µ_ε L (4.4)

where L is the original length of the spring. The force F of the spring according to Hooke's law is defined as

F = −k∆x (4.5)

where k is the stiffness value of the spring, given by

k = Eπr²/d (4.6)

where E is the Young's modulus of the soft tissue. The Young's modulus of healthy tissue has values between 2 and 4 kPa.
d is the thickness of the compressed tissue (see Figure 4.8) and r is the radius of the compressed area, obtained from the standard deviations of the Gaussian mask as

r = sqrt(σ_x² + σ_y²).    (4.7)

The translation from strain to force is then achieved through Equation (4.5). In the following sections, we define how the motion applied by the user on the haptic device is used to move the virtual probe, which corresponds to the ROI in the ultrasound image. Afterwards, we detail the process we developed for teleoperating the ultrasound probe. We also show how moving the virtual probe provides force feedback as a function of the tissue elasticity, using the approach previously explained.

Impedance force feedback system

This section describes the impedance force feedback scheme we implemented on the haptic device. The relation between the elastogram and the output force was established in the previous section. Now, we explain how to relate this output force to the user input motion. In this study, we consider the use of a 2D ultrasound probe instead of a 3D one in order to obtain 2D elastograms in real time. Therefore, the motion considered here is limited to pure in-plane translations (lateral and axial translations).

Figure 4.9 illustrates the principle, which consists in moving the ROI to follow the displacement applied by the user on the handler of the haptic device. The figure shows the Cartesian frames F_b and F_h attached to the base and the handler of the haptic device, respectively, and F_I attached to the ultrasound image. The point (u_r, v_r) is the origin of the ROI with respect to the image frame F_I, and (u_c, v_c) is the center of the ROI. If the user applies a motion to the handler, the point (u_c, v_c) is shifted by a displacement ∆_d proportional to the displacement of the handler. The relative pose of the handler with respect to its initial pose is computed as

^{h0}M_h = (^{b}M_{h0})^{-1} ^{b}M_h,    (4.8)

where the operator ^{-1} represents the inversion of a homogeneous matrix. The homogeneous matrix ^{b}M_{h0} is measured at initialization and remains constant, and ^{b}M_h represents the current measured pose of the handler. As previously mentioned, we only need the relative translation, which is extracted from the relative homogeneous matrix ^{h0}M_h as ^{h0}t_h ∈ R³. The relative motion of the handler ∆_h in the x-y plane is therefore obtained from the x and y components of the relative translation ^{h0}t_h. The corresponding displacement in the image is

∆_d = S ^{I}R_h ∆_h,    (4.9)

where ^{I}R_h ∈ SO(2) is the 2 × 2 rotation matrix between the x-y plane of F_h and the image frame F_I, and S ∈ R^{2×2} is a diagonal matrix containing the scale values for the lateral and axial directions. These values are computed from the calibration of the ultrasound probe, which converts pixels to meters (s_x and s_y). The matrix S is defined as

S = diag(1/s_x, 1/s_y).    (4.10)

The displacement ∆_d of the point (u_c, v_c) directly affects the position of the ROI where the elastogram is estimated: it shifts every element inside the ROI. The motion of (u_c, v_c) is bounded to the region R_I = ( w_r/2, w_I - w_r/2 ] × ( h_r/2, h_I - h_r/2 ] (see Figure 4.11), which is necessary to ensure that an elastogram of size w_r × h_r can always be estimated. The displacement of the point (u_c, v_c) generates a new elastogram, which is translated into a force feedback F using the method described in Section 4.2.1 through Equation (4.5). This completes the impedance haptic system, which generates force feedback every time the user applies a motion to the handler of the haptic device. As previously explained, this approach is implemented for the 2D FOV of the ultrasound probe.

In addition, teleoperation of the probe in the robot's workspace, allowing the user to explore the tissue while moving the handler of the haptic device, can give the user more freedom. However, the teleoperation of the probe and the force feedback system cannot work at the same time, since they would need to share the same DOF of the haptic device. To deal with this issue, we designed two operation modes, as shown in Figure 4.6, where the user can switch between teleoperation of the ultrasound probe and motion of the virtual probe for haptic sensing, using the buttons on the handler of the haptic device. The next section presents the teleoperation of the US probe, followed by the experimental results of teleoperation and haptic feedback.

Robotic teleoperation

In this section, the teleoperation of the ultrasound probe is presented. This process is based on a master-slave system, where the master is a unit manually operated by a user, in our case the haptic device. The slave robot holding the US probe duplicates the motion applied to the master device.
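A compact sketch of this handler-to-ROI mapping is given below (NumPy; the function name and the way the arguments are packaged are ours, not the thesis implementation). It applies Equations (4.8)-(4.10) and then clamps the ROI center inside the admissible region R_I.

```python
import numpy as np

def roi_center_from_handler(M_h0, M_h, R_Ih, s_x, s_y, uc0, vc0, roi_size, img_size):
    """Handler-to-ROI mapping of Equations (4.8)-(4.10), with clamping to R_I.

    M_h0, M_h : 4x4 homogeneous poses of the handler (initial and current) in F_b.
    R_Ih      : 2x2 rotation between the handler x-y plane and the image frame F_I.
    s_x, s_y  : pixel width and height [m/pixel] from the probe calibration.
    uc0, vc0  : initial centre (u_c, v_c) of the ROI [pixels].
    roi_size  : (w_r, h_r) of the ROI; img_size: (w_I, h_I) of the image [pixels].
    """
    # Relative handler pose (Eq. 4.8) and its x-y translation Delta_h.
    M_rel = np.linalg.inv(M_h0) @ M_h
    delta_h = M_rel[:2, 3]

    # Image-plane displacement Delta_d (Eqs. 4.9-4.10): rotate, then scale to pixels.
    S = np.diag([1.0 / s_x, 1.0 / s_y])
    delta_d = S @ R_Ih @ delta_h

    # Shift the ROI centre and keep it inside the admissible region R_I.
    w_r, h_r = roi_size
    w_I, h_I = img_size
    u_c = float(np.clip(uc0 + delta_d[0], w_r / 2.0, w_I - w_r / 2.0))
    v_c = float(np.clip(vc0 + delta_d[1], h_r / 2.0, h_I - h_r / 2.0))
    return u_c, v_c
```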
The master-slave communication is essential to perform delicate procedures. Figure 4.12 shows the proposed system for the master-slave teleoperation of the ultrasound probe. In this system, the haptic state contains the kinematic information of the haptic device, which provides the current pose of the handler. The initial pose of the handler, once the system is launched, is represented by the homogeneous matrix ^{b}M_{h0}. Afterwards, the current pose ^{b}M_h is used to compute the relative pose of the handler with respect to its initial pose, as defined in Equation (4.8). This relative pose ^{h0}M_h is expressed in the frame F_h; to duplicate the displacement with the ultrasound probe, we need to express it with the same orientation as the ultrasound contact frame F_cp (see Figure 4.13). Therefore, the new relative pose, expressed by the homogeneous matrix M_∆, is obtained using the rotation matrix ^{cp}R_h between the frames F_h and F_cp:

M_∆ = [ ^{cp}R_h  0_{3×1} ; 0_{1×3}  1 ] ^{h}M_{h0}.    (4.11)

Once the relative pose is expressed in the frame F_cp, we can extract six displacement components: three linear displacements (∆t_x, ∆t_y, ∆t_z), computed from the translation vector contained in M_∆, and three angular displacements (∆θ_x, ∆θ_y, ∆θ_z), obtained from the rotation matrix R_∆ contained in M_∆ through Rodrigues' formula. These six components are the desired displacements that the ultrasound probe should reach with respect to its initial pose in the robot base frame F_r, denoted ^{r}M_{cp0}, and we gather them in the desired feature vector

s*_∆ = [ ∆t_x  ∆t_y  ∆t_z  ∆θ_x  ∆θ_y  ∆θ_z ]^⊤.    (4.12)

As we want to reach the desired displacement with the ultrasound probe, we define its current pose as ^{r}M_{cp} and its relative pose ^{cp0}M_{cp} with respect to the initial pose ^{r}M_{cp0} as

^{cp0}M_{cp} = (^{r}M_{cp0})^{-1} ^{r}M_{cp}.    (4.13)

We extract the relative probe displacement from ^{cp0}M_{cp} using the same principle as the one used to extract the displacement components of M_∆. In this case, the six displacement elements are gathered in the measured vector

s_∆ = [ ∆t_px  ∆t_py  ∆t_pz  ∆θ_px  ∆θ_py  ∆θ_pz ]^⊤.    (4.14)

The error between the measured and desired displacements is defined as

e_∆ = s_∆ - s*_∆,    (4.15)

and the desired exponential decrease of the error as

ė_∆ = -λ_∆ e_∆,    (4.16)

where λ_∆ is the gain of the error variation. The variation of the measured displacement ṡ_∆ is related to the ultrasound probe velocity v by

ṡ_∆ = L_∆ v,    (4.17)

where L_∆ = I_6 is the interaction matrix relating the variation of the measured displacement ṡ_∆ to the probe velocity v, with I_6 the 6 × 6 identity matrix. The variation of the measured displacement ṡ_∆ is directly related to the desired error variation ė_∆. Therefore, replacing ṡ_∆ by ė_∆ in Equation (4.17), we obtain a relation to compute the velocity v corresponding to the desired error variation:

v = L_∆^+ ė_∆.    (4.18)

Equation (4.18) is the velocity control law for the 6 DOF at the frame F_cp. Since this control law is designed for the full motion of the probe at the frame F_cp, we need to limit its movement along the y-axis for safety reasons, such that the force control along the y-axis keeps full priority. Indeed, force control is needed for the elastography process, but it also brings safety when combined with the teleoperation task. The force control law was detailed in Chapter 2 and is defined by Equation (2.6). The fusion of the force control and the teleoperation task is achieved using the redundancy control framework presented in Section 2.5: the force control is the highest-priority task, and the teleoperation task is projected onto its null space with the projector matrix P_1 defined in Equation (2.53), so that it does not disturb the force control. This leads to the fused velocity v_p expressed at the frame F_cp (Equation (4.21)), which encodes the hierarchy of the two tasks.

Experimental results

The experimental setup of the proposed haptic system is shown in Figure 4.14, and the multithreaded workflow of the implementation in Figure 4.15. The content of the Elastogram shared pointer is changed by the elastography thread at 24 FPS. The elastography process computes the elastogram with the approach presented in Section 2.2.2. The elastogram is then converted into force feedback, as presented in Section 4.2.1, in the haptic system thread. The haptic system measures the current pose of the handler with an update rate of 100 Hz, and the new relative displacement ∆_d is sent to the elastography process to change the position of the ROI. Inside the haptic system thread, a mechanism can be activated on demand by the user to switch between the impedance force feedback and the teleoperation control mode. This mechanism uses the two buttons located on the handler to switch between the two control modes.
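To make the teleoperation loop concrete before presenting the experiments, the following sketch (NumPy; variable names are ours) shows the velocity computation of Equations (4.15)-(4.18) performed at each iteration of the teleoperation mode. The fusion with the force control task (Equation (4.21)) is not reproduced here, and the gain value is an assumption.

```python
import numpy as np

def teleoperation_velocity(s_delta, s_delta_star, lambda_gain=1.0):
    """Sketch of the teleoperation control law, Equations (4.15)-(4.18).

    s_delta      : measured 6-vector of probe displacements at the frame F_cp.
    s_delta_star : desired displacements extracted from the handler motion.
    lambda_gain  : gain lambda_Delta of the exponential error decrease (assumed value).
    """
    e_delta = np.asarray(s_delta) - np.asarray(s_delta_star)   # Eq. (4.15)
    e_dot = -lambda_gain * e_delta                              # Eq. (4.16)
    L_delta = np.eye(6)                                         # interaction matrix, Eq. (4.17)
    v = np.linalg.pinv(L_delta) @ e_dot                         # Eq. (4.18); here simply v = -lambda * e
    return v   # 6-DOF velocity at F_cp, before fusion with the force control task
```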
When the teleoperation process is activated, the Robot Control thread computes and applies the velocity to the robot using Equation (4.21). The Robot Control thread is also in charge of applying the continuous force oscillation along the y-axis needed to obtain the pre- and post-compressed states of the tissue required for the elastogram estimation. The RFtoBMode thread performs the conversion of the RF frame to a B-mode image and sends it to the main thread for display purposes. In this implementation, most of the threads run at different frequencies; the synchronization between them is achieved by the Qt signal-slot connection mechanism.

We now present an experiment that consists of two parts. First, the initial state of the system is in teleoperation mode, and the user can explore the tissue by moving the handler of the haptic device. Figure 4.16 shows, in the first row, different configurations of the haptic device when the user applied manual motion to the handler during the teleoperation control mode. The second row of Figure 4.16 presents the resulting pose of the robot holding the probe, and the third row provides the observed ultrasound image for each configuration. Figure 4.17 shows the temporal evolution of the measured force and of the control velocities of the robot applied at the contact frame F_cp. The experiment begins with the ultrasound probe above the phantom without making contact, as shown in Figure 4.16d. The palpation motion task is then activated, initiating the force control to reach contact with the phantom. Figure 4.17a shows, at t ≈ 2.5 s, the beginning of the force variation needed for the palpation motion. The teleoperation of the US probe with the haptic device starts at t ≈ 13 s, as indicated by the black arrow in the plot of the teleoperation errors (Figure 4.17d). At the same time, we can observe in Figures 4.17b and 4.17c that the control velocities start to vary in order to replicate the motion introduced by the user at the handler of the haptic device. In Figure 4.17d, we also highlight two kinds of intervals: when the user applies continuous motion to the haptic device (light-blue strips) and when the user stops the motion (light-green strips). We can observe the fast convergence (about 0.5 s) of the teleoperation system in both cases. In these plots, one can observe the variation of the velocities and of the errors due to the different motions introduced at the handler of the haptic device. The parameters of the force controller, F_0 and ∆_F, were set to 5 N and 3 N, respectively. We can observe that the measured force correctly follows the desired oscillating reference, showing that the force control task, which has the highest priority, is not disturbed by the teleoperation task.

The second part of the experiment presents the results of the impedance haptic system when the user switches from the teleoperation mode to the haptic force feedback mode. After selecting a ROI where the elastogram is estimated in real time, the user can feel the force computed from the elastogram while moving the handler of the haptic device. We present the results of one experiment where the user moves the virtual probe position, corresponding to the ROI, inside the ultrasound image with the haptic device, feeling the force feedback during the motion. Figure 4.18 shows the plot of the haptic force feedback applied to the handler of the haptic device together with the corresponding positions of the ROI during the motion. To evaluate the reproducibility of this force feedback, the virtual probe was moved 50 times along a predefined path (Figure 4.19). Figure 4.20 shows the resulting force feedback average, where the black line is the average force feedback and the green area represents the interquartile range (IQR). Based on the observed small variation (maximum standard deviation of 0.21 N) after the 50 repetitions, we can conclude that the force feedback measurement in the ROI is highly reproducible.

Conclusion

The sense of touch in palpation examination is essential to differentiate tissue stiffness. In addition to the capability of ultrasound elastography to provide quantitative elastic information, allowing the examiner to feel the elasticity can help to locate anomalies while moving a ROI in the ultrasound image. Two assistance modes were proposed in this chapter to help the examiner perform ultrasound elastography of the tissue and simultaneously feel the elasticity of the tissue with a haptic device.
This system, with the teleoperation of a 2D ultrasound probe, offers the capability to remotely perform ultrasound elastography on a patient or simply to confirm the tissue elasticity displayed in the elastogram. We have demonstrated experimentally a good performance for both the teleoperation and the impedance haptic control modes. The force feedback derived from the elastogram was statistically evaluated to determine its reproducibility. However, the estimation of the force feedback from the elastogram assumes a specific stiffness value, depending on the Young's modulus of the soft tissue, which may vary between different kinds of soft tissue. Despite this coarse assumption on the Young's modulus, the force feedback obtained in the experiments is promising, and it opens the possibility of a study with expert physicians to validate this force feedback assistance functionality.

CHAPTER 5

CONCLUSION AND PERSPECTIVES

Conclusion

The research work presented in this thesis focused on the design of a robotic system to assist the examiner in the ultrasound elastography process. The assistance of the robotic system could facilitate the physician's diagnosis of diseases. In addition, the system introduced in this work has demonstrated the ability to perform the fatiguing tasks that the examiner usually carries out. In Chapter 1, we introduced the principles of ultrasound imaging, the state of the art of ultrasound elastography and the definition of visual servoing. In Chapter 2, we presented a robotic-assisted system for quantitative ultrasound elastography combining three hierarchical robotic tasks. To the best of our knowledge, the system presented in Chapter 2 is the first robotic-assisted system that uses parameters taken directly from elastography in a robotic task. This contribution showed its capacity to be used with 2D and 3D ultrasound probes. A different methodology to maintain the stiff tissue of interest in the field of view of the ultrasound probe was designed in Chapter 3. This approach took into account motion perturbations of the tissue (e.g., physiological motions), which were not addressed by the system presented in Chapter 2. The approach of Chapter 3 considered a 2D ultrasound probe and a hybrid ultrasound visual servoing process to compensate for this perturbation motion with a 6-DOF robot. The automatic motion compensation with the robot led to another contribution: the estimation of the elasticity map of a moving tissue. In Chapter 4, another robotic application of elastography was presented. This last application was the development of a haptic system based on the elasticity map given by the robotic palpation system. In addition, the haptic device was used to teleoperate the ultrasound probe in order to explore the tissue. In the following, we present the general conclusions of the thesis. Afterwards, several ideas for perspectives of this work are provided.

The first chapter presented the three concepts most used in this work. First, the main concepts of the ultrasound imaging modality were introduced and the image reconstruction from ultrasound echoes was described. Then, the state of the art of elastography presented the most common approaches to obtain the strain map of the tissue. The complete classic elastography approach was detailed to introduce the process in which a robot can assist. Afterwards, the few existing robotic systems related to ultrasound elastography were described.
However, these previous robotic systems do not exploit the elastic information in their robotic controllers. This chapter also recalled the basic principle of visual servoing.

Chapter 2 presented one of the major contributions of this thesis: a robotic-assisted system for quantitative ultrasound elastography. The proposed system allows an automatic and real-time generation of the elasticity map of the observed tissues. It is based on the development and implementation of three robotic tasks: palpation motion, automatic centering of the stiff tissue of interest, and orientation of the ultrasound probe. The palpation motion was performed by a force controller that uses the force measured by a force/torque sensor placed between the end-effector of the robot and the ultrasound probe. This controller was designed to continuously apply a periodic pressure on the tissues by controlling the probe velocity component along its axial direction. An algorithm was then proposed, based on a block matching algorithm (BMA) and optical flow, to estimate the elasticity image (elastogram) from pre- and post-compression RF arrays acquired during this palpation task. The second task automatically centers the ultrasound probe on the stiffest tissue in a ROI. The location of the stiffest tissue was extracted from the elastogram generated by the palpation task and used as input of a visual servoing process to laterally center it in the FOV of the ultrasound probe. The third robotic task that we proposed was an automatic re-orientation of the probe that can be activated on demand by the examiner to observe the stiff tissue of interest under different viewing angles. The three tasks were combined through a hierarchical control formulation: the palpation motion has the highest priority, followed by the automatic centering of the stiffest tissue and, lastly, the orientation of the probe, which has the lowest priority. Experimental results obtained on different phantoms demonstrated the feasibility of our proposed assistive robotic system for 2D and 3D elastography. The experiments performed with the 2D probe demonstrated convergence of the robotic system when a ROI was selected in the FOV of the ultrasound image. As the use of a 2D probe limits the motion control of the probe to the observation plane, we also proposed an adaptation of our system for the use of a 3D probe. This adaptation gave the system the possibility of performing 3D elastography and of centering the probe on a stiff tissue target that does not initially intersect the central US observation plane. However, the acquisition time of an ultrasound volume with a motorized probe is significant (around 1 s per volume), and this limitation makes the system slow compared with the one developed with a 2D probe. Additionally, a study of the use of this robotic system to improve the quality of 2D elastography was also included in this chapter. This study consisted in capturing elastograms at different probe orientations and then averaging them to compute a resulting elastogram of better quality than an individual elastogram at a single orientation. The quality assessment of the elastogram was performed by measuring the contrast-to-noise ratio of the elastogram (CNRe). The limitations of the robotic system presented in Chapter 2 led us to design an alternative approach to assist in the elastogram estimation.
This approach has to be robust to any motion perturbation of the tissue. The corresponding system was presented in Chapter 3, where a 2D ultrasound probe was used to perform the compensation of perturbing motions. The robotic system comprised two tasks: palpation motion and motion compensation. The first task, palpation motion, remained the same as in the previous approach. However, the motion compensation using a 2D ultrasound probe was the key issue in the design of the new system. As explained for the previous approach, a 2D ultrasound probe is limited to in-plane motions. However, prior knowledge of parallel planes close to the initial 2D ultrasound image provides enough information to estimate the out-of-plane motion through 3D image gradients, as presented in [75]. This approach can compensate for motion based on dense image tracking of a ROI. However, the existing approach only considered a rigid transformation model to track the ROI, which in our case is not always sufficient due to the deformation of the tissue. Therefore, we proposed to adapt this dense image tracking method by including a deformable model to track the ROI in the image, in order to compensate for the deformations of the ROI caused by the compression applied by the palpation motion of the ultrasound probe. This improves the efficiency of the robotic motion compensation task. In addition, the information from the deformable tracking of the ROI was employed to compute the displacement map, which was then used for the estimation of the elastogram in the ROI. The experimental results demonstrate that the system is capable of compensating for a wide range of external motions applied to the tissue, keeping the ROI in the FOV of the ultrasound probe. This improved on the system presented in Chapter 2, providing a better alternative for a robotic-assisted elastography system.

The possibility of measuring the elasticity of the tissue with the assistance of a robotic system helps the examiner to detect stiff tissues. However, the cooperation between the robotic system and the examiner is indispensable to allow a better diagnosis of diseases based on the elasticity of the tissue. Feeling the elasticity of the tissue is a fundamental part of palpation examination, which complements the elastography process. Therefore, in Chapter 4, we presented a haptic system that applies force feedback based on the motion of a virtual probe and the estimated elastogram of the tissue. This approach was designed as a master-slave system, where the master was a haptic device and the slave was the robot performing the palpation motion with the ultrasound probe. This system included teleoperation of the probe with the haptic device to explore the tissue while the palpation motion task kept the ultrasound probe in contact with the tissue. The approach was experimentally tested on the abdominal phantom used for the other two robotic systems previously described, and the results of the haptic force feedback were analyzed. The results showed that the haptic system is a promising approach that can help doctors not only to measure the elasticity of the tissue, but also to feel it through a haptic device.

Perspectives

There are some important perspectives of this work, which are classified into two categories according to their development horizon: short-term and long-term perspectives.
Short-term perspectives

Many studies can be performed with the robotic framework presented in this work. For example, the use of a 3D ultrasound probe with faster acquisition of ultrasound data could be explored to extend the approach presented in Chapter 2. The exploration and comparison of different ultrasound elastography approaches with the robotic system could also help determine which technique performs best with the robotic system. A multi-modality image registration process could also improve the system presented in Chapter 2. A 3D stiffness map of the tissue could be obtained beforehand with magnetic resonance elastography (MRE) and registered with the 2D stiffness map estimated in real time with ultrasound. This approach could be applied in laparoscopy, guiding the surgeon to the location of stiff tissues. The multi-modality image registration for elastography could also be used to avoid estimating the tissue stiffness, using instead the corresponding stiffness value computed by MRE to calculate the force feedback in the haptic system presented in Chapter 4.

Occlusions of the ultrasound propagation due to bones or other artifacts cause low image quality, while the quality of the acquired ultrasound information is fundamental to compute the elasticity of the tissue. Recently, a new robotic approach to improve ultrasound imaging quality was presented in [20]. This work is based on the optimization of the confidence map, which is an image representation of the ultrasound quality. The adaptation of this approach to the robotic systems presented in this thesis could therefore be advantageous, increasing the quality of the elastogram of the tissue. In terms of applications, guidance of needle insertion for a biopsy using elastography could be an interesting approach. This application could be obtained by combining the robotic needle insertion approach presented in [21] with the approach presented in Chapter 2, which locates the stiffest tissue in a region of interest inside the field of view of an ultrasound probe. The real-time computation of the location of the stiffest tissue, usually corresponding to a tumor, would be the key to such a robotic needle insertion system.

Long-term perspectives

The estimation of tissue deformation can be improved by an online registration between the real-time ultrasound information and prior knowledge of the geometrical structure of the tissue. This kind of approach has been applied in radiotherapy of the neck [22], where weight loss during several weeks of therapy modifies the volume of anatomical structures. The approach uses a finite element method (FEM) model obtained from the first CT scans. Afterwards, an interactive registration of a few points is performed to obtain the deformation and biomechanical parameters of the soft tissue, helping to limit the radiation. Currently, soft robots are being studied for minimally invasive surgery [23].
This kind of robot provides higher dexterity and flexibility than classic robots. Mobility is one of the most interesting features of a soft robot: it could help in performing real-time elastography of regions occluded by bones. The design of a soft robot to perform elastography is a perspective that could help surgeons to localize and remove tumors in minimally invasive surgery. Finally, the three robotic-assisted systems presented in this dissertation are just the beginning of the research toward a robotic tool that could someday be employed in hospitals to facilitate the diagnosis of diseases or the planning of surgery. The approaches for a robotic system to assist in elastography presented in this work were only evaluated on phantoms. However, the main goal is to bring these approaches to real medical applications. This is a long process that involves evaluations of the systems on in-vivo tissues, which requires the approval of hospitals and clinicians. Despite the accuracy of an industrial robot, its volume and safety are limitations that must be considered for a certified medical robotic system [104]. Therefore, the implementation of the proposed robotic assistance tasks on a smaller and safer robot is a necessary step.
5.2 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.1 Short-term perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.2 Long-term perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 2 : 2 Figure 2: General robotic framework for quantitative ultrasound elastography. Figure 1 . 1 : 11 Figure 1.1: Sound propagation through two different media. Left illustration depicts the change in the wavelength of the sound for every media. Right sketch shows the orientation of the sound when is induced (θ i ), transmitted (θ t ) and reflected (θ r ) between the two media. Figure 1 .Figure 1 . 2 : 112 Figure 1.2 shows the representation of the reflections of the sound wave of incidence produced by one scatterer point. Figure 1 . 3 : 13 Figure 1.3: RF signal recorded from a real transducer. Figure 1 . 4 : 14 Figure 1.4: RF envelope detection. Red curve represents the envelope of the RF signal in green. Figure 1 . 5 : 15 Figure 1.5: B-mode image reconstructed from the RF signals envelopes. (a) b-mode image reconstructed only from the envelope detection for a set of scan lines. (b) b-mode image obtained with the logarithmic compression of the envelope detection for the same set of scanlines used in (a). Figure 1 . 6 : 16 Figure 1.6: Geometry of a linear ultrasound probe. 7. The elements of the transducer are positioned along an arc of a circumference with center at F p . The metric coordinates of every point (x, y) inside the field of view (FOV) of the probe are related to the polar coordinates (r, θ) by x = r sin(θ), (1.14) Figure 1 . 7 : 17 Figure 1.7: Geometry of a convex ultrasound probe. Figure 1 . 8 : 18 Figure 1.8: Pre-scan and post-scan b-mode images. (a) pre-scan and (b) resulted postscan b-mode images obtained with a convex ultrasound probe. 1. 2 .Figure 1 . 10 : 2110 Figure 1.10: (a) RF signal analogy with a succession of springs S 1 , S 2 and S 3 . (b) the Hooke's law scheme. Figure 1 . 11 : 111 Figure 1.11: Strain profile for three consecutive springs. Springs 1 and 3 have the same length l and stiffness value k in the pre-compression state. Figure 1 . 12 : 112 Figure 1.12: Results where the MRE demonstrates increasing liver stiffness values with increasing stage of fibrosis (Figure taken from [89]). The top row shows share wave images from four patients with biopsy-proven hepatic fibrosis from stage 1 to 4. The lower row shows corresponding elastograms for these patients. Figure 1 . 13 : 1 . 2 . 5 . 1 1131251 Figure 1.13: Classification of ultrasound elastography approaches. The principle of each category is described in the section indicated at the bottom of the box. Figure 1 . 14 : 114 Figure 1.14: Cross-correlation between the segments of RF signals s pr e and s post . The absolute cross-correlation is normalized to one and the lag where the amplitude is maximum is indicated. Figure 1 . 15 :Figure 1 . 16 : 115116 Figure 1.15:Transient elastography process with an ultrasound transducer. First, the shear wave is produced by the vibrator and then, the ultrasound is emmited. This process produces the elastic information of the tissue scanned by the ultrasound transducer. Figure 1 . 17 : 117 Figure 1.17: Liver stiffness measurement using FibroScan (Figure taken from [25]). Figure 1 . 18 : 118 Figure 1.18: Results of the snake robot performing palpation of a prostate phantom presented in [76]. (Left) shows the experiment setup and (right) presents the stiffness of the artificial prostate. 1. 3 .Figure 1 . 
19 : 3119 Figure 1.19: View of the da Vinci console displaying the real-time images including the elastogram (Figure taken from [10]). Figure 1 . 1 Figure1.20: Force control scheme of the robotic system presented in[START_REF] Bell | Force-controlled ultrasound robot for consistent tissue pre-loading: Implications for acoustic radiation force elasticity imaging[END_REF] for applying the ARFI required for the tissue elasticity measurement. Figure 1 . 21 : 121 Figure 1.21: Visual servoing closed-loop. Figure 1 . 22 : 122 Figure 1.22: The two configurations of the visual sensor location in visual servoing. Figure 1 . 23 : 123 Figure 1.23: Robot joints q for a 6-DOF robot, and the pose r of the end-effector. Figure 2 . 1 : 21 Figure 2.1: Scheme and workflow of the experimental setup. CHAPTER 2 .Figure 2 . 2 : 222 Figure 2.2: Viper s850 robot. 2. 1 1 . EXPERIMENTAL SETUP (a) SonixTouch ultrasound system. (b) Ultrasound transducer 4DC7-3/40. Figure 2 . 3 : 23 Figure 2.3: Ultrasound equipment used for the experiments. (a) Abdominal phantom ABDFAN US-1B. (b) Two-layers gelatin phantom containing two duck gizzards. Figure 2 . 4 : 24 Figure 2.4: Phantoms used in the experiments. Figure 2 . 5 : 25 Figure 2.5: Cartesian reference frames attached to the robotic arm. b as the rotation matrix and translation vector, respectively. t a b × is the three-by-three skew-symmetric matrix representation of t a b . Figure 2 . 6 : 26 Figure 2.6: Desired sinusoidal force applied by the force controller. Figure 2 . 7 : 27 Figure 2.7:Measured force when the force control law is applied. Green curve is the desired force variation. The red and blue curves correspond to the force measured when applying Equations (2.5) and (2.6) respectively. Figure 2 . 8 . 28 Two RF frames are grabbed before and after the tissue compression and represent respectively the pre-and post-compression states. A region of interest (ROI) or a volume of interest (VOI), in case of the use of a 2D or 3D ultrasound probe, is selected from the RF data to estimate the elastogram and display it on the b-mode image. Figure 2 . 8 : 28 Figure 2.8: Elastography process. From left to right: mechanical compression with the ultrasound probe over a static soft tissue; RF frames acquired for the pre-and postcompression states; b-mode image with an elastogram overlaid on a ROI. Figure 2 . 9 : 29 Figure 2.9: Displacement estimation. Parameters in the motion estimation process using the RF data observed in the ROIs of the pre-and post-compressed frames. [START_REF] Cavusoglu | Multirate simulation for high fidelity haptic interaction with deformable objects in virtual environments[END_REF] where (i c , j c ) is the center point of the search region as shown in Figure2.10. At this stage, we have shown the estimation of the motion for one block. The same process is repeated for the next blocks, but we need to define their position. We define the position changing as a shifting process, where the next block position can represent a block overlapping with respect to one or more previous blocks (seeFigure 2.10). The percentage of blocks overlapping is related to the resolution of the axial and lateral displacements Figure 2 . 10 : 210 Figure 2.10: Parameters involved in the motion estimation algorithm using the information of the post-compression RF frame (element access through g (i , j )). The grid inside the search region represents the integer step between the RF elements. Figure 2 . 11 : 211 Figure 2.11: Displacement maps from motion estimation. 
After obtaining the displacements for every block, their axial and lateral components are stored in two arrays V 0 (i , j ) and U 0 (i , j ), respectively. Figure 2 . 12 : 212 Figure 2.12: LSQ strain estimation. (a) strain estimation parameters for the sample at (i s , j s ) in the displacement map, and the respective (b) strain map. Figure 2 . 13 : 213 Figure 2.13: Generation of the ground truth using a FEM. Figure 2 . 2 Figure 2.14 shows few scatterers as red points in a frame acquisition by the simulated ultrasound machine (similar to a real ultrasound machine). The same figure also shows how the scatterers have been displaced due the mechanical compression. Note that in order to work, FIELD II requires a number of scatterers of order 10 3 points in a random Gaussian distribution. Figure 2 . 14 : 214 Figure 2.14: Scatterers position in the process of compression. Figure 2 . 15 : 215 Figure 2.15: Three input models used to simulate the compression by FEM. Figure 2 . 16 : 216 Figure 2.16: Output due the compression by FEM. Figure 2 . 17 :Figure 2 . 18 : 217218 Figure 2.17: Output FIELD II using B-Mode. Figure 2 . 19 : 219 Figure 2.19: RF volume for one motor sweep. Figure 2 . 20 : 220 Figure 2.20: 3D elastogram reconstruction based on the 2D process. Figure 2 . 2 Figure 2.20 shows the process to compute one elastogram in 3D based on the 2D Figure 2 . 21 : 221 Figure 2.21: Connected components process to label different regions for 2D and 3D images. Figure 2 . 25 :Figure 2 . 2252 Figure 2.25: 3D scan conversion. We show the parameters needed to convert any point s in RF units to a point p in metric units. Figure 2 . 26 : 226 Figure 2.26: Probe orientation. Figure 2 . 27 : 227 Figure 2.27: 3D probe orientation. Figure 2 . 29 : 229 Figure 2.29: Block diagram of the 2D case implemented in the multi-thread software application. +5 • and θ 4 = θ 3 +5 • . The curves of the error evolution for the three tasks are shown in Figure 2.30.We can see that the force error ranges between ±1N due to the sinusoidal desired force variation. Once the ROI is selected (at time 20 s), the centroid of the elastogram is computed, and it is sent to the automatic centering control task. The object centering error decreases towards zero but still exhibits a low remaining oscillation of ±3 mm due to the elastogram noise. However, the ROI is horizontally maintained close to the image center by the visual servoing task even when the user successively changed the probe desired angles at times 21, 90, 126, 167 and 205 s, keeping automatically the object of interest in the field of view. Figure 2 .Figure 2 . 30 : 2230 Figure 2.30: Phantom experiment. Evolution of the system during the experiment. (a) Force error evolution, with F 0 = 5N and ∆ F = 2N. (b) Probe orientation error evolution. (c) Horizontal target centering error evolution. (d) Velocities applied to the 3-DOF involved in the control law. Figure 2 . 31 : 231 Figure 2.31: Experiment with different probe orientations: (a)-(e) the probe oriented at different angles, from left to right the angles are θ 0 = -10 • , θ 1 = -5 • , θ 2 = 0 • , θ 3 = 5 • and θ 4 = 10 • . (f)-(j) b-mode image where the target is centered with the image and the elastogram ROI overlaid for each probe orientation. (k)-(o) elastograms obtained for each probe orientation shown in the ROI of the images. 42 Table 2 . 4 : 4224 71.45 65.87 87.52 87.02 127.Comparison of the CNR e (in dB) of the estimated elastography images at the different probe orientations of the experiment and their mean. 
Figure 2 . 32 : 232 Figure 2.32: Mean of aligned elastograms obtained from 5 probe orientations. Figure 2 . 33 : 233 Figure 2.33: Display of the three orthogonal planes and the VOI with the 3D elastogram. Figure 2 . 34 : 234 Figure 2.34: Short diagram of the implemented multi-thread software application. Figure 2 . 35 : 235 Figure 2.35: Experiment with a gelatin phantom containing two duck gizzards. 2. 7 . 7 CONCLUSIONservoing task start to variate. The convergence is reached at t ≃ 150s when the probe has been automatically aligned with the stiff tissue of interest (duck gizzard) by following an exponential decrease of the visual error e t as expected. A change of the orientation of the probe is introduced by the user at time t ≃ 160s and we can see at this point the variation of ω x , ω y and ω z that allow reaching the desired orientation (s * θ = [ π 36 -π 36 π 18 ] rad). At this time all the tasks are activated which lead us to keep the target in the center of the ultrasound image. Angular velocities (e) Probe position at the VOI selection (t =72s) (f) Probe position at t =80s (g) Probe position at t =110s (h) Probe position at t =150s (i) Probe orientation at t =172s Figure 2 . 36 : 236 Figure 2.36: Evolution through time of the 3D reference experiment. (a)-(d) evolution of the errors and velocities during the experiment. (e)-(h) evolution of the pose of the ultrasound probe during the centering of the object in the FOV with the 3D ultrasound images at the top. The stiffest object (green color) is also overlaid in the 3D ultrasound images. (i) probe orientation control of the ultrasound probe. Figure 3 . 1 : 31 Figure 3.1: Surface tracking of a beating heart [61]. Left and right images of the stereo camera and the TPS surface approximation. The main advantage of dense image registration techniques is the suppression of an image feature extraction step, therefore avoiding errors due to false detection or bad segmentation in the image. Let us define I t ∈ R m×n and I c ∈ R m×n as the initial and current image templates, respectively. The templates are composed of N = m × n pixels. Figure 3 . 2 : 32 Figure 3.2: Ultrasound images acquired after applying small motion to the probe.(a) reference image. (b)-(c) images acquired after small motion of the probe. (c) an oclussion (shadow) at the right side of the image is present due to the lack of echogenic gel between the probe and the tissue surface. p) is a function that maps a point or set of points x to a new location x ′ by applying a geometric transformation, as illustrated in Figure 3.3. The parameters p in the warp function depend on the type of geometric transformation to use. These transformations are classified in two groups, rigid and non-rigid transformations. The most commonly used transformations in visual tracking are briefly explained in the next section. Figure 3 . 3 : 33 Figure 3.3: Arbitrary warp transformation applied to a set of points x. Figure 3 . 4 : 34 Figure 3.4: Example of performance in dense rigid tracking. (a) initial image (i = 0) of an image sequence (green rectangle is the region to be tracked). (b) last image (i = 500) in the sequence. (c) template image from (a) at top and warped image from (b) at bottom. (d) error image computed from Îc (w (x, p)) -I t (x). Figure 3 . 5 : 35 Figure 3.5: Example of the performance in dense affine tracking. (a) initial image (i = 0) of an image sequence (green rectangle is the region to be tracked). (b) last image (i = 500) in the sequence. 
(c) template image from (a) at top and warped image from (b) at bottom. (d) error image computed from Îc (w (x, p)) -I t (x). Figure 3 . 6 : 36 Figure 3.6: Example of the performance in dense FFD tracking. (a) initial image (i = 0) of an image sequence (green rectangle is the region to be tracked). (b) last image (i = 500) in the sequence. (c) template image from (a) at top and warped image from (b) at bottom. (d) error image computed from Îc (w (x, p)) -I t (x). Figure 3 . 7 : 37 Figure 3.7: TPS image transformation. Image deformation using the TPS warp function. Red dots represent the location of the control points. Figure 3 . 8 : 38 Figure 3.8: Example of the performance in dense TPS tracking. (a) initial image (i = 0) of an image sequence (green rectangle is the region to be tracked). (b) last image (i = 500) in the sequence. (c) template image from (a) at top and warped image from (b) at bottom. (d) error image computed from Îc (w (x, p)) -I t (x). Figure 3 . 9 : 39 Figure 3.9: Displacement map obtained from the deformation of the ROI. x and x ′ represent respectively the coordinates of one pixel in I t and its new coordinates in the current image I c after tissue deformation. D(x) is the vector displacement from x to x ′ . Figure 3 . 3 Figure 3.10 shows the workflow of our proposed method. The main problem produced by a moving object is the generation of noise in the elastogram, due to the introduction of non-axial motion. To counter this, our system combines ultrasound dense visual servoing, force control and the non-rigid motion estimation to compute the elastogram of moving tissue presented in last section. In the next subsections we describe the elements of the proposed control system featured in Figure3.10. Figure 3 . 10 : 310 Figure 3.10: Proposed methodology to obtain the strain map of a moving tissue. This diagram shows the steps to estimate the elastogram from two ROIs in the ultrasound images I t and I c . These images are the reference and current images respectively. I w represents the image I c modified with a non-rigid transformation (w ) that is performed from the motion estimation in order to reduce the absolute difference with I t . [U, V] are the lateral and axial displacement maps computed between I c and I t after the motion estimation. Figure 3 . 11 : 311 Figure 3.11: Possible motions with a 2D US probe. Figure 3 . 13 : 313 Figure 3.13: Evolution through time of the reference experiment. (a) to (d) show the evolutions of the velocities, measured force and visual error during the experiment. Figure 3 . 3 Figure 3.13c. Figure 3 . 14 : 314 Figure 3.14: Perturbations introduced to the phantom. (a) to (d) show some of the states during the experiment. (e) to (h) show the b-mode images observed at these states. (i) to (l) show the strain maps for every state. Figure 3 .Figure 3 .Figure 3 . 15 : 33315 Figure 3.15 shows three elastograms obtained under different conditions. InFigure3.15a, the strain map has been estimated using only oscillatory force control without the automatic motion compensation when the phantom was motionless. We can see Figure 4 . 1 : 41 Figure 4.1: Types of haptic force feedback. Figure 4 . 2 : 42 Figure 4.2: Configurations of kinesthetic haptic device. Figure 4 . 3 : 43 Figure 4.3: Virtuose 6D (Haption S.A.). Haptic device used in this thesis work. Figure 4 . 7 .Figure 4 . 6 : 4746 Figure 4.6: Short block diagram of the system proposed in this chapter. Figure 4 . 7 : 47 Figure 4.7: Force estimation based on strain information. 4. 
2 .r = σ 2 x + σ 2 y( 4 . 7 )Figure 4 . 8 : 2224748 Figure 4.8: Thickness of the compressed tissue with an ultrasound probe. Figure 4 . 9 illustrates 49 the principle that consists in moving the ROI to follow the displacement of the user measured by the handler of the haptic device. This figure also shows the Cartesian frames F b and F h corresponding to the base and the handler of the haptic device, respectively, and F I corresponding to the ultrasound image. The point (u r , v r ) is the origin point of the ROI with respect to the ultrasound image frame F I , and the point (u c , v c ) is the center of the ROI. If the user applies motion at the handler of the haptic device F h , then the point (u c , v c ) is shifted with a displacement ∆ d proportional to the displacement of the handler. Figure 4 . 9 : 49 Figure 4.9: Moving elastogram inside of the ultrasound image. Figure 4 . 10 : 410 Figure 4.10: Handler displacement into the ultrasound image. . 10 ) 10 The displacement ∆ d of the point (u c , y c ) in the ultrasound image affects directly the position of the ROI where the elastogram is estimated. This produces a displacement of ∆ d in all the elements inside the ROI, and the motion of the point (u c , y c ) must be bounded in the region R I ∈ ( w r 2 (w I -w r2 )] × ( h r 2 (h I -h r 2 )] (seeFigure 4.11). Setting boundaries for the motion of the point (u c , y c ) is necessary to ensure the estimation of the elastogram in a ROI of size w r × h r . Figure 4 . 11 : 411 Figure 4.11: Motion boundary R I of the point (u c , y c ) in the ultrasound image. Figure 4 . 4 Figure 4.13). Therefore, the new relative pose expressed by the homogeneous matrix M ∆ Figure 4 . 12 : 412 Figure 4.12: Main components of the master-slave teleoperation system and the connexions between them. Figure 4 . 13 : 413 Figure 4.13: Cartesian frames used for teleoperation of the probe.Once the relative pose is expressed in the frame F cp , we can extract the six displacement components, three angular displacements (∆θ x , ∆θ y , ∆θ z ) and three linear displacements (∆t x , ∆t y , ∆t z ). The linear displacements are computed from the translational vector inside M ∆ . On the other hand, the angular displacements are obtained through the rotation matrix R ∆ contained in M ∆ and expressed as three angular components (∆θ x , ∆θ y , ∆θ z ) with Rodrigues' formula. These six displacement components are the desired values that the ultrasound probe should reach with respect to the initial probe position relative to the robot's base frame F r , r M cp0 " and we can enclose them in a desired feature vector as, Figure 4 . 14 : 414 Figure 4.14: Experimental setup of the proposed haptic system. Figure 4 . 15 : 415 Figure 4.15: Proposed multithread workflow for the implementation. Directional connections between the threads are depicted with red lines. Figure 4 . 4 Figure 4.16d. Then, the palpation motion task is activated, initiating the force control to reach contact with the phantom as shown in Figure 4.16e.Figure 4.17a shows at t =∼ 2.5s Figure 4 . 4 17a shows at t =∼ 2.5s the beginning of the force variation needed for the palpation motion. The teleoperation of the US probe with the haptic device starts at t =∼ 13s as indicated with the black arrow in the plot illustrating the evolution of the teleoperation errors (Figure4.17d). At the same time, we can also observe inFigures 4.17b Figure 4 . 16 : 416 Figure 4.16: Teleoperation system states. 
(a) initial pose of the handler and (d) the corresponding pose of the ultrasound probe with the (h) resulting ultrasound image before the contact with the phantom. (b) rotation around z-axis of the handler and (f) the corresponding pose of the ultrasound probe with the (j) resulting ultrasound image. (c) arbitrary pose of the handler and (g) the corresponding pose of the ultrasound probe with the (k) resulting ultrasound image. Figure 4 . 17 : 417 Figure 4.17: Measured force, velocities and errors in the teleoperation system. Figure 4 . 4 Figure 4.18 the plot that represents the haptic force feedback applied to the handler of Figure 4 . 18 : 418 Figure 4.18: Result of the force feedback of the impedance system. First row shows the different states while moving the handler of the haptic device. The motion of the ROI containig the elastogram is shown in the second row for the different states of the handler motion. The temporal evolution of the force feedback is ploted at the bottom and the position of the states are indicated with red arrows. Figure 4 . 4 [START_REF] Chevrie | Online prediction of needle shape deformation in moving soft tissues from visual feedback[END_REF] shows the force feedback average after the 50 repetitions of the green path, where the black line is the force feedback average and green area represents the interquartile range (IQR). The small IQR for all the positions shown in the plot of Figure4.20 describes the standard deviation (SD). Based on the observed small variation (maximum SD of 0.21N) of the force feedback after 50 repetitions of the green path illustrated in Figure4.19, we can conclude that our force feedback measurement in the ROI is highly reproducible. Figure 4 . 19 :Figure 4 . 20 : 419420 Figure 4.19: Repetitive motion path of the virtual probe. The four images correspond to the four corners of the square path. The green path was used to measure the standard deviation of the force feedback. 5. 1 . 1 CONCLUSIONfor perspectives of this work are provided. .[START_REF] Chaumette | Visual servo control, part II: Advanced approaches[END_REF] Results of the robotic system presented in[START_REF] Sen | Enabling technologies for natural orifice transluminal endoscopic surgery (N.O.T.E.S) using robotically guided elasticity imaging[END_REF] . . . . . . . . . . . . . . . . . . 1.19 Results of the robotic system presented in [10] . . . . . . . . . . . . . . . . . . .13 Generation of the ground truth using a FEM. . . . . . . . . . . . . . . . . . . . 2.14 Scatterers position in the process of compression. . . . . . . . . . . . . . . . 2.15 Three input models used to simulate the compression by FEM. . . . . . . . . 2.16 Output due the compression by FEM. . . . . . . . . . . . . . . . . . . . . . . . Experiment with different probe orientations in 2D . . . . . . . . . . . . . . . 2.32 Mean of aligned elastograms obtained from 5 probe orientations. . . . . . . 2.33 Display of the three orthogonal planes and the VOI with the 3D elastogram. 2.34 Short diagram of the implemented multi-thread software application. . . . 2.35 Experiment with a gelatin phantom containing two duck gizzards. . . . . . 2.36 Evolution through time of the reference experiment (3D case) . . . . . . . . Strain maps obtained under different conditions. . . . . . . . . . . . . . . . . 4.1 Types of haptic force feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Configurations of kinesthetic haptic device . . . . . . . . . . . . . . . . . . . . 4.3 Virtuose 6D (Haption S.A.) . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Workflow of the impedance feedback . . . . . . . . . . . . . . . . . . . . . . . 4.5 Workflow of the admittance feedback . . . . . . . . . . . . . . . . . . . . . . . LIST OF FIGURES LIST OF FIGURES LIST OF FIGURES xvii LIST OF TABLES 4.20 Force feedback average after 50 repetitions of a path . . . . . . . . . . . . . . 122 2.31 xviii xx 3.15 xix 1.1 Comparison between some of the ultrasound elastography approaches. . . 1 Cadre robotique général pour l'élastographie ultrasonore quantitative. . . . v 2 General robotic framework for quantitative ultrasound elastography. . . . . 3 1.1 Sound propagation through two different media . . . . . . . . . . . . . . . . 9 1.2 Scatterer point reflections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.3 RF signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.4 RF envelope detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.5 B-mode image reconstructed from the RF signals envelopes . . . . . . . . . 13 1.6 Geometry of a linear ultrasound probe . . . . . . . . . . . . . . . . . . . . . . 14 1.7 Geometry of a convex ultrasound probe . . . . . . . . . . . . . . . . . . . . . . 15 1.8 Pre-scan and post-scan b-mode images . . . . . . . . . . . . . . . . . . . . . . 16 1.9 Organization of the state-of-the-art of elastography . . . . . . . . . . . . . . . 17 1.10 RF signal analogy with a succession of springs and Hooke's law scheme . . . 18 1.11 Strain profile for three consecutive springs . . . . . . . . . . . . . . . . . . . . 18 1.12 Magnetic resonance elastography of liver fibrosis [89] . . . . . . . . . . . . . 20 1.13 Classification of ultrasound elastography approaches . . . . . . . . . . . . . 22 xv LIST OF FIGURES 1.14 Cross-correlation between two segments of RF signals . . . . . . . . . . . . . 1.15 Transient elastography process with an ultrasound transducer . . . . . . . . 1.16 Experimental setup for transient ultasound elastography [12] . . . . . . . . . 1.17 Liver stiffness measurement using FibroScan (Figure taken from [25]). . . . 11.20 Force control scheme of the robotic system presented in [7]. . . . . . . . . . 1.21 Visual servoing closed-loop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.22 The two configurations of the visual sensor location in visual servoing. . . . 1.23 Robot joints q for a 6-DOF robot, and the pose r of the end-effector. . . . . . 2.1 Scheme and workflow of the experimental setup. . . . . . . . . . . . . . . . . 2.2 Viper s850 robot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Ultrasound equipment used for the experiments. . . . . . . . . . . . . . . . . 2.4 Phantoms used in the experiments. . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Cartesian reference frames attached to the robotic arm. . . . . . . . . . . . . 2.6 Desired sinusoidal force applied by the force controller. . . . . . . . . . . . . 2.7 Measured force when the force control law is applied . . . . . . . . . . . . . . 2.8 Elastography process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.9 Displacement estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.10 Parameters involved in the motion estimation algorithm . . . . . . . . . . . . xvi LIST OF FIGURES 2.11 Displacement maps from motion estimation . . . . . . . . . . . . . . . . . . . 2.12 LSQ strain estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
22.17 Output FIELD II using B-Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.18 Results using simulation data from FEM+FIELD II. . . . . . . . . . . . . . . . 2.19 RF volume for one motor sweep. . . . . . . . . . . . . . . . . . . . . . . . . . . 2.20 3D elastogram reconstruction based on the 2D process. . . . . . . . . . . . . 2.21 Connected components process to label different regions. . . . . . . . . . . . 2.22 Centroid estimation of the biggest region . . . . . . . . . . . . . . . . . . . . . 2.23 Cartesian frames in the ROI for automatic centering process. . . . . . . . . . 2.24 Axes for the frame F p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.25 3D scan conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.26 Probe orientation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.27 3D probe orientation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.28 Plots of the probe control velocities . . . . . . . . . . . . . . . . . . . . . . . . 2.29 Block diagram of the 2D case implemented in the multi-thread software application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.30 Phantom experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Surface tracking of a beating heart [61] . . . . . . . . . . . . . . . . . . . . . . 3.2 Ultrasound images acquired after applying small motion to the probe . . . . 3.3 Warp transformation applied to a set of points x . . . . . . . . . . . . . . . . . 3.4 Example of performance in dense rigid tracking. . . . . . . . . . . . . . . . . 3.5 Example of the performance in dense affine tracking. . . . . . . . . . . . . . 3.6 Example of the performance in dense FFD tracking. . . . . . . . . . . . . . . 3.7 TPS image deformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.8 Example of the performance in dense TPS tracking. . . . . . . . . . . . . . . . 3.9 Displacement map obtained from the deformation of the ROI . . . . . . . . 3.10 Methodology to obtain the strain map of a moving tissue. . . . . . . . . . . . 3.11 Possible motions with a 2D US probe. . . . . . . . . . . . . . . . . . . . . . . . 3.12 Spatial derivative filters (case of three parallel slices). . . . . . . . . . . . . . . 3.13 Evolution through time of the reference experiment. . . . . . . . . . . . . . . 3.14 Perturbations introduced to the phantom . . . . . . . . . . . . . . . . . . . . 4.6 Block diagram of the system presented in Chapter 4 . . . . . . . . . . . . . . 4.7 Force estimation based on strain information . . . . . . . . . . . . . . . . . . 4.8 Thickness of the compressed tissue . . . . . . . . . . . . . . . . . . . . . . . . 4.9 Moving elastogram inside of the ultrasound image . . . . . . . . . . . . . . . 4.10 Handler displacement into the ultrasound image . . . . . . . . . . . . . . . . 4.11 Motion boundary of the point (u c , y c ) . . . . . . . . . . . . . . . . . . . . . . . 4.12 Teleoperation system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.13 Cartesian frames used for teleoperation of the probe . . . . . . . . . . . . . . 4.14 Experimental setup of the proposed haptic system . . . . . . . . . . . . . . . 4.15 Proposed multithread workflow for the implementation . . . . . . . . . . . . 4.16 Teleoperation system states. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.17 Measured force, velocities and errors in the teleoperation system. . . . . . . 
4.18 Result of the force feedback of the impedance system
4.19 Repetitive motion path of the virtual probe
4.20 Force feedback average after 50 repetitions of a path

LIST OF TABLES
1.1 Comparison between some of the ultrasound elastography approaches.
2.1 Force/torque sensor range and resolution.
2.2 Parameters used in FIELD II.
2.3 Parameters for the control simulation test.
2.4 Quality comparison probe orientation and average.
3.1 Nonlinear optimization strategies
3.2 Features of nonlinear optimization strategies.
3.3 Performance evaluation of the transformations in the visual tracking system.

Table 1.1: Comparison between some of the ultrasound elastography approaches.
Quasi-static: mechanical excitation (manual compression); compatible with all ultrasound probes; operator dependent.
Transient (shear wave, ARFI): transient force / remote palpation; can assess deeper located tissue; accuracy for liver fibrosis staging; requires special or additional equipment.
Supersonic (ARFI): fast acquisition.

Table 2.1: Force/torque sensor range and resolution.
F_x, F_y: sensing range 65 N, resolution 12.5 mN.
F_z: sensing range 200 N, resolution 25 mN.
T_x, T_y and T_z: sensing range 5 N·m, resolution 0.75 µN·m.

As shown in Figure 2.2, the robot is holding an ultrasound probe mounted on its end-effector.

The simulated displacements were applied to the scatterers in order to obtain the post-compressed state of the virtual organ. FIELD II was then used to simulate the RF data of the pre- and post-compression states of the virtual tissue. In practice, FIELD II has to be initialized with the parameters of an ultrasound probe, and in our case we set it with the parameters of our real probe (4DC7-3/40) in 2D mode, shown in Table 2.2.

Table 2.2: Parameters used in FIELD II.
Transducer center frequency [Hz]: 3.5 × 10^6
Sampling frequency [Hz]: 4.0 × 10^7
Speed of sound [m/s]: 1540
Wavelength [m]: 1540 / (3.5 × 10^6)
Width of element [m]: 4.25 × 10^-4
Kerf [m]: 5.5 × 10^-5
Number of elements in the transducer: 128

Table 2.3: Parameters for the control simulation test. For the 2D and 3D cases, the columns list the initial value and the value at t = 2 s of the visual features s_t and s*_θ and of the force parameters F_0 and ∆F (entries include 9 cm, [9 3]ᵀ cm, π/6 rad, 3 N and 2 N).

Table 3.2: Features of nonlinear optimization strategies.
Forward additive: low convergence efficiency; simple approach; Jacobian computed at every iteration.
Direct compositional: medium convergence efficiency; intuitive approach and faster convergence than the forward additive approach; Jacobian computed at every iteration.
Inverse compositional: high convergence efficiency; Jacobian computed before the iterations; requires good knowledge of the warp function w.

The special Euclidean group SE(2) = SO(2) ⋉ R², with SO(2) the special orthogonal group.

The least-squares fit uses the regressor matrix with rows (1, 2, ..., k) and (1, 1, ..., 1) (1.34), where ξ(k) = 12 / (k(k² − 1)) is the variance of the estimated displacement and k is the number of samples in the RF signal segment. Then, by convolving the vector g(k) with the displacement ∆_t, the strain values are estimated. This filter was designed such that ∆_t contains only two displacements (at n_i and n_f), reducing the computational time considerably in the strain estimation. The LSQ strain estimator has been used to obtain real-time ultrasound elastography [START_REF] Turgay | Identifying the mechanical properties of tissue by ultrasound strain imaging[END_REF].
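As an illustration of the least-squares strain estimation just described, the following Python fragment is a minimal sketch (ours, not the thesis implementation): it fits a line to the axial displacement over a sliding window of k samples and takes the slope as the local strain; the window size and the toy data are arbitrary choices for the example.

import numpy as np

def lsq_strain(displacement, k=15):
    # Least-squares slope of the displacement over a sliding window of k samples
    d = np.asarray(displacement, dtype=float)
    idx = np.arange(k)
    # design matrix with columns (sample index, ones), cf. Eq. (1.34)
    A = np.column_stack([idx, np.ones(k)])
    strain = np.zeros(d.size - k + 1)
    for i in range(strain.size):
        slope, _ = np.linalg.lstsq(A, d[i:i + k], rcond=None)[0]
        strain[i] = slope
    return strain

# toy example: linearly increasing displacement gives a nearly constant strain
d = 0.01 * np.arange(200) + 0.001 * np.random.randn(200)
print(lsq_strain(d, k=15)[:5])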
ACKNOWLEDGEMENTS

I would like to express my sincere gratitude to my supervisor Alexandre Krupa for his valuable support during my Ph.D. research. He gave me autonomy and valuable advice.

Automatic centering in 2D

The centroid of the stiffest region of the elastogram, extracted using the prior process, will be used to horizontally center the rigid object in the full image. As the centroid coordinates are expressed in the ROI's frame (in RF frame units), it is necessary to express them in the ultrasound contact probe frame F_cp (metric units).

ROBUST MOTION COMPENSATION

In the previous chapter, a palpation assistance system for ultrasound elastography was proposed to automatically move a 2D or 3D ultrasound probe to maintain the visibility of a stiff tissue of interest at the center of the FOV. This approach requires a processing step to segment and compute the centroid of the stiff tissue, which is then automatically centered in the image by visual servoing. The system was designed for motionless tissues, and it gives good results under small perturbations. However, the previous implementation could not provide satisfactory output when the tissue is moving, since the perturbation motion introduces large noise in the estimated elastogram. This is also an important issue for a clinician performing manual ultrasound elastography by moving the probe with his hand. Therefore, the main objective of this chapter is to present a method to estimate the strain map of a moving tissue, which was not possible with the process previously described. We propose a new robotic solution that exploits the intensity information of the 2D b-mode images. It is based on a tissue deformation tracking algorithm and an automatic 6-DOF compensation of the perturbation motion by ultrasound visual servoing. In this chapter, a method for non-rigid motion estimation in 2D ultrasound images is proposed to estimate the displacement map required to compute the tissue strain map. For this, the intensity changes in the ultrasound images due to the force applied by the probe are considered. Moreover, to estimate the strain map of a moving tissue, we propose to perform an automatic motion compensation using an ultrasound image-based visual servoing that synchronizes the probe and tissue motions during the strain map estimation process. This chapter is divided into four sections. Section 3.1 introduces a few related works.

Here (s_x, s_y) are the pixel width and height and (u_cp, v_cp) are the pixel coordinates of the origin of the contact point frame F_cp in the 2D ultrasound image. In order to automatically compensate for the tissue motion, we use the visual error detailed in Chapter 2, defined by Equation (2.6). The fusion of the force control and the teleoperation task is achieved by using the redundancy control framework presented in Section 2.5. Therefore, the highest-priority task, v_1, is the force control task. The secondary task is the teleoperation of the ultrasound probe. To determine this second task, we must define a projector matrix P_1 that projects the second task onto the null space of the first task v_1. The projector P_1 was defined in Equation (2.53) and yields the secondary task v_2, which does not disturb the primary task v_1. Therefore, the fusion of the two robotic tasks leads to the velocity v_p = v_1 + P_1 v_2, expressed at the frame F_cp, which includes the hierarchy of the two tasks.
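The null-space fusion described above can be sketched in a few lines of Python. This is only an illustration, not the thesis implementation: the pseudo-inverse-based projector, the primary-task Jacobian J1 and the numerical values are assumptions for the example, and may differ from the projector of Equation (2.53).

import numpy as np

def fuse_tasks(v1, v2, J1):
    # Primary velocity v1 plus the secondary velocity v2 projected onto the
    # null space of the primary-task Jacobian J1 (v_p = v1 + P1 v2).
    P1 = np.eye(J1.shape[1]) - np.linalg.pinv(J1) @ J1
    return v1 + P1 @ v2

# toy example: the force task constrains translation along z only
J1 = np.array([[0., 0., 1., 0., 0., 0.]])    # 1x6 primary-task Jacobian
v1 = np.array([0., 0., 0.002, 0., 0., 0.])   # force-control velocity
v2 = np.array([0.01, 0., 0.05, 0., 0., 0.])  # teleoperation velocity
print(fuse_tasks(v1, v2, J1))                # the z-component of v2 is removed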
This velocity is applied to the robot using Equations (2.7)-(2.9), which express it in the frame of the robot's end-effector. The control law presented in Equation (4.21) is always executed by the system. However, when the user switches to the impedance haptic mode in order to feel the force generated by the elastogram contained in the ROI, we deactivate the secondary task by setting v_2 = 0_{6×1}. This implies that the primary task is always running and the secondary task is paused so the elastogram can be estimated.

Experimental results

This section presents the results of the teleoperation and haptic feedback system previously described. First, we define the experimental setup as illustrated in Figure 4.14. The haptic device used for the experiments is the Virtuose 6D (Haption S.A.). The robot and the ultrasound systems are the same as those used in the experiments presented in the previous chapters. We use the 3D ultrasound probe in 2D mode and the abdominal phantom ABDFAN US-1B. For more details about the equipment, please refer to Section 2.1. The implementation of the system was coded in C++ and divided into five threads, as shown in Figure 4.15. The main thread hosts a graphical interface (coded with the Qt libraries).

Résumé

This thesis is set in the context of medical robotics and concerns the development of a robotic system to assist the ultrasound elastography process. The proposed solution consists in controlling, by force feedback and visual servoing, a manipulator arm actuating an ultrasound probe, so as to automate the palpation motion required to generate tissue elasticity images. The solution makes it possible to perform elastographic imaging of tissues subject to motion by means of a compensation task based on visual servoing. An innovative approach has also been proposed to provide the user, through a haptic device, with a force feedback reflecting the sensation of elasticity of the observed tissues. The experimental results of the three robotic approaches, obtained on tissue-mimicking phantoms, demonstrate the effectiveness of the proposed methods and open interesting perspectives for robot-assisted ultrasound elastography. Keywords: medical robotics, ultrasound elastography, visual servoing, haptics.

Abstract

This thesis concerns the development of a robotic control framework for quantitative ultrasound elastography. Ultrasound elastography is a technology that unveils elastic parameters of a tissue, which are commonly related to certain pathologies. This thesis proposes three novel robotic approaches to assist examiners with elastography. The first approach deals with the control of a robot actuating an ultrasound probe to perform the palpation motion required for ultrasound elastography. The elasticity of the tissue is used to design a servo control law that keeps a stiff tissue of interest in the field of view of the ultrasound probe. Additionally, the orientation of the probe is controlled by a human user to explore other tissue while elastography is performed. The second approach exploits deformable image registration of ultrasound images to estimate the tissue elasticity and to automatically compensate, by ultrasound visual servoing, a motion introduced into the tissue.
The third approach offers a methodology to feel the elasticity of the tissue by moving a virtual probe in the ultrasound image with a haptic device while the robot is performing the palpation motion. Experimental results of the three robotic approaches on phantoms with tissue-like properties offer an excellent perspective for robotic assistance in ultrasound elastography.
01772243
en
[ "spi.meca.biom", "spi.meca.msmeca", "spi.meca.solid", "spi.meca.stru" ]
2024/03/05 22:32:16
2016
https://hal.science/hal-01772243/file/Conf_Canadas_al_Modelling_collection_2016.pdf
E. Postek, F. Dubois, R. Mozul, P. Cañadas

MODELLING OF A COLLECTION OF NON-RIGID PARTICLES WITH SMOOTH DISCRETE ELEMENT METHOD

Introduction

Usually, models of the generation of cell colonies use the quasi-static discrete element approach to evaluate the contact forces between the cells [START_REF] Adra | Development of a three dimensional multiscale computational model of the human epidermis[END_REF]. These forces are necessary for the evaluation of mechanotransduction phenomena [START_REF] Postek | Parameter sensitivity of a monolayer tensegrity model of tissues[END_REF]. In contrast to this approach, we use compliant particles. The use of non-rigid particles changes the stress distribution in the particle assembly. Even intuitively, it is closer to reality.

Cell colony

The cell colony stands for a piece of tissue. We employ a non-rigid model of a single particle with stiffness equivalent to that of the cell. The calibration is done following the paper [START_REF] Mcgarry | A three-dimensional finite element model of an adherent eukaryotic cell[END_REF].

Multibody approach

We apply the program LMGC90 [START_REF] Renouf | A parallel version of the Non Smooth Contact Dynamics Algorithm applied to the simulation of granular media[END_REF], [START_REF] Radjai | Discrete-element Modelling of Granular Materials[END_REF], which makes it possible to model contact between a large number of compliant particles discretized with finite elements. The governing equations are written in the framework of the approach proposed by Moreau and Jean [START_REF] Renouf | A parallel version of the Non Smooth Contact Dynamics Algorithm applied to the simulation of granular media[END_REF]. The set of equations of motion, including the initial and boundary conditions, takes the form

M(\dot{q}_{i+1} - \dot{q}_i) = \int_{t_i}^{t_{i+1}} \big( F(q, \dot{q}, s) + P(s) \big)\, ds + p_{i+1}   (1)

where M is the mass matrix, q is the vector of generalized displacements, P(t) is the vector of external forces, F(q, \dot{q}, t) is the vector of internal forces including the inertia terms, and p_{i+1} is the vector of impulses resulting from contacts over the time step. The integration of the above system with the θ scheme leads to the equation

\tilde{M}^k \, \Delta\dot{q}^{\,k+1}_{i+1} = p^k_{free} + p^{k+1}_{i+1}   (2)

The effective mass matrix \tilde{M}^k reads

\tilde{M}^k = M + h^2 \theta^2 K^k   (3)

where h is the time increment, θ ∈ [0.5, 1] is the integration coefficient and K is the tangent stiffness. The θ coefficient is taken as 1.0, yielding the Newton-Raphson integration rule. The effective vector of forces free of contact is of the form

p^k_{free} = \tilde{M}^k \dot{q}^{\,k}_{i+1} + M(\dot{q}_i - \dot{q}^{\,k}_{i+1}) + h\big[ (1-\theta)(F_i + P_i) + \theta(F^k_{i+1} + P_{i+1}) \big]   (4)

Contact impulses are computed using the NSCD method implemented in the LMGC90 software platform. First, it performs contact detection between cells. Then the previous dynamics equations are expressed in terms of contact unknowns (gap or relative velocity, and contact impulse). Afterwards, a nonlinear Gauss-Seidel method computes the contact impulses. Finally, the resulting impulses on the cell nodes due to the contact impulses are added to the dynamics equation to compute the new velocities and positions. We use the Open-MPI parallel version of the program [START_REF] Postek | Parameter sensitivity of a monolayer tensegrity model of tissues[END_REF].

Concluding remarks

With the presented scheme we calculate contact forces between the particles.
The scheme, based on the coupling of the LMGC90 software with the cell modelling, has been joined to an agent model able to take into account the effect of the stress evolution in the growing tissue [START_REF] Postek | Concept of an Agent-stress Model of a Tissue[END_REF].

Fig. 1: Single cell (a); group of cells (b). The group of cell avatars in Fig. 1(b) has been generated employing the agent model in the framework of the FLAME platform (Flexible Large-scale Agent Modelling Environment) [1]. It consists of the stem cells, the TA cells and the committed cells. The avatars are replaced with the equivalent-stiffness cells, Fig. 1(a).
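To make the θ-scheme of Equations (2)-(4) concrete, the following Python fragment is a minimal sketch (ours, not part of LMGC90): it advances one time step under the assumption of a linear internal force F(q) = -Kq and ignores the contact impulse, which in the full NSCD scheme would be supplied by the nonlinear Gauss-Seidel solver.

import numpy as np

def theta_step(M, K, q_i, v_i, P_i, P_ip1, h, theta=1.0):
    # Effective mass matrix, Eq. (3)
    M_eff = M + (h * theta) ** 2 * K
    # Contact-free right-hand side, cf. Eq. (4), for the linear force F(q) = -K q
    rhs = (M @ v_i
           + h * ((1 - theta) * P_i + theta * P_ip1)
           - h * (K @ q_i)
           - h ** 2 * theta * (1 - theta) * (K @ v_i))
    v_ip1 = np.linalg.solve(M_eff, rhs)      # Eq. (2) with zero contact impulse
    q_ip1 = q_i + h * ((1 - theta) * v_i + theta * v_ip1)
    return q_ip1, v_ip1

# toy check: one particle, one degree of freedom
M = np.array([[1.0]]); K = np.array([[1.0]])
q, v = np.array([1.0]), np.array([0.0])
for _ in range(100):
    q, v = theta_step(M, K, q, v, np.array([0.0]), np.array([0.0]), h=0.01)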
01476561
en
[ "math.math-ho" ]
2024/03/05 22:32:16
2017
https://hal.science/hal-01476561/file/1508.00001.pdf
Carlo Rovelli Michelangelo's Stone: an Argument against Platonism in Mathematics If there is a 'platonic world' M of mathematical facts, what does M contain precisely? I observe that if M is too large, it is uninteresting, because the value is in the selection, not in the totality; if it is smaller and interesting, it is not independent of us. Both alternatives challenge mathematical platonism. I suggest that the universality of our mathematics may be a prejudice hiding its contingency, and illustrate contingent aspects of classical geometry, arithmetic and linear algebra. Mathematical platonism [START_REF] Plato | The Republic[END_REF] is the view that mathematical reality exists by itself, independently from our own intellectual activities. 1 Many top level mathematicians hold this view dear, and express the sentiment that they do not "construct" new mathematics, but rather "discover" structures that already exist: real entities in a platonic mathematical world. 2 Platonism is alternative to other views on the foundations of mathematics, such as reductionism, formalism, intuitionism, or the Aristotelian idea that mathematical entities exist, but they are embodied in the material world [START_REF] Gillies | An Aristotelian Approach to Mathematical Ontology[END_REF]. Here, I present a simple argument against platonism in mathematics, which I have not found in the literature. The argument is based on posing a question. Let us assume that a platonic world of mathematical entities and mathematical truths does indeed exist, and is independent from us. Let us call this world M, for Mathematics. The question I consider is: what is it reasonable to expect M to contain? I argue that even a superficial investigation of this question reduces the idea of the existence of M to something trivial, or contradictory. In particular, I argue that the attempt to restrict M to "natural and universal" structures is illusory: it is the perspectival error of mistaking ourselves as universal ("English is clearly the most natural of languages".) In particular, I point out contingent aspects of the two traditional areas of classical mathematics: geometry and arithmetic, and of a key tool of modern science: linear algebra. Michelangelo's stone and Borges' library Say we take a Platonic stance about math: in some appropriate sense, the mathematical world M exists. The expressions "to exist", "to be real" and similar can have a variety of meanings and usages, and this is a big part of the issue, if not the main one. But for the sake of the present argument I do not need to define them-nor, for that matter, platonism-precisely. The argument remains valid, I think, under most reasonable definitions and usages. So, what does M include? Certainly M includes all the beautiful mathematical theories that mathematicians have discovered so far. This is its primary purpose. It includes Pythagoras' theorem, the classification of the Lie groups, the properties of prime numbers and so on. It includes the real numbers, and Cantor's proof that they are "more" than the integers, in the sense defined by Cantor, and both possible extensions of arithmetic: the one where there are infinities larger than the integers and smaller than the reals, and the one where there aren't. It contains game theory, and topos theory. It contains lots of stuff. But M cannot include only what mathematicians have discovered so far, because the point of platonism is precisely that what they will discover tomorrow already exists in the platonic world today. 
It would not be kind towards future mathematicians to expect that M contains just a bit more of what we have already done. Obviously whoever takes a Platonic view must assume that in the platonic world of math there is much more than what has been already discovered. How much? Certainly M contains, say, all true (or at least all demonstrable) theorems about integer numbers. All possible true theorems about Euclidean geometry, including all those we have not yet discovered. But there should be more than that, of course, as there is far more math than that. We can get a grasp on the content of M from the axiomatic formulation of mathematics: given various consistent sets A 1 , A 2 , ... of axioms, M will include all true theorems following from each A n , all waiting to be discovered by us. We can list many sets of interesting axioms, and imagine the platonic world M to be the ensemble of theorems these imply, all nicely ordered in families, according to the corresponding set of axioms. But this is still insufficient, because a good mathematician tomorrow could come out with a new set of axioms arXiv:1508.00001v2 [math.HO] 6 Sep 2015 and find new great mathematics, like the people who discovered non-commutative geometry, or those who defined C * algebras did. We are getting close to what the platonic world must contain. Let us assume that a sufficiently universal language exist, say based on logic. Then the platonic world M is the ensemble of all theorems that follow from all (non contradictory) choices of axioms. This is a good picture of what M could be. We have found the content of the platonic world of math. But something starts to be disturbing. The resulting M is big, extremely big: it contains too much junk. The large majority of coherent sets of axioms are totally irrelevant. Before discussing the problem with precision, a couple of similes can help to understand what is going on. (i) During the Italian Renaissance, Michelangelo Buonarroti, one of the greatest artists of all times, said that a good sculptor does not create a statue: he simply "takes it out" from the block of stone where the statue already lay hidden. A statue is already there, in its block of stone. The artists must simply expose it, carving away the redundant stone [START_REF] Neret | [END_REF]. The artist does not "create" the statue: he "finds" it. In a sense, this is true: a statue is just a subset of the grains of stone forming the original block. It suffices to take away the other grains, and the statue is taken out. But the hard part of the game is of course to find out which subset of grains of stone to leave there, and this, unfortunately, is not written on the stone. It is selection that matters. A block of stone already contained Michelangelo's Moses, but it also contained virtually anything else -that is, all possible forms. The art of sculpture is to be able to determine, which, among this virtual infinity of forms, will talk to the rest of us as art does. Michelangelo's statement is evocative, maybe powerful, and perhaps speaks to his psychology and his greatness. But it does not say much about the art of the sculptor. The fact of including all possible statues does not confer to the stone the immense artistic value of all the possible beautiful statues it might contain, because the point of the art is the choice, not the collection. By itself, the stone is dull. (ii) The same story can be told about books. 
Borges's famous library contained all possible books: all possible combinations of alphabet letters [START_REF] Borges | The Library of Babel[END_REF]. Assuming a book is shorter than, say, a million letters, there are then more or less 30 10 6 possible books, which is not even such a big number for a mathematician. So, a writer does not really create a book: she simply "finds" it, in the platonic library of all books. A particularly nice combinations of letters makes up, say, Moby-Dick. Moby-Dick already existed in the platonic space of books: Melville didn't create Moby-Dick, he just discovered Moby-Dick... Like Michelangelo's stone, Borges's library is void of any interest: it has no content, because the value is in the choice, not in the totality of the alternatives. The Platonic world of mathematics M defined above is similar to Michelangelo's block of stone, or Borges library, or Hegel's "night in which all cows are black" 3 : a mere featureless vastness, without value because the value is in the choice, not in the totality of the possibilities. Similarly, science can be said to be nothing else than the denotation of a subset of David Lewis's possible worlds [START_REF] Lewis | On the Plurality of Worlds[END_REF]: those respecting certain laws we have found. But Lewis's totality of all possible world is not science, because the value of science is in the restriction, not in the totality. Mathematics may be called an "ensemble of tautologies", in the sense of Wittgenstein. But it is not the ensemble of all tautologies: these are too many and their ensemble is uninteresting and irrelevant. Mathematics is about recognizing the "interesting" ones. Mathematics may be the investigation of structures. But it is not the list of all possible structures: these are too many and their ensemble is uninteresting. If the world of mathematics was identified with the platonic world M defined above, we could program a computer to slowly unravel it entirely, by listing all possible axioms and systematically applying all possible transformation rules to derive all possible theorems. But we do not even think of doing so. Why? Because what we call mathematics is an infinitesimal subset of the huge world M defined above: it is the tiny subset which is of interest for us. Mathematics is about studying the "interesting" structures. So, the problem becomes: what does "interesting" mean? Interest is in the eye of the interested Can we restrict the definition of M to the interesting subset? Of course we can, but interest is in the eyes of a subject. A statue is a subset of the stone which is worthwhile, for us. A particular combination of letters is a good book, for us. What is it that makes certain set of axioms defining certain mathematical objects, and certain theorems, interesting? There are different possible answer to this question, but they all make explicit or implicit reference to features of ourselves, our mind, our contingent environment, or the physical structure our world happens to have. This fact is pretty evident as far as art or literature are concerned. Does it hold for mathematics as well? Hasn't mathematics precisely that universality feel that is at the root of platonism? 3 Hegel utilized this Yiddish saying to ridicule Schelling's notion of Absolute, meaning that -like mathematical platonism-this included too much and was too undifferentiated, to be of any relevance [START_REF] Hegel | Phenomenology of Spirit[END_REF]. 
Shouldn't we expect -as often claimed-any other intelligent entity of the cosmos to come out with the same "interesting" mathematics as us? The question is crucial for mathematical platonism, because platonism is the thesis that mathematical entities and truths form a world which exists independently from us. If what we call mathematics ends up depending heavily on ourselves or the specific features of our world, platonism looses its meaning. I present below some considerations that indicate that the claimed universality of mathematics is a parochial prejudice. These are based on the concrete examples provided by the chapters of mathematics that have most commonly been indicated as universal. The geometry of a sphere Euclidean geometry has been among the first pieces of mathematics to be formalized. Euclid's celebrated text, the "Elements" [START_REF] Euclid | [END_REF], where Euclidean geometry is beautifully developed, has been the ideal reference for all mathematical texts. Euclidean geometry describes space. It captures our intuition about the structure of space. It has applications to virtually all fields of science, technology and engineering. Pythagoras' theorem, which is at its core, is a familiar icon. It is difficult to imagine something having a more "universal" flavor than euclidean geometry. What could be contingent or accidental about it? What part do we humans have in singling it out? Wouldn't any intelligent entity developing anywhere in the universe come out with this same mathematics? I maintain the answer is negative. To see why, let me start by recalling that, as is well known, Euclidean geometry was developed by Greek mathematicians mostly living in Egypt during the Hellenistic period, building on Egyptians techniques for measuring the land. These were important because of the Nile's floods cancelling borders between private land parcels. The very name of the game, "geometry", means "measurement of the land" in Greek. Two-dimensional Euclidean geometry describes, in particular, the mathematical structure formed by the land. But: does it? Well, the Earth is more a sphere than a plane. Its surface is better described by the geometry of a sphere, than by two-dimensional (2d) Euclidean geometry. It is an accidental fact that Egypt happens to be small compared to the size of the Earth. The radius of the Earth is around 6,000 Kilometers. The size of Egypt is of the order of 1,000 Kilometers. Thus, the scale of the Earth is more than 6 times larger than the scale of Egypt. Disregarding the sphericity of the Earth is an approximation, which is viable when dealing with the geometry of Egypt and becomes better and better as the region considered is smaller. As a practical matter, 2d Euclidean geometry is useful, but it is a decent approximation that works only because of the smallness of the size of Egypt. Intelligent beings living on a planet just a bit smaller than ours [START_REF] De Saint-Exupéry | [END_REF], would have easily detected the effects of the curvature of the planet's surface. They would not have developed 2d Euclidean geometry. One may object that this is true for 2d, but not for 3d geometry. The geometry of the surface of a sphere can after all be obtained from Euclidean 3d geometry. But the objection has no teeth: we have learned with general relativity that spacetime is curved and Euclidean geometry is just an approximation also as far as 3d physical space is concerned. 
Intelligent beings living on a region of stronger spacetime curvature would have no reason to start mathematics from Euclidean geometry. 4 A more substantial objection is that 2d euclidean geometry is simpler and more "natural" than curved geometry. It is intuitively grasped by our intellect, and mathematics describes this intuition about space. Its simplicity and intuitive aspect are the reasons for its universal nature. Euclidean geometry is therefore universal in this sense. I show below that this objection is equally ill founded: the geometry of a sphere is definitely simpler and more elegant than the geometry of a plane. Indeed, there is a branch of mathematics, called 2d "spherical" geometry, which describes directly the (intrinsic) geometry of a sphere. This is the mathematics that the Greeks would have developed had the Earth been sufficiently small to detect the effects of the Earth's surface curvature on the Egyptians fields. Perhaps quite surprisingly for many, spherical geometry is far simpler and "more elegant" than Euclidean geometry. I illustrate this statement with a few examples below, without, of course, going into a full exposition of spherical geometry (see for instance [START_REF] Todhunter | Spherical trigonometry[END_REF][START_REF] Harris | Spherical Geometry[END_REF]). Consider the theory of triangles: the familiar part of geometry we study early at school. In Euclidean geometry, a triangle has three sides, with lengths, a, b and c, and three angles α, β and γ (Figure 1). We measure angles with pure numbers, so, α, β and γ are taken 4 It is well known that Kant was mistaken in his deduction that the Euclidean geometry of physical space is true a priori [START_REF] Kant | The Critique of Pure Reason[END_REF]. But even Wittgenstein bordered on mistake in dangerously appearing to assume a unique possible set of laws of geometry for anything spatial: "We could present spatially an atomic fact which contradicted the laws of physics, but not one which contradicted the laws of geometry". Tractatus, Proposition 3.0321 [START_REF] Wittgenstein | Tractatus Logico-Philosphicus[END_REF]. to be numbers with value between 0 and π. Measuring with numbers the length of the sides is a more complicated business. Since there is no standard of length in Euclidean geometry, we must either resort to talk only about ratios between lengths (as the ancients preferred), or to choose arbitrarily a segment once and for all, use it as our "unit of measure", and characterize the length of each side of the triangle by the number which gives its ratio to the unit (as the moderns prefer). All this simplifies dramatically in spherical geometry: here there is a preferred scale, the length of the equator. The length of an arc (the shortest path between two points) is naturally measured by the ratio to it. Equivalently (if we immerge the sphere in space) by the angle subtended to the arc. Therefore the length of the side of a triangle (a, b, c) is an angle as well. See Figure 2. Compare then the theories of triangles in the two geometries (Figure 1): Euclidean geometry: (i) Two triangles are equal if they have equal sides, or if one side and two angles are equal. (ii) The area of the triangle is A = 1 4 √ 2a 2 b 2 + 2a 2 c 2 + 2b 2 c 2 -a 4 -b 4 -c 4 . (iii) For right triangles: a 2 + b 2 = c 2 . Spherical geometry: (i) Triangles with same sides, or same angles, are equal. (ii) The area of a triangle is A = α + β + γ -π. (iii) For right triangles: cos c = cos a cos b. 
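The relation between the two right-triangle formulas can be checked directly. For arcs much smaller than the equator,

\[
\cos c=\cos a\,\cos b,\qquad \cos\theta\simeq 1-\tfrac{1}{2}\theta^{2}
\;\;\Rightarrow\;\;
1-\tfrac{1}{2}c^{2}\;\simeq\;\bigl(1-\tfrac{1}{2}a^{2}\bigr)\bigl(1-\tfrac{1}{2}b^{2}\bigr)\;\simeq\;1-\tfrac{1}{2}\bigl(a^{2}+b^{2}\bigr),
\]

so that c² ≃ a² + b² at first order: the Euclidean relation appears as the small-triangle limit of the spherical one, as discussed below.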
Even a cursory look at these results reveals the greater simplicity of spherical geometry. Indeed, spherical geometry has a far more "universal" flavor than Euclidean geometry. Euclidean geometry can be be obtained from it as a limiting case: it is the geometry of figures that are much smaller than the curvature radius. In this case a, b and c are all much smaller than π. Their cosine is well approximated by cos θ ∼ 1 -1 2 θ 2 and the last formula reduces to Pythagoras' theorem in the first approximation. Far from being a structural property of the very nature of space, Pythagoras' theorem is only a first order approximation, valid in a limit case of a much simpler and cleaner mathematics: 2d spherical geometry. There are many other beautiful and natural results in spherical geometry, which I shall not report here. They extend to the 3d case: the intrinsic geometry of a 3sphere. A 3-sphere is a far more reasonable structure than the infinite Euclidean space: it is the finite homogeneous three-dimensional metric space without boundaries. The geometry may well be the large scale geometry of our universe [START_REF] Einstein | Cosmological Considerations in the General Theory of Relativity[END_REF]. 5 It shape is counterintuitive for many of us, schooled in Euclid. But it was not so for Dante Alighieri, who did not study Euclid at school: the topology of the universe he describes in his poem is precisely that of a 3-sphere [START_REF] Peterson | Dante and the 3-sphere[END_REF]. See Figure 3. What is "intuitive" changes with history. These considerations indicate that the reason Euclidean geometry has played such a major role in the foundation of mathematics is not because of its universality and independence from our contingent situation. It is the opposite: Euclidean geometry is of value to us just because it describes-not even very wellthe accidental properties of the region we happen to inhabit. Inhabitants of a different region of the universe-a smaller planet, or a region with high space curvaturewould likely fail to consider euclidean geometry interesting mathematics. For them, Euclidean geometry could 5 Cosmological measurements indicate that spacetime is curved, but have so far failed to detected a large scale cosmological curvature of space. This of course does not imply that the universe is flat [START_REF] Ellis | Relativistic Cosmology[END_REF], for the same reason for which the failure to detect curvature on the fields of Egypt did not imply that that the Earth was flat. It only shows that the universe is big. Current measurements indicate that the radius of the Universe should be at least ten time larger than the portion of the Universe we see [START_REF] Hinshaw | Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results[END_REF]. A ratio, by the way, quite similar to the Egyptian case. be a uninteresting and cumbersome limiting case. Linear algebra Every physicist, mathematician or engineer learns linear algebra and uses it heavily. Linear algebra, namely the algebra of vectors, matrices, linear transformations and so on, is the algebra of linear spaces, and since virtually everything is linear in a first approximation, linear algebra is ubiquitous. It is difficult to resist its simplicity, beauty and generality when studying it, usually in the early years at college. Furthermore, today, we find linear algebra at the very foundations of physics, because it is the language of quantum theory. 
In the landmark paper that originated quantum theory [START_REF] Heisenberg | Uber quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen[END_REF], Werner Heisenberg understood that physical quantities are better described by matrices and used the multiplication of matrices (core of linear algebra) to successfully compute the properties of quantum particles. Shortly later, Paul Dirac wrote his masterpiece book [START_REF] Dirac | Principles of Quantum Mechanics[END_REF], where quantum mechanics is entirely expressed in terms of linear algebra (linear operators, eigenvalues, eigenvectors...). It would therefore seem natural to formulate the hypothesis that any minimally advanced civilization would discover linear algebra very early and start using it heavily. But it is not the case. In fact, we have a remarkable counterexample: a civilization that has developed for millennia without developing linear algebra-ours. When Heisenberg wrote his famous paper he did not know algebra. He had no idea of what a matrix is, and had never previously learned the algorithm for multiplying matrices. He made it up in his effort to understand a puzzling aspect of the physical world. This is pretty evident from his paper. Dirac, in his book, is basically inventing linear algebra in the highly nonrigorous manner of a physicist. After having constructed it and tested its power to describe our world, linear algebra appears natural to us. But it didn't appear so for generations of previous mathematicians. Which tiny piece of M turns out to be interesting for us, which parts turns out to be "mathematics" is far from obvious and universal. It is largely contingent. Arithmetic and identity The last example I discuss is given by the natural numbers (1, 2, 3, ...), which form the basis of arithmetic, the other half of classical mathematics. Natural numbers seem very natural indeed. There is evidence that the brain is pre-wired to be able to count, and do elementary arithmetic with small numbers [START_REF] Vallortigara | Cervelli che contano[END_REF]. Why so? Because our world appears to be naturally organized in terms of things that can be counted. But is this a feature of reality at large, of any possible world, or is it just a special feature of this little corner of the universe we inhabit and perceive? I suspect the second answer to be the right one. The notion of individual "object" is notoriously slippery, and objects need to have rare and peculiar properties in order to be countable. How many clouds are there in the sky? How many mountains in the Alps? How many coves along the coast of England? How many waves in a lake? How many clods in my garden? These are all very ill-defined questions. To make the point, imagine some form of intelligence evolved on Jupiter, or a planet similar to Jupiter. Jupiter is fluid, not solid. This does not prevent it from developing complex structures: fluids develop complex structures, as their chemical composition, state of motion, and so on, can change continuously from point to point and from time to time, and their dynamics is governed by rich nonlinear equations. Furthermore, they interact with magnetic and electric fields, which vary continuously in space and time as well. Imagine that in such a huge (Jupiter is much larger than Earth's) Jovian environment, complex structures develop to the point to be conscious and to be able to do some math. 
After all, it has happened on Earth, so it shouldn't be so improbable for something like this to happen on an entirely fluid planet as well. Would this math include counting, that is, arithmetic? Why should it? There is nothing to count in a completely fluid environment. (Let's also say that our Jovian planet's atmosphere is so thick that one cannot see and count the stars, and that the rotation and revolution periods are equal, as for our Moon, and there are neither days nor years.) The math needed by this fluid intelligence would presumably include some sort of geometry, real numbers, field theory, differential equations..., all this could develop using only geometry, without ever considering this funny operation which is enumerating individual things one by one. The notion of "one thing", or "one object", the notions themselves of unit and identity, are useful for us living in an environment where there happen to be stones, gazelles, trees, and friends that can be counted. The fluid intelligence diffused over the Jupiter-like planet, could have developed mathematics without ever thinking about natural numbers. These would not be of interest for her. I may risk being more speculative here. The development of the ability to count may be connected to the fact that life evolved on Earth in a peculiar form characterized by the existence of "individuals". There is no reason an intelligence capable to do math should take this form. In fact, the reason counting appears so natural to us may be that we are a species formed by interacting individuals, each realizing a notion of identity, or unit. What is clearly made by units is a group of interacting primates, not the world. The archetypical identities are my friends in the group. 6Modern physics is intriguingly ambiguous about countable entities. On the one hand, a major discovery of the XX century has been that at the elementary level nature is entirely described by field theory. Fields vary continuously in space and time. There is little to count, in the field picture of the world. On the other hand, quantum mechanics has injected a robust dose of discreteness in fundamental physics: because of quantum theory, fields have particle-like properties and particles are quintessentially countable objects. In any introductory quantum field theory course, students meet an important operator, the number operator N , whose eigenvalues are the natural numbers and whose physical interpretation is counting particles [START_REF] Itzykson | Quantum Field Theory[END_REF]. Perhaps our fluid Jovian intelligence would finally get to develop arithmetic when figuring out quantum field theory... But notice that what moderns physics says about what is countable in the world has no bearing on the universality of mathematics: at most, it points out which parts of M are interesting because they happen to describe this world. Conclusion In the light of these consideration, let us look back at the development of our own mathematics. Why has mathematics developed at first, and for such a long time, along two parallel lines: geometry and arithmetic? The answer begins to clarify: because these two branches of mathematics are of value for creatures like us, who instinctively count friends, enemies and sheep, and who need to measure, approximately, a nearly flat earth in a nearly flat region of physical space. In other words, this mathematics is of interest to us because it reflects very contingent interests of ours. 
Out of the immense vastness of M, the dull platonic space of all possible structures, we have carved out, like Michelangelo, a couple of shapes that speak to us. There is no reason to assume that the mathematics that has developed later escapes this contingency. To the contrary, the continuous re-foundations and the constant re-organization of the global structure of mathematics testify to its non-systematic and non-universal global structure. Geometry, arithmetic, algebra, analysis, set theory, logic, category theory, and, recently, topos theory [START_REF] Caramello | The unification of Mathematics via Topos Theory[END_REF] have all been considered for playing a foundational role in mathematics. Far from being stable and universal, our mathematics is a fluttering butterfly, which follows the fancies of inconstant creatures. Its theorems are solid, of course; but selecting what represents an interesting theorem is a highly subjective matter. It is the carving out, the selection, out of a dull and undifferentiated M, of a subset which is useful to us, interesting for us, beautiful and simple in our eyes; it is, in other words, something strictly related to what we are that makes up what we call mathematics. The idea that the mathematics that we find valuable forms a Platonic world fully independent from us is like the idea of an Entity that created the heavens and the earth, and happens to very much resemble my grandfather.

Thanks to Hal Haggard and Andrea Tchertkoff for a careful reading of the manuscript and comments.

FIG. 1. Flat and spherical triangles.
FIG. 2. Two points on a sphere determine an arc, whose size is measured by the angle it subtends, or equivalently, intrinsically, by its ratio to an equator.
FIG. 3. Dante's universe: the Aristotelian spherical universe is surrounded by another similar spherical space, inhabited by God and Angel's spheres. The two spheres together form a three-sphere.

For an overview, see for instance [START_REF] Linnebo | Platonism in the Philosophy of Mathematics[END_REF]. Contemporary mathematicians that have articulated this view in writing include Roger Penrose [START_REF] Penrose | The Road to Reality : A Complete Guide to the Laws of the Universe[END_REF] and Alain Connes [START_REF] Connes | A View of Mathematics[END_REF]. This might be why ancient humans attributed human-like mental life to animals, trees and stones: they were perhaps utilizing mental circuits developed to deal with one another, within the primate group, extending them to deal also with animals, trees and stones.
01476783
en
[ "phys.grqc" ]
2024/03/05 22:32:16
2019
https://hal.science/hal-01476783/file/1603.01561.pdf
Valerio Astuti email: [email protected] Marios Christodoulou email: [email protected] Carlo Rovelli email: [email protected] Volume entropy Building on a technical result by Brunnemann and Rideout on the spectrum of the Volume operator in Loop Quantum Gravity, we show that the dimension of the space of the quadrivalent states -with finite-volume individual nodes-describing a region with total volume smaller than V , has finite dimension, bounded by V log V . This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume. I. Introduction Thermodynamical aspects of the dynamics of spacetime have first been pointed out by Bekenstein's introduction of an entropy associated to the horizon of a black hole [START_REF] Bekenstein | Black Holes and Entropy[END_REF]. This led to the formulation of the "laws of black holes thermodynamics" by Bardeen, Carter, and Hawking [START_REF] James M Bardeen | The Four laws of black hole mechanics[END_REF] and to Hawking's discovery of black role radiance, which has reinforced the geometry/thermodynamics analogy [START_REF] Hawking | Particle creation by black holes[END_REF]. The connection between Area and Entropy suggests that it may be useful to treat aspects of space-time statistically at scales large compared to the Planck length [START_REF] Jacobson | Thermodynamics of space-time: The Einstein equation of state[END_REF], whether or not we expect the relevant microscopic elementary degrees of freedom to be simply the quanta of the gravitational field [START_REF] Chirco | Spacetime thermodynamics without hidden degrees of freedom[END_REF], or else. Black hole entropy, in particular, can be interpreted as cross-horizon entanglement entropy (see [START_REF] Bianchi | Horizon entanglement entropy and universality of the graviton coupling[END_REF] for recent results reinforcing this interpretation, and references therein), or -most likely equivalently-as the von Neumann entropy of the statistical state representing a macrostate with given horizon Area. In the context of Loop Quantum Gravity (LQG), this was considered in [START_REF] Rovelli | Black Hole Entropy from Loop Quantum Gravity[END_REF] and later extensively analyzed; for a recent review and full references see [START_REF] Barbero | Quantum Geometry and Black Holes[END_REF][START_REF] Ashtekar | The Issue of Information Loss: Current Status[END_REF]. All such developments are based on the assignment of thermodynamic properties to spacetime surfaces. This association has motivated the holographic hypothesis: the conjecture that the degrees of freedom of a region of space are somehow encoded in its boundary. In this paper, instead, we study statistical properties associated to spacetime regions. We show that it is possible to define a Von Neumann entropy for the quantum gravitational field, associated to the Volume of a region, and that this entropy is (under suitable conditions) finite. The existence of an entropy associated to bulk degrees of freedom of a spin network was already considered in [START_REF] Krzysztof | Eigenvalues of the volume operator in loop quantum gravity[END_REF]. To this aim, we prove a finiteness result on the num-ber of quantum states of gravity describing a region of finite volume. More precisely, we work in the context of LQG, and we prove that the dimension of the space of diffeomorphism invariant quadrivalent states without zero-volume nodes, describing a region of total volume smaller than V is finite. 
We give explicitly the upper bound of the dimension as a function of V . The proof is based on a result on the spectrum of the LQG Volume operator proven by Brunnemann and Rideout [START_REF] Brunnemann | Properties of the volume operator in loop quantum gravity. I. Results[END_REF][START_REF] Brunneman | Properties of the volume operator in loop quantum gravity. II. Detailed presentation[END_REF]. Using this, we define the Von Neumann entropy of a quantum state of the gravitational field associated to Volume measurements. II. Counting spin networks Consider the measurement of the volume of a 3d spacelike region Σ. The physical system measured is the gravitational field. In the classical theory, this is given by the metric q on Σ: the volume is V = Σ √ det q d 3 x. In the quantum context, using the LQG formalism, the geometry of Σ is described by a state in the kinematical Hilbert space H diff . The volume measurement of Σ are described by a volume operator V on this state space. We refer to [START_REF] Rovelli | Covariant Loop Quantum Gravity[END_REF][START_REF] Rovelli | Quantum Gravity[END_REF] for details on basic LQG results and notation. We restrict H diff to four-valent graphs Γ where the nodes n have non-vanishing (unoriented) volume v n . The spin network states |Γ, j l , v n ∈ H diff , where j l is the link quantum number or spin, form a countable, orthonormal basis of H diff . (We disregard here eventual additional quantum numbers such as the orientation, that have no bearing on our result.) The intertwiner basis at each node is chosen so that the local volume operator Vn , acting on a single node, is diagonal and is labelled by the eigenvalues v n , of the node volume operator Vn associated to the node n. Vn |Γ, j l , v n = v n |Γ, j l , v n (1) The states |Γ, j l , v n are also eigenstates of the total volume operator V = N n=1 Vn , where N is the number of nodes in Γ, with eigenvalue v = N n=1 v n , (2) the sum of the node volume eigenvalues v n . We seek a bound on the dimension of the subspace H V spanned by the states |Γ, j l , v n such that v ≤ V . That is, we want to count the spin-networks with volume less than V . We do this by bounding the number N Γ of four valent graphs in H V , the number N {j l } of possible spin assignments, and the number of the volume quantum numbers assignments N {vn} on each such graph. Clearly dim H V ≤ N Γ N {j l } N {vn} . (3) Crucial to this bound is the analytical result on the existence of a volume gap in four-valent spin networks found in [START_REF] Brunnemann | Properties of the volume operator in loop quantum gravity. I. Results[END_REF][START_REF] Brunneman | Properties of the volume operator in loop quantum gravity. II. Detailed presentation[END_REF]. The result is the following. In a node bounded by four links with maximum spin j max all nonvanishing volume eigenvalues are larger than v gap ≥ 1 4 √ 2 ℓ 3 P γ 3 2 j max (4) Where ℓ P is the Planck constant and γ the Immirzi parameter. Numerical evidence for equation ( 4) was first given in [START_REF] Brunnemann | Simplification of the spectral analysis of the volume operator in loop quantum gravity[END_REF] and a compatible result was estimated in [START_REF] Bianchi | Discreteness of the volume of space from Bohr-Sommerfeld quantization[END_REF]. 
Since the minimum non-vanishing spin is j = 1 2 , this implies that v gap ≥ 1 8 ℓ 3 P γ 3 2 ≡ v o (5) From existence of the volume gap, it follows that there is a maximum value of N Γ , because there is a maximum number of nodes for graphs in H V , as every node carries a minimum volume v o . Therefore a region of volume equal or smaller than V contains at most n = V v o (6) nodes. Equation ( 4) bounds also the number of allowed area quantum numbers, because too large a j max would force too large a node volume. Therefore N {j l } is also finite. Finally, since the dimension of the space of the intertwiners at each node is finite and bounded by the value of spins, it follows that also the number N {vn} of individual volume quantum numbers is bounded. Then (3) shows immediately that the dimension of H V is finite. Let us bound it explicitely. We start by the number of graphs. The number of nodes must be smaller than n, given in [START_REF] Bianchi | Horizon entanglement entropy and universality of the graviton coupling[END_REF]. The number N Γ of 4-valent graphs with n nodes is bounded by N Γ ≤ n 4n (7) because each node can be connected to each other n n four times (n n ) 4 . Equation ( 4) bounds the spins. Since we must have V ≥ v gap , we must also have j ≤ j max ≤ 32 V 2 ℓ 6 P γ 3 = 1 2 n 2 (8) In a graph with n nodes there are at most 4n links (the worst case being all boundary links), and therefore there are at most (2j max + 1) 4n spin assignments, or, in the large j limit, (2j max ) 4n . That is N {j l } ≤ (2j max ) 4n ≤ n 8n (9) Finally, the dimension of the intertwiner space at each node is bounded by the areas associated to that node: dim K j1,j2,j3,j4 = = dim Inv SU(2) (H j1 ⊗ H j2 ⊗ H j3 ⊗ H j4 ) = min (j 1 + j 2 , j 3 + j 4 ) -max ((j 1 -j 2 ), (j 3 -j 4 )) + 1 ≤ 2 max(j l∈n ) + 1 ≤ 4 max(j l∈n ) with the last step following from max(j l∈n ) ≥ 1/2. Thus on a graph with n nodes, the maximum number of combination of eigenvalues is limited by: N {vn} ≤ (4j max ) n = 2 n n 2n (10) Combining equations ( 3), ( 7), ( 9) and ( 10), we have an explicit bound on the dimension of the space of states with volume less than V = nv o : dim H V ≤ (cn) 14n ( 11 ) where c is a number. For large n we can write S V ≡ log dim H V ≤ 14 n log n (12) which is the entropy associated to Hilbert space. Explicitly S V ≤ 14 V v o log V v o ∼ V log V. (13) In the large volume limit, when the eigenvalues become increasingly dense, this corresponds to a density of states ν(V ) ≡ d(dim H V )/dV similarly bounded ν(V ) < 14 [log(n) + C] (cn) 14n . ( 14 ) III. Von Neumann proper volume entropy In the previous section, we have observed that the dimension of the space of (with four-valent, finite-volume nodes) quantum states with total volume less than V is finite. This results implies that there is a finite von Neumann volume entropy associated to statistical states describing to volume measurements. The simplest possibility is to consider the microcanonical ensemble describing the volume measurement of a region of space. That is, we take Volume to be a macroscopic (or thermodynamic, or"coarse grained") variable, and we write the corresponding statistical microstate that maximizes entropy. If the measured volume is in the interval I V = [V -δV, V ], with small δV , then the corresponding micro-canonical state is simply ρ = P V,δV dim H V . ( 15 ) where P V,δV is the projector on H V,δV ≡ Span{|Γ, j l , v n > : v ∈ I V }. ( 16 ) namely the span of the eigenspaces of eigenvalues of the volume that are in I V . 
Explicitly, the projector can be written in the form

P_{V,δV} ≡ \sum_{v \in I_V} |Γ, j_l, v_n⟩⟨Γ, j_l, v_n|   (17)

The von Neumann entropy of (15) is

S = -Tr[ρ log ρ] = log dim H_V < S_V ∼ V log V.   (18)

It is interesting to consider also a more generic state where ρ ∼ p(V), for an arbitrary distribution p(V) of probabilities of measuring a given volume eigenstate with volume V. For this state, the probability distribution of finding the value V in a volume measurement is

P(V) = ν(V) p(V)   (19)

and the entropy can be written as the sum of two terms

S = -\int dV\, ν(V)\, p(V) \log p(V) = S_P + S_{Volume}   (20)

where the first,

S_P = -\int dV\, P(V) \log P(V),   (21)

is just the entropy due to the spread in the outcomes of volume measurements, while the second,

S_{Volume} ≡ S - S_P = \int dV\, P(V) \log ν(V),   (22)

can be seen as a proper volume entropy. The bound on ν(V) found in the previous Section, which implies that log ν(V) grows more slowly than V², shows that this proper volume entropy is finite for any distribution P(V) whose variance is finite. S_{Volume} can be viewed as the irreducible entropy associated to any volume measurement.

IV. Lower bound

Let us now bound the dimension of H_V from below. The crucial step for this is to notice the existence of a maximum δV in the spacing between the eigenvalues of the operator V̂_n. For instance, if we take a node between two large spins j and two 1/2 spins, the volume eigenvalues have decreasing spacing, with maximum spacing for the lowest eigenvalues, of the order of v_o. Disregarding irrelevant small numerical factors, let us take v_o as the maximal spacing. Given a volume V, let, as before, n = V/v_o and consider spin networks with total volume in the interval I_n = [(n-1)v_o, n v_o]. Let N_m be the number of spin networks with m nodes that have the total volume v in the interval I_n. For m = 1, there is at least one such spin network, because of the bound on the spacing. For m = 2, the volume v must be split between the two nodes: v = v_1 + v_2. This can be done in at least n-1 manners, with v_1 ∈ I_p and v_2 ∈ I_{n-p} and p running from 1 to n-1. This possibility is guaranteed again by the existence of the maximal spacing. In general, for m nodes, there are

N_{n,m} = \binom{n-1}{m-1}   (23)

different ways of splitting the total volume among the nodes. This is the number of compositions of n into m parts. Finally, the number m of nodes can vary between 1 and the maximum n, giving a total number of possible states larger than

N_n = \sum_{m=1}^{n} N_{n,m} = \sum_{m=1}^{n} \binom{n-1}{m-1} = 2^{n-1}.   (24)

From which it follows that

dim H_V ≥ 2^{n-1}.   (25)

Can all these states be realised by inequivalent spin networks, with suitable choices of the graph and the spins? To show that this is the case, it is sufficient to display at least one (however peculiar) example of spin network for each sequence of v_n. But given an arbitrary sequence of v_n we can always construct a graph formed by a single one-dimensional chain of nodes, each (except the two ends) with two links connecting to the adjacent nodes in the chain and two links in the boundary. All these spin networks exist and are non-equivalent to one another. Therefore we have shown that there are at least 2^{n-1} states with volume between V - v_o and V. In the large volume limit we can write

dim H_V ≥ 2^{n} = 2^{V/v_o},   (26)

so that the entropy satisfies

c V ≤ S ≤ c' V log V,   (27)

with c and c' constants.
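For convenience, the arithmetic behind the two bounds can be collected in one place (same ingredients as Equations (7), (9), (10) and (24) above):

\[
\dim\mathcal{H}_V \;\le\; N_\Gamma\, N_{\{j_l\}}\, N_{\{v_n\}} \;\le\; n^{4n}\cdot n^{8n}\cdot 2^{n} n^{2n} \;=\; 2^{n}\, n^{14n} \;\le\; (2n)^{14n},
\qquad
\dim\mathcal{H}_V \;\ge\; 2^{\,n-1},
\]

so that, with n = V/v_o,

\[
(n-1)\log 2 \;\le\; S \;\le\; 14\, n\, \log(2n),
\]

in agreement with (27).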
V. Discussion

Geometrical entropy associated to surfaces of given area plays a large role in current discussions of the quantum nature of spacetime. Here we have shown that, under suitable conditions, it is also possible to compute a von Neumann entropy associated to measurements of the volume of a region of space. We have not discussed possible physical roles played by this entropy. A number of comments are in order:

(i) Since in the classical low-energy limit volume and area are related by $V \sim A^{3/2}$, the volume entropy we have considered, $S_V \sim V \log V \sim A^{3/2} \log A$, may exceed the Bekenstein bound $S < S_A \sim A$. Volume entropy is accessible only by being in the bulk, and not necessarily from the outside; therefore it does not violate the versions of the Bekenstein bound that refer only to external observables.

(ii) The result presented above depends on the restriction of $\mathcal{H}_{\rm diff}$ to four-valent states. We recall that the discussion is currently open in the literature on which of the two theories, with or without this restriction, is physically more interesting, with good arguments on both sides. However, it might be possible to extend the results presented here to the case of higher-valent graphs. Indeed, there is some evidence that there is a volume gap in higher-valent cases too, see for instance [START_REF] Haggard | Pentahedral volume, chaos, and quantum gravity[END_REF]. The effect of zero-volume nodes on the volume entropy will be discussed elsewhere.

(iii) Volume entropy appears to fail to be an extensive quantity. The significance of this conclusion deserves to be explored. This feature is usual for systems with long-range interactions, and in particular for systems of particles governed by the gravitational interaction. It is suggestive that gravity could retain this feature even when there are no interacting particles, and the role of long-range interactions is taken by "long range" connections between graph nodes (of course these are not really long range, in the sense that graph connections actually define locality). A final word on this behaviour, however, has to wait for a more precise computation of the entropy growth with volume.

(iv) It has recently been pointed out that the interior of an old black hole contains surfaces with large volume [START_REF] Christodoulou | How big is a black hole?[END_REF][START_REF] Christodoulou | The (huge) volume of an evaporating black hole[END_REF] and that the large volume inside black holes can play an important role in the information paradox [START_REF] Ashtekar | The Issue of Information Loss: Current Status[END_REF][START_REF] Perez | No firewalls in quantum gravity: the role of discreteness of quantum geometry in resolving the information loss paradox[END_REF]. The results presented here may serve to quantify the corresponding interior entropy.

(v) A notion of entropy associated to the volume of space might perhaps provide an alternative to Penrose's Weyl curvature hypothesis [START_REF] Penrose | Before the big bang: An outrageous new perspective and its implications for particle physics[END_REF]. For the second principle of thermodynamics to hold, the initial state of the universe must have had low entropy. On the other hand, from cosmic background radiation observations, the initial state of matter must have been close to having maximal entropy. Penrose addresses this discrepancy by taking into consideration the entropy associated to gravitational degrees of freedom. His hypothesis is that the degrees of freedom which have been activated to bring about the increase in entropy from the initial state are the ones associated to the Weyl curvature tensor, which in his hypothesis was null in the initial state of the universe. A definition of the bulk entropy of space which, as would be expected, grows with the volume could perhaps perform the same role as the Weyl curvature degrees of freedom do in Penrose's hypothesis: the universe had a much smaller volume close to its initial state, so the total available entropy was low, regardless of the matter entropy content, and has increased since, just because for a space of larger volume we have a greater number of states describing its geometry.

(vi) We close with a very speculative remark. Does the fact that entropy is larger for larger volumes imply the existence of an entropic force driving towards larger volumes? That is, could there be a statistical bias for transitions to geometries of greater volume? Generically, the growth of the phase space volume is a driving force in the evolution of a system: in a transition process we sum over the out states, and more available states for a given outcome imply a greater probability of that outcome. A full discussion of this point requires the dynamics of the theory to be explicitly taken into account, and we postpone it for future work.

Acknowledgments. MC and VA thank Thibaut Josset and Ilya Vilenski for critical discussions. MC acknowledges support from the Educational Grants Scheme of the A.G. Leventis Foundation for the academic years 2013-2014, 2014-2015 and 2015-2016, as well as from the Samy Maroun Center for Time, Space and the Quantum. VA acknowledges financial support from Sapienza University of Rome.
01461384
en
[ "phys.grqc" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01461384/file/1611.02420.pdf
come L'archive ouverte pluridisciplinaire Meaning = Information + Evolution Carlo Rovelli CPT, Aix-Marseille Université, Université de Toulon, CNRS, F-13288 Marseille, France. Notions like meaning, signal, intentionality, are difficult to relate to a physical word. I study a purely physical definition of "meaningful information", from which these notions can be derived. It is inspired by a model recently illustrated by Kolchinsky and Wolpert, and improves on Dretske classic work on the relation between knowledge and information. I discuss what makes a physical process into a "signal". I. INTRODUCTION There is a gap in our understanding of the world. On the one hand we have the physical universe; on the other, notions like meaning, intentionality, agency, purpose, function and similar, which we employ for the like of life, humans, the economy... These notions are absent in elementary physics, and their placement into a physicalist world view is delicate [START_REF] Price | Naturalism without Mirrors[END_REF], to the point that the existence of this gap is commonly presented as the strongest argument against naturalism. Two historical ideas have contributed tools to bridge the gap. The first is Darwin's theory, which offers evidence on how function and purpose can emerge from natural variability and natural selection of structures [START_REF] Darwin | On the Origin of Species[END_REF]. Darwin's theory provides a naturalistic account for the ubiquitous presence of function and purpose in biology. It falls sort of bridging the gap between physics and meaning, or intentionality. The second is the notion of 'information', which is increasingly capturing the attention of scientists and philosophers. Information has been pointed out as a key element of the link between the two sides of the gap, for instance in the classic work of Fred Dretske [START_REF] Dretske | Knowledge and the Flow of Information[END_REF]. However, the word 'information' is highly ambiguous. It is used with a variety of distinct meanings, that cover a spectrum ranging from mental and semantic ("the information stored in your USB flash drive is comprehensible") all the way down to strictly engineeristic ("the information stored in your USB flash drive is 32 Giga"). This ambiguity is a source of confusion. In Dretske's book, information is introduced on the base of Shannon's theory [START_REF] Shannon | A Mathematical Theory of Communication[END_REF], explicitly interpreted as a formal theory that "does not say what information is". In this note, I make two observations. The first is that it is possible to extract from the work of Shannon a purely physical version of the notion of information. Shannon calls its "relative information". I keep his terminology even if the ambiguity of these terms risks to lead to continue the misunderstanding; it would probably be better to call it simply 'correlation', since this is what it ultimately is: downright crude physical correlation. The second observation is that the combination of this notion with Darwin's mechanism provides the ground for a definition of meaning. More precisely, it provides the ground for the definition of a notion of "meaningful infor-mation", a notion that on the one hand is solely built on physics, on the other can underpin intentionality, meaning, purpose, and is a key ingredient for agency. 
The claim here is not that the full content of what we call intentionality, meaning, purpose -say in human psychology, or linguistics-is nothing else than the meaningful information defined here. But it is that these notions can be built upon the notion of meaningful information step by step, adding the articulation proper to our neural, mental, linguistic, social, etcetera, complexity. In other words, I am not claiming of giving here the full chain from physics to mental, but rather the crucial first link of the chain. The definition of meaningful information I give here is inspired by a simple model presented by David Wolpert and Artemy Kolchinsky [START_REF] Wolpert | Observers as systems that acquire information to stay out of equilibrium[END_REF], which I describe below. The model illustrates how two physical notions, combined, give rise to a notion we usually ascribe to the nonphysical side of the gap: meaningful information. The note is organised as follows. I start by a careful formulation of the notion of correlation (Shannon's relative information). I consider this a main motivation for this note: emphasise the commonly forgotten fact that such a purely physical definition of information exists. I then briefly recall a couple of points regarding Darwinian evolution which are relevant here, and I introduce (one of the many possible) characterisation of living beings. I then describe Wolpert's model and give explicitly the definition of meaningful information which is the main purpose of this note. Finally, I describe how this notion might bridge between the two sides of gap. I close with a discussion of the notion of signal and with some general considerations. II. RELATIVE INFORMATION Consider physical systems A, B, ... whose states are described by a physical variables x, y, ..., respectively. This is the standard conceptual setting of physics. For simplicity, say at first that the variables take only discrete values. Let N A , N B , ... be the number of distinct values that the variables x, y, ... can take. If there is no relation or constraint between the systems A and B, then the pair of system (A, B) can be in N A × N B states, one for each choice of a value for each of the two variables x and y. In physics, however, there are routinely constraints between systems that make certain states impossible. Let N AB be the number of allowed possibilities. Using this, we can define 'relative information' as follows. We say that A and B 'have information about one another' if N AB is strictly smaller than the product N A × N B . We call S = log(N A × N B ) -log N AB , (1) where the logarithm is taken in base 2, the "relative information" that A and B have about one another. The unit of information is called 'bit'. For instance, each end of a magnetic compass can be either a North (N ) or South (S) magnetic pole, but they cannot be both N or both S. The number of possible states of each pole of the compass is 2 (either N or S), so N A = N B = 2, but the physically allowed possibilities are not N A × N B = 2 × 2 = 4 (N N, N S, SN, SS). Rather, they are only two (N S, SN ), therefore N AB = 2. This is dictated by the physics. Then we say that the state (N or S) of one end of the compass 'has relative information' S = log 2 + log 2 -log 2 = 1 ( 2 ) (that is: 1 bit) about the state of the other end. 
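The counting in definition (1) can be done mechanically: list the a priori states of the pair, list the physically allowed ones, and take the difference of the logarithms. The sketch below is only an illustration of this definition (function and variable names are ours); it reproduces the one-bit result for the compass.

```python
import math
from itertools import product

def relative_information(states_A, states_B, allowed):
    """S = log2(N_A * N_B) - log2(N_AB), eq. (1): the allowed pairs are those
    satisfying the physical constraint encoded in `allowed`."""
    N_A, N_B = len(states_A), len(states_B)
    N_AB = sum(1 for pair in product(states_A, states_B) if allowed(*pair))
    return math.log2(N_A * N_B) - math.log2(N_AB)

# Compass example: each pole can be N or S, but the two poles cannot be equal.
poles = ["N", "S"]
print(relative_information(poles, poles, lambda a, b: a != b))   # -> 1.0 bit
```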
Notice that this definition captures the physical underpinning to the fact that "if we know the polarity of one pole of the compass then we also know (have information about) the polarity of the other." But the definition itself is completely physical, and makes no reference to semantics or subjectivity. The generalisation to continuous variables is straightforward. Let P A and P B be the phase spaces of A and B respectively and let P AB be the subspace of the Cartesian product P A × P B which is allowed by the constraints. Then the relative information is S = log V (P A × P B ) -log V (P AB ) ( 3 ) whenever this is defined. 1Since the notion of relative information captures correlations, it extends very naturally to random variables. Two random variables x and y described by a probability distribution p AB (x, y) are uncorrelated if p AB (x, y) = pAB (x, y) (4) where pAB (x, y) is called the marginalisation of p AB (x, y) and is defined as the product of the two marginal distributions pAB (x, y) = p A (x) p B (y), (5) in turn defined by p A (x) = p AB (x, y) dy, p B (y) = p AB (x, y) dx. (6) Otherwise they are correlated. The amount of correlation is given by the difference between the entropies of the two distributions p A (x, y) and pA (x, y). The entropy of a probability distribution p being S = p log p on the relevant space. All integrals are taken with the Luoiville measures of the corresponding phase spaces. Correlations can exist because of physical laws or because of specific physical situations, or arrangements or mechanisms, or the past history of physical systems. Here are few examples. The fact that the two poles of a magnet cannot have the same polarisation is excluded by one of the Maxwell equations. It is just a fact of the world. The fact that two particles tied by a rope cannot move apart more than the distance of the rope is a consequence of a direct mechanical constraint: the rope. The frequency of the light emitted by a hot piece of metal is correlated to the temperature of the metal at the moment of the emission. The direction of the photons emitted from an object is correlated to the position of the object. In this case emission is the mechanism that enforces the correlation. The world teams with correlated quantities. Relative information is, accordingly, naturally ubiquitous. Precisely because it is purely physical and so ubiquitous, relative information is not sufficient to account for meaning. 'Meaning' must be grounded on something else, far more specific. III. SURVIVAL ADVANTAGE AND PURPOSE Life is a characteristic phenomenon we observe on the surface of the Earth. It is largely formed by individual organisms that interact with their environment and embody mechanisms that keep themselves away from thermal equilibrium using available free energy. A dead organism decays rapidly to thermal equilibrium, while an organism which is alive does not. I take this -with quite a degree of arbitrariness-as a characteristic feature of organisms that are alive. The key of Darwin's discovery is that we can legitimately reverse the causal relation between the existence of the mechanism and its function. The fact that the mechanism exhibits a purpose -ultimately to maintain the organism alive and reproduce it-can be simply understood as an indirect consequence, not a cause, of its existence and its structure. As Darwin points out in his book, the idea is ancient. It can be traced at least to Empedocles. 
Empedocles suggested that life on Earth may be the result of random happening of structures, all of which perish except those that happen to survive, and these are the living organisms. 2 The idea was criticised by Aristotle, on the ground that we see organisms being born with structures already suitable for survival, and not being born at random ([6] II 8, 198b35). But shifted from the individual to the species, and coupled with the understanding of inheritance and, later, genetics, the idea has turned out to be correct. Darwin clarified the role of variability and selection in the evolution of structures and molecular biology illustrated how this may work in concrete. Function emerges naturally and the obvious purposes that living matter exhibits can be understood as a consequence of variability and selection. What functions is there because it functions: hence it has survived. We do not need something external to the workings of nature to account for the appearance of function and purpose. But variability and selection alone may account for function and purpose, but are not sufficient to account for meaning, because meaning has semantic and intentional connotations that are not a priori necessary for variability and selection. 'Meaning' must be grounded on something else. IV. KOLCHINSKY-WOLPERT'S MODEL AND MEANINGFUL INFORMATION My aim is now to distinguish the correlations that are are ubiquitous in nature from those that we count as relevant information. To this aim, the key point is that surviving mechanisms survive by using correlations. This is how relevance is added to correlations. The life of an organisms progresses in a continuous exchange with the external environment. The mechanisms that lead to survival and reproduction are adapted by evolution to a certain environment. But in general environment is in constant variation, in a manner often poorly predictable. It is obviously advantageous to be appropriately correlated with the external environment, because survival probability is maximised by adopting different behaviour in different environmental conditions. A bacterium that swims to the left when nutrients are on the left and swims to the right when nutrients are on the right prospers; a bacterium that swims at random has less chances. Therefore many bacteria we see around us are of the first kind, not of the second kind. This simple observation leads to the Kolchinsky-Wolpert model [START_REF] Wolpert | Observers as systems that acquire information to stay out of equilibrium[END_REF],. A living system A is characterised by a number of variables x n that describe its structure. These may be numerous, but are in a far smaller number than those describing the full microphysics of A (say, the exact position 2 [There could be] "beings where it happens as if everything was organised in view of a purpose, while actually things have been structured appropriately only by chance; and the things that happen not to be organised adequately, perished, as Empedocles says". of each water molecule in a cell). Therefore the variables x n are macroscopic in the sense of statistical mechanics and there is an entropy S(x n ) associated to them, which counts the number of the corresponding microstates. As long as an organism is alive, S(x n ) remains far lower than its thermal-equilibrium value S max . This capacity of keeping itself outside of thermal equilibrium, utilising free energy, is a crucial aspects of systems that are alive. 
Living organisms generally have a rather sharp distinction between their state of being alive or dead, and we can represent it as a threshold $S_{\rm thr}$ in their entropy. Call B the environment and let $y_n$ denote a set of variables specifying its state. Incomplete specification of the state of the environment can be described in terms of probabilities, and therefore the evolution of the environment is itself predictable at best probabilistically. Consider now a specific variable $x$ of the system A and a specific variable $y$ of the system B in a given macroscopic state of the world. Given a value $(x, y)$, and taking into account the probabilistic nature of evolution, at a later time $t$ the system A will find itself in a configuration $x_n$ with probability $p_{x,y}(x_n)$. If at time zero $p(x, y)$ is the joint probability distribution of $x$ and $y$, the probability that at time $t$ the system A will have entropy higher than the threshold is
$$ P = \int dx_n\, dx\, dy\;\; p(x, y)\; p_{x,y}(x_n)\; \theta\big(S(x_n) - S_{\rm thr}\big), \qquad (7) $$
where $\theta$ is the step function. Let us now define
$$ \bar P = \int dx_n\, dx\, dy\;\; \bar p(x, y)\; p_{x,y}(x_n)\; \theta\big(S(x_n) - S_{\rm thr}\big), \qquad (8) $$
where $\bar p(x, y)$ is the marginalisation of $p(x, y)$ defined above. This is the probability of having above-threshold entropy if we erase the relative information. This is the Kolchinsky-Wolpert model. Let us define the relative information between $x$ and $y$ contained in $p(x, y)$ to be "directly meaningful" for A over the time span $t$ iff $\bar P$ is different from $P$, and call
$$ M = \bar P - P \qquad (9) $$
the "significance" of this information. The significance of the information is its relevance for survival, that is, its capacity to affect the survival probability. Furthermore, call the relative information between $x$ and $y$ simply "meaningful" if it is directly meaningful or if its marginalisation decreases the probability of acquiring information that can be meaningful, possibly in a different context.

Here is an example. Let B be food for a bacterium and A the bacterium, in a situation of food shortage. Let $x$ be the location of the food; for simplicity, say it can be either at the left or at the right. Let $y$ be the variable that describes the internal state of the bacterium which determines the direction in which the bacterium will move. If the two variables $x$ and $y$ are correlated in the right manner, the bacterium reaches the food and its chances of survival are higher. Therefore the correlation between $y$ and $x$ is "directly meaningful" for the bacterium, according to the definition given, because marginalising $p(x, y)$, namely erasing the relative information, increases the probability of starvation. Next, consider the same case, but in a situation of food abundance. In this case the correlation between $x$ and $y$ has no direct effect on the survival probability, because there is no risk of starvation. Therefore the $x$-$y$ correlation is not directly meaningful. However, it is still (indirectly) meaningful, because it empowers the bacterium with a correlation that has a chance to affect its survival probability in another situation.

A few observations about this definition: i. Intentionality is built into the definition. The information here is information that the system A has about the variable $y$ of the system B. It is by definition information "about something external". It refers to a physical configuration of A (namely the value of its variable $x$), insofar as this variable is correlated to something external (it 'knows' something external). ii.
The definition separates correlations of two kinds: accidental correlations that are ubiquitous in nature and have no effect on living beings, no role in semantic, no use, and correlations that contribute to survival. The notion of meaningful correlation captures the fact that information can have "value" in a darwinian sense. The value is defined here a posteriori as the increase of survival chances. It is a "value" only in the sense that it increases these chances. iii. Obviously, not any manifestation of meaning, purpose, intentionality or value is directly meaningful, according to the definition above. Reading today's newspaper is not likely to directly enhance mine or my gene's survival probability. This is the sense of the distinction between 'direct' meaningful information and meaningful information. The second includes all relative information which in turn increases the probability of acquiring meaningful information. This opens the door to recursive growth of meaningful information and arbitrary increase of semantic complexity. It is this secondary recursive growth that grounds the use of meaningful information in the brain. Starting with meaningful information in the sense defined here, we get something that looks more and more like the full notions of meaning we use in various contexts, by adding articulations and moving up to contexts where there is a brain, language, society, norms... iv. A notion of 'truth' of the information, or 'veracity' of the information, is implicitly determined by the definition given. To see this, consider the case of the bacterium and the food. The variable x of the bacterium can take to values, say L and R, where L is the variable conducting the bacterium to swim to the Right and L to the Left. Here the definition leads to the idea that R means "food is on the right" and L means "food is on the left". The variable x contains this information. If for some reason the variable x is on L but the food happens to be on the Right, then the information contained in x is "not true". This is a very indirect and in a sense deflationary notion of truth, based on the effectiveness of the consequence of holding something for true. (Approximate coarse grained knowledge is still knowledge, to the extent it is somehow effective. To fine grain it, we need additional knowledge, which is more powerful because it is more effective.) Notice that this notion of truth is very close to the one common today in the natural sciences when we say that the 'truth' of a theory is the success of its predictions. In fact, it is the same. v. The definition of 'meaningful' considered here does not directly refer to anything mental. To have something mental you need a mind and to have a mind you need a brain, and its rich capacity of elaborating and working with information. The question addressed here is what is the physical base of the information that brains work with. The answer suggested is that it is just physical correlation between internal and external variables affecting survival either directly or, potentially, indirectly. The idea put forward is that what grounds all this is direct meaningful information, namely strictly physical correlations between a living organism and the external environment that have survival and reproductive value. The semantic notions of information and meaning are ultimately tied to their Darwinian evolutionary origin. The suggestion is that the notion of meaningful information serves as a ground for the foundation of meaning. 
That is, it could offer the link between the purely physical world and the world of meaning, purpose, intentionality and value. It could bridge the gap. V. SIGNALS, REDUCTION AND MODALITY A signal is a physical event that conveys meaning. A ring of my phone, for instance, is a signal that means that somebody is calling. When I hear it, I understand its meaning and I may reach the phone and answer. As a purely physical event, the ring happens to physically cause a cascade of physical events, such as the vibration of air molecules, complex firing of nerves in my brain, etcetera, which can in principle be described in terms of purely physical causation. What distinguishes its being a signal, from its being a simple link in a physical causation chain? The question becomes particularly interesting in the context of biology and especially molecular biology. Here the minute working of life is heavily described in terms of signals and information carriers: DNA codes the information on the structure of the organism and in particular on the specific proteins that are going to be produced, RNA carries this information outside the nucleus, receptors on the cell surface signal relevant external condition by means of suitable chemical cascades. Similarly, the optical nerve exchanges information between the eye and the brain, the immune system receives information about infections, hormones signal to organs that it is time to do this and that, and so on, at libitum. We describe the working of life in heavily informational terms at every level. What does this mean? In which sense are these processes distinct from purely physical processes to which we do not usually employ an informational language? I see only one possible answer. First, in all these processes the carrier of the information could be somewhat easily replaced with something else without substantially altering the overall process. The ring of my phone can be replaced by a beep, or a vibration. To decode its meaning is the process that recognises these alternatives as equivalent in some sense. We can easily imagine an alternative version of life where the meaning of two letters is swapped in the genetic code. Second, in each of these cases the information carrier is physically correlated with something else (a protein, a condition outside the cell, a visual image in the eye, an infection, a phone call...) in such a way that breaking the correlation could damage the organism to some degree. This is precisely the definition of meaningful information studied here. I close with two general considerations. The first is about reductionism. Reductionism is often overstated. Nature appears to be formed by a relative simple ensemble of elementary ingredients obeying relatively elementary laws. The possible combinations of these elements, however, are stupefying in number and variety, and largely outside the possibility that we could compute or deduce them from nature's elementary ingredients. These combinations happen to form higher level structures that we can in part understand directly. These we call emergent. They have a level of autonomy from elementary physics in two senses: they can be studied independently from elementary physics, and they can be realized in different manners from elementary constituents, so that their elementary constituents are in a sense irrelevant to our understanding of them. Because of this, it would obviously be useless and self defeating to try to replace all the study of nature with physics. 
But evidence is strong that nature is unitary and coherent, and its manifestations are -whether we understand them or not-behaviour of an underlying physical world. Thus, we study thermal phenomena in terms of entropy, chemistry in terms of chemical affinity, biology in terms functions, psychology in terms of emotions and so on. But we increase our understanding of nature when we understand how the basic concept of a science are ground in physics, or are ground in a science which is ground on physics, as we have largely been able to do for chemical bonds or entropy. It is in this sense, and only in this sense, that I am suggesting that meaningful information could provide the link between different levels of our description of the world. The second consideration concerns the conceptual structure on which the definition of meaningful information proposed here is based. The definition has a modal core. Correlation is not defined in terms of how things are, but in terms of how they could or could not be. Without this, the notion of correlation cannot be constructed. The fact that something is red and something else is red, does not count as a correlation. What counts as a correlation is, say, if two things can each be of different colours, but the two must always be of the same colour. This requires modal language. If the world is what it is, where does modality comes from? The question is brought forward by the fact that the definition of meaning given here is modal, but does not bear on whether this definition is genuinely physical or not. The definition is genuinely physical. It is physics itself which is heavily modal. Even without disturbing quantum theory or other aspects of modern physics, already the basic structures of classical mechanics are heavily modal. The phase space of a physical system is the list of the configurations in which the system can be. Physics is not a science about how the world is: it is a science of how the world can be. There are a number of different ways of understanding what this modality means. Perhaps the simplest in physics is to rely on the empirical fact that nature realises multiple instances of the same something in time and space. All stones behave similarly when they fall and the same stone behaves similarly every time it falls. This permits us to construct a space of possibilities and then use the regularities for predictions. This structure can be seen as part of the elementary grammar of nature itself. And then the modality of physics and, consequently, the modality of the definition of meaning I have given are fully harmless against a serene and quite physicalism. But I nevertheless raise a small red flag here. Because we do not actually know the extent to which this structure is superimposed over the elementary texture of reality by ourselves. It could well be so: the structure could be generated precisely by the structure of the very 'meaningful information' we have been concerned with here. We are undoubtably limited parts of nature, and we are so even as understanders of this same nature. [ 6 ]FIG. 1 . 61 FIG.1. The Kolchinsky-Wolpert model and the definition of meaningful information. If the probability of descending to thermal equilibrium P increases when we cut the information link between A and B, then the relative information (correlation) between the variables x and y is "meaningful information". Here V (.) 
is the Liouville volume, and the difference between the two volumes can be defined as the limit of a regularisation even when the two terms individually diverge. For instance, if A and B are both free particles on a circle of size L, constrained to be at a distance less than or equal to L/N (say by a rope tying them), then we can easily regularise the phase space volume by bounding the momenta, and we get S = log N, independently of the regularisation.

ACKNOWLEDGMENTS. I thank David Wolpert for private communications and especially Jenann Ismael for a critical reading of the article and very helpful suggestions.
01772292
en
[ "sde" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01772292/file/mt2017-pub00056979.pdf
Julianne De Castro Oliveira Jean-Baptiste Feret Jorge Flavio Yann Ponzoni Jean-Philippe Nouvellon Otavio Camargo Gastellu-Etchegorry José Luiz Campoe Luiz Stape Estraviz Carlos Gueric Rodriguez Le Maire Julianne De Castro Oliveira Jean-Baptiste Féret Jorge Flávio Guerric Rodriguez Simulating the canopy reflectance of different eucalypt genotypes with the DART 3-D model 1  Abstract-Finding suitable models of canopy reflectance in forward simulation mode is a prerequisite for their use in inverse mode to characterize canopy variables of interest, such as Leaf Area Index (LAI) or chlorophyll content. In this study, the accuracy of the 3D reflectance model DART was assessed for canopies of different genotypes of Eucalyptus, having distinct biophysical and biochemical characteristics, to improve the knowledge on how these characteristics are influencing the reflectance signal as measured by passive orbital sensors. The first step was to test the model suitability to simulate reflectance images in the visible and near infrared. We parameterized DART model using extensive measurements from Eucalyptus plantations including 16 contrasted genotypes. Forest inventories were conducted and leaf, bark and forest floor optical properties were measured. Simulation accuracy was evaluated by comparing the mean top of canopy (TOC) bidirectional reflectance of DART with TOC reflectance extracted from a Pleiades very high resolution satellite image. Results showed a good performance of DART with mean reflectance absolute error lower than 2 %. Inter-genotype reflectance variability was correctly simulated, but the model didn't succeed at catching the slight spatial variation for a given genotype, excepted when large gaps appeared due to tree mortality. The second step consisted in a sensitivity analysis to explore which biochemical or biophysical characteristics influenced more the canopy reflectance between genotypes. These results present perspectives for using DART model in inversion mode. Index Terms-DART, 3D modeling, eucalypt, radiative transfer model, remote sensing I. INTRODUCTION MONG the different methods to estimate biophysical or biochemical characteristics of forest plantations, the analysis of the images measured by Julianne de Castro Oliveira and Luiz Carlos Estraviz Rodriguez are with University of São Paulo, ESALQ/USP, Brazil (e-mail: [email protected]; [email protected]), Jean-Baptiste Féret is with IRSTEA, UMR TETIS, BP5092 Montpellier, France (e-mail: [email protected]), Flávio Jorge Ponzoni is with INPE, Brazil (email: [email protected]), Yann Nouvellon is with CIRAD, UMR ECO&SOLS, F-34398 Montpellier, France and with University of São Paulo, ESALQ/USP, Brazil (e-mail: [email protected]), Jean-Philippe Gastellu-Etchegorry is with CESBIO, France (e-mail : [email protected]), Otavio Camargo Campoe is with Federal University of Santa Catarina, UFSC, Brazil (e-mail : [email protected]), José Luiz Stape is with Suzano Pulp and Paper, Brazil (e-mail: [email protected]) and Guerric le Maire is with CIRAD, UMR ECO&SOLS, F-34398 Montpellier, France and with NIPE, UNICAMP, Campinas, Brazil (e-mail: [email protected]). sensors on orbital platforms is appropriate for large spatial scales studies. 
Images are converted into reflectance values for each spectral band of the image, and later used to retrieve biophysical parameters of the forest through empirical relationships, or through radiative transfer models (RTM) inversion [START_REF] Le Maire | Calibration of a species-specific spectral vegetation index for leaf area index (LAI) monitoring: example with MODIS reflectance time-series on Eucalyptus plantations[END_REF] - [START_REF] Le Maire | Calibration and validation of hyperspectral indices for the estimation of broadleavedforest leaf chlorophyll content, leaf mass per area, leaf area index and leafcanopy in[END_REF]. RTM explicitly take into account stand structural characteristics (tree dimensions and positions, leaf area index, leaf angle distribution, crown cover, among others) and can simulate the quantitative value of the reflectance spectra of the canopy as observed on top of the canopy or by a sensor onboard a plane or a satellite. They are based on the knowledge of the physical laws that control the transfer and interaction of solar radiation in a vegetative canopy, in interaction with the soil [START_REF] Gastellu-Etchegorry | A modeling approach to assess the robustness of spectrometric predictive equations for canopy chemistry[END_REF]. The DART -Discrete Anisotropic Radiative Transfer -model [START_REF] Gastellu-Etchegorry | A simple anisotropic reflectance model for homogeneous multilayer canopies[END_REF], [START_REF] Gastellu-Etchegorry | Discrete Anisotropic Radiative Transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes[END_REF] is a comprehensive threedimensional model that simulates bidirectional reflectance and enables new possibilities of data analysis to evaluate, for example, canopy structure [START_REF] Barbier | Linking canopy images to forest structural parameters: potential of a modeling framework[END_REF], radiative budget [START_REF] Gastellu-Etchegorry | DART: a 3D model for simulating satellite images and studying surface radiation budget[END_REF], [START_REF] Demarez | Modeling of the radiation regime and photosynthesis of a finite canopy using the DART model. Influence of canopy architecture of canopy architecture assumptions and border effects[END_REF], photosynthesis [START_REF] Demarez | Modeling of the radiation regime and photosynthesis of a finite canopy using the DART model. Influence of canopy architecture of canopy architecture assumptions and border effects[END_REF], chlorophyll content [START_REF] Malenovský | Retrieval of spruce leaf chlorophyll content from airborne image data using continuum removal and radiative transfer[END_REF], [START_REF] Demarez | A modeling approach for studying forest chlorophyll content[END_REF], Leaf Area Index (LAI) [START_REF] Banskota | An LUT-Based inversion of DART model to estimate forest LAI from hyperspectral data[END_REF], [START_REF] Banskota | Investigating the utility of wavelet transforms for inverting a 3-D radiative transfer model using hyperspectral data to retrieve forest LAI[END_REF], among others. Eucalypt plantations in Brazil cover 5.6 million ha, which accounts for 71.9 % of planted forests in Brazil [START_REF]Indústria Brasileira de Árvores[END_REF]. 
Currently, most areas are planted with several genotypes, mainly on clonal plantations, which have been tested and selected for distinct widespread soils and climatic Brazilian conditions [START_REF] Gonçalves | Integrating genetic and silvicultural strategies to minimize abiotic and biotic constraints in Brazilian eucalypt plantations[END_REF]. These genotypes provide different phenotypes, with distinct canopy structure, leaf morphology and biochemical compounds and biomass production. Due to their high economic importance in Brazil, the understanding of how biophysical parameters of planted forests could explain the spatial-temporal growth dynamics and the estimation of such parameters through remotely-sensed images is of paramount importance [START_REF] Le Maire | Calibration of a species-specific spectral vegetation index for leaf area index (LAI) monitoring: example with MODIS reflectance time-series on Eucalyptus plantations[END_REF], [START_REF] Le Maire | Leaf area index estimation with MODIS reflectance time series and model inversion during full rotations of Eucalyptus plantations[END_REF]. Eucalyptus plantations in Brazil present particular structures: they are planted at high densities (e.g. 1700 trees/ha), they generally have a low leaf area index compared to other dense forests, and they are planted in rows of different spacing (anisotropy). One supplementary difficulty comes from the variability of eucalypts species and genotypes that are planted in Brazil. The different genotypes can have different structural and biophysical properties, even at the same age, and these parameters may change the canopy reflectance in different magnitude. It is therefore necessary to understand better the drivers of the reflectance differences between genotypes to further assess if their estimation through inversion procedures is possible. 2 Despite the successful use of physical approach of DART to retrieve canopies characteristics from inversion procedures, e.g. in [START_REF] Banskota | An LUT-Based inversion of DART model to estimate forest LAI from hyperspectral data[END_REF], [START_REF] Yáñez-Rausell | Estimation of spruce needle-leaf chlorophyll content cased on DART and PARAS canopy reflectance models[END_REF] - [START_REF] Gastellu-Etchegorry | An interpolation procedure for generalizing a look-up table inversion method[END_REF], few detailed studies have tested the efficiency of this 3D reflectance model in forward mode in forest canopy ecosystem [START_REF] Couturier | A modelbased performance test for forest classifiers on remote-sensing imagery[END_REF], [START_REF] Schneider | Simulating imaging spectrometer data: 3D forest modeling based on LiDAR and in situ data[END_REF]. The first assumption of inversion procedure is the suitability of the RTM to simulate accurately the reflectance for a range of canopy characteristics corresponding at least to the range of application conditions. In this study, we parameterized DART model using an extensive in situ measurement dataset. Eucalyptus plantations of 16 different genotypes were used to test the accuracy of the simulations generated by DART when compared with experimental images acquired from a very high spatial resolution satellite, Pleiades. In a second step, we performed a sensitivity analysis using the parameters variability as they were measured in situ to quantify the effect of the main stand parameters (inter-genotype variability) on the canopy reflectance. 
We finally discussed the use of DART for inversion studies for these particular ecosystems. II. DATASET DESCRIPTION A. Study site The study site is located in Itatinga Municipality, in the state of São Paulo, southeastern Brazil, 22°58'04''S and 48°43'40''W (Fig. 1), as part of the IPEF-Eucflux project. A genotype trial experiment of eucalypt was installed in November 2009 with 16 genotypes comprising several genetic origins from different eucalypt growing companies and regions in Brazil (G1, G2, G10: E. grandis; G3-G9, G11-G13, G15: E. grandis x urophylla; G14: E. saligna; G16: E. camaldulensis x grandis). Fourteen of these 16 genotypes were clones and two (G1 and G2) had seminal origin. Planting rows were mainly east-west oriented, with plant arrangement of 3 m × 2 m (1666 trees per hectare). The experiment comprised 9 blocks, each having 16 treatments (genotypes) randomly distributed within a 4 × 4 subplot grid of 192 trees each (each subplot comprised 12 lines of 16 trees). Only the 10 lines and 10 rows central part of the subplot was analyzed (100 trees, 20 m × 30 m area). B. In-situ measurements Forest inventories were carried out at 6, 12, 19, 26, 38, 52, 62 and 74 months of age. During these inventories, trunk diameter at breast height (DBH) and tree height were measured. Close to most of these dates, 10-12 trees were cut for each genotype to compute the biomass per compartment (leaves, branches, trunk and bark) to generate allometric relationships between trunk DBH and tree height, height to the base of the live crown, crown diameter and leaf area, as classically done in other studies in the same area [START_REF] Laclau | Mixed-species plantations of Acacia mangium and Eucalyptus grandis in Brazil -1. Growth dynamics and aboveground net primary production[END_REF] - [START_REF] Christina | Importance of deep water uptake in tropical eucalypt forest[END_REF]. All these allometric relationships presented good adjustments (e.g. R 2 ~ 0.72, 0.70 and 0.88, respectively, for crown diameter, crown height and leaf area) and included the age as an explanatory variable, allowing their application for each tree at each inventory date. LAI was calculated as the sum of the leaf area of each tree inside the plot divided by the plot area. Leaf angle distribution (LAD) was estimated from the leaf angles measured in the field for each genotype (as described in [START_REF] Le Maire | Leaf area index estimation with MODIS reflectance time series and model inversion during full rotations of Eucalyptus plantations[END_REF]) and adjusted with an ellipsoidal leaf angle density function. In each tree, a clinometer was used to measure the inclination of 72 leaves selected according to their position within the crown to be representative of the tree-scale distribution. The eucalypt stands were analyzed at the date of May, 2014 (54 months), corresponding to the date of satellite image acquisition, using interpolation of the field measurements between inventories at 52 and 62 months. For the leaf area, auxiliary leaf area index values retrieved from more frequent measurements on one of the genotypes allowed to improve the interpolation by considering a common seasonal variation. Leaves, trunks and forest floor optical properties were measured on October 2015 with an ASD Field SpecPro (Analytical Spectral Devices, Boulder, Colorado, USA) spectrometer in the spectral range from 400 to 2500 nm with 1 nm intervals at 71 months after planting (in October 2015). 
In these dates, three trees per genotype were selected and for each tree, leaves were collected randomly at three crown layers (bottom, middle and top, divided by exact height proportions) and two horizontal positions in each layer (near and far from trunk), totaling two leaves per crown layer, six leaves per tree and 18 leaves per genotype. These leaves were kept cold and in the dark for less than one hour. Adaxial leaves reflectance and transmittance were measured in the laboratory using an integrating sphere (LI-COR 1800, LI-COR, Inc., Lincoln, Nebraska, USA). Forest floor and bark reflectance were measured using a Contact Probe (ASD, Inc., Boulder, Colorado) on five different points for each genotype, in the same week without rain. The spectral measurement date occurred more than one year after the satellite image acquisition. However, these component spectra have probably not evolved a lot during this interval: for leaves, there were no significant difference between months 52 and 72 for specific leaf area, water content, and SPAD values (measured with the SPAD-Minolta device) (data not shown). For trunk and forest floor, we assumed no changes, which seem a reasonable hypothesis for these components. C. Pleiades satellite images Very high spatial resolution multispectral scenes including four bands (blue: 430-550 nm, green: 490-610 nm, red: 600-720 nm and near infrared: 750-950 nm) from Pleiades satellite were used to validate DART simulations. The image (four bands) was acquired on May 2014, at 13:36 GMT, with the following angles: view azimuth φ 𝑣 = 180.03°, view zenith θ 𝑣 = 13.40°, sun azimuth φ 𝑠 = 33.43° and sun zenith θ 𝑠 = 44.48°. The image was orthorectified and projected. Polygons of each internal plot extension (20 m × 30 m) were used to extract the radiance of the plots in each band of the Pleiades image. Transformation to TOA reflectance was performed, followed by an atmospheric correction to compute the reflectance of the top of canopy (TOC) of the scenes using the 6S model and default atmospheric parameterization for this location [START_REF] Vermote | Second simulation of the satellite signal in the solar spectrum, 6S: An overview[END_REF]. III. ANALYSES AND DART PARAMETERIZATION A. DART parameterization DART was used in the ray tracing method and reflectance mode [START_REF] Gastellu-Etchegorry | A simple anisotropic reflectance model for homogeneous multilayer canopies[END_REF], [START_REF] Gastellu-Etchegorry | DART: a 3D model for simulating satellite images and studying surface radiation budget[END_REF] to simulate TOC bidirectional reflectance images. Simulations with DART were conducted on 4 wavebands corresponding to Pleiades sensor relative spectral response. The input solar angles (θ 𝑠 andφ 𝑠 ) were computed knowing the local latitude, date and hour of satellite overpass. Image acquisition geometry (θ 𝑣 , φ 𝑣 ) was obtained from metadata of Pleiades images. All DART simulated scenes were created using individualized positions and dimensions of the 192 trees of each subplot, but the output stand reflectance computation was restricted to an internal plot of 20 m × 30 m (100 trees), to avoid any border effect. One scene was simulated for each of the 16 genotypes and 9 blocks at 54 months (corresponding to date May, 2014), with computing cubic cells of 0.50 m edge. Input parameters related to the trees positions (coordinates x and y in the plot), dimensions (e.g. 
crown diameter and height, DBH and total height), LAI and LAD for each tree were all in situ measurements (described in Section II.B). For simulating tree crowns, we used a half-ellipsoid shape, which typically fits well the shape of eucalypt crowns. Optical properties of the leaves were prescribed as a function of the crown layer for each tree (upper, middle and lower) and of the genotype, as were the bark and forest floor reflectance. In these canopies, the branches are very thin and represent a very small absorbing surface in comparison to leaves and bark, and therefore they were not simulated.

B. Comparison between simulated and satellite images

The accuracy of the simulated TOC reflectance scenes from DART was checked against the TOC reflectance obtained from the Pleiades scenes, for all 4 broadbands (blue, green, red and NIR), 9 blocks and 16 genotypes. The overall accuracy level for simulating eucalypt plantations was expressed by the mean absolute error (MAE) of each spectral band [START_REF] Willmott | Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance[END_REF]:
$$ MAE_{\lambda} = \frac{1}{n}\sum_{i=1}^{n} \left| R_{\mathrm{Pleiades},\lambda}(i) - R_{\mathrm{DART},\lambda}(i) \right|, \qquad (1) $$
where $R_{\mathrm{Pleiades},\lambda}$ is the reflectance measured by the Pleiades satellite for spectral band $\lambda$, $R_{\mathrm{DART},\lambda}$ is the reflectance simulated by DART for the same spectral band, and $n$ is the number of samples ($n$ = 144 plots, the product of 9 blocks by 16 genotypes). The systematic error (BIAS), root mean square error (RMSE) and determination coefficient (R²) were also computed, both at genotype scale (averaged over blocks, so n = 16 for each band) and for each genotype across blocks for inter-block variability (so n = 9 for each band and each genotype).

C. Sensitivity analysis of DART for eucalypt plantations

A simple sensitivity analysis was performed to better understand the effect of inter-genotype differences in structural, biophysical and biochemical parameters on the simulation outputs. We selected one of the genotypes (G3, which represents the main genotype planted around the experimental area), grown in one of the blocks (B2, where the plots show good growth and health), as an example. For each of the parameters listed below, we exchanged, one at a time, the G3 value for the value of another genotype of the same block B2. The range and variation of these values therefore reflected the real inter-genotype variability as it appears in the in situ measurements, which enabled a more realistic description and analysis of the parameters' influence on the reflectance. For instance, the LAI of G3 was replaced by that of G1 and the DART reflectance in the four bands was simulated; then a new simulation was performed with the LAI of G2, and so on. At the end, we computed the average and variance and produced a boxplot figure for each parameter at each reflectance band. The tested parameters were LAI, LAD, leaf, bark and forest floor optical properties (reflectance), tree dimensions (tree and crown height, crown diameter and DBH), and row azimuth. Note that for the particular case of row azimuth, we changed the orientation by using the orientations of the other blocks one by one; this is not linked to the genotype. However, including this variability gives more precise information on the importance of this factor. This procedure allowed us to better understand which parameters drive the inter-genotype variability in reflectance.

IV. RESULTS
A. Differences between genotypes in structural and biochemical properties

The main characteristics of the genotypes (DBH, height, leaf area, LAI, crown length, crown diameter, leaf angle and mortality) based on field measurements and used for the DART parameterization are shown in Fig. 2, together with their inter-block repetitions. Overall, the tree dimensions and structural properties are similar between genotypes of the same age, with high local variability. However, when looking closer, there are some differences between genotypes. The DBH and height values were very similar between genotypes, with higher variability for the seminal materials G1 and G2 and higher growth homogeneity for the clonal materials. G16 was the most homogeneous clone. G7, G12 and G16 presented the lowest leaf area values and the lowest variability between trees. LAI was around 3-6 m²/m² for all genotypes, with small spatial variability, mainly for G12 and G16. G10, G11 and G13 presented the highest and G16 the lowest LAI values. In contrast with the tree height, the crown length varied more between genotypes. Similarly to tree DBH and height, the crown diameter exhibited little variability across genotypes, with a median around 3 m, indicating that at this age (54 months) the trees inside the plots are exploring more or less the space they individually have (3 m × 2 m). Note that there was a small measured difference between within-row and between-row crown diameters that was included in the simulations. The leaf inclination angle showed high between-tree variability, mostly driven by differences in tree size, since there were strong canopy vertical gradients of leaf inclination angles [START_REF] Le Maire | Leaf area index estimation with MODIS reflectance time series and model inversion during full rotations of Eucalyptus plantations[END_REF]. G16 had the highest leaf inclination angles with low variability between trees. Mortality exhibited large variability across genotypes, with the highest values (reaching around 20 %) for genotypes G1, G3, G6 and G13. For the other genotypes, the mortality was lower than 10 %, which are common values for eucalypt plantations [START_REF] Zhou | Mapping local density of young Eucalyptus plantations by individual tree detection in high spatial resolution satellite images[END_REF]. Leaf, trunk and forest floor optical properties are shown in Fig. 3 for each genotype. Leaf reflectance (shown in Fig. 3 for expanded mature leaves of the middle crown layer) exhibited high absorption peaks in the blue and red regions and high NIR reflectance for all genotypes. Note that the reflectance ranking between genotypes was conserved for all wavelengths in the visible but changed further in the NIR and MID regions. There were large differences in bark reflectance between genotypes. Interestingly, the bark reflectance was very high in the visible and NIR regions compared to leaf reflectance. Some spectra clearly show an absorption feature in the red region. Forest floor reflectance showed a similar pattern for all genotypes, but with a high inter-genotype variability: low reflectance in the visible region, increasing values along the spectrum, and a mild absorption peak in the water absorption band (1400 nm). Fig. 4 shows the leaf reflectance in the green, red and near infrared bands for each crown level (bottom, middle and top) and each genotype.
For each band, there was no significant difference between crown layers, there were significant differences between genotypes, and there was no significant difference for the genotype × crown layer interaction (N-way ANOVA under Matlab 2013a, α = 0.05). Note that the statistical analysis was done using all measured reflectance data instead of the average values shown in Fig. 4.
B. Comparison of DART simulations with the Pleiades satellite image
The TOC reflectances simulated by DART and acquired by the Pleiades sensor in the four multispectral bands for each genotype are shown in Fig. 5, averaged by genotype and with their standard deviations. In general, the mean TOC reflectances from the DART simulations were in good agreement with the mean TOC reflectances of the Pleiades scenes for all four bands and genotypes. Discrepancies were found mainly for the blue band (430-550 nm) for all genotypes, and some discrepancies appeared in the near infrared band (750-950 nm) for some genotypes (e.g., genotypes 5, 8 and 12). A numerical comparison between the reflectance simulated by DART and acquired in the Pleiades scenes was performed using the MAE, RMSE and R² for all blocks and genotypes in each band (Table 1). The minimum and maximum R² values computed for each genotype in the four bands are also presented. The MAE values were low for all bands (< 0.0195), with the lowest values for the green band. Higher values were found for the blue and NIR bands, which corroborates the results of Fig. 5. BIAS, which represents the average difference between Pleiades and DART reflectance, was negative, indicating that the TOC reflectances simulated by DART were, in general, slightly higher than those derived from the Pleiades images. RMSE values were also low (< 0.023), especially for the bands in the visible domain (< 0.0023); the NIR band had the highest value. R² was best for the red and NIR bands and worst for the blue band. The R² for each genotype computed across the different blocks (spatial variability) showed a wide range of values in all bands. The spatial variability of some genotypes was correctly simulated, whereas for others the correlation was not significant. An example of the level of detail of the tree parameterization in the DART simulated scenes, compared with the Pleiades scenes, is shown in Fig. 6 for the near infrared band of G14, block 4 (scenes with 0.50 m and 2 m spatial resolution). The near infrared (2.0 m spatial resolution) and panchromatic (0.50 m spatial resolution) Pleiades scenes for the same G14 and block 4 are also presented. The Pleiades panchromatic band was chosen for this example because of its higher spatial resolution. This visual comparison illustrates how the DART model represents the canopy. We can see that the DART simulations are in accordance with the image in terms of shadow proportion, gaps, row orientations, textures and object dimensions. However, the model does not reach the level of detail required for tree-by-tree analysis in this type of canopy structure.
C. Sensitivity Analysis
The results of the sensitivity analysis of the simulated reflectance for the blue, green, red and NIR bands according to the stand parameters (LAI, LAD, leaf, bark and forest floor optical properties (reflectance), tree dimensions and row azimuth) are presented in Fig. 7. The effect of the real range of variation of each parameter taken individually (without interactions) on the average canopy reflectance is presented together so that their magnitudes can be compared.
LAI, leaf reflectance, tree dimensions and row azimuth had the highest sensitivity and explained most of the difference between genotypes in the visible bands. The variability they induced was of the same order of magnitude as the variability due to row orientation. Bark and forest floor reflectance and LAD showed the weakest sensitivity in these bands despite their inter-genotype variability being relatively high. The NIR band showed similar reflectance results among the replacement tests, but with higher inter-genotype standard deviations compared to the other bands. LAD, bark and forest floor reflectance showed a higher influence in the NIR band than in the visible bands.
V. DISCUSSION
A. Parameterization of DART
Overall, the differences between eucalypt trees of different genotypes and locations were not very large for many of the parameters. However, the final importance of a parameter in explaining the difference in TOC reflectance between genotypes (and/or locations) is a combination of the inter-genotype variability (or spatial variability) of this parameter and the sensitivity of that parameter in the model. It is therefore important, before setting some of the DART parameters to constants (and therefore not explaining the genotype or spatial variability), to model the system with the maximum precision, and to simplify afterwards if possible. The model parameterization is therefore a critical step of this work. The leaf reflectance was shown to be different between genotypes, reflecting differences in pigment contents and in the internal structure of the leaves. A more detailed analysis could be done to assess which leaf structural or biochemical characteristics could explain this reflectance variability, but such an analysis is out of the scope of the study: here we focused our analysis on the macro-scale differences between genotypes, and leaf reflectance was therefore an input parameter of DART. The high inter-genotype difference in bark reflectance (Fig. 3) was expected, since their color and roughness were very different in the field. The absorption feature in the red is associated with the presence of chlorophyll pigments at the bark surface for some of the genotypes, as observed in many other studies (e.g. in [START_REF] Wittmann | The optical, absorptive and chlorophyll fluorescence properties of young stems of five woody species[END_REF], [START_REF] Girma | Photosynthetic bark: use of chlorophyll absorption continuum index to estimate Boswellia papyrifera bark chlorophyll content[END_REF]). There was also a high inter-genotype variability in forest floor reflectance (Fig. 3), mainly in the NIR and MID regions. This behavior is due to the different composition of the forest floor materials (e.g. freshly fallen green or yellowing leaves and dead dry leaves, the proportion of bark and branches, leaf sizes), their structural variability, moisture content and decomposition stage [START_REF] Asner | Variability in leaf and litter optical properties: implications for BRDF model inversions using AVHRR, MODIS, and MISR[END_REF], which directly influence the reflectance. The ANOVA of the leaf reflectance for the bottom, middle and top crown layers in the green, red and NIR bands (Fig. 4) showed no statistically significant difference between crown layers, but there were differences between genotypes when all crown layers were considered.
Therefore, the use of different spectra for the upper, middle and lower parts of the canopy could be unnecessary for simulating reflectance in these wavebands. However, since some genotypes showed different spectra for the upper layer, which could be locally important for TOC simulation, we preferred to keep this detailed description in the simulations. Also, the leaves inside each crown layer are a mixture of development stages (juvenile and mature). Generally there is a gradient of these development stages inside the crown, with more juvenile leaves in the top layer and more mature leaves at the bottom. Mature leaves have more pigments and a higher mass per area than juvenile leaves [START_REF] Stone | Spectral reflectance characteristics of eucalypt foliage damaged by insects[END_REF], as well as a different internal structure, which directly influences the reflectance at visible and NIR wavelengths. However, our results did not clearly show any vertical trend of reflectance between crown layers. The explanation is that the proportion of juvenile leaves in the top layer varies between genotypes, and between trees of different heights.
B. Suitability of DART for TOC simulations
Assessing whether an RTM is suitable for simulating a given ecosystem depends on the objective of the study. In this study, we can distinguish the results according to the level of variability of the observed canopy, i.e., evaluate the degree of precision of DART for simulating i) a "typical" Eucalyptus plantation reflectance, ii) the inter-genotype reflectance variability and iii) the inter-block reflectance variability for the same genotype. Our results showed that the DART model was suitable for simulating Eucalyptus plantations in general, with their very high tree density, tall trunks, bright forest floor, and ellipsoidal crowns (Fig. 5): this is especially underlined by the low MAE obtained for this ecosystem (lower than 2 %). The inter-genotype variability of reflectance comes from the variability of many structural and biochemical parameters of the ecosystem, as represented in Fig. 2 and Fig. 3 (e.g. optical properties of the different components, leaf angles, tree dimensions, etc.). This inter-genotype variability was adequately simulated, as can be seen in Fig. 5 and Table 1, with coefficients of determination > 0.41 for all spectral bands and 0.55 in the NIR band. The largest discrepancy, the systematic bias in the blue band, could come from residual atmospheric effects not properly taken into account in the atmospheric correction of the Pleiades images, which was based on a standard atmospheric parameterization of 6S in the absence of local measurements of atmospheric water, ozone and aerosol contents. Finally, the spatial variability between blocks for a given genotype was not adequately simulated for most genotypes. The average coefficients of determination were very low in all bands when each genotype was considered across all blocks. The spatial variability for a given genotype is more difficult to assess by simulation, mainly because of the low variability existing between these blocks. Therefore, the precision of the simulation is not sufficient to capture this spatial variability. However, some genotypes had higher mortality rates (e.g. G1, G3, G6 and G13), which created large gaps in the canopy and increased the variability to a range that could be simulated (high R² scores).
As a consequence, the use of the DART model in inversion mode for these ecosystems would gain precision if the genotype is already known, and in areas where the proportion of gaps remains low. Moreover, the row orientation could also act as a confounding factor and should be prescribed prior to inversion, i.e. a pre-analysis of row orientations needs to be carried out. In terms of bi-directional TOC reflectance, the comparison between simulated and real satellite scenes of forest stands is a difficult task, since the reflectance image is dominated by the macroscopic properties of the illuminated and shadowed crowns as well as the ground surface [START_REF] Houborg | Combining vegetation index and model inversion methods for the extraction of key vegetation biophysical parameters using Terra and Aqua MODIS reflectance data[END_REF], as illustrated in Fig. 6 at very high resolution. Our results confirm the ability of DART to simulate remote sensing data under several eucalypt forest conditions. Comparisons between DART simulations and forest ecosystem reflectance were also made in [START_REF] Couturier | A modelbased performance test for forest classifiers on remote-sensing imagery[END_REF] and [START_REF] Schneider | Simulating imaging spectrometer data: 3D forest modeling based on LiDAR and in situ data[END_REF]; the main conclusions were that DART showed very low pixel spectral dissimilarity compared with IKONOS images, and an R² of 0.48 for a pixel-wise comparison with the APEX imaging spectrometer, respectively. DART has also been successfully compared with other 3D models through the RAdiative transfer Models Intercomparison (RAMI) exercise [START_REF] Pinty | RAdiation transfer Model Intercomparison (RAMI) exercise: results from the second phase[END_REF], [START_REF] Widlowski | The fourth phase of the radiative transfer model intercomparison (RAMI) exercise: actual canopy scenarios and conformity testing[END_REF] under several conditions. Our results extend the validation of the DART model to a real measured dataset of individualized trees and stands of Eucalyptus plantations, which have particular characteristics (e.g. a high tree density but rather low LAI, a lot of trunk surface but few branches).
C. Source of the inter-genotype reflectance variability
Having tested the suitability of the model to simulate inter-genotype TOC reflectance variability, we sought to determine which of the stand structural or biochemical parameters (LAI, LAD, leaf, bark and forest floor optical properties, tree dimensions and row orientation) most influence the reflectance differences between genotypes (Fig. 7). These parameters were chosen since they are the main input parameters of DART. LAI was one of the most influential parameters for explaining the difference in reflectance between genotypes. Numerous studies have shown that vegetation reflectance is strongly affected by LAI across the entire spectrum, and especially in the NIR [START_REF] Shi | Consistent estimation of multiple parameters from MODIS top ofatmosphere reflectance data using a coupled soil-canopy-atmosphereradiative transfer model[END_REF] - [START_REF] Le Maire | Calibration and validation of hyperspectral indices for the estimation of biochemical and biophysical parameters of broadleaves forest canopies[END_REF]. Leaf reflectance, which in the visible domain reflects differences in leaf pigment contents, was another very important factor driving the canopy reflectance, mainly in the visible region.
These results agree with [START_REF] Xiao | Sensitivity analysis of vegetation reflectance to biochemical and biophysical variables at leaf,canopy, and regional scales[END_REF], which performed a sensitivity analysis of vegetation reflectance and found a greater influence of leaf pigment content in the visible and of LAI in the NIR region at the canopy scale. They also showed a weak effect of leaf angle at this scale. The crown dimensions also explained part of the difference in TOC reflectance between genotypes (Fig. 7), as shown in other studies [START_REF] Rautiainen | The effect of crown shape on the reflectance of coniferous stands[END_REF]. This variable, jointly with the row azimuth, mainly drives the proportion of visible soil between rows and the proportion of shaded/illuminated crowns in the image. The presence of empty spaces (dead trees) in some of the plots further increased this heterogeneity, which also increased the contribution of this parameter to the inter-genotype and spatial variability of TOC reflectance. Some of the parameters tested here had only a moderate effect on the simulated TOC reflectance, as is the case for bark and forest floor reflectance. Therefore, average values could be chosen for these parameters, which would simplify further inversions. In contrast, TOC reflectance showed high sensitivity to LAI, leaf reflectance, tree dimensions and row azimuth. It therefore seems important to perform genotype-specific inversions in the future, or to group genotypes according to their crown dimensions. Also, knowledge of the row orientation will be critical for inversion purposes. A further step will be to simulate a comprehensive database covering the growth stages of different eucalypt genotypes, and to use this database to estimate variables such as LAI or chlorophyll content through inversion procedures. Our first sensitivity analysis can further help distinguish inversion errors coming from the model itself from those coming from the inversion methodology (algorithm, constraints, etc.). These sensitivity analysis results confirm the relevance of using 3D models such as DART, as they are particularly suitable for making explicit the influence of tree shape, leaf pigments and plot heterogeneity on the canopy reflectance of different genotypes and row orientations.
VI. CONCLUSION
In this study we tested the ability of the DART model to simulate Eucalyptus plantation reflectance, its differences between genotypes, and its differences between plots for a given genotype. DART was reliable for eucalypt plantation simulation in general, and adequately simulated the differences in reflectance between 16 genotypes, including the most widely planted ones in the region and some particular genotypes (e.g. G16: E. camaldulensis × grandis). However, the local difference in reflectance was correctly simulated only when the range of TOC reflectance was high for a given genotype, which occurred mainly through local mortality. The difference in TOC reflectance between genotypes in the visible bands is mainly explained by differences in LAI, leaf optical properties and row orientation. In the NIR, the same parameters influence the TOC reflectance, together with the tree dimensions. Leaf angles, bark and forest floor reflectance have a smaller effect in comparison to the other parameters, although their inter-genotype variability was large. The successful test of DART in forward mode for simulating the TOC reflectance of these different genotypes opens possibilities for parameter estimation through model inversion procedures for eucalypt plantations.
FIGURES
Fig. 2. Main stand structural characteristics (diameter at breast height (DBH), tree height, tree leaf area, leaf area index (LAI), crown length, crown diameter, leaf inclination angle and mortality) of the 16 genotypes on May, 2014. Mortality represents the percent of dead trees in each block per genotype. Lines inside boxes are the median values, inferior and superior box limits are the first and third quartiles, respectively; and error bars outside boxes extend from minimum and maximum values within 3 standard deviations. Variability considered here is the tree-scale variability considering all blocks. Mortality and LAI variability is inter-block variability.
Fig. 4. Leaves reflectance in the green, red and near infrared regions at bottom, middle and top crown layer for the 16 genotypes (labeled as G1 to G16).
Fig. 5. DART (light gray) and Pleiades (dark gray) mean top of canopy (TOC) reflectance of four bands (B=blue, G=green, R=red, NIR=near infrared) for each genotype averaged for all blocks and subplots. Lines in each bar represent the standard deviation for blocks.
Fig. 6. Example of near infrared DART simulated scene with 0.50 m (a) and 2 m (b) of spatial resolution, panchromatic Pleiades image (c) with 0.50 m and near infrared Pleiades image with 2 m of spatial resolution for the genotype 14 in the block 4.
Fig. 7. Sensitivity analysis of the reflectance in blue, green, red and near infrared bands relative to stand parameters (respectively, LAI, LAD, leaf, bark and forest floor reflectance, trees dimensions and row azimuth). Boxplot definition is given in Fig. 2. Dashed green line represents the TOC reflectance of the genotype 3 (reference). Numbers above each boxplot are the standard deviation. Red crosses are the outlier values.

TABLE 1. MEAN ABSOLUTE ERROR (MAE), SYSTEMATIC ERROR (BIAS), ROOT MEAN SQUARE ERROR (RMSE) AND DETERMINATION COEFFICIENT (R2) FOR SIMULATED BANDS (BLUE, GREEN, RED AND NIR) IN RELATION TO PLEIADES BANDS, AVERAGED BY GENOTYPE AND BLOCK. R2 OF GENOTYPES (MIN. - MEAN - MAX.) IS THE MINIMUM, MEAN AND MAXIMUM R2 VALUE IN EACH BAND FOR THE GENOTYPES, COMPUTED ON THE INTER-BLOCK VARIABILITY.
Band    MAE      BIAS      RMSE      R2     R2 of genotypes (min. - mean - max.)
Blue    0.0180   -0.0180   0.00106   0.41   0.0003 - 0.11 - 0.79
Green   0.0063   -0.0063   0.00223   0.43   0.0003 - 0.12 - 0.88
Red     0.0170   -0.0170   0.00104   0.51   0.0003 - 0.12 - 0.75
NIR     0.0194   -0.0044   0.02200   0.55   0.0023 - 0.28 - 0.91

ACKNOWLEDGMENT
This study was funded by Forestry Science and Research Institute - IPEF (EUCFLUX project - funded by Arcelor Mittal, Cenibra, Copener, Duratex, Fibria, International Paper, Klabin, Suzano and Vallourec), the HyperTropik project (TOSCA program grant of the French Space Agency, CNES), CNPq, CAPES and the Centre de Coopération Internationale en Recherche Agronomique pour le Développement - CIRAD. The Pleiades image was acquired in the frame of the GEOSUD program, a project (ANR-10-EQPX-20) of the program "Investissements d'avenir" managed by the French National Research Agency.
We are grateful to Eloi Grau (IRSTEA) for support in running the program, José Guilherme Fronza for field work, and the staff at the Itatinga Experimental Station, in particular Rildo Moreira e Moreira (ESALQ, USP) and Eder Araujo da Silva (http://www.floragroapoio.com.br) for their technical support.
01669806
en
[ "phys.phys.phys-ins-det" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01669806/file/Forward%20scattering%20effects%20on%20muon%20imaging_accepted.pdf
H Gómez email: [email protected] D Gibert C Goy K Jourde Y Karyotakis S Katsanevas J Marteau M Rosas-Carbajal A Tonazzo
Forward scattering effects on muon imaging
Abstract: Muon imaging is one of the most promising non-invasive techniques for density structure scanning, especially for large objects reaching the kilometre scale. It already has interesting applications in different fields such as geophysics or nuclear safety and has been proposed for others such as engineering or archaeology. One of the approaches of this technique is based on the well-known radiography principle, reconstructing the incident direction of the detected muons after they cross the studied object. In this case, muons detected after a previous forward scattering on the object surface represent an irreducible background noise, leading to a bias in the measurement and consequently in the reconstruction of the object's mean density. Therefore, a prior characterization of this effect provides valuable information to conveniently correct the obtained results. Although the muon scattering process has already been theoretically described, a general study of this process has been carried out based on Monte Carlo simulations, resulting in a versatile tool to evaluate this effect for different object geometries and compositions. As an example, these simulations have been used to evaluate the impact of forward scattered muons on two different applications of muon imaging, archaeology and volcanology, revealing a significant impact in the latter case. The general way in which all the tools have been developed allows equivalent studies to be made in the future for other muon imaging applications following the same procedure.
Introduction
The idea to use muons produced in the Earth's atmosphere by cosmic rays as a scanning method for anthropic or geological structures, the so-called muon imaging, was proposed soon after the discovery of these muons [START_REF] Neddermeyer | Note on the nature of cosmic-ray particles[END_REF][START_REF] Neddermeyer | Cosmic-ray particles of inter-mediate mass[END_REF][START_REF] Auger | Les rayons cosmiques[END_REF]. Muon imaging leverages the capability of cosmic muons to pass through hundreds of metres or even kilometres of ordinary matter with an attenuation mainly related to the length and density of the matter encountered by the muons along their trajectory before their detection [START_REF] Nagamine | Introductory Muon Science[END_REF]. As this attenuation is principally caused by muon absorption and scattering, muon imaging can be performed using two main techniques. The first one is the so-called transmission and absorption muography [START_REF] Lesparre | Geophysical muon imaging: feasibility and limits[END_REF][START_REF] Marteau | Muons tomography applied to geosciences[END_REF]. This technique relies on the well-known radiography concept (widely used, for example, in medicine with X-rays), based on the muon energy loss and the consequent probability of a muon crossing a given amount of material. The second is known as deviation muography, which relies on the measurement of the muon track deviation to determine the object density [START_REF] Borozdin | Radiographic imaging with cosmic-ray muons[END_REF][START_REF] Procureur | Muon imaging: Principles, technologies and applications[END_REF]. For the first technique, by studying all the directions in which muons go through the studied object, and knowing its external shape, it is possible to obtain a 2D mean density image.
Thus, muon imaging provides a non-invasive and remote scanning technique usable even for large objects, where the detection set-up may be relatively far away from the (potentially dangerous) target (e.g. domes of active volcanoes or damaged nuclear reactors). One of the first studies based on muon imaging dates from 1955: the scanning of the rock overburden over a tunnel in Australia [START_REF] George | Cosmic rays measure overburden of tunnel[END_REF]. Later, other applications, from mining [START_REF] Malmqvist | Theoretical studies of in-situ rock density determination using cosmic-ray muon intensity measurements with application in mining geophysics[END_REF] to archaeology, were proposed, both in the 70s. For the latter case, some measurements have already been performed, such as the exploration of the Egyptian Chephren [START_REF] Alvarez | Search for hidden chambers in the pyramids using cosmic rays[END_REF] and Khufu [START_REF] Morishima | Discovery of a big void in Khufu's Pyramid by observation of cosmic-ray muons[END_REF] pyramids. Nowadays, thanks to the improvements in detector performance, and also in detector autonomy and portability, muon imaging reveals itself to be a scanning technique competitive with and complementary to other non-invasive methods such as seismic and electrical resistivity tomography or gravimetry. This has led to its proposal and utilisation in a wide range of fields. In addition to the above-mentioned applications (archaeology and mining), two others stand out. The first one is related to geophysics, more precisely to the monitoring of volcanoes. This has an important benefit both from a scientific and a social point of view. The continuous monitoring of volcanoes helps to understand their internal dynamics, a key feature in risk assessment. The other application, more related to particle physics, was motivated by the necessity to characterize the overburden of underground laboratories hosting various experiment detectors. It is worth mentioning other applications related to civil engineering and nuclear safety. For the first one, it will be possible for example to scan structures looking for defects. For the case of nuclear safety, set-ups looking for the transport of radioactive materials and wastes already work in cooperation with homeland security agencies. Moreover, the study of nuclear reactors looking for structural damage has already been carried out, as in the case of the recent Fukushima nuclear power plant accident [START_REF] Kume | Muon trackers for imaging a nuclear reactor[END_REF], and it is being considered as a remote scanning method. As mentioned, the improvement of the detectors used for muon imaging has been one of the main reasons for the renewal of this technique. Better detectors provide a better angular resolution for the muon direction reconstruction and improve the precision of the density radiography. Nonetheless, background muon flux rejection remains a key procedure for structural imaging with muons. An important potential noise source, especially in measurements based on transmission and absorption muography, is the forward scattering of low energy muons on the object surface, which then reach the detector. This effect mimics through-going particles since the reconstructed direction of the scattered particle points towards the target.
The result is an increase of the total number of detected particles, as if the target's opacity were lower than its actual value, leading to a systematic underestimation of the density [START_REF] Nishiyama | Monte Carlo simulation for background study of geophysical inspection with cosmic-ray muons[END_REF][START_REF] Rosas-Carbajal | Three-dimensional density structure of La Soufrière de Guadeloupe lava dome from simultaneous muon radiographies and gravity data[END_REF]. Being produced by muons, these events cannot be rejected by particle identification techniques and therefore represent an irreducible background. For this reason, an evaluation of the magnitude of this effect is mandatory to conveniently correct the reconstructed object density. In this work, a general evaluation of the forward scattering of muons has been performed by Monte Carlo simulations. The aim was to develop a versatile tool able to evaluate this process for different object geometries and compositions, given the increasing number of proposed applications based on muon tomography. The main features and results are presented in section 2. The impact of this process on the muon imaging capability has then been evaluated by defining a signal to background parameter. Two physics cases have been studied in section 3. The first one concerns an archaeological target, the Apollonia tumulus near Thessaloniki in Greece, and the second one La Soufrière volcano in the Guadeloupe Islands of the Lesser Antilles. Finally, a summary of the different results and the main conclusions extracted from them is compiled in section 4.
Evaluation of the forward scattering of muons
As mentioned in the introduction, low-energy cosmic muons can change their original direction after interacting with the target or any other object in the surroundings before their detection. As muon imaging is based on the reconstruction of the detected muons' directions, these muons would distort the measurement. As a consequence, the determination of the target's internal structure and the corresponding reconstructed mean density will be affected. The muon trajectory deviation is mainly driven by the interaction with matter via multiple Coulomb scattering. The resulting deflection angular distribution, theoretically described by the Molière theory [START_REF] Bethe | Molière's Theory of Multiple Scattering[END_REF], roughly follows a Gaussian,
$\frac{dN}{d\alpha} = \frac{1}{\sqrt{2\pi}\,\alpha_{MS}} \, e^{-\frac{\alpha^2}{2\alpha_{MS}^2}}$ (2.1)
which is centred at zero (i.e. no deflection) and has a standard deviation $\alpha_{MS}$:
$\alpha_{MS} = \frac{13.6\,\mathrm{MeV}}{\beta c p} \, Q \, \sqrt{\frac{x}{X_0}} \left(1 + 0.038 \ln(x/X_0)\right)$ (2.2)
where $\beta$ is the relativistic factor, $p$ the muon momentum in MeV/c, $x$ the material thickness and $Q$ the absolute electric charge of the muon. $\alpha_{MS}$ also depends on the radiation length ($X_0$), which is empirically given by
$X_0 \approx \frac{716.4\,\mathrm{g/cm^2}}{\rho} \, \frac{A}{Z(Z+1)\ln(287/\sqrt{Z})}$ (2.3)
with $Z$ and $A$ the atomic and mass numbers respectively and $\rho$ the material density. This reveals the relationship of the multiple Coulomb scattering with the properties of the studied material. Different works (see for example [START_REF] Schenider | Coulomb scattering and spatial resolution in proton radiography[END_REF]) provide analytical solutions for the angular distribution of deflected muons after traversing an object with a given geometry and composition. Besides, other relevant features, such as the higher scattering probability for lower energy muons, are also demonstrated in these studies.
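As a rough numerical illustration of equations (2.2) and (2.3), the short Python sketch below evaluates the expected Gaussian width of the scattering angle for a low-energy muon in rock. The effective Z and A of rock, and the example momentum and thickness, are illustrative assumptions and not values taken from the paper.

```python
import math

def radiation_length_cm(Z, A, rho):
    # Eq. (2.3): radiation length in g/cm^2, converted to cm with the density rho [g/cm^3].
    x0_g_cm2 = 716.4 * A / (Z * (Z + 1.0) * math.log(287.0 / math.sqrt(Z)))
    return x0_g_cm2 / rho

def alpha_ms_rad(p_mev, x_cm, x0_cm, beta=1.0, Q=1.0):
    # Eq. (2.2): Gaussian width of the multiple-scattering angle, in radians
    # (beta*c*p is approximated by p for a relativistic muon).
    t = x_cm / x0_cm
    return (13.6 / (beta * p_mev)) * Q * math.sqrt(t) * (1.0 + 0.038 * math.log(t))

# Example: a ~1 GeV/c muon grazing ~20 cm of rock (assumed effective Z~11, A~22, rho=2.5 g/cm^3).
x0 = radiation_length_cm(Z=11.0, A=22.0, rho=2.5)
print(math.degrees(alpha_ms_rad(p_mev=1000.0, x_cm=20.0, x0_cm=x0)))  # about 1 degree
```

Larger deflections, such as those seen in the simulations described below, correspond to lower momenta and longer grazing paths.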
However, the increasing number of different applications proposed for muon tomography implies a large variety of object dimensions, shapes and compositions, making it less evident to obtain an analytical estimation of the forward scattering process suitable for all these cases. In this context, Monte Carlo simulations represent a useful tool for the study of the muon scattering process, versatile enough to adapt to the main features of each particular case. As a first step in the development of these simulations, a general evaluation of the muon forward scattering has been carried out using the Geant4 simulation tool-kit [START_REF] Agostinelli | GEANT4: A Simulation toolkit[END_REF]. It allows the simulation of the 3D muon transport through the defined geometry, taking into account the energy loss and trajectory variations due to multiple Coulomb scattering as well as to ionization, bremsstrahlung, pair production and multiple inelastic scattering. Considering these possible processes, the results can be compared with the estimations given by the analytical formulas mentioned above. A scheme of the simulated set-up is shown in figure 1. In this set-up, generated muons are thrown at a fixed point on a standard rock surface (with a density of 2.5 g/cm³). For scattered muons, the direction changes, in zenith and/or azimuth angle, can then be evaluated. A first set of simulations was performed in order to evaluate the general features of the muon forward scattering. In the previously described set-up, muons up to 10 GeV, with a zenith incident angle (θ_det^ini) between 70° and 90° and an azimuth incident angle ϕ_det^ini = 0°, were generated. It is worth mentioning that, by the set-up definition, θ_det^ini = 0° implies muons perpendicular to the rock surface, while θ_det^ini = 90° corresponds to tangential ones. Figure 2 summarizes the results of this general simulation, leading to some conclusions about the muon forward scattering studied in these simulations. First, it is observed that this process is negligible if the muon energy is higher than 5 GeV, independently of the incident direction. For the lower energy muons, most of the "efficient" scattering processes (i.e. when the scattered particle exits the medium) occur if θ_det^ini is higher than 85° and do not occur if θ_det^ini is lower than 80°. That means that only low energy muons with incident directions close to the surface tangent are likely to be scattered on the object surface and to induce a signal in the detector. For these muons the angular deviation can reach up to 25°, both for the zenith and azimuth angles. By the simulation set-up definition, only the azimuth scattering angle (∆ϕ_det) has been registered over the whole angular range. As presented in figure 2, the ∆ϕ_det distribution for all the muon energies considered is in agreement with the Gaussian predicted by Molière theory, and the other extracted conclusions also agree with the analytical predictions [START_REF] Patrignani | Review on Particle Physics[END_REF]. Taking into account this general information, and having checked the agreement between the general simulations and the analytical predictions, a more detailed simulation, optimizing the initial muon sampling, was performed. The objective was to establish a probability density function (PDF) with which to estimate the background due to forward scattered muons that could be detected during a muon imaging measurement and should be considered in the image analysis.
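To make this step concrete, the sketch below shows one possible way of tabulating such a discretized scattering PDF from the output of the simulation; the array names, the loading step and the binning are illustrative assumptions, not the actual implementation used by the authors.

```python
import numpy as np

# Hypothetical per-muon output of the Geant4 runs, for muons that exit the rock:
# initial zenith angle (deg), final zenith angle (deg) and initial energy (GeV).
# theta_ini, theta_fin, e_ini = read_simulation_output(...)   # not shown

theta_ini_edges = np.arange(85.0, 90.5, 0.5)   # incident-angle bins
theta_fin_edges = np.arange(0.0, 91.0, 1.0)    # outgoing-angle bins
energy_edges    = np.arange(0.0, 5.5, 0.5)     # 0.5 GeV energy windows

def build_scattering_pdf(theta_ini, theta_fin, e_ini, n_thrown):
    """Return P(theta_ini, theta_fin, E): for each (incident angle, energy) cell,
    the probability that a generated muon exits with a given final zenith angle.

    n_thrown[i, k] is the number of muons generated in incident-angle bin i and
    energy bin k, known from the (flat) generation scheme."""
    counts, _ = np.histogramdd(
        np.column_stack([theta_ini, theta_fin, e_ini]),
        bins=[theta_ini_edges, theta_fin_edges, energy_edges])
    return counts / n_thrown[:, None, :]
```

Normalising by the number of generated muons per cell, rather than by the number of exiting muons, keeps the absolute scattering probability, which is what enters the background estimate later on.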
With this aim, 10^8 muons homogeneously distributed up to 5 GeV and with θ_det^ini between 85° and 90° (all with ϕ_det^ini = 0°) were generated and simulated in the described Geant4 framework. The generated PDF provides a probability value P(θ_det^ini, θ_det^fin, E_µ^ini) depending on the initial and final zenith angles and the initial muon energy. A summary plot of the generated PDF, divided into 0.5 GeV energy windows, is shown in figure 3. At this point it is worth mentioning that, for the studies presented in this work (summarized in section 3), the considered compositions of the studied objects are the standard rock used to generate the PDF, but also a definition of soil with a composition and density different from the rock (ρ = 2.2 g/cm³). Moreover, there exist several types of rocks and soils with different compositions and densities, typically between 2.0 and 2.5 g/cm³. For this reason the influence of these two parameters on the PDF generation has been evaluated: a set of dedicated simulations has been performed changing the composition and the density of the target to compare their results. The obtained PDFs, including the standard soil case, agree to better than 97 %. Thus, the PDF presented in figure 3 has been used for all the studies.
Signal to background ratio estimations
The impact of the forward scattered muons in an imaging measurement of a particular object can be evaluated based on simulations such as those presented in section 2. This impact can be expressed as a signal to background ratio (S/B) for a given direction (θ_z, ϕ_z). These spherical coordinates are centred at the detector, where θ_z = 0° is the vertical direction and ϕ_z = 0° points to the main axis of the studied object. The signal S(θ_z, ϕ_z) is estimated as the non-scattered muon flux, i.e. muons whose reconstructed direction corresponds to their initial one. The background B(θ_z, ϕ_z) represents the scattered muons for which the reconstructed direction points towards the target. As mentioned, these evaluations allow the study of a particular object, with its corresponding composition. For this it is necessary to know its external shape, to assume the object's mean density (since this is the observable that can be extracted from a muon tomography measurement), and to determine the muon detector position with respect to this object. This allows the estimation of the object length traversed by muons for each direction, as well as the positions of its surfaces with respect to the detector and to the Earth's surface (required for the determination of θ_z and ϕ_z). For this work two cases have been considered, corresponding to two applications of muon imaging: archaeology and volcanology. For the first one, a Macedonian tumulus located near Apollonia (Greece) has been studied [START_REF] Gómez | Studies on muon tomography for archaeological internal structures scanning[END_REF].
For the second, La Soufrière volcano (Guadeloupe island in the Lesser Antilles), already explored by muon imaging, has been taken as reference [START_REF] Jourde | Experimental detection of upward going cosmic particles and consequences for correction of density radiography of volcanoes[END_REF][START_REF] Jourde | Muon dynamic radiography of density changes induced by hydrothermal activity at the La Soufrière of Guadeloupe volcano[END_REF].
Archaeology: Apollonia tumulus
As noted in section 1, the exploration of archaeological structures is one of the applications for which muon imaging has been proposed, since it is non-invasive and does not induce any harmful signals (contrary, for example, to the vibrations used in seismic tomography). Already suggested in the 60s [START_REF] Alvarez | Search for hidden chambers in the pyramids using cosmic rays[END_REF], there exist at present different projects based on muon imaging devoted to the study of the internal structure of archaeological constructions (see for example [START_REF] Morishima | Discovery of a big void in Khufu's Pyramid by observation of cosmic-ray muons[END_REF][START_REF]ScanPyramids project[END_REF]). The ARCHé project proposes to scan the Apollonia Macedonian tumulus [START_REF] Gómez | Muon imaging for archaeological applications: feasibility studies of the Apollonia Macedonian Tumulus[END_REF]. These tumuli are man-made burial structures where the tomb, placed on the ground, is covered by soil, creating a mound which can also contain internal corridors. The geometry and dimensions of these tumuli are variable, but they can always be approximated by a truncated cone. In the case of the Apollonia tumulus, its height is 17 m while the radii of the base and the top are 46 m and 16 m respectively. With this geometry the slope angle of the lateral surface of the tumulus is 29.5°. In the present study a standard soil composition, with a density of 2.2 g/cm³, has been assumed. The detector has been placed 4 m from the edge of the tumulus base (50 m from the tumulus base centre), so muons with zenith angles θ_z > 63.4° are those which will provide information about the structure of the tumulus. With all these features the signal and background, S(θ_z, ϕ_z) and B(θ_z, ϕ_z) respectively, can be estimated for a given direction (θ_z, ϕ_z). As already described, these coordinates are centred at the detector; θ_z = 0° corresponds to vertical muons, while ϕ_z = 0° points towards the centre of the tumulus base. From the knowledge of the external shape, it is possible to determine the length of tumulus traversed by muons for a given direction, L(θ_z, ϕ_z), and thus the corresponding opacity as the product of this length by the density (ϱ = L × ρ). The minimal muon energy (E_min) required to cross a target of opacity ϱ can be calculated as
$E_{min} = \frac{a}{b} \left( e^{b \varrho} - 1 \right)$ (3.1)
where a(E) and b(E) represent the energy loss coefficients due to ionization and radiative losses respectively. In this case, the coefficients corresponding to standard rock summarized in [START_REF] Patrignani | Review on Particle Physics[END_REF] have been used, obtaining E_min values as a function of ϱ. As a cross-check, these E_min values have also been estimated from the CSDA range values of standard rock [START_REF] Groom | Muon stopping-power and range tables[END_REF]. The agreement between both methods is better than 95 %.
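As an illustration of equation (3.1), the following short sketch converts a path length and density into an opacity and the corresponding minimum crossing energy. The constant a and b values are only indicative round numbers for rock, not the energy-dependent coefficients actually used by the authors.

```python
import math

def e_min_gev(opacity_g_cm2, a=2.2e-3, b=4.0e-6):
    """Eq. (3.1): minimum muon energy [GeV] needed to cross a given opacity [g/cm^2].

    a [GeV cm^2/g] and b [cm^2/g] are treated here as constants for simplicity."""
    return (a / b) * (math.exp(b * opacity_g_cm2) - 1.0)

# Example: ~100 m of soil at 2.2 g/cm^3 gives an opacity of 2.2e4 g/cm^2.
opacity = 100.0 * 100.0 * 2.2          # length [cm] x density [g/cm^3]
print(e_min_gev(opacity))              # of order a few tens of GeV
```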
Hence, the expected signal S(θ_z, ϕ_z) corresponds to the muon flux in the studied direction with energies higher than E_min:
$S(\theta_z, \phi_z) = \int_{E_{min}}^{\infty} \phi_\mu(E, \theta_z, \phi_z)\, dE$ (3.2)
To compute the background due to muons forward scattered into the same direction, B(θ_z, ϕ_z), two assumptions have been made. First, a point-like detector is considered. This implies that for each scattering point on the tumulus surfaces there is a unique final direction reaching the detector. Second, scattering effects in the azimuth angle are neglected. Since the general muon scattering studies (section 2) show that these effects are symmetric and mostly below 5° for the azimuth angle (see figure 2), a low influence on the overall estimation is expected, with fade-out effects among the different azimuth directions. With these two assumptions, B(θ_z, ϕ_z) corresponds to the product of the initial flux of muons which can be scattered and the corresponding probability of being scattered with a final zenith angle θ_z. As already shown, only muons up to 5 GeV with an incident zenith angle higher than 85° with respect to the surface normal need to be considered for the forward scattering studies. This delimits the energy and zenith angle ranges used to estimate the initial muon flux. The scattering PDF, P(θ_det^ini, θ_det^fin, E_µ^ini), corresponds to the one presented in figure 3. This PDF was generated using the coordinates θ_det - ϕ_det, centred at the scattering point and orthogonal to the surface. In order to use this PDF with the θ_z - ϕ_z coordinates, it is necessary to define the relationship between θ_det and θ_z, which is given by θ_det = α + θ_z, where α represents the elevation angle of the scattering surface (that is, with respect to the Earth's surface). The θ_det, θ_z and α angles are presented in figure 4. For the case ϕ_z = 0°, α corresponds to the slope of the lateral surface. For the cases where ϕ_z ≠ 0°, it is estimated from the tangent to the tumulus surface at the scattering point. Thus, the expected background B(θ_z, ϕ_z) is calculated as:
$B(\theta_z, \phi_z) = \int_{E=0}^{5} \int_{\theta=85-\alpha}^{90-\alpha} P(\theta, \alpha + \theta_z, E)\, \phi_\mu(E, \theta, \phi_z)\, dE\, d\theta$ (3.3)
For the different muon flux calculations required to obtain S(θ_z, ϕ_z) and B(θ_z, ϕ_z), the parametrization proposed in [START_REF] Shukla | Energy and angular distributions of atmospheric muons at the Earth[END_REF] has been used, corresponding to:
$\phi_\mu(\theta, E) = I_0 \, (n-1) \, E_0^{n-1} \, (E_0 + E)^{-n} \left(1 + \frac{E}{\epsilon}\right)^{-1} D(\theta)^{-(n-1)}$ (3.4)
$D(\theta) = \sqrt{\frac{R^2}{d^2} \cos^2\theta + 2\frac{R}{d} + 1} \; - \; \frac{R}{d} \cos\theta$ (3.5)
where the experimental parameters, summarized in table 1 together with other constants used in the equations, have been obtained from fits to different experimental measurements. This parametrization provides an analytical formula for the muon flux estimation valid for low energy muons and high incident zenith angles. With all these ingredients the S/B ratio for the Apollonia tumulus has been calculated by scanning the ϕ_z range in 10° steps and the corresponding θ_z values for each case in 1° steps. The results of these calculations are summarized in figure 5 as a function of θ_z and the opacity ϱ, which is a more significant variable than ϕ_z since the muon flux is basically independent of the azimuth angle. The main conclusion is that for all the studied directions the S/B ratio is higher than 73.9, which means that at most 1.3 % of the detected muons have been previously scattered.
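A minimal numerical sketch of equations (3.2)-(3.5) is given below. The parameter values are only indicative round numbers of the kind obtained in such fits; the actual values used by the authors are those of their table 1, and the pdf function is assumed to be an interpolator over the tabulated scattering PDF of figure 3.

```python
import numpy as np

# Indicative values (vertical intensity I0, spectral index n, E0, epsilon, and R/d);
# the values actually used in the paper are taken from its table 1 and may differ.
I0, N, E0, EPS, R_D = 70.0, 3.0, 4.3, 850.0, 174.0

def D(theta_rad):
    """Zenith-angle path-length factor of eq. (3.5)."""
    c = np.cos(theta_rad)
    return np.sqrt((R_D * c) ** 2 + 2.0 * R_D + 1.0) - R_D * c

def muon_flux(E_gev, theta_rad):
    """Differential muon flux of eq. (3.4)."""
    return (I0 * (N - 1.0) * E0 ** (N - 1.0) * (E0 + E_gev) ** (-N)
            * (1.0 + E_gev / EPS) ** (-1.0) * D(theta_rad) ** (-(N - 1.0)))

def signal(e_min_gev, theta_z_rad, e_max=1e3, n_pts=5000):
    """Eq. (3.2): flux of muons energetic enough to cross the target."""
    E = np.linspace(e_min_gev, e_max, n_pts)
    return np.trapz(muon_flux(E, theta_z_rad), E)

def background(theta_z_deg, alpha_deg, pdf, n_e=100, n_th=100):
    """Eq. (3.3): forward-scattered flux reconstructed in the direction theta_z.

    pdf(theta_ini_deg, theta_fin_deg, E_gev) is an assumed (vectorized) interpolator
    over the tabulated scattering probability of figure 3."""
    E = np.linspace(0.0, 5.0, n_e)
    th = np.linspace(85.0 - alpha_deg, 90.0 - alpha_deg, n_th)
    TH, EE = np.meshgrid(th, E, indexing="ij")
    integrand = pdf(TH, alpha_deg + theta_z_deg, EE) * muon_flux(EE, np.radians(TH))
    return np.trapz(np.trapz(integrand, E, axis=1), th)
```

The S/B ratio for a given direction then follows directly as signal(...) divided by background(...).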
It is observed that the directions with the lowest S/B values are those with high θ_z values. For these directions lower signal values are expected, since they correspond to the most horizontal ones (where the muon flux is lower) and, due to the tumulus geometry, these are the cases for which a longer tumulus length is traversed. Actually, for directions with θ_z lower than 85°, the S/B ratio is always higher than 254.8, reducing the contribution of the scattered muons to the total detected to less than 0.4 %. In this region, the obtained S/B values can be considered homogeneous. Differences between directions are basically associated with the uncertainties in the muon scattering PDF. As mentioned in section 2, even though the PDF used was generated with a target material other than the assumed tumulus composition, this has a limited effect on the results. This leads to the conclusion that muons forward scattered on the object surface do not significantly influence the results of muon imaging for the case of tumuli and, by extension, of other objects with similar dimensions and composition.
Volcanology: La Soufrière
The use of muon imaging for the scanning of volcanoes is another application of this technique, which implies the study of objects with larger dimensions than for archaeology. With this purpose, some projects have already performed measurements at different locations. One of them is the DIAPHANE collaboration [START_REF]Diaphane project[END_REF], which surveys La Soufrière volcano, paying special attention to possible variations of the inner liquid/vapour content that can be related to the hydrothermal system dynamics. For this work, La Soufrière volcano has been taken as reference to study the impact of the forward scattered muons on the muon imaging reconstruction in volcanology. As for the case of tumuli, the volcano geometry can also be approximated by a truncated cone. Based on the topographic plan of La Soufrière, its dimensions correspond to a height of 460 m and base and top radii of 840 m and 160 m respectively. These dimensions lead to a lateral surface with a slope angle of 34.1°. In this case a homogeneous composition of standard rock has been considered, with a density of ρ = 2.5 g/cm³, together with 3 different detector positions corresponding to real measurement points of the DIAPHANE project. They are labelled h-270, h-170 and h-160 respectively, according to the heights at which they are placed. These positions are summarized in table 2, taking as reference the centre of the volcano base. The main differences among the positions are the distance between the detector and the volcano, going from 5 to 25 m approximately, and the height with respect to the volcano base, which has a direct influence on the length of the volcano traversed by muons before their detection and, consequently, on the signal S(θ_z, ϕ_z) computation. Both tumulus and volcano have been approximated by the same geometrical shape with their corresponding dimensions. So for volcanoes, the procedure to determine the S/B ratio for different incident directions is equivalent to that described in section 3.1 for the tumulus case. The only difference is that for this case the assumed density is ρ = 2.5 g/cm³ instead of ρ = 2.2 g/cm³ (corresponding to the standard soil), which affects the opacity estimation. Nevertheless, this density variation is expected to have a reduced impact on the results, as estimated in section 2.
As for the tumulus case, the S/B ratio has been evaluated by scanning the ϕ_z range in 10° steps and the corresponding θ_z values for each case in 1° steps. Results have also been represented with respect to θ_z and the opacity ϱ. They are summarized for the three different detector positions in figure 6. For the three cases the S/B ratio takes values significantly lower than for the tumulus, although the corresponding distributions present similar features. For example, the S/B values for directions with θ_z > 85° are again lower than for the rest of the directions. Moreover, for all detector positions, directions with low opacity (corresponding to the volcano contour) present systematically higher values of S/B than those directions pointing to the bulk of the volcano. Focusing on each detector, for the h-270 case, incident directions with high θ_z have S/B ratio values in general below 1, which implies that it is possible to detect more forward scattered muons than muons emerging from the volcano, significantly influencing the object density reconstruction. At the other extreme, for the directions where the opacity is smaller than 50×10³ g/cm², the S/B ratio takes values higher than 5, so no more than 17 % of the detected muons have been previously scattered. If we consider all the other directions, with θ_z < 85° and ϱ > 50×10³ g/cm², the S/B distribution is more homogeneous, with a mean value of 2.7 and a standard deviation of 1.4. That means that on average about 27 % of the detected muons are low energy forward scattered muons. In this case, the scattered muons have a significant impact on the volcano density reconstruction. Assuming the percentage of scattered muons constant for all the scanned directions, and estimating the uncertainty of this percentage from the standard deviation of the S/B mean value, this would imply that the reconstructed density should be corrected by multiplying it by a factor $1.4^{+0.4}_{-0.1}$. Results for the detector positions labelled h-170 and h-160 are similar to each other. This suggests that the S/B ratio depends more on the detector height with respect to the volcano base (170 m for h-170 and 160 m for h-160) than on the distance between the detector and the volcano (6.72 m for h-170 and 23.99 m for h-160). Since both detectors are placed lower than in the h-270 case, the mean volcano length traversed by non-scattered muons is longer in these cases. This leads to smaller S/B ratios, mainly because of lower S(θ_z, ϕ_z) values. As mentioned, the features of the distribution are equivalent: smaller S/B values for θ_z > 85° and higher values for low opacities. If only muon directions with θ_z < 85° and ϱ > 50×10³ g/cm² are considered, a mean S/B value of 1.1 is obtained for both cases, with a standard deviation of 0.9 and 1.0 for the h-170 and h-160 detector positions respectively. These values reveal a high influence of the low energy forward scattered muons on the overall detection (almost half of the detected muons have been previously scattered). Keeping the assumption of a constant S/B value for all the considered directions, the density correction factors are $1.9^{+4.1}_{-0.4}$ for the h-170 position and $1.9^{+9.1}_{-0.4}$ for the h-160 case. Summarizing, for the case of volcanoes, where the length of material to be traversed by muons is longer than for archaeology, the forward scattering of low energy muons and their subsequent detection has a clear influence on the results of the muon imaging.
The three studied scenarios, and the defined geometry of the volcano as a truncated cone, reveal that the S/B ratio mainly depends on the length of material traversed by non-scattered muons, those considered for the S(θ_z, ϕ_z) computation.
Figure 6. Distribution of the ratio between the non-scattered and scattered detected muons (defined as S/B in the text) with respect to the reconstructed zenith incident angle (θ_z) and the opacity (ϱ). The distribution is shown for the 3 detectors installed at La Soufrière volcano. Numbers correspond to the bin value and have been placed next to the corresponding bin to ease their reading.
Moreover, for a fixed detector position, S/B can be considered homogeneous for all the incident directions corresponding to the volcano bulk volume, so a global correction factor for the reconstructed density can be applied. The main source of uncertainty in the S/B ratio estimation comes from the uncertainties associated with the PDF and consequently with B(θ_z, ϕ_z). For this reason, as deduced for the h-270 detection position, a higher S/B mean value translates into a more accurate determination of the correction factor for the reconstructed volcano density.
Summary and discussion
At present, muon imaging is being used and proposed for an increasing number of different applications. This implies that objects with quite different dimensions can be scanned, with typical sizes from some tens to several hundreds of metres. Furthermore, the composition and density of these objects can also vary from one to another. The experimental approaches to muon imaging, generally known as transmission and deviation tomography respectively, rely on the direction reconstruction of the detected muons. For this reason, especially for the transmission-based technique, muons changing their direction because of a scattering on the object surface before their detection represent an irreducible background noise, biasing the reconstruction of the object's mean density. An estimation of the percentage of these forward scattered muons among all detected muons would allow correction factors to be derived to reconstruct the proper density. The muon trajectory deviation is mainly driven by multiple Coulomb scattering. The resulting angular distribution due to this effect is theoretically described by the Molière theory. Besides, some analytical descriptions of the process have already been derived for particular objects and compositions. Nevertheless, the large variety of objects that are currently proposed to be studied by muon tomography requires more versatile tools to evaluate the forward scattered muons, easily adaptable to each case. With this aim a set of Monte Carlo simulations has been performed using the Geant4 framework. These simulations provide a general evaluation of the muon forward scattering probability as a function of their energy and incident angle, in overall agreement with theoretical estimations. They revealed that muons with energies lower than 5 GeV and incident angles above 85° with respect to the normal direction of the surface are essentially the only muons likely to be scattered and then detected. The simulation results have been used as a PDF to evaluate the influence of scattered muons in different scenarios. To do that, the signal to background ratio (S/B) has been defined.
S(θ_z, ϕ_z) corresponds to the flux of muons reconstructed in a direction (θ_z, ϕ_z) without any previous scattering, while B(θ_z, ϕ_z) is the flux of muons reconstructed in the same direction but after a previous forward scattering on the object surface. The S/B evaluation has been presented for two particular cases, corresponding to two of the applications of muon imaging: archaeology and volcanology, using the muon distribution at the Earth's surface proposed in [START_REF] Shukla | Energy and angular distributions of atmospheric muons at the Earth[END_REF]. For the archaeological applications, the Apollonia tumulus has been considered as reference, placing the detector beside the tumulus base. The S/B estimations reveal that the percentage of scattered muons detected is never higher than 1.6 %, and is lower than 0.5 % if the incident zenith angle is smaller than 85°. This leads to the conclusion that the influence of scattered muons in these cases can be neglected. This is not the case for volcanology applications. A model based on La Soufrière volcano, already scanned within the DIAPHANE project, has been used together with three different detector positions, corresponding to real measurement points. A significant influence of the forward scattered muons on the measurement has been observed: they can represent up to 50 % of the detected muons, and even more for incident zenith angles higher than 85°. The S/B values can be considered homogeneous for the directions corresponding to the bulk volume of the volcano. The main differences in S/B depend mostly on the height of the detector with respect to the volcano base. Due to the volcano geometry, defined as a truncated cone, this is directly related to the muon path length through the volcano. Other features, such as the distance between the detector and the volcano, seem to have a smaller influence. With the estimations and numbers obtained in this work, correction factors for the density reconstruction have been computed, taking values from 1.4 to 1.9 depending on the detector position. Forward scattered muons represent events that are in principle not taken into account, so their detection has a direct impact on the mean density reconstruction. Nonetheless, the observed homogeneity of the S/B ratio for all the considered directions, both in the tumulus and in the volcano case, suggests that these muons would not significantly blur the resolution of the resulting image. All these estimations and conclusions are based on simulations of scattered muons on standard rock, which has been demonstrated to produce results equivalent to the standard soil case. In any case, by changing the material composition and properties accordingly, this simulation framework can be used to evaluate the influence of forward scattered muons for further muon imaging measurements of other objects and structures.
Figure 1. Schema of the defined geometry used to perform the general studies of forward scattering of muons.
Figure 2. Summary plots of the results of the general study of forward scattering of muons (see text for details about the study). Top left: difference in the zenith angle (∆θ_det = θ_det^fin - θ_det^ini) with respect to the initial muon energy (E_µ^ini). Top right: correlation between the initial and final zenith angles (θ_det^ini vs. θ_det^fin) for all the muon energies considered.
Bottom left: difference in azimuth angle (Δϕ_det = ϕ_det^fin − ϕ_det^ini) with respect to the initial muon energy (E_μ^ini). Bottom right: Δϕ_det distribution for all the muon energies considered; a Gaussian distribution, as predicted by Molière theory (Equations 2.1–2.3), is observed.
Figure 3. Correlation between the initial and final zenith angles (θ_det^ini vs. θ_det^fin) from the general study of muon forward scattering (see text for details). The correlation plots are divided into 0.5 GeV windows between 0 and 5 GeV of initial muon energy (E_μ^ini). These plots correspond to 10^8 simulated muons with incident angles between 85° and 90° and energies between 0 and 5 GeV, both homogeneously distributed; they are used as PDFs for further estimations of the forward-scattered muon flux.
Figure 4. Schema showing the relationship between the θ_det, θ_z and α angles (see text for the angle definitions), for the use of the muon scattering PDF, P(θ_det^ini, θ_det^fin, E_μ^ini), in the B(θ_z, ϕ_z) calculation.
Figure 5. Distribution of the ratio between non-scattered and scattered detected muons (S/B) with respect to the reconstructed zenith angle (θ_z) and the opacity, for the Apollonia tumulus case.
Table 2. Summary of the detector positions with respect to the base centre of the La Soufrière volcano model (see text for model details).
Acknowledgments
The authors would like to acknowledge the financial support from the UnivEarthS Labex program of Sorbonne Paris Cité (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02). Data from La Soufrière volcano are part of the ANR DIAPHANE project ANR-14-ce04-0001. Part of the project has been funded by the INSU/IN2P3 TelluS and "DEFI Instrumentation aux limites" programmes.
01772396
en
[ "shs.psy" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-01772396/file/Antoine%2C%20P.%2C%20Poinsot%2C%20R.%2C%20%26%20Congard%2C%20A.%20%282007%29%28bes%29.pdf
Pascal Antoine email: [email protected] R Poinsot A Congard Evaluer le bien-être subjectif : la place des émotions dans les psychothérapies positives Measuring Subjective well-being: place of the emotions in positive psychotherapies Keywords: émotion, bien-être, psychologie positive, psychothérapie positive, évaluation Measuring Subjective well emotion, well-being, positive psychology, positive psychotherapy : assessment Evaluer le bien-être subjectif : la place des émotions dans les psychothérapies positives Résumé. La psychologie positive est l'étude scientifique des expériences positives, du bien-être et du fonctionnement optimal de l'individu. Elle vise à dépasser la centration fréquente en psychologie clinique sur la souffrance, sa résolution ou sa réduction. Son objectif est de rendre le patient plus heureux grâce à la compréhension et l'investissement de trois voies : une existence plaisante, engagée et pleine de sens. Pour une approche scientifique de chacun de ces domaines, il est nécessaire de disposer de mesures valides et pratiques adaptées à un cadre clinique. En pratique, il est souhaitable d'évaluer séparément les facettes constituant le concept de « bien-être subjectif », notamment l'humeur et les émotions, afin d'étudier au mieux l'efficacité de la psychothérapie positive. L'objectif de cette étude est de développer en France une Mesure de la Valence Emotionnelle (MVE) basée sur le modèle du bien-être proposé par Diener. Pour développer cet outil, un questionnaire de fréquence des émotions a été construit et proposé à 571 participants. La version finale de la mesure est composée de 23 items organisés en six ensembles, constituant chacun une échelle à part entière. La consistance est satisfaisante, de même que les secteurs de la validité qui ont été éprouvés. Les six facettes émotionnelles sont divisées en deux facteurs d'ordre supérieur, l'un positif, l'autre négatif. Le bien-être subjectif était, de façon surprenante, rarement mesuré en psychopathologie. Cette absence était regrettable, la présence d'émotions positives, l'absence d'émotions négatives et une évaluation de son sentiment de satisfaction et d'accomplissement, étant des composantes du bien-être importantes mêmes pour les patients les plus en souffrance. Nous proposons un instrument d'évaluation du bien-être adapté au cadre clinique. Evaluer le bien-être subjectif : la place des émotions dans les psychothérapies positives Psychologie et psychothérapie positive En 2000, Seligman et Csikszentmihalyi publient un article intitulé "Positive Psychology". Cet écrit fait le point des recherches dans le domaine de la psychologie positive et constitue le point de départ d'une dynamisation et d'un élargissement de ce courant. Par définition, la psychologie positive est présentée comme l'étude des conditions et processus qui contribuent à l'épanouissement ou au fonctionnement optimal des personnes, des groupes et des organisations [START_REF] Gable | What (and Why) is Positive Psychology?[END_REF]. Ce courant prend racine essentiellement dans le constat d'un déséquilibre, notamment en psychologie clinique, faisant que la plupart des recherches sont centrées sur la maladie mentale, la détresse et les dysfonctionnements psychologiques. Environ un tiers des personnes souffrent un jour d'un trouble psychiatrique et, dans ce domaine, le panel des psychothérapies efficaces est vaste. Pour autant, les deux-tiers de personnes qui ne rencontreront pas la maladie psychiatrique éprouvent-ils nécessairement un total bien-être ? 
Ici, le but de la psychologie positive est d'intégrer ce que l'on connaît aujourd'hui de la résilience, des ressources personnelles et de l'épanouissement individuel pour construire et développer un corpus organisé de connaissances et de pratiques. Un des enjeux actuel consiste à passer d'approches descriptives ou explicatives à des approches prescriptives destinées aux patients comme au grand public. Certains étudient donc les techniques qui améliorent directement ou indirectement le bien-être, par exemple les pratiques méditatives de pleine conscience, l'écriture de journaux personnels ou plus spécifiquement les thérapies orientées sur le bienêtre [START_REF] Gable | What (and Why) is Positive Psychology?[END_REF]. Plusieurs propositions thérapeutiques existent déjà. La première historiquement est celle de [START_REF] Fordyce Mw | Development of a program to increase personal happiness[END_REF] qui présente 14 stratégies comportementales visant explicitement l'augmentation du bien-être. Il s'agit globalement d'être plus actif et d'investir de façon privilégiée les relations avec l'entourage. Plus récemment, [START_REF] Fava | Well-being therapy: Conceptual and technical issues[END_REF] propose une thérapie orientée sur le bien-être (well-being therapy) basée sur le modèle du bien-être de Ryff et Singer (1998). [START_REF] Frisch Mb | Quality of life therapy: Applying a life satisfaction approach to positive psychology and cognitive therapy[END_REF] propose une thérapie orientée sur la qualité de vie (quality of life therapy) intégrant la notion de satisfaction de la vie à des techniques de thérapie cognitive. Ces démarches ont en commun de s'adresser à des patients présentant des troubles affectifs dans un dispositif relativement classique où le bien-être est un ingrédient complémentaire mais pas central (Seligman et coll., 2006). Seligman et coll. (2004) distinguent trois processus qui conduisent au bien-être ou au bonheur : les émotions positives, l'engagement, et le sens de l'existence. Les émotions positives sont orientées vers le passé (gratitude et pardon), le présent (plaisir et pleine conscience) et le futur (espoir et optimisme). Elles facilitent la flexibilité de pensée et la résolution de problèmes [START_REF] Frederickson Bl | Positive emotions broaden scope of attention and thought-action repertoires[END_REF]Isen et coll., 1987). Elles contrebalancent les effets des émotions négatives au niveau physiologique [START_REF] Ong Ad | Cardiovascular intraindividual variability in later life: the influence of social connectedness and positive emotions[END_REF] et facilitent l'utilisation de coping ajusté [START_REF] Folkman | Positive Affect and the Other Side of Coping[END_REF], 2004). Les émotions positives permettent de mettre en oeuvre et de gérer les ressources (Tugade et Frederickson, 2004) et accélèrent la récupération face à des événements stressants (Frederikson et coll., 2003 ;Tugade et coll., 2004). L'engagement correspond à la poursuite active d'un but important pour soi et qui mobilise ses ressources psychologiques personnelles. Le sens de l'existence correspond à la poursuite d'un but abstrait dépassant largement l'individu. Ces trois voies (pleasant life, good life/engaged life et meaningfull life) sont relativement indépendantes et peuvent être diversement investies par les personnes. Lorsqu'elles sont également et intensément investies, on parle alors de « vie pleine et entière » (full life). 
Seligman part de ce modèle de bien-être pour proposer une psychothérapie positive dont l'objectif fondamental est bien plus d'augmenter le bien-être que de diminuer la souffrance1 . Mais, une nouvelle proposition thérapeutique soulève la question de son efficacité, de ses indications et de ses limites. Il est donc nécessaire de disposer d'instruments de mesure adaptés, ce qui revient à se demander comment évaluer, sinon le bonheur, au moins le bien-être. Pour Duckworth et coll. (2005), les outils les plus utilisés par les cliniciens en psychologie positive sont la Subjective Happiness Scale [START_REF] Lyubomirsky | A measure of subjective happiness: preliminary reliability and construct validation[END_REF], la Fordyce Happiness Measure [START_REF] Fordyce Mw | A review of research on the happiness measures: a sixty-second index of happiness and mental health[END_REF] et surtout la Satisfaction with Life Scale (Diener et coll., 1985 ;[START_REF] Pavot | Further Validation of the Satisfaction With Life Scale : Evidence for the Cross-Method Convergence of Well-Being Measures[END_REF]. L'échelle de satisfaction de vie est une évaluation cognitive tournée vers le passé et composée d'items du type : « sur la plupart des plans, ma vie est presque idéale ». Ces trois outils sont en fait largement de nature cognitive, ce qui n'est pas cohérent avec la principale recommandation de Diener (2006) concernant l'utilisation d'indicateurs de bien-être en santé : distinguer les facettes de l'expérience subjective de bien-être (well-being) et d'être souffrant (ill-being), incluant l'humeur et l'émotion, la perception de sa santé physique et mentale, et la satisfaction dans différents domaines de l'existence2 . Le bien-être et ses conceptions Il existe trois conceptualisations majeures. En premier lieu, le bien-être subjectif (BES), dont les composantes sont à la fois cognitives et émotionnelles [START_REF] Diener | Subjective Well-Being[END_REF] correspond à l'ensemble des évaluations individuelles, négative et positive, cognitive et émotionnelle, que l'on fait de sa vie (Diener et coll., 1998 ;[START_REF] Diener | guidelines for national indicators of subjective well-being and ill-being[END_REF]. Sur le plan cognitif, la satisfaction de la vie peut être décomposée en autant de domaines que l'individu a d'investissements et constitue de ce fait une structure hiérarchique. De la même façon, émotions positives et émotions négatives peuvent être décomposées en émotions plus simples constituant une structure factorielle hiérarchisée. Le bien-être subjectif constitue donc le niveau global supérieur de cette hiérarchie. Cette approche est celle qui a donné lieu aux travaux les plus importants dans le domaine. Une seconde approche, plus récente, est celle du bien-être psychologique [START_REF] Ryff | The Structure of Psychological Well-Being Revisited[END_REF]. Le bien-être est ici conçu comme un ensemble multidimensionnel largement cognitif synthétisé par un construit latent unique. Cette approche ne prend toutefois pas en compte les composantes émotionnelles du bien-être. Enfin, une troisième approche est celle de la santé mentale au travail [START_REF] Warr | Employee Well-being[END_REF] avec une centration sur le contexte professionnel plutôt que le bienêtre général. De fait, la conception la plus fructueuse reste celle du bien-être subjectif. 
En effet, les analyses factorielles (Diener et Emmons, 1985) ainsi que les analyses multi-traits multi-méthodes (Lucas et coll., 1996) ont indiqué qu'il est pertinent de prendre en compte de façon distincte les aspects cognitifs et émotionnels. Ils peuvent être organisés hiérarchiquement dans le construit de BES (Sandvik et coll., 1993), assez stable sur les plans situationnels et temporels, et consistant avec différents modes d'évaluation (hétéroou auto-évaluations). Les composantes émotionnelles du bien-être subjectif Les recherches sur les émotions dans le BES prennent une place dans un débat plus large. Il existe deux courants majeurs dans l'étude des émotions (Mayne, 1999) : le courant des émotions discrètes mesurées par un niveau d'activation physiologique ou par des expressions faciales, et le courant dimensionnel ou lexical où les émotions sont décrites dans un espace factoriel. C'est ce dernier qui nous concerne ici. Le courant lexical des émotions, contrairement à celui de la personnalité, n'en est qu'à ses débuts [START_REF] De Raad | Traits and Emotions : A Review of their Structure and Management[END_REF]. A l'instar des études sur la personnalité de type Big Five, il est possible de considérer que le langage regroupe les termes pertinents pour désigner l'ensemble des émotions, et il est possible d'en proposer un modèle hiérarchisé. Dans les années 80-90, les études étaient partielles et tâtonnantes, basées sur de petites listes de mots, et il a fallu attendre les travaux de Church (Church et coll., 1998[START_REF] Church At | The Structure of Affect in a Non-Western Culture : Evidence for Cross-Cultural Comparability[END_REF] pour voir l'exploration d'une liste de mots tendant à l'exhaustivité. Leurs résultats témoignent de la pertinence de facteurs positif et négatif pour décrire la structure des émotions à travers plusieurs cultures différentes. Les travaux factoriels ont antérieurement pris leur essor suite à la proposition de Watson et Tellegen (1985) de deux dimensions orthogonales : les émotions positives et les émotions négatives. De nombreux travaux se sont alors centrés sur ces deux dimensions, en questionnant notamment leur indépendance (Diener et coll., 1995 ;[START_REF] Egloff B | The independence of positive and negative affect depends on the affect measure[END_REF]Watson et coll., 1988). Par exemple, dans des études préalables, Diener et Emmons (1985) avaient mis en évidence une forte corrélation négative entre les mesures d'émotions plaisantes et désagréables, corrélation diminuant lorsque la mesure porte sur une période de temps qui s'allonge. Leur interprétation se base sur l'idée qu'il n'est pas possible de ressentir deux émotions différentes au même instant alors qu'il est possible de ressentir une succession d'émotions différentes si le laps de temps le permet. Stone et coll. (1993) rapportent des données (mesures quotidiennes de l'humeur pendant plusieurs semaines) qui peuvent étayer cette position à l'échelle de la journée. [START_REF] Zelenski | The Distribution of Basic Emotions in Everyday Life : A State and Trait Perspective from Experience Sampling Data[END_REF], en mesurant trois fois par jour pendant un mois les émotions ressenties, montrent que les émotions positives dominent tant en intensité qu'en fréquence chez les sujets tout-venants. Van Eck et coll. (1998) montrent que les événements aversifs quotidiens entraînent une augmentation des affects négatifs et une diminution des émotions positives. 
Plus ces événements sont évalués comme déplaisants, plus ces changements sont importants. Parallèlement, de nombreuses études montrent une certaine stabilité temporelle des émotions [START_REF] Izard Ce | Stability of Emotion Experiences and Their Relations to Traits of Personality[END_REF]Lucas et coll., 1996 ;[START_REF] Ormel | How Neuroticism, Long-Term Difficulties, and Life Situation Change Influence Psychological Distress : A Longitudinal Model[END_REF][START_REF] Watson | Measurement and Mismeasurement of Mood : Recurrent and Emergent Issues[END_REF][START_REF] Watson | The Long-Term Stability and Predictive Validity of Trait Measures of Affect[END_REF], même si les variables cognitives apparaissent plus stables sur le plan temporel [START_REF] Eid | Intraindividual Variability in Affect : Reliability, Validity, and Personality Correlates[END_REF] et situationnel [START_REF] Diener | Temporal stability and cross-situational consistency of affective, behavioral, and cognitive responses[END_REF]. Mettez une croix dans la case qui correspond le mieux". Sept modalités de réponses étaient possibles de "jamais" à "plusieurs fois par jour". Pour étudier la validité externe des différentes facettes émotionnelles, une mesure de satisfaction de la vie a été utilisée (Diener et coll., 1985) ainsi qu'une mesure de détresse. L'échantillon étant composé de soignants, la mesure de détresse utilisée est celle de burnout [START_REF] Maslach | The measurement of experienced burnout[END_REF]. Analyses Les analyses statistiques suivent une progression classique : analyse des distributions des réponses aux items pour vérifier que la totalité des modalités de réponses sont exploitées, analyse de la matrice de corrélations entre les items pour s'assurer qu'ils ne sont pas excessivement redondants, analyses factorielles afin de dégager les dimensions pertinentes et le type de structure résumant au mieux les données, analyse de la consistance interne des échelles dérivées des analyses factorielles afin de vérifier si l'erreur de mesure est acceptable, et, enfin, analyse des corrélations avec des mesures proches pour compléter l'étude de validité et situer la nature du nouveau construit par rapport à des instruments déjà connus. Toutes les analyses ont été réalisées avec les logiciels Statistica 6 et Lisrel 8.5. RESULTATS Les analyses de distributions des réponses aux items ont mis en évidence des problèmes pour des émotions à valence ou activation très élevée (ie. haine, honte, dépression). Ces trois items ont été éliminés car quatre des possibilités de réponses n'étaient utilisés que par une minorité des participants. En résumé, ces items ne permettent pas de différencier les participants car tous ont tendance à répondre de la même façon. Cette idée est à tester avec des instruments de nature émotionnelle au cours de mesures quotidiennes. On peut faire l'hypothèse de systèmes autonomes, l'un négatif modifié par une TCC classique, et l'autre positif modifié par une psychothérapie positive. Cette hypothèse serait cohérente avec les résultats en psychologie différentielle. 
La personnalité et le bien-être subjectif sont fortement liés, la personnalité étant parfois considérée comme le déterminant dominant du bien-être [START_REF] Compton Wc | Measures of mental health and a five factor theory of personality[END_REF][START_REF] Costa Pt | Influence of Extraversion and Neuroticism on Subjective Well-Being : Happy and Unhappy People[END_REF][START_REF] Deneve | The Happy Personality : A Meta-Analysis of 137 Personality Traits and Subjective Well-Being[END_REF][START_REF] Diener | Subjective Well-Being : Three Decades of Progress[END_REF][START_REF] Myers | Who is happy ?[END_REF]. L'affectivité positive est liée à l'extraversion et l'affectivité négative au névrosisme. [START_REF] Jp | Differential Roles of Neuroticism, Extraversion, and Event Desirability for Mood in Daily Life : An Integrative Model of Top-Down and Bottom-Up Influences[END_REF] élargissent ces constats aux événements quotidiens désirables et indésirables. Pour les auteurs, le névrosisme et les événements indésirables seraient des prédicteurs de l'humeur négative et positive, tandis que l'extraversion et les événements désirables ne seraient des prédicteurs que de l'humeur positive. Le BES peut faire l'objet d'interprétations ascendantes (bottom-up) ou descendantes (top-down) selon que l'on considère respectivement que c'est la suite d'événements favorables ou défavorables qui détermine le bien-être subjectif ou que le bien-être subjectif est une prédisposition à vivre les événements de façon positive ou négative [START_REF] Diener | Subjective Well-Being[END_REF]. Il est probable que la réalité soit en fait plus complexe et que la causalité soit réciproque [START_REF] Feist Gj | Integrating Top-Down and Bottom-Up Structural Models of Subjective Well-Being : A Longitudinal Investigation[END_REF]. Ces liens entre le bien-être et la personnalité sont même suffisamment consistants et étroits pour conduire Lykken et Tellegen à une conclusion extrême : « Essayer d'être plus heureux est aussi futile que d 'essayer d'être plus grand et, en conséquence, c'est contre-productif » (Lykken et Tellegen, 1996, p.189traduction Rolland, 2000). Toutefois, cette conclusion n'est pas compatible avec les résultats de Seligman et coll. (2005), mais elle conduit à rappeler que le bien-être serait un trait prédéterminé à la naissance [START_REF] Diener | Traits Can Be Powerful, but Are Not Enough : Lessons from Subjective Well-Being[END_REF] modifiable seulement dans une certaine mesure… Seligman et coll. (2004) tiennent compte de cette limite en particulier pour la première voie de la thérapie (pleasant life). 1988), mais les 20 items de cette échelle ne sont pas strictement de nature émotionnelle, certains (determined, active, strong) impliquant un système plus complexe de variables latentes. La comparaison entre ces deux outils est donc importante. L'alternative majeure aux travaux de Watson et Tellegen (1985) est constituée par les propositions de Russel [START_REF] Ja | A circumplex model of affect[END_REF] et Diener (Diener et coll., 1985). Ces auteurs distinguent deux dimensions Concernant les trois voies Diener et coll. (1995) choisissent en conséquence de travailler sur une structure factorielle de la fréquence des émotions ressenties durant le mois précédent. Pour construire leur modèle, ils partent de trois courants théoriques liés aux émotions, le courant cognitif, le courant évolutionniste et le courant empirique. 
Par recoupement de ces théories, ils proposent six gammes d'émotions, dont deux dites « plaisantes » (ou positives), Love et Joy, et quatre dites « déplaisantes » (ou négatives), Fear, Anger, Shame et Sadness. et ensemble constitue une structure hiérarchique dans laquelle les deux facteurs d'ordre supérieur sont modérément corrélés. S'inscrivant dans cette approche évaluative, le but principal de la recherche présentée ici est de disposer en langue française d'un instrument de mesure de la fréquence des émotions positive et négative. L'objectif est de proposer un outil court, facile d'utilisation, et susceptible d'être utilisé en mesures répétées dans le cadre d'une recherche longitudinale ou d'un accompagnement psychologique. Nous avons choisi de prendre modèle sur celui créé par Diener et coll. (1995), qui repose sur une structure hiérarchique permettant une lecture globale (émotions positives vs émotions négatives) et une interprétation dans chaque gamme d'émotions. Méthodologiquement, il est donc important de vérifier la qualité de la structure factorielle et la consistance des échelles obtenues dans cette forme française, ainsi que d'étudier les liens avec des outils de bien-être et de détresse existants.METHODE AdaptationL'adaptation a été réalisée en plusieurs étapes. Dans un premier temps, 24 termes anglais désignant des émotions, issus de l'articlede Diener et coll. (1995), ont été traduits chacun par deux à trois mots en langue française à l'aide d'un dictionnaire. Une recherche de synonymes a été conduite ensuite sur ces traductions pour aboutir à un corpus final de 129 mots français. Dans un second temps, sept juges, tous psychologues, ont pris connaissance de l'articlede Diener et coll. (1995). Ils ont ensuite indiqué pour chacun des 129 termes s'il correspondait à l'esprit de l'échelle originale. En outre, ils devaient exclure les termes trop peu courants pour des sujets tout-venants. Pour chaque gamme d'émotions, les six termes français les plus fréquemment conservés par les juges ont été retenus pour l'étape suivante.EchantillonsTrois échantillons ont été agrégés pour le besoin des analyses multidimensionnelles. Un premier échantillon est constitué de 259 militaires, tous de sexe masculin, et relativement jeunes (moy= 28 ans ± 5 ans). Ils ont répondu à ce questionnaire dans le cadre d'une étude sur le stress pendant les opérations extérieures. Un deuxième échantillon est constitué de 198 personnes âgées (moy= 75 ans ± 12 ans), dont une majorité de femme (N = 122 ; 62 %). Ils ont répondu à ce questionnaire au cours d'une étude sur la détresse du sujet âgé institutionnalisé. Enfin, un troisième échantillon a été recruté pour la présente recherche, notamment dans le cadre des analyses de la validité externe de l'indicateur de bien-être subjectif. Cet échantillon est constitué de 125 soignants (personnel médical, paramédical, psychologue, et assistant social), âgés en moyenne de 36 ans (± 10 ans), en majorité des femmes (N = 95 ; 76 %). 
Au total, sur ces 582 participants, 571 questionnaires étaient exploitables d'un point de vue statistique (98 %), ce qui permet déjà de souligner la bonne réception de l'outil par les sujets.MatérielLa forme expérimentale du questionnaire de bien-être subjectif est constituée de six groupes de six items : [bienveillance, amitié, attachement, affection, bonté, amour],[anxiété, inquiétude, appréhension, peur, angoisse, crainte],[agacement, haine, colère, irritation, mécontent, énervement],[satisfaction, bonheur, allégresse, joie, bien-être, gaieté], [remords, gêne, regret, embarras, culpabilité, honte],[cafard, tristesse, morosité, dépression, mélancolie, abattement]. La consigne était : "Indiquez la fréquence avec laquelle vous avez ressenti chacune des émotions durant le mois qui vient de passer. L 'analyse de la matrice de corrélations inter-items a mis en évidence des corrélations très faibles entre des items censés appartenir aux mêmes groupes théoriques ou des corrélations trop élevées avec les items d'autres gammes d'émotions (ie. bienveillance, anxiété, allégresse). Par conséquent ces items n'ont pas été retenus.Les analyses en composantes principales (ACP) avec rotation varimax ont permis d'identifier sept items qui ne répondaient pas aux principes d'une structure simple(ie. amitié, bonté, appréhension, énervement, bien-être, remords, abattement). Ces items ont été évincés. Le modèle final comprend donc 23 items, soit six dimensions de trois ou quatre items qui expliquent 68 % de la variance totale. L'analyse confirmatoire de ce modèle de structure est satisfaisante (Chi2/ddl = 2,89; p<0,001 ; SRMR = 0,051 ; GFI = 0,91) 3 . Une analyse complémentaire a été réalisée afin de vérifier la pertinence d'une structure hiérarchique (cf. tableau I). Le sommet de cette structure est constitué des variables latentes d'affectivité négative et positive alors que le niveau de base est constitué des six gammes émotionnelles. INSERER ICI TAB I La variance propre à chaque item se décompose en une variance générale et une variance spécifique. Deux niveaux d'interprétation en découlent. On peut s'intéresser aux émotions en distinguant classiquement l'affectivité négative et positive, ceci étant conforté par l'analyse hiérarchique. L'étude des corrélations entre les échelles indique une indépendance globale des émotions positives et négatives (r = -0,07 ; ns). Il est possible également d'affiner l'évaluation du bien-être subjectif en distinguant six facettes intercorrélées (cf. tableau II). On constate que les liens entre les échelles sont un peu plus complexes. Notamment, il existe une corrélation non négligeable entre l'échelle de joie et l'échelle de tristesse alors que la joie n'est pas corrélée avec les autres échelles d'émotion négative et que la tristesse n'est pas corrélée avec le score d'affection. INSERER ICI TAB II Les indices de consistance interne sont satisfaisants (cf. tableau III). Les échelles d'affectivité négative et positive sont très consistantes (respectivement 0,92 et 0,81). Les six échelles d'émotions sont également satisfaisantes sur ce critère, situées entre 0,71 et 0,87. INSERER ICI TAB III Deux résultats sont notables. En premier lieu, le score de satisfaction de la vie est corrélé modérément tant avec les émotions positives (r = 0,42 ; p<0,001) qu'avec les émotions négatives (r = -0,38 ; p<0,001). Le résultat saillant se situe au niveau des facettes puisque les corrélations les plus élevées sont avec l'échelle de joie et de tristesse. 
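As a sketch of the internal-consistency computation reported above, the code below estimates Cronbach's alpha for one MVE subscale from a respondent × item matrix; the simulated 1–7 frequency ratings and the choice of the JOIE facet as the example are illustrative assumptions, not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
# Simulated 1-7 frequency ratings for a 4-item subscale (e.g. the JOIE facet),
# built around a common latent score so that alpha falls in a realistic range.
latent = rng.normal(5.0, 1.0, size=571)
joie = np.clip(np.round(latent[:, None] + rng.normal(0, 0.8, size=(571, 4))), 1, 7)
print("alpha (simulated JOIE items): %.2f" % cronbach_alpha(joie))
```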
Si on peut estimer une relative communauté entre ces trois mesures, les autres construits (affection, anxiété, colère et gêne) apparaissent plus originaux. Le second résultat concerne les liens entre le 3 L'analyse confirmatoire vise un χ 2 non significatif ou, à défaut, un rapport χ 2 /ddl<3, correctif fréquemment admis. Le SRMR (Standardized Root Mean Squared Residual) quantifie la différence entre les covariances observées et théoriques. On attend un SRMR<0,08. Le GFI (Goodness of Fit Index) s'interprète comme un R 2 , et traduit la proportion de covariances observées expliquée par le modèle testé. On attend un GFI>0,90. burnout et les facettes émotionnelles. Le sentiment d'épuisement est la composante la plus étroitement associée avec les émotions, en particulier la tristesse (r = 0,41 ; p<0,001). En revanche, les liens avec le sentiment de dépersonnalisation et le manque d'accomplissement sont faibles. Il n'existe aucune corrélation négative notable entre les émotions positives et les trois composantes du burnout. INSERER ICI TAB IV DISCUSSION Le but principal de cette recherche était de disposer en langue française d'un instrument d'évaluation de la fréquence des émotions positive et négative. La construction de l'outil francophone reposait sur le modèle de Diener et coll. (1995). Les résultats aboutissent à une mesure présentant des qualités similaires à l'échelle originale. Cet outil permet d'évaluer six gammes distinctes d'émotions organisées en deux dimensions d'ordre supérieur. Les échelles sont consistantes et mesurent des phénomènes spécifiques par rapport à d'autres critères couramment utilisés dans les études sur le bien-être subjectif ou le stress. Ces résultats doivent être répliqués auprès de patients souffrant de différents troubles. Il est important en particulier de vérifier l'efficacité d'une thérapie cognitive et comportemantale « classique » versus une psychothérapie positive sur les émotions négatives et positives. On note dans les présents résultats une indépendance des deux types d'émotions. Néanmoins on peut s'interroger sur les processus sous-jacents. S'agit-il de processus communs qui détermineraient à la fois les émotions positives et les émotions négatives ou existe-t-il des processus spécifiques aux émotions négatives et d'autres aux émotions positives ? Frederickson et Joiner (2002) font l'hypothèse d'une spirale émotionnelle et comportementale positive à l'instar de la spirale dépressive rencontrée chez les patients. du bien-être (pleasant life, good life/engaged life et meaningfull life) , il faut enfin vérifier si les techniques qui augmentent une composante du bien-être améliorent également les deux autres. Par exemple, est-ce que l'engagement dans une vie pleine de sens augmente la fréquence des émotions positives ? D'un point de vue psychométrique, d'autres travaux sont à entreprendre, notamment l'étude de la fidélité temporelle et de la sensibilité au changement. L'analyse de la validité peut être complétée à l'aide d'outils d'évaluation des émotions existants. D'autres échelles existent telle que la PANAS (Positive and Negative Affect Schedule ; Watson et coll. indépendantes dans les émotions : leur niveau d'activation (arousal) et leur caractère plus ou moins plaisant-déplaisant (valence ou dimension hédonique). L'axe arousal irait d'un état proche du sommeil ou de la relaxation à un état proche de la frénésie. 
Ainsi, lorsque Diener propose des échelles d'émotions positives et négatives, celles-ci se situent sur l'axe hédonique (Meyer et Shack, 1989) et ne peuvent être confondues avec les échelles de Watson et Tellegen (1985) qui ne distinguent pas les axes hédonique et arousal. D'un autre côté, les résultats de Feldman Barrett et Russell (1998) parviennent à montrer deux axes indépendants, l'axe de la valence et l'axe du niveau d'activation. Ces auteurs montrent également que les émotions seraient bipolaires sur l'axe de leur valence, et bipolaires sur l'axe de leur activation. Par exemple, la fatigue est l'opposée de la tension sur l'axe ANNEXE ICe questionnaire concerne les émotions que vous avez pu ressentir depuis un mois. Ce sont des émotions ressenties dans votre milieu professionnel, dans votre entourage familial ou lors de vos loisirs... Indiquez la fréquence avec laquelle vous avez ressenti chacune des émotions durant le mois qui vient de passer. Mettez une croix dans la case qui vous correspond le mieux. -------------------------------------------- Scorage : La mesure de valence émotionnelle (MVE) permet de calculer 8 scores. Le score d'AFFECTION correspond à la somme des items: attachement, affection, amour Le score de BIEN-ETRE correspond à la somme des items: satisfaction, bonheur, joie, gaieté Le score d'ANXIETE correspond à la somme des items: inquiétude, peur, angoisse, crainte Le score de COLERE correspond à la somme des items: agacement, colère, irritation, mécontentement Le score de REMORDS correspond à la somme des items: gêne, regret, embarras, culpabilité Le score de DEPRESSION correspond à la somme des items: cafard, tristesse, morosité, mélancolie Le score d'AFFECTIVITE NEGATIVE correspond à la somme des 4 scores d'émotion négative Le score d'AFFECTIVITE POSITIVE correspond à la somme des 2 scores d'émotion positive Tableau I. Saturations des 23 items dans une structure hiérarchique N= 571 ; AFF POSIT (affectivité positive) et AFF NEG (affectivité négative). Tableau III : Corrélations a entre les scores aux facettes émotionnelles et les indicateurs psychologiques r de Bravais-Pearson ; N= 125 sauf pour satisfaction de la vie (N=107) ; *p<0,05 ; **p<0,01 ; ***p<0,001 ; AFF POSIT (affectivité positive) et AFF NEG (affectivité négative). Les scores d'épuisement, d'accomplissement et de dépersonnalisation font référence à l'échelle de burnout. a a entre les scores aux six facettes émotionnelles Tableau III : Caractéristiques des échelles d'émotion Tableau II : Corrélations a AFF POSIT 0,40 0,45 0,55 0,45 0,59 0,61 0,59 1,00 0,19*** 0,10* 0,42*** 0,10* 0,01 AMOUR a r de Bravais-Pearson (N= 571) ; *p<0,05 ; **p<0,01 ; ***p<0,001. 
AFF NEG AMOUR JOIE PEUR ATTACHEMENT 0,20 0,66 AFFECTION 0,69 AMOUR 0,52 0,22 SATISFACTION 0,65 BONHEUR 0,57 JOIE 0,60 GAIETE -0,22 0,51 INQUIETUDE 0,64 0,41 PEUR 0,58 0,57 AMOUR ANXIETE 1,00 COLERE 0,49*** 1,00 JOIE -0,11* -0,06 1,00 GENE 0,60*** 0,57*** -0,12** 1,00 TRISTESSE 0,63*** 0,48*** -0,40*** 0,59*** ANXIETE COLERE JOIE GENE moyenne Nombre Alpha de Écart-type d'items Cronbach AFF NEG 34,9 8,4 9 0,92 AFF POSIT 38,3 15,7 16 0,81 AMOUR 14,5 4,7 3 0,71 ANXIETE 9,5 5,0 4 0,84 COLERE 11,9 5,2 4 0,84 JOIE 20,4 5,3 4 0,84 GENE 8,0 3,8 4 0,77 TRISTESSE 8,9 5,2 4 0,87 ANXIETE COLERE JOIE Satisfaction de la vie 0,25** -0,19 -0,18 0,47*** -0,27** COLERE 1,00 TRISTESSE GENE GENE TRISTESSE AFF POSIT TRISTESSE -0,56*** 0,42*** Sentiment d'épuisement -0,01 0,32** 0,34** -0,16 0,26** 0,41*** -0,10 Sentiment d'accomplissement 0,13 -0,11 -0,16 0,28*** -0,15 -0,19* 0,24** a AMOUR Sentiment de dépersonnalisation -0,12 0,26** 0,21* -0,06 0,24** 0,26** -0,10 AFF NEG -0,38*** 0,42*** -0,19* 0,30** ANGOISSE 0,65 0,55 CRAINTE 0,66 0,51 AGACEMENT 0,51 0,64 COLERE 0,48 0,52 IRRITATION 0,56 0,66 MECONTENT 0,56 0,65 GENE 0,57 0,51 REGRET 0,64 0,44 EMBARRAS 0,63 0,51 CULPABILITE 0,60 0,32 CAFARD -0,25 0,67 0,46 TRISTESSE -0,27 0,64 0,50 MOROSITE -0,21 0,67 0,42 MELANCOLIE -0,24 0,69 0,42 a Analyse factorielle hiérarchique ; N= 571 ; AFF POSIT (affectivité positive) et AFF NEG (affectivité négative). a Si l'objectif est d'améliorer chaque composante du bien-être, ce type d'intervention n'est pas pour autant réservé au public tout venant. Les patients souffrant de dépression sont concernés en premier lieu et Seligman fait l'hypothèse d'une triple étiologie de la dépression sous la forme d'une carence dans les trois voies du bien-être. Les quatre autres recommandations sont : utiliser des mesures sensibles au changement, distinguant les évolutions à court et à long terme et qui puissent être employées dans des études longitudinales, dans le cadre d'échantillonnages temporels ou de relevés quotidiens ; construire des outils présentant de bonnes qualités psychométriques, notamment concernant leur validité ; tenir compte des limites des outils de mesure, par nature imparfaits, lors de l'interprétation des données ; et ne pas occulter d'autres indicateurs.
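To illustrate the scoring rules given in the annex above (eight scores obtained by summing item frequencies), a short sketch follows. The item labels match the annex; the numeric answer coding (1 = « jamais » … 7 = « plusieurs fois par jour ») and the example responses are assumptions made for the illustration.

```python
# Scoring sketch for the Mesure de la Valence Emotionnelle (MVE), 23 items
SCALES = {
    "AFFECTION":  ["attachement", "affection", "amour"],
    "BIEN_ETRE":  ["satisfaction", "bonheur", "joie", "gaiete"],
    "ANXIETE":    ["inquietude", "peur", "angoisse", "crainte"],
    "COLERE":     ["agacement", "colere", "irritation", "mecontentement"],
    "REMORDS":    ["gene", "regret", "embarras", "culpabilite"],
    "DEPRESSION": ["cafard", "tristesse", "morosite", "melancolie"],
}
POSITIVE = ["AFFECTION", "BIEN_ETRE"]
NEGATIVE = ["ANXIETE", "COLERE", "REMORDS", "DEPRESSION"]

def score_mve(answers):
    """answers: dict item -> frequency rating, assumed coded 1 ('jamais')
    to 7 ('plusieurs fois par jour'). Returns the 8 MVE scores."""
    scores = {name: sum(answers[item] for item in items)
              for name, items in SCALES.items()}
    scores["AFFECTIVITE_POSITIVE"] = sum(scores[s] for s in POSITIVE)
    scores["AFFECTIVITE_NEGATIVE"] = sum(scores[s] for s in NEGATIVE)
    return scores

# Example with arbitrary ratings (every item rated 4)
example = {item: 4 for items in SCALES.values() for item in items}
print(score_mve(example))
```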
00177246
en
[ "phys.meca.acou", "spi.acou" ]
2024/03/05 22:32:18
1990
https://hal.science/hal-00177246/file/IEEE_ultrason90n.pdf
Ginette Saracco Philippe Guillemain Richard Kronland-Martinet Characterization of elastic shells by the use of the wavelet transform
01772588
en
[ "spi.meca.mema" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01772588/file/Jacquot_JIT_2017_post-print%20HAL.pdf
Pierre-Baptiste Jacquot Didier Perrin email: [email protected]@33466785369 Benjamin Gallard Romain Léger Patrick Ienny Impact on peel strength, tensile strength and shear viscosity of the addition of functionalized low density polyethylene to a thermoplastic polyurethane sheet calendered on a polyester fabric Keywords: Polyurethane, Fabric, Peel strength, Calendering, Polymer Coated technical textiles are widely used for several industrial applications. Most of these coated fabrics are made with a polyester fabric and a PolyVinyl Chloride (PVC) coating but in order to reduce the environmental impact, the producers are willing to substitute PVC by thermoplastic polyurethane (TPU). However, a technological lock of the calendering of TPU on polyester fabric is the ability to get a good adhesion of the coating on the fabric. Producers could increase the temperatures of extrusion of the coating but TPU have a short range of extrusion temperatures making it difficult to extrude. One solution is to make a blend with another polymer which has a higher extrusion temperature range. In the present work, the studies of the addition of Low Density PolyEthylene (LDPE) and Linear Low Density PolyEthylene grafted Maleic Anhydride ( LLDPE-g-Ma) in polyurethane coating on the tensile strength of the sheet and on the peel strength with a polyester fabric have been studied as well as the influence of the extrusion temperature. SEM observations, FTIR spectrums and viscosity measurements have been performed to understand the behavior of the different blends. Results show that extrusion temperature and penetration depth of the coating in the fabric have a positive influence on the peel strength. 1.Introduction Coated technical textiles are widely used for several applications like sails, big tops, paragliders or inflatable boats and the demand is still on the rise. These very light technical textiles are usually manufactured with a polyester fabric and a PolyVinyl Chloride (PVC) matrix. However, environmental constraints force the manufacturers to find a substitute material for the PVC which is harmful and difficult to recycle [START_REF] Akovali | Toxicity of Building Materials: 2 -Plastic materials: polyvinyl chloride (PVC)[END_REF]. Thermoplastic PolyUrethane (TPU) is a good substitute material for the PVC. Depending on its formulation and the components, TPU can have good properties [START_REF] Koch | The Structure and Properties of Polyurethane Textile Coating. s.l[END_REF] such as UV resistance (3) (4), abrasion resistance [START_REF] Papaj | Effect of hardener variation on protective properties of polyurethane coating. s.l[END_REF], solvent resistance [START_REF] Gite | Preparation and properties of polyurethane coatings based on acrylic polyols and trimer of isophorone diisocyanate. s.l[END_REF], tensile strength [START_REF] Zhang | Synthesis, characterization and mechanical properties of polyester-based aliphatic polyurethane elastomers containing hyperbranched polyester segments. s.l[END_REF] or high elongation [START_REF] Moon | Effect of Chain Extenders on Polyurethanes Containing Both Poly(butylene succinate) and Poly(ethylene glycol) as Soft Segments. s.l[END_REF]. These properties make TPU an attractive material for coated textiles [START_REF] Oertel | Thermoplastic Polyurethane for Coated Fabrics. s.l[END_REF] or for leather like products [START_REF] Schmelzer | Polyurethanes for Flexible Surface Coatings and Adhesives. s.l[END_REF]. 
These TPU coated textiles are used for inflatable boats, flexible tanks or more technical applications like Lighter-Than-Air systems for high altitude applications [START_REF] Euler | Material Challenges for Lighter-Than-Air Systems in High Altitude Applications[END_REF]. Despite all these good properties, there is still a technological lock. Some industrials report that they are recalcitrant to use polyurethane sheets for coated textile because of very low peel strength of the sheet on the polyester fabric after calendering. To the best of our knowledge, industrials are more willing to use other processes like knife-over-roll coating or air-knife coating. These two methods are especially used for tightly fabrics where a low thickness of polyurethane is required like waterproof garments or materials for small inflatable boats (12) [START_REF] Lee | Dictionary of Composite Materials Technology[END_REF]. But if a more important thickness is needed, these methods are quite difficult to use and manufacturers do not have any other choice than using calender coating or rotary-screen coating which are also cheaper than the others [START_REF] Singa | A Review on Coating & Lamination in Textiles: Processes and Applications[END_REF]. To use both mentioned processes, they have to make special surface treatments on the fabrics. Six theories have been proposed to explain the different mechanism of adhesion: mechanical interlocking [START_REF] Mcbain | On adhésive and adhesive action. s.l[END_REF], wetting [START_REF] Schonborn | Surface tension of molten polypropylene[END_REF], diffusion [START_REF] Voyutskii | Autoadhesion and adhesion of high polymers[END_REF], electrostatic [START_REF] Deryagin | Role of the molecular and electrostatic force in the adhesion of polymers[END_REF], chemical [START_REF] Buchan | Chemical nature of the rubber to glass bond[END_REF] and weak boundary layer [START_REF] Bikerman | The Science of Adhesive Joints. s.l[END_REF]. All these theories show that the adhesion between two materials is linked with the interface as outlined by Mittal (20). Further studies explain that the quality of adhesion between the fabric and the matrix is a key parameter to obtain good mechanical performance of the composite [START_REF] Schultz | The Role of the Interface in Carbon Fibre-Epoxy Composites. s.l[END_REF]. As a consequence, several treatments have been developed to enhance the quality of the interface. Previous researches used different treatments for the fabric such as atmospheric air or corona plasma treatments to modify the surface energy of the fabric and increase the adhesion of the coating. For example, Leroux et al. showed that the adhesion of a silicon resin on a polyester fabric after atmospheric air plasma treatment has been multiplied by two [START_REF] Leroux | Atmospheric air plasma treatment of polyester textile materials[END_REF]. 
There are numerous other papers that deal with plasma treatment and their influence on the hydrophilicity increase of the treated fabric [START_REF] Belgacem | Surface Modification of Cellulose Fibres[END_REF] [START_REF] Kan | Effect of atmospheric pressure plasma treatment on wettability and dryability of synthetic textile fibres[END_REF] [START_REF] Garg | Improvement of adhesion of conductive polypyrrole coating on wool and polyester fabrics using atmospheric plasma treatment[END_REF] [START_REF] Oktem | Modification of Polyester and Polyamide Fabrics by different in-situ Plasma Polymerization Methods[END_REF] [START_REF] Yip | Surface Modification of Polyamides Materials With Low Temperature Plasma[END_REF] [START_REF] Morent | Non-thermal plasma treatment of textiles[END_REF] [START_REF] Ferrero | Wettability measurements on plasma treated synthetic fabrics[END_REF]. However Novak et al. showed that the shelf-life of these treatments for a polypropylene material with polyvinyl acetate was only about 50 days due to the loose of the surface oxidation [START_REF] Novak | Investigation of long-term hydrophobic recovery of plasma modified polypropylene[END_REF]. Other research used corona treatments [START_REF] Belgacem | Effect of Corona Modification on the Mechanical Properties of Polypropylene/ Cellulose Composites. s.l[END_REF] [START_REF] Ragoubi | Contribution à l'amélioration de la compatibilité interfaciale fibres naturelles/matrice thermoplastique via un traitement sous décharge couronne[END_REF] [START_REF] Ragoubi | Impact of corona treated hemp fibres onto mechanical properties of polypropylene composites made thereof[END_REF] or chemical treatments [START_REF] Bledzki | Composites reinforced with cellulose based fibres. s.l[END_REF] [START_REF] Zl | Chemical coupling in wood fibre and polymer Composites: A review of coupling agents and treatments[END_REF] to increase the wettability of the fabrics. One possibility is to increase the extrusion temperature in order to modify the viscosity and the surface energy. The problem is that TPU has a short range of extrusion temperatures and an increase of only 5°C can generate a drop in the viscosity of the polymer making it impossible to calender on a fabric. Hence we propose to make a blend with Low Density Polyethylene (LDPE) and Linear Low Density PolyEthylene grafted Maleic Anhydride (LLDPE-g-Ma). The aim of this blend is to extrude the sheet at higher temperatures in order to get a better adhesion with a polyester fabric. Because of their high difference of polarities and their high interfacial tension, Polyurethane and Polyethylene are two immiscible materials. However, previous researches explain that it is possible to have a compatibility if the PE is grafted with maleic anhydride [START_REF] Potschke | Blends of Thermoplastic Polyurethane and Maleic-Anhydride Grafted Polyethylene[END_REF] [START_REF] Song | Flow Accelerates Adhesion Between Functional Polyethylene and Polyurethane[END_REF] or secondary amine [START_REF] Song | Flow Accelerates Adhesion Between Functional Polyethylene and Polyurethane[END_REF]. These compatibilizers are capable to stay at the interface and entangling with both sides. The final material is then prepared by calendering the sheet of TPU/LDPE blend on a polyester fabric. According to the literature, there is no previous research about the influence of this blend on the plastic sheet adhesion on a polyester fabric. However we can notice that Jie Song et al. 
show that the adhesion of a polyurethane paint was greater on a polyolefin/TPU blend substrate than on a simple PO substrate [START_REF] Song | Polyethylene/polyurethane blends for improved paint adhesion. s.l[END_REF]. This paper proposes a new solution to enhance the adhesion of the sheet on the fabric. It suggests a modification of the plastic sheet that is extruded before being calendered. Thanks to an experimental design, the influences of extrusion temperature as well as the influence of LDPE and PE-g-Ma percentage in the blend on the peel strength and the mechanical performance of the film is analyzed. The value of the adhesion of the sheet on the fabric is the main proof of the influence of the blend. The sheet viscosity, the miscibility of the LDPE and PE-g-Ma in the TPU, the penetration depth of the coating in the yarns of the fabrics and the FTIR analysis are used to analyze and explain the results of the adhesion. Materials and Methods Materials Characterization The coating As presented previously, several blends have been realized with Low Density Polyethylene (LDPE), Thermoplastic PolyUrethane (TPU) and Linear Low Density Polyethylene Grafted Maleic Anhydride (LLDPE-g-Ma). References and properties of LDPE, LLDPE-g-Ma and TPU are gathered in Table I and Table II. The fabric The coating has been calendered on the polyester fabric described in Table III. This fabric (Figure 1) has been woven without the use of any additives like sizings on the surface of the yarns to avoid a decrease of the wetting capacity [START_REF] Luo | Surface and wettability property analysis of CCF300 carbon fiber with different sizing or without sizing[END_REF]. Experiments Methods Experimental design Introduction to experimental designs An experimental design has been used to minimize the number of experiments. In a first part, the experiments have been conducted with only Temperature and LDPE amount parameters. Then the blends giving the best compromise between adhesion and mechanical characterization have been adapted by adding of LLDPE-g-Ma. Central Composite Design (CCD) To define the optimum settings of these two factors level which can significantly influence the adhesion and mechanical characterization, a Central Composite Design (CCD) was applied in the experimental domain presented in Table IV (40) (41) [START_REF] Sanz | Optimization of dimethyltin chloride determination by hydride generation gas phase molecular absorption spectrometry using a central design composite design[END_REF]. Actually, the most popular response surface method based on a rotatable central composite design with five levels and two factors was applied to investigate the influence of process factors on multiple responses including: adhesion (Y1) and mechanical characterization (Y2). In CCD designs, all process variables are studied in five levels (-a, -1, 0, +1, +a); each of these values is a code for an original variable value. Coding the variable levels is a simple linear transformation of the original measurement scale so that the highest value of the original variable becomes (+1) and the lowest value becomes (-1). The average of these two values is assigned to (0) while the values of -a and +a are applied to find the minimum and the maximum values. The a values depend on the number of variables studied (2 in our case) and for two, three, and four variables, they are 1.41, 1.68, and 2.00, respectively. All design descriptions are in terms of coded values of the variables. 
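As an illustration of how such a rotatable two-factor CCD can be laid out, the sketch below generates the coded design points (factorial, axial with α = 1.41, and centre runs) and converts them to natural units for the two factors studied here (die temperature and LDPE content). The factor bounds and the assumption of four centre replicates are made up for the example (four centre runs are simply what makes the total consistent with the twelve runs of Table X); the real levels are those of Tables IV–V.

```python
import numpy as np
import pandas as pd

# Assumed experimental domain (illustrative only; the real bounds are in Table IV)
T_low, T_high = 174.0, 206.0      # die temperature, degC, at coded levels -1 / +1
PE_low, PE_high = 15.0, 29.0      # LDPE content, wt%, at coded levels -1 / +1
alpha = 1.41                      # axial distance for a rotatable 2-factor CCD
n_center = 4                      # assumed number of centre replicates

# Coded design: 4 factorial points, 4 axial points, centre replicates (12 runs total)
factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial     = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center    = [(0.0, 0.0)] * n_center
coded = np.array(factorial + axial + center)

def to_natural(x_coded, low, high):
    """Convert a coded level to natural units (linear transformation)."""
    mid, half = (high + low) / 2.0, (high - low) / 2.0
    return mid + x_coded * half

design = pd.DataFrame({
    "X_T (coded)":  coded[:, 0],
    "X_PE (coded)": coded[:, 1],
    "T (degC)":     to_natural(coded[:, 0], T_low, T_high),
    "LDPE (wt%)":   to_natural(coded[:, 1], PE_low, PE_high),
})
print(design.round(1))
```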
The independent variables for this study and their related levels and codes are shown in Table V. The behaviour of each response is modelled by a second-order polynomial of the coded factors, which for the two factors considered here reads Y = β0 + β1·X_T + β2·X_PE + β11·X_T² + β22·X_PE² + β12·X_T·X_PE + ε, where βi is the coefficient of the linear terms Xi, βii is the coefficient of the quadratic terms Xi², and βij is the coefficient of the interaction terms Xi·Xj. Each variable in the model has a coefficient. The numerical magnitude of the standardized model coefficients reveals their importance in the obtained model and in the modelled response (among standardized coefficients, the larger values are the more effective). Furthermore, negative coefficients represent an inverse effect of the corresponding factor on the modelled response. In addition to the quadratic equation, the model can also be displayed as three-dimensional plots: the CCD outputs include contour and 3D response surface plots, which visualize the results of the experiment and enable the researcher to examine visually the relationships between the variables and the response. A statistical test of the model fit is made by comparing the variance due to the lack of fit to the pure error variance using the F-test. The fitted model is considered adequate if the variance due to the lack of fit is not significantly different from the pure error variance [START_REF] Lewis | Pharmaceutical Experimental Design[END_REF] (44) [START_REF] Myers | Response Surface Methodology: Process and Product Optimization Using Design Experiments[END_REF]. The adequacy of the model is further tested using three check points [START_REF] Lewis | Pharmaceutical Experimental Design[END_REF]. For the multi-response optimization, each response is converted into an individual desirability d_i and the d_i are combined into a global desirability D (typically their geometric mean); taking into account all requirements for all responses, we can thus choose the conditions on the design variables that maximize D. A high value of D is obtained only if all individual desirabilities d_i are high. The values of D computed from the observed responses allow us to locate the optimal region.
Blend preparation
The LDPE/TPU blends have been prepared in two steps. First, the pellets of TPU and LDPE (and/or LLDPE-g-Ma) were thoroughly mixed in a container; film extrusion experiments were then carried out using a single screw extruder fitted with a Maddock mixer.
Extrusion-calendering process
Extrusion has been performed with a laboratory-scale Polylab system composed of a HAAKE RheoDrive4 motor coupled with a HAAKE Rheomex 19/25 OS single screw extruder with a Maddock mixer. The system was piloted by PolySoft OS software to set and control the temperature zones and the screw speed. The extruder unit was equipped with a fish-tail die, 100 mm wide and 450 µm thick, to process the molten polymer into a film. Table VI collects all the process parameters for the extrusion-calendering. In order to be able to test the mechanical performance of the films that were calendered on the fabric, the same films were also prepared with the same parameters but without fabric. All prepared film parameters are summarized in Table VII. Experiments 13 to 15 included grafted maleic anhydride; they were performed according to the results of the experimental design (see Results). The films produced had a thickness of 200 µm, corresponding to the gap between the compressive heating rolls of the 3-roll laboratory calender. This is the lowest thickness that could be obtained with this calender, and it was chosen in order to obtain the lightest possible material. The different heating zones of the extruder are presented in Figure 3 (47).
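Referring back to the second-order polynomial introduced above, a minimal sketch of how such a model can be fitted to the measured responses by ordinary least squares is given below; the twelve coded runs mirror the CCD layout, but the response values are placeholders, not the data of Table X.

```python
import numpy as np

# Coded design matrix (12-run CCD) and placeholder peel-strength responses (N/50mm)
X1 = np.array([-1, 1, -1, 1, -1.41, 1.41, 0, 0, 0, 0, 0, 0])      # coded temperature
X2 = np.array([-1, -1, 1, 1, 0, 0, -1.41, 1.41, 0, 0, 0, 0])      # coded LDPE content
y  = np.array([13.0, 27.4, 13.0, 26.0, 12.5, 25.0, 18.0, 20.0,
               22.0, 21.5, 22.3, 21.8])                           # placeholder values

# Model matrix for Y = b0 + b1*X1 + b2*X2 + b11*X1^2 + b22*X2^2 + b12*X1*X2
M = np.column_stack([np.ones_like(X1), X1, X2, X1**2, X2**2, X1 * X2])
beta, res, rank, _ = np.linalg.lstsq(M, y, rcond=None)

def predict(x1, x2):
    """Predicted response at a coded point of the design space."""
    return beta @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

print("fitted coefficients b0..b12:", np.round(beta, 2))
print("predicted response at the centre point:", round(predict(0.0, 0.0), 1))
```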
During the test, the force has been recorded as a function of the displacement thanks to TestXpert® II software (Zwick). Reported data are the average of 10 samples. The tensile strength was obtained by dividing the force applied at the breaking by the initial section of the sample. Analysis of the Peel strength Analysis of the shear viscosity of the blend The dynamical rheological measurements have been performed on disks using a strain controlled rheometer ARES (TA Instrument) equipped with a 25 mm parallel plates geometry in continuous shear mode at the same temperatures than those used for the different calendering tests. According to prior experiments consisting in determining the linear viscoelastic domain for which the behavior of the polymer does not depend of the strain, the frequency sweep at strain was kept at ε=3% and the pulsation ω was in the range of 0.1 to 100 rad/s. Nitrogen was used to decrease the ageing of blends. Disk samples of 1.8 mm thick and 25mm wide were prepared by injection. The gap was set at 1.5mm. The result is the average value of three samples. Analysis of the coated textile sections and of the blends morphology The section of the coated textile has been analyzed with a Scanning Electron Microscope using the detection of backscattered electrons and a magnification of x500. The penetration of the coating on the fabric has been measured as following. Red arrows in Figure 5 indicate the depth of coating penetration: The analysis of the compatibility between TPU and PE was realized by the observation of the presence of PE particles in TPU with the same Scanning Electron Microscope using the detection of backscattered electrons and a magnification off x1000. The morphology gave information about the compatibility between these two materials. Analysis of the chemical composition Infrared measurements at room temperature were performed on a Perkin-Elmer Spectrum One FT-IR (Fourier Transformed Infrared) Spectrometer with 32 scans and a resolution of 2 cm -1 in the absorption mode to determine the chemical composition of the different blends. Results Peel Strength of TPU/LDPE blends According to For a same amount of LDPE, an increase of the extrusion temperature seems to increase the peel strength. For example, for a same amount of 19%wt of LDPE (samples 2 and 3) but different extrusion temperatures (respectively 174°C and 196°C), the peel strength is doubled (13.02N/50mm and 27.43N/50mm respectively). The same trend is observed with samples 1 and 8, while the opposite trend is observed with samples 4 and 7. This can be explained by the very high die temperature employed for sample 7 that may cause a degradation of the blend. This is further correlated to an important decrease of the tensile strength. Conversely and for a same extrusion temperature the amount of LDPE seems to have slight influence on the peel strength. For example, experiments 2 and 9 have been both performed at 174°C with respectively 19%wt and 15%wt of LDPE but the peel strengths are nearly the same (respectively 13.02N/50mm and 13N/50mm). This observation is also true for experiments 4 and 6 and experiments 3 and 5. Tensile strength of TPU/LDPE blends Tensile strengths of the different blends are displayed in Table IX. Tensile strengths of neat TPU at 175°C is about 23 MPa while the tensile strength of neat LDPE depends of the process temperature (24.6MPa and 42.6 MPa for extrusion temperatures of, respectively, 174°C and 190°C. 
The difference can be due to the partial fusion of LDPE pellets at a temperature of 174°C while the fusion is complete at 190°C). Excepted for experiment 9, the tensile strength of all blends is lower than the tensile strength of neat TPU and neat LDPE which means that there is an incompatibility between TPU and LDPE (36) [START_REF] Song | Flow Accelerates Adhesion Between Functional Polyethylene and Polyurethane[END_REF]. For an extrusion temperature of 210°C, the film seems to be degraded. The corresponding tensile strength is only 11.6 MPa while it is more than 14 MPa for all other blends. It is important to note that the tensile strength of TPU/LDPE blends seems to depend on temperature. Indeed, except for experiment 3, an increase of the extrusion temperature leads to a decrease of the tensile strength. Analysis of Experimental design optimization Table X shows 12 different experimental runs of CCD and the corresponding response data. Model equations Results of experiments of the CCD design are used to estimate the model coefficients (without using the check points). The fitted models expressed in coded variables are represented by Eqs. ( 3)-( 4): -Interfacial adhesion (Y1): ( -Tensile strength (Y2): (4) Statistical analysis and validation of the models The analysis of variance for the fitted models showed that in all cases, the regression sum of squares was statistically significant (their p-value is less than 0.05) and the lack of fit is not significant (43) [START_REF] Mathieu | Plans d'experiences: Application à l'entreprise[END_REF]. The measured values were very close to those calculated using the model equations. Indeed, the differences between calculated and measured responses were not statistically significant when using the t-test as shown in The examination of all the results obtained by means of the isoresponse curves, allows us to deduce that it is not obvious how one can find experimental conditions that can optimize both a b the responses simultaneously. The desirability functions allow to reach a compromise which can better satisfy conflicting objectives. Optimization The To choose the best coordinates of the acceptable compromise, we take into account the economic and process aspects of the mixture preparation. Thus, the acceptable compromise is selected at the point: X T =190°C and X PE =29% giving an interfacial adhesion of 28N/50mm and a tensile strength which reaches a value of 24 MPa. The choice has been made in the purpose of promoting the flexibility and the fuel resistance properties of the material. The LDPE has poor fuel resistance properties so it is necessary to have a material with a major part of TPU. Thanks to the experimental design, we have chosen the composition and the extrusion temperature to have the highest value of peel strength but also a high value of tensile strength (>22MPa). However the value of tensile strength of the blend for the chosen extrusion temperature was still lower than that of the 2 components due to the incompatibility of TPU and LDPE [START_REF] Song | Polyethylene/polyurethane blends for improved paint adhesion. s.l[END_REF]. For the next part of our study, we added 3 other compositions with maleic anhydride which is a compatibiliser for these blends [START_REF] Song | Flow Accelerates Adhesion Between Functional Polyethylene and Polyurethane[END_REF]. The LLDPE-g-Ma was not used in the first part of the study because of the important price of it compared to the price of LDPE. 
The blend 13 was composed of 71%wt TPU and 29%wt of LLDPE-g-Ma. We substituted the LDPE by LLDPE-g-Ma to check its influence on the peel strength. The blend 14 was composed of 71%wt of TPU, 26%wt of LDPE and 3%wt of LLDPE-g-Ma. The purpose of this blend was to get a blend similar to the best one determined before with the addition of 3% of Ma to increase the tensile strength. Then the experiment 15 was the neat LLDPE-g-Ma which was a reference like experiments 10, 11 and 12. The peel strength and tensile strength of experiments 13, 14 and 15 are summarized in the Table XIII. The analysis of the film surface of the blends shows three different morphologies: nodular, co-continuous and continuous (Figure 9). For all the LDPE/TPU blends (experiments 1 to 9), the morphology is nodular (Figure 9.A) which is a proof of the immiscibility of the LDPE in TPU. Also the smooth interface between the 2 components indicates that there is a poor interfacial adhesion. A co-continuous morphology was observed for LDPE/LLDPE-g-Ma/TPU (experiment 14) (Figure 9.B) and almost continuous for LLDPE-g-Ma/TPU blend (experiment 13) (Figure 9.C) which means that there is a better miscibility between TPU and LLDPE-g-Ma. As said previously in the introduction, the g-Ma is a compatibiliser for LDPE and TPU so these results are in agreement with the literature [START_REF] Potschke | Blends of Thermoplastic Polyurethane and Maleic-Anhydride Grafted Polyethylene[END_REF]. On the pictures it is clear that the polymer is oriented in one direction. This is due to the process and especially to the rolls of the calendering unit. Analysis of the chemical composition FTIR Spectrum of neat LDPE and neat LLDPE-g-Ma are very similar (Figure 10). The Ma group can be seen around 1700 cm -1 and 1800 cm -1 (Figure 11) as explained on previous research for PP-g-Ma (48) (49) and EPDM [START_REF] Barra | Maleic Anhydride Grafting on EPDM: Qualitative and Quantitative Determination[END_REF]. Figure 12 shows the FTIR spectrum of experiment 2 and 6. Although the peel strength is very different (sample 6 displayed a peel strength almost 3 times higher than sample 2), the 2 spectrum are similar. No difference was observed on the FTIR spectrums of the blends 1 to 9. The FTIR spectrums of experiment 6, 13 and 14 are presented in Figure 12 and 13 and the same conclusion can be made. The percentage of Ma in the blends 13 and 14 is so weak that it is almost not visible on the FTIR experiment except for the peak around 1730cm -1 as it can be seen on the Figure 13. The large A B B C B peak around 1700 cm -1 is a peak from TPU corresponding to the urethane group (C=O) [START_REF] Liu | The effects of the molecular weight and structure of polycarbonatediols on the properties of waterborne polyurethanes. s.l[END_REF] and cannot been attributed to Ma. In conclusion to these analyses, no significant difference on the FTIR spectrum has been observed between all the blends. It means that there is no new-bonds creation by mixing TPU and LLDPE or LLDPE-g-Ma so the noted better adhesion is not due to a chemical link between the fabric and the coating. Analysis of the shear viscosity of the blend The viscosity of the sheet is important because the ability of the polymer to penetrate inside the yarn depends of this viscosity. The shear viscosity given in the Table XIV is the viscosity of the different blends at the corresponding extrusion temperatures for a shear rate between 10 s -1 and 100 s -1 . 
These shear rates are those corresponding to the calendering process according to the literature (52) (53) [START_REF] Cheremisinoff | Polymer mixing and extrusion technology[END_REF]. At the same die temperature, the viscosity of neat LDPE and neat LLDPE-g-Ma is 7 times higher than the viscosity of neat TPU. Although the viscosity of LLDPE-g-Ma is lower than LDPE ones, the blend of LLDPE-g-Ma/LDPE/TPU (experiment 14) has a viscosity twice higher than that of experiment 6 for a same temperature of process. This must be linked with the miscibility of the different materials and it should be compared with the mechanical performance and the morphology of the blends. For the same LDPE/TPU blends, the higher the process temperature is, the lower is the viscosity. For example for an amount of 22.5%wt of LDPE and a shear rate of 10s -1 , the viscosity is about 421 Pa.s for a temperature of 190°C and 47 Pa.s for a temperature of 210°C. Analysis of the coated textile sections The analysis of the coated textile section gives important information about the depth of penetration of the polymer between the filaments of the yarns that compose the fabric. The Table XV gives the depth of penetration for each experiment. The coating has a better penetration when extruded at high temperature especially if the temperature is higher than 190°C. It could be due to a difference of surface energy or viscosity. However it is important to notice that the diameter of a filament is 23 µm so the coating never penetrates inside the fabric but always keeps on the surface (Figure 14). Discussion Analysis of the peel strength increase According to the literature, the coating peel strength on a substrate is directly related to the six adhesion theories mentioned in introduction. However for this case, the theories of the weak boundary layer, electrostatic and diffusion cannot be used to explain the results. FTIR shows that no new chemical bonds have been created in any of the different blends. Indeed, despite the large difference of peel strength, all the FTIR spectrums are identical. It A B B means that the peel strength difference is not due to the creation of a new chemical bond between the fabric and the coating. Regarding the theories of wetting and mechanical interlocking, the observations of the coating penetration depth in the fabric give good information. As expected, the lower is the viscosity, the better is the penetration. According to the results, peel strength seemed to increase when coating penetration was higher than 7µm For example, on sample 7, the penetration is about 14µm and the peel strength is 21.2 N/50mm while penetration is only 2µm for sample 10 and the corresponding peel strength 1.8 N/50mm. But this trend could not be generalized; in fact sample 6 displayed a 30 N/50 mm peel strength with a penetration depth of only 7µm. The conclusion is that the coating penetration depth in the fabric, and the related viscosity, has a strong influence on the peel strength but is not the only parameter involved. Actually, the temperature seems to have a strong impact on the peel strength. The temperature dependence of surface energy has been shown by previous papers (55) (56) (57).This modification of surface energy could lead to a better affinity between the fabric and the coating. As said previously, for a same amount of LDPE an increase of the extrusion temperature leads to an increase of the peel strength until a maximum value for a temperature of 190°C. 
For higher temperatures (experiment 1, 3, 5 and 7), the peel strength decreases as a consequence of the polymer degradation, which could be observed by the decrease of the tensile strength. The low significance value obtained with the experimental design means that there is a good correlation between the model and the experiments. The experimental design also allows us to determine the best coating composition and the best extrusion temperature to get the highest possible value of peel strength and a good value of tensile strength. Maleic anhydride influence analysis As observed previously, the complete substitution of LDPE by LLDPE-g-Ma has a negative impact on the peel strength, but a substitution of only 3%wt of LDPE by LLDPE-g-Ma did not degrade this property while it increases its tensile strength. The LLDPE-g-Ma is needed to get good peel strength and also good tensile properties. Actually, the addition of maleic anhydride in the blend created a modification of the initial nodular morphology to a co-continuous morphology for blends with 3%wt of Ma and to almost continuous morphology for blends with 29%wt of Ma. This is in perfect accordance with the literature (36) (37) [START_REF] Song | Polyethylene/polyurethane blends for improved paint adhesion. s.l[END_REF]. Other tests like tear strength or measurements of interfacial tensions using the Palierne's model have been performed on the blends. The results will be published soon in another paper. The interfacial decreases significantly with the addition of maleic anhydride. This is a proof of the compatibilisation of the blend. One interesting point is the difference of viscosity between experiment 6, 13 and 14 which have the same amount of LDPE or LLDPE-g-Ma. At the same temperature, LLDPE-g-Ma has a lower viscosity than LDPE. But also for a same temperature, the viscosity of the blend 14 made of 26%wt of LDPE and 3%wt of LLDPE-g-Ma is 2 times higher than the viscosity of the blend 6 made with 29%wt of LDPE and blend 13 with 29%wt of LLDPE-g-Ma. This increase of viscosity means that there is a good compatibility between LDPE and LLDPE-g-Ma. The complete substitution of LDPE by LLDPE-g-Ma seems to not have any impact on the viscosity of the blend at 190°C as it was found by comparing experiment 6 and 13. Conclusion In the present paper, the impact on peel strength of the addition of low density polyethylene and linear low density polyethylene to a thermoplastic polyurethane sheet calendered on a polyester fabric has been studied. This study has been divided into two parts. In the first part, the study has shown that the addition of LDPE in the TPU coating has no direct impact on the peel strength while the die temperature has a strong influence. It has been shown that an increase of the extrusion temperature leads to an increase of the peel strength. However it is important to note that the best peel strength is obtained for an extrusion temperature of 190°C which is not the highest temperature. This must be due to a degradation of the film at higher temperature as shown by analyzing the tensile strength. The increase of the peel strength can be attributed to several phenomena among which the penetration of the coating in the fabric which creates a mechanical interlocking, and the extrusion temperature which create a different surface energy of the coating resulting in a better affinity with the fabric. 
This theory will have to be proved for our study in a future work by using a pendant drop experiment as previously realized by Kwok et al [START_REF] Kwok | Study on the surface tensions of polymer melts using axisymmetric drop shape analysis[END_REF]. In a second part, the influence of maleic anhydride as a compatibiliser between TPU and LDPE has been studied with the addition of LLDPE-g-Ma. It has been shown that the substitution of LDPE by LLDPE-g-Ma has a negative impact on the peel strength but hugely increases the tensile strength. However the substitution of only 3%wt of LDPE (among 29%wt) by LLDPE-g-Ma has no impact on the peel strength but still increases the tensile strength. In future work, this experimental investigation will be continued firstly with a study of the extruded coating sheet surface energy depending on the die temperature. The effect of the temperature on the surface energy will be helpful to confirm the theory proposed in our conclusion to explain the better adhesion. Also it will be important to focus on the other kind of bonds among polymers and interfaces to explain the better adhesion. Figure 1 : 1 Figure 1: Modelization with a TexGen© software of the plain fabric used in the study. The search for experimental conditions which optimize the five responses simultaneously requires the use of the desirability function approach. The method consists in transforming the measured property of each response to a dimensionless desirability scale d i defined as a partial desirability function. This makes possible the combination of the results obtained for properties measured on different scales. The scale of the desirability function ranges between d =0, for a completely undesirable response, and d =1, if the response is at the target value. Once the function d i is defined for each of the responses of interest, an overall objective function (D), representing the global desirability function is calculated by determining the geometric mean of the individual desirabilities. Therefore, the function D over the experimental domain is calculated using Eq. (2) as follows (43) (45) (46): The extruder was connected to the air network which provides ambient temperature air to cool the hopper zone. The calendering was performed on only one face of the fabric using a 3-roll laboratory calender from THERMO SCIENTIFIC according to Figure 2. The rolls were 200mm wide and were cooled with a HAAKE Phoenix II P1 thermostat (THERMO SCIENTIFIC) with oil and regulation pump speed. Figure 2 : 2 Figure 2: Scheme of the calendering process. Figure 3 : 3 Figure 3: Scheme of the extruder and temperature zones (47). Hopper Zone Zone 1 1 Zone 2 DieThe peel strength of the coating sheet on the fabric has been determined by a peel test carried out on a Zwick Z010 according to the standard NF EN ISO 2411. Samples were cut from the middle of the coated fabrics to avoid edge effects. The coating was first separated from the fabric using a tweezer and a cutting blade. The 50 mm width coating sheet and the fabric were clamped separately on the machine with a distance of 50 mm between grips (Figure4). A crosshead speed of 100 mm/min and a 0.5 kN cell was chosen. During the test the force was recorded as a function of displacement thanks to TestXpert® II software (Zwick). Reported data are the average of five samples. 
Figure 4 : 4 Figure 4: Scheme of the peel strength test according to standard NF EN ISO 2411 2.2.5 Analysis of the mechanical properties of the sheet Figure 5 : 5 Figure 5: Measurement method of the coating penetration on the fabric. 3. 3 . 3 Figure 6 : 336 Interpretation of the response surface models Following the validation of the model, the isoresponse curves were drawn for each response by plotting the response variation against both the factors. (Temperature in °C vs PE amount in wt %). If zones of interest boundaries are set (according to the targets: adhesion and tensile strength), these curves are very useful. Below are discussed the results corresponding to the two studied responses: Interfacial adhesion ( The examination of interfacial adhesion of TPU/LDPE mixture isoresponse curves (Figure 6) shows that the high values of T and PE give a negative effect on the response. The maximal interfacial adhesion 29.48 N/50 mm) is reached at a PE ratio in the range 38-39% and a mixture temperature of 192.5°C.  Tensile strength ( The isoresponse curves in Figure 6 show that the tensile strength of the blends is almost the same for temperatures lower than 200°C and a PE amount of 50%. Actually, the range of is between 19 and 22 N. Beyond this value, tensile strength sharply increases to reach 42N. In addition, the temperature has not an important effect on the response in the studied domain in contrast to the PE amount.  (a-b): Isoresponse curves in the plane: (a) Interfacial adhesion (Y1) and (b) Tensile strength (Y2). partial desirabilities of the two responses established based on the study of the behavior of some TPU/LDPE mixtures are shown in Figure 7. A target is fixed at 28 N / 50 mm and 25 MPa for the interfacial adhesion and tensile strength responses respectively. After calculation by the NEMRODW 2015 software, a three-dimensional plot of the global desirability function D can be represented as shown in Figure 8. We can note the rather flat area corresponding to the optimal conditions (D=0.84). Figure 7 : 7 Figure 7: Individual desirability function of the responses (d1: Interfacial adhesion (N/50mm) and d2: Tensile strength (MPa)). Figure 8 : 8 Figure 8: Response surface of the global desirability function. Figure 9 : 9 Figure 9: SEM pictures of the film morphologies for: A: nodular (experiment 5), B :cocontinuous (experiment 14), C: continous (experiment 13). Figure 10 : 10 Figure 10: FTIR spectrum of neat LDPE and neat LLDPE-g-Ma. Figure 11 : 11 Figure 11: FTIR spectrum (1650-1900cm -1 ) of neat LDPE and neat LLDPE-g-Ma. Figure 12 : 12 Figure 12: FTIR spectrum of experiment 2, 6, 13 and 14. Figure 13 : 13 Figure 13: FTIR spectrum (1650-1900cm-1) of neat TPU and experiment 6, 13 and 14. Figure 14 : 14 Figure 14: sections of the coated textile: A: experiment 11 and B: experiment 7. Table I : Main properties of LDPE (Low Density Polyethylene) and LLDPE-g-Ma (Linear Low Density Polyethylene grafted maleic anhydride). I LDPE LD 171 BA LLDPE-g-Ma OREVAC OE825 Manufacturer EXXON MOBILE® OREVAC® by ARKEMA Density 0.929g/cm 3 0.913g/cm 3 Melt Index (190°C/2.16kg) 0.55g/10min 3g/10min Peak Melting Temperature 114°C 118°C Additives no Maleic anhydride Table II : Main properties of TPU. II IROGRAN A 90 P 5055 DP Manufacturer HUNTSMAN® Isocyanate Aromatic Alcohol Polyether Density 0.7g/cm 3 Melt Index (190°C/10kg) 42g/10min Peak Melting Temperature 113°C Additives no Recommended injection temperature 190°C-200°C Table III : Main properties of the fabric. 
Composition Polyester Weaving Plain Additives on the surface No Number of yarns per cm : weft 18 Number of yarns per cm: warp 18 Thickness (µm) 170 Fabric weight (g/m 2 ) 105 Number of filaments per yarn 48 Filament diameter (µm) 23 Yarn count (g/km) 28 Mechanical properties: weft (daN/5cm) 155 Mechanical properties: warp (daN/5cm) 155 III Table IV : Composite design. IV Variable Factor Unit Center Step of variation X T Temperature °C 192,50 12,38 X PE PE amount wt% 50,00 35,36 Table V : Original and coded values of the independent variables of the extraction process. Independent variables Symbols Coded values V -1.41 -1 0 1 1.41 Original values Temperature (°C) T 175 180 192.5 205 210 PE amount (%) PE 0 16.67 50 85.36 100 CCD's are designed to estimate the coefficients of a quadratic model. To get the best response surface, rotatable CCDs are commonly applied. Rotatability implies that the variation in the response prediction will be constant at a given distance from the center of the design. The design matrix for a rotatable CCD for 2 variables (each one evaluated at 5 levels), involves 9 design points or experiments with adding of 3 additional experiments called check points (runs nos. 10 to 12) in order to subsequently check the validity of the fitted models. After performing 12 different experiments, a quadratic model was fitted to the response data using Nemrodw 2015 software. The whole table data (Table VI) is presented in the part Results. The complete quadratic model for k variables contains (k + 1)(k + 2)/2 parameters and is given by: Table VI : VI Process parameters. Parameters Value Die gap 450µm Die temperature [174°C; 209°C] (+/-1°C) (see Erreur ! Source du renvoi introuvable.) Extrusion speed 60 rpm Calendering speed 6 rpm Temperature of the thermoregulated rolls 40°C Distance between the die and the rolls 20mm Table VII : Composition and extrusion temperatures of the different blends. VII Sample 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 number Hopper zone 211 180 202 196 202 196 216 187 180 170 180 196 180 196 Zone 1 211 180 202 196 202 196 216 187 180 172 180 196 180 196 Zone 2 206 175 197 191 197 191 211 182 175 174 175 191 175 191 Die 199 174 196 190 196 190 210 181 174 175 174 190 174 190 % TPU 83.4 81 81 77.5 85 71 77.5 83.4 85 100 0 0 0 71 71 % LDPE 16.6 19 19 22.5 15 29 22.5 16.6 15 0 100 100 0 0 26 % PE-g-Ma 0 0 0 0 0 0 0 0 0 0 0 0 100 29 3 Table VIII : Peel strength of neat TPU, neat LDPE and TPU/LDPE blends. VIII Sample number 1 2 3 4 5 6 7 8 9 10 11 12 LDPE ratio 16.6 19 19 22.5 15 29 22.5 16.6 15 0 100 100 Die temperature 199 174 196 190 196 190 210 181 174 175 174 190 (°C) Peel strength 22.3 13 27.4 30 29.1 30 21.2 17 13 7 1.8 4.5 (N/50mm) Standard 2.14 0.53 2.8 1.61 2.62 2.06 2.7 0.3 0.78 0.31 0.22 0.46 deviation Table VIII, peel strengths of neat TPU do not exceed 7N/50mm. For the blends given in the Table VIII, an increase between 200% can be highlighted for experiment number 9 (15% of LDPE and extrusion temperature 174°C) and 430% for experiments 4 and 6 (respectively 23% and 29% of LDPE and extrusion temperature 190°C). However neat LDPE (at both 174°C and 175°C) also exhibit a very low peel strength which means that the increase of the peel strength of the blends is not only due to the LDPE. Table IX : Tensile strength of neat TPU, neat LDPE and TPU/LDPE blends. 
IX Sample number 1 2 3 4 5 6 7 8 9 10 11 12 Amount of 16.6 19 19 22.5 15 29 22.5 16.6 15 0 100 100 LDPE (%) Die temperature 199 174 196 190 196 190 210 181 174 175 174 190 (°C) Tensile strength 14.3 22.5 17.9 13.9 17.8 18.6 11.6 18.7 25.5 23.1 24.6 42.6 (MPa) Standard 0.82 0.99 0.7 0.77 1.1 0.85 0.92 1.04 1.48 2.01 1.69 0.74 deviation Table X : CDD design matrix along with the experimental responses. No X 1 : Temperature (°C) Polyethylene amount (%) Response Y1(interfacial adhesion) (N/50 mm) Response Y2 (tensile strength) (MPa) X 1 199 16.0 22.31 14.28 2 174 19.0 13.02 22.46 3 196 19.0 27.43 17.90 4 190 23.0 30.00 13.87 5 196 15.0 29.07 17.75 6 190 29.0 30.00 18.62 7 210 22.5 21.19 11.65 8 181 16.6 17.00 19.00 9 174 15.0 13.00 26.00 10 175 0.0 7.00 23.00 11 174 100.0 1.83 24.58 12 190 100.0 4.53 42.64 Table XI : Analysis of the responses of the CCD design. XI Table XI illustrates the ANOVA corresponding to two responses namely interfacial adhesion ( ) and tensile strength ( ). In addition, Table XII shows the check point results used to validate the accuracy of the models. Source of Sum of df Mean Ratio Significance variation squares square (p-value) (1) Interfacial adhesion R 2 =0.958 & adj R 2 =0.924 Regression 1.09.10 3 5 2.19.10 2 27.59 0.0453*** Residuals 4.76.10 1 6 7.94.10 0 Total 1.14.10 3 11 (2) Tensile strength R 2 =0.932 & adj R 2 =0.875 Regression 6.76.10 2 5 1.35.10 2 16.39 0.193** Residuals 4.95.10 1 6 8.24.10 0 Total 7.25.10 2 11 *significant at the level 95%; **significant at the level 99%; ***significant at the level 99.9%; (NS): non-significant at the level 95%. Table XII : Numerical results for check points. XII Table XII (equivalent Student values in function of both the Ru n Y exp Y calc Y exp -Y calc dU t-test (1) Interfacial adhesion 10 7.00 5.81 1.19 0.80 0.942 11 1.83 1.54 0.29 0.99 1.008 12 4.53 4.91 -0.38 0.98 -1.029 (2) Tensile strength 10 23.00 24.99 -1.99 0.80 -1.557 11 24.58 24.99 0.41 0.99 -1.435 12 42.64 42.11 0.53 0.98 1.407 response). It could be concluded that the second order models were adequate to describe the two response surfaces and could be used as prediction equations in the studied domain. Table XIII : Peel strength and tensile strength of experiments 13, 14 and 15 XIII Sample number LDPE (%) Amount of LLDPE-g-Ma Die temperature (°C) Peel strength (N/50mm) Standard deviation Tensile Strength (MPa) Standard deviation 3.4 Analysis of the compatibility TPU/LDPE, TPU/LLDPe-g-Ma and 13 0 26 29 3 190 190 16.5 30.7 1.87 3.09 31.5 26.51 1.26 0.3 14 Amount of TPU/LDPE/LLDPE-g-Ma 0 100 175 4.8 0.7 27.12 1.35 Table XIV : Shear rate viscosities of neat TPU, neat LDPE, neat LLDPE-g-Ma and of the different blends at the corresponding processing temperatures and for a shear rate between 10s -1 and 100s -1 . XIV Amount of 16.6 19 19 22.5 15 29 22.5 16.6 15 0 100 100 0 26 0 LDPE (%) Amount of LLDPE-g-Ma 0 0 0 0 0 0 0 0 0 0 0 0 29 3 100 (%) Die temperature 199 174 196 190 196 190 210 181 174 175 174 190 190 190 175 (°C) Shear rate 110 481 119 222 112 421 47 447 487 237 3800 2900 445 884 2980 viscosity (Pa.s) -93 -284 -96 -166 -99 -267 -43 -276 -335 -188 -960 -700 -248 -485 -1300 10s -1 -100s -1 Sample 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 number Table XV : Penetration depth of the coating for each blend in the fabric. 
Sample number:             1     2     3     4     5     6     7     8     9    10    11    12    13    14    15
Amount of LDPE (%):       16.6   19    19   22.5   15    29   22.5  16.6   15     0   100   100     0    26     0
Amount of LLDPE-g-Ma (%):  0      0     0     0     0     0     0     0     0     0     0     0    29     3   100
Die temperature (°C):     199   174   196   190   196   190   210   181   174   175   174   190   190   190   175
Penetration depth (µm):    13     6    11    10     9     7    14     8     7     2     5     6     8     6     2
01772611
en
[ "math", "math.math-mp" ]
2024/03/05 22:32:18
2019
https://hal.science/hal-01772611/file/Radon18.pdf
R G Novikov email: [email protected] Non-abelian Radon transform and its applications Considerations of the non-abelian Radon transform were started in [Manakov, Zakharov, 1981] in the framework of the theory of solitons in dimension 2+1. On the other hand, the problem of inversion of transforms of such a type arises in different tomographies, including emission tomographies, polarization tomographies, and vector field tomography. In this article we give a short review of old and recent results on this subject. Introduction We consider the transport equation θ∂ x ψ + A(x, θ)ψ = 0, x ∈ R d , θ ∈ S d-1 , (1) where θ∂ x = d ∑ j=1 θ j ∂/∂x j and A is a sufficiently regular function on R d ×S d-1 with sufficient decay as |x| → ∞. We assume that A and ψ take values in M (n, C) that is in n × n complex matrices. For equation (1) we consider the "scattering" matrix S: S(x, θ) = lim s→+∞ ψ + (x + sθ, θ), (x, θ) ∈ T S d-1 , ( 2 ) where T S d-1 = {(x, θ) ∈ R d × S d-1 : xθ = 0} (3) and ψ + (x, θ) is the solution of (1) such that lim s→-∞ ψ + (x + sθ, θ) = I, x ∈ R d , θ ∈ S d-1 , ( 4 ) where I is the identity matrix. We interpret T S d-1 as the set of all rays in R d . As a ray γ we understand a straight line with fixed orientation. If γ = (x, θ) ∈ T S d-1 , then γ = {y ∈ R d : y = x + tθ, t ∈ R} (up to orientation) and θ gives the orientation of γ. We say that S is the non-abelian Radon transform along oriented straight lines (or the non-abelian X-ray transform) of A. We consider the following inverse problem for d ≥ 2: Problem 1. Given S, find A. Note that S does not determine A uniquely, in general. One of the reasons is that S is a function on T S d-1 , whereas A is a function on R d × S d-1 and dim R d × S d-1 = 2d -1 > dim T S d-1 = 2d -2. In particular, for Problem 1 there is a gauge type non-uniqueness, that is S is invariant with respect to the gauge transforms A → A ′ , A ′ (x, θ) = g -1 (x, θ)A(x, θ)g(x, θ) + g -1 (x, θ)θ∂ x g(x, θ), (5 ) where g is a sufficiently regular GL(n, C)-valued function on R d × S d-1 and g → I sufficiently fast as |x| → ∞. In addition, in particular, for Problem 1 there are Boman type non-uniqueness (see [Bo], [GN]) and non-uniqueness related with solitons (see [N1]). Equation (1), the "scattering" matrix S and Problem 1 arise, for example, in different tomographies (see Sections 2-6, 8), in differential geometry (see Section 7) and in the theory of the Yang-Mills fields (see Section 9). In Sections 2-9 we give a short review of old and recent results on this subject. Classical X-ray transmission tomography Problem 1 arises as a problem of the classical X-ray transmission tomography in the framework of the following reduction: n = 1, A(x, θ) = a(x), x ∈ R d , θ ∈ S d-1 , (6) S(γ) = exp[-P a(γ)], P a(γ) = ∫ R a(x + sθ)ds, γ = (x, θ) ∈ T S d-1 , ( 7 ) where a is the X-ray attenuation coefficient of the medium, P is the classical Radon transform along straight lines (classical X-ray transform), S(γ) describes the X-ray photograph along γ. In this case, for d ≥ 2, S T S 1 (Y ) uniquely determines a Y , ( 8 ) where Y is an arbitrary two-dimensional plane in R d , T S 1 (Y ) is the set of all oriented straight lines in Y . In addition, this determination can be implemented via the Radon inversion formula for P in dimension d = 2; see [R]. In connection with this formula see also Remark 2 in Subsection 2.2. For more information on the classical X-ray transmission tomography and on the classical X-ray transform, see, e.g., [GGG], [Na] and references therein. 
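As a purely numerical illustration of the ray integral in (7), the following minimal sketch approximates P a(γ) for a single ray γ = (x, θ) with xθ = 0, assuming the attenuation a is available as a rapidly decaying function and truncating the line integral to a finite interval; the function names, the truncation length and the Gaussian test case are our own choices, not part of the original text.

```python
import numpy as np

def xray_transform(a, x, theta, s_max=10.0, n_steps=4000):
    """Approximate P a(gamma) = int_R a(x + s*theta) ds for the ray gamma = (x, theta),
    truncating the integral to [-s_max, s_max] and using a simple Riemann sum."""
    x = np.asarray(x, dtype=float)
    theta = np.asarray(theta, dtype=float)
    s, ds = np.linspace(-s_max, s_max, n_steps, retstep=True)
    points = x[None, :] + s[:, None] * theta[None, :]
    values = np.array([a(p) for p in points])
    return float(np.sum(values) * ds)

# Illustrative test: Gaussian attenuation in d = 2; exact value is sqrt(pi)*exp(-0.25)
a = lambda p: np.exp(-np.dot(p, p))
print(xray_transform(a, x=[0.5, 0.0], theta=[0.0, 1.0]))
```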
Single-photon emission computed tomography (SPECT) In SPECT one considers a body containing radioactive isotopes emitting photons. The emission data p in SPECT consist in the radiation measured outside the body by a family of detectors during some fixed time (where expected p is described by P a f defined below). The basic problem of SPECT consists in finding the distribution f of these isotopes in the body from the emission data p and some a priori information concerning the body. Usually this a priori information consists in the photon attenuation coefficient a in the points of body, where this coefficient is found in advance by the methods of the classical X-ray transmission tomography (mentioned in Section 2). Problem 1 arises as a problem of SPECT in the framework of the following reduction [N1]: n = 2, A 11 = a(x), A 12 = f (x), A 21 = 0, A 22 = 0, x ∈ R d , ( 9 ) S 11 = exp [-P 0 a], S 12 = -P a f, S 21 = 0, S 22 = 1, ( 10 ) where a is the photon attenuation coefficient of the medium, f is the density of radioactive isotopes, P 0 = P is defined in (7), P a is the attenuated Radon transform along oriented straight lines (attenuated ray transform), P a f describes the expected emission data, P a f (γ) = ∫ R exp[-Da(x + sθ, θ)]f (x + sθ)ds, γ = (x, θ) ∈ T S d-1 , (11) Da(x, θ) = +∞ ∫ 0 a(x + sθ)ds, x ∈ R d , θ ∈ S d-1 , ( 12 ) where D is the divergent beam transform. In this case (as well as for the case of the classical X-ray transmission tomography), for d ≥ 2, S T S 1 (Y ) uniquely determines a Y and f Y , ( 13 ) where Y is an arbitrary two-dimensional plane in R d , T S 1 (Y ) is the set of all oriented straight lines in Y . In addition, this determination can be implemented via the following inversion formula [N2]: f = P -1 a g, where g = P a f, ( 14 ) P -1 a g(x) = 1 4π ∫ S θ ⊥ ∂ x ( exp [-Da(x, -θ)]g θ (θ ⊥ x) ) dθ, gθ (s) = exp (A θ (s)) cos (B θ (s))H(exp (A θ ) cos (B θ )g θ )(s)+ exp (A θ (s)) sin (B θ (s))H(exp (A θ ) sin (B θ )g θ )(s), A θ (s) = (1/2)P 0 a(sθ ⊥ , θ), B θ (s) = HA θ (s), g θ (s) = g(sθ ⊥ , θ), ( 15 ) Hu(s) = 1 π p.v. ∫ R u(t) s -t dt, ( 16 ) x ∈ R 2 , θ ⊥ = (-θ 2 , θ 1 ) for θ = (θ 1 , θ 2 ) ∈ S 1 , s ∈ R. Remark 1. The assumptions on a and f in ( 13)-( 15) can be specified as follows: a, f are real -valued, a, f ∈ L ∞ (R 2 ), a, f = O(|x| -σ ) as |x| → ∞ for some σ > 1, ( 17 ) where Y is identified with R 2 in (13). Remark 2. For a ≡ 0, formulas ( 14), ( 15) are reduced to the classical Radon inversion formula for P defined in (7) for d = 2. For more information on SPECT and for more results on the attenuated ray transform P a we refer to [Na], [START_REF] Kunyansky | Generalized and attenuated Radon transforms: restorative approach to the numerical inversion[END_REF], [START_REF] Kunyansky | A new SPECT reconstruction algorithm based on the Novikov's explicit inversion formula[END_REF], [N2], [N3], [START_REF] Guillement | Optimized analytic reconstruction for SPECT[END_REF], [START_REF] Guillement | Inversion of weighted Radon transforms via finite Fourier series weight approximations[END_REF] and references therein. Tomographies related with weighted Radon transforms We consider the weighted Radon transforms P W (along oriented straight lines) defined by the formula P W f (x, θ) = ∫ R W (x + sθ, θ)f (x + sθ)ds, (x, θ) ∈ T S d-1 , ( 18 ) where W = W (x, θ) is the weight, f = f (x) is a test function. 
The assumptions on W can be specified as follows: W ∈ L ∞ (R d × S d-1 ), W = W , 0 < c 0 ≤ W ≤ c 1 , ( 19 ) lim s→±∞ W (x + sθ, θ) = w ± (x, θ), (x, θ) ∈ T S d-1 . If W = 1, then P W is reduced to the classical X-ray transform P defined in (7). If W (x, θ) = exp ( -Da(x, θ) ) , ( 20 ) where Da is defined by ( 12), then P W is reduced to the classical attenuated ray transform P a defined by ( 11), ( 12). Transforms P W with some other weights also arise in applications. For example, such transforms arise in positron emission tomography, optical tomography, fluorescence tomography; see [Na], [Ba], [MP]. The transforms P W f arise in the framework of the following reduction of the nonabelian Radon transform S: n = 2, A 11 = θ∂ x ln W (x, θ), A 12 = f (x), A 21 = 0, A 22 = 0, ( 21 ) S 11 = w - w + , S 12 = - 1 w + P W f, S 21 = 0, S 22 = 1. ( 22 ) In connection with P W and with the reduction ( 21), ( 22) we consider the following version of Problem 1, where we assume that W is known. Problem 2. Given P W f and W , find f . General uniqueness and reconstruction results on Problem 2 were given, in particular, in [LB], [Be], [MQ], [F], [BQ], [START_REF] Kunyansky | Generalized and attenuated Radon transforms: restorative approach to the numerical inversion[END_REF], [N7], [START_REF] Guillement | Inversion of weighted Radon transforms via finite Fourier series weight approximations[END_REF], [I]. For some W exact and simultaneously explicit formulas for solving Problem 2 are also known, see [R], [N1], [BS], [Gi], [N6] and references therein. Note that Problem 2 is nonoverdetermined for d = 2 and is overdetermined for d ≥ 3. Indeed, P W f is a function on T S d-1 , whereas f is a function on R d and dim T S d-1 = 2d -2, dim R d = d, 2d -2 = d for d = 2, 2d -2 > d for d ≥ 3. Nevertheless, Problem 2 is not uniquely solvable, in general, even for d ≥ 3. An example of non-uniqueness for Problem 2 for d = 2 was constructed in [Bo]. In this example W ∈ C ∞ (R 2 × S 1 ), f ∈ C ∞ 0 (R 2 ). An example of non-uniqueness for Problem 2 for d ≥ 3 was constructed in [GN]. In this example W ∈ C α (R d × S d-1 ) for some α > 0, f ∈ C ∞ 0 (R d ). In these examples assumptions ( 19) are also fulfilled. The notation C ∞ 0 stands for infinitely smooth compactly supported functions. For more information on the theory and applications of the transforms P W we refer to [LB], [Be], [MQ], [F], [Na], [BQ], [Bo], [START_REF] Kunyansky | Generalized and attenuated Radon transforms: restorative approach to the numerical inversion[END_REF], [N7], [START_REF] Guillement | Inversion of weighted Radon transforms via finite Fourier series weight approximations[END_REF], [I], [GN] and references therein. Neutron polarization tomography (NPT) In NPT one considers a medium with spatially varying magnetic field. The polarization data consist in changes of the polarization (spin) between incoming and outcoming neutrons. The basic problem of NPT consists in finding the magnetic field from the polarization data. See, e.g., [DMKHSB], [LDS] and references therein. Problem 1 arises as a problem of NPT in the framework of the following reduction: n = 3, A 11 = A 22 = A 33 = 0, A 12 = -A 21 = -g B 3 (x), A 13 = -A 31 = g B 2 (x), A 23 = -A 32 = -g B 1 (x), ( 23 ) where B = (B 1 , B 2 , B 3 ) is the magnetic field, g is the gyromagnetic ratio of the neutron; in addition, S for equation (1) with A given by ( 23) describes the polarization data (but, in general, S can not be given explicitly in this case). 
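Since S cannot be written in closed form in this case, in practice it can be approximated by integrating equation (1) along each ray. The sketch below does this for the NPT generator (23); the exponential mid-point integrator, the function names and the toy magnetic field are our own illustrative choices. Because A is skew-symmetric, each step factor is a rotation, so the computed S stays (numerically) in SO(3).

```python
import numpy as np
from scipy.linalg import expm

def npt_generator(B, g=1.0):
    """3x3 matrix A of eq. (23) built from the magnetic field B = (B1, B2, B3)."""
    B1, B2, B3 = B
    return g * np.array([[0.0, -B3,  B2],
                         [ B3,  0.0, -B1],
                         [-B2,  B1,  0.0]])

def scattering_matrix(B_field, x, theta, s_max=10.0, n_steps=2000, g=1.0):
    """Approximate S(x, theta) of eq. (2): integrate d(psi)/ds = -A(x + s*theta) psi
    along the ray, with psi = I at s = -s_max, using per-step matrix exponentials."""
    x, theta = np.asarray(x, float), np.asarray(theta, float)
    s_grid = np.linspace(-s_max, s_max, n_steps + 1)
    ds = s_grid[1] - s_grid[0]
    S = np.eye(3)
    for s_mid in 0.5 * (s_grid[:-1] + s_grid[1:]):
        A = npt_generator(B_field(x + s_mid * theta), g)
        S = expm(-A * ds) @ S          # later points along the ray act on the left
    return S

# Toy field localized near the origin, pointing along e3; ray through (0, 0.3, 0)
B_field = lambda p: np.array([0.0, 0.0, np.exp(-np.dot(p, p))])
print(np.round(scattering_matrix(B_field, [0.0, 0.3, 0.0], [1.0, 0.0, 0.0]), 4))
```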
In this case S on T S 2 uniquely determines B on R 3 as a corollary of items (1), (2) of Theorem 6.1 of [N1]. In addition, the related 3D -reconstruction is based on local 2Dreconstructions based on solving Riemann conjugation problems (going back to [MZ]) and on the layer by layer reconstruction approach. The final 3D uniqueness and reconstruction results are global. For the related 2D global uniqueness, see [E]. Electromagnetic polarization tomography (EPT) In EPT one considers a medium with zero conductivity, unit magnetic permeability, and small anisotropic perturbation of some known (for example, uniform) dielectric permeability. The polarization data consist in changes of the polarization between incoming and outcoming monochromatic electromagnetic waves. The basic problem of EPT consists in finding the anisotropic perturbation of the dielectric permeability from the polarization data. See [START_REF] Sharafutdinov | Integral Geometry of Tensor Fields[END_REF], [NS], [START_REF] Sharafutdinov | The problem of polarization tomography[END_REF], [N5] and references therein. Problem 1 arises as a problem of EPT (with uniform background dielectric permeability) in the framework of the following reduction (see [START_REF] Sharafutdinov | Integral Geometry of Tensor Fields[END_REF], [NS]): n = 3, A(x, θ) = -π θ f (x)π θ , x ∈ R d , θ ∈ S d-1 , ( 24 ) where π θ ∈ M (3, R), π θ,ij = δ ij -θ i θ j , f takes values in M (3, C ) and describes the anisotropic perturbation of the dielectric permeability tensor; by some physical arguments f must be skew-Hermition, f ij = -fji ; in addition, S for equation (1) with A given by ( 24) describes the polarization data (but, in general, S can not be given explicitly in this case). In this case S on T S 2 does not determine f on R 3 uniquely, in general, (in spite of the fact that dim T S 2 = 4 > dim R 3 = 3), in particular, if f 11 = f 22 = f 33 ≡ 0, ( 25 ) f 12 (x) = ∂u(x)/∂x 3 , f 13 (x) = -∂u(x)/∂x 2 , f 23 (x) = ∂u(x)/∂x 1 , f 21 = -f 12 , f 31 = -f 13 , f 32 = -f 23 , where u is a real smooth compactly supported function, then S ≡ I on T S 2 ; see [NS]. On the other hand, a very natural additional physical assumption is that f is an imaginary-valued symmetric matrix: f = -f , f ij = f ji . According to [N4], in this case S on Λ uniquely determines f, at least, if f is sufficiently small, ( 26 ) where Λ is an appropriate 3d subset of T S 2 , for example, Λ = ∪ 6 i=1 Γ ω i , Γ ω i = {γ = (x, θ) ∈ T S 2 : θω i = 0}, ( 27 ) ω 1 = e 1 , ω 2 = e 2 , ω 3 = e 3 , ω 4 = (e 1 + e 2 )/ √ 2, ω 5 = (e 1 + e 3 )/ √ 2, ω 6 = (e 2 + e 3 )/ √ 2, where e 1 , e 2 , e 3 is the basis in R 3 . In addition, this determination is based on a convergent iterative reconstruction algorithm. For more information on EPT and for more results on related non-abelian ray transforms we refer to [START_REF] Sharafutdinov | Integral Geometry of Tensor Fields[END_REF], [NS], [START_REF] Sharafutdinov | The problem of polarization tomography[END_REF], [N5] and references therein. Inverse connection problem Let A(x, θ) = a 0 (x) + d ∑ j=1 θ j a j (x), x ∈ R d , θ = (θ 1 , . . . , θ d ) ∈ S d-1 , ( 28 ) where a j are sufficiently regular M (n, C)-valued functions on R d with sufficient decay as |x| → ∞, j = 0, 1, . . . , d. Then Problem 1 arises in differential geometry. In particular, for a 0 ≡ 0 equation (1) with A given by ( 28) describes the parallel transport of the fibre in the trivial vector bundle with the base R d and the fibre C n and with the connection a = (a 1 , . . . 
, a d ) along the Euclidean geodesics in R d ; in addition, S(γ) for fixed γ ∈ T S d-1 is the operator of this parallel transport along γ (from -∞ to +∞ on γ); see [START_REF] Sharafutdinov | On an inverse problem of determining a connection on a vector bundle[END_REF], [N1]. Besides, for a 0 ̸ ≡ 0 equation (1) with A given by ( 28) describes the parallel transport of the fibre in the trivial vector bundle with the base R d+1 1,d and the fibre C n and with the connection a = (a 0 , a 1 , . . . , a d ) (independent of time) along the light rays in the Minkowski space R d+1 1,d ; in addition, S(γ) for fixed γ = (x, θ) ∈ T S d-1 is the operator of this parallel transport along the light rays l(γ, τ ) = {(t, y) ∈ R d+1 : t = 2 -1/2 s + τ, y = 2 -1/2 sθ + x, s ∈ R}, τ ∈ R, with the orientation given by the vector 2 -1/2 (1, θ) (from -∞ to +∞ on l(γ, τ ) for an arbitrary τ ∈ R); see [N1]. In these cases Problem 1 is an inverse connection problem. The determination in this problem is considered modulo gauge transforms a = (a 0 , a 1 , . . . , a d ) → a ′ = (a ′ 0 , a ′ 1 , . . . , a ′ d ), a ′ 0 = g -1 a 0 g, a ′ i = g -1 a i g + g -1 ∂ i g, ∂ i g(x) = ∂g(x) ∂x i , i = 1, . . . , d, (29) where g is a sufficiently regular GL(n, C)valued function on R d and g → I sufficiently fast as |x| → ∞. Global uniqueness and reconstruction results on this inverse connection problem in dimension d ≥ 3 were given for the first time in [N1]. The related reconstruction is based on local 2D-reconstructions based on solving Riemann conjugation problems (going back to [MZ]) and on the layer by layer reconstruction approach. In addition, counter examples to the global uniqueness for the aforementioned inverse connection problem for a 0 ≡ 0 in dimension d = 2 were also given for the first time in [N1]. These counter examples use the soliton solutions constructed in [Wa], [V] for equation (38) mentioned below. In addition, for the global uniqueness in dimension d = 2 for the case of compactly supported a = (a 0 , a 1 , . . . , a d ), see [E]. Note that [N1] was stimulated by [START_REF] Sharafutdinov | On an inverse problem of determining a connection on a vector bundle[END_REF], where [START_REF] Sharafutdinov | On an inverse problem of determining a connection on a vector bundle[END_REF] was preceded by [We]. For more information on the inverse connection problem we refer to [MZ], [START_REF] Sharafutdinov | On an inverse problem of determining a connection on a vector bundle[END_REF], [N1], [E], [N4], [P], [GPSU] and references therein. Vector field tomography The inverse connection problem of Section 7 arises as a problem of the vector ultrasonic tomography in the framework of the following reduction: n = 2, a 0 = ( a(x) 0 0 0 ) , a j = ( 0 u j (x) 0 0 ) , x ∈ R d , j = 1, . . . , d, (30) S 11 = exp[-P 0 a], S 12 = exp[-P a u], S 21 = 0, S 22 = 1, where P 0 a is defined as in ( 7), (10), P a u(γ) = ∫ R exp[-Da(x + sθ, θ)]θu(x + sθ)ds, γ = (x, θ) ∈ T S d-1 , (32) θu = d ∑ j=1 θ j u j , Da is defined as in ( 12), a is the attenuation coefficient, u = (u 1 , . . . , u d ) is the flow velocity, P a u is the attenuated vectorial Radon transform of u along oriented straight lines. The transform P a u for a = 0 is the standard vectorial Radon transform of u and is related to time-of-flight measurements or to Doppler measurements; P a u for a ̸ ≡ 0 is related to the attenuated Doppler measurements; see [Sch] and references therein. 
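For completeness, a minimal numerical sketch of the attenuated vectorial transform (32), together with the divergent beam transform (12) it relies on, is given below; the truncation of both integrals, the simple Riemann sums and the toy attenuation/flow pair are our own illustrative choices.

```python
import numpy as np

def divergent_beam(a, y, theta, s_max=10.0, n_steps=500):
    """Da(y, theta) = int_0^inf a(y + s*theta) ds, truncated at s_max (Riemann sum)."""
    s, ds = np.linspace(0.0, s_max, n_steps, retstep=True)
    return float(sum(a(y + si * theta) for si in s) * ds)

def attenuated_vectorial_transform(a, u, x, theta, s_max=10.0, n_steps=500):
    """P_a u(gamma) = int_R exp(-Da(x + s*theta, theta)) * <theta, u(x + s*theta)> ds."""
    x, theta = np.asarray(x, float), np.asarray(theta, float)
    s, ds = np.linspace(-s_max, s_max, n_steps, retstep=True)
    total = 0.0
    for si in s:
        p = x + si * theta
        total += np.exp(-divergent_beam(a, p, theta, s_max)) * float(np.dot(theta, u(p)))
    return total * ds

# Toy attenuation and flow field in d = 2
a = lambda p: 0.5 * np.exp(-np.dot(p, p))
u = lambda p: np.array([np.exp(-np.dot(p, p)), 0.0])
print(attenuated_vectorial_transform(a, u, x=[0.0, 0.2], theta=[1.0, 0.0]))
```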
In connection with the mathematics of vector field tomography we refer to [GGG], [START_REF] Sharafutdinov | Integral Geometry of Tensor Fields[END_REF], [N1], [START_REF] Sharafutdinov | Slice-by-slice reconstruction algorithm for vector tomography with incomplete data[END_REF], [KB], [Sch] and references therein. Theory of the Yang-Mills fields A. The inverse connection problem of Section 7 for a 0 ≡ 0 arises, in particular, in the framework of studies on inverse problems for the Schrödinger equation d ∑ j=1 - ( ∂ ∂x j + a j (x) ) 2 ψ + v(x)ψ = Eψ (33) in the Yang-Mills field a = (a 1 , . . . , a d ) at high energies E (i.e., for E → +∞); see [N1] and references therein. The reason is that for ψ of the form ψ = e isθx (µ 0 (x, θ) + O(s -1 )), x ∈ R d , θ ∈ S d-1 , s = √ E → +∞, (34) equation (33) in its leading part reduces to equation (1) with µ 0 in place of ψ, where A is given by (28) with a 0 ≡ 0. B. The inverse connection problem of Section 7 for d = 2 arises, in particular, in the framework of integrating the self-dual Yang-Mills equations; see [MZ], [Wa], [V], [N1] and references therein. Actually, Problem 1 for A(x, θ) = a 0 (x) + θ 1 a 1 (x) + θ 2 a 2 (x), x = (x 1 , x 2 ) ∈ R 2 , θ = (θ 1 , θ 2 ) ∈ S 1 , (35) with M (n, C)-valued a 0 , a 1 , a 2 (and some linear relation between a 1 and a 2 ) was considered for the first time in [MZ] in the framework of the integration, by the inverse scattering method, of the evolution equation (χ -1 χ t ) t = (χ -1 χ z ) z̄ , (36) where the subscripts t, z, z̄ in (36) denote partial derivatives with respect to t, z = x 1 + ix 2 and z̄ = x 1 - ix 2 , and where χ is an SU (n)-valued function. Equation (36) is a (2+1)-dimensional reduction of the self-dual Yang-Mills equations in 2+2 dimensions. To our knowledge, the terminology "non-abelian Radon transform" was introduced precisely in [MZ], where it was used for S in (2) corresponding to the aforementioned A of (35). The inverse scattering transform in [MZ] is based on Riemann conjugation problems. Related analysis was significantly developed, in particular, in [N1]. In addition, Problem 1 for A(x, θ) = θ 2 a 2 (x), x = (x 1 , x 2 ) ∈ R 2 , θ = (θ 1 , θ 2 ) ∈ S 1 , (37) with M (n, C)-valued a 2 arises in the framework of the inverse scattering method for the equation (J -1 J x 1 ) x 1 - (J -1 J x 2 ) t = 0, (38) where the subscripts t, x 1 , x 2 in (38) denote partial derivatives with respect to t, x 1 , x 2 , and where J is an SU(n)-valued function; see [Wa], [V], [N1], at least for n = 2. Equation (38) is also a (2+1)-dimensional reduction of the self-dual Yang-Mills equations in 2+2 dimensions. This reduction is different from (36). The aforementioned counterexamples to the global uniqueness for the inverse connection problem of Section 7 for a 0 ≡ 0 in dimension d = 2 were constructed in [N1] using results of [Wa] and subsequent results of [V] concerning soliton solutions for equation (38).
01772652
en
[ "spi.signal" ]
2024/03/05 22:32:18
2006
https://hal.science/hal-01772652/file/IWAENC2006.pdf
Abdeldjalil Aissa El Bey email: [email protected] Hicham Bousbia-Salah Karim Abed-Meraim Yves Grenier email: [email protected] Yves Grenier Audio A Aïssa-El-Bey AUDIO SOURCE SEPARATION USING SPARSITY niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. INTRODUCTION This paper deals with blind source separation (BSS). The blind context means that neither the sources nor the mixing matrix are known. The goal of BSS is to recover the sources up to scaling and permutation by, only, using the mixtures. Blind source separation (BSS) has applications in several areas, such as communication, speech / audio processing, and biomedical engineering [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF]. A fundamental and necessary assumption of BSS is that the sources are statistically independent and thus are often separated using higher-order statistical information [START_REF] Cardoso | Blind signal separation: statistical principles[END_REF]. If some information about the sources is available at hand, such as temporal coherency [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], source nonstationarity [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF], or source cyclostationarity [START_REF] Abed-Meraim | Blind source separation using second order cyclostationary statistics[END_REF], then one can remain in the secondorder statistical scenario. In the case of non-stationary signals (including audio signals), certain solutions using time-frequency analysis of the observations exist [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. Other solutions use the statistical independence of the sources assuming a local stationarity to solve the BSS problem [START_REF] Pham | Blind separation of instantaneous mixtures of non stationary sources[END_REF]. This is a strong assumption that is not always verified [START_REF] Smith | An analysis of the limitations of blind signal separation application with speech[END_REF]. To avoid this problem, we propose a new approach that handles the general linear instantaneous model (possibly noisy) by using the sparsity assumption of the sources in the time domain. The use of sparsity to handle this model, has arisen in several papers in the area of source separation [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF][START_REF] Zibulevsky | Sparse source separation with relative Newton method[END_REF]. We first present a sparsity contrast function for BSS. Then, in order to achieve BSS, we optimize the considered contrast function using an iterative algorithm based on the gradient technique. In the following section, we discuss the data model that formulates our problem. Next, we detail the different steps of the proposed algorithm. In Section 4, some simulations are undertaken to validate our algorithm and to show the usefulness of the proposed method. DATA MODEL Assume that N audio signals impinge on an array of M ≥ N sensors. The measured array output is a weighted superposition of the signals, corrupted by additive noise, i.e. x(t) = As(t) + w(t) t = 0, . . . 
, T -1 (1) where s(t) = [s 1 (t), • • • , s N (t)] T is the N × 1 sparse source vector, w(t) = [w 1 (t), • • • , w M (t)] T is the M × 1 gaussian complex noise vector, A is the M × N full column rank mixing matrix (i.e., M ≥ N ), and the superscript T denotes the transpose operator. The purpose of blind source separation is to find a separating matrix, i.e. a N × M matrix such that s(t) = Bx(t) is an estimate of the source signals. Before proceeding, note that complete blind identification of separating matrix B (or the equivalently mixing matrix A) is impossible in this context, because the exchange of a fixed scalar between the source signal and the corresponding column of A leaves the observations unaffected. Also note that the numbering of the signals is immaterial. It follows that the best that can be done is to determine B up to a permutation and scalar shifts of its columns, i.e., B is a separating matrix iff: Bx(t) = PΛs(t) (2) where P is a permutation matrix and Λ a non-singular diagonal matrix. ITERATIVE SPARSE ALGORITHM In this section, we propose an iterative algorithm for the separation of sparse audio signals ISBS for Iterative Sparse Blind Separation. As well known, audio signals are characterized by their sparsity property in the time domain [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF][START_REF] Zibulevsky | Sparse source separation with relative Newton method[END_REF] which is measured by their p norm where 0 ≤ p ≤ 1. This norm represents how the "energy" is concentrated on a small number of coefficients. Based on this, one can define the following sparsity contrast function, G p (s) = 1 N N i=1 [J p (s i )] 1 p (3) where J p (s i ) = 1 T T -1 t=0 |s i (t)| p (4) The algorithm finds a separating matrix B such as, B = arg min B {G p (B)} (5) where G p (B) def = G p (z) (6) and z(t) = Bx(t) represents the estimated sources. The approach we choose to solve ( 5) is inspired from [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF]. It is a block technique based on the processing of T received samples and consists in searching the minimum of the sample version of [START_REF] Abed-Meraim | Blind source separation using second order cyclostationary statistics[END_REF]. Solutions are obtained iteratively in the form: B (k+1) = (I + (k) )B (k) (7) z (k+1) (t) = (I + (k) )z (k) (t) (8) where I denotes the identity matrix. At iteration k, a matrix (k) is determined from a local linearization of G p (Bx(t)). It is an approximate Newton technique with the benefit that (k) can be very simply computed (no Hessian inversion) under the additional assumption that B (k) is close to a separating matrix. This procedure is illustrated in the following steps: At the (k + 1) th iteration, the proposed criterion (4) can be developed as follows: J p (z (k+1) i ) = 1 T T -1 t=0 z (k) i (t) + N j=1 (k) ij z (k) j (t) p = 1 T T -1 t=0 |z (k) i (t)| p 1 + N j=1 (k) ij z (k) j (t) z (k) i (t) p Under the assumption that B (k) is close to a separating matrix, we have | (k) ij | 1 and thus, a first order approximation of J p (z (k+1) i ) is given by: J p (z (k+1) i ) ≈ 1 T T -1 t=0 |z (k) i (t)| p 1 + p N j=1 e( (k) ij ) e z (k) j (t) z (k) i (t) -m( (k) ij ) m z (k) j (t) z (k) i (t) (9) Table 1: Iterative Sparse Blind Separation (ISBS) algorithm 1. Initialize B (1) randomly (z (1) (t) = B (1) x(t)). For k = 1, • • • , K, compute R (k) by (12). 3. Update the separation matrix B (k+1) by (15). 4. 
Update the source estimate (16). thus, J p (z (k+1) i ) ≈ 1 T T -1 t=0 |z (k) i (t)| p + p N j=1 e( (k) ij ) e |z (k) i (t)| p-1 e -φ (k) i (t) z (k) j (t) -m( (k) ij ) m |z (k) i (t)| p-1 e -φ (k) i (t) z (k) j (t) (10 ) where e(x) and m(x) denote the real and imaginary parts of x and φ (k) i (t) is the argument of the complex num- ber z (k) i (t). Using equation (3), minimization of the above criterion [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF] is similar to minimization of G p (z (k+1) ). Equation ( 3) can be rewritten in more compact form as: G p I + (k) = G p (I) + e T r (k) R (k)H (11) where (•) denotes the conjugate of (•) and the ij th entry of matrix R (k) is given by: R (k) ij = 1 T T -1 t=0 |z (k) i (t)| p-1 e -φ (k) i (t) z (k) j (t) (12) and T r is the matrix trace operator. Using a gradient technique, (k) can be written as: (k) = -µR (k) (13) where µ > 0 is the gradient step. Replacing (13) into (11) leads to, G p I + (k) = G p (I) -µ R (k) 2 (14) So µ controls the decrement of the criterion. Hence, at the (k + 1) th iteration, we have B (k+1) = (I -µR (k) )B (k) (15) z (k+1) = (I -µR (k) )z (k) ( 16 ) This algorithm is summarized in Table 1. SIMULATION RESULTS We present here some numerical simulations to evaluate the performance of our algorithm. We consider an array of M = 5 sensors with half wavelength spacing receiving two audio signals in the presence of stationary complex temporally white noise of covariance σ 2 I (σ 2 being the noise power). 10000 samples are used with a sampling frequency of 8Khz. The sources arrive from the directions θ 1 = 30 and θ 2 = 45 degree. In order to evaluate the performance, the separation quality is measured using two different criteria, the first one is the mean rejection level criterion [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] defined as: Iperf def = p =q E |(BA) pq | 2 ρ q E (|(BA) pp | 2 ) ρ p ( 17 ) where ρ i = E(|s i (t)| 2 ) is the i th source power evaluated here as 1 T T -1 t=0 |s i (t)| 2 . The second is the normalized mean square error (NMSE) of the sources defined as: N M SE i def = 1 N r Nr r=1 min α α s i,r -s i 2 s i 2 (18) N M SE i = 1 N r Nr r=1 1 - s i,r s H i s i,r s i 2 (19) N M SE = 1 N N i=1 N M SE i . ( 20 ) where s i def = [s i (0), . . . , s i (T -1)] and s i,r is defined similarly and represents the r th estimate of source s i , α is a scalar factor that compensate for the scale indeterminacy of the BSS problem and N r is the number of Monte-Carlo runs. Both criteria are estimated over N r = 200 runs. Figure 1 represents the two original sources (s 1 (t), s 2 (t)) and the recovered ones (z 1 (t), z 2 (t)) by the proposed algorithm in a noiseless case. In Figure 2, the mean rejection level is plotted versus the SNR for the proposed algorithm and the algorithm SOBI [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] which is considered as one of the most performing in separating audio sources. We used SOBI with 6 correlation matrices of respective delays τ = 1, . . . , 6. It is clearly shown that our algorithm (ISBS) performs better in terms of the mean rejection level especially for high SNR. One can observe in Figure 3, that we reach the same conclusion for the N M SE. Figure 4 compares the mean rejection level for ISBS and SOBI when the number of sensors increases. For 2 sensors, both algorithms perform equally. 
When the number of sensors is greater, ISBS has a much lower mean rejection level than SOBI. Figure 5 shows the mean rejection level against the sample size for ISBS and SOBI. When the sample size is small, SOBI outperforms the proposed algorithm ISBS, whereas ISBS has a much lower mean rejection level when the sample size is larger. This can be explained by the sample size: as it increases, the signals present more sparsity, which gives an advantage to ISBS.
DISCUSSION
The proposed algorithm outperforms, in terms of mean rejection level and NMSE, other algorithms that deal with separation from instantaneous mixtures using source independence. It is mostly dedicated to sparse sources in the time domain. Among its other advantages, the algorithm ISBS shows a low computational complexity and thus can be easily implemented. Furthermore, its flexibility allows us to extend the method to the adaptive case. Nevertheless, the proposed algorithm presents a relative weakness due to the well-known disadvantages of gradient techniques, such as the choice of the gradient step µ, on which the convergence speed depends, and the problem of local minima.
CONCLUSION
This paper presents a blind source separation method for sparse sources in the time domain. A sparse contrast function is introduced and an iterative algorithm based on a gradient technique is proposed to minimize it and perform BSS. Numerical simulations have been performed to demonstrate the usefulness of the method. They showed good performance in terms of mean rejection level and NMSE compared to another separation technique (SOBI).
Figure 1: Blind source separation example for 2 audio sources and 5 sensors: top, the two original source signals; bottom, the two sources estimated by our algorithm.
Figure 2: Mean Rejection Level versus the SNR for 2 audio sources and 5 sensors: comparison between SOBI and the proposed algorithm.
Figure 3: NMSE versus the SNR for 2 audio sources and 5 sensors: comparison between SOBI and the proposed algorithm.
Figure 4: Mean Rejection Level versus the number of sensors M for 2 audio sources for SNR = 10dB and 30dB.
Figure 5: Mean Rejection Level versus the sample size T for 2 audio sources for SNR = 30dB.
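Although the paper gives no code, the update (12)-(16) summarized in Table 1 is short enough to sketch. The following NumPy fragment is only an illustration, not the authors' implementation: the values of the sparsity exponent p, the gradient step µ and the iteration count are arbitrary, and the final row renormalization (added here to keep the outputs at unit power, a safeguard the paper does not mention) is an assumption.

```python
import numpy as np

def isbs(x, n_sources, p=0.5, mu=0.1, n_iter=200, eps=1e-12):
    """Iterative Sparse Blind Separation (sketch of the update (12)-(16)).

    x : (M, T) complex array of observations. Returns (B, z) with z = B x.
    """
    M, T = x.shape
    rng = np.random.default_rng(0)
    # Step 1 of Table 1: random initialization of the separating matrix.
    B = rng.standard_normal((n_sources, M)) + 1j * rng.standard_normal((n_sources, M))
    z = B @ x
    for _ in range(n_iter):
        mod = np.abs(z) + eps
        # W_i(t) = |z_i(t)|^(p-1) * exp(-j*phi_i(t)), phi_i(t) being the argument of z_i(t).
        W = mod ** (p - 1) * np.conj(z) / mod
        # Step 2: R_ij = (1/T) * sum_t W_i(t) * z_j(t)   (Eq. 12)
        R = (W @ z.T) / T
        # Steps 3-4: B <- (I - mu R) B and z <- (I - mu R) z   (Eqs. 15-16)
        G = np.eye(n_sources) - mu * R
        B = G @ B
        z = G @ z
        # Renormalization (not in the paper): keep unit-power outputs so the l_p
        # criterion is not minimized by simply shrinking z.
        scale = np.sqrt(np.mean(np.abs(z) ** 2, axis=1, keepdims=True)) + eps
        z /= scale
        B /= scale
    return B, z
```

A stopping rule based on the decrement ||R^(k)||^2 of equation (14) could equally replace the fixed iteration count used in this sketch.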
01772669
en
[ "spi.meca.mema", "spi.mat" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01772669/file/Campana_JCOMA_2018_post%20print.pdf
Charlotte Campana Romain Léger Rodolphe Sonnier Laurent Ferry Romain Leger email: [email protected] Patrick Ienny Effect of post curing temperature on mechanical properties of a flax fiber reinforced epoxy composite Keywords: A. Biocomposite, B. Post-curing, C. Mechanical properties, D. Flax fibers 1 INTRODUCTION Nowadays, composites reinforced with synthetics fibers such as carbon or glass fibers (CFRP or GFRP) are commonly used in various industrial fields from automotive to aerospace in order to reduce the weight of the final pieces [START_REF] Benzarti | Understanding the durability of advanced fibre-reinforced polymer (FRP) composites for structural applications[END_REF]. But glass and carbon fibers present some drawbacks (energetically expensive, made of nonrenewable resources, or difficult to recycle) [START_REF] Pickering | Recycling technologies for thermoset composite materials-current status. 2nd[END_REF][START_REF] Reynolds | An introduction to composites recycling[END_REF]. An interesting alternative to glass fibers is natural fibers such as flax, hemp, or wood fibers. Indeed, natural fibers are carbon neutral, come from a renewable source and can easily be biodegraded [START_REF] Joshi | Are natural fiber composites environmentally superior to glass fiber reinforced composites? Compos Part A[END_REF]. They also have a lower density leading to specific properties similar to those of glass fibers [START_REF] Wambua | Natural fibres: Can they replace glass in fibre reinforced plastics?[END_REF][START_REF] Mohanty | Sustainable Bio-Composites from renewable resources: Opportunities and challenges in the green materials world[END_REF]. But natural fibers also present some drawbacks: a large variability of mechanical and physico-chemical properties depending on the period of harvesting, the stem location and the extraction method, their low durability because of their hydrophilicity, their sensitivity to high temperature leading to processing difficulties especially during post-curing of thermoset composites [START_REF] Haag | Influence of flax fibre variety and year-to-year variability on composite properties[END_REF][START_REF] Charlet | Characteristics of Hermès flax fibres as a function of their location in the stem and properties of the derived unidirectional composites[END_REF][START_REF] Baley | Influence of the absorbed water on the tensile strength of flax fibers[END_REF][START_REF] Alix | Effect of chemical treatments on water sorption and mechanical properties of flax fibres[END_REF][START_REF] Célino | The hygroscopic behavior of plant fibers: a review[END_REF][START_REF] Gassan | Thermal degradation of flax and jute fibers[END_REF][START_REF] Placet | Characterization of the thermo-mechanical behaviour of Hemp fibres intended for the manufacturing of high performance composites[END_REF]. This sensibility to high temperature raises several questions in the industrial field: Can the same processing protocol be used for biocomposites? What happens if high temperatures are applied during process? Indeed, GFRP are generally cured at a "low" temperature (between 60 and 100°C) and post cured at higher temperature around 150°C or more to complete the curing and reach the highest crosslinking rate and glass transition temperature possible [START_REF] Cook | Ageing and yielding in model epoxy thermosets[END_REF][START_REF] Kumar | Effect of post-curing on thermal and mechanical behavior of GFRP composites[END_REF]. 
A post-curing is also done to improve the modulus and strength of both the polymer and the composite and reduce the residual stresses. However, a post-cure can also lead to the thermo-oxidation of the resin. Such curing and post-curing conditions could degrade natural fibers. Natural fibers mechanical properties are mainly dependent on their water content [START_REF] Baley | Influence of the absorbed water on the tensile strength of flax fibers[END_REF][START_REF] Masseteau | An evaluation of the effects of moisture content on the modulus of elasticity of a unidirectional flax fiber composite[END_REF][START_REF] Placet | Influence of environmental relative humidity on the tensile and rotational behaviour of hemp fibres[END_REF][START_REF] Thuault | Effects of the hygrothermal environment on the mechanical properties of flax fibres[END_REF]. Post-curing at high temperature is likely to change the fiber water content and thus modify their mechanical behavior. Müssig and Haag mentioned that exposure of flax fibers at 120°C leads to loss of moisture and degradation of waxes [START_REF] Müssig | The use of flax fibres as reinforcements in composites[END_REF]. Placet showed that the mechanical properties (rigidity and fatigue behavior) of natural fibers are affected by thermal treatment beyond 150°C [START_REF] Placet | Characterization of the thermo-mechanical behaviour of Hemp fibres intended for the manufacturing of high performance composites[END_REF]. It was assumed to be related to the degradation of the cellular walls. Gassan and Bledski showed that a thermal exposure between 170 and 210°C leads to a significant drop in tenacity up to 70% for 2 hours at 210°C [START_REF] Gassan | Thermal degradation of flax and jute fibers[END_REF]. The degradation of mechanical properties could also occur at lower temperature during drying for instance [START_REF] Baley | Influence of drying on the mechanical behaviour of flax fibres and their unidirectional composites[END_REF]. On the contrary Xue and al. reported that a thermal exposure between 170 and 180°C does not impact significantly the tensile properties of kenaf bast fibers if the exposure lasts less than one hour [START_REF] Xue | Temperature and loading rate effects on tensile properties of kenaf bast fiber bundles and composites[END_REF]. It appears from this quick overview that achieving a complete curing of biocomposites without damaging fibers is very challenging. But, most of the time, in order to substitute glass fibers by natural fibers in structural composites, a traditional curing and post-curing at high temperature, recommended in the resin datasheet, is applied and leads to composite with degraded mechanical properties. The main objective of this study is to assess the impact of such traditional processing upon the properties of a biocomposite based on epoxy resin and unidirectional flax fabrics. The motivation is to identify which component (resin, reinforcement or interphase) is the most sensitive to post-curing at high temperature thus determining the final functional properties of the materials. EXPERIMENTAL Materials The epoxy resin (DER 332) was provided by Dow Chemicals (Midland, USA) with an epoxy equivalent weight of 170g/eq. Isophorone diamine (IPDA) from Sigma Aldrich (Saint-Louis, USA) with a functionality of 4 was used as hardener. The mixing of resin and hardener was carried out at a stoichiometric epoxy/amine ratio (80%wt DER 332 and 20%wt IPDA). 
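As a side note, the 80/20 weight ratio quoted above is consistent with the stated epoxy equivalent weight and amine functionality; the small check below assumes a molar mass of about 170.3 g/mol for IPDA, a value not given in the paper.

```python
# Stoichiometric mix check for DER 332 / IPDA.
eew = 170.0                  # epoxy equivalent weight of DER 332 (g/eq), from the text
ipda_molar_mass = 170.3      # g/mol, assumed (not given in the paper)
amine_hydrogens = 4          # functionality of IPDA, from the text

ahew = ipda_molar_mass / amine_hydrogens      # amine hydrogen equivalent weight, ~42.6 g/eq
phr = 100.0 * ahew / eew                      # parts of hardener per hundred parts of resin, ~25 phr

resin_wt = 100.0 / (100.0 + phr)              # ~0.80
hardener_wt = phr / (100.0 + phr)             # ~0.20
print(f"{ahew:.1f} g/eq AHEW, {phr:.1f} phr -> "
      f"{100 * resin_wt:.0f} wt% DER 332 / {100 * hardener_wt:.0f} wt% IPDA")
```

The result, roughly 25 phr of hardener, i.e. about 80 wt% resin and 20 wt% IPDA, matches the ratio used by the authors.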
Quasi-unidirectional flax fabrics UD 360 have been supplied by Fibres Recherche Développement (Troyes, France). Its areal weight is 360g/m² (weft: 330g/m 2 ; warp: 30g/m 2 ) and its thickness is 0.4mm. Preparation of composites The composites were manufactured using a vacuum infusion process in controlled atmosphere (50%RH and 23°C). The resin was heated at 40°C to obtain a viscosity (of 120P/s) compatible with the vacuum infusion process and then mixed with the hardener during 3 minutes at 400rpm before the infusion. Four plies of fibers (300 x 300mm 2 ) oriented in the weft direction were infused at a constant pressure of 100 mbar during 30 minutes. The composite was then cured at 80°C for 24h (NoPC). Three post-curing cycles were studied: 2 hours at 100°C (PC100), 120°C (PC120) or 150°C (PC150). Resulting composites contain 30%vol of fibers. Composite plates (300 x 300 x 3mm 3 ) were then cut into various size samples and stored at 23°C and 50%RH according to ISO 291 and ASTM D618. Characterization Differential Scanning Calorimetry (DSC) A PYRIS Diamond DSC from Perkin Elmer was used to assess the evolution of the glass transition temperature and the crosslinking rate of the composites. Samples between 10 and 15mg were tested in aluminum pans with holes. The purge gas was nitrogen (at 20L/min). The heating program consists of a first step of 3 minutes at 20°C followed by a heating step at 10°C/min from 20 to 200°C. The samples were then cooled at 10°C/min to 20°C and heated again following the same heating program. To determine the crosslinking rate, uncured resin samples were analyzed following the usual thermal program. The DSC curves (Fig. 1) showed an exothermic peak leading to the total curing enthalpy of the system (ΔH total , crosslinking rate of 100%). For the cured samples (NoPC, PC100, PC120 and PC150), the Heat Flow Vs. Temperature curves (Fig. 2) showed an endothermic peak ( ΔH peak , relaxation enthalpy) due to the cure at 80°C (NoPC) or the postcuring at 100 and 120°C (PC100 and PC120). The DSC curves for PC150 (Fig. 2) showed no endothermic peak but a clear glass transition temperature. A simple cross-multiplication allowed us to calculate the composite crosslinking rate following the equation 1. The total enthalpy of curing (ΔH total ) and the various relaxation enthalpies are shown in Table 1. (Eq. 1) Thermo-Gravimetric Analysis (TGA) A Setaram (SETSYS evolution model) TGA device was used to assess the weight loss during different heating programs. Sample between 10 and 15mg were tested in ceramic crucibles. The purge gas was nitrogen (at 20L/min). The basic heating program consists of a first step of 20 minutes at 20°C followed by a heating step at 10°C/min from 20 to 900°C. The curing/post-curing cycles were also tested on the fibers: 24 hours at 80°C followed by 2 hours at 100°C, 120°C or 150°C to assess to possible weight loss during the processes. According to NF EN ISO 527-4 standard, Young modulus is obtained by linear regression between a strain of 0.05 and 0.25%. However, the NF EN ISO 527-4 standard is not adapted to bio-composites because of their bilinear responses in the range rather than the linear behavior assumed in the standard [START_REF] Baley | Influence of the absorbed water on the tensile strength of flax fibers[END_REF]. This bilinear behavior can be attributed to a reorientation of the cellular fibrils. There is a transition in the slope between 0.2 and 0.3% called a "knee-point" leading to incorrect modulus calculations (Fig. 
3a) [START_REF] Liang | Quasi-static behaviour and damage assessment of flax/epoxy composites[END_REF][START_REF] Bensadoun | Fatigue behaviour assessment of flax-epoxy composites[END_REF][START_REF] Berges | Influence of moisture uptake on the static, cyclic and dynamic behaviour of unidirectional flax fibre-reinforced epoxy laminates[END_REF]. To overcome this issue, the modulus was calculated according to NF EN ISO 527-4 standard but also at higher strain between 0.5 and 0.8% leading to a stabilized modulus (Fig. 3b). Flax fabrics Tensile tests were carried out on flax fabrics using a Zwick testing machine (TH010 model) with a loading cell of 10kN. The tests were performed at a constant displacement rate of 1 mm/min. Samples were 50mm large with a length ranging from 10 to 200mm to assess different modes of breaking (neat breaking and disentanglement). Series of 3 specimens per length were tested. Some fabrics were previously heated 24h at 80°C (F-No PC) and then 2h at various temperatures (F-PC100, F-PC120 and F-PC150) in order to simulate the curing process used to prepare the corresponding composites. The stiffness of the sample was determined using a linear regression on the linear part of the load-strain curves obtained. Interlaminar Shear Strength (ILSS) tests Following the NF EN ISO 14130 standard, ILSS tests were conducted on a Zwick testingmachine (TH010 model) equipped with a 2.5kN load cell; the constant displacement rate was 1mm/min. The distance between the bending supports L was 15mm. The length l, width b and thickness h of specimens were respectively 30, 15 and 3mm. The interlaminar shear strength was calculated from equation 2 and the maximal tensile stress from equation 3, where F is the force applied to break the samples. (Eq. 2) (Eq. 3) Scanning electron microscopy (SEM) Images of fractured cross-sections or of polished sections were acquired with an environmental scanning electron microscope (FEI Quanta 200). Samples were metallized with carbon in high vacuum to stabilize them during the analysis. The micrographs were then obtained under high vacuum at a voltage of 15kV, a working distance of 10mm and a 100 to 5000 magnification. RESULTS AND DISCUSSION Influence of post-curing on composite properties Evolution of the glass transition temperature and density Table 2 shows that an increase of the post-curing temperature leads to an increase in the glass transition temperature measured during the first heating ramp from 115°C for NoPC to 167°C for PC 150. This phenomenon is due to the increase of the crosslinking rate of the composite. Indeed, the higher the process temperature is, the higher the crosslinking rate and the glass transition temperature will be. The post-curing of 2 hours at 150°C leads to a fully crosslinked resin with a maximum glass transition temperature of 167°C. For all other post-curing conditions, the crosslinking is incomplete and a second heating ramp allows increasing further the glass transition temperature. NoPC samples have a density of 1.276g/cm 3 . When the post-curing temperature increases, the composite density decreases progressively with a maximum decrease of 0.82% for PC150 (d PC100 = 1.270g/cm 3 , d PC120 = 1.266g/cm 3 and d PC150 = 1.262g/cm 3 ). Crosslinking rate, glass transition temperature and density are all related to the curing/post-curing temperature. 
The higher the glass transition temperature is, the lower the composite density will be (because of the resin density) [START_REF] Gillham | Anomalous behavior of Cured Epoxy Resins : Density at room temperature vs. Time and Temperature of cure[END_REF][START_REF] Enns | Effect of the extent of cure on the modulus, glass transition, water absorptio, and density of an amine-cured epoxy[END_REF][START_REF] Fisch | Influence of structure and curing conditions on the density, degree of cure, and glass transition temperature during the curing of epoxide resins[END_REF]. The post-curing increases the crosslinking rate leading to less free ends. A re-arrangement of the molecular chains causes a variation of the samples specific volume (free volume + occupied volume) and thus a decrease of the density. Tensile strength tests The tensile strength of the non-post-cured (NoPC) and post-cured (PC100, PC120 and PC150) composites is shown on Figure 4a. A decrease of the tensile strength from 252MPa for NoPC to 136MPa for PC150 is observed when the post-curing temperature increases. The higher the temperature of post-curing is, the lower the maximal stress is with a maximal decrease of 46% for the highest temperature. The decrease in tensile strength is particularly significant when post-curing temperature increases from 120 to 150°C with a relative variation of 40% between these two temperatures. A decrease of tensile strength despite a higher crosslinking rate evidences that one (or both) of the composite components is modified during post-curing. Figure 4 -a) Tensile strength (MPa), elongation at break (%) and Young Modulus (GPa) of the composites post-cured at 100, 120 and 150°C and b) comparison between the Young modulus (GPa) and the stabilized modulus (GPa) As shown in Figure 4a, the elongation at break (%) follows the same tendency. A significant decrease is observed for PC150 with an elongation at break of 0.74% (versus 1.64%, 1.45% and 1.33% for NoPC, PC100 and PC120 respectively). This may be explained by the modification of the resin which is more brittle when crosslinking rate is higher (see 3.2.1). Bensadoum and al. studied epoxy composites reinforced with quasi-UD and UD flax fibers with a tensile modulus (calculated between 0.3 and 0.5% strain) of 15.9GPa and 20GPa respectively. Those composites had a fiber volume fraction of 40% and were manufactured using Resin Transfer Moulding [START_REF] Bensadoun | Fatigue behaviour assessment of flax-epoxy composites[END_REF]. NoPC samples have a stabilized modulus of 12.9GPa (between 0.5% and 0.8% strain) for a composite with a fiber volume fraction of 30%. Using the rule of mixtures, this modulus would be of 16.4GPa for a composite with a fiber volume fraction of 40%. Baley and al. studied a flax/epoxy composite with a fiber volume fraction of 40% and manufactured by compression molding after wet impregnation of the fibers. The studied composites had an axial modulus (between 0.05 and 0.25% strain) of 22.5±1.51GPa [START_REF] Baley | Influence of drying on the mechanical behaviour of flax fibres and their unidirectional composites[END_REF]. NoPC samples have a modulus between 0.05 and 0.25% strain of 17.9GPa for a fiber volume fraction of 30%. Using the rule of mixture, this composite would have a modulus of 23.06GPa for a fiber volume fraction of 40%. Many factors can affect the mechanical properties of a composite (type of flax and epoxy resin used, manufacturing process, …). 
However, the values obtained for NoPC are comparable to the ones obtained by Bensadoum's and Baley's composites allowing to position our values in an acceptable range of modulus for an epoxy resin reinforced with flax fibers. Figure 4a also shows the Young modulus for all composites according to NF EN ISO 527-4 standard. Despite the high standard deviations, the Young's modulus tends to increase when the temperature of post-curing increases from 17.9 ± 0.68 GPa for NoPC to 22.2 ± 2.18 GPa for PC150. This trend is also followed by the stabilized modulus which is 28% lower and more reproducible (Figure 4b). We observe a decrease of the elongation at break and the failure tensile strength while the modulus increases meaning that the composite is more brittle. Influence of post-curing on the composite components In order to understand the behavior modifications of the composite when a post-cure is carried out, each component of the composite (resin, fibers and interface) was studied separately. Post curing of the epoxy resin Evolution of the glass transition temperature As shown in Table 3, 24 hours of curing at 80°C leads to a resin exhibiting a 92% crosslinking rate and a T g of 121°C. The resin follows the same trend as the composite. When the post-curing temperature increases, an increase of T g is also observed with a maximal value of 165°C for the resin post-cured 2 hours at 150°C. This trend was expected since the aim of post-curing is to obtain a 100% crosslinking rate. After post-curing the molecular mobility within the resin is reduced, leading to a higher T g . It has been reported that the composite T g (Table 2) may be slightly lower than the resin T g (Table 3) due to the disruption of the resin network by the incorporation of fibers that could either absorb the amine hardener in the interface region or generate non cellulosic materials that migrate into the resin [START_REF] Fernández | Role of flax cell wall components on the microstructure and transverse mechanical behaviour of flax fabrics reinforced epoxy biocomposites[END_REF]. But an opposite phenomenon is observed when a post-cure is carried out at 100, 120 or 150°C because the composite T g is higher than the resin T g . This could be explained by a change in specific heat or thermal conductivity when fibers are introduced [START_REF] Crowson | The elastic properties in bulk and shear of a glass bead-reinforced epoxy resin composite[END_REF][START_REF] Nicolais | The Glass Transition Temperature of Poly(phenylene oxide): Annealing and Filler Effects[END_REF]. Tensile tests A decrease of the Young modulus (Table 4) is observed when a post-cure is realized on the samples with a maximal drop of 15% for R-PC150.This decrease could be explained by a change of resin density that varied from 1.276g/cm3 for NoPC to 1.262g/cm3 for PC150. However, no significant modification of the modulus was observed when the post-curing temperature increases (or the phenomenon is concealed by the high standard deviation of the results). Table 4 also shows the same trend (high standard deviation around an average strain of 3%) regarding the resin elongation at break after post-curing. 
The maximal tensile strength at break does not significantly change for R-PC100 and R-PC120 but decreases by 20% when the post-curing temperature reaches 150°C while the opposite trend was expected [START_REF] Aruniit | Preliminary Study of the Influence of Post Curing Parameters to the Particle Reinforced Composite's Mechanical and Physical Properties[END_REF], [START_REF] Moussa | Long-term physical and mechanical properties of cold curing structural epoxy adhesives[END_REF]. For R-PC150, another phenomenon is altering the mechanical properties. Carbas and al. [START_REF] Carbas | Effect of Cure Temperature on the Glass Transition Temperature and Mechanical Properties of Epoxy Adhesives[END_REF] showed that when the curing temperature is above or near the resin T g , the mechanical properties (Modulus and failure tensile strength) of the epoxy resin at room temperature decrease because of a thermal degradation or an oxidative crosslinking (crosslinking within the epoxy polymer). A slight change of color (from transparent to light yellow) is also evidenced and assigned to an early oxidation of the epoxy resin because the post-curing temperature is approaching the R-PC150 glass transition temperature (T g,R-PC150 = 165°C, Table 3) [START_REF] Tandon | Modeling of oxidative development in PMR-15 resin[END_REF][START_REF] Colin | A new method for predicting the thermal oxidation of thermoset matrices: Application to an amine crosslinked epoxy[END_REF]. Flax fabrics The second composite component to be studied is the flax fabric used to reinforce the epoxy resin. Flax fibers contain generally between 7 and 10% of moisture in standard conditions (23°C and 50%RH) and this moisture is removed during post-curing at high temperature [START_REF] Baley | Influence of the absorbed water on the tensile strength of flax fibers[END_REF][START_REF] Gassan | Thermal degradation of flax and jute fibers[END_REF][START_REF] Bourmaud | Importance of fiber preparation to optimize the surface and mechanical properties of unitary flax fiber[END_REF][START_REF] Jalaludin | Moisture adsorption isotherms of wood studied using dynamic vapour sorption apparatus[END_REF]. TGA tests (Figure 5) show that the present flax fibers contain only 5% of moisture and that this moisture evaporates when the temperature exceeds 86°C. This was the reason for choosing 80°C as curing temperature. During post-curing at 100, 120 or 150°C, moisture is removed from the fibers and may be extracted from the composites. The higher the temperature is, the more damaging this phenomenon becomes. For instance, when temperature is over 120°C some components of cell walls like waxes are degraded [START_REF] Müssig | The use of flax fibres as reinforcements in composites[END_REF]. Tensile tests were carried out onto fabrics by changing the sample length in order to modify the breaking mechanisms. The evolution of the maximal load until breaking for the different samples is shown in Figures 6 and7. First observations concern the maximal load that decreases when the sample length increases due to a change of the main mode of breaking when longer samples are tested [START_REF] Charlet | Mechanical properties of interfaces within a flax bundle -Part I: Experimental analysis[END_REF][START_REF] Zhu | Recent development of flax fibres and their reinforced composites based on different polymeric matrices[END_REF][START_REF] Charlet | Mechanical characterization and modeling of interfacial lamella within a flax bundle[END_REF]. 
Bledzki and Gassan also showed that the flax fiber tensile strength is dependent on the length of the fiber itself. The longer it is, the more inhomogeneous it will be, weakening its structure [START_REF] Bledzki | Composites reinforced with cellulose based fibres[END_REF]. Indeed, for short fibers, a neat breaking is observed at the center of the specimen but when the length of the specimen increases, disentanglement is also noted. Hence the behavior of samples depends on a combination of neat breaking and disentanglement. The latter tends to become predominant for longer specimen. Figure 8 shows the evolution of the maximal load normalized in comparison to raw specimens to see the effect of temperature on the fabrics. It shows that the disentanglement is all the more so retarded by the post-curing temperature when sample length increases because the higher the temperature is, the higher the maximal load will be (compared to other similar length samples). Figure 6 and Figure 8 also highlight that for shorter samples (10mm), the higher the temperature of post-curing is, the lower the maximum supported load is, while for longer samples the opposite trend is observed. Indeed, the post-curing process will remove the moisture contained in the fibers. The loss of water will create a modification of the cohesion between the cellulose microfibrils leading to a decrease of the maximal load supported by the fibers [START_REF] Baley | Influence of the absorbed water on the tensile strength of flax fibers[END_REF]. Thus, the maximal load supported will decrease. Apparent stiffness of fabrics is shown in Figure 9. The apparent stiffness of raw flax fabrics tends to decrease when the specimen length increases. This is due to the occurrence of disentanglement that reduces the global stiffness. The 24 hours curing process at 80°C generates a 10% rise of the fabric apparent stiffness (compare NoPC and raw fabrics) for 10mm samples. Then, the higher the post-curing temperature is, the higher the increase of apparent stiffness will be with a 19% increase for F-PC150. This was expected since heating a single fiber results in an increase of its stiffness because of a reorganization of the fibrils [START_REF] Baley | Influence of drying on the mechanical behaviour of flax fibres and their unidirectional composites[END_REF]. But, for sample length equal to or higher than 50mm, the impact of post-curing on apparent stiffness is limited (Fig. 6, Fig. 9). It seems that disentanglement conceals the impact of the post-curing treatments on fibers. Characterization of the interface (SEM+ILSS) The interfacial zone, where interactions between flax fibers and epoxy resin allow the stress transfer from resin to reinforcement, may also be considered as a specific component of the composite that could be affected by the thermal treatments. Indeed, a modification of the interface between fibers and resin may explain a decrease of the tensile strength of the composite. In order to assess the quality of the interface, ILSS tests and SEM fractographies were carried out. We choose to use ILSS tests in order to determine if there was a modification of the interface quality after the various post-curing treatments. Indeed, ILSS allows to determine on a relative basis the tendencies of the bond strength in a given system where only the bonding level is changing. 
Thus, in our case, ILSS may help determining if the interfacial bonding properties could be responsible of the modification of the composite mechanical properties [START_REF] Narkis | Review of methods for characterization of interfacial fiber-matrix interactions[END_REF]. The ILSS of a composite reinforced with fibers depends on the amount of fibers (synthetic or natural), the processing techniques or even the temperature of measurement. But ultimately, it is mainly dominated by the resin properties and the fibermatrix interfacial strength [START_REF] Kumar | Effect of post-curing on thermal and mechanical behavior of GFRP composites[END_REF][START_REF] Ahmed | Tensile, flexural and interlaminar shear properties of woven jute and jute-glass fabric reinforced polyester composites[END_REF][START_REF] Wu | Effect of matrix modification on interlaminar shear strength of glass fibre reinforced epoxy composites at cryogenic temperature[END_REF][START_REF] Chandrasekaran | Role of processing on interlaminar shear strength enhancement of epoxy/glass fiber/multi-walled carbon nanotube hybrid composites[END_REF]. Synthetic fibers have a rather common geometrical factor whereas natural fibers have a complex geometrical factor (composed of sub-elements) which will improve the fiber-matrix interfacial strength and therefore, have a positive effect on the ILSS values. There is no sign that the interface was modified by the post-curing treatment either on the fracture images or on the polished section. In order to confirm the SEM observations by quantitative results, ILSS tests were conducted. Those tests allow to quantify the interfacial interactions and to observe a possible change when the post-curing temperature increases. In this test, the sample undergoes important shear stresses that may cause the failure of the sample. First, it can be observed in Table 5, that the maximal tensile stress in ILSS samples is lower than the tensile strength determined during the tensile tests in Figure 4 with the exception of PC150. Those results allow concluding that the specimens broke under shear stress for NoPC, PC100 and PC120 while no conclusion can be drawn from PC150 results. According to Table 5, it would seem that the post-curing does not affect the composite ILSS whereas the tensile strength decreases significantly. However, the ILSS for NoPC are 49% higher than those obtain by H. Bos in her study for an epoxy reinforced with flax fabrics composite (15.4 ± 0.4MPa) [START_REF] Bos | The potential of flax fibres as reinforcement for composite materials[END_REF] but similar to Meredith's composite (high strength epoxy resin reinforced with flax fabric) that has an ILSS of 23.3MPa [START_REF] Meredith | On the static and dynamic properties of flax and Cordenka epoxy composites[END_REF]. Epoxy composites reinforced with other natural fibers such as jute fibers exhibit an ILSS 53% superior with an ILSS of 43 ±9 MPa [START_REF] Doan | Jute fibre/epoxy composites: Surface properties and interfacial adhesion[END_REF]. The same flax fabrics (UD360) impregnated with an unsaturated polyester resin lead to a composite with a 40% lower ILSS of 13.73 ± 0.85MPa [START_REF] Testoni | In situ long-term durability analysis of biocomposites in the marine environment[END_REF]. CONCLUSIONS The objective of this study was to identify the component (fibers, resin or interface) responsible for the progressive change of the composite mechanical properties when a postcuring is carried out at 100, 120 or 150°C. 
After reproducing the post-curing conditions on each component, static mechanical tests were carried out to assess whether an alteration occurred during a 2-hour thermal treatment. The composite glass transition temperature and the crosslinking rate increase when a post-cure is carried out. The higher the temperature of post-curing, the higher the Tg/crosslinking rate will be (115°C/92% for NoPC and 167°C/100% for PC150). But post-curing leads to a modification of composite mechanical properties, especially at high temperature (150°C). Both tensile strength and elongation at break decrease when the post-curing temperature increases, whereas the stabilized modulus does not significantly vary. Since the interfacial adhesion in NoPC is weak, these alterations cannot be assigned to interfacial damage. No significant change of the resin behavior was observed for R-PC100 and R-PC120, but at 150°C, only 15°C below the glass transition temperature, an early oxidation is the first sign of resin damage. On the contrary, a strong modification of the fiber properties was found when they undergo a thermal treatment similar to post-curing. Indeed, an increase of the stiffness was highlighted, as well as a decrease of the maximal load supported by the fabric, when heating at high temperature. However, the impact of heating may be masked by fiber disentanglement when the specimen length is large enough. Therefore, tests on short specimens are advised. It appears that combining a well crosslinked resin with natural fibers to obtain a composite with the highest mechanical properties is hardly possible. A good compromise for a flax fiber reinforced epoxy composite would be to perform a 24-hour curing at 80°C followed by a 2-hour post-curing at 120°C. The resulting composite would have a Tg of 145°C and a limited drop of about 14% of the tensile strength compared with an equivalent non-post-cured composite. It would seem that today's epoxy resins are not completely adapted for processing natural fiber reinforced epoxy composites with high (and optimal) mechanical properties. A solution to improve the competitiveness of natural fibers in structural composites would be to associate them with resins that can be cured (rapidly) at low temperature.
Figure 1 - DSC Curve for the system DGEBA - IPDA (cured 100%)
Figure 2 - Heat Flow Vs. Temperature curves for the cured samples (NoPC, PC100, PC120 and PC150)
Figure 3 - Strain-Tensile strength curves a) and Tangent Modulus b) for NoPC and PC150
Figure 5 - TGA of the Flax Fibers
Figure 6 - Load Vs.
Displacement of Raw, NoPC and PC150 series for different sample lengths
Figure 9 - Stiffness of the different materials versus sample length (mm)
Figure 10 - a) NoPC x1000, b) NoPC x5000, c) NoPC - polished section x800, d) PC150 x1000, e) PC150 x5000, f) PC150 - polished section x800

Table 1 - Total reaction enthalpy for the system DGEBA + IPDA and post relaxation enthalpies for NoPC, PC100, PC120 and PC150
                        100% cured (DSC)   NoPC   PC100   PC120   PC150

Table 2 - Glass transition temperature of the various composites
Sample                                  NoPC    PC100   PC120   PC150
1st temperature ramp (20°C - 200°C)     115°C   127°C   145°C   167°C
2nd temperature ramp (20°C - 200°C)     158°C   161°C   162°C   167°C

Table 3 - Glass transition temperatures and crosslinking rate of the resin
Sample                        R-NoPC   R-PC100   R-PC120   R-PC150
Tg (°C) (1st heating ramp)    121°C    123°C     139°C     165°C
Tg (°C) (2nd heating ramp)    165°C    162°C     163°C     165°C
Cross-linking rate (%)        92       94        97        100

Table 4 - Young Modulus (GPa), Maximal stress (MPa) and strain (%) of the epoxy resin post-cured at 100, 120 and 150°C
Sample                                  R-NoPC        R-PC100       R-PC120       R-PC150
Modulus (GPa) according to ISO 527      3.4 ± 0.52    2.7 ± 0.27    2.9 ± 0.06    2.9 ± 0.37
Maximal tensile strength (MPa)          68 ± 8.1      69 ± 5.2      66 ± 3.2      54 ± 3.8
Elongation at break (%)                 2.67 ± 0.54   3.56 ± 0.56   3.33 ± 0.04   3.12 ± 0.43

Table 5 - Maximal tensile strength and maximal shear stress for the various composites
Sample    σmax, tensile stress (MPa)   Tensile strength (MPa)   ILSS (MPa)
No PC     205 ± 15                     252                      23.02 ± 1.62
PC 100    200 ± 10                     232                      22.48 ± 1.10
PC 120    215 ± 12                     218                      22.03 ± 1.30
PC 150    195 ± 11                     136                      >21.8 ± 1.17
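The expressions behind (Eq. 2) and (Eq. 3) did not survive in the text. The short-beam formula prescribed by NF EN ISO 14130, τ = 3F/(4bh), and the classical three-point-bending expression for the mid-span tensile stress, σ = 3FL/(2bh²), are the usual choices for this test; whether they are exactly the authors' equations cannot be confirmed here, and the breaking force used in the sketch below is purely illustrative.

```python
def ilss(F, b, h):
    """Apparent interlaminar shear strength, ISO 14130 short-beam formula (MPa for F in N, b and h in mm)."""
    return 3.0 * F / (4.0 * b * h)

def flexural_stress(F, L, b, h):
    """Maximum tensile stress at mid-span in three-point bending (classical beam theory)."""
    return 3.0 * F * L / (2.0 * b * h ** 2)

# Specimen geometry reported in the paper: span L = 15 mm, width b = 15 mm, thickness h = 3 mm.
L, b, h = 15.0, 15.0, 3.0
F = 1400.0   # hypothetical breaking force in N, for illustration only
print(f"ILSS = {ilss(F, b, h):.1f} MPa, max tensile stress = {flexural_stress(F, L, b, h):.0f} MPa")
```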
01772686
en
[ "spi.signal" ]
2024/03/05 22:32:18
2006
https://hal.science/hal-01772686/file/ICA2006.pdf
A Aïssa-El-Bey K Abed-Meraim Y Grenier email: [email protected] On the identifiability testing in blind source separation using resampling technique This paper focuses on the second order identifiability problem of blind source separation and its testing. We present first necessary and sufficient conditions for the identifiability and partial identifiability using a finite set of correlation matrices. These conditions depend on the autocorrelation fonction of the unknown sources. However, it is shown here that they can be tested directly from the observation through the decorrelator output. This issue is of prime importance to decide whether the sources have been well separated or else if further treatments are needed. We then propose an identifiability testing based on resampling (jackknife) technique, that is validated by simulation results. Introduction Blind source separation (BSS) of instantaneous mixtures has attracted so far a lot of attention due to its many potential applications [START_REF]Blind estimation using higher-order statistics[END_REF] and its mathematical tractability that lead to several nice and simple BSS solutions [START_REF]Blind estimation using higher-order statistics[END_REF][START_REF] Pham | Blind source separation of instantaneous mixtures of nonstationary sources[END_REF][START_REF] Belouchrani | Blind source separation using second order statistics[END_REF][START_REF] Cardoso | A.Blind beamforming for non-Gaussian signals[END_REF]. The underlaying model is given by: x(t) = y(t) + w(t) = As(t) + w(t) where s(t) = [s 1 (t), • • • , s m (t)] T is the m × 1 complex source vector, w(t) = [w 1 (t), • • • , w n (t)] T is the n×1 complex noise vector, A is the n×m full column rank mixing matrix (i.e., n ≥ m), and the superscript T denotes the transpose operator. The source signal vector s(t), is assumed to be a multivariate stationary complex stochastic process. In this paper we consider only the second order BSS methods and hence the component processes s i (t), 1 ≤ i ≤ m are assumed to be temporally coherent and mutually uncorrelated, with zero mean and second order moments: S(τ ) def = E (s(t + τ )s (t)) = diag[ρ 1 (τ ), • • • , ρ m (τ )] where ρ i (τ ) def = E(s i (t + τ )s * i (t)), the expectation operator is E, and the superscripts * and denote the conjugate of a complex number and the complex conjugate transpose of a vector, respectively. The additive noise w(t) is modeled as a white stationary zero-mean complex random process. In that case, the source separation is achieved by decorrelating the signals at different time lags. This is made possible under certain identifiability conditions that have been developed in [3] and recalled briefly in this paper. Although the previous conditions are expressed in terms of the autocorrelation coefficient of the unknown source signals, we propose here a solution to test them directly out of the received data using the jackknife (resampling) technique. Second Order Identifiability In [START_REF] Tong | Indeterminacy and identifiability of blind identification[END_REF], Tong et al. have shown that the sources are blindly separable based on (the whole set) of second order statistics only if they have different spectral density functions. In practice we achieve the BSS using only a finite set of correlation matrices. 
Therefore, the preview identifiability result was generalized to that case in [START_REF] Belouchrani | Blind source separation using second order statistics[END_REF]3] leading to the necessary and sufficient identifiability conditions given by the following theorem: Theorem 1. Let τ 1 < τ 2 < • • • < τ K be K ≥ 1 time lags, and define ρ i = [ρ i (τ 1 ), ρ i (τ 2 ), • • • , ρ i (τ K )] and ρi = [ (ρ i ), (ρ i )] where (x) and (x) denote the real part and imaginary part of x, respectively. Taking advantage of the indetermination, we assume without loss of generality that the sources are scaled such that ρ i = ρi = 1, for all i1 . Then, BSS can be achieved using the output correlation matrices at time lags τ 1 , τ 2 , • • • , τ K if and only if for all 1 ≤ i = j ≤ m: ρi and ρj are (pairwise) linearly independent Interestingly, we can see from condition (1) that BSS can be achieved from only one correlation matrix R x (k) def = E(x(t + k)x (t) ) provided that the vectors [ (ρ i (k)), (ρ i (k)] and [ (ρ j (k)), (ρ j (k)] are pairwise linearly independent for all i = j. Note also that, from (1), BSS can be achieved if at most one temporally white source signal exists. In contrast, recall that when using higher order statistics, BSS can only be achieved if at most one Gaussian source signal exists. Under the condition of Theorem 1, the BSS can be achieved by decorrelation according to the following result: Theorem 2. Let τ 1 , τ 2 , • • • , τ K be K time lags and z(t) = [z 1 (t), • • • , z m (t)] T be an m × 1 vector given by z(t) = Bx(t). Define r ij (k) def = E(z i (t + k)z * j (t)). If the identifiability condition holds, then B is a separating matrix (i.e. By(t) = PΛs(t) for a given permutation matrix P and a non-singular diagonal matrix Λ) if and only if r ij (k) = 0 and τ K k=τ1 |r ii (k)| > 0 (2) for all 1 ≤ i = j ≤ m and k = τ 1 , τ 2 , • • • , τ K . Note that, if one of the time lags is zero, the result of Theorem 2 holds only under the noiseless assumption. In that case, we can replace the condition τ K k=τ1 |r ii (k)| > 0 by r ii (0) > 0, for i = 1, • • • , m. On the other hand, if all the time lags are non-zero and if the noise is temporally white (but can be spatially colored with unknown spatial covariance matrix) then the above result holds without the noiseless assumption. Based on Theorem 2, we can define different objective (contrast) functions for signal decorrelation. In [START_REF] Kawamoto | Blind separation of sources using temporal correlation of the observed signal[END_REF], the following criterion2 was used G(z) = τ K k=τ1 log |diag(R z (k))| -log |R z (k)| (3) where diag(A) is the diagonal matrix obtained by zeroing the off diagonal entries of A. Another criterion used in [START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF] is G(z) = τ K k=τ1 1≤i<j≤m |r ij (k)| 2 + m i=1 | τ K k=τ1 |r ii (k)| -1| 2 (4) Equations ( 3) and ( 4) are non-negative functions which are zero if and only if R z (k) = E(z(n + k)z (n)) are diagonal for k = τ 1 , • • • , τ K or equivalently if (2) is met. Partial Identifiability It is generally believed that when the identifiability conditions are not met, the BSS cannot be achieved. This is only half of the truth as it is possible to partially separate the sources in the sense that we can extract those which satisfy the identifiability conditions. 
More precisely, the sources can be separated in blocks each of them containing a mixture of sources that are not separable using the considered set of statistics. For example, consider a mixture of 3 sources such that ρ1 = ρ2 while ρ1 and ρ3 are linearly independent. In that case, source s 3 can be extracted while sources s 1 and s 2 cannot. In other words, by decorrelating the observed signal at the considered time lags, one obtain 3 signals one of them being s 3 (up to a scalar constant) and the two others are linear mixtures of s 1 and s 2 . This result can be mathematically formulated as follows: assume there are d distinct groups of sources each of them containing d i source signals with same (up to a sign) correlation vector ρi , i = 1, • • • , d (clearly, m = d 1 + • • • + d d ). The correlation vectors ρ1 , • • • , ρd are pairwise linearly independent. We write s(t) = [s T 1 (t), • • • , s T d (t)] T where each sub-vector s i (t) contains the d i source signals with correlation vector ρi . Theorem 3. Let z(t) = Bx(t) be an m × 1 random vector satisfying equation (2) for all 1 ≤ i = j ≤ m and k = τ 1 , • • • , τ K . Then, there exists a permutation matrix P such that z(t) def = Pz(t) = [z T 1 (t), • • • , z T d (t)] T where z i (t) = U i s i (t), U i being a d i × d i non-singular matrix. Moreover, sources belonging to the same group, i.e., having same (up to a sign) correlation vector ρi can not be separated using only the correlation matrices R x (k), k = τ 1 , • • • , τ K . This result (see [3])shows that when some of the sources have same (up to a sign) correlation vectors then the best that can be done is to separate them per blocks and this can be achieved by decorrelation. However, this result would be useless if we cannot check the linear dependency of the correlation vectors ρi and partition the signals per groups (as shown above) according to their correlation vectors. This leads us to the important problem of testing the identifiability condition that is discussed next. Testing of identifiability condition Theoretical result The necessary and sufficient identifiability condition (1) depends on the correlation coefficients of the source signals. The latter being unknown, it is therefore impossible to a priori check whether the sources are 'separable' or not from a given set of output correlation matrices. However, it is possible to check a posteriori whether the sources have been 'separated' or not. We have the following result [3]: Theorem 4. Let τ 1 < τ 2 < • • • < τ K be K distinct time lags and z(t) = Bx(t). Assume that B is a matrix such that z(t) satisfies3 equation (2) for all 1 ≤ i = j ≤ m and k = τ 1 , • • • , τ K . Then there exists a permutation matrix P such that for k = τ 1 , • • • , τ K . E(z(t + k)z (t)) = P T S(k)P In other words the entries of z(t) def = Pz(t) have the same correlation coefficients as those of s(t) at time lags τ 1 , • • • , τ K , i.e. E(z i (t + k)z * i (t)) = ρ i (k) for k = τ 1 , • • • , τ K and i = 1, • • • , m. From Theorem 4, the existence of condition (1) can be checked by using the approximate correlation coefficients r ii (k 2) holds, it does not mean that the source signals have been separated. Three situations may happen: ) def = E(z i (t + k)z i (t)). It is important to point out that even if equation ( 1. For all pairs (i, j), ρi and ρj (computed from z(t)) are pairwise linearly independent. Then we are sure that the sources have been separated and that z(t) = s(t) up to the inherent indeterminacies of the BSS problem. 
In fact, the testing of the identifiability condition is equivalent to pairwise testing the angles between ρi and ρj for all 1 ≤ i = j ≤ m. The larger the angle between ρi and ρj , the better the quality of source separation (see performance analysis in [START_REF] Belouchrani | Blind source separation using second order statistics[END_REF]). 2. For all pairs (i, j), ρi and ρj are linearly dependent. Thus the sources haven't been separated and z(t) is still a linear combination of s(t). 3. A few pairs (i, j) out of all pairs satisfy ρi and ρj linearly dependent. Therefore the sources have been separated in blocks. Now, having only one signal realization at hand, we propose to use a resampling technique to evaluate the statistics needed for the testing. Testing using resampling techniques Note that in practice the source correlation coefficients are calculated from noisy finite sample data. Due to the joint effects of noise and finite sample size, it is impossible to obtain the exact source correlation coefficients to test the identifiability condition. The identifiability condition should be tested using a certain threshold α, i.e., decide that ρi and ρj are linearly independent if || ρi ρT j |-1| > α. To find α we use the fact that the estimation error of ρi ρT j is asymptotically gaussian 4 and hence one can build the confidence interval of such a variable according to its variance. This algorithm can be summarized as follows: 1. Estimate a demixing matrix B and z(t) def = Bx(t) using an existing second order decorrelation algorithm (e.g. SOBI [START_REF] Belouchrani | Blind source separation using second order statistics[END_REF]). 2. For each component z i (t), estimate the corresponding normalized vector ρ i . 3. Calculate the scalar product R(i, j) = | ρi ρT j | for each pair (i, j). 4. Estimate σ(i,j) the standard deviation of R(i, j) using resampling technique (see Section 5). 5. Choose α (i,j) according to the confidence interval. e.g. to have a confidence interval equal to 99.7% we choose α (i,j) = 3σ (i,j) , and compare | R(i, j) -1| to α (i,j) to test whether sources i and j have been separated or not. Resampling techniques: The jackknife In many signal processing applications one is interested in forming estimates of a certain number of unknown parameters of a random process, using a set of sample values. Further, one is interested in finding the sampling distribution of the estimators, so that the respective means, variances, and cumulants can be calculated, or in making some kind of probability statements with respect to the unknown true values of the parameters. The bootstrap [START_REF] Zoubir | The bootstrap and its application in signal processing[END_REF] was introduced by Efron [START_REF] Efron | The jackknife, the bootstrap and other resampling plans[END_REF] as an approach to calculate confidence intervals for parameters in circumstances where standard methods cannot be applied. The bootstrap has subsequently been used to solve many other problems that would be too complicated for traditional statistical analysis. In simple words, the bootstrap does with a computer what the experimenter would do in practice, i.e. if it were possible: he or she would repeat the experiment. With the bootstrap, the observations are randomly reassigned, and the estimates recomputed. These assignments and recomputations are done hundreds or thousands of times and treated as repeated experiments. 
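The five steps above map almost one-to-one onto code. The sketch below is an illustration rather than the authors' implementation: it assumes the decorrelated outputs z(t) (for instance from SOBI) are already available, uses the modulus of the normalized scalar product as in Section 4.2, estimates the standard deviation with the delete-one jackknife detailed just below, and recomputes the correlation vectors for every deleted sample in a brute-force way for the sake of clarity rather than efficiency.

```python
import numpy as np

def corr_vector(z_i, lags):
    """Normalized vector rho_i = [Re(rho_i), Im(rho_i)] built from the autocorrelations at the given lags."""
    T = len(z_i)
    rho = np.array([np.mean(z_i[k:] * np.conj(z_i[:T - k])) for k in lags])
    v = np.concatenate([rho.real, rho.imag])
    return v / (np.linalg.norm(v) + 1e-12)

def separability_test(z, lags, n_sigma=3.0):
    """Pairwise a posteriori test of the identifiability condition on the outputs z (m x T).

    Returns (R, sigma, separated): the jackknife estimates of |rho_i . rho_j^T|, their standard
    deviations, and the decision matrix (True where sources i and j are declared separated).
    """
    m, T = z.shape
    prods = np.empty((T, m, m))
    for t in range(T):                       # delete-one jackknife replicates
        z_t = np.delete(z, t, axis=1)
        rho = np.array([corr_vector(z_t[i], lags) for i in range(m)])
        prods[t] = np.abs(rho @ rho.T)
    R = prods.mean(axis=0)
    sigma = np.sqrt((T - 1) / T * ((prods - R) ** 2).sum(axis=0))
    separated = np.abs(R - 1.0) > n_sigma * sigma
    return R, sigma, separated
```

Only the off-diagonal entries of the returned decision matrix are meaningful; with n_sigma = 3 the threshold α(i,j) = 3σ(i,j) corresponds to the 99.7% confidence interval mentioned in step 5.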
The jackknife [START_REF] Miller | The jackknife -A review[END_REF] is another resampling technique for estimating the standard deviation. As an alternative to the bootstrap, the jackknife method can be thought of as drawing n samples of size n-1 each, without replacement, from the original sample of size n [START_REF] Miller | The jackknife -A review[END_REF]. Suppose we are given the sample X = {X_1, X_2, ..., X_n} and an estimate θ computed from X. The jackknife method is based on deleting one observation at a time from the sample, X_(i) = {X_1, X_2, ..., X_{i-1}, X_{i+1}, ..., X_n} for i = 1, 2, ..., n, called the jackknife sample. This i-th jackknife sample consists of the data set with the i-th observation removed. For each i-th jackknife sample, we calculate the i-th jackknife estimate θ_(i) of ϑ, i = 1, 2, ..., n. The jackknife estimate of the standard deviation of θ is defined by

σ = \sqrt{ \frac{n-1}{n} \sum_{i=1}^{n} \left( \theta_{(i)} - \frac{1}{n} \sum_{j=1}^{n} \theta_{(j)} \right)^{2} }

The jackknife is computationally less expensive if n is less than the number of replicates used by the bootstrap for standard deviation estimation, because it requires computation of θ only for the n jackknife data sets. For example, if L = 25 resamples are necessary for standard deviation estimation with the bootstrap, and the sample size is n = 10, then clearly the jackknife would be computationally less expensive than the bootstrap.
In order to test the separability of the estimated signals, we have used a jackknife method to estimate the variance of the scalar product quantities R(i, j) for i, j = 1, 2, ..., m. This is done according to the following steps:
1. From each signal z_i = [z_i(0), ..., z_i(T-1)]^T, generate T vectors z_i^{(j)} = [z_i(0), ..., z_i(j-1), z_i(j+1), ..., z_i(T-1)]^T, for j = 0, 1, ..., T-1.
2. For each vector z_i^{(j)}, estimate the corresponding vector ρ_i^{(j)}.
3. Estimate R such that its (i, j)-th entry is

R(i, j) = \frac{1}{T} \sum_{k=0}^{T-1} \frac{ \langle \rho_i^{(k)}, \rho_j^{(k)} \rangle }{ \| \rho_i^{(k)} \| \, \| \rho_j^{(k)} \| }

where ⟨•, •⟩ denotes the scalar product and ‖•‖ is the Euclidean norm.
4. Estimate the standard deviation of R(i, j) by

σ(i, j) = \sqrt{ \frac{T-1}{T} \sum_{k=0}^{T-1} \left( \frac{ \langle \rho_i^{(k)}, \rho_j^{(k)} \rangle }{ \| \rho_i^{(k)} \| \, \| \rho_j^{(k)} \| } - \frac{1}{T} \sum_{l=0}^{T-1} \frac{ \langle \rho_i^{(l)}, \rho_j^{(l)} \rangle }{ \| \rho_i^{(l)} \| \, \| \rho_j^{(l)} \| } \right)^{2} }

6 Discussion
Some useful comments are provided here to get more insight into the considered testing method and its potential applications and extensions.
- The asymptotic performance analysis of SOBI derived in [START_REF] Belouchrani | Blind source separation using second order statistics[END_REF] shows that the separation performance of two sources s_i and s_j depends on the angle between their respective correlation vectors ρ_i and ρ_j. Hence, measuring this angle gives a hint on the interference rejection level of the two considered sources. As a consequence, one can use the measure of this angle not only to test the separability of the two sources but also to guarantee a target (minimum) separation quality. Choosing the threshold α_(i,j) accordingly is an important issue currently under investigation.
- The testing method can be incorporated into a two-stage separation procedure where the first stage consists of a second order decorrelation method (e.g. SOBI). The second stage would be an HOS-based separation method applied only when the testing indicates a failure of separation at the first step.
- In many practical situations, one might be interested in only one or a few source signals.
This is the case for example in the interference mitigation problem in [START_REF] Belouchrani | Interference mitigation in spread spectrum communications using blind source separation[END_REF] or in the power plants monitoring applications [START_REF] D'urso | Blind identification methods applied to Electricite de France's civil works and power plants monitoring[END_REF]. In this situation, the partial identifiability result is of high interest as it proves that the desired source signal can still be extracted even if a complete source separation cannot be achieved. -We believe that similar testing procedure can be used for HOS-based BSS methods, at least for those like JADE [START_REF] Cardoso | A.Blind beamforming for non-Gaussian signals[END_REF], that are based on 4 th order decorrelation. This would be the focus of future research work. Simulation results We present in this section some simulation results to illustrate the performance of our testing method. In the simulated environment we consider uniform linear array with n = 2 sensors receiving the signals from m = 2 unit-power first order autoregressive sources (with coefficients a 1 = 0.95e j0.5 and a 2 = 0.5e j0.7 ) in the presence of stationary complex temporally white noise. The considered sources are separable according to the identifiability result, i.e. their respective correlation vectors ρ 1 and ρ 2 are linearly independent. The time lags (delays) implicitly involved are τ 0 , • • • , τ 9 (i.e., K = 10). The signal to noise ratio (SNR) is defined as SNR = -10 log 10 σ 2 n , where σ 2 n is the noise variance. We use SOBI algorithm [START_REF] Belouchrani | Blind source separation using second order statistics[END_REF] to obtain the decorrelated sources. The statistics in the curves are evaluated over 2000 Monte-Carlo runs. We present first in figure 1(a) a simulation example where we compare the rate of success of the testing procedure (success means that we decide the 2 sources have been separated) to detect the sources separability for different sample sizes versus the SNR in dB. The confidence interval is fixed to β = 99.7%. One can observe from this figure that the performance of the testing procedure degrades significantly for a small sample size due to the increased estimation errors and the fact that we use the asymptotic normality of considered statistics. In figure 1(b), we present a simulation example where we compare the rate of success according to the sample size for different confidence intervals. The SNR is set to 25dB. Clearly, the lower the confidence interval is, the higher is the rate of success of the testing procedure. Also, as observed in figure 1, the rate of success increases rapidly when increasing the sample size. In figure 2(a), we present a simulation example where we plot the rate of success versus the confidence interval β for different sample sizes and for SNR=25dB. This plot shows somehow the evolution of the rate of success w.r.t. the 'false alarm rate' and confirms the results of the two previous figures. The simulation example presented in figure 2(b) assumes two source signals with parameters a 1 = 0.5e j0.5 and a 2 = 0.5e j(0.5+δθ) , where δθ represents the spectral overlap of the two sources. The number of sensors is n = 5, the sample size is T = 1000 and the SNR=30dB. Figure 2(b) shows the rate of success versus the spectral shift δθ. As we can see, small values of δθ lead to high rates of 'nonseparability' decision by our testing procedure. 
Indeed, when δθ is close to zero the two vectors ρ_1 and ρ_2 are close to linear dependency. This means that the separation quality of the two sources is poor in that case, which explains the observed testing results. In the last figure, we assume there exist three sources. The first two sources are complex white Gaussian processes (hence ρ_1 = ρ_2) and the third one is an autoregressive signal with coefficient a_3 = 0.95e^{j0.5}. The plots in figure 2(c) compare the average values of the scalar products of ρ_i and ρ_j (i, j = 1, 2, 3) with their corresponding threshold values 1 - α(i,j) versus the SNR. The sample size is fixed to T = 500 and the number of sensors is n = 3. This example illustrates the situation where two of the sources (here sources 1 and 2) cannot be separated (this is confirmed by the testing result) while the third one is extracted correctly (the plots show clearly that R(1,3) < 1 - α(1,3) and R(2,3) < 1 - α(2,3)).

Conclusion

This paper introduces a new method for testing the second-order identifiability condition of the blind source separation problem. In simple words, this testing allows us to 'blindly' check, from the observations alone, whether the unknown sources have been correctly separated or not. To evaluate the statistics needed for the testing procedure we used the jackknife (resampling) technique. The simulation results illustrate and assess the effectiveness of this testing procedure, at least for moderate and large sample sizes.

Fig. 1. (a) Rate of success versus SNR for 2 autoregressive sources and 2 sensors and β = 99.7%: comparison of the performance of our testing algorithm for different sample sizes T; (b) Rate of success versus sample size T for 2 autoregressive sources and 2 sensors and SNR=25dB: comparison of the performance of our algorithm for different confidence intervals β.

Fig. 2. (a) Rate of false alarm versus confidence interval β for 2 autoregressive sources and 2 sensors and SNR=25dB: comparison of the performance of our algorithm for different sample sizes T; (b) Rate of success versus spectral shift δθ for 2 autoregressive sources and 5 sensors and SNR=25dB; (c) Average values of the |R(i, j)| and thresholds 1 - α(i,j) versus SNR for 3 sources and 3 sensors: 2 sources are complex white Gaussian processes and the third one is an autoregressive signal.

We implicitly assume here that ρ_i ≠ 0, otherwise the source signal could not be detected (and a fortiori could not be estimated) from the considered set of correlation matrices. This hypothesis is maintained in the sequel.
In that paper, only the case where τ_1 = 0 was considered.
Because of the inherent indetermination of the BSS problem, we assume without loss of generality that the exact and estimated sources are similarly scaled, i.e., ‖ρ_i‖ = 1.
More precisely, one can prove that the estimation error √T δ(ρ_i^T ρ_j) is asymptotically, i.e. for large sample size T, Gaussian with zero mean and finite variance.
01653831
en
[ "info.info-cr" ]
2024/03/05 22:32:18
2017
https://inria.hal.science/tel-01653831v2/file/2017ISAR0021_Giannakou_Anna.pdf
Acknowledgements

First and foremost I would like to thank my advisors for their outstanding guidance and support throughout the duration of this thesis. Christine, thank you for continuously reviewing my work, offering important insights and improvements. Your advice regarding my professional development after the PhD helped me make important decisions about my future. During the last three and a half years you have been a role model for me as a woman in research. Louis, words cannot express how grateful I am for your guidance and support all these years. You have taught me so many things and helped me achieve my goals at so many different levels. Thank you for showing me all these new directions and possibilities and for helping me grow as a researcher. Also, thank you for tirelessly listening to me complain about not having enough results :). Jean-Louis, I am grateful for your guidance throughout the whole process. Furthermore, I would like to thank the members of my committee and especially the reviewers Sara Bouchenak and Herve Debar for evaluating my work. Special thanks go out to all the members of the Myriads team for creating a warm and welcoming atmosphere at the office. David, Yunbo and Amir, thank you for all of our discussions and for being such wonderful people to interact with. Deb and Sean, thank you for hosting me at Lawrence Berkeley National Lab for my three-month internship and for allowing me to explore new research directions. This thesis would not have been possible without the endless love and support of my friends and family. Genc, I am so grateful that I have met you and I am proud to call you my buddinis. Thank you for listening to my complaints and offering helpful insights every time I invaded your office :). Bogdan and Mada, you have both been so wonderful and special to me. To Tsiort, Magnum, Ziag and Fotis, thank you for your honest and deep support throughout these years from thousands of miles away. I love and miss you guys so much. To Irene, you have been nothing less than exceptional, kolitoula. I cannot express how deeply grateful I am for your endless encouragement and advice all this time. To Iakovos, thank you for your stoic comments and for all of our arguments :). To Eri, thank you for all your support and your clear-headed guidance throughout these years. You are admirable and you have given me so much. To my parents, thank you for your love, patience and support that has allowed me to pursue my ambitions. Thank you for raising me as a strong independent person and for showing me the benefits of persistence. To my sister Maria, thank you for being there, always. Finally, the biggest thank you goes out to a single person

Context

Server virtualization enables on-demand allocation of computational resources (e.g. CPU and RAM) according to the pay-as-you-go model, a business model where users (referred to as tenants) are charged only for as much as they have used. One of the main cloud models that has gained significant attention over the past few years is the Infrastructure as a Service model, where compute, storage, and network resources are provided to tenants in the form of virtual machines (VMs) and virtual networks. Organizations outsource part of their information systems to virtual infrastructures (composed of VMs and virtual networks) hosted on the physical infrastructure of the cloud provider.
The terms that regulate the resource allocation are declared in a contract signed by the tenants and the cloud provider, the Service Level Agreement (SLA) [START_REF] Dib | SLA-Based Profit Optimization in Cloud Bursting PaaS[END_REF]. A few of the main benefits of the IaaS cloud include: flexibility in resource allocation, the illusion of unlimited capacity of computational and network resources, and automated administration of complex virtualized information systems. Although shifting to the cloud might provide significant cost and efficiency gains, security continues to remain one of the main concerns in the adoption of the cloud model [START_REF] Mather | Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance[END_REF]. Multi-tenancy, one of the key characteristics of a cloud infrastructure, creates the possibility of legitimate VMs being colocated with malicious, attacker-controlled VMs. Consequently, attacks towards cloud infrastructures may originate from inside as well as outside the cloud environment [START_REF]Top 12 cloud computing threats[END_REF]. A successful attack could allow attackers to gain access to and manipulate cloud-hosted data, including legitimate users' account credentials, or even gain complete control of the cloud infrastructure and turn it into a malicious entity [START_REF]Amazon Web Services as a DDoS Launch Hub[END_REF]. Although traditional security techniques such as traffic filtering or traffic inspection can provide a certain level of protection against attackers, they are not enough to tackle sophisticated threats that target virtual infrastructures. In order to provide a security solution for cloud environments, an automated, self-contained security architecture that integrates heterogeneous security and monitoring tools is required.

Motivation

In a typical IaaS cloud environment, the provider is responsible for the management and maintenance of the physical infrastructure while tenants are only responsible for managing their own virtualized information system. Tenants can make decisions regarding the VM lifecycle and deploy different types of applications on their provisioned VMs. Since deployed applications may have access to sensitive information or perform critical operations, tenants are concerned with the security monitoring of their virtualized infrastructure. These concerns can be expressed in the form of monitoring requirements against specific types of threats. Security monitoring solutions for cloud environments are typically managed by the cloud provider and are composed of heterogeneous tools for which manual configuration is required. In order to provide successful detection results, monitoring solutions need to take into account the profile of tenant-deployed applications as well as specific tenant security requirements. A cloud environment exhibits a very dynamic behavior, with changes that occur at different levels of the cloud infrastructure. Unfortunately, these changes affect the ability of a cloud security monitoring framework to successfully detect attacks and preserve the integrity of the cloud infrastructure [START_REF] Shirazi | Assessing the Impact of Intra-Cloud Live Migration on Anomaly Detection[END_REF]. Existing cloud security monitoring solutions fail to address these changes and to take the necessary decisions regarding the reconfiguration of the security devices. As a result, new entry points for malicious attackers are created, which may lead to a compromise of the whole cloud infrastructure.
To our knowledge, there still does not exist a security monitoring framework that is able to adapt its components based on different changes that occur in a cloud environment. The goal of this thesis is to design and implement a self-adaptable security monitoring framework that is able to react to dynamic events that occur in a cloud infrastructure and adapt its components in order to guarantee that an adequate level of security monitoring for tenant's virtual infrastructures is achieved. Objectives After presenting the context and motivation for this thesis we now define a set of objectives for a self-adaptable security monitoring framework. Self-Adaptation A self-adaptable security monitoring framework should be able to adapt its components based on different types of dynamic events that occur in a cloud infrastructure. The framework should perceive these events as sources of adaptation and take subsequent actions that affect its components. The adaptation process may alter the configuration of existing monitoring devices or instantiate new ones. The framework may decide to alter the computational resources available to a monitoring device (or a subset of monitoring devices) in order to maintain an adequate level of monitoring. Adaptation of the amount of computational resources should also be performed in order to free under-utilized resources. The framework should make adaptation decisions in order to guarantee that a balanced trade-off between security, performance and cost is maintained at any given moment. Adaptation actions can affect different components and the framework should be able to perform these actions in parallel. Tenant-Driven Customization Tenant requirements regarding specific monitoring cases should be taken into account from a self-adaptable security monitoring framework. The framework should be able to guarantee that adequate monitoring for specific tenant-requested types of threats will be provided. The monitoring request could refer to a tenant's whole virtual infrastructure or to a specific subset of VMs. The framework should provide the requested type of monitoring until 1.4. CONTRIBUTIONS 13 the tenant requests otherwise or the subset of VMs that the monitoring type is applied to no longer exists. Furthermore, the framework should take into account tenant-defined (through specific SLAs) thresholds that refer to the quality of the monitoring service or to the performance of specific types of monitoring devices. Security and Correctness Deploying a self-adaptable security monitoring framework should not add new vulnerabilities in the monitored virtual infrastructure or in the provider's infrastructure. The adaptation process and the input sources required should not create new entry points for an attacker. Furthermore, a self-adaptable security monitoring framework should be able to guarantee that an adequate level of monitoring is maintained throughout the adaptation process. The adaptation process should not intervene with the ability of the framework to correctly detect threats. Cost Minimization Deploying a self-adaptable security monitoring framework should not significantly impact the trade-off between security and cost for both tenants and the provider. On the tenant's side a self-adaptable security monitoring framework should not significantly impact performance of the applications that are hosted in the virtual infrastructure regardless of the application profile (compute-or network-intensive). 
On the provider's side, the ability to generate profit by leasing it's computational resources should not be significantly affected by the framework. Deploying such a framework should not impose a significant penalty in normal cloud operations (e.g. VM migration, creation, etc). Furthermore, the amount of computational resources dedicated to the self-adaptable framework's components should reflect an agreement between tenants and the provider for the distribution of computational resources. Contributions In order to achieve the objectives presented in the previous section, we design a selfadaptable security monitoring that is able to address limitations in existing monitoring frameworks and tackle dynamic events that occur in a cloud infrastructure. In this thesis we detail how we designed, implemented, and evaluated our contributions: a generic selfadaptable security monitoring framework and two instantiations with intrusion detection systems and firewalls. A Self-Adaptable Security Monitoring Framework Our first contribution is the design of a self-adaptable security monitoring framework that is able to alter the configuration of its components and adapt the amount of computational resources available to them depending on the type of dynamic event that occurs in a cloud infrastructure. Our framework achieves self-adaptation and tenant-driven customization while providing an adequate level of security monitoring through the adaptation process. SAIDS Our second contribution constitutes the first instantiation of our framework focusing on network-based intrusion detection systems (NIDS). NIDSs are key components of a security CHAPTER 1. INTRODUCTION monitoring infrastructure. SAIDS achieves the core framework's objectives while providing a scalable solution for serving parallel adaptation requests. Our solution is able to scale depending on the load of monitored traffic and the size of the virtual infrastructure. SAIDS maintains an adequate level of detection while minimizing the cost in terms of resource consumption and deployed application performance. AL-SAFE Our third contribution constitutes the second instantiation of our framework focusing on application-level firewalls. AL-SAFE uses virtual machine introspection in order to create a secure application-level firewall that operates outside the monitored VM but retains inside-the-VM visibility. The firewall's enforced rulesets are adapted based on dynamic events that occur in a virtual infrastructure. AL-SAFE offers a balanced trade-off between security, performance and cost. Thesis Outline This thesis is organized as follows: Chapter 2 reviews the state of the art while making important observations in the area of cloud computing security focusing on both industrial and academic solutions. We start by providing the context in which the contributions of this thesis were developed while describing fundamental concepts of autonomic and cloud computing. Security threats for traditional information systems as well as information systems outsourced in cloud infrastructures are presented. We then present the notion of traditional security monitoring along with key components and their functionality. Finally, security monitoring solutions for cloud environments are presented focusing on two different types of components, intrusion detection systems and firewalls. Chapter 3 presents the design of our self-adaptable security monitoring framework that is the core of this thesis. 
The objectives that this framework needs to address are discussed in detail. Fundamental components and their interaction are presented in detail along with a first high-level overview of the adaptation process. This chapter concludes with important implementation aspects of two generic components of our framework. Chapter 4 presents the first instantiation of our security monitoring framework which addresses network-based intrusion detection systems. This chapter details how the objectives set at the beginning are translated in design principles for a self-adaptable networkbased IDS. This first instantiation, named SAIDS, is able to adapt the configuration of a network-based IDS upon the occurrence of different types of dynamic events in the cloud infrastructure. After presenting SAIDS design and main components we describe the adaptation process and how our design choices do not add new security vulnerabilities to the cloud engine. Finally, we evaluate SAIDS performance, scalability and correctness in experimental scenarios that resemble production environments. Chapter 5 presents the second instantiation of the security monitoring framework, which focuses on a different type of security component, the firewall. This chapter maps the objectives of the security monitoring framework in the area of application-level firewalls proposing a new design for addressing inherent security vulnerabilities of this type of security device. This second instantiation, named AL-SAFE, brings self-adaptation to firewalls. We present in detail the adaptation process for addressing dynamic events and justify the correctness of our design choices. Finally, this chapter concludes with an ex-perimental evaluation of our prototype that explores the trade-off between performance, cost and security both from the provider and the tenant's perspectives. Chapter 6 concludes this thesis with a final analysis of the contributions presented and the objectives that were set in the beginning. We demonstrate how our framework's design and the two subsequent instantiations satisfy the objectives presented in this chapter. We then present perspectives to improve performance aspects of our two prototypes, SAIDS and AL-SAFE, along with ideas to expand this work organised in short, mid and long terms goals. Chapter 2 State of the Art In this thesis we propose a design for a self-adaptable security monitoring framework for IaaS cloud environments. In order to provide the necessary background for our work, we present the state of the art around several concepts that are involved in our design. We first present the basic notions around autonomic computing along with its main characteristics. Second we give a definition of a cloud environment and an detailed description of dynamic events that occur in a cloud infrastructure. Third we discuss server and network virtualization. Furthermore we provide a description of security threats against traditional information systems and cloud environments. Concepts around security monitoring and security monitoring solutions tailored for cloud environments follow. Autonomic Computing This section presents a brief introduction to autonomic computing. We start with a short historical background while we introduce the basic self-management properties of every autonomous system. Finally, we describe the role of the adaptation manager, a core component that is responsible for the enforcement and realisation of the self-management properties. What is Autonomic Computing? 
The notion of autonomic computing was first introduced by IBM in 2001 [START_REF] Kephart | The vision of autonomic computing[END_REF] in order to describe a system that is able to manage itself based on a set of high-level objectives defined by administrators. Autonomic computing comes as an answer to the increasing complexity of today's large-scale distributed systems. As a result of this complexity, the ability of system administrators to deploy, configure and maintain such systems is increasingly strained. The term autonomic computing carries a biological connotation, as it is inspired by the human nervous system and its ability to autonomously control and adapt the human body to its environment without requiring any conscious effort. For example, our nervous system automatically regulates our body temperature and heartbeat rate. Likewise, an autonomic system is able to maintain and adjust its components to external conditions.

Characteristics

According to [START_REF] Kephart | The vision of autonomic computing[END_REF] the cornerstone of each autonomic system is self-management. The system is able to seamlessly monitor its own use and upgrade its components when it deems it necessary, requiring no human intervention. The authors identify four main aspects of self-management.

Self-configuration

An autonomic system is able to configure its components automatically in accordance with a set of high-level objectives that specify the desired outcome. Seamless integration of new components demands that the system adapts to their presence, similarly to how the human body adapts to the creation of new cells. When a new component is introduced, two steps are necessary:
1. Acquiring the necessary knowledge about the system's composition and configuration.
2. Registering itself with the system so that other components can take advantage of its capabilities and modify their behavior accordingly.

Self-optimization

One of the main obstacles when deploying complex middleware (e.g. database systems) is the plethora of tunable performance parameters. To this end, self-optimization refers to the ability of the system to continuously monitor and configure its parameters, learn from past experience and take decisions in order to achieve certain high-level objectives.

Self-healing

Dealing with component failures in large-scale computer systems often requires devoting a substantial amount of time to debugging and identifying the root cause of a failure. Self-healing refers to the ability of the system to detect, diagnose and repair problems that arise due to software or hardware failures. In the most straightforward example, an autonomous system could detect a failure due to a software bug, download an appropriate patch and then apply it. Another example consists of pro-active measures against externally-caused failures (a redundant power generator in the event of a power outage).

Self-protection

Although dedicated technologies that guarantee secure data transfer and network communication (e.g. firewalls, intrusion detection systems) exist, maintenance and configuration of such devices continue to be a demanding, error-prone task. Self-protection refers to the ability of the system to defend itself against malicious activities that include external attacks or internal failures.

The Role of the Manager

In every autonomic system the Autonomic Managers (AMs) are software elements responsible for the enforcement of the previously described properties.
AMs are responsible for managing hardware or software components that are known as Managed Resources (MRs). An AM can be embedded in an MR or run externally. An AM is able to collect the details it needs from the system, analyze them in order to determine if a change is required, create a sequence of actions (a plan) that details the necessary changes and, finally, apply those actions. This sequence of automated actions is known as the MAPE [START_REF] Huebscher | A Survey of Autonomic Computing: Degrees, Models, and Applications[END_REF] control loop. A control loop has four distinct components that continuously share information:
• Monitor function: collects, aggregates and filters all information collected from an MR. This information may refer to topology, metrics or configuration properties that can either vary continuously through time or be static.
• Analyse function: provides the ability to learn about the environment and determines whether a change is necessary, for example when a policy is being violated.
• Plan function: details the steps that are required in order to achieve goals and objectives according to defined policies. Once the appropriate plan is generated, it is passed to the execute function.
• Execute function: schedules and performs the necessary changes to the system.
A representation of the MAPE loop is shown in Figure 2.1.

Cloud Computing

This section briefly introduces the basic notions behind cloud computing, a computing paradigm that extends the ideas of autonomic computing and pairs them with a business model that allows users to provision resources depending on their demands. First, the main principles behind cloud computing are outlined. A description of the cloud's main characteristics and the available service models follows.

What is Cloud Computing?

Cloud computing emerged as the new paradigm which shifts the location of a computing infrastructure to the network, aiming to reduce hardware and software management costs [START_REF] Armbrust | Above the Clouds: A Berkeley View of Cloud Computing[END_REF]. The entity that provides users with on-demand resources is known as the service provider. Many definitions have emerged over the years; however, no standard definition exists to date. In this thesis we rely on the NIST definition presented in [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF]:

Definition 1 Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

In order to regulate the terms of providing access to cloud resources, the concept of a Service Level Agreement between the provider and the customers was introduced [START_REF] Dib | SLA-Based Profit Optimization in Cloud Bursting PaaS[END_REF]. In the context of cloud computing, customers are referred to as tenants.

Definition 2 A Service Level Agreement (SLA) is a contract that specifies the service guarantees expected by the tenants, the payment to the provider, and potential penalties when the guarantees are not met.

Characteristics

According to [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF] the main characteristics of cloud computing are: broad network access, on-demand self-service, resource pooling, elasticity and measured service.
• Broad network access: Cloud services are usually available through the Internet or a local area network and thus can be accessed from any device with access to the network (e.g. smartphones, tablets, laptops, etc). • On-demand self-service: Tenants can provision resources automatically without the need for a personal negotiation of the terms with the cloud provider. Providers offer dedicated APIs in order to serve this purpose. • Resource pooling: Computing resources can serve multiple tenants simultaneously with different physical and virtual demands adopting a multi-tenant model. In this model, tenants are oblivious about the exact location in which the provisioned resources are located. • Elasticity: Tenants can automatically provision or release new resources depending on computational demand. Theoretically, the resources that a tenant can provision are unlimited. • Measured service: Tenants and the provider can monitor and control resource usage through dedicated mechanisms. The same mechanisms can be used by the tenants in order to check whether the terms defined in the SLA are respected. Service Models According to [START_REF] Liu | NIST Cloud Computing Reference Architecture: Recommendations of the National Institute of Standards and Technology[END_REF] the services that are available in cloud computing are categorized in three models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The contributions presented in this thesis were developed on a cloud infrastructure using the IaaS service model. IaaS IaaS offers tenants the capability to provision virtual resources (e.g. processing in the form of virtual machines, storage, networks) without worrying about the underlying physical infrastructure. Although the IaaS cloud model essentially offers the provisioning of a node-based infrastructure, the authors in [START_REF] Kächele | Beyond IaaS and PaaS: An Extended Cloud Taxonomy for Computation, Storage and Networking[END_REF] define two different layers of abstraction in the IaaS cloud model: Hardware as a Service (HWaaS) and Operating System as a Service (OSaaS). In HWaaS the tenant is free to install arbitrary software, including the OS, while he is responsible for managing the whole software stack. In HWaaS the provider is only accountable for providing the hardware resources. In OSaaS the tenants are offered a fully managed OS including the underlying hardware resources (essentially the whole environment is perceived as a single compute node). Tenants can deploy their application through the interplay of OS processes. The contributions presented in this thesis target both HWaaS and OSaaS IaaS clouds. Known examples of IaaS HWaaS public clouds include: Amazon Elastic Cloud (EC2) [START_REF]Amazon Web Services[END_REF], Google Compute Engine [START_REF]Google Compute Engine[END_REF] and OVH public cloud [START_REF]OVH Public CLoud[END_REF]. VMware vCloud [START_REF]VMware vCloud Suite[END_REF] is a known example of IaaS HWaaS private cloud. Furthermore, a number of open source cloud management systems have been developed over the course of the last few years in order to enable the creation of private clouds (described later in Section 2. 2.4). 
Prominent examples in this category are: Eucalyptus [START_REF]HPE Helion Eucalyptus[END_REF], Nimbus [START_REF]Nimbus Infrastructure[END_REF], OpenNebula [START_REF] Miloji | OpenNebula: A Cloud Management Tool[END_REF] and OpenStack [START_REF] Sefraoui | Article: OpenStack: Toward an Opensource Solution for Cloud Computing[END_REF].

PaaS

PaaS offers tenants the capability to deploy their own applications as long as they were created using programming languages and libraries supported by the provider. This model allows tenants to focus on application development instead of other time-consuming tasks such as managing, deploying and scaling their run-time environment depending on computational load. Major PaaS systems include Google App Engine [START_REF]Google App Engine[END_REF], Microsoft Azure [START_REF]Microsoft Azure[END_REF] and Amazon Web Services [START_REF]Amazon Web Services[END_REF], which are suitable for developing and deploying web applications.

SaaS

SaaS offers tenants the capability of using the provider's cloud-hosted applications through dedicated APIs. The applications are managed and configured by the provider, although tenants might have access to limited user-related configuration settings. Prominent examples in this category include: Gmail [START_REF]Google Apps[END_REF], Google Calendar [START_REF]Google Apps[END_REF] and iCloud [START_REF]iCLoud[END_REF].

Main IaaS Systems

A lot of work in the past was focused on designing and implementing IaaS cloud systems. Tenants are provided with virtualized resources (in the form of virtual machines (VMs) or containers) and a management system that allows them to manage their resources. Virtualization technologies like KVM [START_REF] Kivity | KVM: the Linux Virtual Machine Monitor[END_REF], Xen [START_REF] Barham | Xen and the Art of Virtualization[END_REF] and VMware ESX/ESXi [START_REF] Waldspurger | Memory Resource Management in VMware ESX Server[END_REF] are the building blocks that facilitate server virtualization and efficient resource utilisation. Lately, a trend towards containerization of IaaS cloud systems (e.g. Google Kubernetes [START_REF]Production-Grade Container Orchestration[END_REF]) has been observed. As stated in [START_REF] Moreno-Vozmediano | IaaS Cloud Architecture: From Virtualized Datacenters to Federated Cloud Infrastructures[END_REF], the core of an IaaS cloud management system is the so-called cloud OS. The cloud OS is responsible for managing the provisioning of the virtual resources according to the needs of the tenant services that are hosted in the cloud. As an example of a cloud OS, we present OpenStack [START_REF]OpenStack[END_REF], a mainstream IaaS management system that we used in order to develop our prototype. OpenStack is an open source cloud management system that allows tenants to provision resources within specific limits set by the cloud administrator. Tenants can view, create and manage their resources either through a dedicated web graphical interface (Horizon) or through command line clients that interact with each one of OpenStack's services. OpenStack operates in a fully centralized manner, with one node acting as a controller. The controller accepts user VM lifecycle commands and delegates them to a pool of compute nodes. Upon receiving a command from the cloud controller, a compute node enforces it by interacting with the hypervisor.
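To make the controller/compute-node interaction more concrete, the following is a highly simplified, illustrative sketch of the delegation logic described above (compute nodes reporting their free resources, and the controller placing a new VM on a node with enough capacity). It is not OpenStack code; all class and method names are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str
    free_vcpus: int
    free_ram_mb: int

    def report(self):
        # Compute nodes periodically report their available resources to the controller.
        return {"name": self.name, "vcpus": self.free_vcpus, "ram_mb": self.free_ram_mb}

    def spawn_vm(self, vcpus, ram_mb):
        # In a real system this step would talk to the local hypervisor.
        self.free_vcpus -= vcpus
        self.free_ram_mb -= ram_mb
        print(f"{self.name}: started VM ({vcpus} vCPU, {ram_mb} MB)")

class Controller:
    """Accepts VM lifecycle commands and delegates them to a pool of compute nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def create_vm(self, vcpus, ram_mb):
        # Naive placement: first node whose last report shows enough free capacity.
        for node in self.nodes:
            r = node.report()
            if r["vcpus"] >= vcpus and r["ram_mb"] >= ram_mb:
                node.spawn_vm(vcpus, ram_mb)
                return node.name
        raise RuntimeError("no compute node has enough free resources")

controller = Controller([ComputeNode("compute-1", 4, 8192), ComputeNode("compute-2", 16, 32768)])
controller.create_vm(vcpus=8, ram_mb=16384)   # placed on compute-2
```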
The controller node hosts a plethora of the main services delivered by OpenStack, such as: Nova (manager of the VM lifecycle), Neutron (network connectivity manager), Glance (VM disk image manager) and Keystone (mapping of tenants to the services that they can access). Nova and Neutron are also installed on each compute node in order to provide VM interconnectivity and enforce user decisions regarding the VM lifecycle. Compute nodes periodically report back to the cloud controller their available resources (processing, memory, storage) and the state of the deployed VMs (e.g. network connectivity, lifecycle events). OpenStack offers a limited set of integration tools for other public APIs (namely Amazon EC2 and Google Compute Engine). A representation of OpenStack's modular architecture can be found in Figure 2.2.

Deployment Models

There are four distinguishable cloud deployment models: Private, Public, Community and Hybrid clouds.
• Private cloud: The cloud infrastructure is deployed on compute, storage and network systems that belong to a single organization. A private cloud can be managed either by the organization or by a third-party entity, and its usage does not exceed the scope of the organization.
• Public cloud: The cloud infrastructure is available for provisioning for everyone on the Internet. It is typically owned and managed by a cloud provider that allows customers (tenants) to request resources without having to deal with the burden of managing them. As a result, tenants are only charged for what they use, in accordance with the pay-as-you-go model.
• Community cloud: The cloud infrastructure is dedicated to a specific community or to organizations that share a set of policies (i.e. security concerns, mission, and compliance requirements). The community cloud comes as a solution for distributing costs between different organizations, in contrast to each organization maintaining its own private cloud (e.g. scientists from different organizations that work on the same project can use the same community cloud). In contrast to public clouds, access to community clouds is restricted to members of the community or organization. They can be managed by one or several organizations of the community. Community clouds can be perceived as a specific category of private clouds.
• Hybrid cloud: The cloud infrastructure is a combination of two or more separate cloud infrastructures (private, public or community) that remain individual entities. The entities are bound together by a standardized agreement that allows data and application sharing.
In this thesis we developed a prototype considering a private cloud, although the proposed framework can be integrated in both public and community clouds as well.

Dynamic Events in IaaS Clouds and Cloud Adaptation

Cloud environments are based on an elastic, highly scalable model that allows tenants to provision resources (e.g. VMs) with unprecedented ease. Furthermore, tenants can choose to deploy different services inside their provisioned VMs and expose them to other users through the Internet, generating network traffic towards and from the cloud infrastructure. As a result, cloud environments become very dynamic, with frequent changes occurring at different levels of the infrastructure. In this section we categorize the observed changes into three categories: service-related, topology-related and traffic-related events.
Service-related Events

Service-related dynamic events include all changes in the applications deployed in the VMs of a single tenant. These changes can refer to the addition (i.e. installation) of a new application or the removal of an existing one inside an already deployed VM. A reconfiguration of an existing application resulting in additional features is also considered a service-related dynamic event.

Topology-related Events

Topology-related events include all changes in the topology of a tenant's virtual infrastructure. The three main commands in a VM life cycle that constitute topology-related dynamic events are: VM creation, VM deletion and VM migration (seamlessly moving a VM between two physical nodes over a local or wide area network). VM migration can be interpreted as a combination of creation and deletion since, when a VM is migrated between two nodes, a new copy of the VM is created on the destination node while the old copy of the VM is deleted from the source node. Public cloud providers offer the possibility of auto-scaling to their tenants in order to automate the management of their application's computational load. Scaling decisions generate topology-related changes either by adding new virtual machines (scaling out) or by deleting existing ones when the application's load decreases (scaling in). Network reconfiguration events (e.g. changing a subnet's address range, moving VMs between different subnets or creating/deleting subnets) are also considered topology-related changes.

Traffic-related Events

Tenants often deploy network-oriented applications in their cloud infrastructure. Depending on the load of the deployed applications, different levels of network traffic are generated towards and from the virtual infrastructure. Any change in the incoming or outgoing traffic load of the tenant's virtual infrastructure is considered a traffic-related dynamic event. Public cloud providers offer load-balancing solutions in order to handle the dynamic network load and evenly distribute it to the available resources. Load-balancing decisions can also lead to topology-related changes when new VMs are started or shut down.

Summary

In this section, we described the three main categories of dynamic events that occur in a cloud infrastructure. The security monitoring framework designed in this thesis addresses the need for reconfiguration of monitoring devices in all three event categories. We now continue with a description of virtualization technologies as the building block that enables cloud computing.

Virtualization

This section gives a brief overview of infrastructure virtualization. Infrastructure virtualization can be decomposed into server virtualization and network virtualization. We first present the main server virtualization components, followed by the four dominant server virtualization techniques. Finally, this section concludes with a description of network virtualization. The first ones to define the notion of server virtualization were Popek and Goldberg in their paper "Formal requirements for virtualizable third generation architectures" [START_REF] Popek | Formal Requirements for Virtualizable Third Generation Architectures[END_REF]. According to [START_REF] Popek | Formal Requirements for Virtualizable Third Generation Architectures[END_REF], virtualization is a mechanism permitting the creation of Virtual Machines, which are essentially efficient, isolated duplicates of real machines.
Server Virtualization Components In an IaaS infrastructure there are three main architectural layers: physical, hypervisor and virtual machine. We briefly describe each one: • Physical : The physical machine provides the computational resources that are divided between virtual machines (VMs). Computational resources include CPUs, memory and devices (e.g. disk, NIC). • Hypervisor : Originally known as the Virtual Machine Monitor, this component is responsible for mediating the sharing of physical resources (e.g. CPU, memory) between different co-located VMs that operate concurrently. The hypervisor is responsible for ensuring isolation between different VMs providing a dedicated environment for each one without impacting the others. • Virtual Machine: A VM or guest is the workload running on top of the hypervisor. The VM is responsible for executing user applications and virtual appliances. Each VM is under the illusion that it is an autonomous unit with its own dedicated physical resources. The VM is oblivious about the existence of multiple other consolidated VMs on top of the hypervisor of the same physical machine. The security monitoring framework designed in this thesis targets the virtual machine layer. For extracting key information regarding the services hosted inside the monitored VMs the hypervisor is leveraged. Server Virtualization There are different mechanisms that enable the creation of virtual machines each one providing different features. Here we detail the four main ones: emulation, full virtualization, paravirtualization and OS-level virtualization. The contributions presented in this thesis apply to full virtualization and paravirtualization. [START_REF]Bochs IA-32 Emulator Project[END_REF] and Qemu [START_REF]QEMU Open Source Processor Emulator[END_REF], which support a wide number of guest architectures (x86, x86 64, MIPS, ARM, SPARC). Full Virtualization Full system-wide virtualization delivers a virtual machine with dedicated virtual devices, virtual processors and virtual memory. In full virtualization the hypervisor is responsible for providing isolation between VMs as well as multiplexing on the hardware resources. This technique enables running VMs on top of physical hosts without the need to perform any alterations on the VM or the host OS kernel. In [START_REF] Popek | Formal Requirements for Virtualizable Third Generation Architectures[END_REF] the authors formalize the full-virtualization challenge as defining a virtual machine monitor satisfying the following properties: • Equivalence: The VM should be indistinguishable from the underlying hardware. • Resource control: The VM should be in complete control of any virtualized resources. • Efficiency: Most VM instructions should be executed directly on the underlying CPU without involving the hypervisor. The two methods that make full virtualization possible are: binary translation and hardware acceleration. We discuss both of them. Binary translation: This technique uses the native OS I/O device support while offering close to native CPU performance by executing as many CPU instructions as possible on bare hardware [START_REF] Adams | A Comparison of Software and Hardware Techniques for x86 Virtualization[END_REF]. When installed, a driver is loaded in the host OS kernel in order to allow it's user space component to gain access to the physical hardware when required. 
The same driver is responsible for improving network performance for the virtualized guest.Non-virtualized instructions are detected using binary translation and are replaced with new instructions that have the desired effect on the virtualized hardware. The main argument behind virtualization through binary translation is that no modifications of either the guest or the host OS are required. Unfortunately, a non-negligible performance penalty is applied due to the need of performing binary translation and emulation of privileged CPU instructions. Full virtualization with binary translation can be interpreted as a hybrid technique between emulation and virtualization. In contrast to emulation where each CPU instruction is emulated, full virtualization with binary translation allows for some CPU instructions to run directly on the hosts CPU. The most popular fully virtualized solutions using binary translation are: Qemu [START_REF]QEMU Open Source Processor Emulator[END_REF], VirtualBox [START_REF]VirtualBox[END_REF], VMware Fusion and Workstation [START_REF]VMware Fusion[END_REF] [START_REF]VMware Workstation[END_REF]. Hardware acceleration: In order to cope with the performance overhead introduced by binary translation and enable virtualization of physical hardware, Intel (resp. AMD) came up with the VT-x technology [START_REF] Uhlig | Intel virtualization technology[END_REF] (resp. AMD-V). With VT-x a new root mode of operation is allowed in the CPU. Two new transitions are enabled: from the VMM to a guest a root to non-root transition (called VMEntry) and from the guest to VMM a non-root to root transition (called VMExit). Intel uses a new data structure to store and manage information regarding when these transitions should be triggered, the virtual machine control structure (VMCS). Typically a VMExit occurs when the VM attempts to run a subset of privileged instructions. The VMCS data structure stores all necessary information (instruction name, exit reason). This information is later used by the VMM for executing the privileged instruction. The most popular solutions using hardware assisted virtualization are: KVM [START_REF] Kivity | KVM: the Linux Virtual Machine Monitor[END_REF], VMware ESXi [START_REF] Waldspurger | Memory Resource Management in VMware ESX Server[END_REF], Microsoft Hyper-V [START_REF]Microsoft Hyper-V[END_REF] and Xen Hardware Virtual Machine [START_REF]Xen Hardware Virtual Machine[END_REF]. Paravirtualization In contrast to full virtualization which advocates for no modifications in the guest OS, paravirtualization requires the guest OS kernel to be modified in order to replace non-virtualized instructions with hypercalls that communicate directly with the hypervisor. The hypervisor is responsible for exporting hypercall interfaces for other sensitive kernel operations such as memory management and interrupt handling. Xen Project [START_REF] Barham | Xen and the Art of Virtualization[END_REF] has been the most prominent paravirualization solution. In Xen the processor and memory are virtualised using a modified Linux kernel. The modified kernel is actually an administrative VM (called dom0) responsible for providing isolation between VMs, handling network, I/O and memory management for the guest VMs (domU). Dom0 is also in control of the guest VMs lifecycle and bares the responsibility for executing privileged instructions on behalf of the guest OS. The later is done by issuing hypercalls. 
Dom0 traps the latter and executes them either by translating them to native hardware instructions or using emulation. Xen operates based on a split driver model where the actual device drivers, called backend drivers, are located inside Dom0 and each DomU implements an emulated device, called frontend driver. Every time a DomU issues a call to a driver the emulated part transfers the call to the actual driver in Dom0 -hence the two drivers complementary operate as one. Although Xen is a promising solution for near native performance, its application is limited to open source OSes like Linux or proprietary solutions which offer a customized Xen-compatible version. Hypervisor Practices Emulation, full virtualization and paravirtualization can be combined. Typically, devices are fully emulated (for maintaining the use of legacy drivers) or paravirtulized (for efficient multiplexing access on these devices from different VMs) while the CPU is fully virtualized. Modern hypervisors that adopt this technique are: KVM [START_REF] Kivity | KVM: the Linux Virtual Machine Monitor[END_REF], Xen [START_REF] Barham | Xen and the Art of Virtualization[END_REF] and VMware Workstation [START_REF]VMware Workstation[END_REF]. OS-level Virtualization Another solution, known as lightweight or OS-level virtualization [START_REF] Bernstein | Containers and Cloud: From LXC to Docker to Kubernetes[END_REF], allows the OS kernel to perform virtualization at the system call interface, and create isolated environments that share the same kernel. These flexible, user-oriented isolated environments are known as containers. Containers have their own resources (e.g. file system, network connectivity, firewall, users, applications) that are managed by the shared OS kernel (responsible for providing isolation). Since they all share the same kernel the performance overhead is minimal to none. Furthermore, a container can be migrated in the same way as a VM. Unfortunately, the main issue behind OS-level virtualization is that all containers in a single physical machine are limited to the kernel of the host OS. This limits the number of OSes to only the ones supported by the host's kernel. LXC [START_REF]LXC linux containers[END_REF] and Docker [START_REF]Docker containers[END_REF] are some of the most prominent solutions in this category. Network Virtualization and Network Management in IaaS Clouds Network virtualization is one of the key aspects in an IaaS cloud environment. Assigning IP addresses to VMs, communication between VMs that belong to the same or different tenants and finally communication between VMs and the outside world are some of the issues that need to be addressed from the network virtualization component of the IaaS cloud management system. In this section we first present the mechanisms that materialize network virtualization and we continue with a discussion about network management in IaaS clouds focusing on OpenStack. Network Virtualization There are different solutions that enable network virtualization. Multi-protocol Label Switching [START_REF] Rosen | Multiprotocol Label Switching Architecture[END_REF] uses a "label" appended to a packet in order to transport data instead of using addresses. MPLS allows switches and other network devices to route packets based on a simplified label (as opposed to a long IP address). Hard VLANs allow a single physical network to be broken to multiple segments. 
By grouping hosts that are likely to communicate with each other into the same VLAN, one can reduce the amount of traffic that needs to be routed. Flat networking relies on the Ethernet adapter of each compute node (which is configured as a bridge) in order to communicate with other hosts. With VLAN tagging, each packet belonging to a specific VLAN is assigned the same VLAN ID, while with GRE encapsulation traffic is encapsulated with a unique tunnel ID per network (the tunnel ID is used in order to differentiate between networks). VLAN tagging and GRE encapsulation both require a virtual switch in order to perform the tagging (respectively, the encapsulation), while flat networking does not require a virtual switch. However, these solutions lack a single unifying abstraction that can be leveraged to configure the network in a global manner. A solution to this impediment that provides dynamic, centrally-controlled network management is software defined networking (SDN) [START_REF] Casado | Ethane: Taking Control of the Enterprise[END_REF]. In this section we mainly focus on SDN. Software defined networking [START_REF] Casado | Ethane: Taking Control of the Enterprise[END_REF] emerged as a paradigm in an effort to break the vertical integration of the control and the data plane in a network. It separates a network's control logic from the underlying physical routers and switches, which are now simple forwarding devices. The control logic is implemented in a centralized controller, allowing for simpler policy enforcement and network reconfiguration. Although SDNs are logically centralized, the need for a scalable, reliable solution that guarantees adequate performance does not allow for a physically centralized approach. The separation between the control and the data plane is made feasible by creating a strictly defined programmable interface (API) between the switches and the SDN controller. The most notable example of such an API is OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF]. Each OpenFlow switch stores flow tables of packet-handling rules. Each rule matches a subset of the traffic and performs certain actions (dropping, forwarding, modifying) on the matched subset. The rules are installed on the switches by the controller and, depending on their content, a switch can behave like a router, switch, firewall or, in general, a middlebox (a toy illustration of such match/action processing is sketched below). A switch can communicate with the controller through a secure channel using the OpenFlow protocol, which defines the set of messages that can be exchanged between these two entities. Traffic-handling rules can thus be installed and updated dynamically by the controller. Although OpenFlow is the most widely accepted and deployed API for SDNs, there are several other solutions such as ForCES [START_REF] Doria | Forwarding and Control Element Separation (ForCES) Protocol Specification[END_REF] and POF [START_REF] Song | Protocol-oblivious Forwarding: Unleash the Power of SDN Through a Future-proof Forwarding Plane[END_REF]. The controller provides a programmatic interface to the network that can be used to execute management tasks as well as to offer new functionalities. It essentially enables the SDN model to be applied to a wide range of hardware devices (e.g. wireless, wired). A wide range of controllers exist, such as Nox [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], OpenDaylight [START_REF]OpenDaylight: Open Source SDN Platform[END_REF] and Floodlight [START_REF]Project Floodlight[END_REF].
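The following is a small, self-contained Python illustration of the match/action idea described above. It is not based on any OpenFlow library and does not reproduce the actual OpenFlow match structure; the field names, priorities and actions are simplified stand-ins chosen for readability.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowRule:
    priority: int
    nw_src: Optional[str] = None      # None acts as a wildcard
    nw_dst: Optional[str] = None
    tp_dst: Optional[int] = None
    actions: list = field(default_factory=list)   # e.g. ["output:2"], ["drop"]

    def matches(self, pkt: dict) -> bool:
        return all(
            want is None or pkt.get(key) == want
            for key, want in [("nw_src", self.nw_src), ("nw_dst", self.nw_dst), ("tp_dst", self.tp_dst)]
        )

class FlowTable:
    """A switch forwards according to the highest-priority matching rule;
    rules are installed by the (logically centralized) controller."""
    def __init__(self):
        self.rules = []

    def install(self, rule: FlowRule):             # controller -> switch
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def apply(self, pkt: dict) -> list:
        for rule in self.rules:
            if rule.matches(pkt):
                return rule.actions
        return ["send_to_controller"]              # table miss: ask the controller

table = FlowTable()
table.install(FlowRule(priority=10, nw_dst="10.0.0.5", tp_dst=22, actions=["drop"]))   # firewall-like
table.install(FlowRule(priority=1,  nw_dst="10.0.0.5", actions=["output:2"]))          # switch-like
print(table.apply({"nw_src": "10.0.0.9", "nw_dst": "10.0.0.5", "tp_dst": 22}))  # ['drop']
```

Depending on which rules the controller installs, the same table makes the device behave as a switch, a firewall or another middlebox, which is the point made in the text above.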
Making network virtualization a consolidated technology requires multiple logical networks to be able to share the same OpenFlow networking infrastructure. FlowVisor [START_REF] Sherwood | FlowVisor: A Network Virtualization Layer[END_REF] was one of the early solutions towards that direction. It enables slicing a data plane based on off-the-shelf OpenFlow compatible switches, making the coexistence of multiple networks possible. The authors propose five slicing dimensions: bandwidth, topology, traffic, forwarding tables and device CPU. Each slice can have its own controller allowing multiple controllers to inhabit the same physical infrastructure. Each controller can only operate on its own slice and gets its own flow tables in the switches. FlowN [START_REF] Drutskoy | Scalable Network Virtualization in Software-Defined Networks[END_REF] offers a solution analogous to container virtualization (i.e. a lightweight virtualization approach). In contrast with FlowVisor [START_REF] Sherwood | FlowVisor: A Network Virtualization Layer[END_REF], it deploys a unique shared controller platform that can be used to manage multiple domains in a cloud environment. A single shared controller platform enables management of different network domains. It offers complete control over a virtual network to each tenant and it allows them to develop any application on top of the shared controller. Network virtualization platform (NVP) from VMware (as part of the NSX [START_REF]VMware NSX[END_REF] product) provides the necessary abstractions for the creation of independent networks (each one with different service model and topology). No knowledge about the underlying network topology or state of the forwarding devices is required as tenants simply provide their desired network configuration (e.g. addressing architecture). NVP is responsible for translating tenant requirements to low-level instruction sets that are later on installed on the forwarding devices. A cluster of SDN controllers is used in order to modify the flow tables on the switches. NVP was designed to address challenges in large-scale multitenant environments that are not supported by the previously described solutions (e.g. migrating an information system to the cloud without the need of modifying the network configuration). A similar solution is SDN VE [START_REF] Li | Software defined environments: An introduction[END_REF] from IBM based on OpenDaylight. Network Management in Iaas Clouds Network virtualization delivers compute-related options (create, delete) to network management. Network objects (networks, subnets, ports, routers, etc) can be created, deleted and reconfigured programmatically without the need of reconfiguring the underlying hardware infrastructure. The underlying hardware infrastructure is treated as a pool of transport resources that can be consumed on demand. Tenants can create private networks (i.e. tenant networks) and choose their own IP address scheme, which can overlap with IP addresses chosen by other tenants. Depending on the type of the tenant network (flat, VLAN, GRE) different communication capabilities are offered to the instances attached to these networks. The networking component of an IaaS cloud management system is responsible for mapping tenant-defined network concepts to existing physical networks in a data center. 
Essentially, the network component performs the following functions: assigning IP addresses to VMs, facilitating communication between VMs that belong to the same or to different tenants and, finally, providing VMs with outside-world connectivity. In OpenStack, Neutron is responsible for managing different tenant networks and offering a full set of networking services (routing, switching, load-balancing, etc) to provisioned VMs. Neutron is composed of agents (e.g. DHCP agent, L3 routing agent, etc) that provide different types of networking services to provisioned VMs. Neutron creates three different networks in a standard cloud deployment: 1. Management network: used for communication between the OpenStack components. This network is only reachable from within the datacenter. 2. Tenant networks: used for communication between VMs in the cloud. The configuration of these networks depends on the networking choices made by the different tenants. 3. External network: used to provide Internet connectivity to VMs hosted in the cloud. On each compute node a virtual bridge is created by a dedicated Neutron plugin (called the ML2 plugin) which is locally installed on each node. VMs are connected to networks through virtual ports on the ML2-created bridge. The ML2 plugin is also responsible for segregating network traffic between VMs on a per-tenant basis. This can be achieved either through VLAN tagging (all VMs that belong to the same network are assigned the same tag) or GRE encapsulation. Security Threats In this section we detail some of the known attacks against information systems and cloud environments. Security Threats in Information Systems Although one of the most common ways of executing cyber attacks is through the network (i.e. either the Internet or a local area network), attackers often target different areas of an information system. Here we list the most common threats depending on their target level. Before presenting each threat category in detail, we present a high-level overview of the vulnerability classes that attackers can exploit. In general, missing validation of inputs in an application can create an entry point for attacks (listed below). Furthermore, lack of access control (i.e. through authentication mechanisms) can allow an attacker to gain unauthorized privileged access. Application Level Application-level threats are abilities of an attacker to exploit vulnerabilities in the software of one or more applications running in an information system. One of the most common application-level attacks is SQL injection [START_REF]SQL Injection[END_REF] against Database Management Systems (DBMS). An SQL injection attack occurs when a malicious entity on the client side manages to insert an SQL query via input data to the application. This is usually possible due to a lack of input validation. The impact of the injection may vary depending on the skills and imagination of the attacker. Usually, through an SQL exploit the attacker can gain access to sensitive data inside the database, modify them (insert, delete or update) or even retrieve the contents of a file present on the system. He can also shut down the DBMS by issuing administrative commands and sometimes even execute commands outside the DBMS. Another type of injection attack is cross-site scripting (XSS) [START_REF]Cross Site Scripting[END_REF], in which the attacker manages to insert malicious code into a trusted website. Cross-site scripting exploits the absence of validation of user input.
The malicious code could be in the form of a JavaScript segment or any other code that the browser can execute. When a different user accesses this website she will execute the script thinking that it comes from a trusted source, giving the attacker access to cookies, session tokens or other sensitive information retrieved by the browser on behalf of the infected website. In a more severe scenario the attacker might even redirect the end user to web content under his control. An XSS attack can either be stored (the malicious script permanently resides on the target server) or reflected (the script is reflected off the web server -for example in an error message). A buffer overflow [START_REF]Aleph One. Smashing The Stack For Fun And Profit[END_REF] generally occurs when an application attempts to store data in a buffer and the stored data exceeds the buffer's limits. Buffer overflows are possible because of badly validated input on the application's side. Writing in an unauthorized part of the memory might lead to corrupted data, application crashes or even malicious code execution. Buffer overflows are often used as entry points for the attacker in order to inject malicious code segment into the host's memory and then execute it by jumping to the right memory address. Another alternative for malicious code injection is format string attacks [START_REF] Newsham | Format String Attacks[END_REF]. Format String Attacks (FSA) are used in order to leak information such as pointer addresses. After a successful FSA, normally a return oriented programming exploit is used. Return oriented programming allows the attacker to use short sequences of instructions that already exist in a target program in order to introduce arbitrary behavior. Network Level In the network-level threat category we describe attacks that target communications of layer 3 and above in an information system. Network-level impersonation occurs when an attacker masks his true identity or tries to impersonate another computer in network communications. Operating systems use the IP address of a packet to validate its source. An attacker can create an IP packet with a header that contains a false sender's address, a technique known as IP spoofing [START_REF] Heberlein | Attack Class: Address Spoofing[END_REF]. This technique, combined with TCP protocol specifications (i.e. the sequence and acknowledgement numbers included in a TCP header) can lead to session hijacking. The attacker predicts the sequence number and creates a false session with the victim who in turn thinks that he is communicating with the legitimate host. Denial of Service (DoS) attacks aim at exhausting the computing and network resources of an information system. The way these attacks operate is by sending a victim a stream of packets that swamps its network or processing capacity, denying access to its normal clients. One of the methods employed is SYN flooding [START_REF] Eddy | TCP SYN flooding attacks and common mitigations[END_REF], in which the attacker sends client requests to the victim's server. SYN flooding attacks target hosts that run TCP server processes and exploit the state retention of the TCP protocol after a SYN packet has been received. Their goal is to overload the server with half-established connections and disturb normal operations. Both IP spoofing and SYN flooding are common techniques for launching a denial of service attack and preventing users from accessing a network service. 
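A minimal sketch of how such flooding behaviour can be spotted on the wire is given below: it counts TCP SYN segments per source address with the Scapy library and flags sources that exceed a threshold within a time window. The threshold and window are illustrative assumptions, and a production detector would also track whether handshakes complete.

import time
from collections import defaultdict
from scapy.all import IP, TCP, sniff  # sniffing requires root privileges

WINDOW_SECONDS = 10   # illustrative observation window
SYN_THRESHOLD = 200   # illustrative per-source limit

syn_counts = defaultdict(int)
window_start = time.time()

def inspect(packet):
    global window_start
    if IP in packet and TCP in packet:
        flags = packet[TCP].flags
        if flags & 0x02 and not flags & 0x10:  # SYN set, ACK clear: new half-open attempt
            syn_counts[packet[IP].src] += 1
    if time.time() - window_start >= WINDOW_SECONDS:
        for source, count in syn_counts.items():
            if count > SYN_THRESHOLD:
                print(f"possible SYN flood from {source}: {count} SYNs in {WINDOW_SECONDS}s")
        syn_counts.clear()
        window_start = time.time()

if __name__ == "__main__":
    sniff(filter="tcp", prn=inspect, store=False)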
In the event of a server being protected against SYN flood attacks, a denial of service can still be possible if the server in question is too slow in serving the flooding requests (the attacker simply overloads the server). Finally, as the name indicates, a man-in-the-middle attack refers to a state where the attacker is able to actively monitor, capture and control the network packets exchanged between two communicating entities. Sophisticated versions of man-in-the-middle attacks include attempts against TLS-based communications where the attackers are able to falsely impersonate legitimate users [START_REF] Karapanos | On the Effective Prevention of TLS Man-in-themiddle Attacks in Web Applications[END_REF]. Domain Name Servers (DNS) are essential parts of the network infrastructure that map domain names to IP addresses, redirecting requests to the appropriate location. Attackers target DNS systems in their effort to redirect legitimate requests to malicious websites under their control. One of the most common techniques to achieve that is DNS cache poisoning. DNS cache poisoning exploits a vulnerability in the DNS protocol [START_REF]DNS flaw for cache poisoning attacks[END_REF] in order to replace legitimate resolution results with flawed ones that include the attacker's website. Depending on the type of services hosted in an information system, attackers use different exploitation techniques. Identification of the type of hosted services is a necessary preliminary step in most exploitation attempts. A common way for an attacker to identify network services hosted in an information system, or to probe a specific server for open ports, is port scanning [START_REF] Moore | Inside the Slammer Worm[END_REF]. The standard way to perform a port scan is to launch a process that sends client requests to a range of ports on a particular server (vertical port scan) or to a specific port on several hosts (horizontal port scan). Depending on the type of the request there are different port scan categories at the TCP level. Application fingerprinting, where an attacker looks for a reply that matches a particular vulnerable version of an application, is also a common technique used to identify the type of hosted service. Operating System Level All user applications in an information system rely on the integrity of the kernel and core system utilities. Therefore, a possible compromise of either of these two parts can result in a complete lack of trust in the system as a whole. One of the most common attacks against a system's kernel is a rootkit installation. Rootkits are pieces of software that allow attackers to modify a host's software, usually causing it to hide their presence from the host's legitimate administrators. A sophisticated rootkit is often able to alter the kernel's functionality so that no user application that runs in the infected system can be trusted to produce accurate results (including rootkit detectors). Rootkits usually come with a dedicated backdoor so that the attacker can gain and maintain access to the compromised host. Backdoors usually create secure SSH connections such that the communication between the attacker and the compromised machine cannot be analysed by Intrusion Detection Systems or other network monitoring tools. Summary In this section we have described security threats targeting traditional information systems. The described attacks could also target applications running inside virtual machines in an outsourced infrastructure.
We continue with a description of cloud-specific security threats and a classification based on their target. Security Threats in Cloud Environments In a cloud environment security concerns two different actors. First, tenants are concerned with the security of their outsourced assets, especially if they are exposed to the Internet. Second, the provider is also concerned about the security of the underlying infrastructure, especially when he has no insight regarding the hosted applications and their workload. In this section we focus on security threats originating from corrupted tenants against other legitimate tenants and their resources, threats against the provider's infrastructure and their origin, as well as threats towards the provider's API. Threats against tenants and based on shared resources One of the key elements of a cloud infrastructure is multi-tenancy (i.e. multiplexing virtual machines that belong to different tenants on the same physical hardware). Although this maximizes efficiency for the cloud provider's resources, it also creates the possibility that a tenant's VM is located on the same physical machine as a malicious VM. This in turn engenders a new threat: breaking the resource isolation provided by the hypervisor and the hardware and gaining access to unauthorized data or disturbing the operation of legitimate VMs. One of the most prominent attacks that illustrates this threat is the side channel attack, where an adversary with a colocated VM gains access to information belonging to other VMs (e.g. passwords, cryptographic keys). In [START_REF] Ristenpart | Get off of My Cloud: Exploring Information Leakage in Third-party Compute Clouds[END_REF] the attackers used shared CPU caches as side channels in order to extract sensitive information from a colocated VM. Another technique that exploits VM colocation is DoS attacks against shared resources. A malicious VM excessively consumes shared computing resources (CPU time, memory, I/O bandwidth), preventing legitimate VMs from completing their tasks. Provider Infrastructure In an IaaS cloud environment (see Section 2.2.3.1) each VM is under the illusion that it runs on its own hardware (i.e. CPU, memory, NIC, storage). This illusion is created by the hypervisor, which is responsible for allocating resources for each VM, handling sensitive instructions issued by VMs and finally managing the VM lifecycle (see Section 2.3). In this section we discuss security threats targeting the hypervisor, as a core component of the provider's infrastructure. An attacker targeting the hypervisor might be able to execute malware from different runtime spaces inside the cloud infrastructure. Each runtime space comes with different privileges. We list the runtime spaces in increasing order of privilege level (which is also their order of difficulty to exploit). • Guest VM User-Space: This runtime space is the easiest one to obtain, especially in an IaaS environment. Although attempts to run privileged instructions could lead to an exception, an attacker can run any type of exploit. In [START_REF] Elhage | Virtunoid: Breaking out of KVM[END_REF] the attacker manages to break out from a guest by exploiting a missing check in the QEMU-KVM user-space driver. • Guest VM Kernel-Space: Since in an IaaS cloud environment tenants can run an OS of their choice, an attacker can provision some VMs, run an already tampered OS and use the malicious guest's kernel to launch an attack on the hypervisor.
In [START_REF]Buffer Overflow in the Backend of XenSource[END_REF] an attack to the hypervisor implements a malicious para-virtualized front-end driver and exploits a vulnerability in the back-end driver. • Hypervisor Host OS: One of the most desirable runtime spaces for an attacker is the one of the host OS as the privileges granted are very high. For example, KVM as a part of the Linux kernel, provides an entry point for attackers that have local user access to the host machine, exploiting a flaw in KVM. Customers in public clouds manage their resources through dedicated web control interfaces. Moreover, cloud providers also manage the operation of the cloud system through dedicated interfaces that are often accessible through the Internet. A successful attack on a control interface could grant the attacker complete access to a victim's account along with all the data stored in it, or even worse to the whole cloud infrastructure when the provider's interface is compromised. In [START_REF]ENISA cloud risk assessment[END_REF] attacks towards cloud management interfaces are considered extremely high risk and in [START_REF] Somorovsky | All Your Clouds Are Belong to Us: Security Analysis of Cloud Management Interfaces[END_REF] the authors prove that the web interfaces of two known public and private cloud systems (Amazon's EC2 and Eucalyptus) are susceptible to signature wrapping attacks. In a signature wrapping attack, the attacker can modify a message signed by a legitimate signature, and trick the web service into processing its message as if it was legitimate. Summary In summary, traditional information systems as well as cloud environments face multiple security threats originating from different privilege levels in the infrastructure. In an IaaS cloud environment the attack surface is expanded with the addition of the hypervisor, as the building block of a cloud infrastructure, as well as the web-exposed management API. In order to successfully detect attacks a security monitoring framework is needed. We continue our discussion with a detailed description of security monitoring frameworks both for traditional information systems and cloud environments. Security Monitoring Information systems face continuous threats at different levels of their infrastructure. An attacker can gain access to the system by exploiting a software vulnerability and thus be able to modify both the OS kernel and critical system utilities. In order to detect such activities, a security monitoring framework is necessary. A security monitoring framework consists of the appropriate detection mechanisms required to diagnose when an information system has been compromised and inform the administrator in the form of specialised messages (called alerts). What is Security Monitoring? According to [START_REF] Bejtlich | The Tao of Network Security Monitoring: Beyond Intrusion Detection[END_REF]: Due to the diverse nature of applications hosted in an information system, a security monitoring framework requires multiple components that monitor different parts of the system in order to maintain situational awareness of all hosted applications. Figure 2.4 depicts an information system with different security devices: firewalls, antiviruses, network-based IDS. In the following sections we discuss some of the core components embedded in modern security monitoring frameworks focusing primarily on Intrusion Detection Systems (IDS) and Firewalls as the contributions presented in this thesis focus on these two components. 
Main Components Several tools are available for mitigating malware threats in an information system. We list the most common ones along with their typical features. • Antivirus: Most antivirus solutions provide capabilities such as scanning critical system components (startup files, boot records), real-time scanning of files that are downloaded, opened or executed, sandboxed dynamic monitoring of running applications and identifying common types of malware (viruses, worms, backdoors, etc). Commercial solutions include Kaspersky Security Scan [START_REF]Kaspersky Security Scanner[END_REF], AVG antivirus [START_REF]AVG Antivirus[END_REF] and Panda Protection [START_REF]Panda Protection[END_REF]. • Router: Typically a router uses a set of traffic management rules known as an access control list. Routers are normally deployed in front of an information system's firewall and at the core of the network, and perform some traffic mitigation such as ingress and egress filtering. Commercial solutions include Cisco ASR 1000 Series [START_REF]Cisco ASR 1000 Series[END_REF] and Juniper MX Series [START_REF]Juniper MX Series[END_REF]. • Access Control Systems: Normally access control systems are concerned with regulating users' attempts to access specific resources in an information system. Information systems apply access controls at different levels of their infrastructure (e.g. an OS regulating access to files, or an authentication server described below). • Virtual Private Network (VPN): A VPN allows users to access an organization's private network remotely. It offers traffic encryption between two connection points through a variety of supported protocols (TLS [START_REF] Dierks | The TLS Protocol Version 1[END_REF], IPsec [START_REF] Kent | Security Architecture for the Internet Protocol[END_REF], DTLS [START_REF] Rescorla | Datagram Transport Layer Security Version 1.2[END_REF]). Examples of VPN service providers include VPN Express [START_REF]VPN Express[END_REF] and VyPR VPN [START_REF]VyPR VPN[END_REF]. • Authentication Server: A server that is used to authenticate users or applications through the network using a set of credentials (e.g. username and password). Authentication servers support a variety of protocols. Notable examples in this category are the LDAP [START_REF] Wahl | Authentication Methods for LDAP[END_REF] and Kerberos [START_REF] Kohl | The Kerberos Network Authentication Service (V5)[END_REF] protocols. • Log Collectors: In order to facilitate the collection and analysis of events of interest, log collectors are necessary for every information system. Depending on the level of desired system-wide visibility, different time intervals for the collection of logs can be defined. Due to the high diversity between event sources (i.e. applications, system, security devices), most software solutions are able to gather and unify information from different sources and in different formats. In a cloud environment log collection is of critical importance as it allows tenants to gain insight into resource utilization, application performance, security and operational health. Major public clouds offer customisable logging services such as CloudWatch [START_REF]Amazon CloudWatch[END_REF] and Log Analytics [START_REF]Microsoft Azure Log Analytics[END_REF]. In traditional information systems a variety of log collection solutions exists (e.g. rsyslog [START_REF]rSyslog[END_REF], LogStash [START_REF]LogStash[END_REF]).
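As a small illustration of how an application can feed such a central collector, the sketch below ships events to a remote syslog endpoint using Python's standard logging module; the collector address is an illustrative assumption and would typically point to an rsyslog or LogStash ingestion host.

import logging
import logging.handlers

# Address of the central collector (assumption); rsyslog listens on UDP port 514 by default.
collector = logging.handlers.SysLogHandler(address=("logs.example.org", 514))
collector.setFormatter(logging.Formatter("webapp: %(levelname)s %(message)s"))

logger = logging.getLogger("webapp")
logger.setLevel(logging.INFO)
logger.addHandler(collector)

# Events of interest are now forwarded to the central collector, where they
# can be unified and correlated with events coming from other sources.
logger.info("user admin logged in from 203.0.113.7")
logger.warning("five failed login attempts for user admin")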
Intrusion Detection Systems Intrusion detection systems are usually at the core of a security monitoring framework. Their main purpose is to detect security breaches in an information system before they inflict widespread damage. An IDS is composed of three main stages: data collection, processing and reporting. The core detection feature is implemented in the processing stage. What is an IDS? According to [START_REF] Scarfone | SP 800-94. Guide to Intrusion Detection and Prevention Systems (IDPS)[END_REF]: Definition 4 Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices. Consequently, an Intrusion Detection System is software that automates the intrusion detection process. In the following section we describe the different types of IDSs according to their detection technique and we follow with a classification based on the embedded technology. Types of IDSs Most IDS technologies are based on one of the two following detection techniques [START_REF] Axelsson | Intrusion Detection Systems: A Survey and Taxonomy[END_REF], [START_REF] Modi | A survey of intrusion detection techniques in cloud[END_REF]. We describe each one along with observed advantages and pitfalls. • Signature-based: Signature-based IDSs compare observed events against a list of a priori known signatures in order to identify possible security breaches. A signature is generally a pattern that corresponds to a registered attack. Signature-based detection is very effective at identifying known threats and is the simplest form of detection. A signature-based IDS compares the current unit of activity (e.g. network packet, file content) to the list of signatures using string comparison. Unfortunately, signature-based detection is largely ineffective when dealing with previously unknown attacks, attacks that are composed of multiple events or attacks that use evasion techniques [START_REF] García-Teodoro | Anomaly-based Network Intrusion Detection: Techniques, Systems and Challenges[END_REF]. Known examples of signature-based IDSs include Snort [START_REF]Snort[END_REF], Suricata [START_REF]Suricata Open Source IDS Engine[END_REF] and Sagan [START_REF]The Sagan Log Analyis Engine[END_REF]. • Anomaly-based: Anomaly-based IDSs compare a profile of activity that has been established as normal with observed events and attempt to identify significant deviations. Each deviation is considered an anomaly. A normal profile is created by observing the monitored system for a period of time, called the training period (e.g. for a given network, HTTP activity composes 15% of the observed traffic), and can be static (the profile remains unchanged) or dynamic (the profile is updated at specific time intervals). Depending on the methodology used to create the normal profile, anomaly-based IDSs are either statistical-, knowledge- or machine-learning-based [START_REF] García-Teodoro | Anomaly-based Network Intrusion Detection: Techniques, Systems and Challenges[END_REF]. Statistical-based IDSs represent the behavior of the analysed system from a random viewpoint, while knowledge-based IDSs try to capture the system's behavior based on system data. Finally, machine-learning-based IDSs establish a model that allows for pattern categorization.
One of the main advantages of an anomaly-based IDS is that it can be very effective when dealing with previously unknown attacks. Unfortunately, anomaly-based IDSs suffer from many false positives when benign activity, that deviates significantly from the normal profile, is considered an anomaly. This phenomenon is amplified when the monitored information system is very dynamic. Known examples of anomaly-based IDSs include Bro [START_REF] Paxson | Bro: A System for Detecting Network Intruders in Real-time[END_REF], Stealthwatch [START_REF]Stealthwatch Flow Collector[END_REF] and Cisco NGIPS [START_REF]Cisco NGIPS[END_REF]. According to [START_REF]Towards a Taxonomy of Intrusion-detection Systems[END_REF] IDS technologies are divided in two categories depending on the type of events that they monitor and the ways in which they are deployed: • Network-based (NIDS): NIDSs monitor network traffic (i.e. packets) for a particular network or segments of a network and analyze network protocol activity or packet payload in order to detect suspicious events or attacks. The most common approach for deploying an NIDS is at a boundary between networks, in proximity to border firewalls or other security devices. A specific category of NIDS is wireless IDSs, which monitor only wireless network traffic and analyze wireless network protocols for identifying suspicious activity. In contrast to other NIDS which focus on packet payload analysis, wireless NIDSs focus on anomalies in wireless protocols. • Host-based (HIDS): HIDSs monitor the events occurring in a single host for suspicious activity. An HIDS can monitor network traffic, system logs, application activity, process list and file access in a particular host. HIDSs are typically deployed on critical hosts that contain sensitive information. We have described the main IDS categories based on the mechanism used for detection and the way they are deployed. The work done in this thesis focuses on network-based IDSs. We now discuss the second main security component that has been addressed in this thesis: the firewall. Firewalls This section focuses on a different security component, the firewall. What is a Firewall? According to [100]: Definition 5 A firewall is a collection of components, interposed between two networks, that filters traffic between them according to some security policy. Firewalls are devices that provide the most obvious mechanism for enforcing network security policies. When deploying legacy applications and networks, firewalls are excellent in providing a first-level barrier to potential intruders. The most common firewall configuration comprises two packet filtering routers that create a restricted-access network (called Demilitarized Zone or DMZ, see Figure 2.5). According to [START_REF] Keromytis | Designing Firewalls: A Survey[END_REF] firewalls have three main goals: 1. Protect hosts inside the DMZ from outside attacks, 2. Allow traffic from the outside world to reach hosts inside the DMZ in order to provide network services, 3. Enforce organizational security policies that might include restrictions that are not strictly security related (e.g. access to specific websites). Firewall Features We now discuss available firewall features and the capabilities of each one as in [START_REF] Scarfone | SP 800-41 Rev. 1. Guidelines on Firewalls and Firewall Policy[END_REF]. • Packet filtering: The most basic feature of a firewall is the filtering of packets. 
When we refer to packet filtering we are not concerned with the payload of the network packets but with the information stored in their headers. The mechanism of packet filtering is controlled by a set of directives known as a ruleset. The simplest form of a packet filtering device is a network router equipped with access control lists. • Stateful inspection: This functionality essentially improves packet filtering by maintaining a table of connection states and blocking packets that deviate from the expected state according to a given protocol. • Application-level: In order to extend and improve stateful inspection, stateful protocol analysis was created. With this mechanism a basic intrusion detection engine is added in order to analyse protocols at the application layer. The IDS engine compares observed traffic with vendor-created benign profiles and is able to allow or deny access based on how an application is behaving over the network. • Application-proxy gateways: A firewall which acts as an application-proxy gateway contains a proxy agent that acts as an intermediary between different hosts that want to communicate with each other. If the communication is allowed, then two separate connections are created (client-to-proxy and proxy-to-server) while the proxy remains transparent to both hosts. Much like an application-level firewall, the proxy can inspect and filter the content of traffic. • Virtual private networking (VPN): A common requirement for firewalls is to encrypt and decrypt network traffic between the protected network (DMZ) and the outside world. This is done by adding a VPN functionality to the firewall. As with other advanced firewall functionalities (beyond simple header-based packet filtering), a trade-off is introduced between the functionality and its cost in terms of computational resources (CPU, memory), which depends on the traffic volume and the type of requested encryption. • Network access control: Another functionality of modern firewalls is controlling incoming connections based on the result of health checks performed on the computer of a remote user. This requires an agent that is controlled by the firewall to be installed on the user's machine. This mechanism is typically used for authenticating users before granting them access to the network. • Unified threat management: The combination of multiple features into a single firewall is done with the purpose of merging multiple security objectives into a single system. This usually involves offering malware detection and eradication, suspicious probe identification and blocking, along with traditional firewall capabilities. Unfortunately, the system's requirements in terms of memory and CPU are significantly increased. In this thesis, we address application-level firewalls and firewalls that provide stateful traffic inspection capabilities. Security Monitoring in Cloud Environments After presenting different components of a security monitoring framework, we now zoom in on security monitoring frameworks tailored to cloud environments. As explained in Section 2.2.5, cloud environments experience dynamic events at different levels of the infrastructure. Naturally, the occurring events engender changes for the security monitoring framework, which require its components to be automatically adapted to the new state.
For example, when a VM is migrated from one compute node to another, the NIDS that is responsible for monitoring the traffic on the destination compute node needs to be reconfigured in order to monitor the traffic for specific attacks against the services hosted in the migrated VM. Without reconfiguration, an attack could pass undetected, creating an entry point for the attacker and allowing him to compromise the cloud-hosted information system. In this section we discuss cloud security monitoring solutions targeting either to the provider infrastructure and its most critical components (e.g. hypervisor, host OS kernel) or to the tenant information system. Provider Infrastructure Monitoring This section presents security solutions that target the cloud provider's infrastructure focusing on the hypervisor and host OS kernel. The frameworks described in both categories could be considered as hypervisor or kernel IDS systems. The relationship between the hypervisor and the host OS kernel is depicted in Figure 2.6. In this picture the hypervisor runs as a kernel module while the VMs run in user space. Their virtual address space is mapped through the hypervisor to the host's physical address space. Although, such a solution would limit the attack surface it would not completely guarantee the integrity of all hypervisor components. To address this challenge, the authors in [START_REF] Wang | HyperCheck: A Hardware-assisted Integrity Monitor[END_REF] have created HyperCheck, a hardware assisted intrusion detection framework, that aims at hardening the TCB. Their framework uses the CPU System Management Mode (SMM, a built-in feature in all x86 models) for taking a snapshot of the current state of the CPU and memory and transmits it to a secure remote server for analysis. The remote server is capable of determining whether the analysed hypervisor has been compromised by comparing the newly received snapshot with one taken when the machine was initialized. HyperCheck operates in the BIOS level thus its only requirement is that the attacker does not gain physical access to the machine for altering the SMM during runtime. In order to secure HyperCheck against attacks that simulate hardware resets, a machine with a trusted boot can be used. The authors of HyperCheck implemented a prototype on QEMU which is able to create and send a snapshot of the protected system in approximately 40ms. In a similar approach the authors of HyperSentry [START_REF] Azab | HyperSentry: Enabling Stealthy In-context Measurement of Hypervisor Integrity[END_REF] also used a snapshot taken by the SMM to perform integrity checking. The fundamental difference between these two frameworks is that in the case of HyperSentry the request for the snapshot is issued by a stealthy out-of-band channel, typically the Intelligent Platform Management Interface, that is out of the control of the CPU. Thus an attacker who has obtained the highest level of privileges in the system still cannot call HyperSentry. Performance wise, a periodic invocation of HyperSentry (integrity checks every 16 seconds) would result in an 1.3 % of overhead for the hypervisor, while a full snapshot requires 35ms. In contrast to the aforementioned hardware assisted solutions, HyperSafe [START_REF] Wang | HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity[END_REF] is a software solution that is centered around enforcing hypervisor integrity rather than verifying it. 
The authors use two software techniques, called non-bypassable memory lockdown and restricted pointer indexing, to guarantee integrity of the hypervisor's code in addition to control flow integrity. Non bypassable memory lockdown write-protects the memory pages that include the hypervisor code along with their attributes so that a change during runtime is prevented. By leveraging non-bypassable memory lockdown the framework is able to expand write-protection to control data. In order to deal with the dynamic nature of control data (like stack return addresses) the authors compute a control graph and restrict the control data to conform with the results of the graph. The induced overhead by running HyperSafe for tenant applications is less than 5%. Kernel Protection Frameworks In contrast to hypervisor integrity frameworks which are only concerned with protecting the code base and data of the hypervisor, kernel protection frameworks aim at securing the code integrity of the kernel. The frameworks described below provide tampering detection for rootkits, a subcategory of malware. One of the most pivotal works in kernel integrity checking is Copilot [START_REF] Petroni | Copilot -a Coprocessor-based Kernel Runtime Integrity Monitor[END_REF]. Copilot is able to access a system's memory without relying on the kernel and without modifying the OS of the host. The framework is based on a special PCI add-on card that is able to check the monitored kernel for malicious modifications periodically. The host's memory is retrieved through DMA techniques and sent to a remote admin station for inspection through a dedicated secure communication channel (much like the HyperCheck approach). With an inspection window of 30 seconds Copilot's overhead to system performance is approximately 1%. HookSafe [START_REF] Wang | Countering Kernel Rootkits with Lightweight Hook Protection[END_REF] follows the same philosophy as HyperSafe by write-protecting kernel hooks in order to guarantee control data integrity. The authors base their approach on the observation that kernel hooks rarely change their value once initialised, thus making it possible to relocate them in a page-aligned memory space with regulated access. The performance overhead to real-world applications (e.g. Apache web server) is 6%. Gibraltal [START_REF] Baliga | Automatic Inference and Enforcement of Kernel Data Structure Invariants[END_REF], installed in a dedicated machine called the observer, obtains a snapshot of the kernel's memory through a PCI card. It observes kernel execution over a certain period of time (training phase) and creates hypothetic invariants about key kernel data structures. An example of an invariant could be that "the values of elements of the system call table are constant". Gibraltal then periodically checks whether the invariants are violated and if so an administrator is notified for the presence of a rootkit. The framework produces a very low false positive rate (0.65%) while maintaining a low performance overhead (less than 0.5%). A common observation for the frameworks described in both kernel and hypervisor intrusion detection solutions is that the incorporated detection mechanism cannot be adapted depending on changes on the applications hosted in the monitored system (virtualised or not). Furthermore, in the case of hypervisor integrity frameworks, the solutions do not address changes in the virtual topology or the load of network traffic. 
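The recurring pattern behind these integrity monitors — take a snapshot of a code region, hash it, and compare the digest against a baseline recorded when the system was known to be clean — can be summarised in a few lines. The sketch below is a simplified, file-based analogue of that idea (the systems above snapshot physical memory via SMM or a PCI card rather than files), and the monitored paths are illustrative assumptions.

import hashlib
from pathlib import Path

# Paths standing in for the code regions a real integrity monitor would hash
# (illustrative assumptions; HyperCheck and Copilot snapshot memory, not files).
MONITORED = [Path("/boot/vmlinuz"), Path("/usr/sbin/sshd")]

def snapshot(paths):
    # Map each monitored object to the SHA-256 digest of its current content.
    return {path: hashlib.sha256(path.read_bytes()).hexdigest() for path in paths}

baseline = snapshot(MONITORED)  # taken once, when the system is known-good

# ... later, invoked periodically by the monitor ...
current = snapshot(MONITORED)
for path in MONITORED:
    if current[path] != baseline[path]:
        print(f"integrity violation: {path} differs from the baseline")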
After describing the different approaches to secure the hypervisor, one of the most critical parts of the provider's infrastructure, we now shift our focus to security monitoring frameworks for tenant infrastructures. Tenant Information System Monitoring In this section we focus on tenant security monitoring frameworks with two main components: intrusion detection systems and firewalls. Before that, we present an important concept in tenant infrastructure monitoring called virtual machine introspection. Virtual Machine Introspection After reviewing threats against cloud-hosted information systems, it is clear that attacks towards virtual machines often target on-line applications and the underlying OS. Therefore, acquiring real-time information about the list of running processes and the OS state in the deployed VMs has become a necessity. Virtual machine introspection is able to provide this information in an agentless manner, guaranteeing minimal intrusiveness. Security solutions that employ virtual machine introspection move monitoring and protection below the level of the untrusted OS and as such can detect sophisticated kernel-level malware that runs inside the deployed VMs. What is Virtual Machine Introspection? The concept of introspection was introduced by Garfinkel et al. in [START_REF] Garfinkel | A Virtual Machine Introspection Based Architecture for Intrusion Detection[END_REF]. In general terms Definition 6 virtual machine introspection is inspecting a virtual machine from the outside for the purpose of analyzing the software running inside it. The advantages of using VMI as a security solution are two-fold: 1. As the analysis runs underneath the virtual machine (at the hypervisor level), it is able to analyze even the most privileged attacks in the VM kernel. 2. As the analysis is performed externally, it becomes increasingly difficult for the attacker to subvert the monitoring system and tamper with the results. As such, a high confidence barrier is introduced between the monitoring system and the attacker's malicious code. Unfortunately, as the monitoring system runs in a completely different hardware domain than the untrusted VM, it can only access, with the help of the hypervisor, hardware-level events (e.g. interrupts and memory accesses) along with state-related information (i.e. physical memory pages and registers). The system then has to use detailed knowledge of the operating system's algorithms and kernel data structures in order to rebuild higher OS-level information such as the list of running processes, open files, network sockets, etc. The issue of extracting high-level semantic information from low-level hardware data is known as the semantic gap. In order to bridge the semantic gap, the monitoring system must rely on a set of data structures, which can be used as templates in order to translate hypervisor-level observations to OS-level semantics.
As such, the monitoring system is required to keep up-to-date detailed information about the internals of different commodity operating systems, thus making the widespread deployment of introspection-based security solutions unfeasible. Virtuoso [START_REF] Dolan-Gavitt | Virtuoso: Narrowing the Semantic Gap in Virtual Machine Introspection[END_REF] attempts to overcome this challenge. Virtuoso is a framework that can automatically extract security-relevant information from outside the virtual machine. Virtuoso analyzes dynamic traces of in-guest programs that compute the introspection-required information. Then it automatically produces programs that retrieve the same information from outside the virtual machine. Although Virtuoso is a first step towards automatic bridging of the semantic gap, it is limited only to information that can be extracted via an in-guest API call (such as getpid() in a Linux OS). Moreover, Virtuoso does not address the main problem regarding VMI-aware malware: the fact that an attacker might affect the introspection result by altering kernel data structures and algorithms. An example of such malware is DKSM [START_REF] Bahram | DKSM: Subverting Virtual Machine Introspection for Fun and Profit[END_REF]. The de facto standard in performing virtual machine introspection is XenAccess [START_REF] Payne | Secure and flexible monitoring of virtual machines[END_REF]. The authors define a set of requirements for performing efficient memory introspection, such as avoiding superfluous modifications of the hypervisor's code. These requirements are met in XenAccess, which also provides low-level disk traffic information in addition to memory introspection. XenAccess utilises Xen's native function xc_map_foreign_range() that maps the memory of one VM to another (in this case from DomU to Dom0, see Section 2.3.2.1.3), in order to access the monitored guest's memory, which is then treated as local memory, providing fast monitoring results. For gathering the necessary information about the guest's OS, a call to the XenStore database is made. XenAccess is a library that allows security monitoring frameworks to perform virtual machine introspection and is not a standalone monitoring framework. As such, it does not incorporate any detection or prevention techniques. LibVmi [113] is the evolution of XenAccess which extends introspection capabilities to other virtualization platforms like KVM. Besides extending XenAccess to other virtualization platforms, LibVmi offers significant performance improvements by utilizing a caching mechanism for requested memory pages. We use LibVmi in the implementation of the contribution of this thesis presented in Chapter 5. Virtual machine introspection solutions can be classified into two main categories, passive and active monitoring, depending on whether the security framework performs monitoring activities by external scanning or not. Cloud-Tailored IDSs The complexity and heterogeneity of a cloud environment, combined with the dynamic nature of its infrastructure (see Section 2.2.5), make the design of a cloud-tailored IDS a challenging task. The problem is amplified when taking into account the security requirements of different tenants, whose information systems often require customised security monitoring solutions that do not align with each other (i.e. different types of security threats, level of desired information, etc). The approaches described below detail IDS solutions that aim at addressing those challenges. Roschke et al.
[START_REF] Roschke | Intrusion Detection in the Cloud[END_REF] propose an IDS framework (see Figure 2.7) which consists of different IDS sensors deployed at different points of the virtual infrastructure. Each virtual component (e.g. virtual machine) is monitored by a dedicated sensor. All sensors are controlled by a central management unit which is also accountable for unifying and correlating the alerts produced by the different types of sensors. The central management unit has four core components: Event Gatherer, Event Database, Analysis Controller and IDS Remote Controller. Figure 2.7 - The Cloud IDS architecture as in [START_REF] Roschke | Intrusion Detection in the Cloud[END_REF] The Event Gatherer is responsible for receiving and standardising alerts from the deployed sensors, which are then stored in the Event Database. Alerts are then accessed by the Analysis component, which performs correlation for the detection of multi-event complex attack scenarios. Finally, the IDS Remote Controller is responsible for the lifecycle (start, stop, shutdown) and configuration of each IDS sensor. Although the approach presented in the paper enables the use of different types of IDS sensors (host-based, network-based), it does not account for the dynamic nature of the virtual infrastructure. For example, it is not clear whether the dedicated sensor is migrated along with the virtual machine in case of a VM migration. Furthermore, the reconfiguration of the IDS sensors is not automated (e.g. in the case where a new service is added in the deployed VMs). Finally, component sharing is not enabled even within the same virtual infrastructure. In an attack-specific approach, the authors of [START_REF] Mazzariello | Integrating a network IDS into an open source Cloud Computing environment[END_REF] try to tackle the threat of a Denial-of-Service event by deploying network-based IDS sensors next to each compute node of an IaaS cloud infrastructure. The proposed solution attempts to monitor each compute node by a separate IDS instance and then perform alert correlation at a central point. Although this approach clearly addresses the scalability issue of monitoring the whole traffic at a central point (e.g. one IDS instance attached to the network controller), there are several issues that remain unsolved. For example, there is no mention of IDS reconfiguration in case of a changed set of services on the deployed VMs that are hosted in a particular compute node. Although the authors advocate for a distributed approach that will result in a better-performing IDS sensor (in terms of packet drop rate), they do not address the case where an unexpected traffic spike occurs. The described framework only includes network-based IDSs, as opposed to [START_REF] Roschke | Intrusion Detection in the Cloud[END_REF] which includes different types of IDSs. In an effort to address security in federated clouds as well as to tackle large-scale distributed attacks that target multiple clouds, the authors of [START_REF] Ficco | Intrusion Detection in Cloud Computing[END_REF] propose a layered intrusion detection architecture. The framework performs intrusion detection in three different layers: Customer Layer (tenant virtual infrastructure), Provider Layer (provider physical infrastructure) and Cloud Federation Layer.
Each layer is equipped with probes, which perform the actual detection functionality, agents which are responsible for gathering and normalizing the alerts generated by different types of probes, and finally security engines that perform the actual decision making by correlating the received alerts. The security engines are responsible for deciding whether different security events represent a potential distributed attack and for forwarding the results to a higher layer. The security engine in the cloud provider layer is able to detect whether parts of its cloud infrastructure have been compromised based on data that it receives from the security engines of different customers (i.e. tenants). Although the authors attempt to combine the results of security monitoring of the tenants and the provider, they do not address cases where reconfiguration of the monitoring probes is required (i.e. when dynamic events occur). Moreover it is not clear whether different security probes can be shared between tenants. Livewire [START_REF] Garfinkel | A Virtual Machine Introspection Based Architecture for Intrusion Detection[END_REF] was the pioneering work in creating an intrusion detection system that applies VMI techniques. Livewire works offline and passively. The authors use three main properties of the hypervisor (isolation, inspection and interposition) in order to create an IDS that retains the visibility of a host-based IDS while providing strong isolation between the IDS and a malicious attacker. A view of Livewire's architecture can be found in Figure 2.8. Figure 2.8 - A high-level view of the VMI-based IDS architecture, as in [START_REF] Garfinkel | A Virtual Machine Introspection Based Architecture for Intrusion Detection[END_REF] The main components of the VMI-based IDS are: • OS interface library: responsible for providing an OS-view of the monitored guest by translating hardware events to higher OS level structures. The OS interface library is responsible for bridging the semantic gap (see Section 2.5.2.2.1) • Policy Engine: responsible for deciding if the system has been compromised or not. Different detection techniques (e.g. anomaly detection) can be supported by the policy engine in the form of policy modules. The authors implemented their prototype on VMware workstation. As the first step towards using introspection in security monitoring, Livewire has some limitations. First, it does not address dynamic events in a cloud infrastructure, as it remains unclear if the dedicated IDS follows the VM in the event of a migration. Second, the policy modules do not account for tenant security requirements and cannot be adapted in case a new service is added in the introspected VM. Finally, component sharing is not enabled as the design limits an IDS to a single VM. HyperSpector [START_REF] Kourai | HyperSpector: Virtual Distributed Monitoring Environments for Secure Intrusion Detection[END_REF] secures legacy IDSs by placing them inside isolated virtual machines while allowing them to keep an inside-the-guest view of the monitored system through virtual machine introspection. The authors use three mechanisms to achieve inside-the-guest visibility: • Software port mirroring: the traffic from and to the monitored VM is copied to the isolated VM where the legacy IDS is running. • inter-VM disk mounting: the file system of the monitored VM is mounted in the dedicated VM as a local disk, thus enabling integrity checks. • inter-VM process mapping: the processes running inside the monitored VM are mapped to the isolated VM in the form of shadow processes with local identifiers. A dedicated function called process mapper running in the hypervisor is responsible for translating the local identifiers of the shadow processes to actual process identifiers in the monitored VM.
The process mapper only provides reading access to the registers and memory of the shadow processes thus preventing a subverted IDS from interposing the monitored VMs functionality. Inter-VM process mapping is used for extracting information regarding the list of processes running inside the monitored VM. Although HyperSpector secures legacy IDSs through virtual machine introspection, it suffers from the same limitations as Livewire. Lares [START_REF] Payne | Lares: An architecture for secure active monitoring using virtualization[END_REF] attempts a hybrid approach in security monitoring through virtual machine introspection by attempting to do active monitoring while still maintaining increased isolation between the untrusted VM and the monitoring framework. The authors propose to install protected hooks in arbitrary locations of the untrusted VM's kernel. The purpose of the hook is to initiate a diversion of the control flow to the monitoring framework. Once a hook is triggered, for example in the event of a new process, then the execution in the untrusted guest is trapped and the control automatically passes in the monitoring software which resides in an isolated VM. The hooks along with the trampoline that transfers control to the monitoring software are write protected by a special mechanism in the hypervisor called write protector. The trampoline is also responsible for executing commands issued by the monitoring software and does not rely on any kernel functions of the untrusted VM. Although Lares combines the benefits of isolation along with the ability to interpose on events inside the untrusted VM, it has some limitations that prevent its adoption in a cloud environment. First, the security VM cannot monitor more than one guest. This implies that for every VM spawned in a compute node, a corresponding security VM needs to be started as well, reducing the node's capacity for tenant VMs by half. Second, in the event of a VM migration, the tied monitoring VM needs to be moved as well, imposing additional load to the network. Finally, the list of monitored events is static, since the addition of a new event would require the placement of a new hook inside the untrusted VM. CloudSec [START_REF] Ibrahim | CloudSec: A security monitoring appliance for Virtual Machines in the IaaS cloud model[END_REF] attempts to provide active monitoring without placing any sensitive code inside the untrusted VM. The authors use VMI to construct changing guest kernel data structures in order to detect the presence of kernel data rootkits (e.g. kernel object hooking rootkits). The proposed framework is able to provide active concurrent monitoring for multiple colocated VMs. CloudSec does not directly access the memory pages of the untrusted VM. Instead, it interacts with the hypervisor for obtaining the corresponding pages which are stored in a dedicated memory page buffer (MPB). CloudSec uses a dedicated module (KDS) in order to load information regarding kernel data structures of monitored VM's OS. Using the information from the KDS the Semantic Gap Builder (SGB) attempts to solve the semantic gap and build a profile of the monitored VM. Finally the profile is fed to the Defence Modules which perform the actual detection. An overview of the CloudSec architecture is shown in Figure2.9. 
Figure 2.9 -CloudSec architecture as in [START_REF] Ibrahim | CloudSec: A security monitoring appliance for Virtual Machines in the IaaS cloud model[END_REF]

Although CloudSec enables active monitoring for multiple VMs concurrently, the performance overhead of the solution in a multi-tenant environment has not been investigated. Furthermore, the active monitoring capabilities are limited to switching off an infected VM. CloudSec does not address dynamic events and is limited to the VMware ESXi hypervisor.

KvmSec [START_REF] Lombardi | KvmSec: A Security Extension for Linux Kernel Virtual Machines[END_REF] is a KVM extension that enables active monitoring for untrusted guests from the host machine. While KvmSec is composed of multiple modules that reside in the host and in the untrusted guests, the authors place the core detection modules on the host side in order to provide tamper-resistant monitoring. Communication between the guest and host modules takes place over a secure channel that enables information exchange. The guest module consists of a kernel daemon that creates and manages the secure communication channel and a second daemon that collects, analyses and acts upon received messages. The secure communication channel is created in shared memory with synchronised access through mutexes. Upon detection of a malicious event, KvmSec is able to freeze or shut down the monitored guest. Currently no other sanitization mechanisms are supported. KvmSec is able to extract the list of running processes inside the untrusted guest, but no other detection modules are supported. Although KvmSec might theoretically be able to monitor multiple consolidated VMs by enabling a shared memory region for each VM, the performance overhead of this approach remains unexplored.

The last five discussed solutions include passive and active monitoring frameworks that incorporate virtual machine introspection. Although passive monitoring is clearly a less invasive approach that favors stealthy monitoring (as there is no need for placing additional code in the untrusted guest), it lacks the ability to interpose on guest events. On the other hand, active monitoring enables the security monitoring framework to act on suspicious events, but it requires hooks to be placed inside the untrusted VM, making it a more invasive solution. Passive monitoring solutions can be performed only at specific time intervals (known as the introspection frequency), as opposed to active solutions that are triggered only when a suspicious event, like a memory region being accessed, occurs. Furthermore, although the discussed solutions in both categories provide some form of protection mechanisms (e.g. write-protected memory regions), there is still a chance that an attacker can disable the hooks and render the result of introspection invalid. In the contribution of this thesis presented in Chapter 4, we adapt NIDSs to dynamic changes that occur in a cloud environment in order to provide adequate monitoring of the network traffic that flows towards and from the cloud-hosted information system. Our contribution addresses the adaptation issues that are not taken into account in the previously-presented solutions.

Cloud Firewalls

This section presents firewall solutions tailored for cloud environments. We focus on industrial solutions, since substantial effort is being put there into designing new cloud firewall solutions or adding new features to existing ones.
We focus on two firewall categories: next-generation firewalls and application-level firewalls.

Next-Generation Firewalls

Nowadays, large-scale distributed attacks generate multiple security events at different levels of a cloud infrastructure and are considered amongst the most impactful cyber threats for a cloud environment. One solution for tackling these types of attacks is embedding a next-generation firewall in the cloud infrastructure. Next-generation firewalls are devices that are able to combine multiple functionalities in one: application-oriented access to the Internet, deep analysis of the network traffic (e.g. deep packet inspection), and finally a user-oriented access policy for on-line applications. In this section we discuss next-generation firewall solutions for cloud environments offered by major industry players.

A joint solution between VMware and Palo Alto Networks [119] introduces the VM-Series next-generation firewall, which is able to provide application-driven access control (in contrast to traditional firewalls that offer a port and IP address control policy). The proposed solution is able to dynamically adapt the enforced security policy when topology events (e.g. VM migration) occur. Their approach introduces a new feature called tag for VM identification. Each VM can have multiple tags that represent different features such as IP address, OS, etc. The user is allowed to create security rules based on tags instead of static VM objects. VM-Series is fully integrated in the NSX security suite (see Section 2.5.2.2.3) in order to gain access to the network traffic and topology information of the infrastructure. Unfortunately, VM-Series does not take into account specific tenant security demands (e.g. protection against specific types of threats) and does not offer component sharing capabilities between different tenants. The VM-Series solution is also integrated in Amazon EC2 [START_REF]Amazon Web Services[END_REF].

Application-level Firewalls

In order to gain insight into which applications are generating network traffic, application-level firewalls emerged as a solution. Application-level firewalls filter network packets based on a set of rules which refer to protocols and states of the involved applications. When this solution is applied to web applications hosted in a cloud environment, it can offer protection against known application-level attacks (such as SQL injection or cross-site scripting, see Section 2.4.1). The Amazon Web Application Firewall (WAF) [120] allows tenants to create their own security rules depending on the type of applications that are hosted in their virtual infrastructure. Tenants can gain visibility into specific types of requests by setting a dedicated filter through the WAF API, or create access control lists if they require limited access to their applications. Once created, the rules are installed in a front-facing load balancer. Although the WAF solution offers substantial freedom to tenants by allowing them to fully customize the deployed ruleset, it does not account for dynamic events (topology- or traffic-related events). Furthermore, it is unclear if component sharing is enabled by installing rules of different tenants. A distributed web application firewall introduced by Brocade (formerly SteelApp) [START_REF]SteelApp Web Application Firewall[END_REF] offers automatic rule generation based on application behavior.
The solution's learning capability is able to observe on-line applications for a period of time and create access control rules. Brocade WAF has three major components that resemble an SDN architecture model: the Enforcer, the Decider and finally the Administration Interface. The Enforcer is responsible for enforcing the security rules and inspecting the network traffic. If a packet that does not match the existing rules arrives, the Enforcer sends it to the Decider, which decides whether to allow or block the connection. Then, the Decider generates a rule and sends it back to the Enforcer. As the rule generator, the Decider is the most computing-intensive part of the application and its load depends on the traffic of the application. The Decider is also responsible for auto-scaling in case of increased demand. Finally, the Administration Interface is responsible for managing the WAF and inserting high-level policy rules that are then translated by the Decider to firewall rules. Although the solution is capable of auto-scaling, it is unclear what the CPU penalty on colocated VMs is.

In order to build a tamper-resistant, application-aware firewall that combines in-guest visibility with the isolation of a VMI monitoring framework, the authors of [START_REF] Srivastava | Tamper-Resistant, Application-Aware Blocking of Malicious Network Connections[END_REF] created VMwall. Using XenAccess [START_REF] Payne | Secure and flexible monitoring of virtual machines[END_REF], VMwall is able to correlate processes running inside the untrusted guest with network flows. VMwall maintains a white list of processes that are allowed to make connections and compares the white list to the introspection-generated list. If a match is found, a rule allowing the connection is inserted in a dedicated filtering module in Dom0. Although VMwall is the pioneering work in creating introspection-based firewalls, it faces some limitations. First, the white list of processes is statically defined and thus does not take into account the dynamic nature of a VM, where services are continuously added or removed by tenants. Second, it does not address dynamic topology-related events (e.g. VM migration) that occur in a cloud environment. For example, there is no mention of a prioritisation strategy for a migration event occurring in the middle of an introspection action. Finally, it is unclear whether the kernel filtering module can be shared between multiple VMs, thus enabling sharing of the firewall component.

Xfilter [START_REF] Kourai | A Self-Protection Mechanism against Stepping-Stone Attacks for IaaS Clouds[END_REF] is a self-protection mechanism that filters outgoing packets in the hypervisor, based on information obtained through introspection. Xfilter was designed as an active defence mechanism against compromised VMs that are used as stepping stones to target hosts outside the cloud infrastructure. The framework operates in two phases: Detection and Inspection. During the Detection phase, Xfilter only inspects the packet header. Once an attack is detected, it automatically passes to the Inspection phase, where additional information about the packet is extracted through introspection (name and ID of the process that initiated the transfer, port number, destination IP, etc.). Then a rule is automatically generated that blocks all packets with that particular set of characteristics.
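To make this two-phase approach concrete, the following minimal Python sketch illustrates the general idea of header-only detection followed by introspection-enriched rule generation. It is a generic illustration rather than Xfilter's actual implementation: the suspicious-port criterion, the rule format and the lookup_owning_process helper (standing in for the introspection step) are all hypothetical.

    from collections import namedtuple

    Packet = namedtuple("Packet", "src_ip dst_ip dst_port payload")
    Rule = namedtuple("Rule", "process dst_ip dst_port")

    SUSPICIOUS_PORTS = {23, 2323, 445}   # illustrative header-level criterion
    blocking_rules = []                  # rules produced by the inspection phase

    def lookup_owning_process(packet):
        # Hypothetical introspection call: map an outgoing flow back to the
        # in-guest process that initiated it (name and ID).
        return {"name": "sshd", "pid": 4242}

    def detection_phase(packet):
        # Phase 1: cheap, header-only check on every outgoing packet.
        return packet.dst_port in SUSPICIOUS_PORTS

    def inspection_phase(packet):
        # Phase 2: triggered only on detection; enrich the flow with introspected
        # context and derive a rule blocking packets with the same characteristics.
        proc = lookup_owning_process(packet)
        blocking_rules.append(Rule(proc["name"], packet.dst_ip, packet.dst_port))

    def filter_outgoing(packet):
        if any(r.dst_ip == packet.dst_ip and r.dst_port == packet.dst_port
               for r in blocking_rules):
            return "DROP"
        if detection_phase(packet):
            inspection_phase(packet)
            return "DROP"
        return "FORWARD"

    # Example: the first suspicious packet triggers inspection and rule creation,
    # subsequent packets of the same flow are dropped by the generated rule.
    print(filter_outgoing(Packet("192.168.1.5", "203.0.113.7", 23, b"")))
    print(filter_outgoing(Packet("192.168.1.5", "203.0.113.7", 23, b"")))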
Due to its design, Xfilter is limited to filtering only outgoing connections and is thus unable to address all security cases that are covered by a traditional firewall. As such, it is an inadequate general-purpose traffic filtering option.

The introspection-based firewall solutions presented are unable to adapt their components based on the dynamic events that occur in a cloud infrastructure. The contribution of this thesis presented in Chapter 5 addresses dynamic events in the virtual infrastructure and adapts its components automatically. In this thesis we focus on application-level firewalls that adapt their components based on the list of services that are hosted in the cloud infrastructure.

VESPA: A Policy-Based Self-Protection Framework

In this section, we present VESPA [START_REF] Wailly | VESPA: Multi-layered Self-protection for Cloud Resources[END_REF], a self-protection framework that addresses self-adaptation of security devices as a reaction to detected attacks. VESPA was designed in order to tackle the heterogeneous nature of an IaaS cloud environment and provide lighter administration of the security devices, combined with a lower response time (i.e. when a threat is detected) and a lower error rate (e.g. false positives/negatives). The four main design principles of VESPA are:

1. Policy-based self-protection: The framework's design is based on a set of security policies that address the security objectives of the different stakeholders (i.e. tenants and the provider).

2. Cross-layer self-defence: Based on the fact that a cloud environment is composed of different software layers, the framework's response to an attack is not limited to a single layer and can involve protection as well as detection functions (as opposed to [START_REF] Roschke | Intrusion Detection in the Cloud[END_REF] where the framework's core functionality is detection).

3. Multiple self-protection loops: The framework offers the ability to select among different reaction paths in case of an attack. The security administrator can select between loops that offer different trade-offs between reaction time and accuracy.

4. Open architecture: The framework is able to integrate different off-the-shelf security components.

The authors created a four-layer framework that implements their four design principles. The first layer, called the Resource plane, consists of the cloud resources that need to be monitored (i.e. VMs, tenant networks, etc). The second layer, the Security plane, includes all off-the-shelf security components, which can be detection devices (e.g. IDSs) or protection devices (e.g. firewalls). The Agent plane is used as a mediator between the heterogeneous security devices and the actual decision making process. The agents that are part of the agent plane act as collectors and aggregators for the different logs produced by the devices in the security plane. Finally, the last layer, called the Orchestration plane, is responsible for making the reaction decision when an attack towards the monitored infrastructure occurs.

Although VESPA is a security framework that tries to address self-adaptation of the security monitoring devices, the authors consider only security incidents as potential sources of adaptation. Other types of dynamic events (see Section 2.2.5) are not considered; consequently, no reaction mechanisms for these events are implemented. Furthermore, VESPA does not include tenant-related security requirements in the definition of the reaction policies.
Finally, although VESPA aims at including commodity security monitoring devices into the security plane, modifications to their source code are required in order to enable compatibility with the framework. The contributions presented in this thesis adapt the security monitoring framework based on environmental changes (topology-, service- and traffic-related), as opposed to VESPA, which addresses security-incident-oriented adaptation. Furthermore, our contributions are able to respect tenant-defined security requirements in the adaptation process. Finally, our contributions do not require modifications to the detection components.

Security as a Service Frameworks (SecaaS)

Most cloud providers follow a shared-responsibility security model when it comes to cloud infrastructures: tenants are responsible for securing anything that they deploy or connect to the cloud. In order to facilitate the security monitoring of a tenant's virtual infrastructure, cloud providers offer complete monitoring frameworks in the form of products. In this section we discuss some of these products along with the list of services that they offer. Each provider offers Identity and Access Management solutions for regulating resource access (Amazon: [START_REF]Amazon Web Services Identity and Access Management[END_REF], Google: [126], Microsoft: [127]).

• Amazon: Besides the AWS WAF that we discussed in Section 2. [START_REF] Mather | Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance[END_REF]

• VMware: NSX, the network virtualization platform, offers a variety of security tools, including traditional edge firewalls that are exclusively managed by the tenant [START_REF] Waldspurger | Memory Resource Management in VMware ESX Server[END_REF] and anti-spoofing mechanisms [132] that allow users to restrict access to a set of IP addresses that are determined to be spoofed. VMware also provides integrated third-party security solutions like TrustPoint [133], which automatically detects network resources that are not yet configured by performing partial scans of the network. TrustPoint also offers remediation options such as automatically quarantining machines or uninstalling infected applications.

• Microsoft: Advanced Threat Analytics [START_REF]Microsoft Advanced Threat Analytics[END_REF] is a specialised tool for detecting distributed attacks that generate seemingly unrelated events. The tool flags incidents that deviate from a previously established normal application behavior. Cloud App Security [135] is another solution for identifying applications that use the network and for creating and enforcing customised filtering rules. This product targets SaaS cloud infrastructures.

This thesis proposes a design for a self-adaptable security monitoring framework with two separate instantiations (one for NIDSs and one for firewalls). Our approach borrows elements from Security as a Service frameworks (e.g. integration of tenant security requirements and traffic filtering based on the type of hosted applications) but does not offer a full set of security services like industrial SecaaS solutions.

Summary

This chapter gave an overview of the state of the art for this thesis. We started with a description of autonomic computing along with its key characteristics. Then the concept of cloud computing was introduced. Together these two complementary computing paradigms form the context in which the contributions of this thesis were developed.
We then focused on describing the IaaS cloud management system that was used in the deployment of our prototype, OpenStack. A description of network virtualization techniques and network management in OpenStack followed. Afterwards, we turned our attention to security threats in traditional information systems and cloud environments. We then presented an overview of the main components of a security monitoring framework, focusing on two security components: Intrusion Detection Systems and Firewalls. The key observations from this chapter are:

• IaaS cloud environments are very dynamic. We have identified three main change categories: Topology-related, Traffic-related and Service-related changes. Despite the numerous available cloud security monitoring frameworks, there are no solutions that address all three types of dynamic events. VESPA, a policy-based self-protection framework, addresses adaptation of the security monitoring framework but focuses on security events as the main source of adaptation (instead of the three types mentioned before).

• Although some of the industrial solutions discussed (e.g. the Amazon Web Application Firewall) include the option of integrating tenant-specific security requirements in the form of filtering rules, the rule generation is not automatic, forcing the tenants to write and install the rules themselves.

• Component sharing between tenants is essential in a cloud environment where multiple VMs are deployed on the same physical host. Although the described solutions recognize the necessity of a multi-tenant monitoring framework, it still remains a design requirement that has not been implemented to the best of our knowledge.

Security monitoring for tenant virtualized infrastructures has yet to receive significant attention in the cloud community. Although efforts aimed at including quality of service guarantees for different services in a cloud environment have been made [START_REF] Serrano | SLA Guarantees for Cloud Services[END_REF], security monitoring requirements are still not included in cloud SLAs. To our knowledge, a self-adaptable security monitoring framework that is able to adapt to the dynamic events of a cloud environment, allow tenant-driven reconfiguration of the monitoring devices and enable component sharing in order to minimise costs has yet to be implemented. The goal of this thesis is to design a framework that is able to address the main limitations of current solutions discussed in the state of the art. In the following Chapter 3, we present the high-level design of our framework. Our framework's two instantiations incorporate different concepts presented in the state of the art, namely intrusion detection systems and application-level firewalls. Our first instantiation, presented in Chapter 4, is a self-adaptable intrusion detection system tailored for cloud environments. In our second instantiation, presented in Chapter 5, we propose a novel design for securing an application-level firewall using virtual machine introspection. Our firewall is able to automatically reconfigure the enforced ruleset based on the type of services that run in the deployed VMs. To our knowledge, none of the firewall solutions discussed are able to achieve this.

Chapter 3

A Self-Adaptable Security Monitoring Framework for IaaS Clouds

Introduction

In the previous chapter we presented the state of the art in security monitoring for IaaS cloud infrastructures.
Our analysis has shown that the existing solutions fail to address all three categories of dynamic events in a cloud infrastructure (topology-, monitoring load- and service-related changes) while at the same time integrating monitoring requirements from different tenants. To address this limitation we have designed a self-adaptable security monitoring framework for IaaS cloud environments that is able to:

1. Take into account the various kinds of dynamic events in a cloud infrastructure and adapt its components automatically.

2. Take into account tenant-specific security requirements and reconfigure the security devices in such a manner that the resulting configuration respects these requirements.

3. Provide accurate security monitoring results without introducing new vulnerabilities to the monitored infrastructure.

4. Minimise costs for both tenants and the provider in terms of resource consumption.

In order to illustrate the practical functionality of our framework, we use a simplified example of a cloud-hosted information system. We use the same example throughout the thesis in order to provide consistency for the reader. This chapter presents the design and implementation of our framework. It is structured as follows: Sections 3.2 and 3.3 present the system and threat model under which we designed our framework. Section 3.4 details the objectives of our framework while Section 3.5 presents our simplified example. Section 3.6 details the high-level design of the adaptation process when a dynamic event occurs. The main components of our framework along with key implementation details are presented in Sections 3.7 and 3.8 respectively. Finally, Section 3.9 summarises our first contribution.

System Model

We consider an IaaS cloud system with a cloud controller that has a global overview of the system. Tenants pay for resources that are part of a multi-tenant environment based on a Service Level Agreement (SLA). Each tenant is in control of an interconnected group of VMs that hosts various services. No restrictions about the type of deployed applications are imposed on tenants. The VMs are placed on available physical servers, which are shared between multiple VMs that may belong to different tenants. The cloud provider is responsible for the management and reconfiguration of the monitoring framework's components, and tenants can express specific monitoring requirements through the SLA or a dedicated API that is part of the monitoring infrastructure. A tenant's monitoring requirements include: 1. security monitoring for specific types of threats (e.g. SQL injection attempts, worms, etc.) at different levels of the virtual infrastructure (application, system, network), and 2. performance-related specifications in the form of acceptable values (thresholds) for monitoring metrics. An example of a tenant-specified threshold could be the maximum accepted value for the packet drop rate of a network intrusion detection system. The tenant specifications may lead to the reconfiguration of security monitoring devices that are shared between tenants or between tenants and the provider. The cloud controller is responsible for providing networking capabilities to the deployed VMs. Two types of networks are constructed: an internal one between VMs that belong to the same tenant and an external one that is accessible from outside the infrastructure.
Each deployed VM is assigned two IP addresses and two domain names: an internal private address and domain name, and an external IPv4 address and domain name. Within a tenant's virtual infrastructure, both domain names resolve to the private IP address, while outside, the external domain name is mapped to the external IP address.

Threat Model

We consider only software attacks, originating from inside or outside the cloud infrastructure. We assume that, like any legitimate tenant, an attacker can run and control many VMs in the cloud system. Due to multiplexing of the physical infrastructure, these VMs can reside on the same physical machine as potential target VMs. In our model an attacker can attempt a direct compromise of a victim's infrastructure by launching a remote exploitation of the software running on the deployed VM. This exploitation might target different levels in the victim's infrastructure (system, network, applications). We consider all threats described in Section 2.4.1 to be applicable to a victim's VMs. Upon successful exploitation, the attacker can gain full control of the victim's VM and perform actions that require full system privileges, such as driver or kernel module installation. Malicious code may be executed at both user and kernel levels. The attacker is also in a position to use the network. We consider all attacker-generated traffic to be unencrypted. In this work we consider the provider and its infrastructure to be trusted. This means that we do not consider attacks that subvert the cloud's administrative functions via vulnerabilities in the cloud management system and its components (i.e. hypervisor, virtual switch, etc). Malicious code cannot be injected in any part of the provider's infrastructure, and we consider the provider's infrastructure to be physically secure.

Objectives

The goal of this thesis is to design a self-adaptable security monitoring framework that detects attacks towards tenants' virtualised information systems. We have defined four key properties that our framework needs to fulfill: self-adaptation, tenant-driven customization, security and correctness and finally, cost minimization. In this section we detail each of them.

Self-Adaptation

Our framework should be able to automatically adapt its components based on dynamic events that occur in a cloud infrastructure. Consequently, the framework should be able to alter the existing configuration of its monitoring components, instantiate new ones, scale up or down the computational resources available to monitoring components and finally, shut down monitoring components. We distinguish three adaptation categories depending on their source:

• Service-based adaptation: In this category the framework's components need to be adapted due to a change in the list of services that are hosted in the virtual infrastructure. Addition or removal of existing services could impact the monitoring requirements and thus require the instantiation of new monitoring devices or the reconfiguration of existing ones.

• Topology-based adaptation: In this category, the source of adaptation lies in changes in the virtual infrastructure topology. Sources of these changes include tenant decisions regarding VM lifecycle (i.e. creation, deletion) and provider decisions regarding VM placement (i.e. migration). The security monitoring framework should be able to adapt its components in order to guarantee an adequate level of monitoring despite the new virtual infrastructure topology.
• Monitoring load-based adaptation: In this category, the framework needs to react to changes in the monitoring load. In the case of network traffic monitoring, an increase in the traffic flowing towards and from applications hosted in the virtual infrastructure would trigger an adaptation decision that would guarantee that enough processing power and network bandwidth (if the monitoring device is analyzing network traffic) is provided to the monitoring components. An adaptation decision could also involve the instantiation of a new monitoring probe that will be responsible for a particular traffic segment. In the case of VM activity monitoring, a sudden increase in inside-the-VM activity (i.e. running processes, open files, etc) could lead to altering the computational resources available to the security probe monitoring that particular VM.

Tenant-Driven Customization

Our framework should be able to take into account tenant-specific security requirements. These requirements include application-specific monitoring requests (i.e. requests for detecting specific types of attacks depending on the application profile) and monitoring metrics requests (i.e. detection metrics or performance metrics for the monitoring devices). The framework should be able to consider a given tenant's requirements in reconfiguration decisions and enforce these requirements on the affected monitoring devices.

Security and Correctness

Our framework should be able to guarantee that the adaptation process does not introduce any novel security vulnerabilities in the provider's infrastructure. The reconfiguration decisions should not introduce any flaws in the monitoring devices and should not affect the framework's ability to maintain an adequate level of detection. The monitoring devices should remain fully operational during the reconfiguration process.

Cost Minimization

Our framework should minimise costs in terms of resource consumption for both tenants and the provider. Deploying our framework should minimally affect the provider's capacity to generate profit by multiplexing its physical resources. The distribution of computational resources dedicated to monitoring devices should reflect a tenant-acceptable trade-off between computational resources available for monitoring and computational resources available for VMs. The performance overhead imposed by our framework on tenant applications that are deployed inside the monitored VMs should be kept at a minimal level.

Example Scenario

Our simplified example of a cloud-hosted information system is depicted in Figure 3.1. In this simplified example, we include only two types of monitoring devices: network-based IDSs and firewalls. The traffic flowing towards and from the VM on node parapide-18 is monitored by a network-based IDS named suricata79 residing on a separate node with IP 172.16.99.38, while the traffic flowing towards and from the VM on node parapide-32 is monitored by another network-based IDS named suricata65 residing on the same node. Each compute node has a firewall at the level of the virtual switch (named f-parapide18 for the compute node parapide-18 and f-parapide32 for the compute node parapide-32). Finally, an edge firewall named f-ext1 is responsible for filtering the traffic that flows towards and from the cloud infrastructure to the outside world.
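To fix ideas, the monitoring assignment of this example scenario can be captured by two small mappings, sketched below in Python. The device and node names come from the scenario description; the data structures themselves are only illustrative and do not correspond to the prototype's internal representation (which stores equivalent information in the topology file shown later in Listing 3.3).

    # NIDS responsible for each compute node's traffic, and the node hosting it.
    nids_per_compute_node = {
        "parapide-18.rennes.grid5000.fr": {"nids": "suricata79", "nids_host": "172.16.99.38"},
        "parapide-32.rennes.grid5000.fr": {"nids": "suricata65", "nids_host": "172.16.99.38"},
    }

    # Firewalls: one per virtual switch on each compute node, plus the edge firewall.
    firewalls = {
        "parapide-18.rennes.grid5000.fr": "f-parapide18",
        "parapide-32.rennes.grid5000.fr": "f-parapide32",
        "edge": "f-ext1",
    }

    def devices_monitoring(compute_node):
        # Devices involved in monitoring a VM hosted on the given compute node.
        return [nids_per_compute_node[compute_node]["nids"],
                firewalls[compute_node],
                firewalls["edge"]]

    # Example: devices monitoring a VM hosted on parapide-32.
    print(devices_monitoring("parapide-32.rennes.grid5000.fr"))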
Adaptation Process

After defining the four main objectives of our monitoring framework, we now describe the three levels of the adaptation process. The process begins from the adaptation sources, which can be either dynamic events or changes in the cloud infrastructure (topology-, service- or monitoring load-related) or evolving tenant security requirements. It continues with our framework's decision making. Finally, the adaptation decision is enforced by reconfiguring the affected security devices.

First, the adaptation process is triggered by either a change in the cloud infrastructure (i.e. service-, topology- or monitoring load-related) or a tenant-specific security requirement. All necessary information is extracted and forwarded to the adaptation framework. Depending on the type of change, different information is propagated to the framework:

• Service-related change: type of service and technical specifications (e.g. port numbers or range, protocol, authorized connections/users, etc).

• Topology-related change: ID of the affected VM along with network information (e.g. internal/external IP, port on the virtual switch, etc) and the physical node hosting the VM.

• Monitoring load-related change: device-specific metrics that demonstrate the effect of the monitoring load fluctuation on the monitoring functionality (e.g. packet drop rate, memory consumption, etc).

The information extracted from a tenant security requirement includes: specific security events (e.g. attack classes or specific threats) and monitoring metrics (e.g. packet drop rate). The propagated information is extracted from different sources (i.e. the cloud engine, monitoring devices, SLA, etc).

Once the adaptation framework receives the propagated information, it starts making the adaptation decisions. The first step in the decision making process is identifying the monitoring devices affected by the adaptation. The adaptation framework is able to extract the list of the devices based on the VMs involved in the dynamic events. Depending on the monitoring strategy selected, the group of VMs assigned to a specific monitoring device could be determined based on their physical location (e.g. an NIDS monitoring the traffic that flows towards and from all VMs that are deployed on a particular compute node). The framework has full access to topology and networking information for each monitoring device. This information includes: 1. name and IP address of the physical node hosting the device (e.g. if a device is running in a container), 2. IP address of the device if applicable, 3. list of other co-located devices and finally 4. list of computational resources available on the node hosting the device (e.g. CPU, memory, etc). After the adaptation framework has identified the list of affected monitoring devices, it makes the adaptation decision. The adaptation decision can imply the reconfiguration of the monitoring devices so that monitoring for specific types of threats is included or removed. It can also imply the instantiation of a new monitoring device. The framework can also decide to assign more computational resources to a group of monitoring devices in order to be able to better manage their computational load. After the decision has been made, it is translated to device-specific reconfiguration parameters by dedicated framework components. The final stage of the adaptation process is executed at the level of the monitoring devices.
The device-specific reconfiguration parameters are taken into account and the monitoring devices are adapted accordingly. The adaptation framework is able to maintain an adequate monitoring level even during the reconfiguration phase, either by using live reconfiguration capabilities of the devices (when applicable) or by incorporating other strategies which enable later inspection of activity (e.g. a temporary clone of an HIDS, storing traffic for later inspection in the case of an NIDS). After the adaptation process is complete, the affected monitoring devices are fully operational. Although we consider network reconfiguration events such as network migrations part of topology-related changes, our framework does not handle network reconfiguration events at this stage.

Architecture

This section presents the architecture of our self-adaptable security monitoring framework. First, a high-level overview of the system is presented, followed by a description of the functionality of each component.

High-Level Overview

The high-level overview of our framework's architecture is shown in Figure 3.2. The figure depicts an IaaS cloud with one controller and two compute nodes on which the tenant's virtualised infrastructure is deployed. Different components of our self-adaptable security monitoring framework are included in the figure. A dedicated node is used for hosting different network IDSs, while an edge firewall filtering the traffic between the outside world and the cloud is deployed on a standalone host. Firewalls are also included at the level of the local switches on the compute nodes. Finally, a log aggregator collects and unifies the events produced by the different types of security devices. Our framework is composed of three different levels: tenant, adaptation and monitoring devices. The monitoring devices level consists of probes (NIDS and firewalls in Figure 3.2) as well as log collectors and aggregators. The adaptation level consists of all the framework's components that are responsible for designing and enforcing the adaptation process. A dedicated Adaptation Manager, which can be located in the cloud controller, acts as a decision maker. Dedicated components named Master Adaptation Drivers (MAD), located on the nodes that host the monitoring devices and responsible for translating the manager's decisions into component-specific configuration parameters, are also part of this level. A MAD can be responsible for one or several monitoring devices hosted on the same node. After presenting a high-level overview of our framework's architecture, we now describe each component in detail.

Tenant-API

One of our framework's core objectives is integration of tenant-specific security monitoring requirements. Tenants can request monitoring against specific attack classes depending on the profile of their deployed applications (e.g. for a DBMS-backed web application a tenant can request monitoring for SQL injection attempts). Furthermore, tenants may have specific requests regarding the quality of monitoring in terms of device-specific metrics (e.g. a tenant can request a lower threshold for the packet drop rate of an NIDS). In order to facilitate tenant requirement integration, our framework provides a dedicated API that is exposed to the tenants and allows them to express monitoring specifications in a high-level manner. Essentially, the API performs a translation between the tenants' monitoring objectives, which are expressed in a high-level language, and our framework-specific input sources.
Our monitoring framework then takes into account the outcome of the translation for making an adaptation decision. We now describe the API design. The design of our API is organised in three distinct parts. We detail each one. In order to simplify authentication, we have made the design choice to integrate our API into the provider's API and make it available through the web.

Tenant-exposed part: The first part of our API is directly exposed to the tenants. Each tenant uses its unique identifier in order to access the tenant-exposed part through the web. After successful authentication, the tenant has access to the list of monitoring services that are activated in its virtual infrastructure, along with a detailed record about each service. The information available about each monitoring service is: attack/threat classes (e.g. SQL injection, cross-site scripting, etc), the list of VMs that are under this monitoring service and finally, a time field that specifies when this option was activated. A tenant can add a new monitoring service or remove an existing one through a dedicated add/delete option in the API. In the event of a new monitoring service addition, the tenant is given the option to select a monitoring service only amongst the ones that are available/supported by the self-adaptable monitoring framework. After selecting the monitoring service, the tenant adds the IDs of the VMs that it wants this service to be applied on. A second option available for tenants is tuning of SLA-defined monitoring metrics. Each tenant has access to a list of SLA-defined monitoring metrics and can increase or decrease their value. Finally, a list of the applications that are deployed on its provisioned VMs is provided by each tenant. The information available for each application is:

• its name.

• a connectivity record. In the connectivity record the tenant specifies network-related information about the service. This information includes the list of ports that the service is expected to use and the list of restricted IPs that are allowed to interact with the application (if applicable).

• the ID of the VM that the service is running on.

Translation part: The translation part of our API lies one level lower than the tenant-exposed part and actually performs the translation from the high-level description of tenant requirements to framework-specific information. The translation part parses the tenant-generated input and performs two functionalities for each monitoring service: 1. mapping of the high-level service name to a framework-specific service name (if required), and 2. mapping of the instance names to VM cloud-engine IDs. Furthermore, the translation part extracts the names of the applications along with the number of ports and the list of allowed IPs (if applicable). The extracted information forms the necessary records required by our framework in order to make an adaptation decision. Finally, in order to allow our framework to make adaptation decisions on a VM basis, the information is grouped in a VM-based manner (cloud engine ID of the VM, list of running processes and network connectivity, monitoring services). As a last step, the translation part generates a framework-readable file with a specific format (e.g. XML format) with the VM-based information and the tenant-defined values of the SLA-specified monitoring metrics. The generated file is unique per tenant.
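As an illustration of this last step, the following minimal Python sketch emits such a per-tenant file with the standard xml.etree.ElementTree module. The element and attribute names, as well as the port numbers, are assumptions made only for the sketch; the kind of information stored (VM IDs, services with their ports and allowed IPs, requested monitoring services and metric thresholds) follows the description above and the example of Section 3.5.

    import xml.etree.ElementTree as ET

    def write_sla_info(tenant_id, vms, metrics, path="sla_info.xml"):
        # Group the translated information per VM and serialize it to XML.
        root = ET.Element("tenant", id=tenant_id)
        for vm in vms:
            vm_el = ET.SubElement(root, "vm", id=str(vm["id"]))
            for svc in vm["services"]:
                svc_el = ET.SubElement(vm_el, "service", name=svc["name"])
                svc_el.set("ports", ",".join(str(p) for p in svc.get("ports", [])))
                svc_el.set("allowed_ips", ",".join(svc.get("allowed_ips", [])))
            for requested in vm["monitoring"]:
                ET.SubElement(vm_el, "monitoring", type=requested)
        for name, value in metrics.items():
            ET.SubElement(root, "metric", name=name, value=str(value))
        ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

    # Usage consistent with the example of Section 3.5: one VM running an SSH
    # server and an SQL-backed web server, worm monitoring, 5% drop-rate threshold.
    # Port numbers and the allowed-IP list are illustrative.
    write_sla_info(
        tenant_id="74cf5749-570",
        vms=[{"id": 27,
              "services": [
                  {"name": "ssh", "ports": [22], "allowed_ips": ["192.168.1.2", "192.168.1.3"]},
                  {"name": "apache2", "ports": [80]},
                  {"name": "sql", "ports": [3306]}],
              "monitoring": ["worm"]}],
        metrics={"drop_rate_threshold": 5},
    )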
The file depicting the types of services along with specific monitoring requirements for the VM with ID 27 of the example in Section 3.5 can be found in Listing 3.1. In the example of Section 3.5, the tenant with ID 74cf5749-570 has provisioned only one VM, on which it deployed an ssh server and an SQL-backed web server. It requested additional monitoring against worms and it accepts a drop rate (for an NIDS) that does not exceed 5%. Each time a tenant expresses a new monitoring requirement, the file is regenerated. After describing the different parts of our tenant-exposed API and their functionalities, we continue by detailing another type of component of our framework, the security devices.

Security Devices

Security devices include all devices and processes that perform the actual monitoring functionality. The types of devices included are: intrusion detection systems (network or host based), firewalls, vulnerability scanners, antiviruses, etc. The monitoring devices can be installed at any point in the cloud infrastructure and can monitor part of the tenants' or the provider's infrastructure. Although the monitoring devices perform different types of monitoring under different configurations, the common denominator between all types of devices is the production of detailed log files. In order to efficiently manage and unify logs originating from the security devices, we include log collectors and aggregators in this category (although they do not perform actual monitoring tasks). Log collectors can be co-located with one or multiple monitoring instances and can perform local or remote collection of logs. Aggregators are responsible for looking for specific patterns, defined by the framework's administrator, inside the log files and summarizing events.

Adaptation Manager

The Adaptation Manager (AM) is one of our framework's core components. It is responsible for making the adaptation decisions that affect the monitoring devices of the monitoring framework. The AM is able to handle dynamic events inside the cloud infrastructure and guarantees that an adequate level of monitoring is maintained. The Adaptation Manager has a complete overview of the state of the monitoring framework, which comprises the following information:

• topological overview: list of monitoring devices and their location (nodes on which they are deployed and IP addresses of the nodes),

• functional overview: a mapping between VMs and monitoring devices. One device can be mapped to multiple VMs and vice versa.

The functional overview of the system provides the necessary information regarding which monitoring device is monitoring which subset of the deployed VMs. Depending on the monitoring strategy selected, a monitoring device can be responsible for all the VMs that are hosted in a particular location (e.g. an NIDS monitoring the traffic that flows towards and from all the VMs deployed on a specific compute node). Upon the occurrence of a dynamic event (e.g. VM migration), the AM performs the actions presented in Algorithm 1 in order to make an adaptation decision:

• Map the ID of the VM affected by the change to the list of services running inside the VM (line 2 in Algorithm 1). This is done by parsing the information provided by the API-generated file (sla info.xml in Listing 3.1).

• Identify the monitoring devices responsible for the affected VM (line 3 in Algorithm 1). These are the monitoring devices that will be adapted.
This is done by using information that is provided by the Component Dependency Database (see Section 3.7.6). The information regarding the list of running services and the list of monitoring devices that are going to be adapted is combined in a single file called vm information file. The resulting file for the example information system described in Section 3.5 can be found in Listing 3.2. In the example of Section 3.5, three services are deployed on that particular VM with ID 27 (ssh server, apache web server and an SQL database) while the VM is monitored by a signature-based IDS named suricata65.

Listing 3.2 -VM information file

• Decide on the type of reconfiguration required (line 5 in Algorithm 1). Depending on the type of monitoring devices and the event category, different reconfiguration types might be necessary (e.g. rule addition or removal, module activation, white list creation, new probe instantiation, computational resource redistribution, etc).

• Propagate the reconfiguration parameters to the agents responsible for enforcing the adaptation decision (line 6 in Algorithm 1).

In case of a topology-related dynamic event, all steps are performed, while in the case of a service- or monitoring load-related change or a changed tenant-specific monitoring requirement, only steps 3 to 6 are performed. The AM is also responsible for handling performance degradation of the monitoring probes. The AM sets predefined thresholds for a set of device-specific performance metrics and then allows each monitoring device to raise an alert in case one of the predefined thresholds is violated. The AM then decides if a new probe is necessary. If a new probe is instantiated, the AM propagates the necessary information regarding monitoring load redistribution to the lower-level agents. In a cloud environment, dynamic events often occur simultaneously. In order to handle the adaptations of the security devices that originate from these events, the AM can handle multiple adaptation events simultaneously. In the event of two different adaptation decisions affecting the same existing monitoring device, we distinguish three outcomes depending on the arguments of the adaptation decisions:

• The adaptation decisions contain different adaptation arguments: In this case there is no conflict between the decisions and the reconfigurations can proceed.

• The adaptation decisions contain the same arguments or there is a partial match between the two argument sets: In this case, depending on the nature of the adaptation decisions (activation or deactivation of monitoring parameters), we can foresee two outcomes: 1. Both decisions lead to activation or deactivation of monitoring parameters: In this case there is no conflict and the reconfigurations can proceed. 2. One decision leads to activation of monitoring parameters while the other to deactivation: In this case there is a conflict between the reconfigurations. In order to guarantee an adequate level of detection, our framework adopts the strategy of keeping the matching arguments activated.

Infrastructure Monitoring Probes

The Infrastructure Monitoring Probes (IMPs) are located inside different core modules (networking, compute) of the cloud engine and are responsible for detecting topology-related changes. The detected changes include VM lifecycle (e.g. start, stop) and placement (i.e. migration) changes.
Once a topology-related change occurs, an IMP intercepts the dynamic event and extracts all the necessary VM-related information from the cloud engine. The information includes: networking records (external and internal IP address, network port on the virtual switch) and compute records (VM ID, source and destination node in case of a migration, tenant ID) of the affected instance. Then the IMP forwards this information to the Adaptation Manager in order to make the adaptation decision. Although located inside the cloud engine, IMPs do not preempt normal cloud operations (e.g. VM-lifecycle decisions or network-related reconfigurations) during the reconfiguration of monitoring devices.

Component Dependency Database

In complex security monitoring frameworks that consist of different components, interdependencies between security devices can lead to troublesome security issues. Reconfiguration of a single monitoring component can create the need for reconfiguring a set of secondary monitoring devices. In the case of our framework, an adaptation decision that was triggered by a dynamic event (e.g. a service stop inside a monitored guest) can affect separate security devices: an active monitoring device (e.g. a firewall) and a passive monitoring device (e.g. an IDS). In both devices, reconfiguration is necessary in order to reflect a change in the monitoring process that was caused by the dynamic event (e.g. delete the rules that filter traffic for the stopped service in the firewall and deactivate the rules that monitor traffic for the stopped service in the IDS). In order to facilitate identification of all affected devices when making an adaptation decision, we introduce the Dependency Database. The Dependency Database is located inside the cloud controller and is responsible for storing security device information for each monitored VM. Our dependency database consists of two separate tables, a VM info table and a Device info table, which provide respectively the functional and topological views to the Adaptation Manager. The columns of the VM info table consist of the names of all security devices involved in the monitoring of a particular VM (identified by its ID, with one VM per line). Using the VM ID as a primary key, the Adaptation Manager can extract the list of monitoring devices that are responsible for this VM. These are the devices that are affected by an adaptation decision caused by a dynamic event involving that VM. The VM info table for the VMs of the example of Section 3.5 maps each VM ID to the devices monitoring it (the NIDS monitoring its compute node, the firewall of that node's virtual switch and the edge firewall).

Implementation

We have developed a prototype for our framework from scratch in Python. We used OpenStack (version Mitaka) [START_REF]OpenStack[END_REF] as the cloud management system. In order to enable network traffic mirroring, we used Open vSwitch (OvS) [137] as a multilayer virtual switch. OvS is only compatible with later versions of OpenStack that use Neutron for providing networking services for deployed VMs. Consequently, version Mitaka was selected. We used Libvirt [138] for interacting with the underlying hypervisor. This section presents a few important implementation aspects. Namely, we focus on the details of two of our framework's main components: the Adaptation Manager and the Infrastructure Monitoring Probe.

Adaptation Manager

In order for the manager to be able to handle multiple adaptation events in parallel, a multi-threaded model approach was adopted.
A master thread is responsible for receiving notifications regarding topology-related changes from the Infrastructure Monitoring Probes. The notification mechanism currently supports two versions: creating and listening to a dedicated socket, or placing a notification adapter (using the inotify [139] Linux utility) in a specific directory for tracking events (modify, close write) on the directory's files. Once the AM receives an event, it performs the steps described in Algorithm 2:

1. A worker thread is spawned for handling the considered adaptation event. In order to retrieve the information about the VM involved in the topology change, the thread parses the vm information file.xml (in Listing 3.2) using the information parser function. Using the VM ID as an identifier, the function extracts the list of services running inside the affected guest and the tenant-specific security requirements.

2. The AM makes the adaptation decision and the parameters (e.g. in case an NIDS is involved, which types of rules will be activated/deactivated and what the tenant-acceptable drop rate is) are written to a dedicated file named args.txt. In order to extract the names, types and location of the affected security probes, the worker parses a separate file (topology.txt) containing the topological and functional views necessary for the AM. The topology.txt file containing the topological and functional views for the information system described in Section 3.5 can be found in Listing 3.3.

Listing 3.3 -topology and functional information file
    Compute-Node                      IP             IDS         IDS-Node
    parapide-18.rennes.grid5000.fr    172.16.98.18   suricata79  172.16.99.38
    parapide-32.rennes.grid5000.fr    172.16.98.32   suricata65  172.16.99.38

In the example of Section 3.5, the monitoring strategy described includes one NIDS per compute node. All NIDSs are deployed on the same node. Once a VM migration occurs, for example for the VM with ID 27, the master thread receives the network-related information from the IMP (public IP = 172.10.24.195, private IP = 192.168.1.5, source = parapide-18.rennes.grid5000.fr, destination = parapide-32.rennes.grid5000.fr, port name on the virtual switch of the destination node = qvb1572). Once it receives this information, the worker thread parses the vm information file.xml and topology.txt files and extracts the list of services running in the migrated VM (sshd, apache2, sqld), the additional tenant-defined monitoring requirements (worm), the tenant-specific monitoring metrics (drop rate threshold of 5%) and finally the names of the NIDSs that are responsible for monitoring the traffic on the source and destination nodes (suricata79 and suricata65 respectively) along with their host IP address (172.16.99.38). These NIDSs are the two devices that need to be adapted. The worker thread then writes the adaptation arguments to adaptation args.txt. The result for the NIDS monitoring the traffic towards and from the destination node (suricata65 in the example of Section 3.5) is shown in Listing 3.4.

Listing 3.4 -The file containing the adaptation arguments for an NIDS
    signature based
    suricata65
    apache2
    sql
    ssh 192.168.1.2,192.168.1.3
    worm
    5
3. The worker thread sends the dedicated file through a secure connection (using a dedicated function called ids conn) to a MAD located in the node(s) hosting the affected security devices. The ids conn function uses the IP address of the node hosting the device and the name of the security device in order to establish the connection.

4. A dedicated operator (e.g. + or -), decided by the AM, is sent together with the file containing the adaptation arguments and indicates whether the adaptation requires an activation or a deactivation of monitoring parameters. In our example, the operator sent with the file in Listing 3.4 is a +, indicating that the monitoring parameters need to be activated.

In case of an adaptation decision that affects multiple security components in different locations, a separate thread per component is created in order to facilitate the parallel transmission of the adaptation file.

3.8.2 Infrastructure Monitoring Probe

Summary

In this chapter we have described the design of a self-adaptable security monitoring framework. Our framework was designed in order to address the four main objectives: self-adaptation, tenant-driven customization, security and cost minimization. In this chapter we described how the core component of our framework, the Adaptation Manager, orchestrates the adaptation decisions in order to meet the self-adaptation and tenant-driven customization objectives. A detailed description of the adaptation process, from the dynamic event that triggers the adaptation to the actual reconfiguration of the security probes, was presented. During the process, we have demonstrated that the Adaptation Manager respects tenant-defined monitoring metrics by including them in the adaptation parameters. The AM is able to make the adaptation decisions independently of the type of security device. Consequently, our framework is able to integrate different types of security monitoring devices. The Master Adaptation Drivers (described in more detail in the following chapter) are responsible for translating the adaptation decision to device-specific parameters. The remaining two objectives (security and cost minimization) are discussed in the following chapters. Furthermore, we described the remaining individual components of our framework and their functionality: the Adaptation Manager, which is the core of our framework and makes all the adaptation decisions; the tenant-API, which allows tenants to express their monitoring requirements and translates them into AM-readable information; the Infrastructure Monitoring Probes, which are responsible for detecting dynamic events and notifying the AM; and the Dependency Database, which holds all necessary information regarding interdependent security devices. Each component's functionality contributes to an accurate adaptation decision. Selected implementation details of two of our framework's components (the Adaptation Manager and the Infrastructure Monitoring Probes) were presented. In order to facilitate multiple adaptation decisions in parallel, the AM was implemented using a multi-threaded approach. Instead of using traditional network-based communication between different components, we opted for a faster file-based approach using the Inotify Linux utility. In order to obtain accurate and up-to-date VM-related information, we made the design choice of placing the IMPs inside core modules of the cloud engine.
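To make the flow of Section 3.8.1 concrete, the following minimal Python sketch summarises the master/worker structure described above. Only the helper names (information parser, ids conn) and the file names follow the text; the bodies shown are placeholders, and the choice of operator after a migration (activation on the destination-side NIDS, deactivation on the source side) is an assumption rather than the prototype's exact logic.

    import threading

    def information_parser(vm_id):
        # Placeholder: in the prototype this parses vm_information_file.xml and
        # returns the services and tenant-specific requirements for vm_id.
        return {"services": ["sshd", "apache2", "sqld"], "extra": ["worm"], "drop_rate": 5}

    def affected_nids(source, destination):
        # Placeholder: in the prototype this parses topology.txt and returns the
        # NIDSs monitoring the source and destination nodes with their host IPs
        # and the operator to apply (assumption: '+' on destination, '-' on source).
        return [("suricata79", "172.16.99.38", "-"),
                ("suricata65", "172.16.99.38", "+")]

    def ids_conn(host_ip, device, args_file, operator):
        # Placeholder: sends the adaptation arguments file to the MAD on host_ip
        # over a secure connection, together with the '+' or '-' operator.
        pass

    def worker(event):
        info = information_parser(event["vm_id"])
        args_file = "adaptation_args.txt"
        with open(args_file, "w") as f:
            # Loosely mirrors Listing 3.4: rule categories, requirements, threshold.
            f.write("\n".join(["signature based"] + info["services"]
                              + info["extra"] + [str(info["drop_rate"])]))
        threads = []
        for device, host_ip, operator in affected_nids(event["source"], event["destination"]):
            t = threading.Thread(target=ids_conn, args=(host_ip, device, args_file, operator))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()

    def master_loop(notifications):
        # The master thread receives topology-change notifications from the IMPs
        # (via a socket or an inotify-watched directory) and spawns one worker per event.
        for event in notifications:
            threading.Thread(target=worker, args=(event,)).start()

    # Example: a migration of VM 27 from parapide-18 to parapide-32.
    master_loop([{"vm_id": 27,
                  "source": "parapide-18.rennes.grid5000.fr",
                  "destination": "parapide-32.rennes.grid5000.fr"}])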
Two separate instantiations of our framework are discussed in the following chapters. The proposed instantiations focus on the adaptation of two different types of security devices. The first instantiation presents a self-adaptable network intrusion detection system called SAIDS, while the second instantiation presents a secure application-level introspection-based firewall called AL-SAFE. Chapter 4 SAIDS: A Self-Adaptable Intrusion Detection System for IaaS Cloud Environments In this chapter we present SAIDS the first instantiation of our security monitoring framework. SAIDS is a self-adaptable network intrusion detection system designed for IaaS cloud environments. A preliminary version of this contribution was published in [START_REF] Giannakou | Towards Self Adaptable Security Monitoring in IaaS Clouds[END_REF]. We begin with a description of SAIDS objectives in Section 4.1, followed by the presentation of individual SAIDS components in Section 4.2. Security threats are discussed in Section 4.3. The adaptation process along with events that trigger the adaptation are featured in Section 4.4. Implementation details and our detailed evaluation plan along with obtained results are described in Sections 4.5 and 4.6 respectively. Finally, Section 4.7 summarises this chapter and presents key observations. Objectives In this section we discuss in detail the objectives that SAIDS should meet. • Self-Adaptation: SAIDS should react to dynamic events that occur in a cloud environment and adapt the network intrusion detection devices accordingly. These events refer to topology-related changes in the virtual or hardware infrastructure and service-related changes. Virtual infrastructure changes are caused by tenant decisions regarding VM-lifecycle (i.e. creation, deletion) or provider decisions regarding VM placement (i.e. migration). Changes in the hardware infrastructure refer to addition or removal of servers. Service related changes refer to the addition or removal of services in the deployed VMs. • Customization: based on the type of services that are hosted on the deployed VMs SAIDS should allow tenants to customise the events that are being detected. Tenants can request monitoring against specific types of threats that refer to different levels of their infrastructure (i.e. application, system or network level). Common threats (e.g. worms, SQL injection attempts) can be detected using generic rules out of public or commercial rule repositories [141]. SAIDS should provide tenants with the ability to use custom rules (i.e. tailored for their deployed systems) for common threats in order to improve detection quality. Furthermore, tenants should be able to write and include their own customised IDS rules against more specific types of threats that target their deployed services. • Scalability: the number of deployed SAIDS IDSs should adjust to varying conditions: load of the network traffic monitored, number of physical servers in the datacenter, number of VMs in the virtual infrastructure. SAIDS should be able to alter the resources available to its IDSs in the event of a degradation in the quality of detection. Different metrics are used in order to estimate the quality of detection for which SAIDS takes into account tenant-defined thresholds. 
SAIDS uses the following metrics: packet drop rate (the value of this metric can be improved by altering the computational resources available to the SAIDS IDSs), detection rate (the value of this metric is related to the SAIDS IDSs' packet drop rate, since it also reflects the ability of an IDS to process the input stream without dropping packets, and can therefore be indirectly improved by altering the computational resources available to the SAIDS IDSs) and false positive rate.

• Security and Correctness: SAIDS should guarantee that an adequate level of detection is maintained during the adaptation of the SAIDS IDSs. The adaptation of the SAIDS IDSs should not allow attacks that otherwise would have been detected to remain undetected. Furthermore, SAIDS should not create new security vulnerabilities in the provider's infrastructure.

Models and Architecture

In this section we present the system and threat model used in SAIDS along with a detailed description of SAIDS architecture and individual components. We adopt the same system and threat model as the ones described in Chapter 3, Sections 3.2 and 3.3.

Architecture

This section describes SAIDS architecture. We first present a high-level overview of SAIDS and then we focus on describing the functionality of each individual component. SAIDS consists of four major components as depicted in Figure 4.1: the Local Intrusion Detection Sensors (LIDS), the Adaptation Worker (AW), the Master Adaptation Driver (MAD) and the Mirror Worker (MW). The LIDSs are deployed on dedicated nodes and our framework features one AW per LIDS (the AW is installed inside the LIDS). SAIDS features one MAD per dedicated node. Finally, we include one Mirror Worker per compute node.

Component Description

This section focuses on the description of each individual component's functionality. The components are run by the cloud provider.

Adaptation Worker: The AW is located inside the LIDS and has several roles: First, it is responsible for reconfiguring the enforced ruleset by reloading the new configuration file that was created by the MAD. Second, the AW can detect if the detection process has failed and restart it if necessary. Third, the AW periodically reports LIDS-specific monitoring metrics (e.g. packet drop rate) back to the MAD and ensures that during the reconfiguration process the LIDS continues to operate seamlessly, so an adequate level of detection is maintained. Finally, once the reconfiguration process has been completed successfully, the AW reports back to the MAD.

Master Adaptation Driver: A MAD is responsible for the reconfiguration and lifecycle of a group of LIDSs on a given node. In order to satisfy the scalability objective of SAIDS the MAD was designed for handling multiple reconfiguration requests in parallel. When a dynamic event occurs, the adaptation parameters are sent by the AM to the MAD. The MAD translates them to LIDS-specific rules and creates a new configuration file that contains the rules that need to be activated in the affected LIDS. In the event that a new LIDS is instantiated, the MAD is responsible for creating an endpoint for it on the local virtual switch. The MAD periodically communicates with different AW instances in order to gain access to LIDS-specific performance metrics. In case of a performance degradation the MAD is responsible for deciding between instantiating a new probe or assigning more computational resources to an existing one. Finally, the MAD can periodically obtain resource utilization information about each LIDS.
The time between two consecutive resource utilization queries can be defined by the tenant. Mirror Worker: The MW has two different roles: First, it is responsible for checking whether the traffic that flows to and from a group of VMs that are hosted in a particular compute node is correctly mirrored to the corresponding LIDS node(s). Second if a mirroring endpoint does not exist the mirror worker creates it on the underlying local switch. 4.2.1.1.5 Safety Mechanism: SAIDS features a safety mechanism inside each compute node that guarantees that the VM participating in a dynamic event (e.g. a migrated VM) does not enter an active state before the corresponding LIDS has been successfully reconfigured. The AM notifies the safety mechanism that the LIDS reconfiguration has been completed successfully. Although SAIDS has this mechanism enabled by default, in our design we allow tenants to choose whether to disable it or not. The choice between enabling the safety mechanism or not demonstrates a trade-off between security and performance. Consequently, enabling the mechanism could impact the performance of network-critical applications that run inside the affected VM. After presenting SAIDS individual components we now discuss potential security threats against SAIDS. Security Threats In this section we describe the potential vulnerabilities in SAIDS design and potential vulnerabilities added by SAIDS in the provider's infrastructure. We present our design choices for addressing each one. SAIDS Configuration Files The first type of input that is required for the adaptation of the LIDSs is a set of configuration files that are used for translating the adaptation arguments (which include any tenant-defined monitoring requests) to rule category names. The first file contains the adaptation arguments while the second file provides a mapping between specific types of tenant-deployed services and rule category names. An attacker could alter the contents of the files and create false adaptation arguments that would result in the activation of incorrect rule categories or deactivation of correct ones. These files are simple text or XML files for which SAIDS features robust parsers. The input file is pre-processed using a SAIDS-specific filter that verifies that only SAIDS-specific elements and printable ASCII strings without special characters are present in the files. Furthermore the value of each entry (i.e. monitoring request) partially matches the rule name (exact definition in Section 4.4.2), so any complex interpretation is avoided. Following up on the list of deployed services in the example of Section 3.5, the file containing the adaptation arguments after the adaptation decision can be found in Listing 4.1: The format of the file is as follows: The first two lines are reserved for the LIDS type and the name of the LIDS while the last line is reserved for comma-separated numeric values of LIDS-specific metrics. In the simplified example of Section 3.5, the tenant has only one VM with three processes running (an ssh daemon and a SQL-backed Apache server) while he requests additional monitoring for worms and accepts a drop rate of 5% from the LIDS. LIDS Rules The result of the above translation leads to enabling specific rule categories in the LIDS. Since the rules are LIDS native, they are considered safe. SAIDS Adaptation Sources The adaptation process in SAIDS is based on specific arguments that describe dynamic events (e.g. 
for a VM migration SAIDS needs the VM ID, the VM's public and private IP addresses, source and destination node, etc). Since the arguments are extracted through the IMPs from inside the cloud engine and we assume that the provider's infrastructure is safe, we consider them safe.

Connection Between SAIDS Components

The Master Adaptation Driver defines the reconfiguration parameters based on adaptation arguments that it receives from the Adaptation Manager in a dedicated file. Interception of this file by an attacker could lead to false reconfiguration decisions. We establish and maintain a secure connection between the AM and the MAD. The secure connection is established through a secure protocol [142] which provides authentication of the AM and guarantees the integrity of the data transferred.

External Traffic

Like all network-based intrusion detection systems, LIDSs can be corrupted by malicious production traffic that they analyze. SAIDS introduces a barrier between a potentially corrupted LIDS and the node hosting it by placing the LIDS in an isolated environment (e.g. a Linux container). Communication between the LIDS and the local log collector instance is facilitated through shared volumes. Although this communication is not exposed to the network, a potentially corrupted LIDS can still produce malicious logs which could corrupt the local log collector instance and ultimately lead to false logs being transmitted to tenants. To contain the propagation of corruptions of the local log collector, we also place it in an isolated environment. In the event of a corrupted log collector instance, malicious input could be introduced in the log file of the LIDS. However, since the LIDS itself does not need to read log files, this is not a security issue for the LIDS.

Adaptation process

In this section we describe the events that trigger the adaptation of the LIDSs and the different steps of the adaptation process.

Events Triggering Adaptation

SAIDS adapts its components based on dynamic events that fall into four main categories:

1. Virtual infrastructure topology-related changes: this category includes tenant-driven (i.e. VM creation, deletion) or provider-driven (i.e. VM migration) changes.

2. Hardware infrastructure topology-related changes: addition or removal of physical servers. The changes in this category are exclusively provider-driven.

3. Service-related changes: addition or removal of services on the monitored VMs.

4. Performance-related changes: effects on the quality of detection or optimization decisions regarding computational resource utilization. The effects on the detection quality are detected through LIDS-specific detection quality metrics.

In Table 4.1 we classify these events based on their origin and subsequent adaptation action. The adaptation action varies depending on the current state of the monitoring infrastructure.

Adaptation Process

We now describe the adaptation process for each one of the dynamic events described in the previous section. We focus only on the SAIDS-specific components and we omit the first stage of the adaptation, which includes the notification from the Infrastructure Monitoring Probes and the adaptation decision from the Adaptation Manager. The actions performed by SAIDS during the adaptation process were designed in order to satisfy SAIDS self-adaptation and customization objectives.
Throughout this section we use the adaptation file presented in Listing 4.1 (the adaptation file resulting from the simplified example scenario presented in Section 3.5).

Topology-Related Change

Once the Master Adaptation Driver (MAD) receives the adaptation parameters from the Adaptation Manager, the following steps are performed:

1. It checks whether the affected LIDS is running or not. If it is not running then the MAD starts a new LIDS and reconfigures the traffic distribution on the local switch of the node hosting the LIDS in order for the newly instantiated sensor to access the traffic flowing towards and from the affected VM.

2. The MAD translates the adaptation parameters to LIDS-specific configuration parameters and creates a new LIDS-specific configuration file. The configuration file contains the list of rule categories that need to be activated in the LIDS in order to successfully monitor the list of services running inside the affected VM. In our example the MAD partially matches the adaptation argument to the rule category name in order to find the right rule categories that need to be activated. A partial match is found when the adaptation argument is contained in the rule category name (e.g. worm in emerging-worm.rules). Consequently, for the worm adaptation argument the emerging-worm.rules category will be activated while for the sqld argument the emerging-sql.rules category will be activated. In case a partial match is not found, the MAD uses the second file from the SAIDS input set (see Section 4.3), which is a LIDS-specific file located in the MAD node, to translate the adaptation argument to rule category names. The file only features rule categories that cannot be partially matched to the adaptation argument (e.g. apache2 or ssh). A snippet of this file can be found in Listing 4.2:

Listing 4.2 - The userservice.conf file

In the newly created suricata configuration file the following rule categories will be activated: (a) http-events.rules, emerging-web_server.rules, emerging-web_specific_apps.rules for the web server, (b) emerging-shellcode.rules, emerging-telnet.rules for the ssh daemon and finally, (c) emerging-sql.rules for the SQL database. A part of the resulting LIDS configuration file can be found in Listing 4.3:

Listing 4.3 - The suricata.yaml file
#RULE BLOCK
# - decoder-events.rules   # available in suricata sources under rules dir
# - stream-events.rules    # available in suricata sources under rules dir
 - http-events.rules       # available in suricata sources under rules dir
# - smtp-events.rules      # available in suricata sources under rules dir
# - dns-events.rules       # available in suricata sources under rules dir
# - tls-events.rules       # available in suricata sources under rules dir
# - emerging-user_agents.rules

3. As a last step, the MAD notifies the AW, which is locally installed inside the LIDS, that a new configuration file exists and the IDS needs to be reconfigured. Upon receiving the notification, the AW checks whether the detection process is running and initialises a reload of the newly created configuration file. Once the reload is complete (i.e. the LIDS has been adapted) the AW notifies the MAD that the adaptation process was completed successfully.
In case the AW notifies the MAD that the adaptation process failed, for example due to a crash of the detection process or an unsuccessful reload of the enforced ruleset, the MAD propagates the event to the AM which then notifies the safety mechanism that the VM should not yet be resumed in the new location. Depending on the type of failure the following strategy is adopted: first, the AW will try to restart the detection process (or reload the enforced ruleset in the event of a reload failure). If it fails, it propagates the information to the MAD, which in turn instantiates a new LIDS, reconfigures traffic distribution appropriately and destroys the failed LIDS instance. The number of tries that the AW will execute before a new LIDS needs to be instantiated are decided by the MAD. The AW guarantees that during the reconfiguration phase, the LIDS will continue to operate seamlessly, thus no traffic remains uninspected. 4. Finally, the MAD notifies the Adaptation Manager that the adaptation request was served. The AM in turn notifies the safety mechanism that the VM can be safely resumed. Traffic-Related Change In order to detect degradation in the performance of an LIDS the MAD periodically queries the AW for LIDS-specific performance metrics (e.g. packet drop rate). Once the performance metric exceeds a predefined threshold, the MAD instantiates a new LIDS, with identical configuration parameters, and reconfigures the traffic distribution on the local switch so that the load is balanced between the two LIDSs. Currently the MAD can redistribute traffic load only on a VM basis (i.e. send all the traffic from and to a particular VM to a specific LIDS). Service-Related Change The adaptation process is the same as a topology related change. Since SAIDS does not feature any mechanism for automatic discovery of new services in the deployed VMs, we rely on the tenants in order to notify SAIDS for service-related events (through our framework's dedicated API). So far the description of the adaptation process focuses on the side of the monitoring probes. Although LIDS reconfiguration is essential for preserving an adequate level of detection in the virtual infrastructure, gaining access to the right portion of the traffic is also required. Each time a topology-related change occurs (e.g. VM creation or migration), the Mirror Worker is responsible for checking whether a traffic endpoint from the local switch on the compute node to the local switch of the IIDS node exists, and if not creates it. This strategy applies to hardware-related changes as well. Implementation We have implemented a prototype of SAIDS from scratch using the KVM [START_REF] Kivity | KVM: the Linux Virtual Machine Monitor[END_REF] hypervisor on a private cloud. Our cloud was deployed on OpenStack [START_REF]OpenStack[END_REF] and we used Open vSwitch (OvS) [137] as a multilayer virtual switch. To segregate VMs that belong to different tenant networks we utilised Generic Routing Encapsulation (GRE) tunnels. A span tunnel endpoint was created for mirroring traffic in the virtual switches to the LIDSs node. In this section we discuss the main implementation aspects of each SAIDS component. Local Intrusion Detection Sensors: we deploy each LIDS inside a dedicated Docker [START_REF]Docker containers[END_REF] container. Since the LIDS only runs the detection process and does not require a full operating system, we opt for containers in order to achieve minimal start time. 
Containers are also a suitable lightweight solution for achieving isolation between different detection processes. Currently our prototype features two different legacy network IDSs: Snort [4] and Suricata [5]. Each container hosts an IDS process and an Adaptation Worker responsible for managing that process. For providing access to the mirrored traffic for the LIDSs we use the ovs-docker utility. Ovs-docker allows Docker containers to be plugged on OvS-created bridges. It interacts with the virtual switch on the node hosting the LIDSs and creates one network tap per container. We select signature-based LIDSs as they are the ones requiring zero training time. We utilise OpenFlow [140] rules for distributing traffic between LIDSs. Depending on the monitoring strategy selected (e.g. one LIDS monitoring the traffic that flows towards and from a particular compute node), the traffic is distributed based on the tunnel source address of the GRE tunnel transferring the monitored traffic.

Adaptation Worker: We have created a unified version of the AW that is able to handle the signature-based LIDSs that are supported in our prototype (i.e. Suricata and Snort). The AW communicates with the Master Adaptation Driver for receiving reconfiguration requests and reporting back on the reconfiguration status using a shared folder. The AW places the shared folder under surveillance for specific events (file creation and modification) using the inotify Linux utility [139], a tool for detecting changes in filesystems and reporting them back to applications. Once the event is triggered the AW loads the new configuration file (so the new ruleset can be enforced) and calls the live rule swap functionality available in both Suricata and Snort in order to live-update (i.e. without having to restart the LIDS) the enforced ruleset. The live rule swap operation allows a user to update the enforced ruleset without stopping the IDS itself (a SIGUSR2 signal is sent to the detection process). The MAD relies on this functionality; consequently the LIDS remains operational even during the actual reconfiguration. The AW ensures that the new ruleset has been loaded by continuously monitoring the log file for an entry indicating that the new ruleset has been reloaded. Once the reload is complete the AW notifies the MAD by creating a dedicated file in the shared folder. The AW was implemented in Python.

Master Adaptation Driver: To enable managing the lifecycle and reconfiguration of multiple LIDSs, the MAD was implemented using a multithreaded approach. The MAD creates a unique folder per LIDS and uses a dedicated thread to watch this folder for changes (again using the inotify utility). Once an adaptation request arrives from the AM (i.e. a file containing the adaptation parameters is created in the watched folder) the thread starts the reconfiguration process. The MAD features IDS-specific configuration files for translating the adaptation parameters to rule categories. If the LIDS is not started yet, the thread starts it, creates a port for it on the virtual switch using the add-port command from ovs-docker and finally redirects the appropriate mirrored traffic to the created port. The last part is done by creating a dedicated OpenFlow rule that redirects the traffic from the GRE tunnel endpoint to the LIDS port.
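As an illustration of the translation step just described, the following sketch shows how the partial matching against rule category names, the userservice.conf fallback and the generation of the rule block of the Suricata configuration could be implemented. It is a simplified example under stated assumptions, not the MAD sources: the paths, the assumed userservice.conf format and the helper names are illustrative, and for brevity only the rule-files block of the configuration is regenerated rather than the full suricata.yaml.

# Illustrative sketch of the MAD translation step: adaptation arguments are mapped to
# rule categories by partial matching, with a userservice.conf fallback for arguments
# (e.g. apache2, ssh) that cannot be matched. Paths and file formats are assumptions.
import os

RULES_DIR = "/etc/suricata/rules"            # assumed location of the rule category files


def load_service_map(path="userservice.conf"):
    """Fallback mapping (cf. Listing 4.2), assumed format: 'service: cat1, cat2'."""
    mapping = {}
    for line in open(path):
        if ":" in line:
            service, categories = line.split(":", 1)
            mapping[service.strip()] = [c.strip() for c in categories.split(",") if c.strip()]
    return mapping


def translate(adaptation_args, service_map):
    """Return the rule categories to activate for a list of adaptation arguments."""
    available = [f for f in os.listdir(RULES_DIR) if f.endswith(".rules")]
    categories = []
    for arg in adaptation_args:
        # 1) partial match: the argument is contained in the category file name
        matches = [f for f in available if arg in f]
        # 2) fallback: the LIDS-specific mapping file located on the MAD node
        categories += matches if matches else service_map.get(arg, [])
    return sorted(set(categories))


def write_rule_block(categories, out="suricata.yaml"):
    """Write the rule-files block of the new LIDS configuration (cf. Listing 4.3)."""
    with open(out, "w") as dst:
        dst.write("rule-files:\n")
        for category in categories:
            dst.write(f"  - {category}\n")


if __name__ == "__main__":
    # arguments taken from the adaptation file of the running example (cf. Listing 3.4)
    args = ["apache2", "sql", "ssh", "worm"]
    write_rule_block(translate(args, load_service_map()))

The partial-matching convention keeps the mapping file small: only services whose names do not appear in any rule category name (such as apache2 or ssh) need an explicit entry, so no complex interpretation of the adaptation arguments is required.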
For tracking the resource consumption of each LIDS sensor the MAD features a special function called docker stats monitor. First, it obtains the container's ID. Then it periodically queries the cgroup of that particular ID for different runtime metrics: CPU, I/O and Resident Set Size memory. The MAD also inspects externally the packet drop rate for a particular LIDS container by collecting interface level packet drop count from inside the container namespace. The MAD was implemented in Python. Mirror Worker: It checks whether a GRE tunnel for mirroring the traffic flowing towards and from a group of VMs to the corresponding LIDS exists. If not the MW creates it. The IP of the LIDS along with the VMs IDs and the port name of the VM on the destination node are sent by the Adaptation Manager. Once the AW receives the OvS port name, it uses the list interface OvS command giving the port name as input in order to extract the port's id. The MW was implemented in Python. Safety Mechanism: we implement the safety mechanism by placing a dedicated hook inside the plug vifs Nova function which is executed on compute nodes. The plug vifs function is responsible for creating the virtual interface for the VM on the OvS bridge of the destination node. The hook halts the virtual interface creation until the LIDS reconfiguration has been completed. By placing the hook inside the function we make sure that network connectivity for the VM is not enabled until the adaptation is complete. We select the plug vifs function because it is executed in both VM creation and migration events. The safety mechanism was implemented in Python. Evaluation After presenting the most important implementation aspects of SAIDS we now present the evaluation of our prototype. We first detail the objectives of our evaluation plan along with our experimentation methodology. Finally, we discuss the obtained results along with limitations. Objectives of the Evaluation The main goal of SAIDS is to adapt the LIDSs while guaranteeing an adequate level of security, combined with adequate performance (in terms of reaction time for a full adaptation loop) and minimised cost both for tenants and the provider. We now detail the factors that affect each objective. Performance The performance objective refers to two different aspects: adaptation speed and scalability. 4.6.1.1.1 Adaptation Speed: Here we refer to the time required for SAIDS to perform a full adaptation loop, from the moment a dynamic event occurs until all involved LIDSs are successfully reconfigured. In order to have an exact calculation of the overall time we need to answer the following questions: 1. What are the different SAIDS components that are involved in each adaptation loop? Five SAIDS components are mandatorily involved in each adaptation loop: the Adaptation Manager, the Master Adaptation Driver, the Adaptation Worker, the Mirror Worker and the safety mechanism. Obviously, the overall time depends on the different tasks that each component has to complete. 2. What are the tasks performed by each component? • Adaptation Manager: makes the adaptation decision and sends the adaptation arguments to the Master Adaptation Driver. • Master Adaptation Driver: checks if the LIDS container is running and depending on the outcome, directly proceeds in generating the adapted configuration file or first starts a new LIDS container and configures traffic distribution. • Adaptation Worker: conducts the live rule update in the LIDS container. 
• Mirror Worker: checks whether a traffic endpoint from the compute node hosting the VM to the node hosting the LIDSs exists and if not creates it. • Safety Mechanism: guarantees that in the case of a VM creation or migration the VM does not enter an active state until the reconfiguration of the LIDS has been completed successfully. Different factors affect the completion time of each task, which leads us to the next question: 3. Which factors affect the execution time of each task? • Adaptation Manager: the number of the adaptation arguments affects the size of the file and consequently the time required to send it to the MAD on the LIDS node. The number of the adaptation arguments depends on the number of services running inside the monitored VMs and the number of additional monitoring rules that the tenant has requested. • Master Adaptation Driver: the number of rules that need to be activated/deactivated affects the time required to regenerate the LIDS configuration file. The time required for the remaining tasks is not affected by the adaptation arguments. • Adaptation Worker: the number of rules that are added affects the overall time required to reload the enforced ruleset. • Mirror Worker: since the MW needs to create a single tunnelling endpoint (which translates to executing two OvS commands, one for identifying the port number of the VM's port and one for creating the tunnel itself) the MW execution time is expected to be constant. • Safety Mechanism: the waiting time introduced by the safety mechanism in resuming the VM is equal to the time remaining to complete the adaptation process when the Nova function plug vifs is called on the VM destination node. Consequently, the factors that affect the completion time of the four other SAIDS components indirectly affect the execution time of the safety mechanism. We now present the second performance objective. We want to evaluate how many adaptation requests SAIDS can successfully serve in parallel. In order to achieve this we need to answer the two following questions: 1. How many full adaptation loops can SAIDS handle in parallel? Each loop is composed of three different levels: The Adaptation Manager, the Master Adaptation Driver and finally the Adaptation Worker with the LIDS (the level of the AW does not scale since the design pairs a single AW with a single LIDS). The evaluation of the overall scalability of SAIDS should be composed of the scalability evaluation of each one of the adaptation levels. Consequently, we need to calculate: (a) How many MADs can the Adaptation Manager handle in parallel? This is the scalability result of the first level of adaptation (from the AM to different MADs). To achieve this we calculate the maximum number of MADs that the AM can handle in parallel. For this phase we only vary the number of MADs. (b) How many LIDSs can a MAD handle in parallel? This is the scalability result of the second level of the adaptation (from a MAD to the LIDSs). To achieve this we need to consider the case where the number of tasks that a MAD needs to perform per LIDS is maximized. This case essentially requires the MAD to spawn a new LIDS and configure the traffic distribution on the local switch, for each adaptation request. We examine only this case as the one requiring the maximum effort on the MAD side. Since the focus of the experiment is on creating new LlDSs, rather than reconfiguring the enforced ruleset of existing ones, we only activate one rule category per IDS. 
The number of rule categories that are activated does not change the size of the LIDS configuration file (see example in Listing 4.3), thus the time required for the MAD to generate it is not impacted. Moreover, since the MAD operations are asynchronous, the time required to load the rules in each LIDS does not affect the MAD scalability. For this phase of the experiment we only vary the number of LIDSs.

2. What is the overhead imposed by the multiple parallel requests on the execution time of each adaptation loop? We would like to identify the impact of parallelism on the time required to complete each adaptation loop. The reaction time of two SAIDS components (i.e. the Adaptation Manager and the Master Adaptation Driver) is directly affected by the number of parallel requests. We compute the overhead (in seconds) in the reaction time of the two components.

Cost

We examine the associated penalties of deploying SAIDS from both the tenants' and the provider's perspective. From the provider's perspective we calculate the overhead imposed by SAIDS on normal cloud operations (e.g. VM migration), while for the tenants we examine whether SAIDS imposes any overhead on the performance of tenant applications.

• Provider-side cost: namely, what is the overhead (in seconds) introduced by SAIDS to a normal cloud operation like a VM migration?

• Tenant-side cost: since SAIDS monitoring is performed by network-based IDSs that work on mirrored traffic, SAIDS deployment does not directly affect tenant applications regardless of their profile (no latency is induced in the production network). The traffic mirroring itself can indirectly affect the applications running on the SAIDS-monitored node due to CPU consumption and physical network bandwidth usage (although this penalty is inherent to the mirroring technique and not SAIDS itself). The only SAIDS-related cost on individual tenant applications is related to the VM downtime when normal cloud operations occur.

Security and Correctness

Since one of the main SAIDS objectives is to guarantee an adequate level of detection during the adaptation time, it is clear that we need to examine whether malicious traffic is successfully identified even when the LIDSs are being reconfigured. Furthermore, we need to certify that SAIDS does not affect the detection capabilities of the adapted LIDSs and that the adaptation result is correct. We focus on the following questions:

• Are the added rules correct and operational?
• Are there any packets dropped during the adaptation time?
• Can SAIDS detect an attack that occurs during the adaptation time?
• Does SAIDS add any security flaw in the adaptation process itself or in the provider's infrastructure? In Section 4.3 we have already justified why our design choices do not add any flaws in the adaptation process and in the provider's infrastructure.

After presenting the objectives of our evaluation process, we now detail the experimental scenarios used to perform the evaluation of our SAIDS prototype.

Experimentation Methodology

This section describes in detail the experimental scenarios used in order to evaluate the SAIDS prototype. The scenarios were designed for addressing multiple evaluation objectives simultaneously. We select VM migration as a representative cloud operation that includes VM creation and deletion. For examining the security and correctness of SAIDS, we select a web server as use case.
VM Migration

The VM migration scenario simultaneously addresses the performance and cost objectives (only the provider-associated cost of deploying SAIDS). We aim at calculating the overhead imposed by deploying SAIDS on a VM migration. In this scenario we calculate the migration time of a monitored VM under two different workload cases: 1. an idle VM, i.e. no workload running in the migrated VM, and 2. a memory-intensive workload running in the migrated VM. The overall migration time depends on two factors: the memory size of the migrated VM and the workload running inside the migrated VM. The workload cases represent two different situations: the first one (i.e. the idle VM) yields the minimum migration time, so any overhead imposed by SAIDS is maximised, while the second one (i.e. the memory-intensive workload) yields the maximum migration time, so any overhead imposed by SAIDS is minimised. In both cases we examine all possible adaptation options:

• a corresponding LIDS already exists and is running on a dedicated node, thus SAIDS only needs to reconfigure the enforced ruleset.

• SAIDS needs to start the corresponding LIDS, create a port for it on the virtual switch, and redirect the mirrored traffic coming from the destination node of the VM to the LIDS port. Furthermore, SAIDS needs to check whether a tunnel for the mirrored traffic towards the LIDS node exists and, if not, create it.

In each option we calculate the reaction time of each SAIDS component.

Multiple LIDSs and Multiple MADs

This scenario focuses on the scalability objective of our evaluation plan. The multiple LIDSs and multiple MADs scenario examines the ability of SAIDS to handle multiple adaptation requests in parallel. SAIDS's scalability is examined at two different levels: the Master Adaptation Driver and the Adaptation Manager. At the Master Adaptation Driver level, we calculate the total reaction time as well as the reaction time of each phase (ruleset configuration, LIDS creation, traffic distribution). We compare the results with the adaptation of a single LIDS and calculate the scalability overhead. The only varying parameter in this experiment is the number of LIDSs. For the Adaptation Manager level we calculate how many different Master Adaptation Drivers (each one with maximized load) an AM can handle in parallel. Each MAD resides in a different node and requires a dedicated secure connection in order to transmit the adaptation arguments. We calculate the mean reaction time of the AM and we compare it with a single MAD approach in order to calculate the scalability overhead. For the evaluation, we simulate a large number of nodes using containers and we place each MAD in a separate container with a dedicated IP address. All containers are placed on the same physical node. Since each container is a completely isolated environment, the AM perceives it as a dedicated node and still needs to create a dedicated secure connection per MAD. Due to memory restrictions (our node has 24GB of memory) no LIDS is run inside the containers. Since the MAD operations are asynchronous, the fact that no LIDS is run does not affect the result. In SAIDS, an adaptation request concerning a single LIDS is represented by a file containing the adaptation arguments (one file per LIDS is sent from the AM to the MAD responsible for the adapted LIDS). Consequently, in order to simulate the maximum number of adaptation requests per MAD, we take the results from the first phase of the experiment (i.e.
the maximum number of LIDSs that a single MAD can handle) and we send the same number of files containing adaptation arguments to each MAD. The varying parameter in this experiment is the number of MADs. Web Server In this scenario we examine SAIDS ability to guarantee an adequate level of detection even during the adaptation process. For this purpose we migrate a web server and we launch multiple SQL injection attacks during the migration period. In the set up created for this scenario we have two different LIDSs (one monitoring the traffic in the source node and one monitoring the traffic in the destination node). The first LIDS is already configured to detect SQL injection attacks while the second one is not. We expect that the second LIDS will be able to detect the attacks after SAIDS adapts it. Depending on when in the migration phase the attack packets reach the victim VM we expect different outcomes. Before presenting the different outcomes we briefly discuss the migration aspect that affects the connectivity of the migrated VM. In each live migration the dirty memory pages of the migrated VM are copied from the source to the destination node until a specific threshold is reached, when the VM is momentarily paused the remaining memory pages are copied and then the VM is resumed at the destination node. Until this threshold is reached the VM continues to be active on the source node, thus the virtual interface accepting VM-related traffic is the one on the source node (consequently in our case it will be monitored by the first LIDS). In parallel with the memory pages copy, a new virtual interface for the VM is created on the destination node. After the interface is created and the copy of the pages reaches the threshold, the VM is activated on the destination node, thus the traffic is now redirected on the new virtual interface (consequently in our case it is monitored by the new LIDS). We now list the three different outcomes: 1. Attack packets reach the VM before the virtual interface has been created at the destination node. Consequently, the packets will be inspected by the first LIDS. We expect the attack to be detected since the LIDS is already configured. 2. Attack packets reach the VM after the virtual interface has been created at the destination node and SAIDS has successfully reconfigured the second LIDS. We expect the attack to be detected since the second LIDS is already reconfigured. 3. Attack packets reach the VM after the virtual interface has been created at the destination node and SAIDS reconfiguration is on-going on the second LIDS. Since SAIDS utilises the live rule swap functionality of a LIDS we expect the second LIDS to analyze the attack packets as soon as the new ruleset has been reloaded (the alert will be generated once the new ruleset is enforced and the attack packets reach the second LIDS). SAIDS features a safety mechanism that does not allow the VM to enter an active state after migration (i.e. on the destination node) before the LIDS reconfiguration has been completed. The safety mechanism guarantees that no packets will reach the VM before the new LIDS is successfully reconfigured. Furthermore, for checking whether SAIDS causes the LIDS to drop packets during the adaptation process, we compare the number of packets reaching the virtual interface of the LIDS with the number of packets that the LIDS reports as captured. Result Analysis After presenting our evaluation scenarios and the objectives that they serve we now analyze the obtained results. 
Experimental Setup

To do our experiments, we deployed a datacenter on the Grid5000 experimentation platform. Our datacenter has 5 physical nodes: one controller, one network node, two compute nodes and one separate node for hosting the LIDSs. Each physical node has 24GB of RAM and features two AMD Opteron processors (1.7 GHz, 4 cores each). The nodes run an Ubuntu Server 14.04 operating system and are interconnected through a 1Gb/s network. The LIDSs gain access to the monitored traffic through mirror ports and GRE tunnels. The LIDSs in all experiments run a Suricata NIDS process. All the VMs deployed on the physical nodes run an Ubuntu Server 14.04 operating system with 2 CPUs and 4 GB of RAM. We perform 10 executions per experiment.

VM Migration

To generate the memory-intensive workload we utilised bw_mem wr from the LMBench benchmark suite [143] with a 1024MB working set. The working set is allocated, zeroed and then written as a series of 4-byte integers. In each adaptation we only add two new rule categories that correspond to ssh traffic (emerging-shellcode.rules, emerging-telnet.rules). Since the VM is not executing a workload that generates traffic, no other rules are necessary. In this scenario we aim at proving that SAIDS imposes negligible overhead on the VM migration. The results are shown in Figure 4. In the first case, where only a reconfiguration of the enforced ruleset is required, the time until the new ruleset is loaded is 4.14s (the MAD starts the reconfiguration process as soon as it receives the adaptation arguments). The AM uses the existing connection in order to send the file with the adaptation arguments, thus we include only the time to send the file in the overhead analysis. In the second case, where a new LIDS needs to be instantiated, the time required until it gains access to the traffic is 0.97s (time for the MAD to start the LIDS and reconfigure traffic: 0.82s + time for the AM to send the adaptation arguments: 0.15s, i.e. connection establishment plus file transmission time). The creation of the tunnel endpoint in the VM destination node takes 0.19s (including the time required for the AM to send the information to the MW, which contains connection establishment and file transmission time). The overall time required for SAIDS to perform a full adaptation loop in both cases is much smaller than the overall migration time (13.9s for an idle VM and 38.2s for a VM with a memory-intensive workload). Furthermore, reconfiguring an existing IDS is a much heavier operation than starting a new one. This is due to the fact that during the reconfiguration process the AW needs to wait until the live rule swap is complete, which, depending on the number of newly added rules and potential LIDS delay in flushing its logs, can be time consuming.

Multiple MADs and Multiple LIDSs

In order to create multiple adaptation events in parallel, we wrote a dedicated script that simulates migration events by generating the same arguments that are sent to the Adaptation Manager by the Infrastructure Monitoring Probe in case of a VM migration (VM public IP, VM private IP, source and destination node, port name on the virtual switch of the destination node). From the obtained results we identify that the task of spawning a new LIDS container, which implies interacting with the Docker daemon, is the most time-consuming task.
Even with 50 parallel LIDS spawning requests, which represent the maximum number of Suricata containers that our physical node can accommodate, the mean overall reaction time for SAIDS under maximum load is 9.41s, which is still significantly lower than the 13.9s average migration time for an idle VM (see the experiment described in Section 4.6.3.2). Consequently, even if one of the 50 LIDSs that are adapted is responsible for monitoring the traffic flowing towards and from the migrated VM, still no overhead will be introduced in the VM migration (the LIDS will be instantiated before the migration is completed). Note that in the breakdown of the MAD phases, we did not include the time required for the MAD to produce the new LIDS configuration file and check whether a new LIDS is running, since their effect on the overall time is negligible (see explanation in Section 4.6.2.2). In a production environment, a usual deployment scenario includes assigning one core per LIDS in order to maintain an adequate performance level (in terms of packet loss) for the detection process. For simulating a production setup we tested SAIDS with 8 parallel adaptation requests (our machine has 8 cores). The mean overall time for the MAD was 2.08s with an individual breakdown of: LIDS creation 1.72s, switch port creation 0.32s and traffic redirection 0.01s. In this scenario, the monitoring strategy selected assigns a single LIDS for monitoring the traffic flowing towards and from a single VM (although this strategy is not optimal in terms of provider-side costs we apply it for the scalability study). Consequently, in order to generate the adaptation requests for the 50 LIDSs of each thread, we use our script to simulate 50 dynamic events (e.g. VM migrations) for 50 different VMs. In order to target the LIDSs that belong to the same MAD instance that a worker thread is handling, all the VMs of a worker thread are migrated to the same destination node. In order to extract the arguments for each one of the 50 VMs that it is handling, the worker thread needs to parse the file where all the VM-related information is stored (vm_info.xml). For generating enough tasks for the worker threads, the minimum number of VM entries in this file is computed as follows: maximum number of AM worker threads × number of VMs per thread. In this scenario we instantiate up to 100 AM worker threads, consequently the minimum number of entries in vm_info.xml is: 100 × 50 = 5000. The arguments for the adaptation of each LIDS are written to a separate file (see an example in Listing 4.1, adaptation_args.txt). Each file has a size of 219 bytes. Then, the worker thread opens a single secure connection and sends all 50 files (one per LIDS) to the MAD responsible for the 50 LIDSs. Finally, the worker thread opens a secure connection with the destination node of the migrated VMs and sends the necessary information in a file to the MW. Note that since in our simulation all VMs of a single worker thread are migrated to the same compute node, only one file is needed. Indeed, the target of this experiment is not to evaluate the scalability of the AM with respect to the number of compute nodes. This optimization allows us to gain a better insight into the scalability of the AM with respect to the number of MADs. The results are presented in Figure 4. The task that is most affected by increasing the load of the MADs for the AM is the establishment of the secure connection.
That is due to the fact that each MAD is located in a different container with a different IP address consequently a separate secure connection is necessary (multiplexing is not possible). We measure the time to send the adaptation arguments (i.e. essentially the time required to send the 50 adaptation files) on the AM side. Since we do not wait for confirmation from each MAD instance that it received the files, no delay due to network contention is observed in the result. However, since all MAD instances are essentially run on different containers on the same node, some delay in the ssh connection establishment due to the number of processes running on the node could be observed. The latter makes the result of our experiment a pessimistic outcome compared to a real world scenario where each MAD instance would be run in a separate less loaded node. Since the VM-related information for all the VMs is located in a single file the multi-threading approach does not significantly decrease the adaptation decision time (as opposed to the case of one file per VM, where each worker thread needs to parse a file with only one entry instead of 5000). Our results demonstrate that a single AM instance can handle up to 5000 LIDS instances while the per-thread response time remains under 1s. The limit in the number of LIDS instances results only from the memory capacity of the testbed used to conduct our experiments. The number of instances could be increased, if SAIDS is deployed in a different setup where the memory capacity of production nodes is significantly larger than 24 GB of RAM per node. For computing the resource consumption of an AM in terms of CPU and memory handling multiple MADs we used the pidstat tool from the sysstat suite [144], a tool used for monitoring the resource consumption of a specific task running in an OS. In each experiment we ask the first worker thread to launch pidstat immediately after it receives the adaptation arguments and we terminate the monitoring after the last worker thread has completed its tasks. With this strategy we make sure that we only compute the resource consumption of all the worker threads during the actual adaptation process. Since all the adaptation tasks in each adaptation request are performed by the worker thread responsible for that adaptation request, no other SAIDS-related process consumes resources. We set the monitoring interval at 1s. The results are shown in Table 4 The increase in the CPU usage when the number of AM worker threads increases is due to the fact that starting a new ssh session imposes an one-time CPU penalty (during the connection establishment due to the cryptographic key exchange). Our measurements compute the worst-case scenario for each worker thread which is to establish a new connection. The CPU usage is expected to decrease in average-case scenarios where SAIDS needs to reconfigure an existing LIDS, thus it can use an already established connection for sending the file containing the adaptation arguments. Correctness Analysis For the web server scenario we installed WordPress on the target VM and we used Metasploit suite [START_REF]Metasploit Penetration Testing Software[END_REF] for launching SQL injection attacks. We have created our own custom SQL injection rule which is included in the local.rules file (this file stores the user-defined rules in both Snort and Suricata LIDS). 
A snippet of the file can be found in Listing 4.4:

Listing 4.4 - The local.rules file
alert tcp any any -> $HOME_NET any (msg:"WP Sql Injection Attack"; content:"INSERT INTO wp_users"; sid:1000017; rev:1;)

The first LIDS, which monitors the traffic flowing towards and from the source compute node, is configured to detect SQL injection attempts (the custom rule is activated), while the second LIDS, which monitors the traffic that flows towards and from the destination node, is not configured (the custom rule is deactivated). In order to cover all three possibilities for the arrival time of the attack packets (before the virtual interface migration, when attack packets are processed by the old LIDS; after the virtual interface migration but before the new LIDS reconfiguration; and finally after the virtual interface migration and after the LIDS reconfiguration) we launch 10 consecutive attacks at the beginning of the VM migration.

4.6.3.4.1 Attack packets arrive before the creation of the virtual interface of the target VM on the destination node: In this case the traffic is processed by the first LIDS, so the attack is detected and an alert is generated.

4.6.3.4.2 Attack packets arrive after the creation of the virtual interface of the target VM on the destination node and after the second LIDS has been successfully reconfigured by SAIDS: In this case the enforced ruleset in the second LIDS is already reconfigured to include the custom SQL injection signature, so the attack is detected and an alert is generated.

4.6.3.4.3 Attack packets arrive after the creation of the virtual interface of the target VM on the destination node but before the second LIDS has been successfully reconfigured by SAIDS: In our strategy, the LIDS reconfiguration starts immediately after the migration command is received by the cloud API and is executed in parallel with the migration. A full adaptation cycle from SAIDS requires either 4.14s (existing LIDS reconfiguration) or 0.97s (new LIDS deployment), while the migration of the target VM requires, in the best-case scenario (idle VM), 13.9s (see the experiment described in Section 4.6.3.2). In this case the migration of the virtual interface of the target VM (executed by the plug_vifs function) always occurs after the 10th second in the migration cycle. As a result, the second LIDS reconfiguration has been completed before the migration of the virtual interface of the target VM occurs. Consequently, the SAIDS adaptation cycle has already been completed and the LIDS has already been reconfigured. Indeed, attack packets never reach the new virtual interface on the destination node before the SAIDS reconfiguration is complete.

For the two cases that refer to the second LIDS (see Sections 4.6.3.4.2 and 4.6.3.4.3), the number of packets that arrive at the virtual interface of the LIDS container is identical to the number of packets reported by the Suricata process as captured, consequently no packets are dropped during the reconfiguration phase. We chose to compare the number of packets reported by the Suricata process with the number of packets received by the LIDS container, as a comparison with the number of packets reported at any previous stage (e.g. with the number of packets copied to the mirror interface) may have included non-SAIDS-related packet loss.
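This packet-accounting check can be illustrated with a short sketch that compares the packets received on the LIDS container interface with the packets Suricata reports as captured. It is an example under assumptions: the container and interface names, the stats.log location and the capture.kernel_packets counter name are illustrative and would need to be adapted to the actual deployment.

# Illustrative sketch of the packet-accounting check; names and paths are assumptions,
# not the exact SAIDS tooling.
import subprocess


def container_rx_packets(container, iface="eth0"):
    """Packets received on the LIDS container interface, read inside its namespace."""
    out = subprocess.run(
        ["docker", "exec", container, "cat",
         f"/sys/class/net/{iface}/statistics/rx_packets"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())


def suricata_captured(stats_log="/var/log/suricata/stats.log"):
    """Last value of the capture counter that Suricata periodically writes to its stats log."""
    captured = 0
    for line in open(stats_log):
        if line.startswith("capture.kernel_packets"):
            captured = int(line.split("|")[-1])
    return captured


if __name__ == "__main__":
    received = container_rx_packets("suricata65")
    captured = suricata_captured()
    print(f"received={received} captured={captured} missed={received - captured}")

A non-zero difference between the two counters during the reconfiguration window would indicate adaptation-induced packet loss.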
After analyzing our obtained results we now discuss the limitations of SAIDS.

Limitations

SAIDS uses signature-based network IDSs and as such suffers from the inherent limitations of this type of intrusion detection. Therefore, SAIDS cannot detect unknown attacks for which a corresponding signature (i.e. rule) does not exist. Furthermore, since SAIDS works on a copy of the traffic, an additional mirror-induced delay is imposed between the time an attack reaches the target VM and the time when the alert is raised by the LIDS. Regarding the connection between different SAIDS components, according to our scalability study, a secure connection per MAD is required. This could lead to network contention in a real production environment where thousands of MAD nodes are deployed. In the scenario described in Section 4.6.3.2 we saw that SAIDS imposes negligible overhead for average-sized VMs (4GB and higher). Since the LIDS reconfiguration is completed before the VM migration is completed on the destination node, the safety mechanism does not have to halt the VM from resuming. However, SAIDS could impose some overhead on migration operations in cases of very light workloads where the overall migration time is less than 4.14s (i.e. the time required for SAIDS to reconfigure an existing LIDS).

Summary

In this chapter we presented SAIDS, the first instantiation of our self-adaptable security monitoring framework. SAIDS is a self-adaptable network intrusion detection system that satisfies four main objectives: 1. self-adaptation, 2. tenant-driven customization, 3. scalability and 4. security. SAIDS is able to adapt its components based on different types of dynamic events in the cloud infrastructure. Depending on the type of the event, SAIDS can alter the configuration parameters of existing security probes or instantiate new ones. A detailed description of the adaptation process along with the role of each SAIDS component was presented. We evaluated SAIDS under different scenarios in order to calculate the overhead of our approach on normal cloud operations, such as VM migration, and we prove that SAIDS imposes negligible overhead on a VM migration. Furthermore, we evaluated the scalability and security/correctness of our approach with dedicated simulation scenarios. Scalability was evaluated at two different levels (from the AM to multiple MADs and from a MAD to multiple LIDSs). Due to memory size restrictions imposed by our testbed, the maximum number of LIDSs that a single MAD can handle in parallel is 50, while the maximum number of MADs that a single AM can handle is 100. Overall, SAIDS can handle up to 5000 LIDSs in our current testbed, and this number could be increased, making our solution suitable for a large-scale cloud infrastructure. We have shown that SAIDS is able to detect attacks while handling dynamic events (e.g. VM migration) and is able to remain operational even during the adaptation process. The contribution presented in this chapter was focused on intrusion detection. The next chapter presents the second instantiation of our security monitoring framework, AL-SAFE, which is focused on intrusion prevention.

AL-SAFE: A Secure Self-Adaptable Application-Level Firewall for IaaS Clouds

In this chapter we present the second instantiation of our framework, which focuses on a different type of security component, the firewall.
AL-SAFE is a secure application-level introspection-based firewall designed to cope with the dynamic nature of an IaaS cloud infrastructure. This contribution was published in [START_REF] Giannakou | Al-safe: A secure selfadaptable application-level firewall for iaas clouds[END_REF]. In Section 5.1 we motivate the need for securing application-level firewalls and we present a justification of our design choices regarding AL-SAFE. The system and threat models that we adopted, along with a description of the individual components, are presented in Section 5.2. Section 5.3 presents the adaptation process while implementation details are discussed in Section 5.4. Our evaluation strategy along with the obtained results are presented in Section 5.5. Finally, Section 5.7 concludes this chapter by listing key observations.

Requirements

Application-level firewalls are an important part of cloud-hosted information systems since they provide traffic filtering based on the type of applications deployed in a virtual infrastructure. However, they are subject to attacks originating both from inside and outside the cloud infrastructure. In this thesis, we aim at designing a secure application-level firewall for cloud-hosted information systems. In a cloud infrastructure, two security domains exist: one is concerned with traffic that flows between VMs inside the virtual infrastructure (that might belong to the same or different tenants) while the other is concerned with traffic that flows between the outside world and the virtual infrastructure. Consequently, an application-level firewall should address both domains. Furthermore, a cloud-tailored application-level firewall should take into account tenant-specific traffic filtering requirements and self-adapt its ruleset based on dynamic events that occur in a cloud infrastructure. In this section we elaborate on the need for securing a cloud-tailored application-level firewall and we justify how AL-SAFE's design addresses this need. Furthermore, we detail the design principles of AL-SAFE and how they relate to the objectives of our self-adaptable security monitoring framework.

Why Should we Secure an Application-level Firewall

In contrast to typical host- or network-level firewalls which filter network traffic based on a list of rules that use IP addresses and ports, application-level firewalls operate based on a white list of processes that are allowed to access the network. This fine-grained filtering is achievable because application-level firewalls run inside the host operating system, and thus have a complete overview of the running applications and associated processes. Unfortunately, in the conventional design of application-level firewalls, isolation between the firewall and vulnerable applications is provided by the OS kernel, whose large attack surface makes attacks disabling the firewall probable. Hence, we address the following challenge: can we keep the same level of visibility while limiting the attack surface between infected applications and a trusted application-level firewall? In order to answer this question, we designed AL-SAFE. In the following section we present in detail how AL-SAFE's design addresses this impediment.

Security and Visibility

In order to address the issue of limiting the attack surface between the security device (i.e.
the firewall) and a potentially compromised VM, we designed AL-SAFE to operate outside of the virtual machine it is monitoring, in a completely separate domain. Leveraging virtual machine introspection 2.5.2.2.1 we retain the same level of "inside-the-host" visibility while introducing a high-confidence barrier between the firewall and the attacker's malicious code. As we discussed in Section 2.5.2.2.3 firewalls in IaaS clouds are managed by the cloud provider. A successful firewall solution should be able to take into account the type of services deployed in the virtual infrastructure as well as the different dynamic events that occur in a cloud environment. Consequently, a cloud-tailored firewall should be able to allow customization of the filtering rules in a per-tenant basis (service-based customization), and also adaptation of the enforced ruleset upon the occurrence of dynamic events (self-adaptation). In the following section we detail AL-SAFE's design principles. Self-Adaptable Application-Level Firewall In AL-SAFE we enabled automatic reconfiguration of the enforced ruleset based on changes in the virtual infrastructure topology (virtual machine migration, creation, deletion) and in the list of services running inside the deployed VMs. To address the need of filtering intraand inter-cloud attacks, AL-SAFE provides filtering at distinct infrastructure locations: at the edge of the cloud infrastructure (filtering network traffic between the outside world and the cloud infrastructure) and at the level of the local-switch inside each physical host (filtering inter-VM traffic). In this way AL-SAFE prevents attacks that originate both from outside and inside the cloud. We now present a list of all the design principles of AL-SAFE : • Self-adaptation: AL-SAFE's enforced ruleset should be configured with respect to dynamic changes that occur in a cloud environment, especially changes that refer to the virtual infrastructure topology. The source of these changes can be tenant decisions regarding the VM lifecycle (i.e. creation, deletion) or provider decisions regarding VM placement (i.e. migration). • Service-based customization: the enforced ruleset should be configurable to only allow network traffic that flows towards and from tenant-approved services that are hosted in the deployed VMs. Addition or removal of legitimate tenant-approved services should lead to reconfiguration of AL-SAFE's ruleset. • Tamper resistance: AL-SAFE should continue to operate reliably even if an attacker gains control of a monitored VM. In particular, the reconfiguration of the enforced ruleset should not explicitly rely on information originating from components installed inside the monitored guest. • Cost minimization: the overall cost in terms of resource consumption must be kept at a minimal level both for the tenants and the provider. AL-SAFE should impose a minimal overhead on tenant applications deployed inside the AL-SAFE-protected VMs. Models and Architecture We adopt the same system and threat models as the ones described in Chapter 3 (Sections 3.2, 3.3). We now present an overview of the events that trigger the adaptation process followed by AL-SAFE's design along with the presentation of key components. Events that Trigger Adaptation In order to satisfy the self-adaptation and service-based customization objectives, AL-SAFE is able to automatically configure the enforced rulesets on both filtering levels based on two categories of dynamic events: topology-and service-related changes. 
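To make the two event categories concrete before listing them in Table 5.1, the sketch below maps each type of dynamic event to the ruleset action it would trigger. It is an illustrative sketch only: the event names and handler functions are assumptions made for this example, not AL-SAFE's actual interface.

# Illustrative sketch: event names and handlers are assumptions, not AL-SAFE's actual interface.
TOPOLOGY_EVENTS = {"vm_created", "vm_migrated", "vm_deleted"}
SERVICE_EVENTS = {"service_added", "service_removed"}

def add_rules(ctx):
    print("add rules for", ctx)      # placeholder for the two rule generators

def remove_rules(ctx):
    print("remove rules for", ctx)   # placeholder for rule removal

def on_event(event, ctx):
    """Dispatch a dynamic cloud event to the corresponding ruleset adaptation."""
    if event in ("vm_created", "service_added"):
        add_rules(ctx)                               # new rules in both firewalls
    elif event in ("vm_deleted", "service_removed"):
        remove_rules(ctx)                            # drop the obsolete rules
    elif event == "vm_migrated":
        remove_rules({"node": ctx["source"]})        # clean up on the source node
        add_rules({"node": ctx["destination"]})      # re-inject on the destination node

on_event("vm_migrated", {"source": "compute-1", "destination": "compute-2"})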
We list the events in each category along with their source in Table 5.1: As listed in the table, virtual infrastructure topology-related changes include VM creation, migration and deletion while service list related changes include addition of new or removal of existing services on the deployed VMs. All dynamic events listed require either addition or removal of existing rules in AL-SAFE. Component Description AL-SAFE consists of five main components depicted in Figure 5.1: the edge firewall (EF), that filters network traffic between the outside world and the cloud infrastructure, a local switch-level firewall (SLF), that filters traffic in the local switch of each physical host, the Introspection component (VMI), the Information Extraction Agent (IEA), and the Rule Generators (RG), one for each firewall. All components are run by the cloud provider. AL-SAFE components are integrated in our self-adaptable security monitoring framework by interacting with the Adaptation Manager (located inside the cloud controller) and the Infrastructure Monitoring Probes (located in the cloud controller as well). The IEA takes as a parameter a tenant-defined white list of processes that are allowed to access the network (white-list thereafter). Sharing the white-list with the provider essentially implies disclosing a list of processes that are approved for using the network. In the white-list each tenant-approved network-oriented process is represented by an application entry in the XML file. The application entry has different fields: port and protocol that the process is expected to use and a list of IP address (public or private) that are allowed to connect to the process. In our example there are three processes that are allowed to use the network: an apache server and ssh daemon. Both the apache server and the ssh daemon have restrictions as to which IP addresses are allowed to interact with. We now describe the individual AL-SAFE components along with their functionality. VM Introspection The VMI component is responsible for introspecting the memory of the monitored guest. VMI is able to coherently access the VM's physical memory and uses a profile of the VM's operating system's kernel to interpret its data structures. Thus VMI first extracts the list of running processes, and then iterates over this list to check if a network socket figures in the per-process list of file descriptors. For each network socket found, VMI extracts the process name, the pid as well as source and destination ports, IP address and communication protocol. The VMI-created list is named connection list. Information Extraction Agent The IEA compares the connection list thereafter resulting from the VMI with the tenantdefined white-list of processes. The Adaptation Manager is responsible for sharing the white-list with the Information Extraction Agent through a secure channel. The AM is also responsible for sharing updated versions of the white-list (e.g. when a new tenant-approved service is added). The IEA assigns an allow tag on connections from the connection list that figure in the white-list and a block tag on all other connections. The IEA propagates the connection information together with an ALLOW or BLOCK instruction to the next component, the Rule Generators. Furthermore the IEA component keeps a record of the rules used for each VM deployed on the compute node on which it runs. 
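A minimal sketch of this matching step is shown below. The XML layout and field names are illustrative assumptions (the actual white-list of Listing 5.1 is not reproduced here), and the matching is deliberately simplified to process name, port and protocol:

import xml.etree.ElementTree as ET

# Illustrative white-list; the element and attribute names are assumptions,
# not the exact format of Listing 5.1.
WHITE_LIST = """
<whitelist>
  <application name="apache2" port="80" protocol="tcp" allowed_ips="any"/>
  <application name="sshd" port="22" protocol="tcp" allowed_ips="192.168.1.0/24"/>
</whitelist>
"""

def tag_connections(connections, white_list_xml=WHITE_LIST):
    """Assign an ALLOW or BLOCK tag to each connection reported by the VMI component."""
    approved = {(a.get("name"), a.get("port"), a.get("protocol"))
                for a in ET.fromstring(white_list_xml)}
    return [{**c, "action": "ALLOW" if (c["process"], c["port"], c["protocol"]) in approved
             else "BLOCK"} for c in connections]

# Example connection list as the VMI component might report it:
print(tag_connections([{"process": "sshd", "port": "22", "protocol": "tcp"},
                       {"process": "nc", "port": "4444", "protocol": "tcp"}]))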
Rule Generators

Due to the different types of filtering rules, AL-SAFE features one rule generator per type of firewall (one for the switch-level firewall and one for the edge firewall). Each RG creates the corresponding rules using all propagated information such as source port, source IP address, destination port, destination IP address and protocol. In the case of the switch-level firewall, the rules are grouped by VM with one rule table per VM. Each set of generated rules is then injected in its respective firewall.

Edge Firewall

The edge firewall is located at the edge of the virtual infrastructure in a separate network device and is responsible for external traffic directed towards and from the virtual infrastructure.

Switch-Level Firewall

The switch-level firewall is responsible for filtering network packets in the local switch using a list of ALLOW and BLOCK rules.

Adaptation Process

AL-SAFE automatically adapts the enforced ruleset based on changes in the topology of the virtual infrastructure and the list of services running in the deployed VMs. We present a high-level overview of the adaptation process in each one of these two cases. The adaptation steps (from introspection of the AL-SAFE-protected VM until the injection of the rules in the two firewalls) are demonstrated in Figure 5.2. Due to our periodic introspection strategy, the arrival of the migration request within an introspection period is critical. Let us define t_x as the introspection period and t_y as the time between the last start of an introspection and the moment when the migration command arrives at the source node of the deployed VM. Depending on the arrival time of the migration request we define two different cases: 1. The migration command arrives between two consecutive introspection actions. The remaining time until the next introspection (t_x - t_y) is recorded and is sent as a parameter to the destination node along with the last valid introspection-generated ruleset of the source node. The VM is resumed and the next introspection occurs after t_x - t_y. Since the VM migration command arrived between two introspections, the only way to respect the introspection period (that is, not to allow more than t_x to pass between two consecutive introspections) is to introspect after t_x - t_y time. Our strategy is depicted in Figure 5. In a migration event the proactive policy is enforced, where the last valid ruleset is injected in the switch-level firewall of the destination node before the VM is resumed.

Security Threats

We now present the security threats against specific AL-SAFE components and how they can be exploited by an attacker. We discuss our design choices for securing AL-SAFE against these attacks.

AL-SAFE Input Sources

AL-SAFE operates based on a tenant-defined white-list of processes that are authorized to use the network. An attacker could taint the contents of the white-list and allow illegitimate processes to use the network. The API-generated white-list is expressed in a simple XML format for which the parser is easy to make robust. The input file is pre-processed using an AL-SAFE-specific filter that verifies that only AL-SAFE-specific elements and printable ASCII strings without special characters are present in the file. Moreover, no complex interpretation is required since the values of each entry match fields of the firewall rules.
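A minimal sketch of such a pre-processing filter is given below; the set of accepted element names and characters is an assumption made for this example, not the exact filter used in AL-SAFE:

import string
import xml.etree.ElementTree as ET

ALLOWED_TAGS = {"whitelist", "application"}               # assumed AL-SAFE-specific elements
SAFE_CHARS = set(string.ascii_letters + string.digits + " .,:/_-")

def is_safe_white_list(xml_text):
    """Reject white-list files that contain unexpected elements or suspicious characters."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    for elem in root.iter():
        if elem.tag not in ALLOWED_TAGS:
            return False
        values = list(elem.attrib.values()) + ([elem.text] if elem.text else [])
        if any(ch not in SAFE_CHARS for value in values for ch in value.strip()):
            return False
    return True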
AL-SAFE Adaptation Arguments

AL-SAFE adapts the enforced rulesets in the two firewall levels based on topology- or service-related changes in the virtual infrastructure. Theoretically, an attacker could bypass the adaptation process or initiate an unnecessary one by tampering with the arguments of existing topology-related changes. AL-SAFE relies on the IMPs, which are located inside the cloud engine, in order to access all VM-related information (i.e. VM id, external/internal IP addresses, tap on the virtual switch, etc). The IMPs are hooks placed inside the cloud engine which copy information from the data structures used by the cloud engine to store network-related information regarding the VMs. Since the cloud engine, and the information it stores, are considered to be tamper-proof, the information extracted from the IMPs is considered accurate. Regarding service-related changes, an attacker could tamper with the adaptation process in various ways. First, by tainting the arguments of a service (i.e. process name, port, protocol, etc) in order to force AL-SAFE to allow traffic towards and from attacker-preferred ports. AL-SAFE relies on VM introspection in order to detect service-related changes. Introspection parses kernel data structures in the VMs in order to extract the list of active network sockets together with their owner process name. Consequently, the only way for an attacker to tamper with the service arguments is by controlling the VM kernel; this is an inherent limitation of all introspection-based solutions and we address it along with possible solutions in Section 5.6.3. Second, the attacker could force the introspection component to crash or exploit a software vulnerability in the component itself. The parsing phase relies on commodity tools that may be vulnerable to out-of-bound memory accesses and reference loops in the parsed structures. Out-of-bound accesses are avoided since the commodity tool that we use (i.e. Volatility, presented later in Section 5.4) is written in Python, which features automatic array bounds checking. To protect against reference loops, as a last option a timeout could be used to stop introspecting. The extracted information is only compared to the white-list of process names or inserted as port numbers (resp. IP addresses) in the filtering rules. It is thus sufficient to check that extracted values are 16-bit integers (resp. valid IP addresses).

Transfer of Reconfiguration Parameters

In AL-SAFE the tenant-defined white-list is sent from the AM located inside the cloud controller to the node hosting the monitored VM. An attacker could perform a "Man in the middle" attack during the sending phase and alter the content of the white-list. In our approach, we maintain a secure connection open at all times between the cloud controller and the compute nodes. The authentication protocol used [START_REF] Barrett | SSH, The Secure Shell: The Definitive Guide[END_REF] provides authentication of the AM and guarantees the integrity of the data transferred. Hence, an attacker has no way of intercepting or altering any part of the communication between the cloud controller and the compute nodes.

Firewall Rules

In AL-SAFE network packets are processed by the OpenFlow tables inserted in the local switch, and by the rules inserted in the edge firewall. Assuming that both filtering engines are robust, the added rules can be considered safe since the only actions allowed are to allow or drop traffic.
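The two checks mentioned above are straightforward; the sketch below illustrates them (a simple illustration, not AL-SAFE's exact validation code):

import ipaddress

def valid_port(value):
    """Extracted port numbers must be 16-bit integers."""
    try:
        return 0 <= int(value) <= 65535
    except (TypeError, ValueError):
        return False

def valid_ip(value):
    """Extracted addresses must parse as valid IP addresses."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

assert valid_port("22") and not valid_port("70000")
assert valid_ip("10.0.0.5") and not valid_ip("10.0.0.999")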
Implementation

We created a prototype of AL-SAFE from scratch using the KVM [START_REF] Kivity | KVM: the Linux Virtual Machine Monitor[END_REF] hypervisor on a private cloud. Our cloud was deployed on OpenStack [START_REF]OpenStack[END_REF] and we used Open vSwitch (OvS) [137] as a multilayer virtual switch. In this section we present key implementation details of each component.

Edge Firewall

For the edge firewall we rely on the Nftables [147] stateful packet filtering framework, which is deployed in a standalone Linux host.

Switch-Level Firewall

For the switch-level firewall our prototype features two versions. The first version uses the stateless filtering capabilities offered by Open vSwitch (i.e. essentially two rules per service are required, one for incoming and one for outgoing traffic). In the second version, AL-SAFE supports stateful filtering. The stateful filtering uses the OvS built-in connection tracking feature conn_state in order to generate rules that keep track of open connections. Each open connection corresponds to an entry in the conntrack table. When a packet that is not part of any connection arrives, our prototype creates a new entry in the conntrack table and marks the connection as tracked. Mr Fergal Martin Tricot implemented the second version of the switch-level firewall during his 3-month Master 1 internship that I co-supervised. The rules are grouped by VM (that is, by switch port), with one OpenFlow table for each VM located on the compute node. The evaluation of AL-SAFE was conducted using the first version of the prototype.

VMI

In order to introspect the memory of a running VM we used LibVMI [113] combined with the Volatility Memory Forensics Framework [148]. LibVMI [113], the evolution of XenAccess, is a C library with Python bindings that facilitates the monitoring of low-level details (memory, registers, etc) of a running virtual machine. Since KVM does not contain APIs that enable access to the memory of a running VM, a custom patch was applied that uses a dedicated Unix socket for memory access. The patch uses libvirt [138] in order to gain control over the running VM (i.e. pause, resume). Although LibVMI is not itself an introspection framework, it provides a useful API for reading from and writing to a VM's memory. LibVMI integration with Volatility [148] is done through a dedicated Python wrapper (PyVMI) that contains a semantic equivalent for each of LibVMI's API functions. Figure 5.5 shows the full software stack from the patched KVM to Volatility. Volatility can support any kernel version provided that a profile with the kernel symbols and data structures is created. The cloud provider would have to maintain a profile for each OS version deployed on the monitored VMs. As a modular framework, Volatility provides different functionalities that are implemented by plugins. Each plugin performs a certain task such as identifying the list of running processes or the list of processes that have opened sockets (like the Linux netstat command). Volatility provides support for different processor architectures through the use of address spaces. An address space facilitates random memory access to a memory image by a plugin. A valid address space of a memory image is derived automatically by Volatility and is then used for satisfying memory read requests by each plugin.
Unfortunately, Volatility was not designed to derive address spaces from memory images of running VMs, which change constantly. In order to overcome this impediment we take a snapshot of the VM's memory (using LibVMI's built-in function vmi_snapshot_create) before each introspection. The overall flow of the VMI component actions is depicted in the chart shown in Figure 5.6.

The technique for obtaining the snapshot of the running VM's memory, called stop-and-copy, copies the whole memory of the VM to a temporary file. During this time the VM is paused and cannot make forward progress. Evidently, since snapshotting a VM implies copying a significant amount of memory, the time required is not negligible. Since our VMI component performs periodic introspections, it is necessary, for a successful introspection, that the introspection period (i.e. the time between two consecutive introspections) is larger than the time required to obtain a snapshot. The relation between the time required to obtain a snapshot and the introspection period is demonstrated in Figure 5.7. Three scenarios are represented: an introspection period larger than the time required to obtain the snapshot and introspect the snapshotted file; an introspection period larger than the time required to obtain the snapshot but shorter than the time required to snapshot and introspect the snapshotted file; and finally an introspection period shorter than the time required to obtain the snapshot. We observe that defining an introspection period that is shorter than the actual time required to obtain a snapshot will result in a crash of the whole process. For enabling periodic introspection we implemented a Python wrapper that creates a Volatility object and performs plugin actions on that object at specific time intervals (i.e. the introspection period). Our wrapper is also able to adapt the introspection period on the fly based on instructions received from the Adaptation Manager. To enable VMI on dynamic infrastructure changes (e.g. VM migration), notifier hooks were placed inside the Nova function plug_vifs() that is executed on compute nodes and is responsible for creating the virtual interface(s) for the VM. The hooks pass all necessary information to VMI (VM name, id, version of running OS, etc) and start VMI immediately after the VM is resumed.

Information Extraction Agent

First, the IEA detects the differences between the last two consecutive introspection results and extracts the necessary information for rule generation (source and destination IPs, ports and protocol). Before propagating the information to the two parallel rule generators, a dedicated thread issues commands to the underlying OvS daemon (through the list interface OvS command) and obtains the ID of the OvS port that corresponds to the introspected VM. Then it checks whether an OpenFlow table with filtering rules for that port exists and, if not, creates it. The IEA stores the table number along with the VM ID in a dedicated file (table_info.txt) for later use (e.g. in case the VM is deleted, the IEA extracts the table number from the file and issues a delete command for all the rules in that table to the underlying OvS daemon). The table number along with the port ID and the necessary rule information are passed to the rule generator of the switch-level firewall. An example of the information passed to the switch-level rule generator, for the ssh process belonging to the white-list of Listing 5.1, can be found in Listing 5.2.
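Listing 5.2 is not reproduced here; the dictionary below is an illustrative reconstruction of the kind of information the IEA propagates for the ssh example discussed next (the values follow the example in the text, but the field names are assumptions):

# Illustrative reconstruction only; field names are assumptions, values follow the ssh example.
slf_rule_input = {
    "process": "sshd",        # white-listed process detected by introspection
    "protocol": "tcp",
    "dst_port": 22,           # service port inside the monitored VM
    "ovs_port": 4,            # switch port of the VM, obtained from the OvS daemon
    "of_table": 16,           # per-VM OpenFlow table, stored in table_info.txt
    "action": "ALLOW",
}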
In this example, the IEA has cross-checked the introspection result with the white-list (found in Listing 5.1) and has found that the ssh process on port 22 is allowed to use the network. Then, it acquires the OvS port number (4) for that particular VM (through the list interface OvS command) and the number of the OpenFlow table (16) where all the rules for that particular VM should be stored (kept in table_info.txt). Consequently, it propagates the necessary information to the SLF rule generator.

Rule Generators

We implemented a separate rule generator for each firewall. The edge firewall rule generator produces Nftables-compatible rules while the switch-level firewall generator produces OvS-compatible rules. To minimize the adaptation time, both rule generators are executed in parallel.

Evaluation Methodology

In this section we present our evaluation of AL-SAFE. We first present the objectives of our evaluation plan followed by our experimentation methodology. We performed the evaluation on the first version of our prototype, where the switch-level firewall is stateless (i.e. two rules per service are required, one for incoming and one for outgoing traffic). The evaluation concludes with the correctness analysis and limitations of AL-SAFE in Section 5.6.3.

Objectives of the Evaluation

The main goal of AL-SAFE is to guarantee an equilibrium of a three-dimensional trade-off between performance, security and cost. In a cloud infrastructure different stakeholders are involved (i.e. tenants and the provider); consequently the trade-offs should be explored from each stakeholder's perspective. We first discuss our approach for evaluating AL-SAFE's performance, followed by the security and cost aspects.

Performance

The aspect of performance refers to the time required for AL-SAFE to complete a full adaptation loop (i.e. from the moment a dynamic event occurs until both firewalls are successfully reconfigured). In order to get an estimation of the overall time (i.e. latency) we need to answer the following questions: 1. What is the overall time (in seconds) needed until both firewalls are successfully reconfigured? The adaptation process consists of four phases: sharing of the white-list, snapshotting-introspection, rule generation and rule insertion. The overall latency is the sum of each phase's individual latency, which naturally leads us to a second question: 2. What is the time (in seconds) required to complete each phase? Depending on the tasks performed by each phase, different components are involved:

• Adaptation Manager: It is responsible for sharing the tenant-generated white-list with nodes that host the monitored VMs. The number of entries in the list impacts the size of the file and thus the time required for sending it to the corresponding nodes.

• Rule Generators: They are responsible for generating the two separate rule categories and inserting them in the firewalls. The overall execution time for these components depends on the number of generated rules and the time required to insert them. Respectively, the number of generated rules is related to the number and type of services running inside a monitored VM and the tenant-defined white-list. Regarding the rule insertion time, for the switch-level firewall the number of rules affects the insertion time, while for the edge firewall the rules are written to a file, the file is then sent to the firewall host and finally the rules are inserted.
• Introspection: The VMI component performs two functionalities: snapshotting and introspecting. Since the technique employed for snapshotting the monitored VM is stop-and-copy the only factor that affects the snapshotting time is the size of the monitored VM's memory. Introspection time depends on different factors as follows: -Number of running processes, -Number of created sockets, -Size of the introspected file (snapshot). Security and Correctness From a tenant's perspective, AL-SAFE is an application-level introspection-based firewall. AL-SAFE needs to allow only tenant-authorized services to use the network while blocking all other malicious network activity, even when the monitored VM is compromised. From the provider's perspective, AL-SAFE needs to guarantee that no security vulnerabilities are added in the provider's infrastructure by deploying AL-SAFE. Cost Cost minimization is one of AL-SAFE's core objectives. Thus, the associated overheads both from a tenant's and the provider's perspectives need to be examined. For the provider-associated cost we calculate the performance overhead imposed by AL-SAFE in normal cloud operations (e.g. VM migration) and the system resources consumed by AL-SAFE's components. Respectively, for tenant-associated cost we calculate the performance overhead imposed by AL-SAFE on tenant applications running inside monitored VMs. • Provider-associated cost: What is the latency (in seconds) introduced by AL-SAFE to a normal cloud operation such as VM migration? and What is the cost of deploying AL-SAFE's components in a compute node in terms of CPU and RAM? All of the resources consumed by AL-SAFE are resources that cannot be assigned to virtual machines (hence cannot generate profit for the provider). Consequently an exact computation for the CPU percentage and memory consumption is required. • Tenant-associated cost: What is the cost of deploying AL-SAFE as perceived by tenant applications? In order to identify the quantitative cost induced by AL-SAFE we examine two different kinds of applications: process-intensive and networkintensive. We select these application profiles for simultaneously examining the main factors affecting each AL-SAFE component under different workloads. For the process-intensive application we identify the associated cost as the additional time required to perform a specific task. For the network-intensive application we identify the cost as the overhead induced in network throughput, application throughput and latency in connection establishment. Experimentation Methodology We now present the detailed scenarios that we used in order to perform the actual evaluation. It is worth mentioning that the scenarios were designed in order to address multiple evaluation objectives simultaneously. We select a Linux kernel build as a process intensive application and a web server and Iperf as network intensive applications. For a typical cloud operation we select a VM migration as a super case that includes VM creation (in the destination node) and VM deletion (in the source node). Each application is tested under different workload and introspection period parameters. VM Migration The VM migration scenario focuses on the provider-associated cost of deploying AL-SAFE. We aim at providing the reader with a fine-grained view of how intrusive a full adaptation loop is to VM migration. 
Although it can also provide an accurate estimation of the time (latency in seconds) required to perform each phase of the adaptation, this is not the focus of this experiment. We compute the overall migration time of a monitored VM in seconds. The scenario has two options: no workload running in the migrated VM (idle) and a memory-intensive workload running in the migrated VM. In the first case the migration time is minimal (hence the adaptation penalty is maximised) while in the second case the migration time is significantly larger (hence the adaptation penalty is minimal). In this scenario the adaptation process only affects the switch-level firewall.

Linux Kernel Build

In the Linux kernel build scenario we compile a Linux kernel inside the untrusted VM and we vary the introspection period. The scenario serves a dual purpose as it addresses both the performance and cost objectives of our evaluation plan. Depending on the objective we compute different metrics:

1. Performance of AL-SAFE: We record the time required for each of AL-SAFE's components to complete its functionality. The component that dominates the overall latency of a full self-adaptation loop in this particular scenario should be the introspection component. Since the scenario features a process-intensive application with no network activity, no rules are generated or inserted in the two firewalls. As discussed in Section 5.5.1.1, due to the snapshot technique selected, the memory size is the only parameter that influences the time required to obtain the snapshot. Regarding the introspection time, and with respect to the application profile, we identify the number of processes and the size of the introspected file (i.e. snapshot) as influencing factors.

2. Tenant-associated cost: We measure the elapsed build time in seconds. Whether parallel compilation is enabled or not and the number of virtual CPUs in the virtual machine are expected to influence the result (due to the change in the number of processes). We also consider the time between two consecutive introspections to be an influencing parameter for the overall build time. Each introspection requires a snapshot of the monitored VM which freezes the VM during the snapshot time (due to the stop-and-copy technique); consequently the overhead increases with the introspection frequency.

Apache Web Server

In this scenario we install a web server on the monitored VM for serving new client requests. The scenario serves a dual purpose with regards to the evaluation objectives:

1. Performance of AL-SAFE: it quantitatively estimates AL-SAFE's performance. Regarding rule creation and insertion, we recall that for the edge firewall a secure connection is required in order to inject the rules. Consequently, the influencing factors are the number of rules and the time required to establish a secure connection. In this case a variation in the number of requests can also indirectly influence the rule creation and insertion times. An example would be a scenario where the requests come from different client IP addresses and a list-based tenant security policy (detailed explanation below) is applied.

2. Tenant-associated cost: for calculating the tenant-associated cost of deploying AL-SAFE we compute the mean of the following values: latency induced in the response time for each new connection and service throughput.
For the new connection response time different setups are examined depending on: the location of the client, the security policy enforced and the time of the request's arrival with respect to the introspection period. We detail each one: • Location of the client: (a) The client is located in a virtual machine belonging to the same tenant: only the switch-level firewall needs to be reconfigured. (b) The client is located outside the cloud infrastructure: both the edge firewall and the switch-level firewall need to be reconfigured. (c) The client is located inside the cloud infrastructure but in a virtual machine that belongs to a different tenant. In our setup the edge firewall is located on the gateway connecting the cloud infrastructure with the outside world. When a client request from a VM belonging to tenant T1 is issued to tenant's T2 web server VM (public ip: 80) it first reaches the Neutron controller and then is redirected to the host executing the web server (essentially the request never leaves the cloud). Similarly to the first type of request, only the switch level firewall needs to be reconfigured. • Tenant-defined security policy: (a) Policy allow all : Allow every request on port 80. This policy requires only one rule in each firewall thus the number of requests does not induce any additional latency. (b) Policy allow only white-listed IPs: In this case only requests from a tenantdefined address list are allowed.The latency depends on the number of IPs in the list. Since our switch-level firewall is stateless, we generate two rules per IP one for incoming and one for outgoing traffic while for the edgefirewall (stateful) only one is enough (since the conntracking feature allows to use a general rule for established connections). (c) Policy block black-listed IPs: Reasoning is similar to the allow only whitelisted IPs policy. Every request is allowed besides the ones originating from blacklisted IPs. Again the latency depends on the length of the list. This policy can be combined with the allow all policy (i.e. allow connections from all IPs except the blacklisted ones). • Arrival of the request time in the introspection cycle: Depending on when the connection request arrives and what is the timeout period for the TCP connection, we foresee the following outcomes: (a) The request arrives before the introspection has been completed. That is: arrival of request + timeout < introspection complete. In this case the connection fails. (b) The request arrives after the introspection has finished but before the adaptation of the two firewalls has been completed. That is: introspection complete < arrival of request + timeout < adaptation complete. Again the connection fails. (c) The request arrives before the adaptation of the two firewalls has been completed but the timeout is enough for the connection to wait until the completion of the adaptation. That is: introspection complete < arrival of request + timeout ≥ adaptation complete and arrival of request + timeout > adaptation complete. In this case the connection succeeds(the port will be open). (d) The request arrives after the adaptation of the two firewalls has been completed. In this case the connection succeeds. After defining the metrics used in this scenario we now focus on the varying parameter in the different workloads that we use, that is the number of requests per second. The web server spawns new sockets in order to serve the requests. In this case we expect an increment in the introspection time. 
Regarding memory size, we test with 2048MB of memory for the VM and two virtual CPUs. The memory size represents average workload use-cases (medium traffic websites, small databases, etc) as stated in [START_REF]Amazon Web Services[END_REF]. The memory size is expected to be the only factor affecting the snapshotting time.

3. Provider-associated cost: we calculate AL-SAFE resource utilization in terms of CPU percentage and memory consumption.

This scenario refers to measuring the performance impact of a full adaptation loop for a server that is accepting incoming connections. For evaluating the impact of a full adaptation on a client located inside the virtual infrastructure (in an AL-SAFE-protected VM) attempting to connect to a web server located outside the virtual infrastructure (hence adaptation of both firewalls is required in order to allow the client traffic to pass unimpeded), we calculate the latency induced in the response time for each new connection. In this case we execute a full adaptation loop only on the client's side. Unfortunately, in contrast to the server side where the connection port is known a priori (port 80, or port 443 for https requests), the client case comes with one impediment: the port number is unknown until the client attempts to make a new connection (hence a rule that allows the connection cannot be inserted proactively in the two firewalls' rulesets). In order to overcome this impediment we include two security policies:

1. Proactive security policy: allow all traffic directed towards the server's IP and port 80. With this option all traffic towards the web server is allowed regardless of the source port.

2. Reactive security policy: wait until introspection detects the source port of the white-listed process and then insert the rule that allows traffic for that particular port only.

The proactive option clearly favors performance as it offers minimal service disturbance, but it also introduces security vulnerabilities (since no control is performed on the source process of the connection) as a potentially malicious process executing on the monitored client can gain access to the legitimate web server.

Iperf

This scenario is used to evaluate the effect of the introspection phase on the network throughput. We install Iperf [149] in the VM, which acts as a server, and we use a separate node outside the cloud infrastructure as a client. Iperf measures the maximum available bandwidth on an IP network. The selected scenario focuses on the cost objective of our evaluation plan. The computed metrics are:

1. Tenant-associated cost: we measure the network throughput in sending/receiving a TCP stream. As in the kernel compilation scenario, each time the VM is introspected a snapshot is taken, which freezes the VM during the snapshot time (due to the stop-and-copy technique); consequently the overhead increases with introspection frequency.

2. Provider-associated cost: we calculate AL-SAFE resource utilization in terms of CPU percentage and memory consumption. We compare the Iperf results with and without introspection.

Microbenchmarks

In the previous scenarios we described our methodology for measuring the overhead of a full adaptation loop on network- and process-intensive tenant applications. This section focuses on a fine-grained view of the cost of a full adaptation loop, in particular on individual connection establishment.
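The first micro-benchmark measures the setup time of a single TCP connection. A minimal way to obtain such a measurement on the client side is sketched below; it is only an illustration with a placeholder server address, not the exact client/server program used in the evaluation:

import socket
import time

def tcp_setup_time(host="192.0.2.10", port=80, timeout=30.0):
    """Return the time (in seconds) needed to complete the TCP three-way handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# Repeating the measurement while the two firewalls are being adapted exposes
# the extra latency discussed in the micro-benchmarks.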
5.5.2.5.1 TCP connection establishment time: We wrote a client/server TCP program and measure the TCP connection setup time for a single connection to a node outside the virtual infrastructure. We address both cases where either the server or the client is executed inside the monitored VM. We compare the results obtained without adaptation to the ones with the adaptation of the two firewalls.

• Server inside the AL-SAFE-protected VM: The setup of this case is depicted in Figure 5.8. Consequently the rules that are inserted in the two firewalls after the adaptation loop is complete are: Switch-level firewall: table=16, priority=10, tcp, tp_dst=80, in_port=4, actions=ALLOW for incoming traffic, and table=16, priority=10, tcp, tp_src=80, out_port=4, actions=ALLOW for outgoing traffic (since the evaluation is conducted on the first version of our prototype, two rules are required for the SLF). Edge firewall: The rule added in the input chain is: tcp dport 80 counter accept. The firewall already contains a rule for established connections, thus the reply from the server will be allowed.

• Client inside the AL-SAFE-protected VM: The setup of this case is depicted in Figure 5.9. In this case a client inside the AL-SAFE-protected VM (the same VM as in the previous example) is trying to connect to a TCP server located outside the cloud infrastructure. Since the process tcp_client is allowed to initiate connections (white-list in Listing 5.3), the rules for the two firewalls after the adaptation loop (and after the port used by the tcp_client has been discovered by the introspection) are: Switch-level firewall: table=16, priority=10, tcp, tp_src=1451, out_port=4, actions=ALLOW for outgoing traffic, and table=16, priority=10, tcp, tp_dst=1451, in_port=4, actions=ALLOW for the incoming reply. Edge firewall: The rule added in the output chain is: tcp sport 1451 counter accept. The firewall already contains a rule for established connections, thus the reply from the server will be allowed.

5.5.2.5.2 UDP round trip time: for evaluating a UDP stream setup cost we wrote a small client/server program that transmits a block of data and receives an echo reply. We measure the round trip time with and without the adaptation of the two firewalls. The setup of this case is depicted in Figure 5.10. In this case the receiver of the message is located inside the AL-SAFE-protected VM. The rules inserted in the two firewalls after the adaptation loop are: Switch-level firewall: table=16, priority=10, udp, tp_dst=68, in_port=4, actions=ALLOW for the incoming block, and table=16, priority=10, udp, tp_src=68, out_port=4, actions=ALLOW for the reply. Edge firewall: The rule added in the input chain is: udp dport 68 counter accept. The firewall already contains a rule for established connections, thus the reply will be allowed.

In both micro-benchmarks the memory of the VM, the number of processes and the number of sockets are constant. The only influencing parameter is the time of the request's arrival in the introspection period (as discussed in the web-server scenario described in Section 5.5.2.3).

Evaluation Results

After describing our evaluation scenarios and the underlying rationale, we present the results obtained from the different experiments. Section 5.6.1 presents the results from the performance evaluation of AL-SAFE while Section 5.6.2 discusses correctness aspects of our approach. Finally, AL-SAFE limitations are detailed in Section 5.6.3.
Performance and Cost Analysis To do our experiments we deployed a datacenter with three physical hosts: one cloud controller and two compute nodes. Each physical host has 48GB RAM and runs a 64bit Linux Ubuntu 14.04 distribution. The machines are interconnected with a 1Gb/s network. All the VMs deployed on the compute nodes run a 64bit Linux Ubuntu 13.10 distribution with 2 cores and 2GB RAM. We also deployed the Nftables firewall in a fourth physical host with the same hardware as our cloud nodes. All reported results are compared to a baseline value obtained without AL-SAFE. Before running our experiments we conducted a preliminary set of tests to calculate the time for a full snapshot of a 2GB VM's memory. We calculated the mean snapshot time to 1.5 seconds over 10 repetitions (standard deviation 0.05). Since the technique used copies the whole memory of the VM into a dedicated file the size of the VM is the only factor affecting the snapshot time. VM Migration To generate the memory-intensive workload we used bw mem wr from the LMBench benchmark [143] suite with a 1024MB working set. The working set is allocated, zeroed and then written as a series of 4 byte integers. In this scenario we aim at proving that the time required to reconfigure the switch-level firewall is independent from the VM workload. We executed 10 repetitions of each case. The results are presented in Figure 5 Linux Kernel We compiled a Linux kernel inside the VM and we varied the introspection period. The kernel was compiled with a configuration including only the modules loaded by the running kernel of the VM, using gcc 4.8.4 with a degree of parallelism of 3. We used the time command line utility for measuring the overall execution time. The VM is not expected to start services that use the network during the execution time of the experiment thus no adaptation of the firewalls is required. Before presenting the results, we discuss a model that estimates the minimum overhead value on the kernel compilation time. Let us define: x the time overhead introduced in the kernel compilation time, α as the mean value of the time required to take a snapshot and n the number of introspections performed during the experiment. Since in each introspection a snapshot of the AL-SAFE-protected VM is taken, a temporary freeze of the VM is performed. Consequently, the minimum overhead should be the result of the number of introspections times the snapshot time. That is min(x) = n × α. The mean value over five repetitions is shown in Figure 5. highest overhead (12%) is observed when the introspection period is 60 seconds. Indeed the observed overhead (184.2s) conforms with our overhead model as our computed value is 28 × 1.5 = 42 and 184.2 >> 42. Each introspection requires a snapshot of the running VM which freezes the VM for a short period of time. Obviously, more introspections requires more freezing time for the VM, which translates to a higher execution time. The lowest overhead (14.4s) is observed when the introspection is performed every 5 minutes. Again the result conforms with our overhead model (minimum overhead is computed at 9s). The results suggest that there are additional factors, besides the freezing time resulting from the snapshot, that contribute to the overall overhead value. Apache Web Server For a network intensive application, we installed the Apache Web server [150] and we used ApacheBench [START_REF]Apache HTTP server benchmarking tool[END_REF] to generate different workloads. 
In this scenario we examine two aspects of our design: first, the dependency between the introspection period and the Web server throughput and, second, the dependency between the arrival of the connection request in the introspection period and the Web server latency. The second aspect shows the impact of using periodic introspection on the availability of a new Web server instance, like in a cloud scale-up operation. For both aspects the client is located outside the virtual infrastructure. We choose to test with an outside client as, in the second aspect, reconfiguration of both firewalls is required and a comprehensive insight into AL-SAFE's overhead is provided. For the first aspect no adaptation of the firewalls is required (a preliminary phase to allow the connection between the server and the client is executed), while the only varying parameter is the introspection period. We run the experiment for 3 minutes and record the results over five repetitions. The workload consists of 750,000 requests from 1000 concurrent clients. The results shown in Figure 5.14 validate our previous observation regarding introspection period and performance degradation. In this scenario, the highest number of introspections (20 for the 15 seconds period) imposes the highest cost on the server's throughput (12%).

For the second aspect we fix the introspection period at 30 seconds and we start the Web server at port 80 between two introspections. Thus an adaptation of both firewalls is required in order to allow the connections from the client to pass unimpeded. In this experiment we vary the arrival time of the connection request (right before the snapshot, in the middle of introspection, at the end of introspection and after introspection). The largest impact on the web server latency (blue dotted curve) is when the client requests are issued right before the introspection takes place. Indeed, in order to establish the connection, the client application has to wait for the introspection to be completed, the rules to be generated by the two separate rule generators and then injected in the two firewalls (two rules per service for the switch-level firewall and one for the edge firewall). This translates to a minimum connection time of 13.38s (1.5s for the snapshot of the AL-SAFE-protected VM + 10.28s for the introspection + 1.60s for rule generation and insertion). A per-phase breakdown of the produced overhead is shown in Figure 5.16. When the requests are issued at the end of introspection, in the cyan dotted curve, we observe that the curve is much closer to the corner of the graph. This observation holds for all curves (cyan dotted and purple dotted) that represent low latency cases (requests are issued either at the end or after introspection). The produced overhead (in the minimum connection time) results from the time required to reconfigure the two firewalls. The time required to reconfigure the edge firewall is significantly larger than the one for the switch-level firewall due to the establishment of a secure connection between the node that hosts the VM and the firewall node.

Iperf

For the Iperf experiment we use the standard TCP STREAM test with a 64KB socket stream and 8KB message size. We run the experiment for 300 seconds and record the result.
Before the experiment is executed we run a preliminary phase that configures both firewalls to allow the connection, such that no adaptation takes place during the experiment. The mean results over five repetitions are shown in Figure 5.17 (impact of the introspection period on network throughput), which reports the network throughput value for increasing introspection periods. The results confirm our previous observation regarding introspection period and performance overhead. Indeed a shorter introspection period results in more snapshots, which obviously result in more downtime for the VM. In this case the highest overhead (5.75%) is observed in the 15 seconds case (20 snapshots).

Micro-Benchmarks

Before presenting the individual results of each micro-benchmark we present a model for estimating the overhead of the adaptation loop on individual connection establishment. Let us define: x as the overhead in terms of seconds for a full adaptation loop, α as the time required to obtain a snapshot of the monitored VM, β as the time required to perform the actual introspection process, γ as the time required for the firewall reconfiguration and π as the introspection period (i.e. the time between two consecutive introspections). Depending on when in the adaptation loop the connection request is issued and whether it is a client or a server which is hosted in the AL-SAFE-protected VM, we define different values for x.

5.6.1.5.1 Adaptation on the Server Side - TCP: In this case we install a server in the AL-SAFE-protected VM and we issue a connection request from a client located outside the cloud infrastructure. Consequently both firewalls need to be reconfigured in order for the connection to be established.

• Request issued right before introspection: x = α + β + γ. That is, the request has to wait for each phase to be completed before it reaches its destination.

• Request issued in the middle of introspection: x = β/2 + γ. The request has to wait until the introspection finishes and the two firewalls are successfully reconfigured before it reaches its destination.

• Request issued at the end of introspection: x = γ. The request has to wait only for the two firewalls to be reconfigured in order to reach its destination.

5.6.1.5.2 Adaptation on the Client Side - TCP: In this case we install a client inside the AL-SAFE-protected VM and we issue a connection request to a server located outside the cloud infrastructure. Consequently both firewalls need to be reconfigured in order to establish the connection.

• Request issued right before introspection: x = α + β + γ. That is, the request has to wait for each phase to be completed before it can leave the cloud infrastructure.

• Request issued in the middle of introspection: x = (π - β/2) + α + β + γ. Since the introspection is performed on a snapshot of the AL-SAFE-protected VM, which was taken before the client was started, the client process does not appear in the introspection result (i.e. because the client process was not started at the moment the snapshot was taken). Consequently, the connection request needs to wait until the next snapshot, introspection and adaptation in order to leave the cloud infrastructure.

• Request issued at the end of introspection: x = (π - β) + α + β + γ → x = π + α + γ. The request needs to wait until the next introspection and the subsequent firewall reconfiguration.
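The per-case expressions above can be restated compactly; the sketch below is only a restatement of the model, with alpha, beta, gamma and pi as defined in the text:

def overhead(side, arrival, alpha, beta, gamma, period):
    """Estimated extra delay x (in seconds) for a request issued during the adaptation loop."""
    if side == "server":
        return {"before": alpha + beta + gamma,
                "middle": beta / 2 + gamma,
                "end": gamma}[arrival]
    if side == "client":
        return {"before": alpha + beta + gamma,
                "middle": (period - beta / 2) + alpha + beta + gamma,
                "end": (period - beta) + alpha + beta + gamma}[arrival]
    raise ValueError(side)

# Example with the mean values reported below (alpha=1.5s, beta=9.0s, gamma=1.61s, pi=15s):
print(overhead("server", "before", 1.5, 9.0, 1.61, 15.0))  # ~12.11s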
Since the introspection time only depends on the number of running processes (and open sockets) and in the micro-benchmark experiments we create only one new process in order to handle the connection, we can assume that the same value for the mean introspection time can be applied to both TCP and UDP scenarios. Furthermore, in both cases 2 firewall rules are inserted in the switch-level firewall (because we perform the evaluation on the first, stateless, version of our prototype) and only one rule in the edge firewall. Consequently, the same mean value for rule creation and insertion can also be applied to both scenarios. to be reconfigured). Figure 5.19 shows the connection establishment times when the connection requests are issued at different times during the introspection process(beginning, middle, end, after). The case with the smallest overhead (1.57s) is when the request is issued at the end of introspection. Indeed the request only has to wait for the firewall reconfiguration in order to reach the server. According to our model, the observed overhead should be 1.61s which is indeed the case. We observe again a relatively high time required for the secure connection establishment (1.60s), which is due to the already discussed DNS defect when establishing a secure SSH connection. The case that demonstrates the highest overhead is the one when the request is issued right before the snapshot. That is because, the request has to wait until the snapshot is taken, the introspection is complete and the rules are generated and injected in the two firewalls. According to our model the estimated overhead in this case is : α + β + γ =1.5s + 9.0s + 1.61s = 12.11s. The observed overhead is 11.89s. The 0.22s (1.68%) deviation between the estimated overhead value and the observed overhead is attributed to the fact that the estimated overhead was computed based on mean values for each phase. The results demonstrate that the arrival of requests in the introspection cycle plays a major role in the connection establishment time. For a client attempting to connect to an AL-SAFE-protected server the best case scenario is issuing a request at the end of the introspection cycle. 5.6.1.5.6 Outbound TCP Connection: In this experiment the TCP client is installed in the AL-SAFE-protected VM inside the cloud infrastructure issuing connection requests to a server located outside the cloud infrastructure. Consequently, both firewalls need to be reconfigured in order for the client request to pass unimpeded. In contrast with an inbound TCP connection where the connection port is known a priori (e.g. port 80), an outbound TCP connection faces the limitation of an unknown port number. The overhead in connection establishment times, when issuing the request at different times during the introspection process, is shown in Figure 5.20. Initiating a request right before introspection is now the best case scenario with the smallest overhead. Indeed the open socket will be included in the new introspection result (since the client process is not started in the AL-SAFE-protected VM when the snapshot of the first introspection was taken detailed presentation in Section 5.6.1.5). According to our model the estimated overhead is: α + β + γ = 1.5s + 9s + 1.6s = 12.11s. The observed overhead is : 12.03s which validates our initial hypothesis about the cost of issuing the request right before the introspection of the AL-SAFE-protected VM. 
In all other cases, the time between the request and the next introspection has to be added to the connection establishment time. For example, when the request is issued in the middle of introspection the added time is 10.5s (the introspection period was set to π = 15s and the request was issued in the middle of the introspection process, at β/2 = 4.5s, so the waiting time amounts to 10.5s). In this case the estimated overhead is π - β/2 + α + β + γ = 10.5s + 1.5s + 9.0s + 1.61s = 22.61s. The actual overhead is 23.15s (we again observe a deviation of 0.54s, or 2.37%, between estimated and observed overhead, due to the mean values used for calculating the estimate). The case of a client located inside the AL-SAFE-protected VM is the exact opposite of the server case (presented in Section 5.6.1.5.5): the best case scenario is now when the connection request is issued right before the introspection.

5.6.1.5.7 UDP Round Trip Time: In the UDP round trip time experiment we install the process receiving the ECHO request inside a monitored VM located in the cloud infrastructure. Consequently, both firewalls need to be reconfigured in order for the message to complete its round trip. Figure 5.21 shows the overhead in connection establishment times when the message is sent at different times in the introspection period. The observed overhead follows a pattern similar to the one imposed on the inbound TCP connection (Section 5.6.1.5.5). The best case scenario is when the message is sent at the end of the introspection process (i.e. it only has to wait for rule creation and firewall reconfiguration); the observed overhead (2.08s) is close to our estimate (1.61s), and the deviation is once more attributed to the mean values used for calculating the estimate. The worst case scenario is when the message is sent right before introspection: the message has to wait until the introspection process finishes and the two firewalls are successfully reconfigured. According to our model the overhead is estimated as α + β + γ = 1.5s + 9.0s + 1.61s = 12.11s, and the observed overhead again conforms with our model. In UDP communications, much like TCP connections, the point at which the request arrives in the introspection cycle (beginning, middle, end) strongly affects the produced overhead, and a request issued at the end of the introspection experiences the smallest overhead.

Resource Consumption
In this section we discuss the cost of AL-SAFE in terms of CPU consumption and RAM. We focus our analysis on the Introspection component (VMI), as it is the one expected to consume the most resources. Since the introspection mechanism extracts the necessary information about network sockets by iterating over the process list of the running VM, the number of processes affects both the execution time of the VMI and the required resources. We calculate the CPU and RAM utilization of the introspection process in our Web server scenario (Section 5.6.1.3), with a generated workload of 750,000 requests from 1000 concurrent clients, over ten executions. Since our Web server is configured with an event-based module, it is expected to generate many child processes, each one handling a pre-specified number of threads.
We compare the result with the resources consumed by the VMI in the Iperf scenario (Section 5.6.1.4), where only a single process is created to handle the connection socket. The results are shown in Table 5.2. The table lists average CPU usage and memory consumption along with the overall execution time of the VMI component (real) and the times spent in user (usr) and kernel (sys) modes. The high memory cost of introspection stems from the fact that Volatility loads the whole snapshot file (in both cases 2 GB) into memory. The number of processes generated inside the monitored VM increases the CPU consumption of the VMI component.

Correctness Analysis
In this section we justify the security and correctness aspects of AL-SAFE. We focus on the functionality of AL-SAFE as a firewall as well as on the contribution of the AL-SAFE approach in addressing inherent design limitations of application-level firewalls. Since AL-SAFE is an application-level firewall, one of its main security goals is to successfully block unauthorized connections. We have validated the correctness of our generated rules both for inbound and outbound connections (an illustrative sketch of such rules is given at the end of this section). For intra-cloud connection attempts the switch-level component of AL-SAFE successfully intercepted all packets from processes that were not in the white-list. For extra-cloud inbound connections the packets were stopped by the edge firewall. In both cases no unauthorized packets reached the VM or left the compute node.

In a typical system, software exploits can directly affect the execution of an application-level firewall. Exploits combine network activity from a user-level component with a kernel-level module that hides the user-level component from the view of the application-level firewall. The malicious exploit typically obtains full-system privileges and can thus halt the execution of the firewall. The malicious kernel-level module can alter the hooks used by the in-kernel module of the application-level firewall so that the firewall is simply never invoked as data passes through the network. Conventional application-level firewalls fail under these types of attacks; AL-SAFE withstands them. However, AL-SAFE can still retrieve a maliciously crafted connection list and allow connections for malicious applications that impersonate legitimate white-listed applications. Compared to a traditional application-level firewall, which operates inside the host and, if compromised, can open any port regardless of whether it is in the white-list, AL-SAFE never opens a port that is not white-listed. AL-SAFE denies all unknown connections by default. In a production system where services have sufficiently long lifetimes, this tackles the case of an attacker timing the introspection period and attempting to use the network between two consecutive introspections. The performance overhead of this choice on each connection is outlined in Section 5.6.1.5.

Finally, we analyze the potential vulnerabilities added by AL-SAFE to the provider infrastructure. AL-SAFE's components are exposed to three kinds of potentially malicious input: first, the white-list of processes; second, the added rules; and third, the introspection results. The design choices for these three items (as presented in Section 5.3.1) address the issue of malicious input.
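The following sketch illustrates the kind of rules the two rule generators derive from the white-list and the introspection results, for a white-listed SSH service. It is only an illustration: the switch-level form assumes an Open vSwitch flow syntax and the edge form an iptables-style syntax, and all addresses, table numbers and port identifiers are placeholders; the concrete formats used by the prototype may differ.

```python
# Illustrative sketch of the per-service rules AL-SAFE derives from the
# introspection results and the tenant white-list. Not the prototype code:
# the Open vSwitch and iptables forms, addresses and identifiers below are
# placeholders used for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class WhitelistedService:
    name: str                    # e.g. "ssh"
    proto: str                   # "tcp" or "udp"
    dport: int                   # destination port of the server socket
    vm_ip: str                   # IP of the protected VM
    allowed_sources: List[str]   # authorised source IPs from the tenant SLA

def switch_level_rules(svc: WhitelistedService, table: int, ovs_port: int) -> List[str]:
    """Open vSwitch flow entries for the switch-level firewall (illustrative)."""
    return [
        f"table={table},priority=100,{svc.proto},in_port={ovs_port},"
        f"nw_src={src},nw_dst={svc.vm_ip},tp_dst={svc.dport},actions=NORMAL"
        for src in svc.allowed_sources
    ]

def edge_rules(svc: WhitelistedService) -> List[str]:
    """iptables-style entries for the edge firewall (illustrative)."""
    return [
        f"-A FORWARD -p {svc.proto} -s {src} -d {svc.vm_ip} "
        f"--dport {svc.dport} -j ACCEPT"
        for src in svc.allowed_sources
    ]

if __name__ == "__main__":
    ssh = WhitelistedService("ssh", "tcp", 22, "192.168.1.5",
                             ["192.168.1.2", "192.168.1.3"])
    for rule in switch_level_rules(ssh, table=28, ovs_port=4) + edge_rules(ssh):
        print(rule)
    # Anything not produced by the generators is dropped by the default-deny
    # policy, which is what the correctness experiments above verify.
```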
We now discuss AL-SAFE's limitations and our suggested approach for handling them.

Limitations
AL-SAFE, as an application-level firewall located outside the monitored VM, is able to provide, through virtual machine introspection, the same degree of visibility as an inside-the-host solution. However, AL-SAFE suffers from some limitations. We detail these limitations by category:
• Performance: AL-SAFE performs introspection periodically, which delays the network connectivity of newly started services and clients. To reduce this overhead, AL-SAFE could introspect on watchpoints, e.g. on listen() and connect() syscalls on TCP sockets.
• Security: Like all introspection-based security solutions, AL-SAFE is vulnerable to kernel data structure manipulation. An attacker who fully controls the VM can also tamper with kernel data structures to control the introspection results. To counter such attacks we could use approaches that check the VM's kernel integrity [Baliga et al.]. Furthermore, an additional security concern is a previously legitimate process that turns malicious. An attacker can hijack a connection after it has been established and verified by AL-SAFE as legitimate: it can use a software exploit to take control of a particular process bound to the port, or use a kernel module to alter packets before they are sent out to the local switch network interface. To counter this issue we could place dedicated Intrusion Detection Systems in the infrastructure, using the approach of SAIDS.

Summary
In this chapter we presented AL-SAFE, the second instantiation of our security monitoring framework, which focuses on firewalls. AL-SAFE is a secure, application-level, introspection-based firewall that is able to adapt the enforced ruleset based on changes in the virtual infrastructure topology and in the list of services running in the monitored VMs. AL-SAFE's design addresses the inherent design limitation of application-level firewalls that run inside the monitored VMs and can hence be compromised by malicious kernel-level code executed inside the monitored host. Using virtual machine introspection, AL-SAFE pulls the firewall outside the untrusted VM while maintaining an inside-the-VM level of visibility. AL-SAFE filters traffic at two distinct points in the virtual infrastructure, regulating the load imposed on other security devices that are part of our framework, such as intrusion detection systems. We have conducted a thorough evaluation of our approach, examining both performance and correctness aspects. We have shown that the overhead in cloud operations such as VM migration is independent from the VM workload, and that this overhead is lower than the migration time. Our results show a dependency between the introspection period and the overhead generated for tenant applications running inside the untrusted VM. Increasing the introspection period depending on the type of activity inside the VM (fewer introspections for compute-intensive applications that do not use the network) could significantly decrease the overhead; our prototype already features a dedicated mechanism for adapting the introspection period on the fly. Finally, we have shown that AL-SAFE correctly blocks unauthorized connections while allowing all tenant-approved connections to pass unimpeded. The design choices made for AL-SAFE's components do not add any security vulnerabilities to the provider's infrastructure.
AL-SAFE's limitations, both from a security perspective (kernel data structure manipulation) and from a performance perspective (delayed network connectivity), were presented along with suggestions on how they can be addressed.

Chapter 6
Conclusion
This chapter summarizes our contributions and details how these contributions fulfil the objectives presented in Section 1.3. The contributions along with their assessment are listed in Section 6.1, while suggestions for future research work are presented in Section 6.2.

Contributions
In this thesis we designed a self-adaptable security monitoring framework that is able to adapt its components based on dynamic events that occur in a cloud infrastructure. Four main objectives were defined: self-adaptation, tenant-driven customization, security and cost minimization. Our framework achieves these objectives and constitutes a flexible monitoring solution for virtualized infrastructures that is able to integrate different types of monitoring devices. Two different instantiations of our framework, SAIDS and AL-SAFE, were presented in detail. SAIDS, a self-adaptable network intrusion detection system, uses Local Intrusion Detection Sensors (LIDS) in order to monitor traffic towards and from the cloud infrastructure. SAIDS' reaction to different types of dynamic events was presented and justified in order to provide the reader with a clear overview of the adaptation process. This first instantiation of our framework is a scalable solution that can alter the existing configuration and the computational resources available to a set of LIDSs depending on the load of monitored traffic, while maintaining an adequate level of detection. The SAIDS prototype was developed using different cloud technologies (e.g. OpenStack, Open vSwitch [137]). Our evaluation under different scenarios that resemble production environments allowed us to assess SAIDS performance, scalability and correctness. Our results showed that SAIDS is able to handle 5000 LIDS (the evaluation was performed on 8-core machines with 24GB of RAM each; our testbed's memory capacity imposed a limitation on the number of LIDSs that our prototype can handle in parallel) while imposing negligible overhead on cloud operations. AL-SAFE is the second instantiation of our security monitoring framework and focuses on an active monitoring component, the firewall. AL-SAFE is executed outside the monitored VMs and filters traffic at distinct points of the virtual infrastructure, combining an edge firewall, located at the interface between the cloud network and the external network, with a switch-level firewall. We showed that our design addresses the inherent design limitation of application-level firewalls (malicious code exposure due to inside-the-host execution) while maintaining an inside-the-host level of visibility through virtual machine introspection. The adaptation of AL-SAFE's enforced ruleset for different types of dynamic events was thoroughly detailed, followed by a justification of the subsequent actions. Finally, our evaluation presented a comprehensive study of the trade-offs between the security and adaptation benefits of deploying AL-SAFE and the performance overhead imposed on cloud operations and tenant applications hosted in the virtual infrastructure.
Our results have shown that the overhead imposed by AL-SAFE on new sockets of network-oriented tenant applications highly depends on the arrival time of the connection request in the introspection period. We now discuss how our work addresses the four objectives that were defined in the introduction of this thesis. Self-adaptation: Our framework is able to adapt its components based on three types of dynamic events that occur in a cloud infrastructure: topology-related, service-related and monitoring load-related events. Our framework's core component, the Adaptation Manager, is able to make adaptation decisions based on the type of event and act as a coordinator of the adaptation process synchronizing the different components involved. The AM as a high-level component guarantees that the adaptation decision remains abstracted from the type of the monitoring device, providing our framework with another level of genericness. Both SAIDS and AL-SAFE adapt their enforced rulesets upon receiving the adaptation arguments from the AM. In order to guarantee the accurate translation of the adaptation arguments to device-specific configuration parameters both SAIDS and AL-SAFE feature dedicated components that interact with the actual monitoring devices. In both SAIDS and AL-SAFE prototypes we integrate different off-the-self components (2 NIDSs and 2 firewalls) with no modifications in their code. Tenant-driven customization: Our framework takes into account tenant-defined monitoring requirements as they are expressed through a dedicated API. These requirements may refer to monitoring tailored for specific services that are deployed in the virtual infrastructure or to performance-related metrics. The Adaptation Manager guarantees that the tenant requests will be taken into account in the adaptation decision and will be propagated to lower level agents, the Master Adaptation Drivers (MADs), that will translate them to device-specific configuration parameters. Our framework supports dedicated actions in case a tenant-defined requirement is violated (e.g. altering the computational resources available to a monitoring device in case a performance-related metric exceeds a tenant-defined threshold). Security: Our framework is able to guarantee that the adaptation process will not add any security flaw in the monitoring device itself or in the provider's infrastructure. Our design choices have proven that the different elements that participate in the adaptation of a monitoring device (i.e adaptation sources, input files, etc) do not add any new security flaws and do not create any potential entry point for the attacker. Furthermore, in both of our frameworks instantiations we have experimentally validated the correctness of the adaptation result. The monitoring devices continue to remain operational during the adaptation process, guaranteeing that an adequate level of detection is maintained. Cost minimization: Our framework is able to guarantee that the cost for both tenants and the provider in terms of application performance and computational resources is kept at a minimal level. SAIDS evaluation results showed that our framework's instantiation imposes negligible overhead in normal cloud operations. Regarding computational resources (CPU and RAM) deploying SAIDS bears minimal cost. As a passive monitoring solution, SAIDS does not directly affect the performance of network-oriented cloud applications. 
AL-SAFE's overhead in normal cloud operations does not depend on the VM workload while the CPU and memory cost is tolerable. AL-SAFE follows a timebased introspection model and as such the overhead for new sockets of network-oriented tenant applications highly depends on the arrival time of the connection request in the introspection period. The work presented in this thesis was able to address the gap in existing cloud security monitoring frameworks regarding reaction to dynamic events. Existing solutions only partially address the defined objectives (as described in Section 1.3) while our framework is able to combine self-adaptation based on dynamic events with accurate security monitoring results. This thesis presented the design of a self-adaptable security monitoring framework that is able to adapt its components based on different types of dynamic events that occur in a cloud infrastructure. SAIDS and AL-SAFE the framework's two instantiations, addressed self-adaptation for two different types of security devices, intrusion detection systems and firewalls. Naturally, the work done in this thesis can be extended. We have identified several directions of improvement that would lead to a complete self-contained monitoring infrastructure. We discuss these directions in the next section. Future Work Our future work is organized in three categories depending on feasibility and time required to complete the described improvements. In Section 6.2.1 we present a few short term goals that constitute performance and design improvements of our existing instantiations. Section 6.2.2 focuses on other components of our framework while Section 6.2.3 concludes this chapter with our vision regarding a self-contained security monitoring framework. Short-Term Goals Different design and performance improvements regarding SAIDS and AL-SAFE prototypes could be realised. SAIDS: Currently SAIDS does not feature a mechanism for automatic discovery of new services that are deployed in the monitored VMs. The only way for SAIDS to become aware of a change in the list of running services (and subsequently reconfigure the involved LIDSs) is through our framework's dedicated API needing the tenant to declare that a service was started or stopped. A solution for automatic service discovery would be for SAIDS to use AL-SAFE's periodic introspection results. Each time a new legitimate service is detected in a VM by introspection, the Adaptation Manager could trigger an adaptation of the enforced ruleset of the LIDS responsible for monitoring the traffic that flows towards and from that particular VM. The addition of automatic service discovery does not require a significant change in the existing SAIDS design since the Adaptation Manager is currently shared between the two instantiations. AL-SAFE: As demonstrated by the performance evaluation of AL-SAFE, periodic introspection imposes unnecessary overhead to applications that are not network-intensive (see the kernel-build results). A solution would be to correlate the type of application activity with the introspection period for example computation-intensive applications can have a larger introspection period than network-intensive applications. Furthermore, instead of a periodic introspection period AL-SAFE could adopt a watchpoint-based introspection model in which the VMI component could introspect every time a specific event occurs (e.g. a listen syscall on a TCP socket). 
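A minimal sketch of the first idea above, adapting the introspection period to the observed network activity of the monitored VM, is shown below. It is not part of the current AL-SAFE prototype; the thresholds, bounds and the introspect() callback are placeholders. A watchpoint-based model would instead trigger introspect() from hypervisor events (e.g. on listen() or connect()), which is outside the scope of this sketch.

```python
# Sketch of an adaptive introspection period (not part of the AL-SAFE
# prototype). introspect() is assumed to run the snapshot + introspection
# pass and return the number of sockets that were not seen before; all
# bounds and factors are placeholders.

import time

MIN_PERIOD = 15.0    # seconds, lower bound for network-intensive workloads
MAX_PERIOD = 120.0   # seconds, upper bound for compute-intensive workloads

def next_period(current: float, new_sockets: int) -> float:
    """Shorten the period while new sockets keep appearing, lengthen it otherwise."""
    if new_sockets > 0:
        return max(MIN_PERIOD, current / 2)
    return min(MAX_PERIOD, current * 1.5)

def introspection_loop(introspect, stop):
    """Run introspections back to back, spacing them by the adaptive period."""
    period = MIN_PERIOD
    while not stop():
        new_sockets = introspect()          # snapshot + introspection pass
        period = next_period(period, new_sockets)
        time.sleep(period)
```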
Finally, our results have shown that the response time of the introspection component is not negligible. In order to improve the response time of this component and subsequently decrease the overhead imposed by the adaptation loop on new connections, introspection could be optimized by introspecting directly on LibVMI rather than a combination of LibVMI and Volatility. This change implies implementing a version of the netstat command using the VM's memory pages exported by LibVMI and necessary information regarding kernel data structures. Introspecting directly on LibVMI holds an additional advantage, the removal of the snapshotting phase, that was necessary for providing a coherent address space to volatility. This improvement will significantly reduce the memory consumption of the VMI component. Dependency Database: Currently all the necessary information for the monitoring devices involved in the monitoring of a particular VM is contained in simple text files. Although in Chapter 3 we defined that this information should be stored in the Dependency Database we did not have time to implement this component. Including a relational database (e.g. MySQL [152]) for storing this information in the existing framework implementation should require minimal changes to the the Adaptation Manager component. These changes are necessary in order to facilitate connection with the database as well as exchange of information for the latter a message protocol could be used like for example RabbitMQ [153]. Tenant API: Currently, the tenant API as defined in Chapter 3 has not yet been implemented. A simple Restful interface can be used in order to provide the translation between high-level tenant monitoring requirements and Adaptation Manager-readable adaptation arguments. Mid-Term Goals This section focuses on expansion of our framework to other types of security devices as well as addressing aspects like multitenancy and combining security monitoring for tenants and the provider. Other types of security devices: In order to extend the monitoring capabilities of our framework to other types of monitoring (e.g. inside-the-host activity monitoring, network flow analysis) other types of monitoring devices need to be included. Currently, our self-adaptable security monitoring framework includes only network-based intrusion detection systems and firewalls. A possible improvement would be to include host-based IDSs or network analyzers like Bro [START_REF][END_REF]. Since we plan to include other types of IDSs the changes required would primarily refer to SAIDS. Many host-based IDSs operate based on agents that are installed locally inside the monitored VM and perform different types of monitoring (e.g file integrity checking, rootkit detection, etc). These agents communicate with a central manager and periodically report findings. In order to support this model, the design changes required for SAIDS are two fold. First, at the level of the Master Adaptation Driver. Instead of being responsible for regenerating the actual configuration file for the IDS (like in the LIDS case) the MAD could simply forward relevant monitoring parameters to the appropriate Adaptation Worker (AW). Depending on the number of agents reporting to each AW the MAD could also adapt the portion of computational resources available to each AW in order to perform load balancing. 
Second, at the level of the Adaptation Worker, instead of having one AW per detection process, a single AW instance could be responsible for all detection agents running inside a group of VMs (e.g. all the VMs residing in the same compute node). Since most detection agents support remote configuration through a secure connection, the AW could be located in a separate domain, introducing another security layer between a potentially compromised detection agent and the SAIDS component. Other types of security devices that could be included are log collectors and aggregators. In order to satisfy the cost minimization objective, a log collector instance would be responsible for gathering and unifying the logs produced by a subset of monitoring devices (e.g. all the devices that monitor VMs that reside on the same compute node). Regarding the tenant-driven customization objective, the collector would apply special filters to the collected logs (e.g. if a specific attack for which tenants have requested additional monitoring has been detected or if the number of attacks in a specific time window has exceeded a certain threshold) and propagate the results to an aggregator instance. Tenants could access the aggregated logs through a dedicated mechanism that guarantees authentication and data integrity, satisfying the correctness objective. Different policies, designed to cope with the scale of the system and adapt the number of collectors and aggregators, could be defined in order to address the self-adaptation objective. Multi-tenancy: The current version of our security monitoring framework does not address implications that arise in multi-tenant environments. In order to enable security monitoring for different tenants, we need to consider the sharing of the monitoring devices between tenants. Component sharing between tenants can also be perceived as an additional aspect of cost minimization. We now discuss the necessary changes in our two instantiations SAIDS and AL-SAFE in order to enable component sharing. Since each tenant has its own network, and legacy network intrusion detection systems do not support monitoring two different networks with the same NIDS instance, SAIDS will have to assign separate LIDSs to different tenants. However, the remaining components (Adaptation Manager and Master Adaptation Driver) can still be shared between tenants. In order to differentiate between LIDS that belong to different tenants, an extra field indicating the ID of the tenant that this device is assigned to can be added in the set of information stored for each LIDS probe. Each MAD could maintain a per-tenant list with all the LIDS names that are under its control. Our evaluation results have shown that both the MAD and the AM can handle multiple adaptation requests in parallel, thus enabling parallel adaptation of LIDS that belong to different tenants. For AL-SAFE device sharing implies using the same firewall (either switch-level or edge) for filtering traffic towards and from VMs that belong to different tenants. Since the filtering is performed by dedicated rules, installing rules for different VMs in the same firewall device is straightforward. In order to address simultaneous dynamic events that concern different tenants, parallel generation of filtering rules that concern different VMs is necessary. Unfortunately, in the current version of AL-SAFE parallel rule generation is only supported for VMs that reside in different compute nodes. 
For enabling parallel generation of rules for VMs that reside in the same compute node, parallel introspection of those VMs is needed. Unfortunately, in the current implementation the VMI component does not support parallel introspection of collocated VMs (because it is single-threaded). Consequently, the core change required for AL-SAFE to support multi-tenancy is making the VMI component multi-threaded. A multi-threaded VMI component that introspects directly on LibVMI would also impose a significantly lower memory overhead.

Combining the security monitoring of tenants and the provider: In a cloud environment the provider could assume the role of a super-tenant. This essentially translates to a tenant with increased privileges who also requires adequate security monitoring of its infrastructure and adaptation of the security devices in case of dynamic events. The existence of a super-tenant raises two research questions. First, the design changes needed for our security monitoring framework to support the different roles: in the case of SAIDS this would imply a number of dedicated LIDS instances that monitor the provider's traffic and are possibly located in an isolated node without any other tenant LIDS; for AL-SAFE this would imply that provider-related rules are injected and enforced in the two types of firewalls. Second, an agreement for a fair sharing of monitoring resources between the tenants and the super-tenant (i.e. the provider) needs to be defined in order to guarantee that the monitoring devices dedicated to tenants always have access to the necessary computational resources. An adaptable threshold regarding the percentage of monitoring resources dedicated to the provider should be agreed between tenants and the provider and included in the SLA. Furthermore, a framework for translating the threshold value into specific monitoring parameters (e.g. how many rules the provider can install in a shared firewall) needs to be realized. This research question is closely related to another PhD thesis in the Myriads team entitled Definition and enforcement of service level agreements for cloud security monitoring.

Integration of SAIDS in a large-scale system: Qirinus [155], a start-up that specializes in the automatic deployment of security monitoring infrastructures for cloud environments, plans to integrate SAIDS in its system. The integration would allow tenants to use the Qirinus API in order to provide a high-level description of their system along with specific security requirements, which will then be translated to SAIDS-specific arguments. The Qirinus system will also be responsible for automating the deployment of SAIDS individual components in the virtual infrastructure in such a way that the tenant-defined requirements are respected. Integrating SAIDS with Qirinus will enable the transfer of SAIDS technology to real-world large-scale scenarios.

Handling network reconfiguration events: Currently our self-adaptable security monitoring framework does not handle network reconfiguration events, although they are considered topology-related changes. Indeed, these types of events, for example migrating a VM between networks, bear some resemblance to events that refer to the placement of VMs (in this case a VM migration). These resemblances allow us to consider that significant similarities will occur between the adaptation process that follows a VM-placement event and the adaptation process that follows a network-reconfiguration event.
For example, in SAIDS and AL-SAFE, the difference between the necessary reconfigurations in a VM-placement dynamic event and in a network-reconfiguration dynamic event would consist in changing the IP addresses (internal and external) in the rules related to the VMs.

Long-Term Goals
As a long-term research direction, we consider the design of a fully autonomous self-adaptable security monitoring framework. A fully autonomous monitoring framework should be able to react to security events and take subsequent actions in order to isolate potentially infected VMs and stop attackers from gaining control of the virtual infrastructure. Reaction is essentially based on the ability of the framework to translate security monitoring findings (e.g. IDS alerts) into adaptation decisions that affect the configuration of the monitoring devices. In the context of this thesis, such an ability is linked to adding security events to the set of possible adaptation sources. Currently our self-adaptable security monitoring framework supports adaptation of the security devices based on three different types of dynamic events: topology-, service- and monitoring-load-related events. Security events (i.e. attacks) as a potential adaptation source were not considered. In our framework, a reaction mechanism could operate by transferring SAIDS-generated alerts to AL-SAFE and translating them into filtering rules. The primary functionality of this mechanism would be to extract all related information from the alert (IP address, protocol, port, etc.) and propagate it through a secure channel to AL-SAFE's Information Extraction Agent. The main challenge behind this mechanism is the determination of the correct Information Extraction Agent (since in our cloud environment one IEA is installed in each compute node). In order to determine the right IEA host, the mechanism needs to obtain a partial topological and functional overview of the monitoring framework. We define as a partial overview the topological and functional information that refers only to a subset of security devices, for example all the monitoring devices that are under the control of a specific Master Adaptation Driver instance (as opposed to a complete overview, where the topological and functional information is system-wide). Adding security to the set of possible adaptation sources opens a convergence area with the VESPA architecture [Wailly et al.] and will allow us to create a fully autonomous self-adaptable security monitoring framework that considers security- as well as infrastructure-related dynamic events.

Appendix A

A.2 Motivation
In a typical IaaS cloud environment, the provider is responsible for managing and maintaining the physical infrastructure, while tenants are only responsible for managing their own virtualized information system. Tenants can make decisions about the life cycle of their VMs and deploy different types of applications on the provided VMs. Since the deployed applications may have access to sensitive information or perform critical operations, tenants are concerned with monitoring the security of their virtualized infrastructure. These concerns can be expressed as security monitoring requirements, that is, requirements to watch for the actions of specific types of threats in the virtualized infrastructure.
Security monitoring solutions for cloud environments are typically managed by the cloud provider and consist of heterogeneous tools that require manual configuration. In order to produce correct detection results, monitoring solutions must take into account the profile of the applications deployed by the tenant as well as the tenant's specific security requirements. A cloud environment exhibits a highly dynamic behavior, with changes occurring at different levels of the cloud infrastructure. Unfortunately, these changes affect the ability of a cloud security monitoring system to successfully detect attacks and preserve the integrity of the cloud infrastructure. Existing cloud security monitoring solutions are unable to take these changes into account and to make the necessary decisions regarding the reconfiguration of the security devices. As a consequence, new entry points for attackers are created, which can lead to a compromise of the entire cloud infrastructure. To the best of our knowledge, there is still no security monitoring system able to adapt its components according to the different changes that occur in a cloud environment. The objective of this thesis is to design and implement a self-adaptable security monitoring system able to react to the dynamic events that occur in a cloud infrastructure and to adapt its components in order to guarantee an adequate level of security monitoring for the tenants' virtual infrastructures.

A.3 Objectives
Having presented the context and motivation of this thesis, we now propose a set of objectives for a self-adaptable security monitoring system.

A.3.1 Self-adaptation
A self-adaptable security monitoring system should be able to adapt its components according to the different types of dynamic events that occur in a cloud infrastructure. The system should treat these events as adaptation sources and consequently take actions that reconfigure its components. The adaptation process may modify the configuration of existing monitoring devices or create new ones. The system may decide to modify the amount of computational resources available to a monitoring device (or to a subset of monitoring devices) in order to maintain an adequate level of monitoring. The amount of computational resources should also be adapted in order to release under-utilized resources. The system should make adaptation decisions so as to guarantee a balance between security, performance and cost at all times. Adaptation actions may affect different components, and the system should be able to perform these actions in parallel.

A.3.2 Customization
Tenant requirements concerning specific monitoring cases should be taken into account by a self-adaptable security monitoring system. The system should be able to guarantee adequate monitoring of the specific types of threats requested by the tenant. A monitoring request may refer to a tenant's complete virtual infrastructure or to a specific subset of VMs. The system should provide the requested type of monitoring until the tenant's request changes or the VMs to which this type of monitoring applies no longer exist.
Furthermore, the system should take into account tenant-defined thresholds (specified through dedicated SLAs) that refer to the quality of the monitoring service or to the performance of specific types of monitoring devices.

A.3.3 Security and correctness
Deploying a self-adaptable security monitoring system should not add new vulnerabilities to the monitored virtual infrastructure or to the provider's infrastructure. The adaptation process and the inputs it requires should not create new entry points for an attacker. Moreover, a self-adaptable security monitoring system should be able to guarantee that an adequate level of monitoring is maintained throughout the adaptation process. The adaptation process should not interfere with the system's ability to correctly detect threats.

A.3.4 Cost minimization
Deploying a self-adaptable security monitoring system should not significantly impact the trade-off between security and cost for tenants and the provider. On the tenant side, a self-adaptable security monitoring system should not significantly affect the performance of the applications hosted in the virtual infrastructure, regardless of the application profile (CPU-intensive or network-intensive). On the provider side, the ability to generate profit by renting out computational resources should not be significantly affected by the system. Deploying such a system should not impose a significant penalty on normal cloud operations (e.g. VM migration, creation, etc.). Furthermore, the proportion of computational resources dedicated to the components of the self-adaptable system should reflect an agreement between the tenants and the provider on the distribution of computational resources.

A.4 Contributions
In order to achieve the objectives presented in the previous section, we design a self-adaptable security monitoring system able to overcome the limitations of existing monitoring systems and to handle the dynamic events that occur in a cloud infrastructure. In this thesis we detail how we designed, implemented and evaluated our contributions: a generic self-adaptable security monitoring framework and two instantiations, with intrusion detection systems and with firewalls.

A.4.1 A self-adaptable security monitoring framework
Our first contribution is the design of a self-adaptable security monitoring framework able to modify the configuration of its components and to adapt the amount of available computational resources according to the type of dynamic event that occurs in a cloud infrastructure. Our framework achieves automatic adaptation and tenant-driven customization while providing an adequate level of security monitoring throughout the adaptation process. Our framework comprises the following components: the Adaptation Manager, the Infrastructure Monitoring Probes, the Dependency Database, the tenant-side API and, finally, the security devices. The Adaptation Manager is the core of our framework and is responsible for making the adaptation decisions when dynamic events occur.
The Infrastructure Monitoring Probes are able to detect topology-related dynamic events and to forward all the necessary information to the Adaptation Manager. The Dependency Database is used to store important information about the security devices, while tenants can express their own security monitoring requirements through the tenant-side API. Finally, the security devices provide the different security monitoring functionalities.

A.4.2 SAIDS
Our second contribution constitutes the first instantiation of our framework and focuses on network intrusion detection systems (NIDS). NIDSs are key elements of a security monitoring infrastructure. SAIDS achieves the core objectives of our framework while providing a scalable solution for handling parallel adaptation needs. Our solution is able to scale with the load of monitored traffic and the size of the virtual infrastructure. The main components of SAIDS are: the Master Adaptation Driver, the Adaptation Worker and the Local Intrusion Detection Sensors (LIDS). The Master Adaptation Driver is in charge of translating the adaptation arguments into configuration parameters for the LIDSs, while the Adaptation Worker is in charge of performing the actual reconfiguration of the LIDSs. The Local Intrusion Detection Sensors are the security devices that perform the actual detection of security events. SAIDS maintains an adequate level of detection while minimizing the cost in terms of resource consumption and of the performance of the deployed applications. We evaluated the ability of SAIDS to achieve a trade-off between performance, cost and security. Our evaluation consists of different scenarios that represent production environments. The obtained results demonstrate that our prototype scales and can manage several network intrusion detection sensors in parallel. SAIDS imposes negligible overhead on tenant applications as well as on typical cloud operations such as VM migration. Moreover, we showed that SAIDS maintains an adequate level of detection during the adaptation process.

A.4.3 AL-SAFE
Our third contribution constitutes the second instantiation of our framework and focuses on application-level firewalls. AL-SAFE uses virtual machine introspection in order to create a secure firewall that operates outside the monitored virtual machine but retains intra-VM visibility. AL-SAFE follows a periodic introspection strategy and lets the tenant specify the introspection period. The rulesets enforced by the firewall are adapted according to the dynamic events that occur in the virtual infrastructure. The main components of AL-SAFE are: the Information Extraction Agent, the Virtual Machine Introspection component, the Rule Generators and two distinct firewalls.
The Information Extraction Agent is in charge of identifying the processes authorized to establish connections, while the Virtual Machine Introspection component performs the actual introspection of the monitored VM. The Rule Generators are used to produce the rules for the two firewalls. We evaluated the ability of AL-SAFE to offer a balanced trade-off between security, performance and cost. Our evaluation process consists of different scenarios that represent production environments. The obtained results demonstrate that AL-SAFE is able to block all unauthorized connections and that the rules resulting from the adaptation process are correct and operational. AL-SAFE's overhead on typical cloud operations, such as VM migration, is independent of the intensity of the VM's activity, while the overhead on tenant applications depends on the introspection period and on the application profile (network- or compute-intensive).

A.5 Perspectives
We have identified several research directions for future work. We organize them into short-, mid- and long-term goals.

A.5.1 Short-term perspectives
Our short-term goals focus on design and implementation improvements of the current versions of our prototypes, as well as on the implementation of two components of our framework that we did not have time to implement. In SAIDS, we would like to add automatic service discovery so that the detection rules related to the running services, enforced in the affected LIDSs, are automatically adapted. AL-SAFE's introspection mechanism could be used as an automatic service discovery tool. In AL-SAFE, we would like to replace the periodic introspection model with an event-triggered introspection model, so that the overhead on tenant applications is reduced. Finally, we would like to implement two additional components of our framework, the Dependency Database and the tenant-side API.

A.5.2 Mid-term perspectives
Our mid-term goals aim at addressing more complex problems that are inherent to cloud environments, such as multi-tenancy. The current version of our security monitoring framework does not address the issues that arise in multi-tenant environments. In order to enable security monitoring for different tenants, we need to consider the sharing of the monitoring devices between tenants. Sharing devices between tenants can also be seen as an additional aspect of cost minimization. We would like to study the changes required in both SAIDS and AL-SAFE in order to achieve this objective. Furthermore, we would like to include other types of LIDS, such as host-based intrusion detection systems and network analyzers, in the SAIDS prototype. Other mid-term research perspectives include combining the security monitoring of tenants and the provider, as well as integrating SAIDS into a large-scale system through a collaboration with the start-up Qirinus.

A.5.3 Long-term perspectives
In a long-term perspective, we are interested in the design of a fully autonomous self-adaptable security monitoring system.
A fully autonomous monitoring system should be able to react to security events and take subsequent actions in order to isolate potentially infected virtual machines and prevent attackers from taking control of the virtual infrastructure. Reaction essentially relies on the system's ability to translate security monitoring findings (for example, alerts from intrusion detection systems) into adaptation decisions that reconfigure the monitoring devices. In the context of this thesis, such an ability is linked to including security events in the set of possible adaptation sources. Currently, our self-adaptable security monitoring system supports the adaptation of the security devices based on three types of dynamic events: those related to the topology, to the deployed services, and to the monitoring load. Security events (i.e. attacks) as a potential adaptation source were not considered.
3 A SELF-ADAPTABLE SECURITY MONITORING FRAMEWORK FOR IAAS CLOUDS Algorithm 1 2 : 3 : 4 :i in affected devices do 5 : 12345 The adaptation decision algorithm 1: function adaptation(dynamic event) list of services ← map(dynamic event.VM id, vm information file ) affected devices, agents ← map(dynamic event.VM id) for reconfiguration required ← decide(i, list of services) 6: 1 <vm i d=" 27 "> 2 < 3 </ s e r v i c e s> 4 < 5 </ s e r v i c e s> 6 < 12723456 s e r v i c e s a u t h o r i s e d d e s t i n a t i o n I P s=" 1 9 2 . 1 6 8 . 1 . 4 " a u t h o r i s e d s o u r c e I P s=" 1 9 2 . 1 6 8 . 1 . 2 , 1 9 2 . 1 6 8 . 1 . 3 " d p o r t=" 22 " name=" s s h " p r o t o=" t c p " r o l e=" s e r v e r " s p o r t=" a l l "> s e r v i c e s a u t h o r i s e d d e s t i n a t i o n I P s=" 1 7 2 . 1 0 . 1 2 4 . 1 9 5 " a u t h o r i s e d s o u r c e I P s=" a l l " d p o r t=" 80 " name=" apache2 " p r o t o=" t c p " r o l e=" s e r v e r " s p o r t=" a l l "> s e r v i c e s name=" s q l "> 7 <c u r r e n t I D S h o s t i p=" 1 7 2 . 1 6 . 9 9 . 3 8 " name=" s u r i c a t a 7 9 " type=" s i g n a t u r e b a s e d " a d d i t i o n a l m o n i t o r i n g="worm" d r o p r a t e= " 5 " > </newIDS> 8 </vm> 70CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN Figure 4 . 4 Figure 4.1 -SAIDS architecture 72CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN that LIDS on the local switch and reconfiguring the traffic distribution between LIDSs on the local switch. 74CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN 80CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN 4.6.1.1.2 Scalability: 82CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN mirrored traffic from the destination node of the VM to the LIDS node exists and if not create it. 84CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN Figure 4 . 3 - 43 Figure 4.3 -Adaptation time breakdown when SAIDS only reconfigures the enforced ruleset inside the LIDS Figure 4 . 4 - 44 Figure 4.4 -Adaptation time breakdown when SAIDS has to start a new LIDS, distribute traffic and create a mirroring tunnel 4. 6 6 .3.3.1 MAD Scalability -Multiple LIDSs: During the first phase of our experiment we focus only on a single Master Adaptation Driver and compute the maximum 86CHAPTER 4. SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN number of LIDSs that it can handle in parallel. The setup of a single MAD instance handling multiple LIDS is depicted in Figure 4.5. Figure 4 . 5 - 45 Figure 4.5 -MAD scalability setup Figure 4 . 6 - 46 Figure 4.6 -MAD response time 4. 6 . 3 . 3 . 2 6332 Scalability of the AM -Multiple MADs: After obtaining the maximum number of LIDSs that a single MAD instance can handle in parallel in our testbed (50 LlDS -460.1 MB of RAM per LIDS in a node with 24GB of RAM) we now study the scalability in the response time of the Adaptation Manager. In our experiment, each AM worker thread needs to adapt all the LIDS belonging to a single MAD (50 LIDS). We instantiate up to 100 AM worker threads. The setup of a single AM instance handling multiple MADs is depicted in Figure4.7. Figure 4 . 7 - 47 Figure 4.7 -AM scalability setup Figure 4 . 8 - 48 Figure 4.8 -AM response time 92CHAPTER 4 . 4 SAIDS: A SELF-ADAPTABLE INTRUSION DETECTION SYSTEM FOR IAAS CLOUD EN Chapter 5 96CHAPTER 5 .Figure 5 . 
1 - 2 3 7 </ i n p u t> 8 <output 9 </ output> 10 </ p o r t> 11 </ a p p l i c a t i o n> 12 <a 15 <i p v a l u e=" 1 17 </ i n p u t> 18 <output 19 </ output> 20 </ p o r t> 21 </ a p p l i c a t i o n> 22 < 5512789101112151171819202122 Figure 5.1 -The AL-SAFE architecture with the Adaptation Manager Figure 5 . 3 -Figure 5 . 4 - 5354 Figure 5.3 -The migration request arrives between two introspections Figure 5 . 5 - 55 Figure 5.5 -LibVMI stack Figure 5 . 6 - 56 Figure 5.6 -Adaptation process flow chart Figure 5 . 7 - 57 Figure 5.7 -Snapshot-Introspection relationship Listing 5 . 2 - 52 Information passed at the switch-level firewall 1 R u l e i n f o ( t a b l e = 2 8 , o v s p o r t = 4 , p r o t o = TCP, p o r t = 2 2 , i p s = [ 1 9 2 . 1 6 8 . 1 . 2 , 1 9 2 . 1 6 8 . 1 . 3 ] , a c t i o n = ALLOW) 106CHAPTER 5 . 3 . 53 AL-SAFE: A SECURE SELF-ADAPTABLE APPLICATION-LEVEL FIREWALL FOR IAA factors, related to each component's functionality, affect it's individual completion time. Consequently, a third question arises: What factors affect the execution time of each component? We discuss the factors per component: 2 . 1 . 21 it calculates tenant and provider associated cost of deploying AL-SAFE Depending on the evaluation objective we compute different metrics: Performance of AL-SAFE: We record the time required for each AL-SAFE component to complete its functionality. We investigate the individual latency of each component: Introspection (VMI), Information Extraction Agent (IEA), rule generators (RGs) and rule insertion. As discussed in Section 5.5.1.1 the factors that affect the introspection time are: (a) size of the introspected file (i.e. snapshot), (b) number of processes and (c) number of sockets. Two of the factors (process and socket numbers) can be indirectly influenced through variation of the requests/second workload parameter (i.e. more requests/second imply more open sockets). The size of the introspected file can be influenced through assigning different memory values to the monitored VM. Figure 5 . 8 - 58 Figure 5.8 -TCP server setup Figure 5 . 9 - 59 Figure 5.9 -TCP client setup Figure 5 . 5 Figure 5.10 -UDP setup Figure 5 . 11 -Figure 5 . 12 - 511512 Figure 5.11 -Migration time with and without adaptation Figure 5 . 13 - 513 Figure 5.13 -Impact of the introspection period on kernel compilation time Figure 5 . 14 - 514 Figure 5.14 -Impact of the introspection period on server throughput Figure 5 . 15 -Figure 5 . 16 - 515516 Figure 5.15 -Request service time for different times in the introspection cycle 5. 6 . 1 . 5 . 3 Figure 5 . 18 - 6153518 Figure 5.18 -Cases of request arrival time with respect to the introspection cycle 5. 6 Figure 5 . 19 - 6519 Figure 5.19 -Inbound TCP connection establishment time Figure 5 . 20 - 520 Figure 5.20 -Outbound TCP connection establishment time Figure 5 . 21 - 521 Figure 5.21 -Inbound UDP round trip time A. 4 . 4 CONTRIBUTIONS 145 de garantir un équilibre entre sécurité, performance et coût à tout moment. Les actions d'adaptation peuvent affecter différents composants et le système devrait pouvoir effectuer ces actions en parallèle. Emulation Emulation is the first proposed technique to allow the system to run a software that mimics a specific set of physical resources. This mechanism was used to enable the usage of console video games on personal desktop machines. In emulation, the assembly code of the guest is translated into host instructions, a technique known as binary translation. 
A dedicated component, the emulator, is responsible for performing the translation and providing isolation between different guests. There are two different translation techniques: static and dynamic. Static binary translation translates all of the guest code into host code without executing it. Dynamic binary translation, on the other hand, performs the emulation at runtime: the emulator fetches, decodes and executes guest instructions in a loop. The main advantage of dynamic binary translation is that, since the translation happens on the fly, it can deal with self-modifying code. Although the performance cost is evident, emulation is very flexible since any hardware can be emulated for a guest's OS. Popular emulators include Bochs.

2.3.2.1 Machine-Level Virtualization

[...] we focus on security monitoring components such as Shield [128], which is an intrusion detection component tailored towards DDoS attacks. Tenants can create their own rules to monitor traffic against specific types of DoS attacks like HTTP or DNS floods. Shield also provides mitigation techniques like rerouting and can be used in combination with WAF for setting up proactive filtering against application-level attacks. Other available services include Certificate Manager for deploying SSL certificates. A full list of services can be found in [129].
• Google: Google Security Scanner [130] is a proactive tool which automatically scans web applications for known vulnerabilities (Flash or SQL injections, outdated libraries, etc.). Tenants can use the results to generate traffic filtering rules that proactively block specific types of requests. The Resource Manager [131] regulates access to resources. Interconnected resources are represented hierarchically and users can set access rights on a group of resources simply by configuring a parent node.

In Table 3.1 we see that for the VM with ID 27 there is a network IDS named suricata79, a host IDS named ossec1 and two different firewalls, one at the edge, named f-ext1, and one inside the local switch, named f-parapide-18. A single VM can be monitored by different types of IDS (host- and network-based).

Table 3.1 - The VM info table
VM ID | Network IDS | Host IDS | External firewall | Switch firewall
27    | suricata79  | ossec1   | f-ext1            | f-parapide-18
29    | suricata65  | ossec4   | f-ext1            | f-parapide-32

The Device info table is used to store device-specific information. The Adaptation Manager uses each device name in order to extract the following information: the location of the device (IP address of the physical node hosting the device) and the type of the device. The Device info table entry for the suricata65 IDS can be found in Table 3.2: the suricata65 network IDS is located on a node with IP address 172.16.99.38 and is a signature-based NIDS.

Table 3.2 - The Device info table
Device name | Location     | Type of device
suricata65  | 172.16.99.38 | signature based

When a dynamic event occurs, the AM uses the information available in the Dependency Database to identify the full list of affected devices. Each time a new monitoring device is instantiated, a corresponding entry with all the necessary information is added by the AM to the two tables.
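As an illustration of how the Adaptation Manager could resolve a dynamic event through the two tables above, here is a minimal sketch. The in-memory dictionary layout and function names are assumptions made for illustration, not the actual Dependency Database implementation.

```python
# Minimal sketch of a Dependency Database lookup (Tables 3.1 and 3.2).
# The data layout and helper names are illustrative assumptions.

VM_INFO = {  # VM id -> monitoring devices attached to that VM
    27: {"network_ids": "suricata79", "host_ids": "ossec1",
         "external_firewall": "f-ext1", "switch_firewall": "f-parapide-18"},
    29: {"network_ids": "suricata65", "host_ids": "ossec4",
         "external_firewall": "f-ext1", "switch_firewall": "f-parapide-32"},
}

DEVICE_INFO = {  # device name -> location (host IP) and detection type
    "suricata65": {"location": "172.16.99.38", "type": "signature based"},
}

def affected_devices(vm_id):
    """Return (name, location, type) for every device monitoring the given VM."""
    result = []
    for role, name in VM_INFO.get(vm_id, {}).items():
        info = DEVICE_INFO.get(name, {})
        result.append((name, info.get("location"), info.get("type")))
    return result

if __name__ == "__main__":
    print(affected_devices(29))
```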
A SELF-ADAPTABLE SECURITY MONITORING FRAMEWORK FOR IAAS CLOUDS

Algorithm 2 - Adaptation when a VM migration occurs
1: function adaptation(VM_network_info)
2:   spawn adaptation thread
3:   list_of_services <- information_parser(VM_network_info.VM_id, vm_information_file)
4:   affected_devices, locations <- information_parser(VM_network_info.VM_id, VM_network_info.source_node, VM_network_info.destination_node, topology.txt)
5:   ...

4.2.1.1.1 Local Intrusion Detection Sensors: LIDSs are used for collecting and analyzing network packets that flow through subsets of virtual switches. The detection technique that is used can either be signature- or anomaly-based. A signature-based technique has a high true positive rate in detecting known attacks, as opposed to an [...]

[Architecture diagram: an Adaptation Manager (with VM info and SLA info), a Master Adaptation Driver with Adaptation Workers and a Mirror Worker, LIDS1 and LIDS2 with their rulesets, local switches carrying mirrored traffic over the management network, a networking infrastructure monitoring probe, and compute nodes hosting Web, DB, DNS and Email VMs.]

Table 4.1 - Events that trigger adaptation
Change category                  | Event              | Origin       | Adaptation action
Virtual infrastructure topology  | VM creation        | Tenant       | {rule update, new LIDS}
                                 | VM destruction     | Tenant       | {rule update}
                                 | VM migration       | Provider     | {rule update, new LIDS}
Performance                      | % packet drop      | Traffic load | {new LIDS}
                                 | Latency            |              | {new LIDS}
                                 | % unused resources |              | {destroy LIDS}
Service                          | Service addition   | Tenant       | {rule update}
                                 | Service removal    |              | {rule update}
Hardware infrastructure topology | Server addition    | Provider     | {rule update, new LIDS}
                                 | Server removal     |              | {rule update, destroy LIDS}

For example, if a topology-related change occurs (e.g. a VM migration), SAIDS checks whether a LIDS monitoring the traffic flowing towards and from the new VM location exists. If a LIDS exists, SAIDS simply reconfigures the enforced ruleset (the rule update action). If a LIDS does not exist, SAIDS instantiates a new LIDS. When a performance degradation occurs, SAIDS opts for a new LIDS instantiation.

Figure 4.2 - Migration time with and without SAIDS (idle VM and VM with memory-intensive workload): 13.9 s for the idle VM and 38.2 s for the memory-intensive VM, identical with and without SAIDS.

The imposed overhead in both cases is 0.0 s, which validates our initial hypothesis that SAIDS imposes negligible overhead on typical cloud operations. A per-phase breakdown of the two different adaptation cases (i.e. ruleset reconfiguration only, and new LIDS with traffic distribution) is shown in Figures 4.3 and 4.4.

[Figures 4.3 and 4.4 residue - per-phase timeline: the Adaptation Manager decides on the adaptation and sends the adaptation and mirror arguments (0.012 s each), the MAD node checks whether the IDS is running and generates its configuration file (0.13 s, 0.01 s), the LIDS rule reload (4.0 s) and the VM resumed on the destination node (13.9 s).]

[Figure 4.8 residue - AM response time (s) for 1 to 100 MADs, broken into the adaptation decision (about 0.28 s), LIDS node connection, sending the adaptation arguments, destination node connection and sending the mirror arguments (between about 0.009 s and 0.18 s).]
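To make Algorithm 2 and the mirroring-tunnel check concrete, the following minimal Python sketch spawns an adaptation thread per migration, resolves the affected LIDSs from a topology description, and creates a mirroring tunnel towards the destination node if one does not exist. The function names, data formats and example addresses are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of the migration-triggered adaptation (Algorithm 2). Assumptions throughout.
import threading

def information_parser(vm_id, source_node, destination_node, topology):
    """topology: list of (node_ip, lids_name) pairs, standing in for topology.txt."""
    return [{"node": node, "lids": lids} for node, lids in topology
            if node in (source_node, destination_node)]

def adaptation(vm_network_info, topology, tunnels):
    def worker():
        devices = information_parser(vm_network_info["VM_id"],
                                     vm_network_info["source_node"],
                                     vm_network_info["destination_node"], topology)
        for dev in devices:
            key = (vm_network_info["destination_node"], dev["lids"])
            if key not in tunnels:           # mirror traffic from the destination node
                tunnels.add(key)             # to the LIDS node if no tunnel exists yet
                print("created mirroring tunnel", key)
    threading.Thread(target=worker).start()  # spawn adaptation thread (line 2)

# hypothetical example call
adaptation({"VM_id": 27, "source_node": "172.16.99.40", "destination_node": "172.16.99.41"},
           topology=[("172.16.99.40", "suricata79"), ("172.16.99.41", "suricata65")],
           tunnels=set())
```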
Table 4.2 - Resource consumption of the AM component
Number of MADs | Usr%  | Sys% | CPU%  | Memory (MB)
10             | 17.19 | 2.29 | 19.57 | 188.88
20             | 23.20 | 3.26 | 26.46 | 188.81
40             | 25.0  | 3.60 | 29.40 | 188.69
50             | 26.93 | 3.76 | 30.69 | 188.31
100            | 28.4  | 3.97 | 32.43 | 188.93

Table 5.1 - Events that trigger adaptation
Change category                  | Event            | Origin           | Adaptation action
Virtual infrastructure topology  | VM creation      | Tenant           | {add rules}
                                 | VM destruction   | Tenant, Provider | {delete rules}
                                 | VM migration     | Provider         | {add & delete rules}
Service list                     | Service addition | Tenant           | {add rules}
Service list                     | Service removal  | Tenant           | {delete rules}

[...] table 16 on the virtual switch of the compute node. The white-list of this scenario is shown in Listing 5.3. The white-list states that the process named tcp_server is allowed to accept incoming connections.

Listing 5.3 - VM 25 white-list
<?xml version="1.0" encoding="UTF-8"?>
<firewallRules xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="language.xsd">
  <application name="tcp_server">
    <port num="80" proto="tcp">
      <input action="ACCEPT" conntrack="NEW/ESTABLISHED"> </input>
    </port>
  </application>
  <application name="tcp_client">
    <port num="0" proto="tcp">
      <output action="ACCEPT" conntrack="NEW/ESTABLISHED"> </output>
    </port>
  </application>
  <application name="udp_r">
    <port num="68" proto="udp">
      <input action="ACCEPT" />
    </port>
  </application>
</firewallRules>

Table 5.2 - Resource consumption of the introspection component
Application | Real (s) | Usr (s) | Sys (s) | CPU% | Memory (MB)
Apache      | 13.6     | 5.04    | 2.21    | 53.6 | 2193
Iperf       | 11.9     | 3.75    | 1.60    | 45   | 2193

Résumé
The main characteristics of infrastructure-as-a-service (IaaS) clouds, such as instant elasticity and the automatic provisioning of virtual resources, make these clouds highly dynamic. This dynamic nature translates into frequent changes at different levels of the virtual infrastructure. Given the critical, and sometimes confidential, nature of the information processed in tenants' virtual infrastructures, security monitoring is an important concern for both tenants and the cloud provider. Unfortunately, dynamic changes impair the ability of the security monitoring system to successfully detect attacks targeting virtual infrastructures. In this thesis, we have designed a self-adaptable security monitoring system for IaaS clouds, built to adapt its components according to the different changes that may occur in a cloud infrastructure. Our system is instantiated in two forms targeting different security devices: SAIDS, a scalable network intrusion detection system, and AL-SAFE, an introspection-based application-level firewall. We evaluated our prototype in terms of performance, cost and security, for tenants as well as for the provider. Our results show that our prototype imposes a tolerable additional cost while providing good detection quality.

Abstract
Rapid elasticity and automatic provisioning of virtual resources are some of the main characteristics of IaaS clouds.
The dynamic nature of IaaS clouds is translated into frequent changes at different levels of the virtual infrastructure. Due to the critical and sometimes private information hosted in tenant virtual infrastructures, security monitoring is of great concern for both tenants and the provider. Unfortunately, the dynamic changes affect the ability of a security monitoring framework to successfully detect attacks that target cloud-hosted virtual infrastructures. In this thesis we have designed a self-adaptable security monitoring framework for IaaS cloud environments that adapts its components based on the different changes that occur in a virtual infrastructure. Our framework has two instantiations focused on different security devices: SAIDS, a scalable network intrusion detection system, and AL-SAFE, an introspection-based application-level firewall. We have evaluated our prototype focusing on performance, cost and security for both tenants and the provider. Our results demonstrate that our prototype imposes a tolerable overhead while providing accurate detection results.

Services and associated rule files:
mail: emerging-pop3.rules, emerging-smtp.rules
apache2, nginx: http-events.rules, emerging-web_server.rules, emerging-web_specific_apps.rules
sshd: emerging-shellcode.rules, emerging-telnet.rules

[...] of a cloud infrastructure creates the possibility that legitimate virtual machines are co-located with virtual machines controlled by attackers. Consequently, attacks against cloud infrastructures can originate from both inside and outside the cloud environment. A successful attack could allow attackers to access and manipulate the data hosted in a cloud, including legitimate user account credentials, or even to gain complete control of the cloud infrastructure and turn it into a malicious entity. Although traditional security techniques such as traffic filtering or traffic inspection can provide a certain level of protection against attackers, they are not sufficient to counter the sophisticated threats that target virtual infrastructures. In order to provide a security solution for cloud environments, an automated autonomous security architecture that integrates heterogeneous security and monitoring tools is required.

First, the VMI periodically introspects the memory of the monitored guest to obtain the list of processes attempting to access the network. The time between two consecutive introspections is known as the introspection period and it is defined in the SLA. Second, the IEA extracts the necessary information for generating filtering rules and propagates it to the two Rule Generators. Finally, the RGs create the switch-level and edge firewall rules and inject them into the firewalls.
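The following minimal Python sketch illustrates the three-stage pipeline just described (VMI, IEA, rule generators). Every function body below is a placeholder assumption for illustration; it is not AL-SAFE's actual code and does not use real LibVMI calls.

```python
# Minimal sketch of the periodic introspection pipeline: VMI -> IEA -> rule generators.
# All bodies are placeholder assumptions.
import time

def introspect_guest(vm_id):
    """Placeholder for VMI: return processes attempting to access the network."""
    return [{"process": "apache2", "proto": "tcp", "port": 80, "direction": "input"}]

def extract_rule_info(processes, white_list):
    """IEA step: keep only white-listed processes and the fields needed for rules."""
    allowed = {entry["application"] for entry in white_list}
    return [p for p in processes if p["process"] in allowed]

def generate_rules(rule_info):
    """RG step: produce one edge-firewall and one switch-level rule per entry."""
    edge = ["ALLOW {}/{} ({})".format(r["proto"], r["port"], r["process"]) for r in rule_info]
    switch = ["table=28 {} dport={} action=ALLOW".format(r["proto"], r["port"]) for r in rule_info]
    return edge, switch

def pipeline(vm_id, white_list, introspection_period, rounds=1):
    for _ in range(rounds):
        info = extract_rule_info(introspect_guest(vm_id), white_list)
        edge_rules, switch_rules = generate_rules(info)
        print("edge:", edge_rules, "switch:", switch_rules)  # stand-in for rule insertion
        time.sleep(introspection_period)

pipeline(25, [{"application": "apache2"}], introspection_period=0.1)
```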
Topology-Related Changes

Depending on the type of topology-related change (VM creation, deletion or migration), different steps are followed.

VM deletion: In this case no introspection of the deleted VM is required and no new rules are generated; the IEA is responsible for deleting the rules that filter the traffic towards and from the deleted VM.

VM creation: In this case, once the VM is set to an active state on the host node, the process for service-related changes is followed. AL-SAFE currently supports two different security policies that can be applied to VM creation: proactive and reactive rule generation (a sketch of the two policies is given below). In the case of proactive rule generation, a preliminary phase is executed before the VM enters an active state on the host node: rules that filter the traffic for the white-listed services are generated by the two RGs and inserted in the two firewalls. The proactive policy enables network connectivity for the white-listed services even before the VMI component introspects the memory of the deployed guest, thus preventing any performance degradation of the network-critical tenant applications. Unfortunately, it also generates filtering rules for services that might not yet be activated, thus creating a potential entry point for the attacker (i.e. the attacker might identify the list of open ports and start sending malicious traffic towards the monitored guest through these ports). In the reactive security policy, no preliminary phase is executed and all traffic directed towards and from the newly created VM is blocked until introspection finishes and the rules are generated and inserted in the two firewalls.

VM migration: In the case of a VM migration only the rulesets of the switch-level firewalls at the source and destination nodes need to be adapted. Indeed, a VM migration should be transparent for the edge firewall. Since AL-SAFE follows a periodic
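As a sketch of the two VM-creation policies described above (proactive versus reactive), the following illustrative Python fragment shows when rules become active under each policy. The function names and rule format are assumptions.

```python
# Illustrative comparison of the proactive and reactive rule-generation policies
# applied at VM creation. Helper names and the rule format are assumptions.

def rules_from_white_list(white_list):
    return ["ALLOW {}/{} for {}".format(s["proto"], s["port"], s["application"]) for s in white_list]

def on_vm_creation(vm_id, white_list, policy, firewalls):
    if policy == "proactive":
        # Rules for white-listed services are inserted before the VM becomes active,
        # so network-critical applications see no degradation, at the price of
        # opening ports for services that may not be running yet.
        firewalls.extend(rules_from_white_list(white_list))
    elif policy == "reactive":
        # All traffic stays blocked until the first introspection completes.
        firewalls.append("DROP all traffic to/from VM {} until first introspection".format(vm_id))

fw = []
on_vm_creation(25, [{"application": "tcp_server", "proto": "tcp", "port": 80}], "proactive", fw)
print(fw)
```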
01772698
en
[ "spi.meca.mema" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01772698/file/Perrin_J%20Mater%20Sci_2017_post%20print.pdf
D Perrin R Léger B Otazaghine P Ienny

Hyperelastic behavior of modified sepiolite/SEBS thermoplastic elastomers

Keywords: thermoplastic elastomer, SEBS, hyperelastic behavior, tear strength, sepiolite, dip-coating

Thin elastomer films of styrene-ethylene-butylene-styrene block copolymer (SEBS) filled with sepiolite nanofibers were prepared by a dip-coating process. To increase the performance of the SEBS/sepiolite elastomer, a new strategy of surface modification of sepiolite by SEBS polymer chains has been developed. In a first part, the surface modification of sepiolite was characterized by FTIR and TGA. In a second part, the mechanical properties of the filled SEBS films were assessed. Measurements of tensile properties and tear strength were carried out to evaluate the impact of the sepiolite modification. These results are discussed taking into account the filler dispersion and the quality of the SEBS/sepiolite interface. The surface modification of the sepiolite nanofibers shows an interesting improvement of the tear strength without major modification of the intrinsic hyperelastic behavior of the SEBS matrix.

1 Corresponding author: [email protected]

Introduction

Thermoplastic elastomers such as polyolefin block copolymers present a wide range of mechanical properties depending on their composition and morphology. From a microscopic point of view, they can be seen as composite materials with separated soft and rigid segments [START_REF] Matzeu | A temperature sensor based on a MWCNT/SEBS nanocomposite[END_REF][START_REF] Juárez | Improvement of thermal inertia of styrene-ethylene/butylene-styrene (SEBS) polymers by addition of microencapsulated phase change materials (PCMs)[END_REF][START_REF] Shi | Preparation of functionalized graphene/SEBS-g-MAH nanocomposites and improvement of its electrical, mechanical properties[END_REF]. Hard blocks form cross-linking nodules giving the material its elastomeric properties. Two physical cross-linking levels are observed: the rigid microdomains and the entanglement of the soft segments. Thus, by controlling the chemistry of the copolymer, the nature of the rigid and soft blocks and their ratio, it is possible to obtain materials with many different mechanical properties, such as high elasticity, low Young's modulus and high elongation rate [START_REF] Buckley | Elasticity and inelasticity of thermoplastic polyurethane elastomers: Sensitivity to chemical and physical structure[END_REF], that can satisfy industrial demands and applications such as shoe insoles, medical bags, resuscitators, etc. Elastomers, and particularly thermoplastic elastomers, present a so-called Mullins effect characterized by a particular aspect of the mechanical response of filled rubbers in which the stress-strain curve depends on the maximum loading previously encountered [START_REF] Dorfmann | A pseudo-elastic model for the Mullins effect in filled rubber[END_REF][START_REF] Ogden | A pseudo-elastic model for the Mullins effect in filled rubber[END_REF][START_REF] Mullins | Softening of rubber by deformation[END_REF][START_REF] Mullins | Theoretical model for the elastic behavior of filler-reinforced vulcanized rubbers[END_REF].
Recent work [START_REF] Chagnon | Development of new constitutive equations for the Mullins effect in rubber using the network alteration theory[END_REF] proposes to evaluate the theory of network alteration for the Mullins effect by reconciling both physical and phenomenological approaches using finite element applications according a modified network alteration theory of Marckmann et al. [START_REF] Marckmann | Comparison of hyperelastic models for rubber-like materials[END_REF]. Actually, specific work input is analyzed by the integration of the stress-strain curve: the simple numerical use of the strain energy densities allows some correlations between the stiffness reduction associated to the Mullins effect, the viscoelastic response and physical considerations on the material [START_REF] Jaudouin | Incorporation of Organomodified Layered Silicates and Silica in Thermoplastic Elastomers in Order to Improve Tear Strength[END_REF]. The objective of this work is to develop a material than can deform importantly at very low stress without hysteresis and softening effect and presenting good tear strength. However, it remains difficult to obtain a material having both glassy and rubbery properties [START_REF] Buckley | Elasticity and inelasticity of thermoplastic polyurethane elastomers: Sensitivity to chemical and physical structure[END_REF]. Figure 1 adapted from Bouchereau et al. [START_REF] Sell | Génie mécanique des caoutchoucs et les élastomères thermoplastiques, chapter[END_REF] summarizes our objective and the difficulties to achieve this goal by only controlling the chemistry of the thermoplastic elastomer. Actually, the properties of elastomers vary in function of certain parameters which depend on the vulcanization. It is possible to intercede at several levels in order to alter the crosslinking, for example by varying the vulcanization time. The maximum strain that can be reached by a vulcanized elastomer depends on the average distance between two junctions in the macromolecular network, usually quantified by the crosslinking density. Some mechanical characteristics are improved significantly with the crosslinking density such as dynamic or static module. We can understand easily that greater flexibility of the chains and thus high deformability of macroscopic material is observed for a low crosslinking density. However other properties also depend on the degree of crosslinking such as tear strength and fatigue strength, dissipation, hardness. But as these dependencies often develop antagonistically, it is difficult to find an optimal degree of crosslinking compared to the different criteria of behavior. Most studies aim at improving the elastic modulus, elongation at break or work of fracture by incorporating nanoparticles to rubber matrices [START_REF] Aso | The influence of surface modification on the structure and properties of a nanosilica filled thermoplastic elastomer[END_REF][START_REF] Finnigan | Morphology and properties of thermoplastic polyurethane nanocomposites incorporating hydrophilic layered silicates[END_REF][START_REF] Hassan | 1 -Soft Materials -Properties and Applications[END_REF][START_REF] Jaudouin | Physico-chimie de matériaux à base d'élastomères modifiés hyperélastiques[END_REF]. But few of them consist in combining high elasticity, low stress and good tear strength. Little work focuses on the possible combination of high elastic properties, high elongation, low Young's modulus and an improvement of tear strength. 
Works, based on nanoparticles, enhance the fact that using nanofillers, such as nanoclay, leads to an improvement of tear strength and elastic modulus [START_REF] Haraguchi | Polymer-Clay nanocomposites Exhibiting Abnormal Necking Phenomena Accompanied by Extremely Large Reversible Elongations and Excellent Transparency[END_REF]. Actually, the literature presents the fact that nanoclay can act as an anti-plastic barrier on the material thanks to a combination of chemical and physical interactions with the matrix: the nanoparticles will interact with the linear polymer sequences, leading to an improvement in tear resistance and an extremely large reversible elongation [START_REF] Chen-Yang | High improvement in the properties of exfoliated PU/clay nanocomposites by the alternative swelling process[END_REF][START_REF]Polyurethane composition having improved tear strength and process for preparation thereof Patent[END_REF]. It also reported that incorporation of nanoparticles leads to a decrease of elongation at break in case of poor dispersion. This observation is to consider with care for materials under high strain [START_REF] Vuillaume | Interactions élastomères -charges. Mécanismes de déplacement des molécules adsorbées et co-adsorbées[END_REF][START_REF] Wagner | Reinforcing Silicas and Silicates[END_REF]. Actually, those fillers act as rigid microdomains and at the end, one has to face the same difficulties as described previously. A solution is to modify the surface of the nanofillers to reach the objective. Recent works show the possibility of improvement of the elongation at break properties of rubbers by the addition of lamellar fillers. Zhenjung et al. studied the influence on the mechanical properties of a styrene-isoprene-butadiene rubber in presence of a suspension of multi-layered organophilic montmorillonite in the toluene or the cyclohexane [START_REF] Zhenjung | Effect of the Addition of Toluene on the Structure and Properties of Styrene-Isoprene-Butadiene Rubber/Montmorillonite Nanocomposites[END_REF]. They noticed that fillers are better dispersed by the toluene and give greatly dispersed nanocomposites based elastomer. They observed an improvement of the elongation at break of the rubber from 500 to 700 % for an addition of 3 wt% of montmorillonite in the toluene. Chen-Yang et al. as for them studied the influence of a lamellar nanofiller based organomodified montmorillonite by aminolauric acid (ALA) on the mechanical behavior of a thermoplastic urethane (TPU) increasing elongation at break from 600 to 1400% [START_REF] Chen-Yang | High improvement in the properties of exfoliated PU/clay nanocomposites by the alternative swelling process[END_REF]. Deng and al worked on a composite system based poly (D, L-lactide) or PDLLA with precipitated (NH 4 ) 2 HPO 4 apatite filler modified in a solution of Ca(NO 3 ) 2 in presence of SnOct 2 . By using 10 wt% nanoapatite, the elongation at break of the composite increased from 20 to 1550% [START_REF] Deng | Mechanism of ultrahigh elongation rate of poly(D,L-lactide)-matrix composite biomaterial containing nano-apatite fillers[END_REF]. Finally, the works of Haraguchi and al. 
highlighted that the incorporation of an important quantity of hydrophilic nanosilica based silsesquioxane, sepiolite, nanoclays, nanotubes of carbons (between 5.5 and 23 wt%) in a hydrophobic matrix of poly (2-methoxyethyl acrylate) (PMEA) allowed to obtain very interesting mechanical and optical properties, in particular complete reversibility for deformation up to 3000% without damages by mechanical necking [START_REF] Haraguchi | Polymer-Clay nanocomposites Exhibiting Abnormal Necking Phenomena Accompanied by Extremely Large Reversible Elongations and Excellent Transparency[END_REF]. The present work concerns the new functionalization of sepiolite nanoparticle with thermoplastic elastomer based styrene ethylene butadiene styrene (SEBS). The study will contribute to better understand how the proportion of nanoparticles and the interface treatment influence several mechanical properties by comparison with cast-calendared and dip coating processes. Materials & Methods Materials The thermoplastic elastomer used in this study is a SEBS Thermoflex 10H730 from Plastic Technology Service LTD. This triblock copolymer (Figure 2) is a very soft grade (11 Shore A) presenting an important elongation at break and a low modulus. The weight ratio of styrene/ethylene-butadiene is about 5/95, and the weight ratio of ethylene/butadiene in the soft segment is about 70/30. This composition has been determined by comparing the DSC thermogram to literature [START_REF] Jaudouin | Incorporation of Organomodified Layered Silicates and Silica in Thermoplastic Elastomers in Order to Improve Tear Strength[END_REF][START_REF] Ma | A comparative study of effects of SEBS and EPDM on the water tree resistance of cross-linked polyethylene[END_REF][START_REF] Wang | Structure and properties of SEBS/PP/OMMT nanocomposites[END_REF]. The SEBS polymer grafted with maleic anhydride (SEBS-g-MA) FG1901W was supplied by Kraton. The pristine sepiolite (noted S1 in the text) used was a high purity natural sepiolite Pangel S9 from Tolsa. (3-Aminopropyl)triethoxysilane was supplied by SIGMA-ALDRICH and was used as received. Similarly to palygorskite, the structure of sepiolite is highly porous. This hydrated magnesium silicate (Si 12 Mg 8 O 30 (OH) 4 (H 2 O) 4 ,8H 2 O) is based on SiO 4 tetrahedra layers, with an inversion of the apical ends every six units. These layers are interconnected by MgO 6 octahedra, thus creating nanochannels of 3.5×10.6 Å 2 in cross-section [START_REF] Zhan | Shape-memory poly(p-dioxanone)-poly(ɛcaprolactone)/sepiolite nanocomposites with enhanced recovery stress[END_REF]. Two types of water molecules are present in the structure: water coordinated to Mg 2+ ions at the edges of the octahedral layers (H 2 O coord ) and zeolithic water in the channels (H 2 O zeol ), hydrogenbonded to coordinated water molecules. Functionalization of sepiolite To improve mechanical performances of sepiolite nanofibers /SEBS films, a new strategy for sepiolite functionalization was developed. This strategy based on a two-step reaction is presented by the Figure 3. The first step is the sepiolite treatment by APTS to introduce amine groups at the filler surface. This silylation reaction is not carried in anhydrous condition and the auto-condensation of the silanes units leads to the formation of a multilayer grafting. The second step is the functionalization of the sepiolite nanofibers with SEBS chains by reaction between the amine groups and anhydride maleic groups of a SEBS-g-MA copolymer. 
This reaction leads to the formation of an amide group and so to a covalent bond between the sepiolite surface and the SEBS chains. Functionalization of sepiolite by amino groups (S2) Into a 250 ml flask fitted with a condenser were introduced 10 g of sepiolite Pangel S9 (S1), 1 g (4.5×10 -3 mol) of (3-aminopropyl)triethoxysilane (APTES) and 100 ml of an ethanol/water (90/10) solution. The mixture was then stirred and heated at solvent reflux for 15 hours. The mixture was next centrifuged (speed: 5000 rpm) to eliminate the liquid phase and washed three times with acetone. Finally, the obtained sepiolite S2 was dried under vacuum before characterization. Functionalization of sepiolite by SEBS polymer chains (S3) Into a 250 ml flask fitted with a condenser were introduced 10 g of filler, 1 g of SEBS-g-MA and 100 ml of toluene. The mixture was then stirred and heated at solvent reflux for 15 hours. The mixture was next centrifuged (speed: 5000 rpm) to eliminate the liquid phase and washed two times with toluene and acetone. Finally, the filler was dried under vacuum before characterization. Preparation of the SEBS formulations for the dip coating process In a first step the SEBS elastomer was dissolved in toluene under vigorous stirring. When the polymer dissolution is complete, sepiolite (S1, S2 or S3) and SEBS-g-MA can be added. The different formulations tested in this study are summarized in Table 1. The machine allows preparing SEBS films from the different formulation prepared (Table 1). The process can be divided in three different steps (Figure 4). In a first time, the glass pipe is dipped in the SEBS formulation with a controlled speed. Then, when the lower limit position has been reached, the tube is removed at controlled speed and acceleration. Finally, the tube is put in a horizontal position and rotated during a given time to dry the SEBS film. The different parameters of these three steps define a cycle. The type and number of the cycles affect the quality of the obtained elastomer films. The dip-coating machine is composed of several electric motors and sensors. The global process is the combination of different operations and for each a motor (Servo HITEC HS-805MG with a torque of 24.7 kg/cm for 6.0 V, 25 rpm and maximal speed of 2 cm/s) using a PIC microcontroller (Microship) is used to obtain a specific displacement. The first one is a DC motor which drives a ball screw to obtain a vertical displacement of the tube. The second one is a servomotor which allows the tube to switch from a horizontal to a vertical position. The third one is a DC motor which allows the rotation of the tube via a pulley-belt system. The different sensors (Honeywell) allow the control of the vertical movement and the horizontal position of the tube. To obtain a SEBS film, the apparatus is programed with values of rate of descent, rate and deceleration of ascent, duration for the dry step and number of cycles. As a comparison, SEBS was also cast-calendared with a laboratory-scale extruder Polylab system composed of a HAAKE RheoDrive4 motor coupled with a HAAKE Rheomex 19/25 OS single screw extruder with a Maddock mixer equipped with a flat die. Films of 0.4 mm thick are obtained. Effect of acceleration on the thickness of the SEBS films The first tests showed that the obtained films did not present the same thickness throughout their length. To reduce these differences of length different values for the deceleration parameter were tested. 
Figure 5 shows the influence of the deceleration value on the thickness gradient of the obtained film. The lowest thickness gradients were obtained for deceleration values between 25 mm/min² and 30 mm/min². These velocity gradients correspond to thickness gradients of about 5 µm/cm.

Number of cycles. For a given formulation the number of cycles determines the thickness of the obtained film. The thickness measurements were obtained with a micrometer and the uncertainties were estimated on 5 samples. The formulations used for this study give a layer with a thickness of about 100 µm for each cycle. For this study, the number of cycles was fixed at 5, for an overall thickness of 500 ± 50 µm for the obtained films.

Effect of the concentration of the SEBS formulation on the thickness of the obtained films. The SEBS concentration of the formulation used for the dip coating process is directly related to the thickness of the obtained film. Figure 6 presents a linear evolution of the film thickness as a function of the SEBS concentration.

Characterizations

2.5.1 Thermal characterization. Thermal characterization was carried out by thermogravimetric analysis (Perkin Elmer Pyris-1 instrument) on 10 mg samples, under nitrogen. Samples were first heated at 10°C/min from 25 to 110°C, followed by an isotherm at 110°C for 10 min, in order to evacuate all adsorbed water molecules. They were then heated again from 110 to 900°C, at 10°C/min, in order to eliminate the grafted groups.

FTIR characterization. IR spectra were measured using a Bruker IFS66-IR spectrometer at room temperature, where 32 scans at a resolution of 4 cm⁻¹ were signal averaged.

XRD characterization. Morphologies were analyzed using a Bruker D8 diffractometer with CuKα radiation. Data were collected between 2 and 33° in steps of 0.02° using an X-ray generator with λ = 0.15406 nm, at 40 kV operating voltage and 20 mA current.

TEM characterization. The morphology of the composites was examined by transmission electron microscopy (TEM). Samples were cut using a LEICA UC7 ultramicrotome at -156°C. Sections of approximately 70 nm were observed with a PHILIPS CM 120 TEM at 80 kV.

Mechanical properties. Cyclic tensile tests and monotonic tear strength tests were performed at 23°C on a ZWICK TH010 universal testing machine with a load cell capacity of 500 N at a speed of 500 mm/min. Five specimens used for the tensile and tear analyses were cut from the films obtained by the dip coating process (Figure 6). Note that for the tear strength samples, the dimensions given by the French standard (NF EN 12310) [START_REF]Flexible sheets for waterproofing -Determination of resistance to tearing -Part 2: plastic and rubber sheets for roof waterproofing[END_REF] were divided by two because of the sizes of the films produced by the dip coating procedure. Tensile samples were submitted to successive load/unload cycles at elongations of 300%, 450% and 600% (8 cycles in each case), and a final load up to fracture (Figures 7 and 8). The stress at the first load at 600% is evaluated, as well as the stress and elongation at break. The elastic modulus is taken as the slope of the initial tangent to the stress/strain curve at the first load. Let W_i be the elastic strain energy density of the i-th load, and H_8 the elastic strain energy released after the last unload (Figure 9). A relative stabilization ratio (SR) is calculated using Equation 1 and characterizes the Mullins effect. A low value of SR corresponds to a low stress softening. A relative viscoelastic ratio (VR) is also determined from the last cycle using Equation 2 and quantifies the viscoelasticity of the stabilized material. Ideally, a low VR ensures a quasi-elastic behavior of the material.
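As an illustration of how the strain energy densities W_i and H_8 can be obtained from recorded load/unload branches, here is a minimal Python sketch based on numerical integration of the stress-strain curve. The trapezoidal rule and the placeholder data below are assumptions for illustration, not the authors' procedure.

```python
# Estimating elastic strain energy densities from stress-strain branches (assumed procedure).
import numpy as np

def strain_energy_density(strain, stress):
    """Area under a stress-strain branch (work per unit volume, here in MPa)."""
    return float(np.sum((stress[1:] + stress[:-1]) * np.diff(strain)) / 2.0)

# placeholder branches of one load/unload cycle up to 300% elongation
strain = np.linspace(0.0, 3.0, 50)
stress_load = 0.10 * strain                        # hypothetical loading response
stress_unload = 0.08 * strain                      # hypothetical unloading response
W_1 = strain_energy_density(strain, stress_load)   # energy stored during the load
H = strain_energy_density(strain, stress_unload)   # energy released on unloading
print(W_1, H)
```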
A relative stabilization ratio (SR) is calculated using The tear strength (T) as defined in the French standard NF EN 12310-2 and the dissipated surface energy to initiate and propagate the crack (E dis ) are also evaluated for all specimens (Equation 3 and Equation 4). These parameters are obtained by dividing respectively the maximal strength (F max ) by the thickness (e) and the dissipated energy (W dis ) by the cross section (S), which is the area of the resistive ligament of the tear strength samples (Figure 7), in order to compare the samples (Figure 10). Results and discussions Surface modification of sepiolite nanofibers In order to check to what extent sepiolite has been modified, S1, S2 and S3 samples have been characterized by TGA and IR spectroscopy. As detailed by several authors [START_REF] Kuang | Nanostructured Hybrid Materials Formed by Sequestration of Pyridine Molecules in the Tunnels of Sepiolite[END_REF][START_REF] Tartaglione | Thermal and morphological characterization of organically modified sepiolite[END_REF], the heating of pristine sepiolite reveals a multi-step dehydration process, corresponding to the loss of zeolitic water first and then of coordinated water. The zeolitic water is lost in one step, between room temperature and 100°C, while the coordinated water is lost in two steps, between 100 and 300°C first and then between 300 and 600°C. Another step is then observed, from 800°C, corresponding to the dehydroxylation of sepiolite anhydride which loses its structure, resulting in the formation of enstatite and silica. Thermal characterization of pristine sepiolite and both modified sepiolites was carried out by TGA, following the protocol described in the experimental section. The isotherm step at 110°C was used to dry samples so as to avoid any disturbance of the results by the weight loss due to the zeolitic water (Figure 11a). During this step, pristine sepiolite undergoes a 6% weight loss (zeolitic water loss). In the case of organo-modified sepiolites the weight loss is lower compared to pristine sepiolite: 4.1 and 2%, respectively. This can be explained by (i) a lower amount of zeolitic water in the organo-modified sepiolite and/or (ii) the increase of the molecular weight of modified sepiolite due to the presence of organic molecules. Considering that all the samples are in the same hydration state after the first step (loss of the zeolitic water), the comparison of their behavior between 110 and 900°C (Figure 11b) allows checking to what extent the sepiolites have been modified. The first two steps observed, between 110 and 800 °C on Figure 11b, for pristine sepiolite (straight line) corresponds to a weight loss of 7.4 wt%. They can be attributed to the loss of coordinated water (dehydration). Then, after 800 °C, dehydroxylation occurs. Between 110 and 800 °C, the modified sepiolites show significantly different behaviors from that of pristine sepiolite but also one from the other. Their behavior is however similar to that of pristine sepiolite above 800 °C, probably corresponding also only to dehydroxylation. Between 110 and 800°C the total weight loss of S2 and S3 samples is respectively 10.2 and 18.4 wt%, i.e., 2.8 and 11 wt% more than that of pristine sepiolite. This is a first indication that the sepiolites have been modified. Furthermore, the more significant weight loss for S3 occurs between 380 and 400°C corresponding to the same thermal degradation than pure SEBS-g-MA (Figure 11b). 
In order to prove the grafting of the SEBS chains at the surface of sepiolite, the S3 has been analyzed by FTIR spectroscopy and its spectra compared to that of SEBS and SEBS-g-MA polymers (Figure 12). The modified sepiolite spectrum shows the presence of a signal between 2830 and 2990 cm -1 (Figure 12a) characteristic of the SEBS chains which can be observed for the spectra of SEBS and SEBS-g-MA. A signal centered at 1660 cm -1 can also be observed for the S3 sample (Figure 12b) and was attributed to the amide and imide functions formed after reaction between the amine functions of the S2 sample and the anhydride functions of the SEBS-g-MA chains. This signal which does not appear for the SEBS and SEBS-g-MA polymers proves the grafting of the SEBS chains by the formation of a covalent bond. and by Roy et al. [START_REF] Benlikaya | Preparation and characterization of sepiolite-poly (ethyl methacrylate) and poly (2-hydroxyethyl methacrylate) nanocomposites[END_REF][START_REF] Roya | Novel in situ polydimethylsiloxane-sepiolite nanocomposites: Structure-property relationship[END_REF]. Unlike other clay based-montmorillonite (MMT) [START_REF] Maiti | Structure and properties of some novel fluoroelastomer/clay nanocomposites with special reference to their interaction[END_REF], where individual tetrahedral and octahedral layers can easily be separated, sepiolite exhibits a structure where individual TOT (tetrahedral octahedral tetrahedral) layers are strongly bonded. Unlike smectite clays, here fiber bundles or aggregates get separated into nanometer dimension when these fillers are dispersed into the polymer matrix. Relative dispersion of the clay refers to a good isolation of the individual fibers from each other. As shown in the Figure 13, almost disappearance of the 110 peak of sepiolite in XRD patterns of nanocomposites is considered as an evidence for good dispersion of the main fraction of sepiolite fibers within SEBS matrix. In spite of our best efforts by various techniques, the layers in sepiolite could not be separated earlier. But in the case of in situ synthesis of nanocomposites, greater extent of dispersion is achieved. TEM studies A reasonably good dispersion of neat sepiolite is observed in SEBS (Figure 14a), with few clusters of higher density of particles (dark phases) and aggregates of size 0.2-0.5µm (dark needles). In SEBS/S3 sample, bigger aggregates (1µm) can be observed, while the dispersion of sepiolite in clusters is good (Figure 14b). Clusters are fewer in number but larger in size. The surface modification of sepiolite seems to be in favour of this clustering effect and this is confirmed the XRD decrease of peak intensity showing a more important amorphous/crystalline sepiolite ratio. Regardless of these clusters, a homogeneous zone is indicative of a good dispersion of sepiolite. To the authors, such phenomenon has not been reported yet, but could be explained by a competition of affinity between the SEBS grafted molecule and the SEBS matrix which should lead to a good dispersion of sepiolite or another SEBS grafted molecule which lead to a clustering effect. Mechanical properties Figure 15 and Figure 16 display the shape of typical stress/strain curve and tear strength/displacement curve respectively. All the data are gathered in Table 2 and Table 3. Concerning the processing, casted and dip-coated SEBS properties can be compared. 
It appears that dip-coating gives SEBS the ability to be more deformable and flexible at low stress, with a moderate Mullins effect and viscous behavior. Moreover, the resistance to tear of the dip-coated SEBS is greatly improved compared with the cast SEBS. If we compare the formulations obtained by the dip-coating process, it can be noted that the incorporation of untreated sepiolite (S1) contributes to an improvement of the tear strength and the dissipated energy, but also to an increase of the Mullins effect and a decrease of the ability of the elastomer to deform. The introduction of amine groups on the sepiolite surface (S2) without an additional interfacial agent slightly improves the resistance to tear of the material, from 3.3 N/mm (pristine SEBS) to 3.7 N/mm (S2), but the mechanical properties, such as the stress at 600%, ultimate stress and strain, are still degraded. Adding SEBS-g-MA to the S2 sepiolite preserves the initial behavior of SEBS (no significant differences in VR, SR, ultimate stress and strain values) while significantly improving tear strength and energy dissipation. As expected, an important increase in resistance to tear is observed for SEBS with grafted sepiolite (S3), but it is also responsible for an important increase in the Mullins effect. The increase in ultimate stress (crystallization phenomenon) is not an issue since the stress at 600% is not impacted. However, the observed improvements with grafted sepiolite could be further enhanced by a better dispersion of sepiolite in the matrix. The elastic modulus is not strongly impacted by the filler or its treatment. The whole set of results is summarized in Figure 17. However, it is important to note that these improvements attributed to the functionalization process of the sepiolite can be further improved by a better dispersion of sepiolite in the matrix.

Conclusion

An ultrahigh tear resistance and elongation rate of the SEBS-matrix nanocomposite elastomer with ultra-fine acicular sepiolite nanoparticles as fillers were found in the present study by a dip-coating process. The improvement of these performances is not made to the detriment of the flexibility and of the elongation at break of the material, which remained constant. The incorporation of functionalized nanoclay improves the dissipated energy during a tear test by 40%, without significantly impacting the stabilization and viscosity ratios of pure SEBS. This strong increase is the result of a new process of stoichiometric compatibilization of the sepiolite with the interface agent (SEBS-g-MA), followed by the incorporation of the functionalized nanoparticles within the SEBS matrix. The linear chains of the SEBS matrix material were cross-linked through the ultra-high surface energy of the acicular sepiolite nanoparticles. These crosslinked SEBS structures resulted in a high fixation strength between SEBS molecular chains and nano-sepiolite particles through a larger number of additional cross-link junctions; this led to a quasi-identical elastic and plastic deformation behavior of the composites when compared with the pure SEBS polymer.

Figure 1: Influence of the cross-linking density on different mechanical properties [12].
Figure 2: Chemical structure of SEBS and SEBS-g-MA.
Figure 3: Functionalization of sepiolite nanofibers by a two-step procedure.
Figure 4: Dip coating procedure for the formation of the SEBS films.
Figure 5: Thickness gradient (µm/cm) as a function of the velocity gradient (mm/min²) for a SEBS/S1 formulation.
Figure 6: Thickness of the obtained film as a function of the SEBS concentration.
Figure 7: Shape of the samples used for (left) the cyclic tensile and (right) tear strength tests (values in mm, grey areas are placed between clamps).
Figure 8: Loading program for cyclic tensile tests.
Figure 9: Elastic strain energy densities stored and released during load/unload cycles of elastomeric material.
Figure 10: Tear strength and surface dissipated energy.
Figure 11: TGA under nitrogen of S1, S2 and S3 samples, (a) from 25 to 110°C at 10°C/min followed by an isotherm at 110°C for 10 min, (b) from 110 to 900°C at 10°C/min.
Figure 13: X-ray scattering patterns of neat sepiolite (S1), non-grafted modified sepiolite (S2) and SEBS-grafted sepiolite (S3) in the SEBS matrix, with respect to pristine sepiolite.
Figure 14: TEM images of neat sepiolite (S1) and SEBS-grafted sepiolite (S3) in the SEBS matrix.
Figure 15: Typical stress/strain curves for tensile cyclic loading.
Figure 16: Typical tear strength/displacement curves for SEBS formulations.
Figure 17: Summary of mechanical results for different formulations.

Table 1: Composition of the different formulations used for the dip coating process.
Formulation        | SEBS Thermoflex 10H730 (wt%) | Sepiolite S1, S2 or S3 (wt%) | SEBS-g-MA Kraton FG1901X (wt%) | Toluene (wt%)
SEBS               | 15    | -    | -    | 85
SEBS/S1            | 14.85 | 0.15 | -    | 85
SEBS/S2            | 14.85 | 0.15 | -    | 85
SEBS/S2/SEBS-g-MA  | 14.85 | 0.14 | 0.01 | 85
SEBS/S3            | 14.85 | 0.15 | -    | 85

Table 2: Values of VR, SR, stress at ε = 600%, and ultimate stress and strain for the SEBS formulations (columns: SR (%), VR (%), stress at 600% (MPa), ultimate stress (MPa), ultimate strain (%), elastic modulus (MPa)).

Table 3: Values of tear strength and dissipated energy for the SEBS formulations.
Formulation        | Tear strength (N/mm) | Dissipated energy (mJ/mm²)
SEBS               | 3.3 ± 0.1 | 66 ± 1
SEBS/S1            | 4.1 ± 0.5 | 69 ± 10
SEBS/S2            | 3.7 ± 0.2 | 70 ± 5
SEBS/S2/SEBS-g-MA  | 4.0 ± 0.1 | 84 ± 3
SEBS/S3            | 4.2 ± 0.2 | 93 ± 7
SEBS cast          | 2.9 ± 0.1 | 10 ± 2

Acknowledgment: This work was financially supported by the AREVA MELOX (N. Lantheaume) and PIERCAN SAS (D. Guérin) companies. TEM sample preparation and observation were performed at the Centre Technologique des Microstructures, University of Claude Bernard, Lyon 1, France. The authors also thank P. Hangouët, N. Page, G. Chantereau, V. Diaz, T. Dutto and V.B. Nguyen who worked on this project.
01772703
en
[ "spi.signal" ]
2024/03/05 22:32:18
2006
https://hal.science/hal-01772703/file/EUSIPCO2006.pdf
Abdeldjalil Aïssa-El-Bey Karim Abed-Meraim Yves Grenier email: [email protected] ITERATIVE BLIND SOURCE SEPARATION BY DECORRELATION: ALGORITHM AND PERFORMANCE ANALYSIS This paper presents an iterative blind source separation method using second order statistics (SOS) and natural gradient technique. The SOS of observed data is shown to be sufficient for separating mutually uncorrelated sources provided that the considered temporal coherence vectors of the sources are pairwise linearly independent. By applying the natural gradient, an iterative algorithm is derived that has a number of attractive properties including its simplicity and 'easy' generalization to adaptive or convolutive schemes. Asymptotic performance analysis of the proposed method is performed. Several numerical simulations are presented to demonstrate the effectiveness of the proposed method and to validate the theoretical expression of the asymptotic performance index. INTRODUCTION Source separation aims at recovering multiple sources from multiple observations (mixtures) received by a set of linear sensors. The problem is said to be 'blind' when the observations have been linearly mixed by the transfer medium, while having no a priori knowledge of the transfer medium or the sources. Blind source separation (BSS) has applications in several areas, such as communication, speech/audio processing, biomedical engineering, geophysical data processing, etc [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF]. BSS of instantaneous mixtures has attracted a lot of attention due to its many potential applications [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF] and its mathematical tractability that lead to several nice and simple BSS solutions [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF][START_REF] Pham | Blind source separation of instantaneous mixtures of nonstationary sources[END_REF][START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF][START_REF] Cardoso | A blind beamforming for non-gaussian signals[END_REF][START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF]. In this paper, we present SOS based method for blind separation of temporally coherent sources. We first present SOSbased contrast functions for BSS. Then, to achieve BSS, we optimize the considered contrast function using an iterative algorithm based on the relative gradient technique. The generalization of this method to deconvolution and adaptive algorithms is then discussed. Finally, a theoretical analysis of the performance of the method has been derived and validated by simulation results. PROBLEM FORMULATION Assume that m narrow band signals impinge on an array of n ≥ m sensors. The measured array output is a weighted superposition of the signals, corrupted by additive noise, i.e. x(t) = y(t) + η(t) = As(t) + η(t) [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF] where s(t) = [s1(t), • • • , sm(t)] T is the m × 1 complex source vector, η(t) = [η1(t), • • • , ηn(t)] T is the n × 1 Gaussian complex noise vector, A is the n × m full column rank mixing matrix (i.e., n ≥ m), and the superscript T denotes the transpose operator. The source signal vector s(t), is assumed to be a multivariate stationary complex stochastic process. 
In this paper, we consider only the second order statistics and hence the signals si(t), 1 ≤ i ≤ m are assumed to be temporally coherent and mutually uncorrelated, with zero mean and second order moments: S(τ ) def = E (s(t + τ )s (t)) = diag[ρ1(τ ), • • • , ρm(τ )] where ρi(τ ) def = E(si(t + τ )s * i (t)) , the expectation operator is E, and the superscripts * and denote the conjugate of a complex number and the complex conjugate transpose of a vector, respectively. The additive noise η(t) is modeled as a stationary white (temporally but not necessarily spatially) zero-mean complex random process. In that case, the source separation is achieved by decorrelating the signals at different time lags. The purpose of blind source separation is to find a separating matrix, i.e. a m × n matrix such that s(t) = Bx(t) is an estimate of the source signals. Before proceeding, note that complete blind identification of separating matrix B (or the equivalently mixing matrix A) is impossible. The best that can be done is to determine B up to a permutation and scalar multiple of its columns [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], i.e., B is a separating matrix iff: By(t) = PΛs(t) (2) where P is a permutation matrix and Λ a non-singular diagonal matrix. CONTRAST FUNCTION The following theorems serve as the basis for our method for blind separation of stationary sources. We present here separation criteria for the stationary, temporally correlated source signals and their corresponding contrast functions. Let consider first the noiseless case. We have the following result: Theorem 1 Let τ1 < τ2 < • • • < τK be K ≥ 1 time lags, and define ρ i = [ρi(τ1), ρi(τ2), • • • , ρi(τK )] and ρi = [ e(ρ i ), m(ρ i )] where e(x) and m(x) denote the real part and imaginary part of x, respectively. Taking advantage of the indeterminacy, we assume without loss of generality that the sources are scaled such that ρ i = ρi = 1, for all i1 . Then, BSS can be achieved using the output correlation matrices at time lags τ1, τ2 rij(τ k ) = 0 and K k=1 |rii(τ k )| > 0 ( 4 ) for all 1 ≤ i = j ≤ m and k = 1, 2, • • • , K. Note that, if one of the time lags is zero, the result of Theorem 2 holds only under the noiseless assumption. In that case, we can replace the condition K k=1 |rii(τ k )| > 0 by rii(0) > 0, for i = 1, • • • , m. On the other hand, if all the time lags are non-zero and if the noise is temporally white (but can be spatially colored with unknown spatial covariance matrix) then the above result holds without the noiseless assumption. Based on Theorem 2, we can define different objective (contrast) functions for signal decorrelation. In [START_REF] Kawamoto | Blind separation of sources using temporal correlation of the observed signal[END_REF], the following criterion 2 was used G(z) = K k=1 log |diag(Rz(τ k ))| -log |Rz(τ k )| (5) where diag(A) is the diagonal matrix obtained by zeroing the off diagonal entries of A. Another criterion used in [START_REF] Abed-Meraim | A general framework for blind source separation using second order statistics[END_REF] is G(z) = K k=1 1≤i<j≤m [|rij(τ k ) + rji(τ k )| 2 + |rij(τ k ) -rji(τ k )| 2 ] + m i=1 | K k=1 |rii(τ k )| -1| 2 (6) Equations ( 5) and ( 6) are non-negative functions which are zero if and only if Rz(τ k ) = E(z(t + τ k )z (t)) are diagonal for k = 1, • • • , K or equivalently if (4) is met. 
ITERATIVE ALGORITHM The separation criteria we have presented takes the form: B is a separating matrix ⇐⇒ G(z(t)) = 0 (7) where z(t) = Bx(t) and G is a given contrast function. The approach we choose to solving (7) is inspired from [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF]. It is a block technique based on the processing of T received samples and consists of searching the zeros of the sample version of [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF]. Solutions are obtained iteratively in the form: B (p+1) = (I + (p) )B (p) (8) z (p+1) (t) = (I + (p) )z (p) (t) (9) At iteration p, a matrix (p) is determined from a local linearization of G(Bx(t)). It is an approximate Newton technique with the benefit that (p) can be very simply computed 2 In that paper, only the case where τ 1 = 0 was considered. (no Hessian inversion) under the additional assumption that B (p) is close to a separating matrix. This procedure is illustrated as follows: We first consider the noiseless case or eventually the nonzero lag case (i.e. τi = 0 for i = 1, . . . , K). By using (9), we have: r (p+1) ij (τ k ) = r (p) ij (τ k ) + m q=1 * (p) jq r (p) iq (τ k )+ m l=1 (p) il r (p) lj (τ k ) + m l,q=1 (p) il * (p) jq r (p) lq (τ k ) (10) where r (p) ij (τ k ) def = E z (p) i (t + τ k )z * (p) j (t) (11) ≈ 1 T -τ k T -τ k t=1 z (p) i (t + τ k )z * (p) j (t) (12) Under the assumption that B (p) is close to a separating matrix, it follows that | (p) ij | 1 and |r (p) ij (τ k )| 1 for i = j and thus, a first order approximation of r (p+1) ij (τ k ) is given by: r (p+1) ij (τ k ) ≈ r (p) ij (τ k ) + * (p) ji r (p) ii (τ k ) + (p) ij r (p) jj (τ k ) (13) similarly, we have: r (p+1) ji (τ k ) ≈ r (p) ji (τ k ) + * (p) ij r (p) jj (τ k ) + (p) ji r (p) ii (τ k ) (14) From ( 13) and ( 14), we have: r (p+1) ij (τ k ) + r (p+1) ji (τ k ) ≈ 2r (p) jj (τ k ) e( (p) ij ) +2r (p) ii (τ k ) e( (p) ji ) + (r (p) ij (τ k ) + r (p) ji (τ k )) r (p+1) ij (τ k ) -r (p+1) ji (τ k ) ≈ 2r (p) jj (τ k ) m( (p) ij ) -2r (p) ii (τ k ) m( (p) ji ) + (r (p) ij (τ k ) -r (p) ji (τ k )) with  = √ -1. By replacing the previous equation into (6), we obtain the following least squares (LS) minimization problem min h r (p) jj , r (p) ii i E (p) ij + 1 2 (r (p) ij + r (p) ji ), 1 2 (r (p) ij -r (p) ji ) ! where E (p) ij def = 4 e( (p) ij ) m( (p) ij ) e( (p) ji ) -m( (p) ji ) 5 (15) r (p) ij = [r (p) ij (τ1), • • • , r (p) ij (τK )] T (16) A solution to the LS minimization problem is given by: E (p) ij = - h r (p) jj , r (p) ii i # 1 2 (r (p) ij + r (p) ji ), 1 2 (r (p) ij -r (p) ji ) ! ( 17 ) where A # denotes the pseudo-inverse of matrix A. Equations ( 15) and ( 17) provide the explicit expression of (p) ij for i = j. For i = j, the minimization of (6) using the first order approximation leads to: K k=1 r (p) ii (τ k ) 1 + 2 e( (p) ii ) -1 = 0 (18) Without loss of generality, we take advantage of the phase indeterminacy to assume that ii are real-valued and hence e( ii) = ii. Consequently, we obtain: (p) ii = 1 - K k=1 |r (p) ii (τ k )| 2 K k=1 |r (p) ii (τ k )| (19) In the case of real-valued signals, the LS minimization becomes: min H (p) ij e (p) ij + ψ (p) ij where H (p) ij = 1 1 ! ⊗ h r (p) jj , r (p) ii i (20) e (p) ij = h (p) ij , (p) ji i T (21) ψ (p) ij = 4 r (p) ij r (p) ji 5 (22) and ⊗ denotes the Kronecker product. 
A solution to the LS minimization problem is given by: e (p) ij = -H (p)# ij ψ (p) ij (23) Remark: A main advantage of the above algorithm is its flexibility and easy implementation in the adaptive case. Indeed, the adaptive version of this algorithm consists simply in replacing the iteration index p by the time index t and the correlation coefficient r (p) ij (τ k ) by their adaptive estimates r (t) ij (τ k ) = βr (t-1) ij (τ k ) + zi(t)z * j (t -τ k ) if k ≥ 0 or otherwise r (t) ij (τ k ) = βr (t-1) ij (τ k ) + zi(t + τ k )z * j (t) where 0 < β < 1 is a forgetting factor. This algorithm can also be extended to deal with BSS of convolutive mixtures as shown next. GENERALIZATION TO CONVOLUTIVE MIXTURE CASE In the convolutive mixture case, the signal can be modeled by the following equation: x(t) = y(t) + η(t) = L =0 A( )s(t -) + η(t), (24) where A(k) are n × m matrices for ∈ [0, L] representing the impulse response coefficients of the channel. The polynomial matrix A(z) = L =0 A( )z -is assumed to be irreducible (i.e. A(z) is of full column rank for all z). In this section, one will determinate the rational matrix B(z) = B( )z -such that B(z) is a separating matrix, i.e. w(t) = B( )x(t -) = [diag(c1(z) . . . cm(z))] s(t) (25) where c1(z) . . . cm(z) are m given scalar rational functions. To achieve this BSS, we consider a decorrelation criterion : e G(w) = K k=1 1≤i<j≤m |rij(τ k )| 2 + m i=1 | K k=1 |rii(τ k )| -1| 2 (26) so that B(z) is a separating matrix ⇐⇒ e G(w(t)) = 0 ( 27 ) Solutions are obtained iteratively in the form: B (p+1) (z) = I + (p) (z) B (p) (z) (28) w (p+1) (t) = w (p) (t) + 1 =0 (p) ( )w (p) (t -) (29) where (p) (z) def = (p) (0) + (p) (1)z -1 . Similarly to the instantaneous mixture case, a first order approximation of r (p+1) ij (τ k ) is given by: r (p+1) ij (τ k ) ≈ r (p) ij (τ k ) + 1 l=0 * (p) ji ( )r (p) ii (τ k + ) + 1 l =0 (p) ij ( )r (p) jj (τ k -) (30) Replacing ( 30) into (27) leads after straight forward derivation to: min E (p) ij Φ (p) ij E (p) ij + r (p) ij 2 + Φ (p) ji ME * (p) ij + r (p) ji 2 (31) where E (p) ij = [ (p) ij (0) (p) ij (1) * (p) ji (0) * (p) ji (1)] T (32) Φ (p) ij = [φ (p) jj (0) φ (p) jj (-1) φ (p) ii (0) φ (p) ii (1)] (33) φ (p) ii (k) = [r (p) ii (τ1 + k), . . . , r (p) ii (τK + k)] T , k ∈ Z (34) and M is the matrix verifying E (p) ji = ME * (p) ij , i.e. M = 0 1 1 0 ! ⊗ 1 0 0 1 ! (35) Finally, one obtains the solution: P R e E (p) ij m E (p) ij Q S = - 4 Θ (p) ij Θ (p) ji 5 # ν (p) ij ( 36 ) where Θ (p) ij = P R e Φ (p) ij -m Φ (p) ij m Φ (p) ij e Φ (p) ij Q S (37) Θ (p) ji = P R e Φ (p) ji M m Φ (p) ji M m Φ (p) ji M -e Φ (p) ji M Q S (38) and ν (p) ij = h e(r (p ) ij ) T m(r (p) ij ) T e(r (p) ji ) T m(r (p) ji ) T i T (39) For i = j, we take again advantage of the problem indeterminacy to assume without loss of generality that the diagonal entries of (p) ii (0) are real-valued and those of (p) ii (1) are zero. This assumption leads to the (p) ii (0) given by equation (19). ASYMPTOTIC PERFORMANCE In this section, asymptotic (i.e. for large sample sizes) performance analysis results of the proposed method in instantaneous real case is given. We consider the case of instantaneous mixture with i.i.d real-valued sources satisfying, in addition to the identifiability condition k∈Z |ρi(k)| < +∞ for i = 1, . . . , m. The noise is assumed Gaussian with variance σ 2 I. 
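To summarize the algorithm of the two preceding subsections before completing the performance analysis, the following sketch implements one pass of the iteration for real-valued signals, combining the multiplicative update (8)-(9), the diagonal correction (19) and the least-squares solution (23). It is an illustration only (variable names and estimator choices are ours), not the authors' implementation:

    import numpy as np

    def bss_iteration(B, x, lags):
        # One iteration of (8)-(9) for real-valued data: form z = Bx, estimate
        # the lagged correlations (12), solve the per-pair LS problem (20)-(23)
        # for the off-diagonal entries of eps, use (19) for its diagonal entries,
        # and return the updated separating matrix (I + eps) B.
        z = B @ x
        m, T = z.shape
        def r(i, j):
            return np.array([z[i, k:] @ z[j, :T - k] / (T - k) for k in lags])
        eps = np.zeros((m, m))
        for i in range(m):
            s = np.sum(np.abs(r(i, i)))
            eps[i, i] = (1.0 - s) / (2.0 * s)                             # eq. (19)
            for j in range(i + 1, m):
                H = np.vstack(2 * [np.column_stack((r(j, j), r(i, i)))])  # eq. (20)
                psi = np.concatenate((r(i, j), r(j, i)))                  # eq. (22)
                eps[i, j], eps[j, i] = -np.linalg.pinv(H) @ psi           # eq. (23)
        return (np.eye(m) + eps) @ B                                      # eq. (8)

Iterating until the contrast (6) stops decreasing (or until the entries of eps fall below a threshold) yields the separating matrix; in the adaptive variant mentioned in the Remark above, the correlation estimates would instead be updated recursively with the forgetting factor β.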
Assuming that the permutation indeterminacy is P = I, one can write: BA = I + δ (40) and hence, the separation quality is measured using the mean rejection level criterion [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] defined as: Iperf def = p =q E |(BA)pq| 2 ¡ ρq(0) E (|(BA)pp| 2 ) ρp(0) (41) = p =q E |δpq| 2 ¡ ρq(0) ρp(0) (42) Our performance analysis consists in deriving the closedform expression of the asymptotical variance errors: lim T →+∞ T E |δpq| 2 ¡ (43) By using a similar approach to that in [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF] based on the central-limit and continuity theorems, one obtains the following result (the proof is omitted due to space limitation). Theorem 3 Vector δij def = [δij δji] T is asymptotically Gaussian distributed with asymptotic covariance matrix ∆ij def = lim T →+∞ T E δijδ T ij (44) = lim T →+∞ T E δ 2 ij ¡ E (δijδji) E (δjiδij) E δ 2 ji ¡ ! (45) = H # ij ΨijH #T ij ( 46 ) where Hij = 1 1 ! ⊗ ¢ ρ i ρ j £ (47) Ψij = 4 Γ (ij) 11 Γ (ij) 12 Γ (ij) 21 Γ (ij) 22 5 (48) with Γ (ij) 11 (k, k ) = τ ∈Z rii(k + τ )rjj(k + τ ) (49) Γ (ij) 22 (k, k ) = τ ∈Z rii(k + τ )rjj(k + τ ) (50) Γ (ij) 12 (k, k ) = τ ∈Z rii(k + τ )rjj(k -τ ) (51) rii(k) = ρi(k) + δ(k)σ 2 bib T i ( 52 ) and Γ (ij) 21 = Γ (ij)T 12 and bi represents the i th row of B = A # . SIMULATION RESULTS We present here some numerical simulations to evaluate the performance of our algorithm. We consider in our simulation an array of n = 4 sensors receiving two signals in the presence of stationary real temporally white noise. The two source signals are generated by filtering real white Gaussian processes by an AR model of order 1 with coefficient a1 = 0.95 and a2 = 0.50 (except for Figure 4). The sources arrive from the directions θ1 = 30 and θ2 = 45 degree. The number of time lags is K = 5 (except for Figure 5). The signal to noise ratio is defined as SNR = -10 log 10 σ 2 , where σ 2 is the noise variance. The mean rejection level is estimated over 1000 Monte-Carlo runs. Figure 2 shows the mean rejection level against the signal to noise ratio SNR. We compare the empirical performance with theoretical performance for T = 1000 sample size. Figure 3 shows the mean rejection level versus the number of sensors using the theoretical formulation for T = 1000 sam- ple size. We observe that, the greater the number of sensors, the lower the rejection level is in the low SNR case. For high SNRs the number of sensors has negligible effect on the separation performance. Figure 4 shows Iperf versus the spectral shift δa. the spectral shift δa represents the spectral overlap of the two sources. In this figure, the noise is assumed to be spatially white and its level is kept constant at 10dB and 30dB. We let a1 = 0.4 and a2 = a1 + δa. The plot evidences a significant increase in rejection performance by increasing δa. The plots in Figure 5 illustrate the effect of the number of time lags K for different SNRs. In this simulation the sources arrive from the directions θ1 = 10 and θ2 = 13 degree. CONCLUSION This paper presents a blind source separation method for temporally correlated stationary sources. An SOS-based contrast function is introduced and iterative algorithm based on relative gradient technique is proposed to minimize it and perform BSS. Generalization to adaptive and convolutive cases are discussed. 
A theoretical analysis of the asymptotic performance of the method has been derived. Numerical simulations have been performed to demonstrate the usefulness of the method and to support our theoretical performance study.
Figure 1: Mean Rejection Level in dB versus the sample size T for 2 autoregressive sources, 4 sensors and SNR = 40dB.
Figure 2: Mean Rejection Level in dB versus the SNR for 2 autoregressive sources, 4 sensors and T = 1000.
Figure 3: Mean Rejection Level in dB versus the number of sensors n for 2 autoregressive sources and T = 1000.
Figure 4: Mean Rejection Level in dB versus the spectral shift δa for 2 autoregressive sources, 4 sensors and T = 1000.
Figure 5: Mean Rejection Level in dB versus the number of time lags K for 2 autoregressive sources, 4 sensors and T = 1000.
1 We implicitly assume here that ρ i ≠ 0, otherwise the source signal could not be detected (and a fortiori could not be estimated) from the considered set of correlation matrices. This hypothesis will be maintained in the sequel.
01571963
en
[ "math.math-qa", "phys.mphy", "math.math-rt", "math.math-gr" ]
2024/03/05 22:32:18
2020
https://hal.science/hal-01571963/file/LatticesOfHyperrootsV2.pdf
Robert Coquereaux Theta functions for lattices of SU(3) hyper-roots We recall the definition of the hyper-roots that can be associated to modules-categories over fusion categories defined by the choice of a simple Lie group G together with a positive integer k. This definition was proposed in 2000, using another language, by Adrian Ocneanu. If G = SU(2), the obtained hyper-roots coincide with the usual roots for ADE Dynkin diagrams. We consider the associated lattices when G = SU(3) and determine their theta functions in a number of cases; these functions can be expressed as modular forms twisted by appropriate Dirichlet characters. 1 They are called "quantum subgroups" of SU(2) in the latter reference. 2 Here and below, this means that the underlying monoidal category is A k (G), whose definition is briefly recalled at the beginning of section 2.1. 3 Section 4.2, about D + 6 , discusses some properties of the latter; it may have an independent interest. Introduction The ADE correspondence between indecomposable module-categories of type SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF] and simplylaced Dynkin diagrams was first obtained by theoretical physicists in the framework of conformal field theories (classification of modular invariant partition functions for the WZW models of type SU(2), [START_REF] Cappelli | The ADE classification of minimal and A (1) 1 conformal invariant theories[END_REF], [START_REF] Francesco | Conformal field theory[END_REF]). Its relation with subfactors was studied in [START_REF] Ocneanu | Paths on Coxeter diagrams: from Platonic solids and singularities to minimal models and subfactors[END_REF] and it was set in a categorical framework by [START_REF] Kirillov | On q-analog of McKay correspondence and ADE classification of SL2 conformal field theories[END_REF][START_REF] Ostrik | Module categories, weak Hopf algebras and modular invariants[END_REF]. In plain terms, the diagrams encoding the action of the fundamental representation of SU(2) at level k (which is classically 2-dimensional) on the simple objects of the various module-categories existing at that level, are the Dynkin diagrams describing the simplylaced simple Lie groups with Coxeter number k + 2. At a deeper level, there is a correspondence between fusion coefficients of the SU(2) modulecategory described by a Dynkin diagram E and the inner products between all the roots of the simply-laced Lie group associated with the same Dynkin diagram. In the non-ADE cases one can also define the action of an appropriate ring on modules associated with the chosen Dynkin diagrams and still obtain a correspondence between structure coefficients describing this action and inner products between roots -one has only to introduce scaling coefficients in appropriate places. The correspondence relating fusion coefficients for module-categories of type SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF] and inner products between weights and/or roots of root systems was clearly stated (but not much discussed) in [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF]; in a different context, some of its aspects were already present in the article [START_REF] Dorey | Partition Functions, Intertwiners and the Coxeter Element[END_REF]. The correspondence was used and described in some detail in one section of [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF]. 
As observed in [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF], one can start from the SU(2) fusion categories and their modules 1 to recover or define the usual root systems, and associate with each of them a periodic quiver describing, in particular, the inner products between all the roots. It is also observed in the same reference that the construction can be generalized: replacing SU(2) by an arbitrary simple Lie group G leads, for every choice of a module-category E of type 2 G, to a system of "higher roots" that we call "hyper-roots of type G". Usual root systems are therefore hyper-root systems of type SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]. A usual root system gives rise, in particular, to an Euclidean lattice. The same is true for hyper-root systems. Given a lattice, one may consider its theta series whose n th coefficient gives the number of vectors of given norm, or, equivalently, the number of representations of the integer n by the associated quadratic form. Theta functions of root lattices are well known and are usually expressed in terms of appropriate modular forms (see for instance the book [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]). Our purpose, in the present paper, is not to provide a detailed account of the properties of hyper-root systems -this should be done elsewhere -but only to discuss some general features of their theta functions and describe those functions associated with the systems defined by fusion categories of type A k (SU(3)), or their modules, for small values of the (conformal) level k. The structure of the hyper-root lattice of type SU(3) obtained when k = 1 was announced in [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF]: it was there recognized as a scaled version of D + 6 , the so-called "shifted D 6 lattice". We shall recover and comment on this result below 3 but we shall also obtain closed formulae in terms of modular forms, as well as the corresponding series, for lattices of type SU(3) associated with higher values of k. As already stated, the theory of hyper-roots that we use here is due to A. Ocneanu. Since it is poorly documented, we had to incorporate some general discussion based on a material that is, in essence, published in [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF], or available on line [START_REF] Ocneanu | Higher Coxeter systems[END_REF]. A general account of the theory of hyperroots and of other higher analogues of Lie groups concepts, as well as their interpretation in terms of usual representation theory, has been long awaited for, and should appear one day [START_REF] Ocneanu | [END_REF]. Let us stress again the fact that this is not the purpose of the present article. The family of scalar products between SU(3) systems of hyper-roots, called "higher roots" in [START_REF] Ocneanu | Higher Coxeter systems[END_REF], for several choices of E, and using a different language, was obtained long ago [START_REF] Ocneanu | [END_REF] and displayed in several places using 1 beautiful posters. Our purpose, here, which is therefore the original contribution of this article, is to determine, in various cases, one or several convenient Gram matrices for the associated lattices, and to discuss some properties of their theta functions. These results were hitherto apparently unknown, this is why we decided to make them available. 
Most of them were obtained in March 2009, while the author was a guest of the Mathematical Department at the University of Luxembourg, whose hospitality is acknowledged. We hope that this presentation will trigger new ideas and insights. 2 From fusion categories and module categories to lattices of hyper-roots On extended fusion matrices and their periods From now on k denotes a positive integer called the "level" (or conformal level), G a simple 4 , simply connected, compact Lie group, and Lie(G) its complex Lie algebra. We call A k (G) the category of integrable modules of the affine Kac-Moody algebra associated with Lie(G) at level k, see e.g. [START_REF] Kac | Infinite dimensional Lie algebras[END_REF]. It is equivalent (an equivalence 5 of modular tensor categories), [START_REF] Finkelberg | An equivalence of fusion categories[END_REF], [START_REF] Huang | Vertex operator algebras, the Verlinde conjecture, and modular tensor categories[END_REF], [START_REF] Kazhdan | Tensor structures arising from affine Lie algebras, III[END_REF], to a category constructed in terms of representations of the quantum group G q at the root of unity q = exp(iπ/(g∨ + k)), where g∨ is the dual Coxeter number of G (take the quotient of the category of tilting modules by the additive subcategory generated by indecomposable modules of zero quantum dimension). These categories, called fusion categories of type (G, k), play a key role in the Wess-Zumino-Witten models of conformal field theory. A k (G) being monoidal, with a finite number of simple objects denoted m, n, p, ..., we consider the corresponding Grothendieck ring and its structure coefficients, the so-called fusion coefficients N mnp , where m × n = Σ_p N mnp p. They are encoded by fusion matrices N m with matrix elements (N m ) np = N mnp . The fusion category being given, one may consider module-categories 6 E k (G) associated with it. In the following we assume that the chosen module-categories are indecomposable. Of course, one can take for example E k (G) = A k (G). The fusion coefficients F nab characterize the module structure: n × a = Σ_b F nab b, where a, b, ... denote the simple objects of E k (G). They are encoded either by square matrices F n , with matrix elements (F n ) ab = F nab , still called fusion matrices, or by the rectangular matrices 7 τ a , with (τ a ) nb = (F n ) ab . The simple objects of A k (G), or irreps, are labelled by the vertices of the Weyl alcove of G at level k. With G = SU(2), this alcove is the Dynkin diagram A k+1 , vertices are labelled (n), with non-negative integers n ∈ {0, 1, . . . , k}, and the fusion matrices N (n) or F (n) obey the Chebyshev recursion relation
F (n) = F (n-1) F (1) - F (n-2)   (1)
where F (0) is the identity matrix (the weight with Dynkin component (0) is the highest weight of the trivial representation) and F (1) refers to the generator (the fundamental irrep of classical dimension 2). In the general case these matrices still obey recursion relations that depend on the choice of the underlying Lie group G. With G = SU(3), the simple objects (irreps) are labelled by pairs (p, q) of non-negative integers with p + q ≤ k, and the recursion relations read
F (p,q) = F (1,0) F (p-1,q) - F (p-1,q-1) - F (p-2,q+1)   if q ≠ 0,
F (p,0) = F (1,0) F (p-1,0) - F (p-2,1),   (2)
F (0,q) = (F (q,0))^T.
Here F (0,0) is the identity matrix and F (1,0) and F (0,1) are the two generators. Also, F (q,p) = (F (p,q))^T.
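For readers who wish to experiment, recursion (2) is straightforward to implement once F (1,0) is known. The sketch below (Python; all names are ours) treats the case E = A k (SU(3)), for which F (1,0) can be built directly from the level-k truncated tensor product by the fundamental weight, i.e. (1,0) × (p,q) decomposes into (p+1,q), (p-1,q+1) and (p,q-1), keeping only the components that stay inside the alcove; for the other modules of the classification one would instead start from their own fundamental matrix F (1,0) (see below).

    import numpy as np
    from itertools import product

    def alcove(k):
        # Vertices of the SU(3) Weyl alcove at level k: weights (p, q) with p + q <= k.
        return [(p, q) for p, q in product(range(k + 1), repeat=2) if p + q <= k]

    def fundamental_matrix(k):
        # F_(1,0) for E = A_k(SU(3)): action of the fundamental weight on the alcove.
        W = alcove(k)
        idx = {w: i for i, w in enumerate(W)}
        F = np.zeros((len(W), len(W)), dtype=int)
        for (p, q) in W:
            for w in ((p + 1, q), (p - 1, q + 1), (p, q - 1)):
                if w in idx:
                    F[idx[(p, q)], idx[w]] = 1
        return W, F

    def fusion_matrices(k):
        # All F_(p,q) with p + q <= k, obtained from F_(1,0) via recursion (2);
        # terms carrying a negative Dynkin label are zero and are simply skipped.
        W, F10 = fundamental_matrix(k)
        F = {(0, 0): np.eye(len(W), dtype=int)}
        get = lambda p, q: F[(p, q)] if (p >= 0 and q >= 0) else 0
        for n in range(1, k + 1):
            for p in range(n, -1, -1):
                q = n - p
                if p == 0:
                    F[(0, q)] = F[(q, 0)].T
                elif q == 0:
                    F[(p, 0)] = F10 @ get(p - 1, 0) - get(p - 2, 1)
                else:
                    F[(p, q)] = F10 @ get(p - 1, q) - get(p - 1, q - 1) - get(p - 2, q + 1)
        return W, F

    # Quick check at level 2: F_(q,p) is the transpose of F_(p,q) and all entries
    # are non-negative, as stated in the text.
    W2, F2 = fusion_matrices(2)
    assert all((F2[(q, p)] == F2[(p, q)].T).all() and (F2[(p, q)] >= 0).all()
               for (p, q) in F2)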
Expressions of the fundamental fusion matrices F [START_REF] Cappelli | The ADE classification of minimal and A (1) 1 conformal invariant theories[END_REF]0) , for all the modules E k (G) considered in this paper are recalled in the appendix (sec. 6.1) using the weight ordering (p 1 , q 1 ) < (p 2 , q 2 ) if p 1 + q 1 < p 2 + q 2 or if q 1 < q 2 and p 1 + q 1 = p 2 + q 2 . In some applications one sets to zero the fusion matrices whose Dynkin labels do not belong to the chosen Weyl alcove. This is not what we do here. On the contrary, the idea is to use the same recursion relations to extend the definition of the matrices F n at level k from the Weyl alcove to the fundamental Weyl chamber of G (cone of dominant weights) and to use signed reflections with respect to the hyperplanes of the affine Weyl lattice in order to extend their definition to arbitrary arguments n ∈ Λ, the weight lattice of G. By so doing one obtains an infinite family of matrices F n that we shall still (abusively) call "fusion matrices", and for which we keep the same notations, although their elements can be of both signs. It is also useful to shift (translation by the Weyl vector) the labelling index of the these matrices to the origin of the weight lattice; in other words, for n ∈ Λ, using multi-indices, we set {n} = (n -1), where the use of parenthesis refers to the usual Dynkin labels (we hope that this brace notation will not confuse the reader -see examples below). The following results about SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF] and SU [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF] are known and belong to the folklore. If G = SU(2) the terms F {n} with 1 ≤ n ≤ k + 1 are the usual fusion matrices at level k, they have non-negative integer matrix elements, F {1} = F (0) is the identity and F {0} = F (-1) is the zero matrix; more generally the terms F {n} with n = 0 mod N , where N = k + 2, vanish. Matrices F {N +m} = -F {N -m} have non-positive integer matrix elements for 1 ≤ m ≤ N -1. The F sequence is periodic of period 2N and the reflection symmetries (Weyl mirrors), with sign, are centered in position {n} = 0 mod N . Notice that F {2N -1} = -F {1} = -l 1. If G = SU(3), we set F {p,q} = F (p-1,q-1) , so that F {0,0} is the zero matrix and F {1,1} = F (0,0) is the identity (the latter corresponding to the weight with components (0, 0) in the Dynkin basis, i.e., to the highest weight of the trivial representation). One has F {p,q} = 0 whenever p = 0 mod N , q = 0 mod N or p + q = 0 mod N . One also gets immediately the following equalities: F {p+N,q} = (P.F ) {p,q} , F {p,q+N } = (P 2 .F ) {p,q} where P = F {N -2,1} is a generator of Z 3 (with P 3 = 1) acting by rotation on the fusion graph of A k (SU (3)) and F {p+3N,q} = F {p,q+3N } = F {p+N,q+N } = F {p,q} . The sequence F {p,q} is periodic of period 3N in each of the variables p and q but it is completely characterized by the values that it takes in a rhombus (N, N ) with N 2 vertices; for this reason, this rhombus will be called periodicity cell, or periodicity rhombus. We have reflection symmetries (with sign) with respect to the lines {p} = 0 mod N , {q} = 0 mod N and {p + q} = 0 mod N . 
The F matrices labelled by vertices belonging to the Weyl alcove (which can be strictly included in the first half of a periodicity rhombus) have non-negative integer matrix elements; those with indices belonging to the other half of the inside of the rhombus have non-positive entries, those with vertices belonging to the walls of the Weyl chamber or to the second diagonal of the rhombus vanish, and the whole structure is periodic. The Weyl group action on the alcove 8 and the affine SU(3) lattice at level k = 2 are displayed in figure 1, left. More generally, for a simple Lie group G taken 9 at level k, the obtained periodicity cell, once F matrices have been appropriately extended to the whole weight lattice, is a parallelotope D with η N^{r_G} vertices, where N = k + g, g being the Coxeter number of G and r G its rank, and where the value of η, a small integer, depends on the symmetries of the Dynkin diagram of G. We saw that η = 2 for SU(2) but η = 1 for SU(3). The integer N is sometimes called the altitude of the module [START_REF] Francesco | SU(N) lattice integrable models associated with graphs[END_REF], or generalized Coxeter number (it coincides with the Coxeter number of the group SU(N)).
Figure 1: SU(3) at level 2. Left: the alcove (lower red triangle), its images under the Weyl group, and the periodicity rhombus. Right: the function τ a on R ∨ for a the trivial irrep, with highest weight (0, 0); notice that reflection across the drawn diagonal induces a sign flip of the values of F n or of the function τ a . Ideally, these figures, like those that follow, should be magnified on a terminal device.
On periodic essential matrices and the ribbon of hyper-roots The ribbon R ∨ Given a module-category E k (G) over A k (G) (the former, that we shall just call E, if no confusion arises, using a notation that will also denote the set of isomorphism classes of its simple objects, can be chosen equal to the latter, but it is good to keep the distinction in mind), we defined, for each simple object a of E k (G), an essential matrix τ a , with elements (τ a ) n,b = F n,a,b , where n and b refer respectively to the simple objects of A k (G) and of E. Since we have extended the definition of the fusion matrices F n to allow arguments n belonging to the weight lattice of G, by using recursion relations, symmetries, and periodicity, we can do the same for the τ 's, keeping the same notation: the indices a, b of (τ a ) nb still refer to simple objects of E but the index n labels weights of G. The infinite matrices τ a can be thought of as rectangular, with columns indexed by the elements of E (a finite number) and lines indexed by the weights of Λ, the weight lattice of G. For every choice of a ∈ E, τ a is therefore a periodic, integer-valued function on Λ × E. Actually, in many cases the definition domain of τ a can be further restricted.
Indeed, there are many modules E k (G) that have a non trivial grading with respect to the center Z of G; in those cases, not only the weights of G, its irreducible representations, the simple objects of A k (G), but also the simple objects of E k (G), have a well defined grading (denoted ∂) with respect to Z, and the module structure is compatible with this grading: matrix elements of τ a in position (n, b) will automatically vanish if ∂n + ∂a = ∂b. Unless stated otherwise we shall assume in the rest of the paper that we are in this situation. The function τ a is periodic, integer-valued on Λ × Z E, and it is specified (see figure 1, right) by the values that it takes on the finite set R ∨ = D × Z E where D is the period parallelotope. The set R ∨ , a finite rectangular table made periodic, may be thought as a closed ribbon 10 . For most choices of E, in particular if one takes E = A k (G), the group Z acts non trivially and R ∨ has r E |D|/|Z| elements, where the rank r E is the number of simple objects of E, and |D| = ηN r G . The elements of R ∨ will be called (restricted 11 ) hyper-roots of type G defined by the module E k (G). The choice of a fundamental irrep π of G, with the constraint that it should exist at level k (so that π defines a particular non-trivial simple object of A k (G)) allows one to consider E as a graph or, rather, to associate with E, a graph denoted by the same symbol, once π is chosen once and for all : it is the graph of multiplication by π, sometimes called fusion graph, representation graph, nimrep graph, or McKay graph associated with π. If π is complex, like the fundamental representation(s) of SU(3), edges of E are oriented. If π is self-conjugate, like the fundamental representation of SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF], or like the antisymmetric square of the vector representation of SU(4), edges carry both orientations and can be considered as non-oriented. In general E is actually a quiver since it is a directed graph where loops and multiple arrows between two vertices are allowed. For any choice of a fundamental irrep π of G existing at level k, the set E, and therefore the ribbon R ∨ as well, become quivers (there is an edge from one vertex of R ∨ to another if there are edges between their respective two projections in E and in Λ). The definition of R ∨ as a set of vertices does not depend on the choice of π. Figure 2 displays the first few edges (orange arrows) between the vertices of the bottom left corner of the R ∨ quiver for E = A 2 (SU(3)); since this is essentially a cartesian product 12 of two multigraphs (red and blue arrows in the same picture) we shall no longer displays the edges of R ∨ in subsequent illustrations. If SU(3) is replaced by SU(2), one obtains in the same way the quivers of roots for all simple Lie groups; several examples of this construction in the case of usual roots, for instance the quiver of roots of E 6 , can be found in [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF]. 
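Since we assume a non-trivial grading in what follows, it may help to see it explicitly in the simplest case E = A k : the simple objects are then the alcove weights themselves and their Z 3 grading is the triality (p - q) mod 3. The short check below (reusing the helper functions given after recursion (2) above; an illustrative sketch with our own naming and sign conventions) confirms that every edge of the fusion graph of F (1,0) shifts the grading by the triality of the fundamental weight, which is the compatibility ∂n + ∂a = ∂b in this case.

    def triality(p, q):
        # Z_3 grading of the SU(3) weight (p, q).
        return (p - q) % 3

    W, F10 = fundamental_matrix(2)   # helper defined after recursion (2)
    assert all((triality(*W[j]) - triality(*W[i])) % 3 == triality(1, 0)
               for i in range(len(W)) for j in range(len(W)) if F10[i, j])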
• • • • • • • • • • • • • • • • • • • • • • • • Figure 2: Edges (orange arrows) of the quiver R ∨ for A 2 (SU(3)), π being one of the two fundamental representations: start from an allowed vertex (a blue star) of a subgraph E (a blue triangle), go to the vertex, or vertices, located in the same position in the neighboring subgraphs by using the reversed edges of Λ, i.e., following the red arrows backwards, then follow the blue arrows on the latter subgraphs. At the level of sets it is clear that the ribbon R ∨ is in bijection with the root system defined by the chosen Dynkin diagram since the number of roots is indeed equal to r E × N . Let us briefly mention why R ∨ = Z 2N × Z 2 E can be identified with the periodic quiver of roots. As graphs, the Dynkin diagram E encodes the multiplication of the simple objects of E k (SU(2)) by the fundamental irrep of SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]. The Chebyshev recursion relation 1 now reads F {2} F {p} = F {p-1} + F {p+1} , but F {2} is also the adjacency matrix of the graph E since {2} = (1) denotes the fundamental irrep of SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]. In other words, the functions τ a are such that the sum of neighbors taken vertically (i.e., along Z 2N ) equals the sum of neighbors taken horizontally (i.e., along E) on the graph R ∨ . This can be written as a harmonicity property: define the laplacian ∆ E on E as the sum of neighbors, and similarly for the laplacian ∆ Λ on Z 2N ; a function f such that ∆ Λ f = ∆ E f is called harmonic. The functions τ a are therefore Z-valued and harmonic. As the vertices of Dynkin diagrams also label fundamental weights, one has a function τ a on R ∨ for every fundamental weight, therefore weights define functions that are harmonic. If one thinks of a root as a point (a Dirac measure) of the ribbon, one can show (see section 3 of [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF], in relation with [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF]) that (τ a ) n,b is the inner product between the fundamental weight a and the root localized at the point (n, b). This discussion justifies the terminology "ribbon of hyper-roots" since when G = SU(2), the period is 2N , the period parallelotope is the interval D = Z 2N , and the ribbon R ∨ = Z 2N × Z 2 E can be identified with the periodic quiver of roots. From another point of view, roots are weights, therefore roots also define (particular) Z-valued harmonic functions on R ∨ , and since α, α = 2, the point where a root α is localized on the ribbon, as a Dirac measure, is obtained from the collection of inner products α, β between α and all the roots as the (unique) point where this value is equal to 2. Let α = (m, a) and β = (n, b) two vertices of R ∨ i.e., two roots. One finds: The case G = SU(2) When G is SU(2), the module-categories E k (SU( 2 α, β = F (m-n),a,b + F (n-m),a,b = F (m-n),a,b -F (m-n-2),a,b (3) where the fusion coefficients F (n)ab have been extended by periodicity as explained in section 2.1. 
In terms of fusion matrices F {n} with shifted labels and matrix elements (F {n} ) (a,b) = F {n},a,b = F (n-1),a,b , this relation reads < (m, a), (n, b) >= (F {m-n+1} -F {m-n-1} ) (a,b) (4) Although expressed in terms of dimensions of spaces of essential paths on graphs (a concept that we shall not use in the present paper), equation 3 was explicitly written in section 1.7 of [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF]. What was then proposed, in this last reference, is to use this expression as a starting point in order to define the inner product between all the vertices of the ribbon R ∨ , i.e., all the roots, without relying on the existence of a special basis (the simple roots) in which the inner products would be given by elements of the usual Cartan matrix, and finally to consider higher generalizations where SU(2) is replaced by another simple or semi-simple Lie group G. SU(3) and the general case Again, up to notations and a different terminology, the content of the present section is already present in or can be inferred from reference [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF], pp 9-10. The vector space of complex valued functions on the set of hyper-roots, the ribbon, is C |R ∨ | . It admits a canonical basis whose elements are identified with characteristic functions δ α (Dirac measures located at the points α ∈ R ∨ ). An Euclidean structure is defined on this space by declaring that these Dirac masses are orthonormal. The elements f of the subspace of harmonic functions are such 13 that ∆ Λ f = ∆ E f , where ∆ Λ and ∆ E respectively denote the laplacian on the weight lattice Λ of G and the laplacian on the graph E; in order to define the latter, one should also, in principle, select a fundamental irrep π of G existing at the chosen level, but this choice will be irrelevant for the study of the case G = SU(3) that we shall investigate later in more detail. In the classical situation weights are harmonic and should have integral inner products with roots, therefore one defines hyper-weights as Z-valued functions that are harmonic on the ribbon. Hyper-roots can be thought either as points α of the ribbon (Dirac masses), or as particular hyperweights. A point α of R ∨ specifies a harmonic function, also denoted α, defined as the orthonormal projection of the Dirac measure δ α ∈ C |R ∨ | on the subspace of harmonic functions. One finds [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF] that its value on β ∈ R ∨ , denoted < α, β >, is explicitly given, if α = (m, a) and β = (n, b), by < α, β >= w∈W (w)F m-n+wρ-ρ,a,b (5) The above expression generalizes the one previously obtained for G = SU [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF], where W = S One can take the latter expression as a definition of the inner product and extend , by linearity to the linear span of hyper-roots. One checks that it defines a positive definite 14 inner product and therefore an Euclidean structure on the space of hyper-roots. This euclidean space will be denoted C. Notice that15 < α, α >= |W| for all hyper-roots α. Terminological conventions: elements of the hyper-root lattice L, the Z span of hyper-roots, are "hyper-root vectors" and the elements of the dual lattice L , are "hyper-weight vectors". If G = SU(3), the Weyl group is S 3 and the inner product of two hyper-roots is obtained from equation 5, or 6, as the sum of six fusion coefficients. 
Writing α = (m, a) = ((m 1 , m 2 ), a), β = (n, b) = ((n 1 , n 2 ), b), setting λ 1 = m 1 -n 1 , λ 2 = m 2 -n 2 , and using shifted labels, we obtain < α, β > = (7) F {λ 1 +1,λ 2 +1} + F {λ 1 -2,λ 2 +1} + F {λ 1 +1,λ 2 -2)} -F {λ 1 -1,λ 2 -1} -F {λ 1 -1,λ 2 +2} -F {λ 1 +2,λ 2 -1} (a,b) 2.2.4 From the set R ∨ to the set R Let α be a point of R ∨ , this defines a hyper-root, also called α, as a vector of the euclidean space C. Obviously, its negative -α is another vector of C. For usual roots, i.e., SU(2) hyper-roots, the opposite of a root is a root, as it is well known. However, for SU(3) hyper-roots, we can see, using the definition of the period parallelotope D, that -α does not correspond to any vertex of R ∨ . This feature is not convenient. For all purposes it is useful to generalize the previous definitions, keeping R = R ∨ for SU(2) but setting R = R ∨ ∪ -R ∨ for SU(3), then |R| = 2|R ∨ |. The opposite of a hyper-root (an element of R) is then always a hyper-root. For modules E endowed with a non trivial grading by the center Z of G, we already know that: |R ∨ | = r E |D| |Z| = r E N r G |Z| η (8) Since we only study G =SU(3) hyper-roots in this paper, we shall not give more details about R versus R ∨ for a general G, nevertheless we notice, in view of the last comment of section 2.1 on the period parallelotope D, its size being ηN r G with η = 1 for SU(2) and η = 2 for SU(3), and from the above definition of R, that we obtain in both cases a number of hyper-roots given by the same general expression: |R| = 2 r E N r G |Z| (9) which, more specifically, reads |R| = r E N for G =SU(2), as it should, and reads for G = SU(3), |R| = 2|R ∨ | = 2 3 r E N 2 (10) since r G = 2 and |Z| = 3; we have also N = k + 3. Moreover if one chooses E = A k one has r E = (N -2)(N -1)/2, then |R| = 2|R ∨ | = (N -2)(N -1)N 2 /3 (11) From now on, we shall usually not mention R ∨ , the set of restricted hyper-roots that we had to introduce in the first place, since R will be used most of the time. Dimension of the space of hyper-roots The dimension r = dim C of the space of hyper-roots associated with E k (G), in those cases where the center Z acts non-trivially on the set of vertices of E k (G), is16 r = dim C = r E |W| |Z| ( 12 ) where W is the Weyl group associated with the simple Lie group G. The term |W|/|Z| cancels out for G = SU(2) since W and Z are both isomorphic with Z 2 . One then recovers the rank r = r E given by the number of vertices of the chosen Dynkin diagram. For G = SU(3), W = S 3 , |W| = 3!, and for modules E with non trivial triality (a property that holds for all the examples that we shall considered below) we have Z = Z 3 therefore r = 2 r E . Moreover, If one chooses E = A k , then r = (N -2)(N -1). More generally for G = SU(n), W = S n and for modules E endowed with a non-trivial grading by the center Z = Z n , one has r = r E n!/n = (n -1)! r E . Remark : The Weyl group has order |W| = r G ! Π(θ α ) |Z| where Π(θ α ) is the product of the components of the highest root of G in the basis of simple roots 17 . One can therefore rewrite equation 12 as r = r E r G ! Π(θ α ) (13) 2.4 Example: E = A k (SU(3)) At level k we have N = k +3, and the values of |R ∨ |, |R|, r E and r, in terms of N , were given before. 
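These counting formulae are easy to script and will serve as a sanity check in the examples below (a small sketch in our own notation):

    def hyperroot_counts(k):
        # Counts for E = A_k(SU(3)), cf. eqs. (10)-(13), with N = k + 3.
        N = k + 3
        r_E = (N - 2) * (N - 1) // 2            # number of simple objects of A_k
        return {"N": N, "r_E": r_E, "r": 2 * r_E,
                "strict hyper-roots": (N - 2) * (N - 1) * N ** 2 // 6,
                "hyper-roots": (N - 2) * (N - 1) * N ** 2 // 3}

    # hyperroot_counts(2) -> {'N': 5, 'r_E': 6, 'r': 12,
    #                         'strict hyper-roots': 50, 'hyper-roots': 100}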
Taking for instance k = 2 there are 100 hyper-roots, 50 strict hyper-roots, r E = 6, r = 12, and we display in figure 3 one of the (hyper) root hexagons associated with some chosen hyper-root that sits where the integer 6 (inner product with itself) appears18 , in the central fusion diagram; the inner products of that hyper-root with all the others are given by the integers that appear in the figure . The small circles with no integer marks are forbidden vertices (points where the constraint ∂n + ∂a = ∂b, see section 2.2.1, is not obeyed). Notice that N = 5, so that drawing hexagons with edges of length 5 would be sufficient to display all the inner products and their Weyl symmetries, but we drew an hexagon with edges of length N + 1 = 6 in order to illustrate the periodicity properties of the chosen hyper-root. The harmonicity property (∆ Λ f = ∆ E f , see section 2.2.3) of the chosen hyper-root, call it f , can be easily checked in this hexagon, see figure 4: consider for instance the point marked X, it belongs to a particular fusion diagram and there are two oriented edges (arrows) ending on X in the same diagram, they start from vertices where f have values 2 and 2, so that (∆ E f )(X) = 2 + 2 = 4; there are also oriented edges in the hexagon (or in the SU(3) weight lattice) that connect the fusion diagrams themselves, they follow the three orientations given in the figure, and those ending in the fusion diagram where X is located therefore define three arrows with head X, starting from three neighbouring fusion diagrams, from vertices where f has values 6, -1, -1, so that (∆ Λ f )(X) = 6 -1 -1 = 4. For illustration and comparison with the above SU(3) harmonicity, we display on figure 5 the corresponding property for the simpler case A k (SU( 2)) with k = 3; the fusion graph coincides therefore with the Dynkin diagram A 4 , and the SU(2) harmonicity property for this quiver of roots, which is the SU(5) quiver, can be checked for instance at the point marked X on figure 5. This SU(2) harmonicity property, which holds for all simple Lie groups, is also illustrated (in particular for the roots of several exceptional Lie groups) in one of the sections of ref [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF]. Absolute and relative hexagons. The absolute hexagon associated with a, a vertex of the fusion diagram, is a hexagonal window, with edges of length N (or N + 1 if the hexagon is extended, as in figures 3, 4), displaying the inner product between some hyper-root α = (m, a), where m is an element of the weight lattice (or of the periodicity rhombus), and all others, using periodicity, the hexagon being chosen in such a way that the fusion diagram to which a belongs is located at the center of the hexagon (also the origin of SU(3) weight lattice). This is the case in figures 3, 4. There are of course r E = 6 such absolute hexagons if k = 2. Relative hexagons are also hexagonal windows displaying the inner products between one chosen hyper-root α = (m, a) and all others, they are not usually centered (in the sense that α does not always belong to the fusion diagram located at the center of the hexagon), but they are in good relative positions: the vertex a belongs to a fusion diagram itself located at the weight m of the SU(3) weight lattice. 
By definition there are as many relative hexagons as there are hyper-roots, and if one makes the choice of a basis one can in particular consider the relative hexagons associated with these basis elements. Relative hexagons are still symmetric (Weyl axes) with respect to the position of the chosen hyper-root, the position of the fusion diagram to which the vertex labelled 6 belongs, since < α, α > = 6, but this diagram is not necessarily located at the center of the hexagon. Since they are in good relative positions, relative hexagons can be added (pointwise) or multiplied by scalars; the resulting hexagons display arbitrary (not necessarily integral) hyper-weights since they automatically obey the required harmonicity properties. Still with the same example k = 2, we give in figure 9 the relative root hexagons associated with the choice of a particular basis of hyper-roots (12 of them in this particular case). Position of hyper-roots and periodicity rhombus. Rather than displaying root hexagons, it is enough, albeit slightly less convenient, to display the periodicity rhombus. To each hyper-root one can associate such a rhombus. We still consider the example k = 2 and display on figure 6 the positions of the 50 restricted hyper-roots and, on figure 7, one of the rhombuses associated with some chosen hyper-root (as usual the latter sits where the "6" is). The 12 hyper-roots whose positions are included in the triangle located in the left corner (brown triangle in the picture) define, up to some chosen ordering, a basis of the space of hyper-roots; there is nothing special about this basis (we did not introduce any notion of "simple hyper-roots") but it will be used later to define a particular Gram matrix for the hyper-root lattice, and we give in figure 9 the 12 relative hexagons associated with this basis. 3 Needed tools for lattices of hyper-roots of type SU(3) Root systems defined by Dynkin diagrams (or simple Lie groups) are well known, and the corresponding lattices, which are just, in our framework, hyper-root lattices of type G = SU(2), are described in many places. Their associated lattices, as well as their theta functions, can be found in the literature, for instance in [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF].
In most cases they are explicitly given in terms of combinations of elliptic theta functions, but they could also be obtained from a method that uses the theory of modular forms twisted by appropriate Dirichlet characters (although this is not usually done). This latter method, that we shall review in section 3.4, will be used to find explicit expressions for the lattices of hyper-roots of type G = SU(3). On the SU(3) classification (reminders) A given module E over A k (SU(3)), and consequently a given lattice of hyper-roots, is fully specified by one of the two fundamental fusion matrices of SU(3) or equivalently by the fusion graph describing the action of one fundamental representation of SU(3) on the chosen module. The matrices F (1,0) that we need are recalled in appendix 6.1, their associated fusion graphs are displayed alongside the headings of section 4. The classification of modules E is known, properties of the members of the different series and of the seven exceptional cases of the SU(3) family, together with their fusion graphs, are discussed in a number of places, see [START_REF] Francesco | SU(N) lattice integrable models associated with graphs[END_REF], [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF], [START_REF] Francesco | Conformal field theory[END_REF], [START_REF] Coquereaux | Comments about quantum symmetries of SU(3) graphs[END_REF], [START_REF] Coquereaux | Orders and dimensions for sl2 or sl3 module-categories and boundary conformal field theories on a torus[END_REF], [START_REF] Evans | Ocneanu cells and Boltzmann weights for the SU(3) ADE graphs[END_REF], see also [START_REF] Coquereaux | Fusion graphs[END_REF]. In the following we shall mostly concentrate on the series E = A k (SU(3)) and call L k the corresponding lattices of hyper-roots; we consider explicitly the cases k = 1, 2, 3, 4. We also give explicit results for theta functions associated with the modules D 3 , D 6 , and the three exceptional cases E 5 , E 9 and E 21 ; these modules have a non-trivial Z 3 grading (our previous general discussion should be slightly modified for modules that do not have a non-trivial Z 3 grading, this is why we do not give explicit results for such cases but the method would be identical). 
Only A k , D k with k = 0 mod 3 and the three exceptional cases just mentioned have "self-fusion", or "are flat" (in operator algebra parlance); this notion will not be used in this article but these are the examples that we explicitly consider here. Remember that, apart from the A k themselves, the following modules have a non-trivial Z 3 grading: the D k series with k = 0 mod 3, the D k series, the exceptional E 5 , E 9 and E 21 , the twisted D cases D t 9 and its own module D t 9 (these last two are often also flagged as "exceptional", the first being an analog of the E 7 of the SU(2) family, which indeed appears as a twisted D 10 = D 16 (SU(2))), and the exceptional module M 9 (which is also a module over E 9 ). The other members of the SU(3) classification, namely the cases D k with k = 1 or 2 mod 3, A k , and the exceptional module M 5 over E 5 , have trivial Z 3 gradings.
Figure 4: Harmonicity property for one of the hyper-root hexagons of A 2 (SU(3)), i.e., SU(3) at level 2. The chosen root is located where the "6" stands.
Figure 5: The corresponding harmonicity check for A 3 (SU(2)) [START_REF] Coquereaux | Comments about quantum symmetries of SU(3) graphs[END_REF]; the chosen root is located where the "2" stands.
Figure 6: SU(3) at level 2: position of all hyper-roots in a periodicity rhombus. The 12 elements belonging to the triangle traced out in the left bottom can be used, after choosing some ordering, to build a basis B 1 of the space of hyper-roots. This basis is used to write the Gram matrix given in equation 14.
As usual, the subindex k in the above script capital letters indicates the existence of a module structure over A k (for instance D 4 is a module over A 4 ).
In the previous sections the notation E k was used in a generic way, but from now on we use specific notations to denote the modulecategories of the SU(3) classification (all of them appear in the above lists), and we therefore keep the "E" notation to refer to the three exceptional cases E 5 , E 9 and E 21 . We hope that there should be no confusion. Remember also that the Dynkin notation used for the SU(2) classification does not agree with the above convention: the subindex of a Dynkin diagrams refers to the number of simple objects, whereas the subindex of A (or D, or E, etc. ) used in higher classifications usually refers to the level. For instance one has A 11 = A 10 (SU(2)), D 7 = D 10 (SU(2)), E 6 = E 10 (SU(2)). Choice of a basis There are many ways of choosing a basis for a lattice L. To every choice is associated a fundamental parallelotope and a Gram matrix A (the matrix of inner products in this basis). Two Gram matrices A and A give rise (or come from) congruent lattices when they are integrally equivalent, i.e., when there exists a matrix U , with integer entries and determinant ±1, such that A = U AU T . It is clear that the discriminant is an invariant for integral equivalence. It is also the square volume of a fundamental parallelotope and it is equal to the order of the dual quotient L /L where L is the dual lattice. It is however sometimes useful to loosen a little bit the notion of equivalence and use rational equivalence rather than integral equivalence; this amounts to identify lattices associated with rationally equivalent Gram matrices (the matrix U has rational elements, and its determinant is a non -zero rational number). In the case of the SU(2) hyper-root systems (i.e., root systems in the usual sense), one may choose for Gram matrix A the Cartan matrix corresponding to a given Coxeter-Dynkin graph E, namely A = 2 -F (1) where F (1) is the fundamental fusion matrix of the module defined by E. For usual root lattices, the notion of Cartan matrix (associated with a basis of simple positive roots) is unique, but one can find for these lattices other integral basis and other Gram matrices than those associated with simple positive roots. For lattices of hyper-roots we did not define fundamental hyper-weights and did not define simple hyper-roots either: the notion of "Cartan matrix" is not available. However we can certainly consider several interesting choices for the Gram matrices. In what follows we shall usually present only one Gram matrix, called A, since it defines the lattice up to integral equivalence, but one should keep in mind that other choices are possible. Remark. Warning: a naive generalization of the equation A = 2 -F (1) that relates the adjacency matrix of Dynkin diagrams, and therefore also, in the simply-laced case, the fundamental fusion matrices of the SU(2) modules to the Cartan matrix A and to the usual Lie group root lattices, suggests, in the case of SU(3), to replace the Cartan matrix A by 6 -(F (1,0) + F (0,1) ) i.e., using the fundamental fusion matrices for SU(3) modules (nimreps). However the lattices obtained from this naive choice 19 of A do not correspond to the lattices of hyper-roots considered in the present paper; it is already clear that the dimensions do not match: the rank of the lattice of hyper-roots associated with E k (SU(3)) is 2r E whereas it would be only equal to r E for the above naive choice. Basis B 1 . 
Its 2r E elements (assuming k > 0) belong to the bottom left corner of the SU(3) period parallelogram, more precisely, we choose those hyper-roots located in the admissible vertices of the six fusion graphs sitting in positions {{0, 0}, {0, 1}, {1, 0}, {1, 1}, {2, 0}, {0, 2}} of the weight lattice; one checks that this indeeds determines a basis which is fully specified once an ordering has been chosen. This basis corresponds to the highlighted triangle in figure 6, see also figure 8. Other basis. Many other convenient basis can be chosen, for instance B 2 and B 3 , respectively associated with the admissible vertices belonging to the fusion graphs located in positions {{1, 1}, {2, 1}, {1, 2}, {3, 1}, {2, 2}, {1, 3}} and {{0, 0}, {1, 0}, {0, 1}, {N -1, N -2}, {N -2, N - 1}, {N -1, N -1}}, with N = k + 3. Our choice. With the exception of the lattice L 1 where, for illustration purposes, we shall present two equivalent Gram matrices respectively associated with the basis B 1 and B 2 , the Gram matrices A that will be given later are obtained from equation 8 using the basis B 1 . • • 1 -1 • • • • 1 • • 2 • • • 0 • 2 • • 0 • • • 2 • 2 • • 0 0 • • • 2 • 2 • • 0 • • 0 6 • • • • 0 • • 2 • • • 0 • 2 • • Figure 8: SU(3) at level 3: the basis B 1 in the periodicity rhombus (20 positions marked with integers). We display the scalar product between the hyper-root marked 6 and all other basis elements (these values appear on line 5 of the Gram matrix A given in equation 6.2). 3.3 Summary of the procedure. • Choose a module E over A k (SU(3)), for instance A k itself. • From the fundamental fusion matrix F (10) of the chosen module given in Appendix 6.1, calculate the other fusion matrices F n , for instance using the SU(3) recurrence relation. • Extend the fusion matrices to the weight lattice of SU(3), using symmetries and periodicity. • It is useful to build the periodic essential matrices τ a , and not only the F n , in particular if the module E is not A k itself. • Using equation 8 one can determine a table A big of the |R| × |R| scalar products between all the hyper-roots (or only between those of R ∨ ). The matrix A big has rank r = 2r E < |R| = (k + 1)(k + 2)(k + 3) 2 /3. • Select a family (α i ) of r independent hyper-roots (i.e., choose a basis) and call A the r × r restriction of the previous table A big to the chosen basis. This A will be a Gram matrix for the lattice of hyper-roots. However A big can be huge. It is shorter to proceed as follows: use one of the hyper-root basis (for instance B 1 ) described previously, and determine the corresponding A matrix by calculating only the r × r inner products between its basis elements. One ends up with a Gram matrix A that is, of course, basis dependent. The rest of the discussion is standard in the sense that it mimics what is done for usual roots and weights. Note: the above steps could also be followed in that case, just replacing SU(3) by SU(2). • The choice of A determines a basis (α i ) of hyper-roots that are such that < α i , α j >= A ij . Call K = A -1 the inverse of A and (ω i ) the dual basis of (α i ), then < ω i , ω j >= K ij and < α i , ω j >= δ ij . The family of vectors (ω i ) is, by definition, the hyper-weight basis associated with the hyper-root basis (α i ) -there is no need to introduce co-roots or co-weights, since, for the systems considered here, all hyper-roots have the same norm, equal to 6. Warning: the indices i, j . . . 
of (α i ) or (ω i ) run from 1 to r = 2r E whereas the indices a, b of (τ a ) refer to the irreps of E and therefore run only from 1 to r E . In particular essential matrices τ a and elements ω i of the hyper-weight basis are distinct quantities. Arbitrary linear combinations of the vectors (ω i ) with integer coefficients are (integral) hyperweights, by definition. They have integer scalar products with hyper-roots and they are harmonic functions on the ribbon. Hyper-roots are particular hyper-weights. • The last step is to study the lattice of hyper-roots and its theta function. How this is done will be described in the next section. One can a posteriori check that the orthonormal projection of a Dirac measure on the ribbon on the subspace of harmonic functions (hyper-weights) is indeed a hyper-root. This could have been used as a method to determine them. Take δ u = (0, 0, . . . , 0, 1, 0, . . . 0) with |R ∨ | components and a single 1 in position u, the other components being 0's (so this is a Dirac measure on C R ∨ ); its projection can be decomposed on the basis (ω i ): P u = j P uj ω j , where the P uj coefficients have to be determined. Every ω i , with i ∈ {1 . . . r}, determines a vector o i of C R ∨ with components < ω i , β > where β runs in R ∨ . The projection P u also determines a vector p u of C R ∨ with components < P u , β >. The unknown P uj are obtained by imposing the orthogonality relations (δ u -p u ).o i = 0 for all i. Up to a rescaling factor N 2 , one checks that the obtained result P u is indeed one of the hyper-roots, the one localized in position u (where the coefficient 6 stands). Lattices and theta functions (reminders) We remind the reader a few results about lattices and their theta functions. This material can be gathered from [START_REF] Zagier | Elliptic Modular Forms and Their Applications, in 'The 1-2-3 of Modular forms[END_REF]. Consider a positive definite quadratic form Q which takes integer values on Z m . We can write Q = 1 2 x T A x, with x ∈ Z m and A a symmetric m × m matrix. Integrality of Q implies that A is an even integral matrix (its matrix elements are integers and its diagonal elements are even). Therefore A is a positive definite non singular matrix, and det(A) > 0. So the inverse A -1 exists, as a matrix with rational coefficients. The modular level of Q, or of A, is the smallest integer such that A -1 is again an even integral matrix -this notion differs from the notion of conformal level k used in the previous part of this article. ∆ = (-1) m det(A) is the discriminant of A. Given Q, one defines the theta function θ Q (z) = ∞ n=0 p(n) q n where20 q = exp(2iπz) and p(n) ∈ Z ≥0 is the number of vectors x ∈ Z m that are such that Q(x) = n. The function θ Q is always a modular form of weight m/2. In our framework m will always be even (in particular ∆ = det(A)) so that we set m = 2s with s an integer. The following theorem (Hecke-Schoenberg) is known [START_REF] Zagier | Elliptic Modular Forms and Their Applications, in 'The 1-2-3 of Modular forms[END_REF] and will be used: Let Q : Z 2s → Z a positive definite quadratic form, integral, with m = 2s variables, of level and discriminant ∆. Then the theta function θ Q is a modular form on the group Γ 0 ( ), of weight s, and character χ ∆ . In plain terms: θ Q ( az+b cz+d ) = χ ∆ (a) (cz + d) s θ Q (z) for all z ∈ H (upper half-plane) and a b c d ∈ Γ 0 ( ). 
Here Γ 0 ( ) is the subgroup 21 of SL(2, Z) defined by the condition c ≡ 0 mod and χ ∆ is the unique Dirichlet character modulo which is such that χ ∆ (p) = L(∆, p) for all odd primes p that do not divide , where L denotes the Legendre symbol. Notice that m, as defined above, is also, in our framework, the dimension of the space C of hyper-roots, which, for G = SU(3), is equal to 2r E . In that case, the weight of the (twisted) modular form θ Q is therefore equal to r E , the number of vertices of the fusion diagram, or the number of simple objects in E. About Dirichlet characters. Dirichlet characters are particular functions from the integers to the complex numbers that arise as follows: given a character on the group of invertible elements of the set of integers modulo p, one can lift it to a completely multiplicative function on integers relatively prime to p and then extend this function to all integers by defining it to be 0 on integers having a non-trivial factor in common with p. A Dirichlet character with modulus p takes the same value on two integers that agree modulo p. The interested reader may consult the abundant literature on the subject but it is enough for us to remember that they are a particular kind of completely multiplicative complex valued functions on the set of integers, that there are φ(p) characters modulo p, where φ is the Euler function, and that they are tabulated in many places -there is even a command DirichletCharacter[p,j,n] in Mathematica [18] that gives the Dirichlet character with modulus p and index j as a function of n (the index j running from 1 to φ(p)). 4 Lattices of hyper-roots of type SU(3) and their theta functions For k = 0, the general formulae of section 2.2.1 give N = 3, r E 0 = 1, r = 2, |R ∨ | = 3 and |R| = 6. One expects 22 that this lattice should coincide with the usual root lattice of SU(3), and R ∨ with the set of positive roots. The period is a rhombus 3 × 3 but the small fusion graph has a single vertex, with Z 3 grading 0, so, in order to build a basis of the hyper-root lattice, only two of the six weights {{1, 1}, {2, 1}, {1, 2}, {3, 1}, {2, 2}, {1, 3}} contribute (see section 3.2), those of grading 0, namely {1, 1} and {2, 2}; we therefore recover that the dimension is r = 2. The Gram matrix of the lattice of hyper-roots, in this basis, obtained from equation ( 8), is 6 -3 -3 6 , i.e., three times the Cartan matrix of SU(3), this lattice can therefore be considered as a rescaled version of the SU(3) root lattice (the hexagonal lattice). The theta function of the latter is well known and can be found in many textbooks, for instance in [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]; its expression in terms of the elliptic theta function ϑ 3 reads: θ(z) = ϑ 3 (0, q) 3 + ϑ 3 π 3 , q 3 + ϑ 3 2π 3 , q 3 3ϑ 3 (0, q 3 ) = 1 + 6 q 2 + 6 q 6 + 6 q 8 + 12 q 14 + 6 q 18 + 6 q 24 + 12 q 26 + 6 q 32 + O q 34 The theta function of the hyper-root lattice L 0 is then simply θ(3z) -replace q by q 3 . Although this special case (k = 0) coincides, up to scale, with a well known lattice, it is instructive to look at its theta function by using the theorems recalled in the previous section. 
The quadratic form defined by the Cartan matrix of SU(3) has level 3, so that its theta function is a modular form on the group Γ 0 (3), of weight s = 1; it is twisted by a non-trivial Dirichlet character modulo 3 (there are only two of them here, the first being trivial), and there is no constraint coming from the Legendre symbol condition since there are no odd primes less than 3 that do not divide 3. This vector space of modular forms is of dimension 1, hence θ can be identified with its generator. As an application, here is a very short program, using the computer algebra package Magma [START_REF] Bosma | The Magma algebra system. I. The user language[END_REF] that returns the above theta function, up to the same order (q 17 ) 2 -one has to rescale q-and uses the above concepts: For k = 1, N = 4, r A 1 = 3. The rank of the lattice L 1 is r = 2 r A 1 = 6. The period is a rhombus 4×4 and the number of hyper-roots is |R| = 32, with |R ∨ | = 16. This last number being reasonably small, we shall give more details for this lattice than for those that come after. Gram matrices There are many possible Gram matrices for this lattice: they differ by the choice of the basis (integral equivalence). They have a determinant equal to 4 6 . The lattice is even, with minimal norm 6. For illustration, three possible Gram matrices denoted A, A , A , are given below. The first two are respectively associated with the basis choices B 1 and B 2 described in section 3.2. The third simply relates to a fundamental fusion matrix of A 1 (see a comment in section 4.7). The matrix K is the inverse of A. We denote {α i }, i = 1 . . . 6, the elements of the hyper-root basis B 1 , i.e., < α i , α j >= A ij . The members of the dual basis (the hyper-weight basis) are denoted {ω j }, so that < ω i , α j >= δ ij and < ω i , ω j >= K ij . 
A =         6 2 2 -2 -2 -2 2 6 2 2 -2 2 2 2 6 2 2 -2 -2 2 2 6 2 2 -2 -2 2 2 6 -2 -2 2 -2 2 -2 6         A =         6 2 2 -2 -2 -2 2 6 2 2 2 -2 2 2 6 -2 2 2 -2 2 -2 6 2 -2 -2 2 2 2 6 2 -2 -2 2 -2 2 6         A =         6 2 2 2 -2 -2 2 6 2 -2 2 -2 2 2 6 -2 -2 2 2 -2 -2 6 -2 -2 -2 2 -2 -2 6 -2 -2 -2 2 -2 -2 6         K = 1 8         3 -1 -1 1 1 1 -1 3 -1 -1 1 -1 -1 -1 3 -1 -1 1 1 -1 -1 3 -1 -1 1 1 -1 -1 3 1 1 -1 1 -1 1 3         The 16 elements of R ∨ may be called "positive hyper-roots" (their opposites, the elements of -R ∨ , being "negative") and they can be expanded on the chosen root basis as follows: α 1 -α 2 + α 6 , -α 2 + α 3 -α 5 , -α 1 + α 3 -α 4 , -α 4 + α 5 + α 6 , α 6 , -α 2 + α 4 -α 5 , -α 1 -α 5 -α 6 , -α 1 + α 2 -α 4 , α 2 , α 4 , -α 3 + α 4 -α 6 , α 2 -α 3 -α 6 , α 1 , α 3 , α 5 , α 1 -α 3 + α 5 With the same ordering, the family of their mutual inner products build the following 16 × 16 matrix A big , which is of rank 6, as expected: A big =                            6 2 -2 2 2 2 -2 -2 -2 -2 -2 -2 2 -2 -2 2 2 6 2 -2 -2 2 2 -2 -2 -2 -2 -2 2 2 -2 -2 -2 2 6 2 -2 -2 2 2 -2 -2 -2 -2 -2 2 2 -2 2 -2 2 6 2 -2 -2 2 -2 -2 -2 -2 -2 -2 2 2 2 -2 -2 2 6 2 -2 2 2 2 -2 -2 -2 -2 -2 -2 2 2 -2 -2 2 6 2 -2 -2 2 2 -2 -2 -2 -2 -2 -2 2 2 -2 -2 2 6 2 -2 -2 2 2 -2 -2 -2 -2 -2 -2 2 2 2 -2 2 6 2 -2 -2 2 -2 -2 -2 -2 -2 -2 -2 -2 2 -2 -2 2 6 2 -2 2 2 2 -2 -2 -2 -2 -2 -2 2 2 -2 -2 2 6 2 -2 -2 2 2 -2 -2 -2 -2 -2 -2 2 2 -2 -2 2 6 2 -2 -2 2 2 -2 -2 -2 -2 -2 -2 2 2 2 -2 2 6 2 -2 -2 2 2 2 -2 -2 -2 -2 -2 -2 2 -2 -2 2 6 2 -2 2 -2 2 2 -2 -2 -2 -2 -2 2 2 -2 -2 2 6 2 -2 -2 -2 2 2 -2 -2 -2 -2 -2 2 2 -2 -2 2 6 2 2 -2 -2 2 -2 -2 -2 -2 -2 -2 2 2 2 -2 2 6                            The lattice is even, and if we rescale it, setting B = A/2, where A is one of the above Gram matrices, one finds det(B) = 64, and the vectors of minimal norm belonging to the lattice (no longer even) associated with the Gram matrix B have norm 3; however, the lattices L k , for k > 1 are usually not even. In all coming sections we shall always choose for the lattices under consideration a basis made of hyper-roots, and the diagonal elements of the associated Gram matrix (keeping in mind an arbitrariness of choice) will therefore always be equal to 6, that comes from the order of the Weyl group of SU(3). The determinant of L 1 , equal to 4096, is sometimes called "connection index", it is also the order of the dual quotient L * 1 /L 1 . The latter is an abelian group isomorphic with Z 2 × (Z 4 ) ×4 × Z 8 . The lattice L 1 is obviously not self-dual. If we rescale L 1 as above in such a way that the minimal norm is 3, and call B this new lattice, we see that the connection index is 64 and that the dual quotient B * /B is isomorphic with (Z 2 ) ×4 × Z 4 . Elements of the lattice B * belong to one and only one congruence class, an element of the dual quotient, they are therefore be classified by 5-uplets (c 2 1 , c 2 2 , c 2 3 , c 2 4 , c 4 ), with c 2 i ∈ {0, 1} and c 4 ∈ {0, 1, 2, 3}. Theta function A direct calculation leads to θ(z) = 1+32 q 6 +60 q 8 +192 q 14 +252 q 16 +480 q 22 +544 q 24 +832 q 30 +1020 q 32 +1440 q 38 +1560 q 40 +2112 q 46 +. . . 
This is in agreement with the following theta series: 1 2 ϑ 2 0, q 4 6 + ϑ 3 0, q 4 6 + ϑ 4 0, q 4 6 The latter is recognized as the theta function for a (scaled version of) the shifted D 6 lattice, called D + 6 = D 6 ∪ ([1] + D 6 ) , see [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]. Notice that two inequivalent lattices may have the same theta series, so the stated coincidence, by itself, is not sufficient to allow identification of L 1 and D + 6 which, ultimately, relies on the fact, as we shall see below, that one can choose the same Gram matrix to define both lattices. It is known [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF] that the D + n packing is a lattice packing if and only if n is even. In particular this is so for n = 6 -and we know a priori that L 1 is a lattice and not only a packing. The fact that D + n is not a lattice for n odd excludes a possible systematic identification with the lattices L k , k > 1, that are associated with higher hyper-root systems of SU(3) type. We may recover the previous theta function for this lattice by applying the Hecke-Schoenberg theorem. From the Gram matrix one finds that the discriminant is 4 6 and that the (modular) level of the quadratic form is 16. The odd primes not dividing 16 are 3, 5, 7, 11, 13 and their Legendre symbols are all equal to 1. From the 8 × 16 table of Dirichlet characters of modulus 16 over the cyclotomic field of order ϕ Euler (16) = 8 restricted to odd primes not dividing the level, one selects the unique character whose values coincide with the list obtained for the Legendre symbols. The space of modular forms on Γ 1 (16) of weight 3, twisted by this Dirichlet character, namely the Kronecker character -4, has dimension 7. It is spanned by the following forms (in the remaining part of this section we set q 2 = q 2 ): b 1 = 1 + 12 q 8 2 + 64 q 12 2 + 60 q 16 2 + O(q 24 2 ), b 2 = q 2 + 21 q 9 2 + 40 q 13 2 + 30 q 17 2 + 72 q 21 2 + O(q 24 2 ), b 3 = q 2 2 + 26 q 10 2 + 73 q 18 2 + O(q 24 2 ), b 4 = q 3 2 + 6 q 7 2 + 15 q 11 2 + 26 q 15 2 + 45 q 19 2 + 66 q 23 2 + O(q 24 2 ), b 5 = q 4 2 + 4 q 8 2 + 8 q 12 2 + 16 q 16 2 + 26 q 20 2 + O(q 24 2 ), b 6 = q 5 2 + 2 q 9 2 + 5 q 13 2 + 10 q 17 2 + 12 q 21 2 + O(q 24 2 ), b 7 = q 6 2 + 6 q 14 2 + 15 q 22 2 + O(q 24 2 ) An explicit determination of the vectors (and their norms) belonging to the first shells of the hyper-roots lattice of A 1 (SU(3)) shows that the theta function starts as 1 + 32q 3 2 + 60q 4 2 + O(q 14 ). The components of this modular form on the previous basis are therefore 1, 0, 0, 32, 60, 0, 0. In other words, θ = b 1 + 32 b 4 + 60 b 5 Using a computer package, one can quickly obtain the q-expansion of the functions b n to very large orders and recover or extend the result that was given for θ. As an alternative to the expression of θ previously given in terms of elliptic theta functions, here is a Magma program that returns its series expansion up to order 24 in q 2 and uses the above ideas: H := DirichletGroup(16,CyclotomicField(EulerPhi( 16))); chars := Elements(H); eps := chars [START_REF] Conway | Sphere Packings, Lattices and Groups[END_REF]; M := ModularForms([eps],3); order:=24; PowerSeries(M![1,0,0,32,60,0,0],order); 4.2.3 The automorphism group of the lattice L 1 For SU(2) hyper-roots (i.e., usual roots) the Weyl group is a subgroup of the automorphism group of the lattice. In the case of SU(3) hyper-roots latices, one can also, in each case, consider the automorphism group aut of the lattice. 
Using Magma, we find that the automorphism group of L 1 is of order 23040 and that it is isomorphic with the semi-direct product of A 6 (the alternated group of order 6!/2 = 360) times an abelian group of order 64, actually with ((C 2 ) ×5 A 6 ) C 2 . Orbits of the basis vectors under the aut action coincide and contain the 32 hyper-roots (the 16 positive and the 16 negative ones). The group aut is generated by the following matrices         0 0 0 -1 1 -1 -1 0 0 -1 0 -1 -1 0 1 0 0 -1 0 0 0 0 -1 0 -1 1 0 0 -1 0 0 0 0 1 0 0         ,         1 -1 0 0 1 0 0 -1 0 0 1 -1 0 -1 0 0 0 0 -1 0 1 0 0 -1 -1 0 0 -1 0 -1 0 0 0 0 -1 0         ,         1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 -1 1 -1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 -1 -1 1 0         Other avatars of the lattice L 1 We already identified the lattice L 1 with a scaled version of the shifted D 6 lattice, called D + 6 . Here are a few others. The generalized laminated lattice Λ 6 [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF] with minimal norm 3. It belongs to a family of lattices Λ n [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF] that was studied and classified in [START_REF] Plesken | Constructing integral lattices with prescribed minimum[END_REF]. These lattices have a kind of periodicity, and the relevant information is encoded in a tree of inclusions up to some maximal object that appears to be Λ 23 [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF], isomorphic to the so-called "shorter Leech lattice" (a nice sublattice of the Leech lattice). The authors construct the tree of inclusions and give the Gram matrices for all the Λ n [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF] of the sequence -actually they give the Gram matrix for n = 23 and a few others but this information is sufficient to reconstruct Gram matrices for all of them. In particular, for n = 6, one recovers half the matrix A 2 already obtained in this section. The fact that Λ 6 [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF] can be identified with D + 6 (and in particular with "our" L 1 ) is not mentioned in [START_REF] Plesken | Constructing integral lattices with prescribed minimum[END_REF] but the previous observation shows that it is so. One could again be tempted to identity the lattices L k , defined by the fusion graphs associated with E = A k (SU(3)), for k > 1, with other Λ n 's. This is however not the case because the minimal norm of the (unrescaled) L k , for k > 1, is 6, not 3. Moreover the lattices Λ n [START_REF] Coquereaux | Quantum McKay correspondence and global dimensions for fusion and module-categories associated with Lie groups[END_REF] in dimensions n = 12 and n = 20 have kissing numbers respectively equal to 136 and 1280 whereas the lattices associated with fusion graphs of SU(3) at levels k = 2, 3, i.e., also in dimensions 12 and 20, have kissing numbers 100 and 240. The lattice L 4 generated by cuts of the complete graph on a set of 4 vertices. 
In [START_REF] Deza | Delaunay Polytopes of Cut Lattices[END_REF], the authors study the "Delaunay Polytopes of Cut Lattices", i.e., the real span of the lattice L n generated by cuts of the complete graph on a set of n vertices, which is a vector space of dimension ( n 2 ). In particular the dimension is 6 when n = 4. The authors are interested in the Delaunay polytopes for the lattices L n . The point is that when n = 4, this lattice is isomorphic with D + 6 . Actually, the precise relation is L 4 = √ 2 D + 6 . Let e ij , 1 ≤ i < j ≤ n be an orthonormal basis of R ( n 2 ) . In this basis, a vector d of the lattice has coordinates d ij . An integral vector d ∈ L n if and only if d ij + d jk + d ki = 0 (mod 2), for all triples {i, j, k}. Again, one can see that lattices associated with fusion graphs of SU(3) cannot, in general, be identified with the lattices L n , unless n = 4. So the above properties (characterization of lattice vectors) hold only for the lattice L 1 . A short description of the Voronoi cells The Voronoi cells of the lattice L 1 have 588 vertices (576 correspond to shallow holes and 12 to deep holes). The deep holes are of maximal norm, equal to 4 (the covering radius). The Voronoi polytope has 92 5-dimensional facets, 32 of them are orthogonal to the vectors of norm 6 and 60 are orthogonal to the vectors of norm 8 (these norms are 3 and 4 if one uses the previously mentioned rescaled version of this lattice), 4896 edges and 588 vertices. These results can be found from the Gram matrix, for example using Magma [START_REF] Bosma | The Magma algebra system. I. The user language[END_REF], and agree with those of [START_REF] Deza | Delaunay Polytopes of Cut Lattices[END_REF] who, in another framework already studied the Voronoi and Delaunay dual tesselations of the lattice L 1 ∼ L 4 ∼ D + 6 (see previous paragraph). The hyper-root lattice L 2 of A 2 (SU(3)) For k = 2, N = 5, r A 2 = 6, the rank of the lattice L 2 is r = 2 r A 2 = 12. The period is a rhombus 5 × 5 and |R| = 100. A Gram matrix A for the lattice is given below A =                      6 0 2 0 2 0 -2 1 -2 2 -2 2 0 6 2 2 2 2 1 -1 0 -2 0 -2 2 2 6 0 2 2 2 2 -1 1 2 2 0 2 0 6 2 0 0 2 1 -2 2 0 2 2 2 2 6 0 2 2 2 2 -1 1 0 2 2 0 0 6 0 2 2 0 1 -2 -2 1 2 0 2 0 6 0 2 0 2 0 1 -1 2 2 2 2 0 6 2 2 2 2 -2 0 -1 1 2 2 2 2 6 0 0 -2 2 -2 1 -2 2 0 0 2 0 6 -2 2 -2 0 2 2 -1 1 2 2 0 -2 6 0 2 -2 2 0 1 -2 0 2 -2 2 0 6                      (14) The discriminant is readily calculated: ∆ = 5 9 . The modular level is = N 2 = 25. Theta function Applying the Hecke-Schoenberg theorem leads to the following result: the theta function of this lattice of hyper-roots of type SU(3) at conformal level k = 2 is of weight 6, modular level = 5 2 = 25 (the square of the altitude) and Dirichlet character χ [START_REF] Evans | Ocneanu cells and Boltzmann weights for the SU(3) ADE graphs[END_REF] for the characters modulo 25 on a cyclotomic field of order 20. It is the only character (namely the Kronecker character 5), the eleventh on a collection of 20 = Φ Euler [START_REF] Plesken | Constructing integral lattices with prescribed minimum[END_REF]) that coincides with the value of the Legendre symbol L(∆, p) for all odd primes p that do not divide 25. This space of modular forms has dimension 16. The theta function, in the variable q 2 = q 2 , is therefore fully determined by its 16 first Fourier coefficients (the first being 1). The coefficients of q a 2 with a > 15 are then predicted. 
The series starts as θ(z) = 1 + 100 q 3 2 + 450 q 4 2 + 960 q 5 2 + 2800 q 6 2 + 6600 q 7 2 + 12300 q 8 2 + . . .. Here are the first coefficients, up to q 48 2 = q 96 : 1, 0, 0, 100, 450, 960, 2800, 6600, 12300, 22400, 30690, 63000, 93150, 144000, 203100, 236080, 392850, 550800, 708350, 961800, 972780, 1581600, 1937250, 2495400, 2977400, 3063360, 4469400, 5547700, 6477600, 7963200, 7344920, 11094000, 12627000, 15127200, 17091900, 16459440, 22670850, 26899200, 29779950, 34869600, 31131750, 44964000, 48927900, 57061200, 62034900, 57598720, 77425500, 89018400, 95469650,. . . Here is the Magma code leading to this result: [START_REF] Plesken | Constructing integral lattices with prescribed minimum[END_REF]CyclotomicField(EulerPhi(25))); chars := Elements(H); eps := chars [START_REF] Evans | Ocneanu cells and Boltzmann weights for the SU(3) ADE graphs[END_REF]; M := ModularForms([eps],6); order:=48; PowerSeries(M![1, 0, 0,100, 450, 960, 2800, 6600, 12300, 22400, 30690, 63000, 93150, 144000, 203100, 236080],order); H := DirichletGroup The first Fourier coefficients have to be computed by a brute force approach that relies, ultimately, on the explicitly given Gram matrix. Other properties of this lattice The automorphism group of the lattice L 2 . The automorphism group aut of L 2 is of order 1200 and its structure, in terms of direct and semi-direct products, is C 2 × ((((C 5 × C 5 ) C 4 ) C 3 ) C 2 ) . Orbits of the basis vectors under the aut action coincide and contain the 100 hyper-roots (the 50 positive and the 50 negative ones). Since all the hyper-roots belong to a single orbit of aut, all their stabilizers are conjugated in aut, and found to be isomorphic with the group D 12 (which is itself isomorphic with S 3 × C 2 ). A short description of the Voronoi cells. We only mention that the Voronoi polytope has 5410 11-dimensional facets. The hyper-root lattice L 3 of A 3 (SU(3)) A Gram matrix is given in appendix 6.2. For k = 3, N = 6, r A 3 = 10, the rank of the lattice L 3 is r = 2 r A 3 = 20. The period is a rhombus 6 × 6 and |R| = 240. Theta function The discriminant is readily calculated: ∆ = 6 12 . The modular level is = 18. The theta function belongs to a space of modular forms on Γ 0 (18), of weight 10, twisted by an appropriate character of modulus 18 on a cyclotomic field of order 6 = Φ Euler (18). The corresponding space of modular forms has dimension 31 and the theta function of the lattice, determined by its first Fourier coefficients starts as θ(z) = 1 + 240 q 6 + 1782 q 8 + 9072 q 10 + 59328 q 12 + 216432 q 14 + 810000 q 16 + 2059152 q 18 + 6080832 q 20 + 12349584 q 22 + 31045596 q 24 + O q 25 Here is the list of its coefficients, up to order 60 in q: 1, 0, 0, 240,1782,9072,59328,216432,810000,2059152,6080832,12349584,31045596,57036960,122715648,204193872,418822650,622067040,1193611392,1734272208,3043596384,4217152080,7354100160,9446435136,15901091892,20507712192,32268036096,40493364288,64454759856,76079125584,118436670720,142127536464,209154411792,246451249296,369868125312,413358056928,611268619740,698624989632,981886883328,1108342458624,1597262339340,1716946287264,2447106074496,2701744008624,3674391470784,4018040848656,5617678157568,5869298618208,8140982862948,8753718885120,11607623460864,12394567905984,16938128525364,17305593381648,23493640620096,24756714700128,32196165379200,33726641096496,45246801175488,45433065648240 The automorphism group of the lattice L 3 . The automorphism group aut of L 3 is of order 864 = 2 5 3 3 . 
Its structure is C 2 × ((((C 6 × C 6 ) C 3 ) C 2 ) C 2 ). Voronoi cells. The Voronoi polytope has 539214 19-dimensional facets. Note. In what follows we shall only provide basic information about the lattices. All lattice related properties ultimately rely on the explicit expression of Gram matrices -that will be displayed in the coming sections or in appendix 6.2. In particular, for most following examples, we only mention the order of the automorphism group, as determined by the computer algebra system Magma, without discussing its structure or the way it is generated. The hyper-root lattice L 4 of A 4 (SU(3)) For k = 4, N = 7, r A 4 = 15, the rank of the lattice L 4 is r = 2 r A 4 = 30. The period is a rhombus 7 × 7 and |R| = 490. The discriminant is 7 15 , and the level is 49. The automorphism group has order 2 3 3 2 7 2 = 3528. The dimension of the appropriate space of modular forms (modular forms on Γ 1 (49) with character Kronecker character -7 and weight 15), is quite large: it has dimension 70 over the ring of integers. The theta series starts as follows: θ(z) = 1 + 490 q 6 + 4998 q 8 + 45864 q 10 + 464422 q 12 + 3429426 q 14 + 21668094 q 16 + 111678742 q 18 + 492567012 q 20 + 1876801038 q 22 + 6352945942 q 24 + 19484903508 q 26 + 54935857326 q 28 + 144330551050 q 30 + O(q 31 ) Higher L k 's We only mention that the theta series of L 5 (of rank 42) and L 6 (of rank 56) start as follows: A 5 : θ(z) = 1+896 q 6 +11856 q 8 +154368 q 10 +2331648 q 12 +27065088 q 14 +281311128 q 16 +O q 17 A 6 : θ(z) = 1 + 1512 q 6 + 24300 q 8 + 425736 q 10 + 8530758 q 12 + O q 13 4.7 Some remarks about the lattices L k About the determination of a Gram matrix, for general L k . We already described in section 3.2 one way to select a basis made of hyper-roots. However, for each particular value of k, the determination of a Gram matrix, using equation 5 and a chosen basis, is not a computationally totally trivial task, and it would be nice to have a way to deduce such a matrix from the fusion coefficients of the module by a simpler algorithm. As already commented in section 3.2, we do not have any canonical choice here for the Gram matrix (no available Cartan matrix) and the naive generalization of the SU(2) algorithm to the SU(3) family fails. Let us nevertheless mention that in the cases k = 1 (see matrix A in section 4.2.1) and k = 2, the following simple expressions, written in terms of fusion matrices, are Gram matrices for the lattices L 1 , L 2 and are equivalent to those given previously: 6 l 1 6 + 2(F {1,2} + F {2,1} ) 2 l 1 3 -2(F {1,2} + F {2,1} ) 2 l 1 3 -2(F {1,2} + F {2,1} ) -2(F {1,2} + F {2,1} ) 6 l 1 12 + 2(F {1,2} + F {2,1} ) 2 l 1 6 + (F {1,2} + F {2,1} ) + (F {1,3} + F {3,1} ) -F {2,2} 2 l 1 6 + (F {1,2} + F {2,1} ) + (F {1,3} + F {3,1} ) -F {2,2} 2(F {1,2} + F {2,1} ) About the determination of θ(z), for general k. The theta function of L k , as a modular form twisted by a character, can, in principle, be obtained by following the method explained in the previous sections and illustrated in the case of the first few members of the L k series. In this respect we observed that the (quadratic form) level of L k is often equal to = (k + 3) 2 but it is not so for L 3 where the level is 18 and not 36. Notice that for L 1 , the matrix 8A -1 is integral but its diagonal elements are odd, so the level is indeed 16. 
The discriminant of the lattice L k is (k +3) 3(k+1) , the weight is r A k = (k +1)(k +2)/2, the quadratic form level is readily obtained from the Gram matrix, and the determination of the appropriate character requires a discussion relying on the arithmetic properties of the discriminant and of the level. However, the first coefficients of the Fourier series expansion have to be found, and the number of needed coefficients depends on the properties of an appropriate space of modular forms. The determination of the needed coefficients is done by brute force, namely by computing the norm of the vectors belonging to the first shells, using the Gram matrix as an input. Moreover, the explicit determination of a Gram matrix for L k (k being given) also becomes a non-trivial exercise when k is large (see the previous comment). The present method may therefore become rapidly intractable if we increase k too much. Admittedly it would be nice to have a general formula, like the one that we have for the root lattices of type A n-1 , that would be valid for all k's, and would express the theta function of L k in terms of known functions (for instance elliptic theta's). This was not done but we hope that our results will trigger new developments in that direction. About the vectors of smallest norm. For all the lattices L k that we considered explicitly, the lattice vectors of shortest length are precisely the hyper-roots (100 of them for L 3 , for instance), the kissing number of those lattices are then given by the number of hyper-roots. As it is well known, this property holds for all usual root lattices, i.e., hyper-root lattices of the SU(2) family. However, as we shall see below, this property does not always hold for those lattices associated with modules of the SU(3) family that are not of type A k (SU(3)). Theta function for D 3 (SU(3)) A Gram matrix is given in appendix 6.2. The fusion graphs of the D k (SU(3)) series are Z 3 orbifolds of the A k (SU(3)), and, when k = 0 mod 3, their number of vertices (simple objects of the category) is 1 3 (r A k -1) + 3, i.e., 1 3 ( (k+1)(k+2) 2 -1) + 3. So, for k = 3 we have r D 3 = 6, and the rank of the quadratic form is r = 2 r D 3 = 12. The discriminant of the quadratic form is 3 12 and the (modular) level is 2 × 9. The order of the automorphism group is 2 8 3 3 . The reader can check that θ, given below, belongs to a space of modular forms on Γ 0 (18), of weight 6, twisted by an appropriate character. θ(z) = 1 + 36 q 4 + 144 q 6 + 486 q 8 + 2880 q 10 + 5724 q 12 + 7776 q 14 + 31068 q 16 + 40320 q 18 + 47628 q 20 + O q 21 The number of hyper-roots is |R| = 2N 2 r D 3 /3 = 2(3 + 3) 2 6/3 = 144, whereas the number of vectors of smallest norm is 36. This is the first manifestation of a phenomenon that we mentioned in the previous paragraph and that never occurs for usual root lattices. In the present case, the first shell is made of vectors of norm 4, that are not hyper-roots, and the only vectors of the lattice that belong to the second shell, or norm 6, are precisely the hyper-roots. Vectors of smallest norm can of course be expanded on a chosen basis of hyper-roots; here are, for instance, the components of one of them, on the basis that is chosen to write the Gram matrix A of D 3 in appendix 6.2 : taking v = {1, 1, 1, 1, -2, -1, 0, 0, 0, 0, 1, 1}, one can check that < v, v >= 4. One finds that the vector space spanned by the 36 vectors of shortest length (it is enough to choose 18 of them from the pairs (v, -v)) is of dimension 6. 
Theta function for D 6 (SU(3)) A Gram matrix is given in appendix 6.2. The number of simple objects is r D 6 = 12 and the rank of the quadratic form is r = 2 r D 6 = 24. The discriminant of the quadratic form is 3 18 and the modular level is 2 × 27. The order of the automorphism group is 2 6 3 11 . The theta function reads: θ(z) = 1 + 162 q 4 + 2322 q 6 + 35478 q 8 + 273942 q 10 + 1771326 q 12 + 9680148 q 14 + 40813632 q 16 + 150043014 q 18 + 484705782 q 20 + O q 21 The number of hyper-roots is |R| = 2N 2 r D 6 /3 = 2(6 + 3) 2 12/3 = 648. The first shell is made of 162 vectors of norm 4, that are not hyper-roots, and the second shell, of norm 6, contains not only the hyper-roots themselves, but 3522 -648 = 2874 other vectors. Theta function for E 5 (SU(3)) A Gram matrix is given in appendix 6.2. The rank of the category is r E 5 = 12 and the rank of the quadratic form is r = 2 r E 5 = 24. Its discriminant is 2 30 and the modular level is 2 × 8. The order of the automorphism group is 2 11 3. The theta function reads: θ(z) = 1 + 512 q 6 + 11232 q 8 + 145920 q 10 + 1055616 q 12 + 5618688 q 14 + 25330128 q 16 + 89127936 q 18 + 295067136 q 20 + O q 21 Here, the hyper-roots (2N 2 r E 5 /3 = 28 2 12/3 = 512 of them), like for the A k series, coincide with the vectors of smallest length. Theta function for E 9 (SU(3)) A Gram matrix is given in appendix 6.2. The rank of the category is r E 9 = 12 and the rank of the quadratic form is r = 2 r E 9 = 24. Its discriminant is 2 24 and the modular level is 2 × 8. One finds: θ(z) = 1 + 756 q 4 + 5760 q 6 + 98928 q 8 + 1092096 q 10 + 8435760 q 12 + 45142272 q 14 + 202712400 q 16 + 715373568 q 18 + 2350118808 q 20 + O q 21 The number of hyper-roots is |R| = 2(9 + 3) 2 12/3 = 1152 and we observe that there are 756 vectors of smaller norm (4) that build the first shell, and that the second shell contains not only the hyper-roots, but other vectors as well. Theta function for E 21 (SU(3)) A Gram matrix is given in appendix 6.2. Here we have r E 21 = 24, so that the rank of the quadratic form is r = 2 r E 21 = 48. Its discriminant is 3 12 and the modular level is 2 × 3. One finds: θ(z) = 1 + 144 q 4 + 64512 q 6 + 54181224 q 8 + O q 9 The number of hyper-roots is |R| = 2(21 + 3) 2 24/3 = 9216 but the kissing number is only 144. Therefore, here again, the vectors of smallest norm are not hyper-roots, and not all the lattice vectors of the second shell are hyper-roots. Discussion and summary Root systems of Lie algebras are hyper-root systems of type G = SU(2) and generate lattices whose properties and associated theta series are well-known. Here we consider hyper-root systems of type G = SU(3) using a general definition given by A. Ocneanu in 2000 [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF]. Such systems are classified by "quantum modules" or "quantum subgroups" i.e., using a categorical language, by module-categories over the modular fusion category defined by a pair (Lie(G), k), where k, the level, is a non-negative integer. In turn, quantum modules are characterized by graphs, that, for SU(3), have been obtained long ago in the mathematical physics literature (conformal field theories) and generalize the usual ADE Dynkin diagrams. Hyper-root systems generate lattices, and our main purpose, after having recalled in the first two sections the necessary definitions and concepts, was to provide closed expressions for the corresponding theta series. 
By exhibiting Gram matrices encoding the geometry of lattices we found that their theta series could be expressed in terms of modular forms twisted by appropriate Dirichlet characters and we gave explicit results for a number of examples belonging to various families of quantum subgroups. The first few terms of several series had already been obtained [START_REF] Ocneanu | The Classification of subgroups of quantum SU(N)[END_REF], [START_REF] Ocneanu | [END_REF], and our expressions, that can be expanded to arbitrary orders, agree with these older results. Using explicit Gram matrices leads to a technique that is only suitable for low levels but we hope that this work will trigger new developments that could allow one to obtain general formulae for G = SU(3), for all values of the level. Appendices Fundamental fusion matrices For completeness sake we give the fundamental fusion matrices F (1,0) = F {2,1} for the cases considered in the text (we have replaced zeroes by dots). They are also adjacency matrices for the associated fusion graphs. These expressions are needed to determine, first, the other fundamental fusion matrices, using the recurrence relation 2, then the inner product < α, β > of hyper-roots, using equations 5 or 6. Similar expressions for the other SU(3) cases can be gathered from the available literature, and also from the website [START_REF] Coquereaux | Fusion graphs[END_REF]. A 1 :   . 1 . . . 1 1 . .   A 2 :        . 1 . . . . . .                A 4 :                          . 1                          D 6 :                    . . . . . .                    E 9 :                    . . . .                    E 21 :                                            . .                                            Gram matrices for lattices considered in the text Gram matrices for lattices L 1 and L 2 , associated with the A k series, were given before. Here we display L 3 (it is small enough to fit in a single page), and we also give those associated with D 3 , D 6 , and with the three exceptional E 5 , E 9 , E 21 . Lattice L 3 associated with A 3 . 
A =                                    6 0 0 0 2 0 0 2 0 0 -2 1 0 0 -2 2 0 -2 2 0 0 6 0 0 2 2 2 2 2 2 1 0 1 1 0 0 0 0 0 0 0 0 6 0 0 0 2 0 2 0 0 1 -2 0 2 0 -2 0 -2 2 0 0 0 6 0 2 0 0 0 2 0 1 0 -2 0 -2 2 2 0 -2 2 2 0 0 6 0 0 2 2 0 2 2 0 0 -1 1 1 2 2 0 0 2 0 2 0 6 0 2 0 2 0 2 0 2 1 -1 1 2 0 2 0 2 2 0 0 0 6 0 2 2 0 2 2 0 1 1 -1 0 2 2 2 2 0 0 2 2 0 6 0 0 2 2 0 0 2 2 0 -1 1 1 0 2 2 0 2 0 2 0 6 0 0 2 2 0 2 0 2 1 -1 1 0 2 0 2 0 2 2 0 0 6 0 2 0 2 0 2 2 1 1 -1 -2 1 0 0 2 0 0 2 0 0 6 0 0 0 2 0 0 2 0 0 1 0 1 1 2 2 2 2 2 2 0 6 0 0 2 2 2 2 2 2 0 1 -2 0 0 0 2 0 2 0 0 0 6 0 0 0 2 0 2 0 0 1 0 -2 0 2 0 0 0 2 0 0 0 6 0 2 0 0 0 2 -2 0 2 0 -1 1 1 2 2 0 2 2 0 0 6 0 0 0 -2 2 2 0 0 -2 1 -1 1 2 0 2 0 2 0 2 0 6 0 -2 2 0 0 0 -2 2 1 1 -1 0 2 2 0 2 2 0 0 0 6 2 0 -2 -2 0 0 2 2 2 0 -1 1 1 2 2 0 0 0 -2 2 6 0 0 2 0 -2 0 2 0 2 1 -1 1 0 2 2 0 -2 2 0 0 6 0 0 0 2 -2 0 2 2 1 1 -1 0 2 0 2 2 0 -2 0 0 6                                    Lattice D 3 A =                    6 0 0 0 2 2 -2 1 1 1 0 0 0 6 0 0 2 2 1 -2 1 1 0 0 0 0 6 0 2 2 1 1 -2 1 0 0 0 0 0 6 2 2 1 1 1 -2 0 0 2 2 2 2 6 4 2 2 2 2 1 4 2 2 2 2 4 6 2 2 2 2 4 1 -2 1 1 1 2 2 6 0 0 0 2 2 1 -2 1 1 2 2 0 6 0 0 2 2 1 1 -2 1 2 2 0 0 6 0 2 2 1 1 1 -2 2 2 0 0 0 6 2 2 0 0 0 0 1 4 2 2 2 2 6 0 0 0 0 0 4 1 2 2 2 2 0 6                    Lattice D 6 A =                                            6 0 0 0 0 0 0 0 2 0 2 0 -2 1 0 1 1 1 2 0 0 0 0 2 0 6 0 0 0 0 0 0 2 0 2 0 1 -2 0 1 1 1 2 0 0 0 0 2 0 0 6 0 0 0 0 2 0 2 0 0 0 0 -2 1 0 0 2 -2 0 -2 0 2 0 0 0 6 0 0 2 2 2 2 2 2 1 1 1 0 1 2 0 0 2 0 2 0 0 0 0 0 6 0 0 0 2 0 2 0 1 1 0 1 -2 1 2 0 0 0 0 2 0 0 0 0 0 6 2 0 2 0 2 2 1 1 0 2 1 -1 -2 2 2 2 2 -2 0 0 0 2 0 2 6 0 0 2 2 0 0 0 0 2 0 2 -1 1 2 2 2 0 0 0 2 2 0 0 0 6 0 2 0 2 0 0 2 2 0 0 1 -1 1 2 0 2 2 2 0 2 2 2 0 0 6 0 4 2 2 2 0 2 2 2 2 1 2 0 4 2 0 0 2 2 0 0 2 2 0 6 0 0 0 0 2 2 0 0 2 2 0 -1 1 1 2 2 0 2 2 2 2 0 4 0 6 0 2 2 0 2 2 2 2 0 4 1 2 2 0 0 0 2 0 2 0 2 2 0 0 6 0 0 0 2 0 2 0 2 2 1 2 -1 -2 1 0 1 1 1 0 0 2 0 2 0 6 0 0 0 0 0 0 0 2 0 2 0 1 -2 0 1 1 1 0 0 2 0 2 0 0 6 0 0 0 0 0 0 2 0 2 0 0 0 -2 1 0 0 0 2 0 2 0 0 0 0 6 0 0 0 0 2 0 2 0 0 1 1 1 0 1 2 2 2 2 2 2 2 0 0 0 6 0 0 2 2 2 2 2 2 1 1 0 1 -2 1 0 0 2 0 2 0 0 0 0 0 6 0 0 0 2 0 2 0 1 1 0 2 1 -1 2 0 2 0 2 2 0 0 0 0 0 6 2 0 2 0 2 2 2 2 2 0 2 -2 -1 1 2 2 2 0 0 0 0 2 0 2 6 0 0 -2 0 4 0 0 -2 0 0 2 1 -1 1 2 0 2 0 0 2 2 0 0 0 6 0 0 2 -2 0 0 0 2 0 2 2 1 2 0 4 2 2 2 0 2 2 2 0 0 6 2 2 0 0 0 -2 0 0 2 2 2 0 -1 1 1 0 0 2 2 0 0 -2 0 2 6 0 0 0 0 0 2 0 2 2 0 4 1 2 2 2 2 0 2 2 2 0 2 2 0 6 0 2 2 2 0 2 -2 0 2 2 1 2 -1 0 0 0 2 0 2 4 -2 0 0 0 6       6 0 0 0 0 0 2 0 0 0 2 0 -2 0 1 1 0 2 -2 2 2 0 -2 2 0 6 0 0 0 0 0 2 0 0 0 2 0 -2 1 1 2 0 2 -2 0 2 2 -2 0 0 6 0 2 0 2 2 2 0 2 2 1 1 0 2 -2 2 2 0 -2 2 0 2 0 0 0 6 0 2 2 2 0 2 2 2 1 1 2 0 2 -2 0 2 2 -2 2 0 0 0 2 0 6 0 0 0 0 0 0 2 0 0 2 0 -2 0 1 1 0 0 0 2 0 0 0 2 0 6 0 0 0 0 2 0 0 0 0 2 0 -2 1 1 0 0 2 0 2 0 2 2 0 0 6 0 2 0 2 2 2 0 2 2 1 1 0 2 2 0 2 2 0 2 2 2 0 0 0 6 0 2 2 2 0 2 2 2 1 1 2 0 0 2 2 2 0 0 2 0 0 0 2 0 6 0 0 0 0 0 2 0 0 0 2 0 -2 0 1 1 0 0 0 2 0 0 0 2 0 6 0 0 0 0 0 2 0 0 0 2 0 -2 1 1 2 0 2 2 0 2 2 2 0 0 6 0 2 0 2 2 0 2 2 2 1 1 0 2 0 2 2 2 2 0 2 2 0 0 0 6 0 2 2 2 2 0 2 2 1 1 2 0 -2 0 1 1 0 0 2 0 0 0 2 0 6 0 0 0 0 0 2 0 0 0 2 0 0 -2 1 1 0 0 0 2 0 0 0 2 0 6 0 0 0 0 0 2 0 0 0 2 1 1 0 2 2 0 2 2 2 0 2 2 0 0 6 0 2 0 2 2 2 0 2 2 1 1 2 0 0 2 2 2 0 2 2 2 0 0 0 6 0 2 2 2 0 2 2 2 0 2 -2 2 -2 0 1 1 0 0 0 2 0 0 2 0 6 0 0 0 2 0 2 -2 2 0 2 -2 0 -2 1 1 0 0 2 0 0 0 0 2 0 6 0 0 0 2 -2 2 -2 2 
2 0 1 1 0 2 2 0 2 2 2 0 2 2 0 0 6 0 -2 2 2 0 2 -2 0 2 1 1 2 0 0 2 2 2 0 2 2 2 0 0 0 6 2 -2 0 2 2 0 -2 2 0 0 2 0 -2 0 1 1 0 0 2 0 2 0 -2 2 6 0 0 0 0 2 2 -2 0 0 0 2 0 -2 1 1 0 0 0 2 0 2 2 -2 0 6 0 0 -2 2 0 2 0 2 2 2 1 1 0 2 2 0 2 2 2 -2 2 0 0 0 6 0 2 -2 2 0 2 0 2 2 1 1 2 0 0 2 2 2 -2 2 0 2 0 0 0 6      6 0 1 1 -1 0 0 0 0 0 0 2 2 0 0 0 0 0 2 -2 0 • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 0 • • • 2 • -2 • • • 1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 • • -1 -1 • • • -1 • • • -1 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • 1 • • • -2 -1 • • • -1 • • • 1 -2 • • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • 0 • • • 2 • • • -2 2 • • • -2 • • • 2 0 • • • 2 • • • 0 6 • • • 2 • • • 0 2 • • • -2 • • • -2 2 • • • 2 • • • 0 • 1 • • • -2 -1 • • • -1 • • • 1 -2 • • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • • -1 -1 • • • -1 • • • -1 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • -2 • • • 1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 0 • • • 2 • • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 2 • • • 2 • 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 • • -2 -1 • • • -2 • • • -1 -2 • • • 0 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • -1 • • • 1 -1 • • • -2 • • • -1 1 • • • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • 2 • • • 2 • • • 0 -2 • • • 0 • • • -2 2 • • • 2 • • • 6 0 • • • 2 • • • 2 -2 • • • 0 • • • 0 -2 • • • 2 • • • 2 • -1 • • • 1 -1 • • • -2 • • • -1 1 • • • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • • -2 -1 • • • -2 • • • -1 -2 • • • 0 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 2 • • • 2 • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 -2 • • • 0 • -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 • • -1 1 • • • -2 • • • -1 1 • • • -1 • • • 2 2 • • • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • 2 • • • 2 -2 • • • 0 • • • 0 -2 • • • 2 • • • 2 0 • • • 6 • • • 2 2 • • • 0 • • • -2 -2 • • • 0 • 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • • 2 2 • • • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 -2 • • • 0 • • • -1 1 • • 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • • • -1 1 • • • 2 • • • 2 • 6 • • • 0 2 • • • 2 • • • 0 -2 • • • 0 • • • -2 2 • • • 2 • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 2 • • • -2 • -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 -2 • • • 1 • • • -2 2 • • • -1 • • • -1 • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • • 2 0 • • • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 2 • • • 0 2 • • • -2 • • • -2 2 • • • 2 • • • 0 6 • • • 0 • • • 2 0 • • • -2 • • • 2 2 • • • -2 • 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • • 2 0 • • • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 -2 • • • 1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 2 • • • -2 • • • 1 -2 • • 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • • • 1 -2 • • • 2 • • • 0 • 0 • • • 6 0 • • • 2 • • • -2 2 • • • -2 • • • 2 0 • • • 2 • • 2 • • • 2 2 • • • 
2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • -2 • • • 0 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • 0 • • • 6 • • • 2 2 • • • 0 • • • -2 -2 • • • 0 • • • 2 2 • • • 6 • • • 0 2 • • • 2 • • • 0 -2 • • • 0 • • • -2 • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • -2 • • • 0 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • 6 • • • 0 • • • 2 0 • • • -2 • • • 2 2 • • • -2 • • • 2 0 • • • 0 • • • 6 0 • • • 2 • • • -2 2 • • • -2 • • • 2 • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • -2 • • • 2 -2 • • • 1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 • • 2 0 • • • 1 • • • -2 -1 • • • -1 • • • 1 -2 • • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • 0 • • • 6 0 • • • 2 • • • -2 2 • • • -2 • • • 2 0 • • • 2 • • • 0 6 • • • 2 • • • 0 2 • • • -2 • 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • • • 1 -2 • • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 -2 • • • 1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • -2 • • • 1 • • • 2 0 • • • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 • 2 • • • 0 6 • • • 0 • • • 2 0 • • • -2 • • • 2 2 • • • -2 • • 0 • • • -2 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 -2 • • • 0 • • • -1 1 • • • 0 • • • -2 • • 2 2 • • • -1 • • • 1 -1 • • • -2 • • • -1 1 • • • 2 • • • 2 2 • • • 2 • • • -1 1 • • • 6 • • • 0 2 • • • 2 • • • 0 -2 • • • 0 • • • -2 2 • • • 2 • • • 6 0 • • • 2 • • • 2 -2 • • • 0 • 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • • • -1 1 • • • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 -2 • • • 0 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • 1 • • • -1 • • • 2 2 • • • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 • 2 • • • 2 0 • • • 6 • • • 2 2 • • • 0 • • • -2 -2 • • • 0 • • -1 • • • 1 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 1 • • • -1 • • • 0 -2 • • • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • • 2 2 • • • 2 • • • 2 1 • • • -1 • 2 • • • 2 • • • 6 0 • • • 2 • • • 2 -2 • • • 0 • • • 0 -2 • • • 2 • 
• • 2 0 • • • 6 • • • 2 2 • • • 0 • • • -2 • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • • 2 2 • • • 2 • • • 2 1 • • • -1 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 1 • • • -1 • • • 0 -2 • • -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 • -1 • • • 1 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • • 1 • • • -2 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 -2 • • • 1 • • • -2 2 • • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • • 2 0 • • • 2 • • • 0 -2 • • • 1 • 0 • • • 2 • • • 0 6 • • • 2 • • • 0 2 • • • -2 • • • -2 2 • • • 2 • • • 0 6 • • • 0 • • • 2 0 • • • -2 • • • 2 • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • • 2 0 • • • 2 • • • 0 -2 • • • 1 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 -2 • • • 1 • • • -2 2 • • -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 • 1 • • • -2 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 2 • • • 2 • • • 2 2 • • • -1 • • • 1 • • 2 2 • • • 0 • • • -2 -2 • • • 0 • • • 2 2 • • • 6 • • • 0 2 • • • 2 • • • 0 -2 • • • 2 • • • 2 1 • • • -1 • • • -2 -1 • • • -1 • • • 1 2 • • • 2 • • • 2 2 • • • -1 • • • 1 -1 • • • -2 • 1 • • • -1 • • • 0 -2 • • • -2 • • • -1 -1 • • • -2 • • • 0 -2 • • • -1 • • • 1 -2 • • • 0 • • • -2 -1 • • • -2 • • • -1 • -2 • • • -1 -2 • • • 0 • • • -1 1 • • • 0 • • • -2 -1 • • • -2 • • • -2 -1 • • • 0 • • • -2 1 • • • -1 • • • -1 1 • • • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • • 2 2 • • 2 • • • 2 • • • 6 0 • • • 2 • • • 2 -2 • • • 0 • • • 0 -2 • • • 2 • • • 2 • 2 • • • 2 2 • • • 2 • • • -1 1 • • • -2 • • • -1 1 • • • -1 • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 0 • • • 2 • • • 2 0 • • • 1 • • • -2 • • 2 0 • • • -2 • • • 2 2 • • • -2 • • • 2 0 • • • 0 • • • 6 0 • • • 2 • • • -2 2 • • • 2 • • • 0 -2 • • • 1 • • • -1 -1 • • • 1 • • • -2 0 • • • 2 • • • 2 0 • • • 1 • • • -2 -1 • • • -1 • -2 • • • 1 • • • -2 2 • • • -1 • • • -1 -1 • • • -1 • • • -2 2 • • • 1 • • • -2 2 • • • -2 • • • -1 -1 • • • -1 • • • -1 • -1 • • • -1 2 • • • -2 • • • 1 -2 • • • -2 • • • 2 -1 • • • -1 • • • -1 -1 • • • -2 • • • 2 -2 • • • 1 • • • 1 -2 • • • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • • • 2 0 • • 0 • • • 2 • • • 0 6 • • • 2 • • • 0 2 • • • -2 • • • -2 2 • • • 2 • • • 0 • 2 • • • 0 0 • • • 2 • • • 1 -2 • • • -1 • • • -1 -2 • • • 1 • )) are classified by ADE Dynkin diagrams. If one chooses for instance k = 10, there are three of them, described by the Dynkin Diagrams A 11 , D 7 , E 6 , the first being the modular fusion category A k (SU(2)) itself, and N = k + 2 is the Coxeter number. Figure 3 : 3 Figure 3: SU(3) at level 2: one of the hyper-root hexagons (symmetry properties). Figure 5 : 5 Figure 5: Harmonicity property for one of the 12 roots of A 3 (SU(2)), i.e., SU(2) at level 3. The period (read vertically) is 2 × 5 since N = g + k = 2 + 3 = 5. Horizontally one recognizes the Dynkin diagram A 4 of SU[START_REF] Coquereaux | Comments about quantum symmetries of SU(3) graphs[END_REF]. The chosen root is located where the "2" stands. 
Figure 7 : 7 Figure 7: SU(3) at level 2: the periodicity rhombus associated with one hyper-root (marked 6). 4. 1 1 The hyper-root lattice L 0 of A 0 (SU(3)) H := DirichletGroup( 3 , 3 CyclotomicField(EulerPhi(3))); chars := Elements(H); eps := chars[2]; M := ModularForms([eps],1); Basis(M,17);4.2 The hyper-root lattice L 1 of A 1 (SU(3)) Figure 9 : 9 Figure9: Twelve relative hexagons associated with a basis of hyper-roots for A 2 (SU(3)). There are 50 positive (i.e., restricted) hyper-roots and therefore also 50 relative hexagons. Those displayed here correspond to the basis B 2 . . 1 . . . . . . . . . . 1 1 . . . . . . 1 . . . 1 . . . . . . . . . 1 . 1 . . . . 1 . . . 1 . 1 . . . . 1 . . . . . 1 . . . . . . . . 1 . . . . . 1 . . . . 1 . . . . . 1 . . . . 1 . . . . . 1 . . . .  1 1 . . 1 . . . 1 . . . . . 1 . . 1 . . . 1 . . 1 . . .        A 3 :               . . . . . . . . . . . . . . . 1 1 . . . . . . . . . . . 1 . . . 1 . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . 1 . . . 1 . 1 . . . . . . . . . 1 . . . . . 1 . . . . . . . . . . . . . 1 . . 1 . . . . . . . 1 . . . . 1 . . 1 . . . . . . . 1 . . . . 1 . . 1 . . . . . . . 1 . . . . . . . 1 . . . . . . . . . . . . 1 . . . . . . . . . 1 . . . . . 1 . . . . . . . . . 1 . . . . . 1 . . . . . . . . . 1 . . . . . 1 . . . . . . . . . 1 . . . . . . . 1 . . . . . . . . . . . 1 . . . . . . . . . . 1 . . . . . . . . . . 1 1 1 . . . . . . . . . . . 1 . . . . . . . . . 1 . 1 . . . . . . . . . . . . 1 1 . . . . . . . . . . 1 . 1 . . . . . . . . . . 2 1 . . 1 1 . . . . . . . . . . . . 1 . 1 1 . 1 1 1 . . . . . .      . . . . 1 . . . . . 1 . . . . . 1 . . . . . . 2        1 1 1 1 . .  . . . . . . 1 . . . . . E 5 :                   . . . . . . . 1 . . . . . . . . 1 . 1 1 . . . . . . . . . 1 1 1 . . . . . . . . . . . . . . . 1 . . . . . . . . . . 1 . . . . . . . . . 1 . 1 1 . . . . . . . . . 1 1 1 . . 1 . . . . . . . . . . . . 1 . . . . . . . . 1 . 1 1 . . . . . . . . . 1 1 1 . . . . . . . . 1 . . . . . . . . . . . . 1 . . . . . . . . . . . . 1 . . . . . . . . . 1 1 1 2 . . . . . . . . . . . . 1 . . 1 . . . . . . . . . 1 . 1 . . . . . . . . . . 1 1 . . . . . . . . 1 1 1 1 1 . . 1 . . . . . . . . . 1 . 1 . . . . . . . . . . 1 1 . . . . . . . . . . . 2 . . . . . . . . . . . . . . 1 . . . . . . . . . . . . . . . . . . . . . . . 1 1 1 . . . . . . . . . . . . . . . . . . . . . . . 1 . 1 . 1 . . . . . . . . . . . . . . . . . . 1 . 1 . 1 . . . . . . . . . . . . . . . . . . . . 1 . 1 1 . . . . . . . . . . . . . . . . . . . . 1 1 . 1 . . . . . . . . . . . . . . . . . . . . . . . 1 1 1 . . . . . . . . . . . . . . . . . . . . . . . 1 . . . . . . . . . . . . . . . . . . . . . . . . 1 1 . . . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . . . . . . . . 1 1 1 . 1 . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . . . . . . 1 . 1 1 1 . . . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . . . . . . . . 1 1 1 1 . . . . . . . . . . . . . . . . . . . . . . . 1 1 . . . . . . . . . . . . . . . . . . . . . . 1 . 1 1 1 . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . . . . . . 1 . 1 1 1 . . . . . . . . . . . . . . . . . . . . 1 . . 1 . . . . . . . . . . . . . . . . . . . . . . . 1 1 . . . . . . . . . . . . . . . . or semi-simple, but k is then a multiplet of positive integers. 
The proof of equivalence given in the first two references assumed a negative level. The fact that it holds in all cases has been part of the folklore for a long time because it could be verified on a case by case basis. Its general validity is now considered as a consequence of the Huang's proof of the Verlinde conjecture[START_REF] Huang | Vertex operator algebras, the Verlinde conjecture, and modular tensor categories[END_REF]. This amounts to say[START_REF] Ostrik | Module categories, weak Hopf algebras and modular invariants[END_REF] that we are given a monoidal functor from A k (G) to the category of endofunctors of an abelian category E k (G). The Fn are sometimes called "annular matrices" when A k (G) and E k (G) are distinct (if they are the same, then Fn = Nn), and the τa are sometimes called "essential matrices". This is the shifted Weyl action: w • n = w(n + ρ) -ρ where ρ is the Weyl vector. meaning that we consider the fusion category A k (G) or one of its module-categories The terminology "ribbon" comes from A. Ocneanu. for reasons explained in section 2.2.4. actually we follow the reversed red arrows in figure2, which means that we use the opposite Λ, but this choice is purely conventional and plays no role in the sequel. This harmonicity property is illustrated for A2(SU 3) in figure4. Using eq. 5 one could define a periodic inner product on Λ ×Z E that would not be positive definite because of the periodicity, but we consider directly its non-degenerate quotient, naturally defined on the ribbon D ×Z E. The group G may be simply-laced or not, but for the modules considered in this paper (choices of E), all hyperroots have only one possible length. This general result was claimed in the last two slides of[START_REF] Ocneanu | Higher Coxeter systems[END_REF] and it can be explicitly checked in all the cases that we consider below. If G is not simply-laced, one should be careful not to use here the basis of simple coroots. It may be useful to enlarge these pictures, using an online version of the present paper. Some properties of this matrix and of its inverse are investigated in one section of[START_REF] Zuber | Contribution to Mathematical Foundations of Quantum Field Theory[END_REF], see also[START_REF] Coquereaux | Orders and dimensions for sl2 or sl3 module-categories and boundary conformal field theories on a torus[END_REF]. This parameter q is not related to the root of unity, called q, that appears in section 2.1. As Γ1( ) ⊂ Γ0( ), one can sometimes use modular forms (and bases of spaces of modular forms) twisted by Dirichlet characters on the congruence subgroup Γ1( ). Warning: A simple counting argument shows that the lattice of hyper-roots obtained by taking k = 0 for G = SU(n) and n > 3 cannot be identified with the usual root lattice of SU(()n).   . 1 . . . . . . . . . . . . . . . . . . . . 1 1 . . . . . . . . . . . . . . . . . 1 . . . 1 . . . . . . . . . . . . . . . . . . . . 1 . 1 . . . . . . . . . . . . . . . 1 . . . 1 . 1 . . . . . . . . . . . . . . . 1 . . . . . 1 . . . . . . . . . . . . . . . . . . . 1 . . 1 . . . . . . . . . . . . . 1 . . . . 1 . . 1 . . . . . . . . . . . . . 1 . . . . 1 . . 1 . . . . . . . . . . . . . 1 . . . . . . . 1 . . . . . . . . . . . . . . . . . . 1 . . . 1 . . . . . . . . . . . 1 . . . . . 1 . . . 1 . . . . . . . . . . . 1 . . . . . 1 . . . 1 . . . . . . . . . . . 1 . . . . . 1 . . . 1 . . . . . . . . . . . 1 . . . . . . . . . 1 . . . . . . . . . . . . . . . . . 1 . . . . . . . . . . . . . . 1 . . . . . . 1 . . . . . . . . . . . 
01772744
en
[ "spi.signal" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-01772744/file/ICIP2007.pdf
W Souidène email: [email protected] A Aïssa-El-Bey K Abed-Meraim A Beghdadi email: [email protected] BLIND IMAGE SEPARATION USING SPARSE REPRESENTATION Keywords: Separation, image restoration, sparse matrices This paper focuses on the blind image separation using their sparse representation in an appropriate transform domain. A new separation method is proposed that proceeds in two steps: (i) an image pretreatment step to transform the original sources into sparse images and to reduce the mixture matrix to an orthogonal transform (ii) and a separation step that exploits the transformed image sparsity via an p-norm based contrast function. A simple and efficient natural gradient technique is used for the optimization of the contrast function. The resulting algorithm is shown to outperform existing techniques in terms of separation quality and computational cost. INTRODUCTION Blind source separation (BSS) is an important research field in signal and image processing. In particular, separating linear mixtures of several 'independent' images has application in biomedical imaging [1-3], in cosmology and multispectral imaging [START_REF] Naceur | The contribution of the sources separation method in the decomposition of mixed pixels[END_REF][START_REF] Bijaoui | Blind source separation of multispectral astronomical images[END_REF], in polarimetric imaging [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF], etc. Recently, an important research activity has been observed for solving the BSS problem using a sparse representation of the source signals. Solution for the blind separation of image sources using sparsity include the wavelet-transform domain method in [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF] and the method in [START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF] using projection onto sparse dictionaries. In this work, we propose a new solution based on the transformed image sparsity. The new BSS algorithm is shown to be more efficient that other existing techniques in the literature and leads to improved separation quality with lower computational cost. NOTATIONS AND DATA MODEL We assume that N images f1, • • • , f N each of size (m f , n f ) are merged and M linear mixtures of these original images are observed. The latter mixtures can be modeled by the following linear system: g(m, n) = Af (m, n) + w(m, n) (1) where, f (m, n) = [f 1 (m, n), • • • , f N (m, n)] T is a N × 1 im- age source vector consisting of the stack of corresponding pixels of source images, w(m, n) = [w1(m, n), • • • , wM (m, n)] T is the M × 1 gaussian complex noise vector which affects each image mixture pixel, A is the M × N full column rank mixing matrix (i.e., M ≥ N ), g(m, n) = [g 1 (m, n), • • • , g M (m, n)] T is an M × 1 vector of mixture image pixels and the superscript T denotes the transpose operator. The purpose of blind image separation is to find a separating matrix, i.e. a N × M matrix B such that b f (m, n) = Bg(m, n ) is an estimate of original images. In practice, the separating matrix estimation is performed up to a permutation and a certain fixed scalar, i.e. B is a separating matrix iff: BA = P Λ ( 2 ) where P is a permutation matrix and Λ a non-singular diagonal matrix that represent the inherent ambiguities of the BSS problem. 
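For concreteness, a minimal numpy sketch of the mixing model (1) is given below. It is only an illustration of the data model, not code from the paper: the image sizes, the example matrix A and the function name are ours.

import numpy as np

def mix_images(sources, A, noise_std=0.0, rng=None):
    # Linear instantaneous mixing model of eq. (1): g(m,n) = A f(m,n) + w(m,n).
    # sources : (N, mf, nf) array of original images
    # A       : (M, N) full column rank mixing matrix, M >= N
    rng = np.random.default_rng() if rng is None else rng
    N, mf, nf = sources.shape
    f = sources.reshape(N, -1)                      # stack pixels: N x (mf*nf)
    g = A @ f                                       # mix every pixel vector
    if noise_std > 0:
        g = g + noise_std * rng.standard_normal(g.shape)
    return g.reshape(A.shape[0], mf, nf)

# toy usage: two 64x64 sources observed through two sensors
rng = np.random.default_rng(0)
f = rng.random((2, 64, 64))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
g = mix_images(f, A, noise_std=0.01, rng=rng)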
SEPARATION ALGORITHM As shown in [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF][START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF], exploiting the sparsity of some representations of the original images afford us to achieve the BSS problem. Indeed, the mixture destroys or 'reduces' the sparsity of the considered signals that is restored after source separation. Reversely, it is shown in [START_REF] Bronstein | Separation of reflections via sparse ICA[END_REF][START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF] that restoring (maximizing) the sparsity leads to the desired source separation. Based on this, we propose in the sequel a two-step BSS solution consisting in a linear pre-treatment that transforms the original sources into sparse signals followed by a BSS algorithm that minimizes the p norm of the transformed image mixtures using natural gradient technique. Image pre-treatment The algorithm proposed in this article is efficient for separating sparse sources. For some signals, one can assume that the spatial or temporal representation is naturally sparse, whereas for natural scenes, this assumptions falls down. We propose to make the image sparse by simply taking into account its Laplacian transform: F = ∇f = ∂ 2 f ∂x 2 + ∂ 2 f ∂y 2 , ( 3 ) or, in discrete form F (m, n) = f (m + 1, n) + f (m -1, n) + f (m, n + 1) +f (m, n -1) -4f (m, n) . Our motivation for choosing this transformation is two fold. First the Laplacian transform is a sparse representation of the image since it acts as an edge detector which provides a two-level image, the edges and the homogeneous background. Second, the Laplacian is a linear transformation. This latter property is 'critical' since the separation matrix estimated to separate the image mixtures is the same to separate the mixture of Laplacian images: G = ∂ 2 Af ∂x 2 + ∂ 2 Af ∂y 2 = AF ( 4 ) where G is the Laplacian transform of the mixtures. In the literature, some other linear transformations were proposed in order to make the image sparse, including the projection into a sparse dictionary [START_REF] Zibulevsky | Blind source separation by sparse decomposition in signal dictionary[END_REF]. In Figure 1, the original cameraman image is displayed as well as its Laplacian transform and their respective histograms that clearly show the sparsity of the latter. In the pre-treatment phase, we also propose an optional whitening step which aims to set the mixtures to the same energy level. Furthermore, this procedures reduces the number of parameters to be estimated. More precisely, the whitening step is applied to the Laplacian image mixtures before using our separation algorithm. The whitening is achieved by applying a N × M matrix Q to the Laplacian image mixtures in such a way Cov(QG) = I in the noiseless case, where Cov(•) stands for the covariance operator. As shown in [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], Q can be computed as the inverse square root of the noiseless covariance matrix of the Laplacian image mixtures (see [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] for more details). In the following, we apply our separation algorithm on the whitened data: G w (m, n) = QG(m, n). 
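A possible implementation of this pre-treatment is sketched below. It is ours, not the authors': the discrete 5-point Laplacian of each mixture is followed by a whitening matrix Q taken, in the noiseless square case (M = N), as the inverse square root of the covariance of the Laplacian mixtures; border handling is by edge replication (a detail the paper does not specify) and the noise compensation discussed in the cited reference is omitted.

import numpy as np

def laplacian(img):
    # Discrete 5-point Laplacian: F(m,n) = f(m+1,n)+f(m-1,n)+f(m,n+1)+f(m,n-1)-4f(m,n).
    # Borders are handled by edge replication (an assumption, not specified in the paper).
    p = np.pad(img, 1, mode="edge")
    return p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * img

def whiten(G):
    # Whitening matrix Q such that Cov(Q G) is close to identity (noiseless, M = N case).
    # G : (M, mf, nf) Laplacian-transformed mixtures.
    M = G.shape[0]
    X = G.reshape(M, -1)
    X = X - X.mean(axis=1, keepdims=True)
    R = np.cov(X)                                   # M x M covariance of the mixtures
    d, U = np.linalg.eigh(R)
    Q = U @ np.diag(1.0 / np.sqrt(d)) @ U.T         # inverse square root of R
    return Q, (Q @ X).reshape(G.shape)

# toy usage on stand-in mixtures
g = np.random.default_rng(1).random((2, 64, 64))
G = np.stack([laplacian(gi) for gi in g])           # Laplacian of each mixture
Q, Gw = whiten(G)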
Sparsity-based BSS algorithm
In this section, we propose an iterative algorithm for the separation of sparse signals, namely ISBS for Iterative Sparse Blind Separation. It is well known that the Laplacian image transform is characterized by its sparsity in the spatial domain [1,[START_REF] Zibulevsky | Sparse source separation with relative Newton method[END_REF]]. This property can be measured by the ℓp norm with 0 ≤ p < 2. More specifically, one can define the following sparsity-based contrast function,

G_p(F) = Σ_{i=1}^{N} [J_p(F_i)]^{1/p}   (5)

where

J_p(F_i) = (1/(m_f n_f)) Σ_{m=1}^{m_f} Σ_{n=1}^{n_f} |F_i(m, n)|^p .   (6)

The algorithm finds a separating matrix B such that

B = arg min_B {G_p(B)}   (7)

where G_p(B) ≜ G_p(H)   (8)

and H(m, n) ≜ B G_w(m, n) represents the Laplacian of the estimated image sources. The approach we choose to solve (7) is inspired from [START_REF] Pham | Blind separation of mixture of independent sources through a quasi-maximum likelihood approach[END_REF]. It is a block technique based on the processing of m_f n_f observed image pixels, and consists in searching for the minimum of the sample version of (5). Solutions are obtained iteratively in the form:

B^(k+1) = (I + ε^(k)) B^(k)   (9)
H^(k+1)(m, n) = (I + ε^(k)) H^(k)(m, n) .   (10)

At iteration k, the matrix ε^(k) is determined from a local linearization of G_p(B G_w). It is an approximate Newton technique with the benefit that ε^(k) can be computed very simply (no Hessian inversion) under the additional assumption that B^(k) is close to a separating matrix. The procedure is detailed in the following. At the (k+1)-th iteration, the criterion (6) can be developed as:

J_p(H_i^(k+1)) = (1/(m_f n_f)) Σ_{m=1}^{m_f} Σ_{n=1}^{n_f} | H_i^(k)(m, n) + Σ_{j=1}^{N} ε_ij^(k) H_j^(k)(m, n) |^p .

Under the assumption that B^(k) is close to a separating matrix, we have |ε_ij^(k)| ≪ 1 and thus a first-order approximation of J_p(H_i^(k+1)) is given by:

J_p(H_i^(k+1)) ≈ (1/(m_f n_f)) Σ_{m=1}^{m_f} Σ_{n=1}^{n_f} { |H_i^(k)(m, n)|^p + p Σ_{j=1}^{N} ε_ij^(k) |H_i^(k)(m, n)|^(p-1) sgn(H_i^(k)(m, n)) H_j^(k)(m, n) }   (11)

where sgn(•) denotes the sign operator. Using equation (11), equation (5) can be rewritten in the more compact form:

G_p(B^(k+1)) = G_p(B^(k)) + Tr(ε^(k) (R^(k))^T D^(k))   (12)

where Tr(•) is the matrix trace operator, the ij-th entry of matrix R^(k) is given by:

R_ij^(k) = (1/(m_f n_f)) Σ_{m=1}^{m_f} Σ_{n=1}^{n_f} |H_i^(k)(m, n)|^(p-1) sgn(H_i^(k)(m, n)) H_j^(k)(m, n) ,

and

D^(k) = [diag(R_11^(k), . . . , R_NN^(k))]^(1/p - 1) .   (13)

Using a gradient technique, ε^(k) can be written as:

ε^(k) = -µ D^(k) R^(k) ,   (14)

where µ > 0 is the descent step. Replacing (14) into (12) leads to

G_p(B^(k+1)) = G_p(B^(k)) - µ ||D^(k) R^(k)||^2 ,   (15)

so µ controls the decrement of the criterion. Now, to avoid convergence of the algorithm to the trivial solution B = 0, one normalizes the outputs of the separating matrix to unit power, i.e. ρ_{H_i}^(k+1) ≜ E[|H_i^(k+1)(m, n)|^2] = 1, ∀ i. Using a first-order approximation, this normalization leads to:

ε_ii^(k) = (1 - ρ_{H_i}^(k)) / (2 ρ_{H_i}^(k)) .   (16)

The final estimated separation matrix B = B^(K) Q is applied to the image mixtures g to obtain an estimate of the original images.
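The update equations (9)-(16) condense into a few lines of numpy. The sketch below is ours and deliberately simple: it uses p = 1, a fixed number of iterations and no safeguard against zero-power outputs.

import numpy as np

def isbs(Gw, p=1.0, mu=0.1, n_iter=100):
    # Iterative Sparse Blind Separation, following eqs. (9)-(16) above.
    # Gw : (N, mf, nf) whitened Laplacian mixtures; returns the separating matrix B,
    # to be combined with the whitening matrix as B_final = B @ Q.
    N = Gw.shape[0]
    X = Gw.reshape(N, -1)
    T = X.shape[1]
    B = np.eye(N)
    H = X.copy()                                    # H^(0) = B^(0) Gw
    for _ in range(n_iter):
        W = np.abs(H) ** (p - 1.0) * np.sign(H)     # |H_i|^(p-1) sgn(H_i)
        R = (W @ H.T) / T                           # sample version of R^(k)
        D = np.diag(np.diag(R) ** (1.0 / p - 1.0))  # eq. (13)
        eps = -mu * D @ R                           # eq. (14), off-diagonal terms
        rho = np.mean(np.abs(H) ** 2, axis=1)       # output powers
        eps[np.diag_indices(N)] = (1.0 - rho) / (2.0 * rho)   # eq. (16)
        B = (np.eye(N) + eps) @ B                   # eq. (9)
        H = (np.eye(N) + eps) @ H                   # eq. (10)
    return B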
K denotes here the number of iterations that can be either chosen a priori or given by a stopping criterion of the form B (k+1) -B (k) < δ where δ is a small threshold value. PERFORMANCE EVALUATION All simulations are carried on 256 × 256 parrot and cameraman images. The number of observed mixtures is M = 2. The algorithms are developed on MATLAB environment. Monte Carlo simulations are carried over 200 random realizations of the additive gaussian noise for variable Signal to Noise Ratios (SNR). In order to objectively evaluate the performance of the proposed algorithm, we consider two different criteria, the first one is the Interference to Signal Ratio (ISR) criterion [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] defined as: ISR N X i=1 N X j =i E `|(BA) ij | 2 ´ρj E (|(BA) ii | 2 ) ρ i ( 17 ) where ρ i = E(|f i (m, n)| 2 ) is the i th source power. The second one is an objective image quality measure inspired from the Human Visual System (HVS) properties and developed in [START_REF] Beghdadi | A new image distortion measure based on wavelet decomposition[END_REF]. It is called P SN R -W AV for Peak Signal to Noise Ratio based on Wavelet decomposition. The separation result of the proposed algorithm is depicted on Fig. 2 where we represent the two original images (f 1 , f 2 ), the mixtures (g 1 , g 2 ) and the recovered ones ( b f 1 , b f 2 ) by the proposed algorithm in the noiseless case. In Fig. 3, we compare the performance of the proposed algorithm to the Relative Newton algorithm developed by Zibulevsky et al. in [START_REF] Zibulevsky | Sparse source separation with relative Newton method[END_REF] where the case of sparse sources is considered. We plot the residual interference between separated images (ISR) versus the SNR. It is clearly shown that our algorithm (ISBS) performs better in terms of ISR especially for low SNRs. We plot on Fig. 4 the objective distortion measure between the original and separated images versus the SNR. One can observe that, for each image, we reach the same conclusion for the P SN R -W AV as for the ISR. In Fig. 5, we represent the evolution of the ISR as a function of the iteration number. A fast convergence rate is observed. Moreover, the complexity of the proposed algorithm is equal to 2N 2 m f n f + O(N 2 ) flops per iteration whereas the complexity of the Relative Newton algorithm in [START_REF] Zibulevsky | Sparse source separation with relative Newton method[END_REF] is 2N 4 + N 3 m f n f + N 6 /6. CONCLUSION This article deals with a simple and efficient two-step algorithm of blind image separation. The proposed method consists in a sparsification of the natural observed mixtures followed by a blind separation of the original images. The sparsification is simply the Laplacian transform and has a low computational cost. The separation is performed using an iterative algorithm based on the minimizing of [1] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing, Wiley & Sons, Ltd., UK, 2003. [2] N. Zhang, J. LU, and T. Yahgi, "Nonlinear blind source separa- Fig. 1 . 1 Fig. 1. (a) Original image, (b) Laplacian transform, (c) Original image histogram, (d) Sparse Laplacian transform histogram Fig. 2 . 2 Fig. 2. (a)-(b) original images, (c)-(d) M = 2 observed mixtures, (e)-(f) restored images using ISBS algorithm. Fig. 3 . 3 Fig. 3. Interference to Signal Ratio (ISR) versus SNR Fig. 4 .Fig. 5 . 45 Fig. 4. Objective image quality measure (P SN R -W AV )
01772752
en
[ "spi.signal" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-01772752/file/ICASSP2007.pdf
Abdeldjalil Aïssa-El-Bey Karim Abed-Meraim Yves Grenier email: [email protected] UNDERDETERMINED BLIND SEPARATION OF AUDIO SOURCES FROM THE TIME-FREQUENCY REPRESENTATION OF THEIR CONVOLUTIVE MIXTURES Keywords: Separation, deconvolution, time-frequency analysis, identification This paper considers the blind separation of nonstationary sources in the underdetermined convolutive mixture case. We introduce two methods based on the sparsity assumption of the sources in the timefrequency (TF) domain. The first one assumes that the sources are disjoint in the TF domain; i.e. there is at most one source signal present at a given point in the TF domain. In the second method, we relax this assumption by allowing the sources to be TF-nondisjoint to a certain extent. In particular, the number of sources present (active) at a TF point should be strictly less than the number of sensors. In that case, the separation can be achieved thanks to subspace projection which allows us to identify the active sources and to estimate their corresponding time-frequency distribution (TFD) values. INTRODUCTION The blind source separation of more sources than sensors (referred to as UBSS for underdetermined blind source separation) is still a challenging problem especially in the convolutive mixtures case. In the instantaneous mixture case, some methods exploiting the sparseness of the sources in certain transform domain have been proposed for UBSS [START_REF] Bofill | Underdetermined blind source separation using sparse representations[END_REF][START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF][START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF][START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF]. These methods proceed 'roughly' as follows: The mixtures are first transformed to an appropriate representation domain; the transformed sources are then estimated using their sparseness, and finally one recovers their time waveforms by source synthesis (for more information, see the recent survey work [START_REF] O'grady | Survey of sparse and nonsparse methods in source separation[END_REF]). UBSS methods for nonstationary sources have been proposed, given that these sources are sparse in the time-frequency (TF) domain [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF][START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. The main assumption used in these methods is that the sources are TF-disjoint. In other words, there is at most one source present at any point in the TF domain. This assumption is rather restrictive, though the methods have also showed that they worked well under a quasi sparseness condition, i.e. sources are TF-almost-disjoint. In this paper we focus on the UBSS in convolutive mixture case and target the relaxation of the TF-disjointness condition by allowing the sources to be nondisjoint in the TF domain; that is, multiple sources are possibly present at any point in the TF domain. 
This case has been considered in [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF] for the separation of instantaneous mixtures, in [START_REF] Rosca | Generalized sparse signal mixing model and application to noisy blind source separation[END_REF] for the deconvolution of single-path channels with non-zero delays and in [START_REF] Araki | Blind separation of more speech than sensors with less distortion by combining sparseness and ICA[END_REF] where binary TF-masking and ICA technique are jointly used. The main contribution of this paper consists in two new algorithms (TF-CUBSS for Time-frequency convolutive underdetermined blind source separation) for solving the UBSS in the TF domain; the first one uses vector clustering while the other uses subspace projection. PROBLEM FORMULATION Data model Let s 1 (t), . . . , s N (t) be the desired sources to be recovered from the convolutive mixtures x 1 (t), . . . , x M (t) given by: x(t) = K k=0 H(k)s(t -k) + w(t) (1) where s(t) = [s1(t), . . . , sN (t)] T is the source vector, x(t) = [x1(t), . . . , xM (t)] T is the mixture vector (with M < N ), w(t) is the observation noise, and H(k) are M × N matrices for k ∈ [0, K] representing the impulse response coefficients of the channel that satisfies: Assumption 1 The channel is such that each column vector of H(z) def = K k=0 H(k)z -k def = [h1(z), . . . , hN (z)] is irreducible, i.e. the entries of hi(z) denoted hji(z), j = 1, . . . , M , have no common zeros ∀i. Moreover, any M column vectors of H(z) form a polynomial matrix H(z) that it full rank over the unit-circle, i.e. rank( H(f )) = M ∀f . TF conditions on the sources In order to deal with UBSS, one often seeks for a sparse representation of the sources [START_REF] Bofill | Underdetermined blind source separation using sparse representations[END_REF]. In other words, if the sources can be sparsely represented in some domain, then their separation can be carried out in that domain by exploiting their sparseness. TF-disjoint sources Recently, there have been several UBSS methods, notably those in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] and [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF], in which the TF domain has been chosen to be the underlaying sparse domain. These two papers have based their solutions on the assumption that the sources are disjoint in the TF domain. Mathematically, if Ω 1 and Ω 2 are the TF supports of two sources s 1 (t) and s 2 (t) then the sources are said TF-disjoint if Ω 1 ∩ Ω 2 = ∅. However, this is a rather strict assumption. A more practical assumption is that the sources are almost-disjoint in the TF domain [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], allowing some small overlapping in the TF domain, for which the above two methods also worked. TF-nondisjoint sources In this paper, we want to relax the TF-disjoint condition by allowing the sources to be nondisjoint in the TF domain. Therefore, we will allow the sources to be nondisjoint in the TF domain; that is, multiple sources are allowed to be present at any point in the TF domain. However, instead of being inevitably nondisjoint, we limit ourselves by making the following constraint: Assumption 2 The number of active sources (i.e. 
sources that overlap) at any TF point is strictly less than the number of sensors. In other words, for the configuration of M sensors, there exists at most (M -1) overlapping sources at any point in the TF domain. For the special case when M = 2, Assumption 2 reduces to the disjoint condition. TF-CUBSS ALGORITHM In order to solve the UBSS problem in the convolutive case, we propose to identify first the impulse response of the channels. This problem in overdetermined case is very difficult and becomes almost impossible in the underdetermined case without side information on the considered sources. In this work and similarly to [START_REF] Huang | A blind channel identificationbased two-stage approach to separation and dereverberation of speech signals in a reverberant environment[END_REF], we exploit the sparseness property of the audio sources by assuming that from time to time only one source is present. In other words, we consider the following assumption: Assumption 3 There exists, periodically, time intervals where only one source is present in the mixture. This occurs for all source signals of the considered mixtures. To detect these time intervals, we propose to use information criteria based testing for the estimation of the number of sources present in the signal (see Section 3.1 for more details). Channel estimation Based on assumption 3, we propose here to apply SIMO (Single Input Multiple Output) based techniques to blindly estimate the channel impulse response. Regarding the problem at hand, we have to solve three different problems: first, we have to select time intervals where only one source signal is effectively present; then, for each selected time interval one should apply an appropriate blind SIMO identification technique to estimate the channel parameters; finally, the way we proceed, the same channel may be estimated several times and hence one has to group together (cluster) the channel estimates into N classes corresponding to the N source channels. Source number estimation Let define the spatio-temporal vector: x d (t) = [x T (t), . . . , x T (t-d+1)] T = N k=1 H k s k (t)+w d (t), (2) where H k are block-Sylvester matrices of size dM × (d + K), s k (t) def = [s k (t), . . . , s k (t -K -d + 1) ] T and d is a chosen processing window size. Under the data model assumption and for large window sizes (see [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] for more details), matrices H k are full column rank. Hence, in the noiseless case, the rank of the data covariance matrix R def = E[x d (t)x H d (t)] is equal to min(p(d + K), dM ) where p is the number of sources present in the considered time interval over which the covariance matrix is estimated. In particular, for p = 1, one has the minimum rank value equal to (d + K). Therefore, our approach consists in estimating the rank of the sample averaged covariance matrix R over several time slots (intervals) and select those corresponding to the smallest rank value r = d+K. The estimation of the rank value is done here by Akaike's criterion [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] according to: r = arg min k       -2 log      M d i=k+1 λ 1/(M d-k) i 1 M d-k M d i=k+1 λ i      (M d-k)Ts + 2k(2M d -k)       , (3) where λ 1 ≥ . . . ≥ λ M d represent the eigenvalues of R and T s is the time slot size. Note that it is not necessary at this stage, to know exactly the channel degree K as long as d > K (i.e. 
an over-estimation of the channel degree is sufficient) in which case the presence of one signal source is characterized by: d < r < 2d . Blind channel identification To perform the blind channel identification, we have used in this paper the Cross-Relation (CR) technique described in [START_REF] Xu | A least-squares approach to blind channel identification[END_REF]. This method is used on the time slots, where only one source signal is active. The latter are selected using the previously described Akaike's criterion. Note that there exist an improved, but more expensive, version of the CR method exploiting the quasi-sparse nature of acoustic impulse response [START_REF] Ahmad | Proportionate frequency domain adaptive algorithms for blind channel identification[END_REF] which can be used as well at this stage. Clustering of channel vector estimates The first step of our channel estimation method consists in detecting the time slots where only one single source signal is 'effectively' present. However, the same source signal s i may be present in several time intervals leading to several estimates of the same channel vector h i def = = [h 1i (0) . . . h M i (0) . . . h 1i (K) . . . h M i (K)] T . We end up, finally, with several estimates of each source channel that we need to group together into N classes. This is done by clustering the estimated vectors using k-means algorithm [START_REF] Frank | The data analysis handbook[END_REF]. The i th channel estimate is evaluated as the centroid of the i th class. UBSS algorithm with TF-disjoint assumption In this section, we propose a new cluster-based TF-CUBSS algorithm using the STFT (Short Time Fourier Transform) for convolutive mixture case. After transformation into the TF domain using the STFT, the model in (1) becomes (in the noiseless case): Sx(t, f ) = H(f )Ss(t, f ), (4) where Sx(t, f ) is the mixture STFT vector, Ss(t, f ) is the source STFT vector and H(f ) = [h 1 (f ) . . . h N (f )] is the channel Fourier transform matrix. Under the assumption that all sources are disjoint in the TF domain, (4) reduces to S x (t, f ) = h i (f )S s i (t, f ), ∀(t, f ) ∈ Ω i , ∀i ∈ N , ( 5 ) where N = {1, . . . , N }. Consequently, two TF points (t 1 , f 1 ) and (t2, f2) belonging to the same region Ωi (i.e. corresponding to the source signal si) are 'associated' with the same channel hi. This latter observation is used next to cluster together the TF points of a given source signal. More precisely the algorithm proceeds as follows: First, we compute the STFT of the mixtures according to: S x i (t, f ) = ∞ -∞ x i (τ )w(τ -t)e -j2πf τ dτ, i = 1, . . . , M, S x (t, f ) = [S x 1 (t, f ), . . . , S x M (t, f )] T . Then, we apply a noise thresholding procedure which mitigates the noise effect but also reduces the computational cost as only the selected TF points are further treated by our algorithm. In particular, for each frequency-slice (t, fp) of the TFD representation, we apply the following criterion for all the time points t k belonging to this frequency-slice If Sx(t k , fp) max t { S x (t, f p ) } > , then keep (t k , f p ), (7) where is a small threshold (typically, = 0.01). Then, the set of all selected points, Ω, is expressed by Ω = N i=1 Ω i , where Ω i is the TF support of source s i . Note that, the effects of spreading the noise energy while localizing the source energy in the time-frequency domain amounts to increasing the robustness of the proposed method with respect to noise. 
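As an illustration of this selection step (ours, not taken from the paper), the sketch below computes the mixture STFTs with scipy and keeps, in each frequency slice, only the time points whose norm exceeds a small fraction of the slice maximum, i.e. the threshold of (7), typically 0.01; array shapes and names are assumptions.

import numpy as np
from scipy.signal import stft

def select_tf_points(x, eps=0.01, nperseg=256):
    # STFT of the mixtures and noise thresholding of the TF points, eqs. (6)-(7).
    # x : (M, T) array of mixture signals.
    # Returns Sx of shape (M, F, K) and a boolean mask (F, K) of retained points (Omega).
    _, _, Sx = stft(x, nperseg=nperseg)
    norms = np.linalg.norm(Sx, axis=0)              # ||Sx(t, f)|| at every TF point
    slice_max = norms.max(axis=1, keepdims=True)    # max over time in each frequency slice
    mask = norms > eps * slice_max                  # keep (t_k, f_p) satisfying (7)
    return Sx, mask

# toy usage: M = 3 mixtures, about 1 s of signal sampled at 8 kHz
x = np.random.default_rng(0).standard_normal((3, 8000))
Sx, omega = select_tf_points(x)
print(Sx.shape, omega.mean())                       # fraction of TF points retained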
Hence, by equation ( 7), we would keep only time-frequency points where the signal energy is non-negligible, the other time-frequency points are rejected, i.e. not further processed, since considered to represent noise contribution only. Also, due to the noise energy spreading, the contribution of the noise in the source time-frequency points is relatively, negligible at least for moderate and high SNRs. Finally,the clustering procedure can be done as follows: For each TF point, we obtain the spatial direction vectors by: v(t, f ) = Sx(t, f ) S x (t, f ) , (t, f ) ∈ Ω, (8) and force them, without loss of generality, to have the first entry real and positive.These vectors are clustered into N classes {C i | i ∈ N } by minimizing the criterion: v(t, f ) ∈ Ci ⇐⇒ i = arg min k v(t, f ) - h k (f )e -jθ k h k (f ) 2 (9) where h k (f ) is the Fourier Transform of the k th channel vector estimate and θ k is the phase argument of h 1k (f ) (this is to force the first entry to be real positive as for v(t, f )). The collection of all points, whose vectors belong to the class C i , form the TF support Ω i of source s i . Therefore, we can estimate the STFT of each source s i by: S s i (t, f ) = h H i (f ) h i (f ) 2 S x (t, f ), ∀ (t, f ) ∈ Ω i , 0, otherwise, (10) since, from (5), we have h H i (f ) h i (f ) 2 Sx(t, f ) = h H i (f )h i (f ) h i (f ) 2 Ss i (t, f ) ≈ Ss i (t, f ), ∀ (t, f ) ∈ Ω i . UBSS algorithm with TF-nondisjoint assumption We have seen the cluster-based TF-CUBSS method, using the STFT. This method relies on the assumption that the sources are TF-disjoint, which led to the TF-transformed structure in [START_REF] O'grady | Survey of sparse and nonsparse methods in source separation[END_REF]. The latter is no longer valid, when the sources are nondisjoint in the TF domain. Under the TF-nondisjointness condition, stated in Assumption 2, we propose in this section an alternative method using subspace projection. Recall that the first two steps of the cluster-based quadratic TF-CUBSS algorithm do not rely on the assumption of TF-disjoint sources. Therefore, we can reuse these steps to obtain the channel estimates and all the TF points of the sources, i.e. Ω. Under the TF-nondisjointness condition, consider a TF point (t, f ) ∈ Ω at which there are J < M sources1 sα 1 (t), . . . , sα J (t) present, where α 1 , . . . , α J ∈ N denote the indices of the active sources at (t, f ). Our goal is to identify the sources that are present at (t, f ), i.e. α 1 , . . . , α J , and to estimate the STFT of each of these contributing sources. We define the following: s = [s α 1 (t), . . . , s α J (t)] T , (11a) Hα(f ) = [hα 1 (f ), . . . , hα J (f )]. (11b) Then, ( 4) is reduced to the following S x (t, f ) = Hα (f )Ss(t, f ). (12) Let Hβ (f ) = [h β 1 (f ), . . . , h β J (f )] and Q β (f ) be the orthogonal projection matrix onto the noise subspace of Hβ (f ) expressed by: Q β (f ) = I -Hβ (f ) HH β (f ) Hβ (f ) -1 HH β (f ). (13) We have the following observation: Q β (f )h i (f ) = 0, i ∈ {β 1 , . . . , β J } Q β (f )h i (f ) = 0, i ∈ N \{β 1 , . . . , β J } . ( 14 ) Consequently, as Sx(t, f ) ∈ Range{ Hα(f )}, we have Q β (f )S x (t, f ) = 0, if {β 1 , . . . , β J } = {α 1 , . . . , α J } Q β (f )Sx(t, f ) = 0, otherwise . (15) Since H(f ) has already been estimated by the method presented in Section 3.1, then this observation gives us the criterion to detect the indices α1, . . . , αJ and hence, the contributing sources at the considered TF point (t, f ). 
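To make the observation (13)-(15) concrete, here is a small self-contained numpy check (ours, not from the paper): with M = 3 sensors, N = 4 sources and J = 2 active sources at a given TF point, the noise-subspace projector of the tested sub-channel matrix annihilates the mixture STFT value only when the tested index set coincides with the truly active one. The practical, noise-robust selection described next relies on the same projector.

import numpy as np
from itertools import combinations

def noise_projector(H_beta):
    # Orthogonal projector onto the noise subspace of H_beta, eq. (13).
    # For a full column rank H_beta, I - H_beta pinv(H_beta) equals
    # I - H_beta (H_beta^H H_beta)^{-1} H_beta^H.
    return np.eye(H_beta.shape[0]) - H_beta @ np.linalg.pinv(H_beta)

# toy check of (14)-(15): M = 3 sensors, N = 4 sources, J = 2 active at this TF point
rng = np.random.default_rng(0)
M, N, J = 3, 4, 2
Hf = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))   # channel at one frequency
active = [0, 2]                                     # indices of the truly active sources
Ss = rng.standard_normal(J) + 1j * rng.standard_normal(J)
Sx = Hf[:, active] @ Ss                             # mixture STFT value, eq. (12)

for beta in combinations(range(N), J):
    Q = noise_projector(Hf[:, list(beta)])
    print(beta, round(float(np.linalg.norm(Q @ Sx)), 6))   # ~0 only when beta matches 'active'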
In practice, to take into account noise, one detects the column vectors of Hα(f ) by minimizing: {α 1 , . . . , α J } = arg min β 1 ,...,β J { Q β (f )S x (t, f ) } . ( 16 ) Next, TFD values of the J sources at the considered TF point are estimated by: Ss(t, f ) ≈ H# α (f )S x (t, f ), (17) where the superscript (•) # represents the Moore-Penrose's pseudoinversion operator. In the simulation, the optimization problem of ( 16) is solved using exhaustive search. This is computationally tractable for small vector array sizes but would be prohibitive if M is very large. SIMULATION RESULTS In the simulations, we have considered an array of M = 3 sensors, that receives signals from N = 4 independent speech sources. The filter coefficients are chosen randomly and the channel order is K = 6. The sample size is T = 8192 samples (corresponding approximately to 1 second recording of speech signals sampled at 8 KHz). The separation quality is measured by the normalized mean squares estimation errors (NMSE) of the sources evaluated over N r = 200 Monte-Carlo runs and defined as: NMSE i def = 1 N r Nr r=1 min α α si,r -si 2 s i 2 (18) NMSEi = 1 Nr N r r=1 1 - s i,r s H i si,r si 2 (19) NMSE = 1 N N i=1 NMSEi . ( 20 ) where s i def = [s i (0), . . . , s i (T -1)], s i,r ( defined similarly) represents the r th estimate of source s i and α is a scalar factor that compensates for the scale indeterminacy of the BSS problem. In Fig. 1, we compare the separation performance obtained by the subspacebased algorithm with J = 2 and the cluster-based algorithm. It is observed that subspace-based algorithm provides much better separation results than those obtained by the cluster-based algorithm. This is mainly due to the high occurrence of overlapping sources in the TF domain for this type of signals so that the 'TF-disjointness' assumption used by the TF-CUBSS algorithm is poorly satisfied. The plot in Fig. 2 presents the separation performance of the sub- space method when using the exact matrix H compared to that obtained with the proposed estimate H. The observed performance loss is due to the channel estimation error which is relatively high for low SNRs and becomes negligible for high SNRs. CONCLUSION This paper introduces new methods for the UBSS of TF-disjoint and TF-nondisjoint nonstationary sources in the convolutive mixture case using their time-frequency representations. The first proposed method has the advantage of simplicity while the second uses a weaker assumption on the source 'sparseness', i.e. the sources are not necessarily TF-disjoint, and proposes an explicit treatment of the overlapping points using subspace projection, leading to significant performance improvements. Fig. 1 . 1 Fig. 1. Comparison between subspace-based and cluster-based TF-CUBSS algorithms: normalized MSE (NMSE) versus SNR for 4 speech sources and 3 sensors. Fig. 2 . 2 Fig. 2. Comparison, for the subspace-based TF-CUBSS algorithm, when the mixing channel H is known or unknown: NMSE of the source estimates. To avoid the difficult problem of estimating the number of active sources at each TF point, we have chosen in this paper to set J to a fixed value in the range 1 < J < M .
01772758
en
[ "sde.be", "sdv.mp.bac", "sdu.stu.oc" ]
2024/03/05 22:32:18
2018
https://hal.sorbonne-universite.fr/hal-01772758/file/Grebert%20et%20al_Revised%20manuscript_for_HAL.pdf
Théophile Grébert Hugo Doré Frédéric Partensky Gregory K Farrant Emmanuel S Boss Marc Picheral Lionel Guidi Stéphane Pesant David J Scanlan Patrick Wincker Silvia G Acinas David M Kehoe Laurence Garczarek email: [email protected]. Light color acclimation: a key process in the global ocean distribution of Synechococcus cyanobacteria Keywords: marine cyanobacteria, metagenomics, light quality, phycobilisome, Tara Oceans Marine Synechococcus cyanobacteria are major contributors to global oceanic primary production and exhibit a unique diversity of photosynthetic pigments, allowing them to exploit a wide range of light niches. However, the relationship between pigment content and niche partitioning has remained largely undetermined due to the lack of a single-genetic marker resolving all pigment types (PTs). Here, we developed and employed a novel and robust method based on three distinct marker genes (cpcBA, mpeBA and mpeW) to estimate the relative abundance of all known Synechococcus PTs from metagenomes. Analysis of the Tara Oceans dataset allowed us, for the first time, to reveal the global distribution of Synechococcus PTs and to define their environmental niches. Green-light specialists (PT 3a) dominated in warm, green equatorial waters, whereas blue-light specialists (PT 3c) were particularly abundant in oligotrophic areas. Type IV chromatic acclimaters (CA4-A/B), which are able to dynamically modify their light absorption properties to maximally absorb green or blue light, were unexpectedly the most abundant PT in our dataset and predominated at depth and high latitudes. We also identified populations in which CA4 might be nonfunctional due to the lack of specific CA4 genes, notably in warm high-nutrient low-chlorophyll areas. Major ecotypes within clades I-IV and CRD1 were preferentially associated with a particular PT, while others exhibited a wide range of PTs. Altogether, this study provides important insights into the ecology of Synechococcus and highlights the complex interactions between vertical phylogeny, pigmentation and environmental parameters that shape Synechococcus community structure and evolution. Significance Statement Understanding the functional diversity of specific microbial groups at the global scale is critical yet poorly developed. By combining the considerable knowledge accumulated through recent years on the molecular bases of photosynthetic pigment diversity in marine Synechococcus, a major phytoplanktonic organism, with the wealth of metagenomic data provided by the Tara Oceans expedition, we have been able to reliably quantify all known pigment types along its transect and provide the first global distribution map. Unexpectedly, cells able to dynamically change their pigment content to match the ambient light color were ubiquitous and predominated in many environments. Altogether, our results unveiled the role of adaptation to light quality on niche partitioning in a key primary producer. Introduction Marine Synechococcus is the second most abundant phytoplankton group in the world's oceans and constitutes a major contributor to global primary production and carbon cycling [START_REF] Guidi | Plankton networks driving carbon export in the oligotrophic ocean[END_REF][START_REF] Flombaum | Present and future global distributions of the marine Cyanobacteria Prochlorococcus and Synechococcus[END_REF]. 
This genus displays a wide genetic diversity and several studies have shown that among the ~20 clades defined based on various genetic markers, five (clades I-IV and CRD1) predominate in situ and can be broadly associated with distinct sets of physico-chemical parameters [START_REF] Zwirglmaier | Global phylogeography of marine Synechococcus and Prochlorococcus reveals a distinct partitioning of lineages among oceanic biomes[END_REF][START_REF] Mazard | Multi-locus sequence analysis, taxonomic resolution and biogeography of marine Synechococcus[END_REF][START_REF] Sohm | Co-occurring Synechococcus ecotypes occupy four major oceanic regimes defined by temperature, macronutrients and iron[END_REF]. In a recent study, we further defined Ecologically Significant Taxonomic Units (ESTUs), i.e. organisms belonging to the same clade and co-occurring in the field, and highlighted that the three main parameters affecting the in situ distribution of these ESTUs were temperature and availability of iron and phosphorus [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF]. Yet, marine Synechococcus also display a wide pigment diversity, suggesting that light could also influence their ecological distribution, both qualitatively and quantitatively [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Alberte | Novel phycoerythrins in marine Synechococcus spp[END_REF]. This pigment diversity comes from differences in the composition of their main light-harvesting antennae, called phycobilisomes (PBS; 7-9). These water-soluble macromolecular complexes consist of a central core anchoring at least six radiating rods made of several distinct phycobiliproteins, i.e. proteins to which specific enzymes (phycobilin lyases) covalently attach chromophores called phycobilins [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Sidler | Phycobilisome and phycobiliprotein structures[END_REF]. Although the PBS core is conserved in all marine Synechococcus, rods have a very variable composition, and three main pigment types (PTs) are usually distinguished (Fig. S1; [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF]. In PT 1, PBS rods are solely made of phycocyanin (PC, encoded by the cpcBA operon) and bear the redlight absorbing phycocyanobilin (PCB; Amax = 620 nm) as the sole chromophore. In PT 2, rods are made of PC and phycoerythrin I (PE-I, encoded by cpeBA) and attach both PCB and the green-light absorbing phycoerythrobilin (PEB; Amax = 550 nm). All other marine Synechococcus belong to PT 3 and have rods made of PC, PE-I and PE-II (encoded by mpeBA) that bind PCB, PEB and the blue-light absorbing phycourobilin (PUB; Amax = 495 nm; Fig. S1). Several subtypes can be defined within PT 3, based on the fluorescence excitation ratio at 495 nm and 545 nm (hereafter Ex 495:545 ; Fig. S1), a proxy for the PUB:PEB ratio. This ratio is low (Ex495:545 < 0.6) in subtype 3a (green light specialists), intermediate in subtype 3b (0.6 ≤ Ex495:545 < 1.6) and high (Ex495:545 ≥ 1.6) in subtype 3c (blue light specialists ; 7, 11). 
Additionally, strains of subtype 3d are able to change their PUB:PEB ratio depending on ambient light color, a process called type IV chromatic acclimation (hereafter CA4), allowing them to maximally absorb blue or green light [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF][START_REF] Palenik | Chromatic adaptation in marine Synechococcus strains[END_REF][START_REF] Everroad | Biochemical cases of type IV chromatic adaptation in marine Synechococcus spp[END_REF][START_REF] Shukla | Phycoerythrin-specific bilin lyase-isomerase controls blue-green chromatic acclimation in marine Synechococcus[END_REF]. Comparative genomic analyses showed that genes involved in the synthesis and regulation of PBS rods are gathered into a dedicated genomic region, the content and organization of which correspond to the different PTs [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF]. Similarly, chromatic acclimation has been correlated with the presence of a small specific genomic island (CA4 genomic island) that exists in two distinct configurations (CA4-A and -B; 11). Both contain two regulators (fciA and fciB) and a phycobilin lyase (mpeZ in CA4-A or mpeW in CA4-B), thus defining two distinct CA4 genotypes: 3dA and 3dB [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF][START_REF] Shukla | Phycoerythrin-specific bilin lyase-isomerase controls blue-green chromatic acclimation in marine Synechococcus[END_REF][START_REF] Sanfilippo | Self-regulating genomic island encoding tandem regulators confers chromatic acclimation to marine Synechococcus[END_REF]. Finally, some strains possess a complete or partial CA4 genomic island but are not able to perform CA4, displaying a fixed Ex495:545 corresponding to 3a, 3b or 3c phenotypes [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF]. As there is no correspondence between pigmentation and core genome phylogeny [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Toledo | Swimming marine Synechococcus strains with widely different photosynthetic pigment ratios form a monophyletic group[END_REF][START_REF] Humily | Development of a targeted metagenomic approach to study a genomic region involved in light harvesting in marine Synechococcus[END_REF], deciphering the relative abundance and niche partitioning of Synechococcus PTs in the environment requires specific approaches. 
In the past 30 years, studies have been based either on i) proxies of the PUB:PEB ratio as assessed by flow cytometry [START_REF] Jiang | Temporal and spatial variations of abundance of phycocyanin-and phycoerythrin-rich Synechococcus in Pearl River Estuary and adjacent coastal area[END_REF][START_REF] Olson | Pigments, size, and distributions of Synechococcus in the North Atlantic and Pacific Oceans[END_REF][START_REF] Sherry | Phycoerythrin-containing picocyanobacteria in the Arabian Sea in february 1995: diel patterns, spatial variability, and growth rates[END_REF], fluorescence excitation spectra [START_REF] Lantoine | Spatial and seasonal variations in abundance and spectral characteristics of phycoerythrins in the tropical northeastern Atlantic Ocean[END_REF][START_REF] Neveux | Phycoerythrins in the southern tropical and equatorial Pacific Ocean: evidence for new cyanobacterial types[END_REF][START_REF] Campbell | Response of microbial community structure to environmental forcing in the Arabian Sea[END_REF][START_REF] Wood | Fluorescence-based characterization of phycoerythrincontaining cyanobacterial communities in the Arabian Sea during the Northeast and early Southwest Monsoon (1994-1995)[END_REF][START_REF] Yona | Distribution of Synechococcus and its phycoerythrin pigment in relation to environmental factors in the East Sea, Korea[END_REF][START_REF] Hoge | Spatial variability of oceanic phycoerythrin spectral types derived from airborne laser-induced fluorescence emissions[END_REF][START_REF] Wood | Water column transparency and the distribution of spectrally distinct forms of phycoerythrin-containing organisms[END_REF], epifluorescence microscopy [START_REF] Campbell | Identification of Synechococcus spp. in the Sargasso Sea by immunofluorescence and fluorescence excitation spectroscopy performed on individual cells[END_REF], or ii) phylogenetic analyses of cpcBA or cpeBA [START_REF] Humily | Development of a targeted metagenomic approach to study a genomic region involved in light harvesting in marine Synechococcus[END_REF][START_REF] Xia | Phylogeography and pigment type diversity of Synechococcus cyanobacteria in surface waters of the northwestern pacific ocean[END_REF][START_REF] Xia | Variation of Synechococcus pigment genetic diversity along two turbidity gradients in the China Seas[END_REF][START_REF] Xia | Cooccurrence of phycocyanin-and phycoerythrin-rich Synechococcus in subtropical estuarine and coastal waters of Hong Kong: PE-rich and PC-rich Synechococcus in subtropical coastal waters[END_REF](32)[START_REF] Chung | Changes in the Synechococcus assemblage composition at the surface of the East China Sea due to flooding of the Changjiang river[END_REF][START_REF] Haverkamp | Diversity and phylogeny of Baltic Sea picocyanobacteria inferred from their ITS and phycobiliprotein operons[END_REF]. 
These studies showed that PT 1 is restricted to and dominates in low salinity surface waters and/or estuaries, which are characterized by a high turbidity resulting in a red wavelengths-dominated light field [START_REF] Jiang | Temporal and spatial variations of abundance of phycocyanin-and phycoerythrin-rich Synechococcus in Pearl River Estuary and adjacent coastal area[END_REF][START_REF] Neveux | Phycoerythrins in the southern tropical and equatorial Pacific Ocean: evidence for new cyanobacterial types[END_REF][START_REF] Xia | Cooccurrence of phycocyanin-and phycoerythrin-rich Synechococcus in subtropical estuarine and coastal waters of Hong Kong: PE-rich and PC-rich Synechococcus in subtropical coastal waters[END_REF](32)[START_REF] Chung | Changes in the Synechococcus assemblage composition at the surface of the East China Sea due to flooding of the Changjiang river[END_REF][START_REF] Haverkamp | Diversity and phylogeny of Baltic Sea picocyanobacteria inferred from their ITS and phycobiliprotein operons[END_REF][START_REF] Stomp | Colourful coexistence of red and green picocyanobacteria in lakes and seas[END_REF][START_REF] Hunter-Cevera | Diversity of Synechococcus at the Martha's Vineyard Coastal Observatory: insights from culture isolations, clone libraries, and flow cytometry[END_REF][START_REF] Fuller | Clade-specific 16S ribosomal DNA oligonucleotides reveal the predominance of a single marine Synechococcus clade throughout a stratified water column in the red sea[END_REF][START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF], whereas PT 2 is found in coastal shelf waters or in the transition zones between brackish and oceanic environments with intermediate optical properties [START_REF] Jiang | Temporal and spatial variations of abundance of phycocyanin-and phycoerythrin-rich Synechococcus in Pearl River Estuary and adjacent coastal area[END_REF][START_REF] Wood | Water column transparency and the distribution of spectrally distinct forms of phycoerythrin-containing organisms[END_REF][START_REF] Haverkamp | Diversity and phylogeny of Baltic Sea picocyanobacteria inferred from their ITS and phycobiliprotein operons[END_REF][START_REF] Hunter-Cevera | Diversity of Synechococcus at the Martha's Vineyard Coastal Observatory: insights from culture isolations, clone libraries, and flow cytometry[END_REF][START_REF] Fuller | Clade-specific 16S ribosomal DNA oligonucleotides reveal the predominance of a single marine Synechococcus clade throughout a stratified water column in the red sea[END_REF][START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF][START_REF] Chen | Phylogenetic diversity of Synechococcus in the Chesapeake Bay revealed by Ribulose-1,5-bisphosphate carboxylase-oxygenase (RuBisCO) large subunit gene (rbcL) sequences[END_REF]. Finally, PT 3 with increasing PUB:PEB ratio are found over gradients from onshore mesotrophic waters, characterized by green light dominance, to offshore oligotrophic waters, where blue light penetrates the deepest (19-24, 28, 36, 38, 40). 
Some authors reported an increase in the PUB:PEB ratio with depth [START_REF] Olson | Pigments, size, and distributions of Synechococcus in the North Atlantic and Pacific Oceans[END_REF][START_REF] Lantoine | Spatial and seasonal variations in abundance and spectral characteristics of phycoerythrins in the tropical northeastern Atlantic Ocean[END_REF][START_REF] Wood | Fluorescence-based characterization of phycoerythrincontaining cyanobacterial communities in the Arabian Sea during the Northeast and early Southwest Monsoon (1994-1995)[END_REF], while others observed a constant ratio throughout the water column, a variability potentially linked to the location, water column features and/or environmental parameters [START_REF] Neveux | Phycoerythrins in the southern tropical and equatorial Pacific Ocean: evidence for new cyanobacterial types[END_REF][START_REF] Yona | Distribution of Synechococcus and its phycoerythrin pigment in relation to environmental factors in the East Sea, Korea[END_REF][START_REF] Campbell | Identification of Synechococcus spp. in the Sargasso Sea by immunofluorescence and fluorescence excitation spectroscopy performed on individual cells[END_REF]. However, these analyses based on optical properties could only describe the distribution of high-and low-PUB populations without being able to differentiate green (3a) or blue light (3c) specialists from CA4 cells (3d) acclimated to green or blue light, while genetic analysis solely based on cpcBA and/or cpeBA could not differentiate all PTs. For instance, only two studies have reported CA4 populations in situ either in the western English Channel [START_REF] Humily | Development of a targeted metagenomic approach to study a genomic region involved in light harvesting in marine Synechococcus[END_REF] or in sub-polar waters of the western Pacific Ocean [START_REF] Xia | Phylogeography and pigment type diversity of Synechococcus cyanobacteria in surface waters of the northwestern pacific ocean[END_REF] but none of them were able to differentiate CA4-B from high PUB (i.e. 3c) populations. As a consequence, the global relative abundance of the different Synechococcus PTs, particularly CA4, and the link between genetic and pigment diversity have remained largely unclear. Here, we analyzed 109 metagenomic samples collected from all major oceanic basins during the 2.5-yr Tara Oceans (2009-2011) expedition (41) using a bioinformatic pipeline combining a metagenomic read recruitment approach [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF][START_REF] Logares | Metagenomic 16S rDNA Illumina tags are a powerful alternative to amplicon sequencing to explore diversity and structure of microbial communities[END_REF] to recruit single reads from multiple PBS gene markers and placement of these reads in reference trees to assign them to a given PT. This pipeline allowed the first description of the worldwide distribution of all known Synechococcus PTs, as well as of their realized environmental niches (sensu [START_REF] Pearman | Niche dynamics in space and time[END_REF]. This study provides a synoptic view of how a major photosynthetic organism adapts to natural light color gradients in the ocean. Results A novel, robust approach for estimating pigment types abundance from metagenomes We developed a multi-marker approach combining phylogenetic information retrieved from three different genes or operons (cpcBA, mpeBA and mpeW; Fig. 
1 and Datasets 1-2) to overcome the issue of fully resolving the whole range of PTs. While cpcBA discriminated PT 1, 2 and 3 (Fig. 1A), only the mpeBA operon, a PT 3 specific marker, was able to distinguish the different PT 3 subtypes (Fig. 1B), though as for cpeBA it could not differentiate PT 3dB (CA4-B) from PT 3c (i.e. blue light specialists ; 11, 29). The mpeW marker was thus selected to specifically target PT 3dB and, by subtraction, enumerate PT 3c (Fig. 1C). Using the cpcBA marker, members of PT 2 were split into two well-defined clusters, 2A and 2B (Fig. 1A), the latter corresponding to a purely environmental PT identified from assembled metagenomes of the Baltic Sea [START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF]. Strains KORDI-100 and CC9616 also clustered apart from other strains in the mpeBA phylogeny, suggesting that they have a divergent evolutionary history from other PT 3 members (Fig. 1B). This is supported by the diverged gene content and order of their PBS rod genomic region and these strains were recently referred to as PT 3f, even though they have a similar phenotype as PT 3c (Ex495:545 ratio ≥ 1.6; 30). To investigate the phylogenetic resolution of small fragments of these three markers, sequences were removed one at a time from the reference database, and simulated reads (150 bp long as compared to 164 bp in average for Tara Oceans cleaned/merged reads) generated from this sequence were assigned using our bioinformatic pipeline against a database comprising the remaining sequences. Inferred and known PTs were then compared. The percentage of simulated reads assigned to the correct PT was between 93.2% and 97.0% for all three markers, with less than 2.1-5.6% of reads that could not be classified and an errorrate below 2%, showing that all three markers display a sufficient resolution to reliably assign the different PTs (Fig. S2B, D andF). To ensure that the different markers could be quantitatively compared in a real dataset, we examined the correlations between estimates of PT abundances using the different markers in the 109 metagenomes analyzed in this study. Total cpcBA counts were highly correlated (R²=0.994, n=109; Fig. S3A) with total Synechococcus counts obtained with the petB gene, which was previously used to study the phylogeography of marine picocyanobacteria [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF], and the correlation slope was not significantly different from 1 (slope: 1.040; Wilcoxon's paired difference test p-value=0.356). cpcBA is thus as good as petB at capturing the total population of Synechococcus reads. Moreover, counts of cpcBA reads assigned to PT 3 and total mpeBA counts (specific for PT 3) were also strongly correlated (R²=0.996, n=109; Fig. S3B), and not skewed from 1 (slope of 0.991, Wilcoxon's p-value=0.607), indicating that mpeBA and cpcBA counts can be directly compared. Although no redundant information for PT 3dB is available with the three selected markers, another marker targeting 3dB (fciAB) was tested and produced results similar to mpeW (Fig. S3C). 
These results demonstrate that our multi-marker approach can be used to reliably and quantitatively infer the different Synechococcus PTs from short metagenomic reads, with PT 1, 2A, 2B abundances being assessed by cpcBA normalized counts, PT 3a, 3f and 3dA by mpeBA normalized counts, PT 3dB by mpeW normalized counts and PT 3c by the difference between mpeBA normalized counts for 3c + 3dB and mpeW normalized counts. We thus used this approach on the Tara Oceans metagenomes, generated from 109 samples collected at 65 stations located in the major oceanic basins (Fig. 2). CA4 populations are widespread and predominate at depth and high latitudes The latitudinal distribution of Synechococcus inferred from cpcBA counts is globally consistent with previous studies [START_REF] Flombaum | Present and future global distributions of the marine Cyanobacteria Prochlorococcus and Synechococcus[END_REF][START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF][START_REF] Paulsen | Synechococcus in the Atlantic gateway to the Arctic Ocean[END_REF], with Synechococcus being present in most oceanic waters, but quasi absent (< 20 cpcBA counts) beyond 60°S (Southern Ocean stations TARA_082 to TARA_085; Fig. 2B). Overall, the number of recruited cpcBA reads per station was between 0 and 8,151 (n=63, median: 449, mean: 924, sd: 1478) for surface and 0 and 3,200 (n=46, median:170, mean: 446, sd: 664) for deep chlorophyll maximum (DCM) samples, respectively. Stations with less than 30 cpcBA reads were excluded from further analysis. PT 1 and 2, being both known to be mostly found and abundant in coastal waters [START_REF] Xia | Phylogeography and pigment type diversity of Synechococcus cyanobacteria in surface waters of the northwestern pacific ocean[END_REF][START_REF] Hunter-Cevera | Diversity of Synechococcus at the Martha's Vineyard Coastal Observatory: insights from culture isolations, clone libraries, and flow cytometry[END_REF][START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF][START_REF] Haverkamp | Colorful microdiversity of Synechococcus strains (picocyanobacteria) isolated from the Baltic Sea[END_REF], were expectedly almost absent from this dataset (total of 15 and 513 cpcBA reads, respectively; Fig. 2A-B) since the Tara cruise sampling was principally performed in oceanic waters. While PT 2A was mostly found at the surface at one station off Panama (TARA_141, 417 out of 6,637 reads at this station; Fig. 2B), PT 2B was virtually absent (total of 3 cpcBA reads) from our dataset and might thus be confined to the Baltic Sea [START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF]. This low abundance of PT 1 and 2B precluded the correlation analysis between their distribution and physico-chemical parameters. PT 3 was by far the most abundant along the Tara Oceans transect, accounting for 99.1 ± 1.4% (mean ± sd) of cpcBA reads at stations with ≥30 cpcBA read counts. Interestingly, several PT 3 subtypes often co-occurred at a given station. PT 3a (green light specialists) totaled 20.3% of read counts, with similar abundance between surface (20.5%) and DCM (19.4%) samples, and was particularly abundant in intertropical oceanic borders and regional seas, including the Red Sea, the Arabian Sea and the Panama/Gulf of Mexico area (Fig. 2B). 
Correlation analyses show that this PT is consistently associated with high temperatures but also with greenish (as estimated from a low blue-to-green downwelling irradiance ratio, Irr495:545), particle-rich waters (high particle backscattering at 470 nm and beam attenuation coefficient at 660 nm; Fig. 3). Still, in contrast with previous studies that reported the distribution of low-PUB populations [START_REF] Olson | Pigments, size, and distributions of Synechococcus in the North Atlantic and Pacific Oceans[END_REF][START_REF] Lantoine | Spatial and seasonal variations in abundance and spectral characteristics of phycoerythrins in the tropical northeastern Atlantic Ocean[END_REF][START_REF] Campbell | Response of microbial community structure to environmental forcing in the Arabian Sea[END_REF][START_REF] Wood | Fluorescence-based characterization of phycoerythrincontaining cyanobacterial communities in the Arabian Sea during the Northeast and early Southwest Monsoon (1994-1995)[END_REF][START_REF] Hoge | Spatial variability of oceanic phycoerythrin spectral types derived from airborne laser-induced fluorescence emissions[END_REF][START_REF] Wood | Water column transparency and the distribution of spectrally distinct forms of phycoerythrin-containing organisms[END_REF], this PT does not seem to be restricted to coastal waters, explaining its lack of correlation with chlorophyll concentration and colored dissolved organic matter (cDOM). Blue light specialists (PT 3c) appear to be globally widespread, with the exception of high latitude North Atlantic waters, and accounted for 33.4% of reads, with a higher relative abundance at the surface (36.8%) than at the DCM (23.3%, Fig. 2A). This PT is dominant in transparent, oligotrophic, iron-replete areas such as the Mediterranean Sea as well as South Atlantic and Indian Ocean gyres (Figs. 2B and 4C). In the South Pacific, PT 3c was also found to be predominant in the Marquesas Islands area (TARA_123 and 124), where proximity to the coast induces a local iron enrichment [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF]. Consistently, PT 3c was found to be positively associated with iron concentration, high temperature and DCM depth and anti-correlated with chlorophyll fluorescence, nitrogen concentrations, net primary production (NPP) as well as other related optical parameters, such as backscattering at 470 nm and beam attenuation coefficient at 660 nm (Fig. 3). Despite its rarity, PT 3f seems to thrive in a similar environment, with the highest relative abundances in the Indian Ocean and Mediterranean Sea (Figs. 2B and 4C). Its occurrence in the latter area might explain its strong anti-correlation with phosphorus availability. Both CA4 types, 3dA and 3dB, which represented 22.6% and 18.9% of reads respectively, were unexpectedly widespread and could locally account for up to 95% of the total Synechococcus population (Figs. 2, 4C and S4). In contrast to blue and green light specialists, both CA4 types were proportionally less abundant at the surface (19.8% and 17.5%, for 3dA and 3dB, respectively) than at depth (30.9% and 22.9%). Interestingly, PT 3dA and 3dB generally displayed complementary distributions along the Tara Oceans transect (Fig. 2B). PT 3dA was predominant at high latitude in the northern hemisphere as well as in other vertically mixed waters such as in the Chilean upwelling (TARA_093) or in the Agulhas current (TARA_066 and 068; Fig. 2B).
Accordingly, PT 3dA distribution seems to be driven by cold, nutrient-rich and highly productive waters (high NPP, chlorophyll a and optical parameters), a combination of physico-chemical parameters almost opposite to those observed for blue light specialists (PT 3c; Fig. 3). In contrast, PT 3dB shares a number of characteristics with PT 3c, including the anti-correlation with nitrogen concentration and association with iron availability (as indicated by both a positive correlation with [Fe] and a negative correlation with the iron limitation proxy Φsat; Fig. 3), consistent with their widespread occurrence in iron-replete oceanic areas. Also noteworthy, PT 3dB was one of the only PTs (together with 3f) to be associated with low photosynthetically available radiation (PAR).

Niche partitioning of Synechococcus populations relies on a subtle combination of ESTU and PT niches

We previously showed that temperature, iron and phosphorus availability constituted major factors influencing the diversification and niche partitioning of Synechococcus ESTUs (i.e. genetically related subgroups within clades that co-occur in the field; 6). Yet, these results cannot be extended to PTs since the pigment content does not follow the vertical phylogeny [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF]. In order to decipher the respective roles of genetic and pigment diversity in Synechococcus community structure, we examined the relationships between ESTU and PT in situ abundances through correlation and NMDS analyses (Fig. 4A-B) and compared their respective distributions (Figs. 4C and S4). Interestingly, all PTs are either preferentially associated with or excluded from a subset of ESTUs. PT 2A is found at low abundance at a few stations along the Tara Oceans transect and, when present, it is seemingly associated with the rare ESTU 5.3B (Fig. 4A), an unusual PT/ESTU combination so far only observed in metagenomes from freshwater reservoirs [START_REF]Novel Synechococcus genomes reconstructed from freshwater reservoirs[END_REF]. PT 3a is associated with ESTUs EnvBC (occurring in low iron areas) and IIA, the major ESTU in the global ocean (Fig. 4A), a result consistent with NMDS analysis, which shows that PT 3a is found in assemblages dominated by these two ESTUs (indicated by red and grey backgrounds in Fig. 4B), as well as with independent observations on cultured strains (Dataset 3). PT 3c is associated with ESTU IIIA (the dominant ESTU in P-depleted areas), as observed on many isolates (Dataset 3), and is also linked, like PT 3f, with ESTUs IIIB and WPC1A, both present at lower abundance than IIIA in P-poor waters (Fig. 4A). PT 3f is also associated with the newly described and low-abundance ESTU XXA (previously EnvC; Fig. S5; 4, 6). Both PT 3f and ESTU XXA were rare in our dataset but systematically co-occurred, in agreement with the fact that the only culture representative of the latter clade belongs to PT 3f (Dataset 3). PT 3dA appears to be associated with all ESTUs from clades CRD1 (specific to iron-depleted areas) as well as with those representative of coastal and cold waters (IA, IVA, IVC), but is anticorrelated with most other major ESTUs (IIA, IIIA and -B, WPC1A and 5.3B; Fig. 4A). This pattern is opposite to that of PT 3dB, which is preferentially associated with ESTU IIA, IIB and 5.3A, but not with CRD1A or -C (Fig. 4A). Thus, it seems that the two types of CA4 are found in distinct and complementary sets of ESTUs.
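The ordination behind Fig. 4B was computed in R (see Materials and Methods). As an illustration only, an equivalent analysis can be sketched in Python as follows, with a hypothetical station × pigment-type abundance matrix standing in for the real contingency table.

```python
# Illustrative sketch (not the original R workflow) of the NMDS ordination:
# Hellinger transformation of a station x PT table, Bray-Curtis distances,
# then non-metric multidimensional scaling on two axes.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical abundance matrix: rows = stations, columns = pigment types.
abund = np.random.default_rng(0).integers(0, 500, size=(40, 7)).astype(float)

# Hellinger transformation: square root of relative abundances
# (down-weights rare pigment types).
rel = abund / abund.sum(axis=1, keepdims=True)
hellinger = np.sqrt(rel)

# Bray-Curtis dissimilarity matrix between stations.
dist = squareform(pdist(hellinger, metric="braycurtis"))

# Non-metric MDS on the precomputed dissimilarities.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0, n_init=10, max_iter=300)
coords = nmds.fit_transform(dist)
print("stress:", nmds.stress_)
print("first station coordinates:", coords[0])
```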
Interestingly, our analysis might suggest the occurrence of additional PTs not isolated so far, since a number of reads (0.7% and 2.7% of cpcBA and mpeBA counts, respectively, Fig. 2A) could not be assigned to any known PTs. For instance, while most CRD1C populations seem preferentially associated with PT 3dA, a fraction of the population could only be assigned at the PT 3 level (Fig. 4A). Similarly, a number of reads could not be assigned to any known PT in stations rich in ESTU 5.3A and XXA, although one cannot exclude that this observation might be due to a low number of representative strains, and thus PT reference sequences, for these ESTUs. The preferred association of PTs with specific ESTUs is also well illustrated by some concomitant shifts of PTs and ESTU assemblages. For instance, in the wintertime North Atlantic Ocean, the shift from 3dB-dominated stations on the western side (TARA_142 and TARA_146-149) to 3dA-dominated stations near European coasts (TARA_150 to 152) and north of the Gulf Stream (TARA_145) is probably related to the shift in ESTU assemblages occurring along this transect, with ESTU IIA being gradually replaced by ESTU IVA (Fig. 4C; see also 6). Similarly, the takeover of CRD1C by IIA in the Marquesas Islands area (TARA_123 to 125), which is iron-enriched relative to the surrounding high-nutrient low-chlorophyll (HNLC) waters (TARA_122 and 128), perfectly matched the corresponding replacement of PT 3dA by 3c. However, in several other cases, PT shifts were not associated with a concomitant ESTU shift or vice versa. One of the clearest examples of these dissociations is the transect from the Mediterranean Sea to the Indian Ocean, where the entry into the northern Red Sea through the Suez Canal triggered a sharp shift from a IIIA- to a IIA-dominated community (TARA_030 and 031), which was not accompanied by any obvious change in PTs. Conversely, a sharp rise in the relative abundance of PT 3a was observed in the southern Red Sea/northeastern Indian Ocean (TARA_033 to 038) without changes in the large dominance of ESTU IIA. Altogether, this strongly suggests that a subtle combination of the respective niche occupancies of ESTUs and PTs is responsible for the observed niche partitioning of Synechococcus populations.

Deficient chromatic acclimaters are dominant in HNLC areas

Although our results clearly indicate that CA4 cells represent a large proportion of the Synechococcus community in a wide range of ecological niches, this must be somewhat tempered by the fact that, in culture, about 30% of the strains possessing a CA4-A or B genomic island are not able to chromatically acclimate (Dataset 3; 11). Some of these natural mutants have an incomplete CA4 genomic island (Fig. S6K). For example, strains WH8016 (ESTU IA) and KORDI-49 (WPC1A) both lack the CA4-A specific lyase-isomerase MpeZ, an enzyme shown to bind a PUB molecule on PE-II (14), and display a green light specialist phenotype (PT 3a, Ex495:545 ~0.4) whatever the ambient light color [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF]. However, since they possess a PT 3a mpeBA allele, reads from field WH8016- or KORDI-49-like cells are adequately counted as PT 3a (Fig. S6K). Another CA4-deficient strain, BIOS-E4-1 (ESTU CRD1C), possesses mpeZ and a 3dA mpeBA allele but lacks the CA4 regulators FciA and FciB as well as the putative lyase MpeY and exhibits a fixed blue light specialist phenotype (PT 3c, Ex495:545 ~1.7; Fig.
S6K; [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF][START_REF] Sanfilippo | Self-regulating genomic island encoding tandem regulators confers chromatic acclimation to marine Synechococcus[END_REF]. Thus, reads from such natural Synechococcus CA4-incapable mutants in the field are counted as 3dA using the mpeBA marker. Lastly, the strain MVIR-18-1 possesses a complete CA4-A island and a 3dA mpeBA allele but lacks mpeU, a gene necessary for blue light acclimation (Fig. S6K; 47). Although MVIR-18-1 displays a fixed green light phenotype, reads from such Synechococcus are also erroneously counted as 3dA. To assess the significance of these genotypes in the field, we compared the normalized read counts obtained for 3dA with mpeBA, fciAB, mpeZ, mpeU and mpeY (Fig. S6A-J). Overall, this analysis revealed a high consistency between these different markers (0.860 < R² < 0.986), indicating that most mpeZ-containing populations also contained 3dA alleles for fciAB, mpeY, mpeU and mpeBA and are therefore likely able to perform CA4. However, a number of stations, all located in HNLC areas (TARA_094, 111 and 122 to 128 in the Pacific Ocean and TARA_052 located northwest of Madagascar, Fig. 2B), displayed more than 10-fold higher mpeBA, mpeU and mpeZ counts than fciAB and mpeY counts (Fig. S6A, B, E, F, H, I). This indicates that a large proportion or even the whole population (TARA_122 and 124) of 3dA in these HNLC areas probably lacks the FciA/B regulators and MpeY and, like strain BIOS-E4-1 (Fig. S6K), might thus be stuck in the blue light specialist phenotype (PT 3c; 11). Conversely, station TARA_067 exhibited consistently more than twice the fciAB and mpeZ counts compared to mpeBA, mpeY or mpeU (Fig. S6G, H) and was a clear outlier when comparing pigment type and clade composition (Fig. S7). This suggests that the proportion of PT 3dA might have been underestimated at this station, as a significant proportion of this population probably corresponds to PT 3a genotypes that have acquired a CA4-A island by lateral gene transfer, as is seemingly the case for strains WH8016 and KORDI-49. Finally, no station exhibited markedly lower mpeU counts compared to all other genes, indicating that the genotype of strain MVIR-18-1 is probably rare in the oceans. It must be noted that two out of the six sequenced CA4-B strains (WH8103 and WH8109) also have a deficient CA4 phenotype and display a constant, intermediate Ex495:545 ratio (0.7 and 1, respectively), despite the absence of any obvious PBS- or CA4-related gene deletion [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF]. Accordingly, the plot of 3dB normalized read counts obtained with mpeW vs. fciAB shows no clear outlier (Fig. S3C).

Discussion

Marine Synechococcus display a large pigment diversity, with different PTs preferentially harvesting distinct regions of the light spectrum.
Previous studies based on optical properties or on a single genetic marker could not differentiate all PTs [START_REF] Humily | Development of a targeted metagenomic approach to study a genomic region involved in light harvesting in marine Synechococcus[END_REF][START_REF] Xia | Phylogeography and pigment type diversity of Synechococcus cyanobacteria in surface waters of the northwestern pacific ocean[END_REF][START_REF] Xia | Variation of Synechococcus pigment genetic diversity along two turbidity gradients in the China Seas[END_REF][START_REF] Xia | Cooccurrence of phycocyanin-and phycoerythrin-rich Synechococcus in subtropical estuarine and coastal waters of Hong Kong: PE-rich and PC-rich Synechococcus in subtropical coastal waters[END_REF], and thus could neither assess their respective realized environmental niches [START_REF] Pearman | Niche dynamics in space and time[END_REF] nor the role of light quality on the relative abundance of each PT. Here, we showed that a metagenomic read recruitment approach combining three genetic markers can be used to reliably predict all major PTs. Applied to the extensive Tara Oceans dataset, this original approach, which avoids PCR amplification and cloning biases, allowed us to describe for the first time the distribution of the different Synechococcus PTs at the global scale and to refine our understanding of their ecology. PT 3 was found to be largely dominant over PT 1 and 2 along the oceanic Tara Oceans transect, in agreement with the coastal-restricted distribution of the latter PTs [START_REF] Jiang | Temporal and spatial variations of abundance of phycocyanin-and phycoerythrin-rich Synechococcus in Pearl River Estuary and adjacent coastal area[END_REF][START_REF] Neveux | Phycoerythrins in the southern tropical and equatorial Pacific Ocean: evidence for new cyanobacterial types[END_REF][START_REF] Wood | Water column transparency and the distribution of spectrally distinct forms of phycoerythrin-containing organisms[END_REF][START_REF] Xia | Cooccurrence of phycocyanin-and phycoerythrin-rich Synechococcus in subtropical estuarine and coastal waters of Hong Kong: PE-rich and PC-rich Synechococcus in subtropical coastal waters[END_REF](32)[START_REF] Chung | Changes in the Synechococcus assemblage composition at the surface of the East China Sea due to flooding of the Changjiang river[END_REF][START_REF] Haverkamp | Diversity and phylogeny of Baltic Sea picocyanobacteria inferred from their ITS and phycobiliprotein operons[END_REF][START_REF] Fuller | Clade-specific 16S ribosomal DNA oligonucleotides reveal the predominance of a single marine Synechococcus clade throughout a stratified water column in the red sea[END_REF][START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF][START_REF] Chen | Phylogenetic diversity of Synechococcus in the Chesapeake Bay revealed by Ribulose-1,5-bisphosphate carboxylase-oxygenase (RuBisCO) large subunit gene (rbcL) sequences[END_REF]. Biogeography and correlation analyses with environmental parameters provided several novel and important insights concerning niche partitioning of PT 3 subtypes. Green (PT 3a) and blue (PT 3c) light specialists were both shown to dominate in warm areas but display clearly distinct niches, with 3a dominating in Synechococcus-rich stations located on oceanic borders, while 3c predominated in purely oceanic areas where the global abundance of Synechococcus is low.
These results are in agreement with the prevailing view of an increase in the PUB:PEB ratio from green onshore mesotrophic waters to blue offshore oligotrophic waters (19-24, 26-29, 40, 48). Similarly, we showed that PT 3dB, which could not be distinguished from PT 3c in previous studies [START_REF] Humily | Development of a targeted metagenomic approach to study a genomic region involved in light harvesting in marine Synechococcus[END_REF][START_REF] Xia | Phylogeography and pigment type diversity of Synechococcus cyanobacteria in surface waters of the northwestern pacific ocean[END_REF][START_REF] Xia | Variation of Synechococcus pigment genetic diversity along two turbidity gradients in the China Seas[END_REF][START_REF] Xia | Cooccurrence of phycocyanin-and phycoerythrin-rich Synechococcus in subtropical estuarine and coastal waters of Hong Kong: PE-rich and PC-rich Synechococcus in subtropical coastal waters[END_REF], prevails in more coastal and/or mixed temperate waters than do 3c populations. The realized environmental niche of the second type of CA4 (PT 3dA) is the best defined of all PTs as it is clearly associated with nutrient-rich waters and with the coldest stations of our dataset, occurring at high latitude, at depth and/or in vertically mixed waters (e.g., TARA_068, 093 and 133). This result is consistent with a recent study demonstrating the dominance of 3dA in sub-Arctic waters of the Northwest Pacific Ocean [START_REF] Xia | Phylogeography and pigment type diversity of Synechococcus cyanobacteria in surface waters of the northwestern pacific ocean[END_REF], suggesting that the prevalence of 3dA at high latitude can be generalized. The decrease of PT 3c (blue light specialists) with depth is unexpected given previous reports of a constant [START_REF] Neveux | Phycoerythrins in the southern tropical and equatorial Pacific Ocean: evidence for new cyanobacterial types[END_REF][START_REF] Yona | Distribution of Synechococcus and its phycoerythrin pigment in relation to environmental factors in the East Sea, Korea[END_REF][START_REF] Campbell | Identification of Synechococcus spp. in the Sargasso Sea by immunofluorescence and fluorescence excitation spectroscopy performed on individual cells[END_REF][START_REF] Katano | Growth rates of Synechococcus types with different phycoerythrin composition estimated by dual-laser flow cytometry in relationship to the light environment in the Uwa Sea[END_REF] or increasing [START_REF] Olson | Pigments, size, and distributions of Synechococcus in the North Atlantic and Pacific Oceans[END_REF][START_REF] Lantoine | Spatial and seasonal variations in abundance and spectral characteristics of phycoerythrins in the tropical northeastern Atlantic Ocean[END_REF][START_REF] Wood | Fluorescence-based characterization of phycoerythrincontaining cyanobacterial communities in the Arabian Sea during the Northeast and early Southwest Monsoon (1994-1995)[END_REF] PUB:PEB ratio throughout the water column. However, the high abundance of CA4 can reconcile these observations with the decreased abundance of PT 3c, as cells capable of CA4 likely have a blue-light phenotype at depth. Altogether, while little was previously known about the abundance and distribution of CA4 populations in the field, here we show that they are ubiquitous, dominate in a wide range of niches, are present both in coastal and oceanic mixed waters, and overall are the most abundant Synechococcus PT. 
The relationship between ESTUs and PTs shows that some ESTUs are preferentially associated with only one PT, while others present a much larger pigment diversity. ESTU IIA, the most abundant and ubiquitous ESTU in the field [START_REF] Sohm | Co-occurring Synechococcus ecotypes occupy four major oceanic regimes defined by temperature, macronutrients and iron[END_REF][START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF], displays the widest PT diversity (Fig. 4B), a finding confirmed by clade II isolates spanning the largest diversity of pigment content, with representative strains of PT 2, 3a, 3c and 3dB within this clade (Dataset 3; see also [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF][START_REF] Ahlgren | Culture isolation and culture-independent clone libraries reveal new marine Synechococcus ecotypes with distinctive light and N physiologies[END_REF][START_REF] Bemal | Genetic and ecophysiological traits of Synechococcus strains isolated from coastal and open ocean waters of the Arabian Sea[END_REF][START_REF] Everroad | Phycoerythrin evolution and diversification of spectral phenotype in marine Synechococcus and related picocyanobacteria[END_REF]). This suggests that this ESTU can colonize all light color niches, an ability which might be partially responsible for its global ecological success. Our current results do not support the previously observed correlation between clade III and PT 3a (29) since the two ESTUs defined within this clade (IIIA and B) were associated with PT 3c and/or 3f. This discrepancy could be due either to the different methods used in these studies or to the occurrence of genetically distinct clade III populations in coastal areas of the northwestern Pacific Ocean and along the Tara Oceans transect. However, the pigment phenotype of strains isolated to date is more consistent with our findings (Dataset 3; 16, 36). In contrast to most other PTs, the association between PT 3dA and ESTUs was found to be nearly exclusive in the field, as ESTUs from clades I, IV, CRD1 and EnvA were not associated with any other PT, and reciprocally PT 3dA is only associated with these clades (Fig. 4A). An interesting exception to this general rule was observed in the Benguela upwelling (TARA_067), where the dominant ESTU IA population both displays a 3a mpeBA allele and possesses fciA/B and mpeZ genes (Figs. S6K and S7), suggesting that cells, which were initially green light specialists (PT 3a), have inherited a complete CA4-A island through lateral gene transfer at this station. Interestingly, among the seven clade I strains sequenced to date, three possess a 3a mpeBA allele, among which WH8016 also has a CA4-A island, albeit partial (lacking mpeZ) and therefore not functional [START_REF] Humily | A gene island with two possible configurations is involved in chromatic acclimation in marine Synechococcus[END_REF]. It is thus difficult to conclude whether the lateral transfer of this island, likely a rare event since it was only observed in populations of the Benguela upwelling, has conferred on these populations the ability to perform CA4.
Another important result of this study was the unsuspected importance of populations that have likely lost the ability to chromatically acclimate, specifically in warm HNLC areas, which cover wide expanses of the South Pacific Ocean [START_REF] Morel | Optical properties of the "clearest" natural waters[END_REF]. Interestingly, populations living in these ultraoligotrophic environments have a different genetic basis for their consistently elevated PUB phenotype than do typical blue light specialists (i.e. PT 3c), since they have lost the CA4 regulators fciA/B and accumulated mutations in mpeY, an as-yet uncharacterized member of the phycobilin lyase family, as observed in strain BIOS-E4-1 (Fig. S6K; 11). This finding, consistent with the previous observation that the South Pacific is dominated by high-PUB Synechococcus [START_REF] Neveux | Phycoerythrins in the southern tropical and equatorial Pacific Ocean: evidence for new cyanobacterial types[END_REF], is further supported by the recent sequencing of three isolates from the Equatorial Pacific, strains MITS9504, MITS9509 (both CRD1C) and MITS9508 (CRD1A; 54), all of which contain, like BIOS-E4-1, a 3dA mpeBA allele, a CA4-A island lacking fciA/B and a partial (MITS9508) or highly degenerated (the two other MIT strains) mpeY gene sequence (Fig. S6K). Thus, these natural CA4-A mutants seem to have adapted to blue, ultra-oligotrophic waters by inactivating a likely energetically costly acclimation mechanism (positive selection), although we cannot exclude that it might be a consequence of the lower selection efficiency associated with the reduced effective population size of Synechococcus in such an extreme environment (genetic drift). If, as we hypothesize, all Synechococcus cells counted as 3dA at these stations are CA4-deficient, these natural mutants would represent about 15% of the total 3dA population. In contrast, CRD1A populations of the eastern border of the Pacific Ocean (TARA_102, 109-110, 137) are likely true CA4 populations as they possess all CA4 genes (Fig. S6K). In conclusion, our study provided novel insights into the distribution, ecology and adaptive value of all known Synechococcus PTs. Unexpectedly, the sum of 3dA and 3dB constituted about 40% of the total Synechococcus counts in the Tara Oceans dataset, making chromatic acclimaters (PT 3d) the most globally abundant PT, even when taking into account potential CA4-deficient natural mutants. In addition, this PT made up 95% of the Synechococcus population at high latitudes and was present in every one of the five major clades in the field (I, II, III, IV and CRD1). This suggests that chromatic acclimation likely confers a strong adaptive advantage compared to strains with a fixed pigmentation, particularly in vertically mixed environments and at depth at stations with a stratified water column.
The occurrence of natural CA4 mutants and evidence for lateral transfer of the CA4 genomic island further support previous hypotheses that not only temperature and nutrient availability [START_REF] Zwirglmaier | Global phylogeography of marine Synechococcus and Prochlorococcus reveals a distinct partitioning of lineages among oceanic biomes[END_REF][START_REF] Sohm | Co-occurring Synechococcus ecotypes occupy four major oceanic regimes defined by temperature, macronutrients and iron[END_REF][START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF] but also light quality [START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Everroad | Phycoerythrin evolution and diversification of spectral phenotype in marine Synechococcus and related picocyanobacteria[END_REF] co-exert selective pressures affecting marine Synechococcus evolution. Thus, changes in pigment diversity could occur in response to changes in light niches by acquisition or loss of specific PBS synthesis and/or regulation genes, as previously observed for phosphorus and nitrogen transport genes in Prochlorococcus [START_REF] Martiny | Occurrence of phosphate acquisition genes in Prochlorococcus cells from different ocean regions[END_REF][START_REF] Martiny | Phosphate acquisition genes in Prochlorococcus ecotypes: evidence for genome-wide adaptation[END_REF][START_REF] Martiny | Widespread metabolic potential for nitrite and nitrate assimilation among Prochlorococcus ecotypes[END_REF]. Still, the complex interactions between PTs, vertical phylogeny and environmental parameters remain unclear, and more work is needed to refine our understanding of the balance between the forces shaping community composition and Synechococcus evolution. At the boundaries of the Synechococcus environmental niche(s), where the harshest conditions are encountered, both pigment and clade diversity are drastically reduced, and this concomitant reduction tends to support a co-selection by light quality and other environmental parameters. On the contrary, the diverse PTs occurring within some clades, as well as the co-occurrence of different PTs at most stations compared to more clear-cut clade shifts (e.g., in the Red Sea/Indian Ocean), might indicate that light quality is not the strongest selective force or that light changes are too transient to allow the dominance and fixation of a particular PT in a population. Future experimental work exploring the fitness of distinct ESTU/PT combinations under different controlled environmental conditions (including temperature, nutrients and light) might help clarify the respective effects of these parameters on the diversification of this ecologically important photosynthetic organism.

Materials and Methods

Metagenomic samples

This study focused on 109 metagenomic samples corresponding to 65 stations from the worldwide oceans collected during the 2.5-yr Tara Oceans circumnavigation (2009-2011). Water sample and sequence processing are the same as in [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF]. Dataset 4 describes all metagenomic samples with location and sequencing effort. Sequencing depths ranged from 16 × 10⁶ to 258 × 10⁶ reads per sample after quality control and paired-read merging, and the corresponding fragment lengths averaged 164 ± 20 bp (median: 168 bp).

Databases: reference and outgroup sequences

A reference database comprising the full-length gene or operon nucleotide sequences was generated for each marker used in this study (cpcBA, mpeBA and mpeW) based on culture isolates with characterized pigment type (Dataset 1). These databases comprised 83 cpcBA sequences (64 unique), including 18 PT 1, 5 PT 2A, 19 PT 2B and 39 PT 3; 41 mpeBA sequences (all unique), including 11 PT 3a, 2 PT 3f, 11 PT 3dA and 17 PT 3dB; and 5 unique mpeW sequences. For each marker, a reference alignment was generated with MAFFT L-INS-i v6.953b [START_REF] Katoh | MAFFT Multiple Sequence Alignment Software Version 7: Improvements in Performance and Usability[END_REF], and a reference phylogenetic tree was inferred with PhyML v.
20120412 (GTR+I+G, 10 random starting trees, best of SPR and NNI moves, 500 bootstraps; [START_REF] Guindon | New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0[END_REF]) and drawn using the ETE Toolkit [START_REF] Huerta-Cepas | ETE 3: reconstruction, analysis, and visualization of phylogenomic data[END_REF]. A database of outgroups was also built, comprising paralogous sequences from marine Synechococcus or Prochlorococcus as well as orthologous sequences from other marine and freshwater organisms retrieved from public databases. For cpcBA and mpeBA, the outgroup databases comprised apcA, apcB, apcD, apcF and cpeBA from marine Synechococcus, ppeBA from Prochlorococcus, cpcBA and cpeBA from other non-picocyanobacterial organisms as well as either mpeBA or cpcBA from marine Synechococcus, respectively (Datasets 1-2). For mpeW, the outgroup database was made of paralogous genes (mpeZ, mpeY and cpeY) from marine Synechococcus or Prochlorococcus, as no ortholog could be identified in public databases. Similarly, for mpeY and mpeZ, the outgroup database comprised cpeY, mpeW as well as mpeZ or mpeY, respectively. The outgroup database for mpeU comprised cpeF paralogous sequences from marine Synechococcus and Prochlorococcus. No outgroup database was used for fciAB, as no paralogs or other distantly related sequences were found either in marine Synechococcus and Prochlorococcus or in public databases.

Read assignment and estimation of PT abundance

Reads were preselected using BLAST+ [START_REF] Camacho | BLAST+: architecture and applications[END_REF] with relaxed parameters (blastn, maximum E-value of 1e-5, minimum percent identity 60%, minimum 75% of read length aligned), using reference sequences as subjects; the selection was then refined by a second BLAST+ round against the databases of outgroups: reads with a best hit to outgroup sequences were excluded from downstream analysis. Selected reads were then aligned to the marker reference alignment with MAFFT v.7.299b (--addfragments --adjustdirectionaccurately) and placed in the marker reference phylogenetic tree with pplacer (62). For each read, pplacer returns a list of possible positions (referred to as placements) at which it can be placed in the tree and their associated "likelihood weight ratio" (LWR, a proxy for the probability of the placement; see the pplacer publication and documentation for more details). Reads were then assigned to a pigment type using a custom classifier written in Python. Briefly, internal nodes of the reference tree were assigned a pigment type based on the pigmentation of descending nodes (PT of the child reference sequences if the same for all of them, "unclassified" otherwise). For each read, placements were assigned to their nearest ascending or descending node based on their relative position on the edge, and the lowest common ancestor (LCA) of the set of nodes for which the cumulated LWR was greater than 0.95 (LCA of possible placements at 95% probability) was then computed. Finally, the read was assigned to the pigment type of this LCA. Different combinations of read assignment parameters (LCA at 90%, 95% or 100%; assignment of placements to the ascending, descending or nearest node) were also assessed, and resulted in higher rates of either unassigned or wrongly assigned reads (Fig. S2).
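The classifier itself is a custom Python script; the snippet below is only a simplified, hypothetical re-implementation of its core logic (toy tree, toy placements, pplacer parsing and edge-assignment details omitted), illustrating the 95% cumulated-LWR LCA rule described above.

```python
# Simplified sketch of the LCA-based pigment-type classifier described above.
# Parent links, node-to-PT labels and placements are toy inputs; in the real
# pipeline they come from the reference tree and from pplacer output.
from collections import namedtuple

Placement = namedtuple("Placement", ["node", "lwr"])  # lwr: likelihood weight ratio

def ancestors(node, parent):
    """Path from a node up to the root (inclusive), following parent pointers."""
    path = [node]
    while parent.get(node) is not None:
        node = parent[node]
        path.append(node)
    return path

def lowest_common_ancestor(nodes, parent):
    """LCA of a set of nodes in a rooted tree given parent pointers."""
    paths = [ancestors(n, parent) for n in nodes]
    common = set(paths[0]).intersection(*map(set, paths[1:])) if len(paths) > 1 else set(paths[0])
    for n in paths[0]:          # first common node met when walking up = LCA
        if n in common:
            return n

def assign_read(placements, parent, node_pt, lwr_threshold=0.95):
    """Assign a read to the PT of the LCA of its best placements (95% cumulated LWR)."""
    placements = sorted(placements, key=lambda p: p.lwr, reverse=True)
    kept, total = [], 0.0
    for p in placements:
        kept.append(p.node)
        total += p.lwr
        if total >= lwr_threshold:
            break
    return node_pt.get(lowest_common_ancestor(kept, parent), "unclassified")

# Toy example: root -> (A, B); A -> (a1, a2); node PT labels as a dictionary.
parent = {"root": None, "A": "root", "B": "root", "a1": "A", "a2": "A"}
node_pt = {"a1": "3dA", "a2": "3dA", "A": "3dA", "B": "3c", "root": "unclassified"}
read = [Placement("a1", 0.70), Placement("a2", 0.28), Placement("B", 0.02)]
print(assign_read(read, parent, node_pt))   # -> "3dA"
```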
Read counts were normalized by adjusted marker length: for each marker and each sequence file, counts were normalized by (L - ℓ + 1), with L the length of the marker gene (cpcBA mean length: 1053.7 bp; mpeBA mean length: 1054.6 bp; mpeW mean length: 1193.3 bp) and ℓ the mean length of reads in the sequence file. Finally, the abundance of PT 1, 2A and 2B was defined as the normalized cpcBA read counts of these PTs, the abundance of PT 3a, 3f and 3dA as the normalized mpeBA read counts of these PTs, 3dB as the normalized mpeW count and 3c as the difference between the normalized mpeBA (3c + 3dB) read count and the PT 3dB count assessed with mpeW. The abundance of unclassified sequences was also taken into account. Detailed petB counts for clade and ESTU abundances were obtained from [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF].

Read assignment simulations

For each marker, simulated reads were generated from one reference sequence at a time using a sliding window of 100, 125 or 150 bp (Tara Oceans mean read length: 164.2 bp; median: 169 bp) and steps of 5 bp. Simulated reads were then assigned to a pigment type with the aforementioned bioinformatic pipeline, using all reference sequences except the one used to simulate reads ("leave one out" cross-validation scheme). Inferred PTs of simulated fragments were then compared to the known PTs of the reference sequences.

Statistical analyses

All environmental parameters used for statistical analyses are the same as in [START_REF] Farrant | Delineating ecologically significant taxonomic units from global patterns of marine picocyanobacteria[END_REF], except the blue-to-green irradiance ratio, which was modeled as described in the supplementary materials and methods. Hierarchical clustering and NMDS analyses of stations were performed using R (63) packages cluster v1.14.4 (64) and MASS v7.3-29 [START_REF] Venables | Modern applied statistics with S[END_REF], respectively. PT contingency tables were filtered by considering only stations with more than 30 cpcBA reads and 30 mpeBA reads, and only PTs appearing in at least two stations and with more than 150 reads in the whole dataset. Contingency tables were normalized using the Hellinger transformation, which gives lower weight to rare PTs. The Bray-Curtis distance was then used for ordination (isoMDS function; maxit, 100; k, 2). Correlations were performed with the R package Hmisc_3.17-4, with Benjamini & Hochberg-adjusted p-values for multiple comparisons [START_REF] Harrell | Hmisc: Harrell Miscellaneous Available at[END_REF].
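As a minimal sketch of the length normalization and marker-combination rule just described (marker lengths are the means given above; the raw counts are hypothetical):

```python
# Sketch of the read-count normalization and pigment-type combination rule.
# raw_counts values are hypothetical; the mean read length comes from each file.
MARKER_LENGTH = {"cpcBA": 1053.7, "mpeBA": 1054.6, "mpeW": 1193.3}   # mean bp

def normalize(count, marker, mean_read_len):
    """Normalize a read count by the adjusted marker length L - l + 1."""
    L = MARKER_LENGTH[marker]
    return count / (L - mean_read_len + 1.0)

def pt_abundances(raw, mean_read_len):
    """Combine marker-specific counts into pigment-type abundances."""
    n = lambda key, marker: normalize(raw[key], marker, mean_read_len)
    ab = {
        "PT1":  n("cpcBA_PT1", "cpcBA"),
        "PT2A": n("cpcBA_PT2A", "cpcBA"),
        "PT2B": n("cpcBA_PT2B", "cpcBA"),
        "PT3a": n("mpeBA_3a", "mpeBA"),
        "PT3f": n("mpeBA_3f", "mpeBA"),
        "PT3dA": n("mpeBA_3dA", "mpeBA"),
        "PT3dB": n("mpeW_3dB", "mpeW"),
    }
    # 3c = (3c + 3dB counted with mpeBA) - 3dB counted with mpeW
    ab["PT3c"] = n("mpeBA_3c_3dB", "mpeBA") - ab["PT3dB"]
    return ab

example = {"cpcBA_PT1": 2, "cpcBA_PT2A": 5, "cpcBA_PT2B": 0,
           "mpeBA_3a": 120, "mpeBA_3f": 8, "mpeBA_3dA": 240,
           "mpeBA_3c_3dB": 310, "mpeW_3dB": 95}
print(pt_abundances(example, mean_read_len=164.0))
```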
Legends of Figures

Fig. 1: Maximum likelihood phylogenetic trees of (A) the cpcBA operon, (B) the mpeBA operon and (C) the mpeW/Y/Z gene family. The cpcBA tree includes both strains with characterized pigment type (PT) and environmental sequences (prefixed with GS) assembled from metagenomes of the Baltic Sea [START_REF] Larsson | Picocyanobacteria containing a novel pigment gene cluster dominate the brackish water Baltic Sea[END_REF]. Circles at nodes indicate bootstrap support (black: > 90%; white: > 70%). Note that for the PT 2B clade, only environmental sequences are available. The PT associated with each sequence is indicated as a colored square. The scale bar represents the number of substitutions per nucleotide position.

Fig. 2: Distribution of Synechococcus pigment types (PT). (A) Relative abundance of each PT in the

Fig. 3: Correlation analysis between Synechococcus pigment types (PT) and environmental

Fig. 4: Relationship between Synechococcus pigment types (PT) and Ecologically Significant

Figure S1: Biochemical composition and biooptical properties of phycobilisomes (PBS) of the

Figure S2: Evaluation of the assignment pipeline and the resolution power of the different

Figure S3: Correlations between the number of reads recruited using the main markers used in

Figure S4: Distribution of Synechococcus pigment types (PTs) at depth (Deep Chlorophyll

Figure S5: Same as Fig. 4A but for all ESTUs. Unclass., unclassified.

Figure S6: Focus on pigment type (PT) 3dA natural mutants, exhibiting an altered gene content

Figure S7: Correlation between the proportion of clades I, IV and CRD1, as assessed with petB,

Acknowledgements

We warmly thank Dr. Annick Bricaud for fruitful discussions on biooptics, members of the ABiMS platform (Roscoff) for providing us with an efficient storage and computing facility for our bioinformatics analyses, as well as the NERC Biomolecular Analysis Facility (NBAF, Centre for Genomic Research, University of Liverpool) for sequencing some Synechococcus genomes used in this study. This work was supported by the French "Agence Nationale de la Recherche" Programs SAMOSA (ANR-13-ADAP-0010) and France Génomique (ANR-10-INBS-09), the French Government "Investissements d'Avenir" programs OCEANOMICS (ANR-11-BTBR-0008), the European Union's Seventh Framework Programs FP7 MicroB3 (grant agreement 287589) and MaCuMBA (grant agreement 311975), UK Natural Environment Research Council Grant NE/I00985X/1 and the Spanish Ministry of Science and Innovation grant MicroOcean PANGENOMICS (GL2011-26848/BOS). We also thank the support and commitment of the Tara Oceans coordinators and consortium, Agnès b. and E.
Bourgois, the Veolia Environment Foundation, Région Bretagne, Lorient Agglomeration, World Courier, Illumina, the EDF Foundation, FRB, the Prince Albert II de Monaco Foundation, the Tara schooner and its captains and crew. Tara Oceans would not exist without continuous support from 23 institutes (http://oceans.taraexpeditions.org). This article is contribution number XXXX of Tara Oceans.

Supplementary materials and methods

Modeling of the blue to green irradiance ratio (Irr495:545) at Tara Oceans stations

We used the clear-sky surface irradiance model of Frouin and McPherson, originally written in Fortran and translated to Matlab by Werdell (see [START_REF] Frouin | A Simple analytical formula to compute clear sky total and photosynthetically available solar irradiance CC at the ocean surface[END_REF][START_REF] Tanre | Atmospheric modeling for Space measurements of ground reflectances, including bi-directional properties[END_REF] for the analytical formula used), using the date, latitude and longitude of each station, assuming a sunny sky at noon. The spectral light distribution averaged over the mixed layer was computed from the following quantities:
- chl denotes the average chlorophyll value in the mixed layer; [chl] was based on a fluorometer that was calibrated against HPLC data and corrected for non-photochemical quenching;
- MLD is the mixed layer depth, computed based on a temperature threshold criterion;
- k(λ, chl) is the diffuse attenuation coefficient at wavelength λ (495 or 545 nm, using a 10 nm bandwidth); this parameter was computed using Morel and Maritorena (2001)'s equation, whose coefficients kw, χ and e are provided in
If the sampling depth was below the MLD, the irradiance was computed as follows: Ir(λ, sampling depth) = Ir(λ, 0⁻) e^(−k(λ, chl) × sampling depth). The ratio was then computed as Irr495:545.
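The two equations referred to above were lost in transcription; the sketch below therefore only illustrates the general form of the computation — a Morel and Maritorena (2001)-type attenuation k(λ) = kw(λ) + χ(λ)·chl^e(λ), a mixed-layer average assuming exponential decay, and the at-depth expression given above. The surface irradiances and the (kw, χ, e) values are placeholders, not the coefficients actually used.

```python
# Hedged sketch of the blue-to-green irradiance ratio (Irr495:545) computation.
# Surface irradiances and (kw, chi, e) coefficients are placeholders only.
import math

COEFFS = {  # wavelength: (kw, chi, e)  -- placeholder values, NOT the published table
    495: (0.0224, 0.0773, 0.673),
    545: (0.0530, 0.0620, 0.647),
}

def k_attenuation(wl, chl):
    """Diffuse attenuation coefficient k(lambda, chl) = kw + chi * chl**e."""
    kw, chi, e = COEFFS[wl]
    return kw + chi * chl**e

def irradiance_mixed_layer(ir0, wl, chl, mld):
    """Irradiance averaged over the mixed layer, assuming exponential decay."""
    k = k_attenuation(wl, chl)
    return ir0 * (1.0 - math.exp(-k * mld)) / (k * mld)

def irradiance_at_depth(ir0, wl, chl, depth):
    """Ir(lambda, depth) = Ir(lambda, 0-) * exp(-k(lambda, chl) * depth)."""
    return ir0 * math.exp(-k_attenuation(wl, chl) * depth)

def irr_ratio(ir0_495, ir0_545, chl, mld, sampling_depth):
    """Blue-to-green ratio at the sampling depth (in or below the mixed layer)."""
    if sampling_depth <= mld:
        blue = irradiance_mixed_layer(ir0_495, 495, chl, mld)
        green = irradiance_mixed_layer(ir0_545, 545, chl, mld)
    else:
        blue = irradiance_at_depth(ir0_495, 495, chl, sampling_depth)
        green = irradiance_at_depth(ir0_545, 545, chl, sampling_depth)
    return blue / green

print(irr_ratio(ir0_495=1.0, ir0_545=1.0, chl=0.2, mld=30.0, sampling_depth=60.0))
```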
01772779
en
[ "sdu.stu.gp" ]
2024/03/05 22:32:18
2018
https://hal.sorbonne-universite.fr/hal-01772779/file/Rezeau_et_al_JGR_2018_sans%20marque.pdf
L. Rezeau, G. Belmont, R. Manuzzo, N. Aunai, J. Dargent

Analyzing the magnetopause internal structure: new possibilities offered by MMS tested in a case study

• The internal structure of the magnetopause is investigated, using new analysis tools allowed by the high-performance MMS instruments.
• In a case study, the observed boundary is shown to be non-planar and non-stationary, which makes it necessary to perform a local study of the internal structure of the boundary.
• Thanks to this local analysis, quasi-1D thin sub-layers are identified, separated by regions that are mainly 2D.

Introduction

The magnetopause boundary separates two magnetized plasmas of different origins: the solar wind and the magnetosphere. Its existence is due to the frozen-in property that prevails at large scale and which would fully prevent the two plasmas from reconnecting if it were valid always and everywhere. As the magnetopause is accessible to in-situ spacecraft measurements, it provides a unique opportunity to study the internal structure of such a boundary and understand how the two plasmas interpenetrate each other via kinetic effects. However, this study is made difficult by the fact that the boundary is always perturbed by non-stationary effects, due to the non-stationary incident solar wind and/or to surface wave instabilities such as tearing and Kelvin-Helmholtz instabilities ([START_REF] Chen | Tearing instability, Kelvin-Helmholtz instability, and magnetic reconnection[END_REF], [START_REF] Faganello | Competing mechanisms of plasma transport in inhomogeneous configurations with velocity shear: The solarwind interaction with earth's magnetosphere[END_REF]). It is worth noticing that, if purely planar and stationary, the magnetopause layer should obey the classical theory of discontinuities [START_REF] Belmont | Collisionless plasmas in astrophysics[END_REF], i.e. be purely tangential (B_N = 0) or, if not, either purely rotational or purely compressional. This is in contradiction with observations since compressional and rotational variations are always observed in a close vicinity of each other in the magnetopause layer, often mixed but sometimes with a clear separation between the two [Dorville et al., 2014a]. Thanks to its unprecedented high quality and high time resolution experiments, the MMS spacecraft ([START_REF] Pollock | Fast plasma investigation for magnetospheric multiscale[END_REF], [START_REF] Russell | The magnetospheric multiscale magnetometers[END_REF]) nowadays allow significant advances in the study of the internal structure of the magnetopause layer. This paper shows the new methods that can be used for that purpose. October 16th, 2015 was a day with multiple magnetopause crossings by MMS. Fig. 1 shows that this is due to the fact that the orbit of the spacecraft grazes the magnetopause for about 4 hours between 09:00 and 13:00 UTC. The expected position of the magnetopause is calculated with the Shue et al. model ([START_REF] Shue | A new functional form to study the solar wind control of the magnetopause size and shape[END_REF]) using ACE data [START_REF] Stone | The Advanced Composition Explorer[END_REF]. The figure shows that many crossings are expected. This is indeed what is observed, and these multiple crossings can be expected to be complex, with possible back and forth motions and partial penetration into the current layer.
We chose to study the crossing around 13:06 (shown by a red arrow) because this period has already been studied by [START_REF] Burch | Electron-scale measurements of magnetic reconnection in space[END_REF], [START_REF] Torbert | Estimates of terms in ohm's law during an encounter with an electron diffusion region[END_REF], and Le [START_REF] Contel | Whistler mode waves and hall fields detected by mms during a dayside magnetopause crossing[END_REF], with the main emphasis put on its relationship with the reconnection event identified at 13:07. Fig. 2 displays the magnetic field measured by the MMS magnetometers [START_REF] Russell | The magnetospheric multiscale magnetometers[END_REF] during a 1-minute interval around the crossing investigated. In this figure, as in all the others unless otherwise specified, times are counted for convenience from t_0 = 13:05:30. The magnetic field is smoothed using a Gaussian filter, with a standard deviation of the Gaussian kernel equal to 70 points, which makes an effective smoothing window of about 1.6 s. All the data used in the study are resampled to the magnetic field sampling time and then smoothed in the same way as the magnetic field. One can see that the crossing is complex. The spacecraft come from a clear magnetospheric field at the beginning of the interval (B_z ≈ 35 nT); a reversal is seen around t ≈ 15 s, showing the crossing of the main magnetopause current layer; the magnetic field is not completely stationary afterwards, which can be interpreted, as done by Torbert et al., by the fact that the spacecraft do not progress further in the normal direction with respect to the magnetopause, so remaining inside it ("stagnation"), with even a backward motion around t = 28 s. Fig. 3 summarizes the evolution of the main physical parameters during the interval under study, where it can be seen that the region where the plasma properties change is not identical to the magnetic field reversal region but is close to its first part, and slightly before it.

The magnetopause is non-stationary and non-planar

Comparison of normals

The most common method to analyze a magnetopause crossing is the Minimum Variance Analysis (MVA), which was introduced with the first measurements of the magnetic field in space ([START_REF] Sonnerup | Magnetopause structure and attitude from explorer 12 observations[END_REF], [START_REF] Sonnerup | Minimum and Maximum Variance Analysis[END_REF]). It is based on the assumption that the boundary is perfectly 1D, i.e. that all isosurfaces are parallel planes, and it provides a single boundary normal based on the magnetic field measurements across the "whole crossing". Years of study of experimental results have shown that this assumption is acceptable as long as sufficiently large scales are considered; the analysis then ultimately amounts to finding the normal of the magnetopause boundary itself and comparing it to a model, e.g. [START_REF] Shue | A new functional form to study the solar wind control of the magnetopause size and shape[END_REF]. But they have also shown that the magnetopause itself has an internal structure which can be complex ([Dorville et al., 2014a], [START_REF] Burch | Electron-scale measurements of magnetic reconnection in space[END_REF]). MVA relies on the Maxwell equation ∇ · B = 0, and on the constancy of the normal component that follows from it for a strictly 1D geometry. This property is sufficient to determine the normal direction as long as this component is the only one that does not vary, i.e.
when the B_T tangential hodogram has a certain curvature: otherwise, two components are constant and B_N = cst is not a sufficient condition to identify the normal direction (this excludes the coplanar case of shocks). If the magnetopause conformed to the simple classical image of a boundary made of a monotonic ramp connecting two homogeneous regions, the strict B_N conservation would be valid on any interval, whatever the number of points. The existence of different sub-layers that can move with respect to each other would not invalidate this property, provided that these sub-layers are all planar and strictly parallel to each other. The existence of non-stationarity should not bring difficulties either, provided that the boundary remains strictly planar everywhere and that its normal direction does not vary in time. The main difficulties therefore come from the departures from planarity and from the absence of time stationarity of the normal direction. Such departures are likely to occur often at the magnetopause, even if only due to the small-scale waves and turbulence that are always present. To fix this difficulty, MVA is usually used on a statistical basis and applied over a sufficiently long interval between two points around the crossing, one considered as assuredly in the magnetosphere and one as assuredly in the magnetosheath. This actually transforms the condition that B_N is constant into the condition that its variance is less than the variance of the other components. A necessary condition for applying this criterion safely is that the ratio between the minimum and intermediate variances is sufficiently small. Another condition that should be checked is that these two variances are really characteristic of the large-scale variation related to the current layer under study and not mainly due to the parasitic small-scale turbulence. When these conditions are not fulfilled, the result actually depends on the position and the size of the "global" interval chosen. The stability of the result is sometimes tested a posteriori, by checking the variations of the observed B_N and by using nested intervals (see for instance [START_REF] Zhang | Neutral sheet normal direction determination[END_REF]). When contradictions occur in one of these two tests, the results are rejected, under the assumption that the real local normal should not depend on time inside the crossing. Beyond this constraint of a strictly constant normal direction, MVA also suffers from another limitation that prevents it from being used on short intervals and therefore from resolving the sub-structure of the layer: the interval used must be long enough to reveal the curvature of the B_T tangential hodogram. Any variation obviously tends toward a straight line when the interval duration decreases, thus increasing the inaccuracy of the result in the M-N plane. These limitations encouraged scientists to develop more elaborate methods (a review can be found in [START_REF] Haaland | Four-spacecraft determination of magnetopause orientation, motion and thickness: comparison with results from single-spacecraft methods[END_REF]). They are not all used nowadays, probably because they require more high-time-resolution data and are more difficult to apply than MVA. Let us cite in particular the different GRA methods (Generic Residue Analysis [START_REF] Sonnerup | Orientation and motion of a plasma discontinuity from single-spacecraft measurements: Generic residue analysis of Cluster data[END_REF]).
These are generalizations of MVA to other parameters than just B. Although generally more efficient than MVA, these methods rely on conservation laws (fields and plasma) that also require planarity (1D variations) to be valid. They therefore suffer from most of the limitations of MVA for investigating sub-layers. In addition, they require stationarity (∂_t = 0). The BV method [Dorville et al., 2014b] mixes magnetic field (B) and velocity (V) data and is based on different grounds, but still in the same "global layer" spirit. It has been shown to give accurate normal determinations in a statistical study [START_REF] Dorville | Magnetopause orientation: Comparison between generic residue analysis and bv method[END_REF]. Nevertheless, it is not perfectly suited either for analyzing intervals much shorter than the global width of the current layer (in spite of the excellent time resolution of the MMS data). In any case, all the methods mentioned here assume the boundary is locally a plane. This assumption may be questionable due to local deformations of the surface, such as surface waves. Confirmation is given by the numerical simulations of reconnection or of the Kelvin-Helmholtz instability ([START_REF] Aunai | Orientation of the x-line in asymmetric magnetic reconnection[END_REF], [START_REF] Chen | Tearing instability, Kelvin-Helmholtz instability, and magnetic reconnection[END_REF], [START_REF] Dargent | Kinetic simulation of asymmetric magnetic reconnection with cold ions[END_REF], [START_REF] Miura | Nonlocal stability analysis of the mhd kelvinhelmholtz instability in a compressible plasma[END_REF]), as well as by some experimental observations [START_REF] Blagau | A new technique for determining orientation and motion of a 2-D, non-planar magnetopause[END_REF]. For the crossing investigated in the present paper, MVA was first applied to the global interval. It shows that the three eigenvalues are not well separated, the maximum variance being clearly larger than the two others, but these two others being rather similar (ratio 1.9). This means that the normal might not be precisely determined. Nevertheless, we obtain N_MVA = [0.811, 0.536, -0.234], which is close to the normal obtained with the [START_REF] Shue | A new functional form to study the solar wind control of the magnetopause size and shape[END_REF] model, N_Shue = [0.854, 0.519, -0.043]. The angle between the two normals is 11°, indicating that in this case the "global" magnetopause is probably not far from the standard paraboloid shape assumed by Shue et al. As MVA, as we use it, is a single-spacecraft technique, one can compare the MVA normals derived from the data on each of the four spacecraft. As the spacecraft are actually very close to each other, they measure very similar magnetic fields, and the angle between each normal and the average normal is indeed less than 1°. Looking at the magnetic data in Fig. 2, the global crossing can be guessed to consist of a first current layer between, typically, t = 10 s and 20 s, followed by a backward motion later, with only a partial entrance into the magnetopause between t = 25 s and 30 s. For confirming or disproving such a guess, one has to investigate the internal structure of the magnetopause layer in more detail and look for possible sub-structures. For this purpose, let us first compute MVA on shorter intervals. Between t = 10 s and 20 s, we obtain (on MMS1) N_MVA = [0.591, -0.591, -0.548], which is very different from the previous normal, the angle between both being 73°.
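Before discussing the robustness of this local determination, a minimal numerical sketch of the MVA computation used here is given below (single spacecraft, magnetic field only); the field values are synthetic, not MMS data.

```python
# Minimal sketch of Minimum Variance Analysis (MVA) on the magnetic field.
# B is an (N, 3) array of measurements over the chosen interval (synthetic
# here); the normal estimate is the minimum-variance eigenvector.
import numpy as np

def mva(B):
    """Return (normal, eigenvalues) from minimum variance analysis of B."""
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False, bias=True)   # magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # minimum-variance direction
    if normal[0] < 0:                        # fix an (arbitrary) sign convention
        normal = -normal
    return normal, eigvals

# Synthetic example: quasi-constant B_N, reversing B_L, curved B_L-B_M hodogram.
t = np.linspace(-1, 1, 400)
rng = np.random.default_rng(1)
B = np.column_stack([
    2.0 + 0.2 * rng.standard_normal(t.size),   # quasi-constant "normal" component
    10.0 * np.tanh(3 * t),                     # reversing component
    5.0 / np.cosh(3 * t),                      # bump giving the hodogram curvature
]) + 0.2 * rng.standard_normal((t.size, 3))

n, lam = mva(B)
print("normal ~", np.round(n, 3),
      " eigenvalue ratio lambda_int/lambda_min =", round(lam[1] / lam[0], 1))
```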
Let us note that changing slightly the choice of the beginning and ending times of this MVA interval does not change much this conclusion. As the ratio between minimum and intermediate eigenvalues is again not much larger then 1 (2.6), MVA is quite questionable and one can wonder whether this determination is just erroneous or if such a large difference can actually exist between the local and the global normals. Taking advantage that, beyond B, all the other physical parameters are measured at the same time, it is possible to use the particle data [START_REF] Pollock | Fast plasma investigation for magnetospheric multiscale[END_REF] to analyze the crossing with the BV technique [Dorville et al., 2014b]. The hodogram (Fig. 4) is almost a straight line, without a clear curvature, but this does not prevent the BV method from working, the fit of this hodogram by a very elongated ellipse remaining quite acceptable. The BV program automatically determines the optimum interval for its fitting procedure, which is between, unsurprisingly, t =14s and 18s. The normal obtained is then (on MMS1): N BV = [0.838, 0.506, -0.205], which is only 9°from the Shue et al normal. This result is much more likely than the MVA one. Thickness of the magnetopause A possible byproduct of the BV method is an estimation of the thickness of the current layer of the magnetopause and of its normal velocity, but it is worth noticing that these estimations have to be taken with caution. The BV program provides, in its present version an estimated thickness of 30 km on MMS1 and MMS2 and 40 km on MMS3 and MMS4, which is smaller than the thermal ion Larmor radii (which vary from ≈ 140 km in the magnetosphere to ≈ 110 km in the magnetosheath). It also provides an estimated normal velocity of 8 km.s -1 for MMS1 and MMS2 and 10 km.s -1 for MMS 3 and MMS4, which is much smaller than the normal Alfvén velocity (36 to 170 km.s -1 ). These results being noticeably smaller than the values commonly observed, we have used other methods to check them. These methods provide more likely results of about 200 km for the thickness and 50 km.s -1 for the normal velocity. The first calculation is the same as done in the BV method, but also similar to those used in [START_REF] Paschmann | The magnetopause and boundary layer for small magnetic shear -Convection electric fields and reconnection[END_REF] and [START_REF] De Keyser | Trying to bring the magnetopause to a standstill[END_REF], which consists in integrating the normal ion velocity V in over time to obtain the abscissa x(t), but using a different normal which is likely to be more precise (see in further sections how we have obtained this normal). The second calculation makes use of the four-spacecraft gradient determination. The abscissa along the normal is obtained by integrating the quantity δx = Y /Y , where Y is a scalar variable and where Y represents the projection of ∇Y on the normal direction (the normal direction being determined in the same way as above). The spatial derivatives in the different directions are estimated by linear interpolations from the multi-point measurements (here 4 spacecraft). This can be done by methods similar to the well-known "curlometer", which is very often used to calculate the electric current density [START_REF] Chanteur | Spatial Interpolation for Four Spacecraft: Theory[END_REF]. We have taken here Y = B L , which is the component of B that varies most during the crossing. Fig. 5 shows the comparison between the two results. 
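For reference, a minimal sketch of the two integrations compared in Fig. 5 could look as follows; the function names are ours, uniform sampling and consistent units are assumed, and the gradient-based version reads the integrated quantity as δx = δY / (N • ∇Y):

```python
import numpy as np

def abscissa_from_vn(t, v_ion, normal):
    """Integrate the normal ion velocity to obtain the abscissa x(t) along the
    normal (trapezoidal rule; the origin of x is arbitrary).  Illustrative sketch."""
    vn = v_ion @ normal
    dx = 0.5 * (vn[1:] + vn[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(dx)))

def abscissa_from_gradient(Y, gradY, normal, eps=1e-3):
    """Integrate dx = dY / (N . grad Y), with Y a scalar such as B_L and gradY the
    four-spacecraft gradient estimate.  The small shift eps avoids the divergences
    that occur when the denominator crosses zero slightly before the numerator."""
    Yp = gradY @ normal
    Yp = np.where(np.abs(Yp) < eps, np.where(Yp >= 0, eps, -eps), Yp)
    dx = np.diff(Y) / (0.5 * (Yp[1:] + Yp[:-1]))
    return np.concatenate(([0.0], np.cumsum(dx)))
```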
Both results look quite compatible during the crossing of the main current layer and lead to the same value of ≈ 200 km for its thickness. This similarity validates the hypothesis which is done in the BV method that the flow through the boundary is negligible. Nevertheless, the two results clearly depart at later times. This is due to a very strong dependence of the result, with the BV method, on the quality of the normal determination [Dorville et al., 2014b]. A small uncertainty in the normal direction determination can draw a large variation of the V n component because the tangential component of the velocity is much larger than the normal one (see Fig. 3). With a magnitude of the velocity of about ≈ 300 km.s -1 , an uncertainty of 10°on the normal direction corresponds to an uncertainty of ≈ 50 km.s -1 for the normal velocity, and an uncertainty of about ≈ 200 km for the thickness. It is so quite understandable that, with a normal valid in the 14-18s interval, the inaccuracy increases very fast at later times where this normal is no more valid. The method based on gradients does not present this difficulty: it is much less sensitive to the accuracy of the normal determination. Nevertheless, we had also to add a caution to make it work correctly: because of various small accuracy issues, the denominator Y may cancel at a time slightly different from the numerator, which results in short divergences in the result and jumps in the x(t) curve. This has to be corrected by adding adequate small shifts in the denominator. In addition, Fig. 5 clearly gives the confirmation that the spacecraft is going backward inside the magnetopause around t = 27s, as was guessed before. Due to its importance, this technique is under review for further improvements and will be applied to other cases in next studies. In Fig. 6, we have plotted the projection of the ion velocity along the normal obtained by BV, together with the density profile. This evidences an internal structure inside the magnetopause. Two main parts can be observed in the interval t = 14 -18s, where the main plasma gradients are located and which is emphasized by a thick line: in interval (a) a sharp density gradient, with an almost constant V n , followed in interval (b) by a smoother gradient with a normal velocity close to zero. This is in agreement with the sketch drawn in Fig. 3 of [START_REF] Burch | Electron-scale measurements of magnetic reconnection in space[END_REF] which is a possible interpretation of this crossing (although assuming a stationary boundary): a rather straight crossing, followed by a stagnation of the spacecraft inside the boundary. This is confirmed by the observation of energetic ions continuously after 13:05:42 [START_REF] Contel | Whistler mode waves and hall fields detected by mms during a dayside magnetopause crossing[END_REF]. Out of the central interval t = 14 -18s, the curve V n (t) is plotted with a dashed line, to warn the reader that the projection of the velocity is obtained using the BV normal based on this interval and that the validity of this projection, even if correct in the magnetic ramp itself, remains questionable outside of it. Non-stationarity Using timing methods is another classical way for getting information on the boundary properties from multi-spacecraft measurements. We tested CVA (Constant Velocity Analysis), which assumes the boundary is a planar structure encountered by the 4 spacecraft with a constant velocity [Sonnerup et al., 2008a], [Sonnerup et al., 2008b]. 
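As a reference for what follows, the relation solved by CVA can be sketched with a hypothetical helper (it assumes a planar boundary crossed at constant velocity and one well-defined delay per spacecraft pair):

```python
import numpy as np

def cva_normal_velocity(r, t):
    """Constant Velocity Analysis timing (illustrative sketch).

    r : (3, 3) array of separation vectors of spacecraft 2, 3, 4 with respect to
        spacecraft 1 (km); t : the three corresponding crossing-time delays (s).
    Solves r_1a . m = t_1a with m = N / V (the slowness vector of the boundary).
    """
    m = np.linalg.solve(np.asarray(r, dtype=float), np.asarray(t, dtype=float))
    V = 1.0 / np.linalg.norm(m)   # normal speed of the boundary (km/s)
    N = m * V                     # unit normal
    return N, V
```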
As in any other timing method, the analysis is based on the knowledge of the positions of the spacecraft and the measurements of the delays between the signatures of the crossing seen by the four spacecraft. As shown in Fig. 7, these delays are very short with respect to the parasitic variations due to the intrinsic non stationarities, in particular waves and turbulence. If the boundary was stationary, we should find a constant delay between the fields observed by MMS1 and MMS4. On the contrary, it is obvious that the dispersion of the points is not negligible at all with respect to the delay itself. It is worth noticing that we have plotted here the B z component, which is the component that varies most, and for the MMS1-MMS4 pair, for which the delay is maximum. The situation is worst when using the other components and the other spacecraft pairs. This results in a very inaccurate determination of the delays and therefore in a bad determination of the normal direction. The first conclusion is therefore that, in this case, the CVA method cannot be used without much caution. Looking at Fig. 7, we can also derive some hints on the non stationarity of the boundary at different scales. In the beginning of the crossing there are oscillations, evoking the presence of waves, superimposed to the magnetopause variation. This induces variations of the delay on the top of the figure. But there is also a large-scale variation of the delay: on the top of the figure (beginning of the crossing) its mean value is about -0.07s and afterward it goes to -0.15s: the delay is not constant through the crossing. Similar conclusions are obtained with the two other spacecraft. Using an averaging of the delays, one could interpret the large-scale variation as a constant acceleration of the boundary, which would help improving this result [START_REF] Dunlop | Four-point cluster application of magnetic field analysis tools: The discontinuity analyzer[END_REF]. Results of other timing methods, such as CTA (Constant Thickness Analysis) are not presented here, but the same difficulty (small delays with respect to the intrinsic fluctuations) would lead, on this example, to the same difficulties. time (s) from 13:05:30 delays (s) between the magnetic fields The conclusion of these observations is that the magnetic field is not stationary during the crossing by the four MMS spacecraft and therefore the boundary is not the planar stationary discontinuity which is the most simple model for the magnetopause. It is necessary to investigate in more details the geometry and behaviour of the magnetopause. Internal structure: departures from planarity When analyzing a boundary crossing, one most often assumes that this boundary is 1D, i.e. that all parameters vary only in one direction, which is its normal. When this hypothesis of planarity is fully verified, the normal component B n of the magnetic field is strictly constant and this property is used in MVA method to determine a single "global" normal direction (if no other B component is constant in the interval). Nevertheless, when the boundary is shaken by some non stationary effect (either due to varying incident conditions or due to surface instability such as tearing mode or Kelvin Helmholtz), it generally does not remain fully 1D. 
Such departures from planarity can easily be observed in numerical simulations of reconnection (see for instance [START_REF] Dargent | Kinetic simulation of asymmetric magnetic reconnection with cold ions[END_REF], which will be used later in the paper) or, less easily, they can be guessed from data (see the magnetopause reconstructions in [START_REF] Hasegawa | Optimal reconstruction of magnetopause structures from Cluster data[END_REF] and [START_REF] De Keyser | Empirical Reconstruction[END_REF]). These departures mean that MVA is not suited to such cases and that the meaning of a global normal direction becomes unclear. One way of dealing with these cases is to try to determine, when possible, a "local normal", possibly varying along the crossing, instead of a single "global" one. Local Normal Analysis We introduce here a new method, which we call LNA (Local Normal Analysis), based on the independent measurements of B (from field data) and j (from particle data), and which allows determining a normal that can vary along the crossing. Mathematically speaking, a local normal direction can be defined wherever all plasma parameters depend on space only through a single scalar function s(x, y, z) of the three coordinates. This ensures that the gradients of all parameters are parallel to each other at any point, this common "normal" direction possibly depending on the point considered. The direction N is given by: N = ∇s / |∇s| (1) In a cylindrical geometry for instance, all quantities depend on space only through the radius r, so that all gradients are everywhere parallel to the radial direction. Of course, this direction varies from one point to another in the azimuthal direction. For any vector field U verifying this property, one can write the curl as ∇ × U = ∇s × d_s U = |∇s| N × d_s U (2) where d_s U stands for the derivative of U with respect to s. When this is applied to the magnetic field, it shows that the current density is perpendicular to the normal (neglecting the displacement current). When applied to the electric field, it shows that ∂_t B is perpendicular to the normal, using the Maxwell-Faraday equation. A simple cross product between these two vectors is then a priori sufficient to provide the normal direction N = (j × ∂_t B) / |j × ∂_t B| (3) When both parameters j and B are independently determined with a sufficient accuracy, this expression can provide a simple and efficient way of determining the local normal N at each time and for a single spacecraft. It is worth noticing that this method does not rely on ∇ • B = 0 and thus on the fact that one component (and only one) is constant: it is therefore not limited to sufficiently rotational cases. For the first time in space history, MMS provides independent -and generally reliable- measurements for j and B [START_REF] Torbert | Estimates of terms in ohm's law during an encounter with an electron diffusion region[END_REF], since we can compute a high resolution current density from the particle data [START_REF] Pollock | Fast plasma investigation for magnetospheric multiscale[END_REF]. On previous space missions we used to work only with the current density obtained from the magnetic field, with the well-known curlometer technique, because the particle instruments had neither the necessary accuracy nor the necessary time resolution to do it.
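A minimal sketch of Eq. (3) is given below; the time derivative is obtained here from a local cubic fit (a standard Savitzky-Golay filter is used as a stand-in for the smoothing described hereafter), uniform sampling is assumed and the function names are ours:

```python
import numpy as np
from scipy.signal import savgol_filter

def smoothed_dBdt(B, dt, window_s=1.6):
    """Time derivative of B (N, 3) from a local cubic fit, assuming uniformly
    sampled data; window_s is the smoothing time scale in seconds (sketch)."""
    win = max(5, int(round(window_s / dt)) | 1)   # odd window length > polyorder
    return savgol_filter(B, win, polyorder=3, deriv=1, delta=dt, axis=0)

def lna_normal(j, dBdt):
    """Local Normal Analysis, Eq. (3): N = (j x dB/dt) / |j x dB/dt|, evaluated
    sample by sample (the sign of N is not constrained by the method)."""
    c = np.cross(j, dBdt)
    n = np.linalg.norm(c, axis=-1, keepdims=True)
    return c / np.where(n > 0.0, n, 1.0)
```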
On MMS it has been shown that both calculations of the current show a global fairly good agreement (see [START_REF] Contel | Whistler mode waves and hall fields detected by mms during a dayside magnetopause crossing[END_REF] who computed the currents for the same time period). It is worth noticing that this new method has to be scale dependent: in the present program, this dependence is crudely controlled by the way the variables are smoothed before use. Since the method relies on time derivatives, this smoothing has an important role in the result. Here, the components of the magnetic field are smoothed with a local cubic fit, which is convenient for getting the time derivatives analytically (The smoothing is performed on the same timescale as the previous gaussian filtering). Going to large-scale smoothing should allow retrieving the classical notion of global normal. On the contrary, going to very short scale smoothing would provide the wave vectors of the different waves encountered (which can be considered as "parasitic" for the present kind of study). This step could be improved in the future (by using for instance a Fourier filtering instead of a smoothing). Fig. 8 shows what the results look like when running the "Local Normal Analysis" (LNA) method on the case presented in Fig. 2 without further precaution. The data have been smoothed over 1.6 seconds (the global interval being of 1min). This time scale is a good compromise for this case: it is significantly shorter than the global crossing time (so giving access to the internal structure), and long enough to get rid of most high frequency turbulence. One can see that this figure appears almost unintelligible in these conditions: apart from a short period about t = 15s where the normal appears relatively stable (and where its direction will be confirmed by another method hereafter), it appears highly fluctuating and apparently random. The reason can easily be understood: the method provides the local normal under the hypothesis that this normal exists, i.e. that the variations are locally 1D. As, at this stage, there is no test of this hypothesis, one gets a result everywhere, even where it is not verified and where the result is thus meaningless. An additional test of locally 1D variations is therefore necessary to make the LNA method complete. It will be the subject of the next sections. Test of the local planarity The best test for determining the dimensionality of observed variations demands multi-point measurements. It has been proposed by [START_REF] Shi | Dimensional analysis of observed structures using multipoint magnetic field measurements: Application to Cluster[END_REF] for Cluster data. This method, called MDD (Minimum Directional Derivative) analysis makes use of magnetic field data, although it is not based on specific properties of this field. It actually has been little used with Cluster, most of the authors preferring to stay in the purely 1D hypothesis and the simple notion of a global normal supposed to be determined by MVA. But it is nowadays attracting increasing interest for analyzing the MMS data (see for instance [START_REF] Chen | Electron diffusion region during magnetopause reconnection with an intermediate guide field: Magnetospheric multiscale observations[END_REF]) because of the short separation between spacecraft that allows a better determination of the local gradients. 
In a recent paper, [START_REF] Denton | Motion of the mms spacecraft relative to the magnetic reconnection structure observed on 16 october 2015 at 1307 ut[END_REF] even applied this MDD method to a magnetopause crossing within the same global interval shown in Fig. 1 as the crossing analyzed here, but a bit later. The MDD method consists in diagonalizing the matrix L = G • G^T, where G = ∇B, the superscript T indicates matrix transposition, and the spatial derivatives are computed as explained before. The largest eigenvalue λ_1 corresponds to the largest derivative for the ensemble of the B components. When this eigenvalue is much larger than the two other eigenvalues, it means that all B components vary in one single direction, which is given by the corresponding eigenvector α_1, i.e. that the variation is 1D, with the normal direction N = α_1. When the two largest values λ_1 and λ_2 have the same order of magnitude, while the third one λ_3 is much smaller, it means that the problem is 2D, the variations occurring in the plane (α_1, α_2), α_3 then being the direction of invariance. When the three eigenvalues have the same order of magnitude, it means that the B variations are fully 3D. A modified MDD method has been proposed by [START_REF] Denton | Test of methods to infer the magnetic reconnection geometry from spacecraft data[END_REF] (see also a test in simulation in [START_REF] Denton | Test of Shi et al. method to infer the magnetic reconnection geometry from spacecraft data: MHD simulation with guide field and antiparallel kinetic simulation[END_REF]) to avoid the effects of possible offsets and calibration errors in the data. These errors might have a noticeable impact when the method is used to compute the velocity of a structure [START_REF] Denton | Test of methods to infer the magnetic reconnection geometry from spacecraft data[END_REF], but, as this is not what we do here, we use only the original version of MDD in the present paper. Nevertheless, this point of view may have to be reconsidered for the generalized MDD method that we propose hereafter, because such errors certainly have a much larger effect when using the electric field data than with the magnetic field alone. In order to visualize more easily the effective dimensionality of the variations, we have introduced three parameters, which can be used as proxies: D_1 = (λ_1 - λ_2) / λ_1 (4) D_2 = (λ_2 - λ_3) / λ_1 (5) D_3 = λ_3 / λ_1 (6) These three parameters vary between 0 and 1 and their sum is equal to 1. For D_1 = 1 and D_2 = D_3 = 0, variation happens only in one direction: the geometry can be called "purely 1D". For D_2 = 1 and D_1 = D_3 = 0, the amplitudes of the variations are equal in two directions: it is what we call the case "purely 2D". For D_3 = 1 and D_1 = D_2 = 0, the amplitudes of the variations are equal in the three directions: it is what we call "purely 3D". Of course, all intermediate situations are possible. Let us consider, for instance, a flux rope with λ_1 = 5, λ_2 = 1 and λ_3 = 0.1, which gives the dimensions D_1 = 0.8, D_2 = 0.18 and D_3 = 0.02. The structure has a slightly 2D character since D_2 is not negligible, but D_1 > D_2 indicates that the tube is strongly flattened in one direction: this makes the transition between 2D (circular tube) and 1D (tube infinitely flattened).
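For illustration, the MDD computation and the proxies of Eqs. (4)-(6) can be written as follows, assuming the 3x3 gradient tensor of B has already been estimated from the four-point measurements (function name ours):

```python
import numpy as np

def mdd_single_time(G):
    """MDD analysis for one time sample (illustrative sketch).  G is the 3x3
    gradient tensor of B (G[i, j] = dB_j / dx_i).  Returns the eigenvalues sorted
    in decreasing order, the corresponding eigenvectors (as columns) and the
    dimensionality proxies D1, D2, D3."""
    L = G @ G.T
    lam, alpha = np.linalg.eigh(L)            # ascending order
    lam, alpha = lam[::-1], alpha[:, ::-1]    # lambda_1 >= lambda_2 >= lambda_3
    D1 = (lam[0] - lam[1]) / lam[0]
    D2 = (lam[1] - lam[2]) / lam[0]
    D3 = lam[2] / lam[0]
    return lam, alpha, (D1, D2, D3)

# When D1 is close to 1, the variations are 1D and alpha[:, 0] is the local normal;
# when D3 << D2, they are close to 2D and alpha[:, 2] is the invariance direction.
```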
Such structures have been observed and studied by [START_REF] Shi | Spatial structures of magnetic depression in the earth's high-altitude cusp: Cluster multipoint ob-servations[END_REF] and [START_REF] Shi | Solar wind entry into the high-latitude terrestrial magnetosphere during geomagnetically quiet times[END_REF] on Cluster and [START_REF] Yao | Observations of kinetic-size magnetic holes in the magnetosheath[END_REF] on MMS. When applying the MDD Analysis to the interval under study, the three eigenvalues obtained are quite similar to those of the Fig. 1 of [START_REF] Denton | Motion of the mms spacecraft relative to the magnetic reconnection structure observed on 16 october 2015 at 1307 ut[END_REF]. These results are plotted in Fig. 9 using the three D i parameters. It must be kept in mind that the D i coefficients deriving from MDD give a local measurement of the dimensionality at the scale which has been selected by the smoothing. Our data have been smoothed on 1.6 s, therefore the wave structures superimposed on the magnetopause crossing are mostly removed. It can be observed that the 1D variations are generally dominant but that 2D and 3D variations are also present in the interval. It is worth noticing that, in the regions of 2D variations, the direction of invariance α 3 is determined by the MDD method, which may be an important information for numerical modeling purposes. In the regions where D 1 ≈ 1, the normal can be determined by N M DD = α 1 . In Fig. 10, the angular distance of this MDD normal with the reference N Shue normal is plotted, for the regions where D 1 > 0.9 (thin line) and for D 1 > 0.98 (thick line). An additional caution has been taken in this figure: we have discarded the regions where there are no significant magnetic field variations (|∂ t (B)| 2 less than 1/10 of its maximum value) because we are not interested in the direction of the gradients for these small variations: they are more likely related to wave and turbulence rather than to the large-scale current layers. In the remaining regions, the results of our LNA have been over-plotted for comparison (in blue). One can observe that, as expected, the results obtained by the two methods are generally close to each other when D 1 ≈ 1, and that they diverge from each other for smaller values of D 1 . For the sake of clarity, we have isolated the two intervals, limited by dashed lines in the figure, where D 1 > 0.98 and which are long enough: interval 1 from 13.8 to 16.8, and interval 2 from 27.4 to 28.4. If we compute the averaged normals on these intervals, we find that the two normals make a 4°angle in interval 1 and 7°angle in interval 2. Considering for instance the normal determined with MDD, it is: N 1 = [0.925, 0.124, -0.355] for interval 1 and: N 2 = [0.872, 0.473, -0.121] for interval 2. Therefore, during the small incursion into the magnetopause which is observed around t = 28s, the normal is different from the normal observed during the large crossing. The two normals are separated by 25°, and the interval 2 normal is closer to the nominal Shue model (which assumes the magnetopause is a paraboloid) than the interval 1 normal. Nevertheless, one can also observe that, at some points (see t ≈ 22 or t ≈ 29), the results can be significantly different (with fast variations for LNA), while D 1 is not much smaller than unity. A possible reason for these differences may be the use of different current densities: LNA uses the particle current density, whereas MDD is based on the magnetic field. 
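The selection used in Fig. 10 and in the comparisons above can be written, for illustration, as follows (thresholds as quoted in the text; function names ours):

```python
import numpy as np

def select_1d(D1, dBdt, d1_min=0.98, power_frac=0.1):
    """Boolean mask keeping the samples where the variations are close to 1D and
    where the magnetic variation is significant (|dB/dt|^2 above a fraction of
    its maximum over the interval).  Illustrative sketch."""
    power = np.sum(np.asarray(dBdt) ** 2, axis=-1)
    return (np.asarray(D1) > d1_min) & (power > power_frac * power.max())

def angle_deg(n1, n2):
    """Angle between two normals, ignoring the arbitrary sign of each of them."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))
```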
These departures may also indicate that, sometimes, the layer is 1D in the sense of MDD, but not in the sense of LNA. The physical reasons for these discrepancies will be investigated in the next subsection, where the two analysis methods have been tested in a numerical simulation. Tests of the MDD and LNA methods on a numerical simulation and generalization of MDD For testing the MDD and LNA methods, we use a 2D numerical PIC simulation published in [START_REF] Dargent | Kinetic simulation of asymmetric magnetic reconnection with cold ions[END_REF]. Note that this simulation of reconnection has no relation with the above experimental case. In this simulation, we have mimicked various spacecraft crossings of the magnetopause layer and treated the data by both the MDD and LNA methods. The crossing used in this paper is shown in Fig. 11 where a map of the magnetic field in the simulation is plotted. The only difference with the real spacecraft data is that the spatial derivatives have been estimated directly from the simulation grid instead of being estimated from the 4-point measurements of the MMS irregular tetrahedron. Fig. 12 shows the results for the crossing shown in Fig. 11, in the same format as Fig. 10, with the same criterion on |∂ t (B)| 2 . It can be seen that MDD determines a normal which is, as expected, close to the y direction, with a clear regular variation which finely fits the shape of the exhaust region in the simulation. It is worth noticing that the B variations are shown to be almost 1D everywhere in the layer, even in the region relatively close to the X point where the field lines are clearly not straight lines.Our LNA result is quite consistent, in general, with this one. Nevertheless, one can once again observe that the two results are not perfectly identical: at some points (see t = 41 -43) where D 1 is very close to unity, the difference between the two results is significant. The LNA result can even include a non negligible z component (not shown), which is inconsistent with the 2D simulation. Although the discrepancies remain generally small, they are to be understood because, for a fully 1D variation, it is clear that j and ∂ t (B) should be strictly tangential and the LNA method should work perfectly. The MDD local normals are plotted also in Fig. 11, where it is clear that the local normal varies along the crossing. These discrepancies point out a weak point in the basic MDD method, which is based on the magnetic field only: when D 1 ≈ 1, it indeed guarantees that the B variations are 1D, so that j is tangential, but it does not guarantee that the other plasma variations are also 1D. In particular, if E variations are not 1D, there is no reason why ∂ t B should be strictly tangential, which is necessary for LNA to work. In low beta regions, one can guess that the magnetic field controls all the other plasma parameters, so that everything is likely to be 1D when the magnetic field is 1D. It is probably the reason why the discrepancies remain quite limited. But in the regions where pressure effects are important (in the central part of the exhaust for instance in reconnection geometries), it is not certain that the 1D variations of B actually ensure the planarity for all the plasma parameters. The fluid equations of momentum, for ions and electrons, clearly show in particular that the variations of the parallel components of the fluid velocities u i and u e are determined by the pressure forces. 
When these pressure effects are not negligible, the parallel velocities are therefore not constrained by the geometry of the magnetic field variations. Fortunately, the MDD can easily be generalized. Instead of considering the 3*3 matrix G = ∇B, one can introduce variations of all the needed parameters G = ∇S, where S is a vector of dimension N, including not only the 3 components of B, but also any of the other available parameters: the components of the electric field, those of the ion and electron velocities, those of the pressure tensors, as well as the scalars as the density, etc. In these conditions, G is a 3 * N tensor, but L remains 3*3 and the rest of the method can remain unchanged. A normalization has to be introduced in the computation so that the weight of the different physical quantities is equivalent: the Frobenius norm of ∇B is computed as a function of time, and the magnetic field is normalized by the maximum of the norm over all the interval. And the same is done for the electric field. In the simulation data, such a generalization has been done by just introducing the electric field vector in addition to the magnetic one. The result, which can be compared with the result of Fig. 12 is presented in Fig. 13. One can see that the generalized MDD method allows evidencing a 2D character of the plasma in a small region in the current layer, close to the X point, that was not evidenced by the only B variations. D 1 has more contrasted variations with the non-generalized method, so that the same threshold is now more demanding. This leads to reject some normal determinations in the regions where the discrepancy between the LNA and MDD normals was the most important (with a noticeable z component for the LNA normal in particular) and where D 1 has now smaller values. Concerning the magnetopause crossing presented in this paper, preliminary tests have been done of the generalization of MDD. They are not presented here because they have not proved yet to be efficient. When applying the same generalization as in the simulation (addition of the E data), the result is not conclusive. The reason seems to be purely experimental: as the calibration of electric antennas is a difficult issue, the precision on the different components of E [START_REF] Ergun | The axial double probe and fields signal processing for the mms mission[END_REF] is not sufficient to calculate safely the tensor ∇E from the four spacecraft measurements: even the basic Maxwell-Faraday law cannot be verified from the data because the differences between spacecraft are dominated by the differences between offsets rather than by the physical differences. The problem is still complicated by the presence, on the magnetospheric side, of very strong electrostatic bursts of short period, which can hardly be eliminated by the smoothing process and which make difficult obtaining the small transverse field induced by the current layers we are interested in. The attempts to use the MDD method modified by [START_REF] Denton | Test of methods to infer the magnetic reconnection geometry from spacecraft data[END_REF] have not allowed hitherto to overcome this difficulty. Generalizing with the ion velocity V i does not pose similar problems. This has been done, but this test did not lead to conclusive results either: introducing the V i variations does not change significantly the result obtained with B alone. Improving the generalized MDD method to make it efficient with the experimental observations is still a work in progress. 
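A sketch of this generalization, here for two fields (B and E) and with the normalization by the maximum over the interval of the Frobenius norm of each gradient, could look as follows (an illustration under these assumptions, not the code actually used):

```python
import numpy as np

def generalized_mdd(grad_list):
    """Generalized MDD over an interval (illustrative sketch).  grad_list is a
    list of arrays of shape (T, 3, 3), one per field (e.g. grad B and grad E).
    Each field is normalized by the maximum Frobenius norm of its gradient over
    the interval, the tensors are stacked into a 3 x N matrix G at each time, and
    L = G G^T is diagonalized as in the basic method."""
    scaled = []
    for G in grad_list:
        fro = np.linalg.norm(G, axis=(1, 2))           # Frobenius norm vs. time
        scaled.append(G / max(float(fro.max()), 1e-30))
    G_all = np.concatenate(scaled, axis=2)             # (T, 3, 3 * n_fields)
    results = []
    for Gt in G_all:
        lam, alpha = np.linalg.eigh(Gt @ Gt.T)
        lam, alpha = lam[::-1], alpha[:, ::-1]
        D = ((lam[0] - lam[1]) / lam[0], (lam[1] - lam[2]) / lam[0], lam[2] / lam[0])
        results.append((lam, alpha, D))
    return results
```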
Conclusion and perspectives For investigating the magnetopause internal structure, one cannot be satisfied with the simplest hypothesis of a perfectly stationary and mono-dimensional layer. We give here evidence of departures from these two simple hypotheses on a magnetopause crossing by MMS. The departure from planarity is particularly investigated, introducing a new single spacecraft method, called LNA, used together with an existing multi-spacecraft method called MDD [START_REF] Shi | Motion of observed structures calculated from multi-point magnetic field measurements: Application to cluster[END_REF]. As LNA can give a reliable result only when the variations are locally 1D, it can indeed be usefully combined with MDD, which allows selecting the intervals where this local 1D hypothesis is verified. We have shown that the basic MDD method, which is based on the B variations only, is not always sufficient for that: even when it indicates variations close to perfectly 1D, the normal provided by LNA can show small but significant differences with the corresponding normal coming from MDD itself. We therefore propose a generalization of MDD using more data. The idea has been tested by adding the E variations to the B ones, with data coming from a numerical simulation: the test has shown that this addition is sufficient for solving, at least partly, the problem. It remains to be investigated more thoroughly with spacecraft data. It is worth emphasizing once again that this paper presents the different methods accessible by MMS for investigating the internal structure of the magnetopause only from a case study: benchmarking these methods and comparing their performances on a statistical basis remain to be done in future studies. Pending these studies, Table 1 shows that the case presented here is not exceptional and seems rather typical. We analyze six cases in the same way as above, six of them being in the same day as the example of this paper. And we show that the two determinations, LNA and MDD, when restricted to strong criteria for D 1 and for the amplitude of the B variation, are globally consistent, even though they both vary with respect to the "global" MVAB normal (determined in a short interval including the main magnetic gradient). They both show to be often clearly different from this global MVAB determination. The choice of severe criteria has been done here in order to limit as much as possible the effects of non planarity and the role of the superposed turbulence and therefore make the different cases more comparable. However, the results are not perfect in the sense that the distance between the LNA and MDD determinations, which could be expected to be negligible, are generally not smaller than the local variations of each determination, as estimated by the standard deviation of their direction with respect the global MVAB result. This imperfection is likely to be due to the same reason as explained above: using MDD only on the magnetic field does not guarantee the real mono-dimensionality of the physics. Generalizing the method to the electric field should solve this problem if the electric field measurement was accurate enough to allow such a generalization. The MDD method, contrary to LNA, does not make use of Maxwell equations. In return, it loses the single-spacecraft character of LNA and so part of its locality. is a priori no method that would be strictly single-spacecraft and which would allow to test the local 1D hypothesis with a comparable reliability. 
Nevertheless, some simplifying hypotheses could be used, in the future, to discard the non-1D regions with some confidence. If one assumes, for instance, that the observed B variations can be approximated locally as stationary in some frame, we must have, in the observation frame: ∂ t (B) = -V • ∇B (7) where V is the local propagation velocity of the structure. The same property has already been assumed in [START_REF] Shi | Motion of observed structures calculated from multi-point magnetic field measurements: Application to cluster[END_REF], where the propagation velocity of the structures could so be determined. It can be noticed that the red curve plotted in Fig. 5 is an integration of the velocity obtained by this method. The change of slope in the curve around t = 27 indicates a change of the velocity of the boundary and therefore gives a confirmation of the relative back and forth motion of the boundary that was guessed at the beginning of the paper. It seems to also confirm the hypothesis that the flow across the structure is negligible. If true, this may justify Eq. ( 7), the propagation velocity simply being the normal flow velocity. As soon as the property of Eq. ( 7) is valid, it can easily be shown that the two vectors ∂ t (B) and j are perpendicular to each other when the local variation is 1D, since j = n × ∂ N (B) and ∂ t (B) = -V N ∂ N (B). Checking where the two vectors are perpendicular may provide a test of planarity. This is left for further studies. As discussed before, the MDD method gives the normal to a one-dimensional boundary, but it can also give information when the problem is 2D. In this case, the eigenvector associated with the largest eigenvalue α 1 does not give much information, but the eigenvector associated with the smallest eigenvalue, α 3 , indicates the direction in which the problem is quasi-invariant. This direction will have to be compared with the direction obtained by other methods such as [START_REF] De Keyser | Empirical reconstruction and long-duration tracking of the magnetospheric boundary in single-and multi-spacecraft contexts[END_REF]. Knowing experimentally the invariant direction may be important for comparing the data with 2D numerical simulations. Of course, α 3 is approximately in the plane perpendicular to N Shue , since the effective normal, given by α 1 is not much different from N Shue . In this plane, investigating the actual direction of α 3 deserves to be explored further. It may provide information, for instance, on the local fluctuations at different scales, whatever their cause: reconnection [START_REF] Aunai | Orientation of the x-line in asymmetric magnetic reconnection[END_REF], Kelvin-Helmholtz [START_REF] Miura | Nonlocal stability analysis of the mhd kelvinhelmholtz instability in a compressible plasma[END_REF], [START_REF] Belmont | Advances in magnetopause kelvin-helmholtz instability studies[END_REF]) or any other phenomenon. Finally, we have reported in Fig. 3 the intervals where the B variations are mainly 1D (D 1 > 0.98) or 2D (D 3 < 0.05D 2 ) with a colour code. Of course these criteria leave many intervals where the dimension of the problem is not determined, either because the variations are too weak and the concept of dimension is meaningless, either because the dimension of the problem is not close to 1D or 2D. The 2D intervals are concentrated in the region where the spacecraft go back into the magnetopause layer which is reached only in the very small interval around t = 28s. 
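The perpendicularity test suggested above could be implemented, for instance, with a helper of the following kind (ours; it only provides a necessary condition, valid under the stationarity assumption of Eq. (7)):

```python
import numpy as np

def planarity_proxy(j, dBdt):
    """Single-spacecraft planarity proxy (sketch): in a locally 1D, locally
    stationary layer j ~ N x dB/dN while dB/dt ~ -V_N dB/dN, so j and dB/dt
    should be perpendicular.  Returns |cos(angle)| between them; values close
    to 0 are compatible with a locally 1D variation."""
    num = np.abs(np.sum(j * dBdt, axis=-1))
    den = np.linalg.norm(j, axis=-1) * np.linalg.norm(dBdt, axis=-1)
    return num / np.where(den > 0.0, den, np.inf)
```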
It seems that this incursion is made in a region which is much more complex than the "clean" magnetopause crossing observed at the beginning of the period. The "oscillations" that are seen in the dimension may correspond to the oscillations that are observed on the density. The reason remains to be investigated. Figure 1 . 1 Figure 1. Radial distance from the Earth as a function of time: comparison between MMS orbit (blue line) and Shue magnetopause position computed with ACE data. Figure 2 . 2 Figure 2. GSE Magnetic field components observed on MMS1, October 16th, 2015, beginning time at 13:05:30. Figure 3 . 3 Figure 3. From top to bottom: magnetic field, electron velocity, density, and spectrograms of ions and electrons for the global period studied in the paper. The blue boxes select the regions where the geometry is 1D and the yellow ones the regions where it is 2D (see the discussion at the end of the paper). Figure 4 . 4 Figure 4. Hodogram of the magnetic field in the plane tangential to the magnetopause obtained by BV, and its fit. The tangential directions BT1 and BT2 chosen for the plot are those of intermediate and largest variances, but any rotation would not change the interpretation. The axis scales are in nT. Figure 5 . 5 Figure 5. Abscissas x(t) along the magnetopause normal, as determined by two different methods (see text). The origin is arbitrary. Figure 6 . 6 Figure 6. Comparison of the normal component of the velocity and the density variation (MMS1). The thick lines correspond to the t = 14 -18s interval. The vertical thin lines indicate the limits of the two periods described in the text. Figure 7 . 7 Figure 7. Comparison of the main component (B z ) of the magnetic field (left) and computation of the delay (right) between points having the same B z value. The green vertical line is the average delay. Figure 8 . 8 Figure 8. The three components of the vector N L N A as determined by LNA without 1D selection in GSE frame, with no test of the significance of the result. Figure 9 . 9 Figure 9. The three dimensions resulting of the MDD Analysis as functions of time for the same interval as Fig 8. Figure 10 . 10 Figure 10. On top, the D 1 parameter. Below the angle between the normal determined by MDD (in blue) and the reference normal given by the Shue model. The thin lines correspond to D 1 >0.9. The thick lines correspond to D 1 >0.98. In black, the results of the LNA method have been over-plotted for comparison, with the same convention. The intervals selected by dotted lines refer to the text. Figure 11 . 11 Figure 11. B z component in the numerical simulation superimposed to the magnetic field lines in the simulation plane. The (x, y) components are those of the 2D simulation box. The straight line indicates the simulated crossing trajectory, with the period of time which is studied below over-lined in green, beginning at the bottom of the simulation box and going in the direction of the increasing y. The small arrows are the MDD local normals determined along the trajectory. Figure 12 . 12 Figure 12. Same as Fig. 10 for the crossing in the simulation box shown on Fig. 11. The time is counted from the entrance of the spacecraft in the simulation box which is crossed at constant velocity. The angle is measured with respect to the reference direction, which is here the y direction of the simulation box. The thin lines correspond to D 1 >0.9. The thick lines correspond to D 1 >0.98. Figure 13 . 13 Figure 13. Same as Fig. 
12 for the crossing in the simulation box shown on Fig. 11 when MDD is replaced by MDD generalized to the E field. The three components of the electric field are plotted in the lowest panel for reference.
Table 1. Comparison of the normals obtained by MDD and LNA for the periods given on the left (the duration is indicated in brackets). The table provides the angles (in degrees) of the two types of normals with respect to MVAB and the angle between them. The statistics are done over all the local normals that satisfy D_1 > 0.99 and |∂_t(B)|^2 > 0.5 of its maximum value. The first number corresponds to the mean value and the second one (after ±) corresponds to the standard deviation.
Date | θ_LNA/MVAB | θ_MDD/MVAB | θ_LNA/MDD
2015 10 16 10:20:00 (+120) | 20 ± 3 | 17 ± 8 | 9 ± 6
2015 10 16 10:29:30 (+120) | 56 ± 0.5 | 44 ± 3 | 12 ± 4
2015 10 16 10:36:30 (+120) | 33 ± 0.8 | 21 ± 0.9 | 12 ± 0.4
2015 10 16 10:55:00 (+60) | 12 ± 1 | 11 ± 4 | 3 ± 1
2015 10 16 13:05:30 (+60) | 24 ± 2 | 20 ± 3 | 7 ± 3
2017 01 27 12:05:23 (+70) | 35 ± 19 | 39 ± 14 | 9 ± 6
Acknowledgments The authors thank Olivier Le Contel and Laurent Mirioni for their help in dealing with the MMS data and for fruitful discussions. The French involvement on MMS is supported by CNES and CNRS. All the data used are available on the MMS data server: https://lasp.colorado.edu/mms/sdc/public/about/browse-wrapper/
00177278
en
[ "info.info-ai" ]
2024/03/05 22:32:18
2005
https://inria.hal.science/inria-00177278/file/marchiori_sebag_evobio05.pdf
Elena Marchiori Michèle Sebag email: [email protected] Bayesian learning with local support vector machines for cancer classification with gene expression data This paper describes a novel method for improving classification of support vector machines (SVM) with recursive feature selection (SVM-RFE) when applied to cancer classification with gene expression data. The method employs pairs of support vectors of a linear SVM-RFE classifier for generating a sequence of new SVM classifiers, called local support classifiers. This sequence is used in two Bayesian learning techniques: as ensemble of classifiers in Optimal Bayes, and as attributes in Naive Bayes. The resulting classifiers are applied to four publically available gene expression datasets from leukemia, ovarian, lymphoma, and colon cancer data, respectively. The results indicate that the proposed approach improves significantly the predictive performance of the baseline SVM classifier, its stability and robustness, with satisfactory results on all datasets. In particular, perfect classification is achieved on the leukemia and ovarian cancer datasets. Introduction This paper deals with tumor classification with gene expression data. Microarray technology provides a tool for estimating expression of thousands of genes simultaneously. To this end, DNA arrays are used, consisting of a large number of DNA molecules spotted in a systematic order on a solid substrate. Depending on the size of each DNA spot on the array, DNA arrays are called microarrays when the diameter of DNA spot is less than 250 microns, and macroarrays when the diameter is bigger than 300 microns. DNA microarrays contain thousands of individual DNA sequences printed in a high density array on a glass microscope slide using a robotic instrument. The relative abundance of these spotted DNA sequences in the two DNA and RNA samples may be assessed by monitoring the differential hybridization of the two samples to the sequences on the array. For mRNA samples, the two samples are reverse-transcribed into cDNA, labeled using different fluorescent dyes mixed (red-fluorescent dye Cy5 and green-fluorescent dye Cy3). After these samples are hybridized with the arrayed DNA probes, the slides are imaged using scanner that makes fluorescence measurements for each dye. The log ratio between the two intensities of each dye is used as the gene expression data (cf. [START_REF] Eisen | Dna arrays for analysis of gene expression[END_REF]) expression(gene) = log 2 (int(Cy5)/int(Cy3)), were int(Cy5) and int(Cy3) are the intensities of the two fluorescent dyes. Four main machine learning tasks are used to analyze DNA microarray data: clustering, e.g. for identifying tumor subtypes, classification, e.g. for tumor diagnostic, feature selection for potential tumor biomarker identification, and gene regulatory network modeling. This paper deals with classification. 
Many machine learning techniques have been applied to classify gene expression data, including Fisher linear discriminat analysis [START_REF] Dudoit | Comparison of discrimination methods for the classification of tumors using gene expression data[END_REF], k-nearest neighbour [START_REF] Li | Gene assessment and sample classification for gene expression data using a genetic algorithm/k-nearest neighbor method[END_REF], decision tree, multi-layer perceptron [START_REF] Khan | Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks[END_REF][START_REF] Xu | Artificial neural networks and gene filtering distinguish between global gene expression profiles of barrett's esophagus and esophageal cancer[END_REF], support vector machine (SVM) [START_REF] Brown | Knowledge-based analysis of microarray gene expression data by using support vector machines[END_REF][START_REF] Furey | Support vector machine classification and validation of cancer tissue samples using microarray expression data[END_REF][START_REF] Guyon | Gene selection for cancer classification using support vector machines[END_REF][START_REF] Noble | Support vector machine applications in computational biology[END_REF], boosting and ensemble methods [START_REF] Golub | Molecular classification of cancer: class discovery and class prediction by gene expression monitoring[END_REF][START_REF] Cho | Machine learning in DNA microarray analysis for cancer classification[END_REF][START_REF] Tan | Ensemble machine learning on gene expression data for cancer classification[END_REF][START_REF] Liu | A combinational feature selection and ensemble neural network method for classification of gene expression data[END_REF][START_REF] Dettling | Boosting for tumor classification with gene expression data[END_REF]. A recent comparison of classification and feature selection algorithms applied to tumor classification can be found in [START_REF] Cho | Machine learning in DNA microarray analysis for cancer classification[END_REF][START_REF] Tan | Ensemble machine learning on gene expression data for cancer classification[END_REF]. This paper introduces a method that improves the predictive performance of a linear SVM with Recursive Feature Elimination (SVM-RFE) [START_REF] Guyon | Gene selection for cancer classification using support vector machines[END_REF] on four gene expression datasets. The method is motivated by previous work on aggregration of classifiers [START_REF] Breiman | Bagging predictors[END_REF][START_REF] Breiman | Arcing classifiers[END_REF], where it is shown that gains in accuracy can be obtained by aggregrating classifiers built from perturbed versions of the train set, for instance using bootstrapping. Application of aggregration of classifiers to microarray data is described e.g. in [START_REF] Dudoit | Comparison of discrimination methods for the classification of tumors using gene expression data[END_REF][START_REF] Cho | Machine learning in DNA microarray analysis for cancer classification[END_REF][START_REF] Liu | A combinational feature selection and ensemble neural network method for classification of gene expression data[END_REF][START_REF] Dettling | Boosting for tumor classification with gene expression data[END_REF]. In this paper a novel approach is proposed, for generating a sequence of classifiers from the support vectors of a baseline linear SVM-RFE classifier. 
Each pair of support vectors of the same class is used to generate an element of the sequence, called local support classifier (lsc). Such a classifier is obtained by training SVM-RFE on data consisting of the two selected support vectors and all the support vectors of the other class. The sequence of lsc's provides an approximate description of the data distribution by means of a set of linear decision functions, one for each region of the input space in a small neighbourhood of two support vectors having equal class label. We propose to use this sequence of classifiers in Bayesian learning (cf. [START_REF] Mitchell | Machine Learning[END_REF]). The first technique applies Naive Bayes to the transformed data, where an example is mapped into the binary vector of its classification values. The resulting classifier is called Naive Bayes Local Support Classifier (NB-LSC). The second technique applies Optimal Bayes to the sequence of lsc classifiers. The resulting classifier is called Optimal Bayes Local Support Classifier (OB-LSC). The two classifiers are applied to four publicly available datasets for cancer classification with gene expression. The results show a significant improvement in predictive performance of OB-LSC over the baseline linear SVM-RFE classifier, and a gain in stability. In particular, on the leukemia and ovarian cancer datasets perfect classification is obtained, and on the other datasets performance is comparable to the best published results we are aware of. The rest of the paper is organized as follows. The next two sections describe the baseline and new methods. Section 4 contains a short description of the data. Section 5 reports results of experiments and discusses them. Finally, the paper ends with concluding considerations on research issues to be tackled in future work. Support Vector Machines This section describes in brief SVM-RFE, the local support classifier construction procedure, and the integration of the resulting classifier sequence in Naive Bayes and Optimal Bayes classification. SVM In linear SVM binary classification [START_REF] Vapnik | Statistical Learning Theory[END_REF][START_REF] Cristianini | Support Vector machines[END_REF] patterns of two classes are linearly separated by means of a maximum margin hyperplane, that is, the hyperplane that maximizes the sum of the distances between the hyperplane and its closest points of each of the two classes (the margin). When the classes are not linearly separable, a variant of SVM, called soft-margin SVM, is used. This SVM variant penalizes misclassification errors and employs a parameter (the soft-margin constant C) to control the cost of misclassification. Maximizing the margin allows one to minimize bounds on generalization error. Because the size of the margin does not depend on the data dimension, SVMs are robust with respect to data with high input dimension. However, SVMs are sensitive to the presence of (potential) outliers (cf. [START_REF] Guyon | Gene selection for cancer classification using support vector machines[END_REF] for an illustrative example), due to the regularization term for penalizing misclassification (which depends on the choice of C). Training a linear soft-margin SVM classifier amounts to solving the following constrained optimization problem: min_{w,b,ξ} (1/2) ||w||^2 + C Σ_{i=1..m} ξ_i subject to y_i (w • x_i + b) ≥ 1 - ξ_i and ξ_i ≥ 0, with one constraint for each training example x_i. Usually the dual form of the optimization problem is solved:
min_α (1/2) Σ_{i=1..m} Σ_{j=1..m} α_i α_j y_i y_j (x_i • x_j) - Σ_{i=1..m} α_i such that 0 ≤ α_i ≤ C and Σ_{i=1..m} α_i y_i = 0. SVM requires O(m^2) storage and O(m^3) to solve. The resulting decision function f(x) = w • x + b has weight vector w = Σ_{k=1..m} α_k y_k x_k. Examples x_i for which α_i > 0 are called support vectors, since they define uniquely the maximum margin hyperplane. SVM-RFE The weights w_i provide information about feature relevance, where bigger weight size implies higher feature relevance. In this paper feature x_i is scored by means of the absolute value of w_i. Other scoring functions based on feature weights are possible, like, e.g., w_i^2, which is used in the original SVM-RFE algorithm [START_REF] Guyon | Gene selection for cancer classification using support vector machines[END_REF]. SVM-RFE is an iterative algorithm. Each iteration consists of the following two steps. First, feature weights, obtained by training a linear SVM on the training set, are used in a scoring function for ranking features as described above. Next, the feature with minimum rank is removed from the data. In this way, a chain of feature subsets of decreasing size is obtained. In the original SVM-RFE algorithm one feature is discarded at each iteration. Other choices are suggested in [START_REF] Guyon | Gene selection for cancer classification using support vector machines[END_REF], where at each iteration features with rank lower than a user-given threshold are removed. In general, the threshold influences the results of SVM-RFE [START_REF] Guyon | Gene selection for cancer classification using support vector machines[END_REF]. In this paper we use a simple instance of SVM-RFE where the user specifies the number of features to be selected, 70% of the actual number of features are initially removed, and then 50% at each further iteration. These values are chosen after cross-validation applied to the training set. Local Support Classifiers We propose to describe the distribution of the two classes by means of a sequence of classifiers, generated from pairs of support vectors of SVM-RFE. Each of these classifiers, called local support classifier (lsc), is obtained using data generated from two support vectors of the same class, and all support vectors of the other class. In this way, each classifier uses only a local region near the two selected support vectors when separating the two classes. Each classifier generated from two (distinct) support vectors of the same class provides an approximate description of the distribution of the other class given the two selected support vectors. Before describing the procedure for constructing lsc's, some notation used throughout the paper is introduced. -D denotes the training set, -c denotes the classifier obtained by training a linear SVM on D, -S_p and S_n denote the sets of positive and negative support vectors of c, respectively, -Pair_p and Pair_n denote the sets of pairs of distinct elements of S_p and S_n, respectively. The following procedure, called LSC, takes as input one (s, s') in Pair_p and outputs a linear SVM classifier C_{s,s'} by means of the following two steps: 1. Let X_p = {s, s'} and assign the positive class label to these examples. 2. Let C_{s,s'} be the classifier obtained by training a linear SVM on the data X_p ∪ S_n. An analogous procedure is applied to generate C_{s,s'} from pairs (s, s') in Pair_n. When applied to all pairs of support vectors in Pair_p and Pair_n, LSC produces a sequence of lsc's. Such a sequence of classifiers induces a data transformation, called seq_D, which maps example x to the sequence seq_D(x) of class values C_{s,s'}(x), with (s, s') in Pair_p ∪ Pair_n. The construction of the sequence of lsc's requires computation that grows quadratically with the number of support vectors. However, this is not a severe problem, since the number of examples, hence of support vectors, is small for this type of data.
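For illustration, a minimal sketch of the LSC construction with a plain linear SVM (scikit-learn) is given below; the recursive feature elimination step is omitted for brevity and the function name is ours:

```python
import itertools
import numpy as np
from sklearn.svm import SVC

def build_lscs(X, y, C=10.0):
    """Illustrative sketch (no feature elimination): train the baseline linear SVM,
    then, for every pair of support vectors of one class, train a local support
    classifier on those two points together with all the support vectors of the
    other class."""
    base = SVC(kernel="linear", C=C).fit(X, y)
    sv = base.support_                       # indices of the support vectors
    classifiers = []
    for cls in np.unique(y):
        same = [i for i in sv if y[i] == cls]
        other = [i for i in sv if y[i] != cls]
        for i, k in itertools.combinations(same, 2):
            idx = np.array([i, k] + other)
            classifiers.append(SVC(kernel="linear", C=C).fit(X[idx], y[idx]))
    return base, classifiers

# seq_D(x) is then the vector of predictions of all local support classifiers:
# seq = np.column_stack([clf.predict(X_new) for clf in classifiers])
```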
Furthermore, LSC is applied to each pair of support vectors independently, hence can be executed in parallel. Naive Bayes and Optimal Bayes Classification Naive Bayes (NB) is based on the principle of assigning to a new example the most probable target value, given the attribute values of the example. In order to apply directly NB to the original gene expression data, gene values need to be discretized, since NB assumes discrete-valued attributes. Examples transformed using seq D contain binary attributes, hence discretization is not necessary. Let x be a new example. Suppose seq D (x) = (x 1 , . . . , x N ). First, the prior probabilities p y of the two target values are estimated by means of the frequency of positive and negative examples occurring in the train set D, respectively. Next, for each attribute value x i , the probability P (x i | y) of x i given target value y is estimated as the frequency with which x i occurs as value of i-th attribute among the examples of D with class value y. Finally, the classification of x is computed as the y that maximizes the product p y N i=1 P (x i | y). The resulting classifier is denoted by NB-LSC. Optimal Bayes (OB) classifier is based on the principle of maximizing the probability that a new example is classified correctly, given the available data, classifiers, and prior probabilities over the classifiers. OB maps example x to the class that maximizes the weighted sum C s,s w s,s I(C s,s (x) = y), where w s,s is the accuracy of C s,s over D, and I is the indicator function, which returns 1 if the test contained in its argument is satisfied and 0 otherwise. The resulting classifier is denoted by OB-LSC. Datasets There are several microarray datasets from published cancer gene expression studies, including leukemia cancer dataset, colon cancer dataset, lymphoma dataset, breast cancer dataset, NCI60 dataset, ovarian cancer, and prostate dataset. Among them four datasets are used in this paper, available e.g. at http://sdmc.lit.org.sg/GEDatasets/Datasets.html. The first and third dataset contain samples from two variants of the same disease, the second and last dataset consist of tumor and normal samples of the same tissue. Table 1 shows input dimension and class sizes of the datasets. The following short description of the datasets is partly based on [START_REF] Cho | Machine learning in DNA microarray analysis for cancer classification[END_REF]. Leukemia The Leukemia dataset consists of 72 samples: 25 samples of acute myeloid leukemia (AML) and 47 samples of acute lymphoblastic leukemia (ALL). The source of the gene expression measurements is taken from 63 bone marrow samples and 9 peripheral blood samples. Gene expression levels in these 72 samples are measured using high density oligonucleotide microarrays [START_REF] Ben-Dor | Tissue classification with gene expression profiles[END_REF]. Each sample contains 7129 gene expression levels. Colon The Colon dataset consists of 62 samples of colon epithelial cells taken from colon-cancer patients. Each sample contains 2000 gene expression levels. Although the original data consists of 6000 gene expression levels, 4000 out of 6000 were removed based on the confidence in the measured expression levels. 40 of 62 samples are colon cancer samples and the remaining are normal samples. 
Each sample is taken from tumors and normal healthy parts of the colons of the same patients and measured using high density oligonucleotide arrays [START_REF] Alon | Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays[END_REF]. Lymphoma B cell diffuse large cell lymphoma (B-DLCL) is a heterogeneous group of tumors, based on significant variations in morphology, clinical presentation, and response to treatment. Gene expression profiling has revealed two distinct tumor subtypes of B-DLCL: germinal center B cell-like DLCL and activated B cell-like DLCL [START_REF] Lossos | Ongoing immunoglobulin somatic mutation in germinal center b cell-like but not in activated b cell-like diffuse large cell lymphomas[END_REF]. Lymphoma dataset consists of 24 samples of germinal center B-like and 23 samples of activated B-like. Ovarian Ovarian tissue from 30 patients with cancer and 23 without cancer were analyzed for mRNA expression using glass arrays spotted for 1536 gene clones. Attribute i of patient j is the measure of the mRNA expression of the i-th gene in that tissue sample, relative to control tissue, with a common control employed for all experiments [START_REF] Schummer | Comparative hybridization of an array of 21,500 ovarian cdnas for the discovery of genes overexpressed in ovarian carcinomas[END_REF]. Numerical Experiments The two classifiers NB-LSC and OB-LSC, described in Section 3.1, are applied to the four gene expression datasets the baseline SVM-RFE algorithm. In all experiments the same value of the SVM parameter C = 10 is used, while the number of selected genes was set to 30 for the lymphoma dataset and 50 for all other datasets. These values are chosen by means of cross-validation applied to the training set. Because of the small size of the datasets, Leave One Out Cross Validation (LOOCV) is used to estimate the predictive performance of the algorithms [START_REF] Evgeniou | Leave one out error, stability, and generalization of voting combinations of classifiers[END_REF]. Moreover, while the performance of SVM on the Lymphoma dataset is rather scare (possibly due to the fact that we did not scale the data), OB-LSC obtains results competitive to the best results known (see Table 3). Table 3 reports results of OB-LSC and the best result among those contained nine papers on tumor classification and feature selection using different machine learning methods [START_REF] Cho | Machine learning in DNA microarray analysis for cancer classification[END_REF]. Note that results reported in this table have been obtained using different cross-validation methods, mainly by repeated random partitioning the data into train and test set using 70 and 30 % of the data, respectively. Because the resulting estimate of predictive performance may be more biased than the one of LOOCV [START_REF] Evgeniou | Leave one out error, stability, and generalization of voting combinations of classifiers[END_REF], those results give only an indication for comparing the methods. Only the results on the colon dataset from Liu et al 04 and Dettling et al 03 [START_REF] Dettling | Boosting for tumor classification with gene expression data[END_REF][START_REF] Liu | A combinational feature selection and ensemble neural network method for classification of gene expression data[END_REF] are obtained using LOOCV. The methods proposed in these latter papers use boosting and bagging, respectively. 
The results they obtain seem comparable to OB-LSC. The results indicate that OB-LSC is competitive with most recent classification techniques for this task, including non-linear methods.
Conclusion
This paper introduced an approach that improves the predictive performance and stability of linear SVM for tumor classification with gene expression data on four gene expression datasets. We conclude with two considerations on research issues still to be addressed. Our approach is at this stage still a heuristic, and needs further experimental and theoretical analysis. In particular, we intend to analyze how performance is related to the number of support vectors chosen to generate lsc's. Moreover, we intend to investigate the use of this approach for feature selection, for instance whether the generated lsc's can be used for ensemble feature ranking [START_REF] Jong | Ensemble feature ranking[END_REF].
Table 1. Datasets description
Name      Tot  Positive  Negative  Genes
Colon      62        22        40   2000
Leukemia   72        25        47   7129
Lymphoma   58        26        32   7129
Ovarian    54        30        24   1536
Table 2. Results of LOOCV: average sensitivity, specificity and accuracy (with standard deviation between brackets).
Method    Dataset    Sensitivity       Specificity       Accuracy
SVM-RFE   Colon      0.90 (0.3038)     1.00 (0.00)       0.9355 (0.2477)
NB-LSC    Colon      0.75 (0.4385)     1.00 (0.00)       0.8387 (0.3708)
OB-LSC    Colon      0.90 (0.3038)     1.00 (0.00)       0.9355 (0.2477)
SVM-RFE   Leukemia   0.96 (0.20)       1.00 (0.00)       0.9861 (0.1179)
NB-LSC    Leukemia   1.00 (0.00)       1.00 (0.00)       1.00 (0.00)
OB-LSC    Leukemia   1.00 (0.00)       1.00 (0.00)       1.00 (0.00)
SVM-RFE   Ovarian    0.7000 (0.4661)   0.9583 (0.2041)   0.8148 (0.3921)
NB-LSC    Ovarian    1.00 (0.00)       1.00 (0.00)       1.00 (0.00)
OB-LSC    Ovarian    1.00 (0.00)       1.00 (0.00)       1.00 (0.00)
SVM-RFE   Lymphoma   0.6923 (0.4707)   0.6562 (0.4826)   0.6724 (0.4734)
NB-LSC    Lymphoma   1.00 (0.00)       0.6562 (0.4826)   0.8103 (0.3955)
OB-LSC    Lymphoma   1.00 (0.00)       0.8750 (0.3360)   0.9310 (0.2556)
Table 2 reports the results of LOOCV. They indicate a statistically significant improvement of OB-LSC over the baseline SVM-RFE classifier, and a gain in stability, indicated by lower standard deviation values. In particular, on the ovarian and leukemia datasets both NB-LSC and OB-LSC achieve perfect classification.
Table 3. Comparison of results with best average accuracy reported in previous papers on tumor classification. The type of classifiers considered in the paper are given between brackets. An entry '-' means that the corresponding dataset has not been considered.
Colon  Leukemia  Lymphoma  Ovarian
01772795
en
[ "spi.other" ]
2024/03/05 22:32:18
2008
https://hal.science/hal-01772795/file/PLM-Based_2008.pdf
Farouk Belkadi Nadège Troussier Frederic Huet Thierry Gidel Eric Bonjour email: [email protected]' Benoît Eynard email: [email protected] PLM-based approach for collaborative design between OEM and suppliers: case study of aeronautic industry Keywords: Suppliers Integration, PLM, UML, Innovative Organisation To achieve different assembly operations on the aircraft structure, the aeronautic OEM needs to create and manage various fixture tools. To cope with these needs, the OEM begun to adopt the supplier integration into the tooling development process. This paper presents a conceptual PLM-based approach to support new business partnership of different suppliers. The new business partnership aims to improve the role of supplier in the different tasks of design, configuration and fabrication of the tooling. The use of the PLM concepts is proposed to enhance the collaboration between OEM and the equipment's suppliers. UML models are proposed to specify the structure of the PLM solution. These models describe the relation between the aircraft assembly project, and the tooling design process. Introduction The role of supplier in a successful assembly process of aircraft component is very important. Because of the specific aircraft structure, assembly department needs to constantly design new fixture tools used for new assembly operations. It obviously happens when the aeronautic OEM creates new aircraft model, and also when this OEM modifies the existing models to satisfy a particular customer requirement. To deal with assembly tool costs and time to market optimization challenges, the collaboration between OEM and suppliers should rather go into strategic partnership, covering the whole tool's lifecycle. The purpose of our research is to develop a new business partnership that enables efficient collaboration between OEM and suppliers. This partnership would enhance the suppliers' role in the design process of assembly tools. The case study concerns the tooling design activities and manufacturing process in the aeronautic industry. The construction of this business partnership is obtained according the following perspectives: • Definition of its mission and organization, • Identification of new methodologies to optimize it's operating processes, • Realization of a collaborative IT framework to support its activities. This paper focuses on the last perspective and describes a conceptual framework to specify an innovative PLM-based approach. The originality of our approach comes from the high abstraction level of the proposed models based on the situation concept. These concepts are valuable to describe different organization forms. It mainly provides specification of IT system that can gives innovation aided by enhancing the project organization in context of extended enterprise and by favouring interoperability between heterogeneous shared between OEM and supplier systems. (for instance, between SAP system to capture Aircraft information at the OEM level and DELMIA tool to identify the tooling behaviour in the assembly process, at the supplier level). First, we present an overview of the context study and the interest of PLM approach to solve this problematic. Second, a literature review is presented concerning the use of PLM methodologies to support the OEM supplier partnership. Third, we develop our conceptual models of the IT structure. 
The specification of the PLM-based approach is defined according to a unified modelling that describes, at the same abstraction level, the product (assembly tools or equipment) data and process data. Fourth, the concept of project view is presented. Using the UML activity diagram, we detailed some functionalities of the future collaborative system to manage the equipment design project. Context and aims of the study Traditionally, in the aeronautic industry, the tooling supplier is a basic manufacturer of the assembly tools. The design and manufacture processes of these tools are considered as a sequential one. First the design department delivers the engineering documents of the different aircraft parts; the production engineering department specifies and designs the detailed assembly processes and needed tools to carry out the assembly operations. Then, the production engineering department sends the detailed specifications to the supplier for tools manufacturing. Figure 1 shows this configuration. On the one hand, three departments are engaged in the global process of assembly tools purchasing: production service specifies the assembly needs, the equipment's R&D designs the tooling structure and the purchase service negotiates and sends the order to supplier. On the other hand, several suppliers located in different geographical locations are contracted with to produce the various parts of the tool. After the completion of the tool, it is sent directly to the production shop for use. During the manufacturing process of the assembly tool, some modifications may occur on the initial configuration of aircraft components. These modifications imply changes on the specification of the assembly process and thus of the assembly tool. The whole cycle of the assembly tool ordering is then repeated to cope with the new specifications. This approach proved its limits in the current context. The supplier is not integrated in the first stages of the tools specification and likewise, the OEM has not access to the manufacture process of the tool. Thus, much iteration is occurred before obtaining the final tool definition fulfilling the requirements of production engineering department. Several problems have been observed during the preliminary study: Important time and costs of the assembly tools manufacturing (and consequently for the assembly process of the aircraft parts, delivery date not respected); difficulty to manage the assembly tool range by the OEM (no-use of standards, bad maintenance…); the OEM has to manage several product data interfaces with various partners. In the future configuration, an innovative PLM-based approach is proposed to support a new business partnership approach. PLM is used for the seamlessly integration of all the information specified throughout all phases of the equipment's life cycle to everyone in the new organization (OEM and a new global supplier network) at every managerial and technical level. Figure 2 shows the proposed configuration of the new business partnership. In this configuration, design tasks', configuration and fabrication of the assembly tool are performed collaboratively with the new global supplier network. Suppliers are already informed by new modifications of the assembly operations and design themselves the new tool. 
This description shows important evolutions in the configuration of the development process that can be summarized by considering the shift from a linear and sequential process to a much more "interactionnist" one [START_REF] Kline | An overview of innovation[END_REF]. This reconfiguration should lead to significant improvement in cost and time saving in association with a greater innovative potential [START_REF] Nishiguchi | Suppliers' innovation: undestated aspects of japanese industrial sourcing[END_REF], [START_REF] Kim | Performance effect of partnership between manufacturers and suppliers for new product development: the supplier's standpoint[END_REF]. But, what is at stake in this study goes beyond the development process and the impact or the evolution has to be considered at the (inter)organizational level. We can both consider the renewal of the expected shared competences and new governance modalities for these new relationships [START_REF] Foss | Theories of the firm : contractual and competence perspective[END_REF], [START_REF] Mahmoud-Jouni | Les coopérations interentreprises dans les projets de développement[END_REF]. First, in the traditional development process, the supplier's competences were exclusively manufacturing ones. In the new process, the expected incomes evolve towards innovation capacity, sub-system integration, proactive behavior during the process… This leads to consider the new role of the suppliers, not only as an efficient manufacturer, but more as a service supplier, collaborating in the definition and conception stages [START_REF] Calvi | Le rôle des services achats dans le développement des produits nouveaux : une approche organisationnelle[END_REF]. Thus, knowledge transfer and learning capacities are at the core of these new activities for these suppliers [START_REF] Nooteboom | Learning and innovation in organizations and economies[END_REF]. So, as we can see, this evolution will have to be encompassed in a wider evolution of the competences that is expected from the suppliers. Second, to promote greater innovative potential, interactions between the different partners will have to be carefully managed, because of the change in the nature of their transactions. We can at least anticipate three significant modifications in their relationships. The advantage of the new process is a more important distribution of risk between the partners, previously only assumed by the OEM. In the new context, risk is distributed between all the involved actors. This collaborative organization implies that the partners reveal some of their competences, to combine and fertilize them. Thus, the management of core competences and the equilibrium between the individual interests of each actor and the collective objectives has to be questioned [START_REF] Hamel | Strategy as Stretch and Leverage[END_REF], [START_REF] Ouchi | Markets, bureaucracies[END_REF]. And, to promote innovation, upstream monitoring and planning will necessarily have to be adapted, in order to facilitate the emergence of new opportunities, which were not anticipated at the beginning of the collaboration. This seems all the more important, that previous studies have shown that innovation through cooperation is linked to a sort of "plasticity" of the relationship, allowing to discover new opportunities and sources of learning [START_REF] Huet | Capacités d'innovation et coopération de PME : des effets auto-renforçants[END_REF]. 
These different preliminary elements shed light on the extended impact of this evolution in the development process, both in the vertical interactions (between OEM and suppliers) and in the horizontal ones (between suppliers). The innovation and collaboration objectives show that governance will have to rely on a new equilibrium between contractual prescriptions and trust based relationships [START_REF] Adler | Market hierarchy and trust, The knowledege Economy and the future of capitalism[END_REF]. Indeed, to promote innovation, contracts will necessarily remain uncomplete and could lead to strong inertia during the collaboration, while trust, both on competence and behavior, will bring more flexibility in front of novelty and knowledge mutualisation. This vertical and horizontal integration that necessitate risks and benefits sharing, implies developing common practices and methodologies. By sharing project management approach, problem solving methods or design methodology, the partners would in turn shared objectives and decision processes. Even if this contribution is focused on the definition of a PLM platform, these different elements have to be mentioned. They necessarily won't be neutral for the definition and appropriation of this new support by the different partners. PLM is used for the seamlessly integration of all the information specified throughout all phases of the equipment's life cycle to everyone in the new organization (OEM and a new global supplier network) at every managerial and technical level. The following section presents a literature review about PLM concept. About Product Life Management PLM is defined as a systematic concept for the integrated management of all product related information and processes through the entire lifecycle, from the initial idea to end-of-life [START_REF] Saaksvuori | Product Lifecycle Management[END_REF]. In [START_REF] Jun | Research issues on closed-loop PLM[END_REF], PLM is considered as a strategic business approach that applies a consistent set of business solution in support of the collaborative creation, management, dissemination, and use of product information across the extended enterprise. Such as in the automotive industry, the aeronautic industry is seen to adopt the supplier integration into the development process. The new management culture considers necessary the PLM approach to get these goals [START_REF] Gomes | Applying a benchmarking method to organize the product lifecycle management for aeronautic suppliers[END_REF]. Tang [START_REF] Tang | Product lifecycle management for automotive development focusing on supplier integration[END_REF] present a literature review of PLM approaches used in automotive industry to improve collaboration between OEM and suppliers. The lifecycle currently support the OEM supplier partnership can be grouped in collaborative environment with three main phases [START_REF] Schilli | Collaborative life cycle management between suppliers and OEM[END_REF]: • Designing the systems to be used in the OEM's product. • Supply chain integration to produce and deliver the requested systems to OEM. • Provide services for the components for both OEM and supplier systems. The IT solution to support PLM results from the integration between enterprise resource planning (ERP), product data management (PDM) and other related systems, such as computer aided design (CAD) and costumer relationship management (CRM) [START_REF] Schuh | Process oriented framework to support PLM implementation[END_REF]. 
A critical aspect of PLM systems is their product information modeling architecture. In the literature, several representations of the product data are presented [START_REF] Terzi | Development of a metamodel to foster interoperability along the product lifecycle traceability[END_REF], [START_REF] Moka | Managing engineering knowledge: methodology for knowledge based engineering applications[END_REF]. The unified representation of the product knowledge can favor semantic interoperability of CAD/CAE/ tools at the conceptual level [START_REF] Szykmana | A foundation for interoperability in next-generation product development systems[END_REF]. UML is currently used to support product models [START_REF] Nishiguchi | Suppliers' innovation: undestated aspects of japanese industrial sourcing[END_REF]. STEP and XML are used to obtain interoperability at the implementation level [START_REF] Fenves | CPM2: A revised core product model for representing design information[END_REF]. Sudarsan [START_REF] Sudarsan | A product information modeling framework for product lifecycle management[END_REF] propose a product information-modeling framework that aimed at support PLM information needs and intended to capture product data, design rationale, assembly, tolerance information, the evolution of products and product families. However, product model is not the unique element in a PLM structure. Nowak [START_REF] Nowak | Towards a design process model enabling the integration of product, process and organization[END_REF] present architecture of a collaborative aided design framework integrating Product, Process and Organization models for engineering performance improvement. Danesi [25] propose the P4LM methodology which allows the management of Projects, Products, Processes, and Proceeds in collaborative design and that aims to allow the integration of information coming from different partners which are involved in a PLM application. This framework allows a topdown approach by defining functions in an abstraction level according to four modules (Project, Product, Proceed and Process). In aim to get best integration of suppliers in the automotive design and manufacturing processes, Trappey [START_REF] Trappey | Applying collaborative design and modularized assembly for automotive ODM supply chain integration[END_REF] develops and implements an information platform called advanced production quality planning (APQP) hub. The information hub mainly provides a collaborative environment that enhancing the visibility of the supply chain operations and contributes in collecting and delivering APQP documents among both the OEM and all supply chain. This information platform applies the concept of modularized assembly and consists of five major functions: Categorized part library, project based collaborative design, real-time information exchange, on-line confirmation of modularized products, and on-line negotiation and ordering. Each one of the obvious functions is implemented according to an interactive process. Our work deals with the integration of Product, Process, and Organization dimensions of a design project. Several models are developed to support, at the conceptual level, this integration. The Package model of the PLM approach At the conceptual level, our approach is based on the concept of working situation proposed in [START_REF] Belkadi | Modelling Framework of a Traceability System to Improve Knowledge Sharing and Collaborative Design[END_REF]. 
According to this model, each process and activity in a collaborative design project is considered as an interactional entity that refers to links between various entities of the situation. These entities may bring together different physical elements such as human resources and material resources (product, drawings, documents, CAD tools, planning tools, etc). It may bring together, also, other interactional entities (activities, processes, communities). The nature of the contribution made by each entity to the interactions is formalized in this approach using the concept of specific role that is a systemic extension of the role proposed in the organization theory [START_REF] Uschold | The Enterprise Ontology[END_REF]. Five kinds of specific roles are distinguished: • The "actor" role concerns every entity who/which participates directly in the interaction and who/which is responsible for the end result. • The "customer" role brings together all the entities that are going to be receiving the end result of the interaction. • The "manager" role concerns every entity who/which regulates the functioning of an interaction. • The "support" role includes every entity who/which give help in an interaction. • The "object" role concerns every entity on whom/which the interaction acts. Figure 3 describes the global package model structuring the PLM data. According to this model, the assembly tool (equipment) may be considered as a mediation artifact since it is simultaneously a support of the assembly aircraft project (that is performed in the OEM Assembly Workshop) and, the main object of the equipment project (that are realized by the new trade organization). Aircraft project plays the role of customer of the equipment project. The different needs of the equipment project are specified according to the different activities of the aircraft assembly process. Thus, Aircraft processes take also the role of customer in the equipment processes. Figure 3 The package model structuring our PLM approach. The processes packages (of aircraft and equipment) group various processes that organize the functioning of related projects. For example, design and fabrication processes are the principal processes of the equipment project, its play the role "actor". The process data concerns both the assembly process of aircraft parts and design process of the equipment. Saved information is used to recognize the activities evolution of each partner (new requirement of the OEM, new kind of assembly tools proposed by suppliers …). The data model package organizes, according to various sub-models, the equipment's information that are produced and manipulated during different stages of the whole equipment lifecycle. The detailed data are stored in different documents represented by "documents package". For example, the structural model contains information about the physical composition of the equipment. The detailed structure is stored in CAO documents. The project view We consider the concept of project as a form of operational interaction including a set of processes in order to obtain specific goals. The project structure consists of a systemic decomposition into different sub-projects regarding to the product complexity. Figure 4 The project view Figure 4 shows the Meta model of the project structure, the project is considered as an interactional entity according to the situation concept (cf. section 2). Each project contributes to one or several goals. 
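Purely as an illustration of the situation model and the specific roles introduced above, and of how a project that contributes to one or several goals can itself play a role in another project, a minimal data-model sketch could look as follows; the class and attribute names are ours, not those of the paper's UML diagrams.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SpecificRole(Enum):            # the five specific roles of the situation model
    ACTOR = "actor"
    CUSTOMER = "customer"
    MANAGER = "manager"
    SUPPORT = "support"
    OBJECT = "object"

@dataclass
class Entity:                        # human resource, material resource, document, ...
    name: str

@dataclass
class Contribution:                  # one entity playing one specific role in an interaction
    entity: Entity
    role: SpecificRole

@dataclass
class InteractionalEntity(Entity):   # project, process, activity, community
    goals: List[str] = field(default_factory=list)
    contributions: List[Contribution] = field(default_factory=list)

    def add(self, entity: Entity, role: SpecificRole) -> None:
        self.contributions.append(Contribution(entity, role))

# Example: the aircraft assembly project is a "customer" of the equipment project,
# while the equipment design process plays the "actor" role in it.
equipment_project = InteractionalEntity("Equipment project", goals=["deliver assembly tool"])
aircraft_project = InteractionalEntity("Aircraft assembly project")
equipment_project.add(aircraft_project, SpecificRole.CUSTOMER)
equipment_project.add(Entity("Equipment design process"), SpecificRole.ACTOR)
```

Since projects, processes and communities are themselves interactional entities, the same contribution link lets one interactional entity play a role inside another.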
The class "goals" make dependence between the project view, the process view and the task view. The project Meta model presents the contribution of different elements similarly at the organizational level (organization of different human resources and communities) and at the operational level (organization of different processes). The contribution of all project elements is presented by an instantiation of: {the class entity, the class role (replaced by a specific subclass) and interactional entity (in this case project)}. For instance, the Aircraft project is associated to the Equipment project by mean of the "Customer" class. The main idea is that both the aircraft design project and equipment design project are described under the same main project reference. When a manufacturing order of an assembly tool is submitted, a new sub project for this need is created. All aircraft sub-project that are concerned by this equipment are related to the above project in the global situation framework. For this use case, three specific roles where to be considered: • The aircraft R&D takes the role of actor in the aircraft design process and the indirect "customer" in the equipment design process (send the original needs through the production department). We note this entity "Aircraft_R&D". • The production department takes the role of support in the aircraft design process; it performs different production and assembly operations. At the same time, it takes the role of direct "customer" in the equipment design process (define the assembly procedure and assembly tool functions). We note this entity "Aircraft_Prod". • The R&D service of the new business partnership takes the role actor of the equipment design process. We note this entity "Equipment_R&D". • The manufacturing service of the new business partnership takes the role support of the equipment design process. This entity is noted "Equipment_Prod". When a modification in the aircraft structure is occurred, the system informs the members of the business partnership and sends him the new requirement to consider in the specification of the related assembly tool. Figure 6 presents the interaction process for this case. Conclusion In this paper, we have proposed a modeling framework to support, at the conceptual level, a new PLM approach to improve information sharing in collaborative design, and then to enhance the integration of supplier in the design and manufacturing processes. The final goals of the project is to reduce costs and time to market of the assembly tools, and consequently thus of the aircraft product. The new business partnership implies to establish new collaboration strategy between OEM and supplier. Other benefits can be obtained from this framework by monitoring the evolution of collective work and facilitating its coordination. The developed framework deals with the integration of Product, Process and Organization dimensions of a design project, and, in future works, the corresponding extension of CAD/CAM and PDM existing tools. The proposed Product model gives the structure of the product data base. It uses a generic semantic that can favor, in our sense, the conceptual interoperability between different product data coming from different partners. Although our work is developed initially to resolve a particular problem in a special firm of the aeronautic industry, the use of a modeling framework based on the generic concepts of entities and interactions in the working situation may gives more interests. 
In this contribution, one specific dimension has been developed, related to the PLM platform, to support the shift from a sequential to an interactionnist development process. Faced with the complexity of such a change, success will not rely on this support dimension alone. This PLM platform will have to be considered in a more global system/organisation, to take into account the entanglement of technology, market and usage dimensions. At an operational level, this integrated approach will enhance the chances of success and, at a more analytic level, it will make it possible to specify the conditions of application and transposition to other contexts. Further research work will be performed to improve and validate these different issues. A prototype is under development and is being tested thanks to our industrial case study.
Figure 1. The current OEM-Supplier partnership.
Figure 2. Future configuration of OEM-Supplier partnership.
Figure 6. Scenario of modifying requirement.
Several modeling tools are used to describe process achievement (IDEF, GRAI, UML, etc.). In our approach, we used the Activity diagram of the UML formalism (figure 5) to describe the interaction process during the creation of a new project. At the beginning, Aircraft R&D creates a new project and sends initial specifications to the production department. The production department defines the different operations of the assembly process and specifies the functions of the assembly tool to be realized. It then searches the warehouse for a tool which satisfies these functions. If no tool is found, the production department creates a new equipment sub-project and sends the information to the supplier network (equipment R&D and manufacturing). In fact, the real process is established in a concurrent way: when the equipment R&D starts the design process, the manufacturing service simultaneously schedules the manufacturing operations and investigates the available technological solutions. Thanks to the collaborative system, the specification and manufacturing of the assembly tool are performed progressively and jointly by the different partners according to the global scenario.
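Read schematically, the scenario above could be outlined as the following workflow sketch; the object and method names are ours and only indicate where each partner intervenes, not an actual interface of the collaborative system.

```python
def handle_new_assembly_need(aircraft_rd, production, supplier_network, warehouse):
    """Schematic version of the figure 5 scenario, from aircraft specs to tool supply."""
    specs = aircraft_rd.create_project()                    # new project + initial specifications
    operations = production.define_assembly_process(specs)
    functions = production.specify_tool_functions(operations)

    tool = warehouse.find_tool(functions)                   # reuse an existing tool if possible
    if tool is not None:
        return tool

    sub_project = production.create_equipment_subproject(functions)
    # design and manufacturing preparation proceed concurrently in the supplier network
    design = supplier_network.rd.start_design(sub_project)
    planning = supplier_network.manufacturing.schedule(sub_project)
    return supplier_network.deliver(design, planning)
```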
01772799
en
[ "spi.signal" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-01772799/file/ICA2007.pdf
A Aïssa-El-Bey K Abed-Meraim Y Grenier email: [email protected] Blind audio source separation using sparsity based criterion for convolutive mixture case In this paper, we are interested in the separation of audio sources from their instantaneous or convolutive mixtures. We propose a new separation method that exploits the sparsity of the audio signals via an p-norm based contrast function. A simple and efficient natural gradient technique is used for the optimization of the contrast function in an instantaneous mixture case. We extend this method to the convolutive mixture case, by exploiting the property of the Fourier transform. The resulting algorithm is shown to outperform existing techniques in terms of separation quality and computational cost. Introduction Blind Source Separation (BSS) is an approach to estimate and recover independent source signals using only the information within the mixtures observed at each channel. Many algorithms have been proposed to solve the standard blind source separation problem in which the mixtures are assumed to be instantaneous. A fundamental and necessary assumption of BSS is that the sources are statistically independent and thus are often separated using higher-order statistical information [START_REF] Cardoso | Blind signal separation : statistical principles[END_REF]. If extra information about the sources is available at hand, such as temporal coherency [2], source nonstationarity [3], or source cyclostationarity [4], then one can remain in the second-order statistical scenario, to achieve the BSS. In the case of non-stationary signals (including audio signals), certain solutions using time-frequency analysis of the observations exist [5]. Other solutions use the statistical independence of the sources assuming a local stationarity to solve the BSS problem [6]. This is a strong assumption that is not always verified [7]. To avoid this problem, we propose a new approach that handles the general linear instantaneous model (possibly noisy) by using the sparsity assumption of the sources in the time domain. Then, we extend this algorithm to the convolutive mixture case, by transforming the convolutive problem into instantaneous problem in the frequency domain, and separating the instantaneous mixtures in every frequency bin. The use of sparsity to handle this model, has arisen in several papers in the area of source separation [8,9]. We first present a sparsity contrast function for BSS. Then, in order to achieve BSS, we optimize the considered contrast function using an iterative algorithm based on the relative gradient technique. In the following section, we discuss the data model that formulates our problem. Next, we detail the different steps of the proposed algorithm. In Section 4, some simulations are undertaken to validate our algorithm and to compare its performance to other existing BSS techniques. Instantaneous mixture case Data model Assume that N audio signals impinge on an array of M ≥ N sensors. The measured array output is a weighted superposition of the signals, corrupted by additive noise, i.e. x(t) = As(t) + w(t) t = 0, . . . , T -1 (1) where s(t) = [s 1 (t), • • • , s N (t)] T is the N × 1 sparse source vector, w(t) = [w 1 (t), • • • , w M (t)] T is the M × 1 complex noise vector, A is the M × N full column rank mixing matrix (i.e., M ≥ N ), and the superscript T denotes the transpose operator. The purpose of blind source separation is to find a separating matrix, i.e. 
a N × M matrix B such that s(t) = Bx(t) is an estimate of the source signals. Before proceeding, note that complete blind identification of separating matrix B (or equivalently, the mixing matrix A) is impossible in this context, because the exchange of a fixed scalar between the source signal and the corresponding column of A leaves the observations unaffected. Also note that the numbering of the signals is immaterial. It follows that the best that can be done is to determine B up to a permutation and scalar shifts of its columns, i.e., B is a separating matrix iff : Bx(t) = P Λs(t) (2) where P is a permutation matrix and Λ a non-singular diagonal matrix. Sparsity-based BSS algorithm Before starting, we propose to use 'an optional' whitening step which set the mixtures to the same energy level and reduces the number of parameters to be estimated. More precisely, the whitening step is applied to the signal mixtures before using our separation algorithm. The whitening is achieved by applying a N × M matrix W to the signal mixtures in such a way Cov(W x) = I in the noiseless case, where Cov(•) stands for the covariance operator. As shown in [2], W can be computed as the inverse square root of the noiseless covariance matrix of the signal mixtures (see [2] for more details). In the following, we apply our separation algorithm on the whitened data : x w (t) = W x(t). We propose an iterative algorithm for the separation of sparse audio signals, namely the ISBS for Iterative Sparse Blind Separation. It is well known that audio signals are characterized by their sparsity property in the time domain [8,9] which is measured by their p norm where 0 ≤ p < 2. More specifically, one can define the following sparsity based contrast function G p (s) = N i=1 [J p (s i )] 1 p , (3) where J p (s i ) = 1 T T -1 t=0 |s i (t)| p . (4) The algorithm finds a separating matrix B such as, B = arg min B {G p (B)} , (5) where G p (B) G p (z) , (6) and z(t) Bx w (t) represents the estimated sources. The approach we choose to solve (5) is inspired from [10]. It is a block technique based on the processing of T received samples and consists in searching iteratively the minimum of (5) in the form : B (k+1) = (I + (k) )B (k) (7) z (k+1) (t) = (I + (k) )z (k) (t) (8) where I denotes the identity matrix. At iteration k, a matrix (k) is determined from a local linearization of G p (B (k+1) x w ). It is an approximate Newton technique with the benefit that (k) can be very simply computed (no Hessian inversion) under the additional assumption that B (k) is close to a separating matrix. This procedure is illustrated in the following : At the (k + 1) th iteration, the proposed criterion (4) can be developed as follows: J p (z (k+1) i ) = 1 T T -1 t=0 z (k) i (t) + N j=1 (k) ij z (k) j (t) p = 1 T T -1 t=0 |z (k) i (t)| p 1 + N j=1 (k) ij z (k) j (t) z (k) i (t) p . Under the assumption that B (k) is close to a separating matrix, we have | (k) ij | 1 and thus, a first order approximation of J p (z (k+1) i ) is given by : J p (z (k+1) i ) ≈ 1 T T -1 t=0 |z (k) i (t)| p + p N j=1 e( (k) ij ) e |z (k) i (t)| p-1 e -φ (k) i (t) z (k) j (t) -m( (k) ij ) m |z (k) i (t)| p-1 e -φ (k) i (t) z (k) j (t) (9 ) where e(x) and m(x) denote the real and imaginary parts of x and φ (k) i (t) is the argument of the complex number z (k) i (t). 
Using equation (9), equation (3) can be rewritten in more compact form as : G p B (k+1) = G p B (k) + e T r (k) R (k)H D (k)H (10) where (•) denotes the conjugate of (•), T r(•) is the matrix trace operator and the ij th entry of matrix R (k) is given by : R (k) ij = 1 T T -1 t=0 |z (k) i (t)| p-1 e -φ (k) i (t) z (k) j (t) (11) D (k) = diag R (k) 11 , . . . , R (k) N N 1 p -1 . (12) Using a gradient technique, (k) can be chosen as : (k) = -µD (k) R (k) (13) where µ > 0 is the gradient step. Replacing (13) into (10) leads to, G p B (k+1) = G p B (k) -µ D (k) R (k) 2 . ( 14 ) So µ controls the decrement of the criterion. Now, to avoid the algorithm's convergence to the trivial solution B = 0, one normalizes the outputs of the separating matrix to unit-power, i.e. ρ (k+1) zi 1 T T -1 t=0 |z (k+1) i (t)| 2 = 1, ∀ i. Using first order approximation, this normalization leads to : (k) ii = 1 -ρ (k) zi 2ρ (k) zi . ( 15 ) After convergence of the algorithm, the separation matrix B = B (K) is applied to the whitened signal mixtures x w to obtain an estimation of the original source signals. K denotes here the number of iterations that can be either chosen a priori or given by a stopping criterion of the form B (k+1) -B (k) < δ where δ is a small threshold value. Convolutive mixture case Unfortunately, instantaneous mixing is very rarely encountered in real-world situations, where multipath propagation with large channel delay spread occurs, in which case convolutive mixtures are considered. In this case, the signal can be modeled by the following equation : x(t) = L l=0 H(l)s(t -l) + w(t) (16) where H(l) are M × N matrices for l ∈ [0, L] representing the impulse response coefficients of the channel and the polynomial matrix H(z) = L l=0 H(l)z -l is assumed to be irreducible (i.e. H(z) is of full column rank for all z). If we apply a short time Fourier transform (STFT) to the observed data x(t), the model in (16) (in the noiseless case) becomes approximately S x (t, f ) ≈ H(f )S s (t, f ) (17) where S x (t, f ) is the mixture STFT vector, S s (t, f ) is the source STFT vector and H(f ) is the channel Fourier Transform matrix. It shows that, for each frequency bin, the convolutive mixtures reduce to simple instantaneous mixtures. Therefore we can apply our ISBS algorithm for each frequency and separate the signals. As a result, in each frequency bin, we obtain the STFT source estimate S b s (t, f ) = B(f )S x (t, f ) . ( 18 ) It seems natural to reconstruct the separated signals by aligning these S b s (t, f ) obtained for each frequency bin and applying the inverse short time Fourier transform. For that we need first to solve the permutation and scaling ambiguities as shown next. Removing the scaling end permutation ambiguities In this stage, the output of the separation filter is processed with the permutation matrix Π(f ) and the scaling matrix C(f ). G(f ) = Π(f )C(f )B(f ) . ( 19 ) The scaling matrix C(f ) is a N × N diagonal matrix found as in [11] by C(f ) = diag[B(f ) # ]. For the permutation matrix Π(f ), we exploit the continuity property of the acoustic filter in the frequency domain [12]. To align the estimated sources at two successive frequency bins, we test of the closeness of G(f n )G(f n-1 ) # to a diagonal matrix. Indeed, by using the representation (19), one can find the permutation matrix by minimizing : Π(f n ) = arg min f Π    i =j ΠC(f n )B(f n )G(f n-1 ) # 2 ij    . (20) In our simulations, we have used an exhaustive search to solve (20). 
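As an illustration, the exhaustive search over permutations in Eq. (20) can be sketched as follows; the naming is ours, and the scaling matrix C(f_n), the separating matrix B(f_n) and the pseudo-inverse G(f_{n-1})^# are assumed to be already computed.

```python
import numpy as np
from itertools import permutations

def align_permutation(C_fn, B_fn, G_prev_pinv):
    """Return the permutation matrix minimising the off-diagonal energy of Pi*C*B*G^# (Eq. 20)."""
    M = C_fn @ B_fn @ G_prev_pinv
    N = M.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in permutations(range(N)):                  # N! candidate permutations
        PM = M[list(perm), :]                            # rows of M reordered by the permutation
        cost = np.sum(np.abs(PM) ** 2) - np.sum(np.abs(np.diag(PM)) ** 2)
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return np.eye(N)[list(best_perm)]
```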
However, when the number of sources is large, the exhaustive search becomes prohibitive. In that case, one can estimate Π(f_n) as the matrix with ones at the ij-th entries satisfying |M(f_n)|_ij = max_k |M(f_n)|_ik and zeros elsewhere, with M(f_n) = C(f_n)B(f_n)G(f_{n-1})^#. This solution has the advantage of simplicity but may lead to erroneous solutions in difficult contexts. An alternative solution would be to decompose Π(f_n) as a product of elementary permutations Π^(pq), where Π^(pq) is defined such that, for a given vector y, ỹ = Π^(pq)y iff ỹ(k) = y(k) for k ∉ {p, q}, ỹ(p) = y(q) and ỹ(q) = y(p). The elementary permutation is retained at a given iteration only if it decreases criterion (20), i.e. if |M(f_n)|^2_pq + |M(f_n)|^2_qp > |M(f_n)|^2_pp + |M(f_n)|^2_qq. Finally, Π(f_n) is obtained as the product, over the iterations and over 1 ≤ p < q ≤ N, of matrices that are either the identity or the elementary permutation Π^(pq), depending on the binary decision rule defined above (21). We stop the iterative process when all these matrices are equal to the identity. We have observed that one or, at most, two iterations are sufficient to get the desired permutation. Finally, we apply the updated separation matrix G(f) to the frequency domain mixture: Ŝ_s(t, f) = G(f)S_x(t, f). (22)
Simulation results
We present here some numerical simulations to evaluate the performance of our algorithm. We consider an array of M = 2 sensors receiving two audio signals in the presence of stationary temporally white noise of covariance σ^2 I (σ^2 being the noise power). 10000 samples are used with a sampling frequency of 8 kHz (this represents 1.25 s of recording). In order to evaluate the performance in the instantaneous mixture case, the separation quality is measured using the Interference to Signal Ratio (ISR) criterion [2] defined as: ISR = Σ_{p≠q} E[|(BA)_pq|^2] ρ_q / (E[|(BA)_pp|^2] ρ_p), (23) where ρ_i = E(|s_i(t)|^2) is the i-th source power, evaluated here as (1/T) Σ_{t=0}^{T-1} |s_i(t)|^2. Fig. 1-(a) represents the two original sources and their mixtures in the noiseless case. In Fig. 1-(b), we compare the performance of the proposed algorithm in the instantaneous mixture case to the Relative Newton algorithm developed by Zibulevsky et al. in [9], where the case of sparse sources is considered, and to the SOBI algorithm developed by Belouchrani et al. in [2]. We plot the residual interference between separated sources (ISR) versus the SNR. It is clearly shown that our algorithm (ISBS) performs better in terms of ISR, especially for low SNRs, as compared to the two other methods. In Fig. 2-(a), we represent the evolution of the ISR as a function of the iteration number. A fast convergence rate is observed. In Fig. 2-(b), we compare, in the 2 × 2 convolutive mixture case, the separation performance of our algorithm, Deville's algorithm in [13], Parra's algorithm in [14] and an extended version of Zibulevsky's algorithm to the convolutive mixture case. The filter coefficients are chosen randomly and the channel order is L = 128. We use in this experiment the ISR criterion defined for the convolutive case in [14], which takes into account the fact that the source estimates are obtained up to a scalar filter. We observe a significant performance gain in favor of the proposed method, especially at low SNR values. Moreover, the complexity of the proposed algorithm is equal to 2N^2 T + O(N^2) flops per iteration whereas the complexity of the Relative Newton algorithm in [9] is 2N^4 + N^3 T + N^6/6.
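For reference, the ISR of Eq. (23) can be evaluated from the known mixing matrix A and the estimated separator B roughly as follows; we use a common per-output normalisation and empirical source powers, so the exact normalisation may differ slightly from the one used in the paper, and the naming is ours.

```python
import numpy as np

def isr(B, A, sources):
    """Interference-to-Signal Ratio of the global system G = B A (cf. Eq. 23)."""
    G = B @ A
    power = np.mean(np.abs(sources) ** 2, axis=1)     # rho_i = (1/T) sum_t |s_i(t)|^2
    N = G.shape[0]
    ratios = []
    for p in range(N):
        interference = sum(np.abs(G[p, q]) ** 2 * power[q] for q in range(N) if q != p)
        signal = np.abs(G[p, p]) ** 2 * power[p]
        ratios.append(interference / signal)
    return np.mean(ratios)                            # average over the estimated sources
```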
[Fig. 1(a): traces of s1(t), s2(t), x1(t), x2(t) plotted versus Time (sec), over 0-1.2 s.]
Conclusion
This paper presents a blind source separation method for sparse sources in the instantaneous mixture case and its extension to the convolutive mixture case. A sparse contrast function is introduced and an iterative algorithm based on a gradient technique is proposed to minimize it and perform the BSS. Numerical simulation results give evidence of the usefulness of the method. The proposed technique outperforms existing solutions in terms of separation quality and computational cost in both the instantaneous and convolutive mixture cases.
Fig. 1. (a) Top: the two original source signals; bottom: the two signal mixtures. (b) Interference to Signal Ratio (ISR) versus SNR for 2 audio sources and 2 sensors in the instantaneous mixture case.
Fig. 2. (a) ISR as a function of the iteration number for 2 audio sources and 2 sensors in the instantaneous mixture case. (b) ISR versus SNR for the 2 × 2 convolutive mixture case.
01772809
en
[ "phys.phys.phys-chem-ph" ]
2024/03/05 22:32:18
2018
https://hal.sorbonne-universite.fr/hal-01772809/file/text_PRE_resub.pdf
Giuseppe Boniello Christophe Tribet Emmanuelle Marie Vincent Croquette Dražen Zanchi Rolling and aging in temperature-ramp soft adhesion Keywords: colloids, temperature-responsive polymer coating, Brownian dynamics, hindered diffusion, adhesion à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. I. INTRODUCTION Adhesion of colloids on a flat surface presents interesting aging dynamics if either of surfaces is elastic or soft [1]. Soft adhesion with aging dynamics can also be produced by coating surfaces with polymer whose ends can stick to the adjacent surface, as it has been recently reported [2]. During soft adhesion, ends of polymers protruding from engaged surfaces stochastically explore the opposite side, increasing the number of attached contacts, so that the soft contact domain evolves in time. Control of these very final events preceding the immobilisation are of interest for research on functional materials, lubrication, cell adhesion etc. [3][4][5][6]. In the present paper we focus on colloids coated by a controlled molar fraction f of T-responsive polymers that switch interactions between colloids and the surface from repulsive to attractive at T = T c ≈ 32 • C. We analyze the details of the colloid Brownian motion occurring just before their adsorption to the surface. Near-surface Brownian dynamics can be analyzed by direct video 2D or 3D tracking, either by real-time analysis of the diffraction pattern [7,8], by total internal reflection microscopy (TIRM) [9][10][11] or by three-dimensional ratiometric total internal reflection fluorescence microscopy (3-D R-TIRFM) [12,13]. The mean-square displacement (MSD) reveals how the diffusion is hindered by near-surface effects [14][15][16][17]. In the 3D tracking the resident time distribution (RTD), corresponding to the z-histogram of the trajectory, has been used for studying the particle-surface interaction potential [11]. If the adsorption (or self-aggregation) is irreversible, as it is in our present case, the situation is clearly out-of-equilibrium process. In such case, the aging effects in the pre-adhesion phase were identified in constant shear flux by Kalasin and Santore [2] and for a particle held (and released) by optical tweezers in suspension above a flat surface, by Kumar, Sharma, Ghosh and Bhattacharya [1,18]. To bring the system out from equilibrium we use T-switchable attraction between colloids and the surface. Namely, T-responsiveness of the polymer Poly(N-isopropylacrylamide) (PNIPAM, T c = 32 ± 1 • C [19]), at varying surface molar fraction f , was already exploited for analysis of self-association kinetics of coated microparticles [20,21]. In the Experimental section of the paper we present methods and results. The theoretical section reports on a simple adsorption model allowing to understand all experimental findings. Several issues that our results and their interpretation could raise are pointed out in Discussion section. Final section contains concluding remarks. II. EXPERIMENTAL SECTIONN A. Materials and methods Silica beads (0.96µm in diameter, Bangs Laboratories, SS03N) are dispersed in sodium hydroxide solution 1M by sonication for 15 min and dialysed against water (Slide-A-Lyzer, M W cutoff 3500 kDa, Thermo Scientific). The solution is diluted in Phosphate-buffered Saline (PBS) solution (0.15 M). 
Particle coating is obtained by mixing 22µL of beads in PBS with 100µL of polymer solution 10g.L -1 (f % PLL-g-PNIPAM, (100 -f )% PLLg-PEG, PLL: M w = 15 -30 kg/mol; PEG: M w = 20 kg/mol; PNIPAM: M w = 7 kg/mol) in PBS. The re- sulting suspension is incubated for 30 min at room temperature. Polymer excess is removed by 5 centrifugation cycles (1,800 g for 5 min) replacing the supernatant by deionized water. Flat substrate: borosilicate glass coverslips are cleaned with ethanol and plunged in a 1M sodium hydroxide ultrasonic bath for 30 min. After rinsing with deionized water and drying, the experimental cell is prepared by superposing bi-adhesive tape and a mylar film. A 52 mm x 5 mm x 50µm channel is thus created between the borosilicate glass slide and the mylar film. The bottom of the channel is functionalized by coating the glass surface with PLL-g-PEG (same as the one used for coating particles), ensuring a steric repulsion for T < T c . This is achieved by injecting polymer solution (1g.L -1 in PBS) in the cell and left incubated for 30 min at room temperature. The cell is then rinsed with deionized water and dried by compressed air. T-ramp experiment: the particles suspension is injected in the cell at 26 • C, the temperature is increased at 10 • C/min up to 38 • C and kept constant until the end of the acquisition. Attraction between beads and the flat plate is thereby triggered by crossing the critical temperature T c = 32 ± 1 • C. The advantage of the method is two-fold. First, it allows to switch from repulsive to attractive regime without requiring a fine temperature control around the transition temperature. Secondly, we get rid of additional effects, such as thermal inertia or uncertainty on PNIPAM transition. 3D beads motion is observed by slightly defocus microscopy in parallel illumination decorating bead image with interference rings observed with a CMOS camera. Particles are tracked in real time using a PicoTwist apparatus and PicouEye software [7] allowing sub-pixel resolution (∼ 5 nm) at 50 frames/s, see Figure 1a. To measure the fraction of adsorbed particles over time, large number of particles (typically 40-50 per run) are followed by CCD camera at 10 frames/s, as long as all visible beads are immobilized. The recorded movies are converted to binary masks (1: particle, 0: background). Cumulative distributions of binding times, i.e. the fraction of adsorbed particles, are extracted from the correlation of each frame with the last one: P a (t) = x,y I(t)I(t f ) x,y I(t f ) (1) where x,y stands for the sum of all the pixel values on the same resulting image. At a given coverage ratio f the in-plane average diffusion D eff (f ) is obtained by fitting the mean-square displacement (MSD) extracted from in-plane tracking over the final time interval [t 0 -∆t, t 0 ] before stopping at t = t 0 . In our experiment ∆t = 16 s was chosen: long enough to contain several up-down excursions, and short enough to give meaningful results even for rapidly adsorbing particles. The MSD for i-th particle on stage is extracted from x i (t) and y i (t) tracks using the relation MSD x,i ≡ [x i (t + t ) -x i (t )] 2 t = 2D eff,i t , (2) and equivalently for M SD y . The average ... t runs over t , lying within the observed time interval ∆t. For all values of f MSDs are linear in time over several seconds, indicating that the movement is diffusive. 
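To make Eq. (2) concrete, the extraction of D_eff,i from a recorded in-plane track can be sketched as follows; uniform frame spacing is assumed, and the function and variable names are ours.

```python
import numpy as np

def msd(track, max_lag):
    """Time-averaged mean-square displacement of a 1-D track (cf. Eq. 2)."""
    return np.array([np.mean((track[lag:] - track[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

def d_eff_from_track(x, y, dt, max_lag):
    """Estimate D_eff,i from one particle's track by fitting MSD = 2 D t in x and in y."""
    lags = np.arange(1, max_lag + 1) * dt
    d_x = np.polyfit(lags, msd(x, max_lag), 1)[0] / 2.0
    d_y = np.polyfit(lags, msd(y, max_lag), 1)[0] / 2.0
    return 0.5 * (d_x + d_y)
```

In the experiment this is evaluated on the final Δt = 16 s window of each track before arrest.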
The diffusion constant is obtained by fitting the equation 2 for each of typically N ∼ 10 particles per run and taking the average: D eff = D eff,i i . Finally, for each 3D track the RTD over z direction is extracted by constructing histogram of the z track record within typically 6-7 µm above the flat surface, and averaged over N particles from the same run. B. Results The fraction of immobilised particles as function of time is shown on Figure 1b. Samples with higher PNI-PAM coverage f adsorb faster. The characteristic sigmoidal shape of the adsorption kinetics indicates that adsorption rate increases gradually, corroborating the aging nature of the pre-adhesion dynamics of surface engaged particles. If particles are in contact with the surface and rolling (or crawling) most of the time, their in-plane average diffusion D eff is reduced, as it was observed in experiments with finely thermostated DNA-coated microbeads [17]. Our systems shows similar behaviour. Figure 2a shows D eff (f )/D 0 , where D 0 = k B T /6πµR = 0.67 µm 2 / s is the diffusion constant for 1 µm beads in water far from wall at 38 • C. As we increase f from 2% to 100%, D eff /D 0 increases from ∼0.2 to ∼0.65. Naively, one would expect the contrary, that is, stickier are the particles, and slower is their motion. As we will show in next section, the puzzle is solved by assuming that the surface engaged beads are much slower than the untethered ones, and that the time that beads spend in contact with the surface decreases as f increases. In fact, beads with high coverage stick rapidly after engagement, while these with low f spend most of their time rolling, which lowers their diffusion coefficient considerably. The normalized RTD over z is shown on Fig. 2b, as extracted from the z-record histogram. It shows that the beads with high f spend most of their pre-arrest time away from surface, while beads with lower coverage f can roll un tumble without sticking, implying that in average they spend more time closer to the surface. III. THEORY According to the theory of Mani, Gopinath and Mahadavan (MGM) [22] a typical time scale for soft contact aging is given by τ * ∼ a l 1 × τ visc. × 1 2 Kl 2 1 /k B T , (3) where τ visc. = 3µ/(nKl 1 ) is the viscous settling time, a is the microparticle radius, K is the spring constant of a single polymer involved in sticking, l 1 is its rest length, µ is the viscosity and n is the number of available sticky polymer tethers per unit surface. Thus, τ * is the result of interplay between visco-eleastic time and the attachment efficiency measured by Kl 2 1 /(2k B T ) factor. The resulting τ * = µa/nk B T is independent of the elastic constant of the tethers. On the contrary, the settling vertical distance during the draining does depend on spring constant. However, the latter does not intervene in our present model. For our case the most important is the inverse proportionality between τ * and n. Notice that the binding energy per sticking tether does not enter in the MGM theory since the polymer-substrate sticking rate is determined by the first passage time. Moreover, according to [1,2,18], the pre-arrest dynamics of soft adhesion proceeds step-wise, i.e. the 3D approach is well separated from soft contact (2D) rolling phase, indicating that the untethered and the rolling colloids can be seen as separated populations. 
We construct the rate equations for three fractions: 1) the untethered fraction, represented by the probability P_b(t); 2) the fraction of rolling colloids of age τ, whose probability to be found between the ages τ and τ + dτ is P_r(t, τ) dτ; and 3) the population of colloids in arrest, with probability

P_a(t) = 1 - P_b(t) - ∫_0^t P_r(t, τ) dτ.    (4)

Time zero is chosen to coincide with the onset of the attractive interactions. Consequently, rolling particles cannot be older than t. To describe properly the evolution of the rolling population, we introduce the following physical quantities: the sedimentation rate κ, the re-dispersion rate function a(τ), and the irreversible stopping rate function b(τ). κ corresponds to the fraction of particles within the observation window absorbed per unit time by a totally absorbing sink at z = 0. It can be determined from the stationary solution of the Fokker-Planck equation with z-dependent viscous drag due to near-wall hydrodynamic corrections [23], in the gravitational and van der Waals potential. The observation window in our case corresponds to the layer of 6-7 µm above the surface. For our choice of parameters we calculated the theoretical value κ = 0.17 s^-1. a(τ) is the re-dispersion rate of rolling colloids of age τ back into the bulk, and b(τ) is the irreversible stopping rate of rolling colloids of age τ. The rate equation for P_r(t, τ) is determined by the rates κ, a(τ) and b(τ) as follows:

dP_r(t, τ)/dt = -∂P_r(t, τ)/∂τ + δ(τ) κ P_b(t) - a(τ) P_r(t, τ) - b(τ) P_r(t, τ),    (5)

where δ(τ) is the Dirac delta function, ensuring that any newly sunk particle is a "just born" rolling one (τ = 0). The first term on the RHS reproduces simple uniform time evolution. To illustrate, suppose that we have some initial distribution P_r(t = 0, τ) = F(τ); in the absence of any source or sink terms (κ = 0 and a = b = 0, respectively), at some later time t the solution is F(τ - t), that is, the time evolution is a trivial shift by -t. Nontrivial (and interesting) modulation of the time evolution is brought in by the sources and sinks. The rate equation for untethered colloids is

Ṗ_b = -κ P_b + ∫_0^t a(τ) P_r(t, τ) dτ,    (6)

which, together with Eqs. 4 and 5, determines completely the evolution of the system. Since in our experiment the particles are unresolved over τ, starting from Eq. 5 a simplified equation is derived, in which all rolling particles are represented by

P_r(t) ≡ ∫_0^t P_r(t, τ) dτ.    (7)

We will suppose that the re-dispersion rate a(τ) and the arrest rate b(τ) depend on τ over a characteristic aging time scale τ*, allowing us to write:

a(τ) = k_off Φ(τ/τ*),   b(τ) = k χ(τ/τ*),    (8)

where Φ is a monotonically decreasing and χ a monotonically increasing function, both bounded between 0 and 1. We introduce the effective aging functions ρ and g as follows:

∫_0^t Φ(τ/τ*) P_r(t, τ) dτ = ρ(t/τ*) P_r(t)    (9)

and

∫_0^t χ(τ/τ*) P_r(t, τ) dτ = g(t/τ*) P_r(t).    (10)

These relations are purely formal and do not allow one to obtain the functions ρ and g from the original aging functions a and b. In this regard the present theory is merely a phenomenology, because the aging functions are not given explicitly in terms of the parameters of the model.
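Before reducing the description to the total rolling population, the age-resolved system (Eqs. 5 and 6) can also be integrated directly, for instance with a first-order scheme in which the δ(τ) source becomes a boundary contribution at age zero. The sketch below is a minimal illustration of that scheme under stated assumptions: the exponential choices for Φ and χ, the grid resolution and the parameter interface are all choices made for the example, not prescriptions of the model.

```python
import numpy as np

def evolve_age_resolved(kappa, k_off, k_stop, tau_star, T=200.0, dt=0.01):
    """Integrate the age-resolved Eqs. 5-6 with a first-order upwind scheme.

    p[j] stores the mass of rolling particles in age bin j (i.e. P_r(t, tau) dtau),
    so ageing is a shift by one bin per time step and the delta(tau) source
    feeds bin 0.  Returns t, P_b(t), total P_r(t), and P_a(t) from Eq. 4.
    """
    n = int(T / dt)
    p = np.zeros(n + 1)                                # age bins
    P_b = 1.0
    P_b_t, P_r_t = np.zeros(n), np.zeros(n)
    ages = np.arange(n + 1) * dt
    a = k_off * np.exp(-ages / tau_star)               # re-dispersion rate vs age
    b = k_stop * (1.0 - np.exp(-ages / tau_star))      # arrest rate vs age
    for i in range(n):
        back_flux = np.sum(a * p)                      # int a(tau) P_r(t, tau) dtau
        p -= (a + b) * p * dt                          # losses to bulk and to arrest
        p[1:] = p[:-1].copy()                          # ageing: shift every bin by dt
        p[0] = kappa * P_b * dt                        # newly engaged particles, age ~ 0
        P_b += (-kappa * P_b + back_flux) * dt
        P_b_t[i], P_r_t[i] = P_b, p.sum()
    t = np.arange(1, n + 1) * dt
    return t, P_b_t, P_r_t, 1.0 - P_b_t - P_r_t        # Eq. 4
```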
However, relations 9 and 10 show that it is always possible to recast the system of rate equations for P_r(t, τ) into a system for the total number of rolling beads P_r(t), and that the effective aging functions ρ and g vary on the same aging time scale τ* as the original functions a and b. By integrating Eq. 5 over τ and using the definition 7 we obtain the rate equations

Ṗ_b = -κ P_b + k_off ρ(t/τ*) P_r,
Ṗ_r = κ P_b - [k_off ρ(t/τ*) + k g(t/τ*)] P_r,    (11)

while the third population of colloids is in arrest, P_a(t) = 1 - P_b(t) - P_r(t). According to [2,18,22] the "age" can be associated with the number of stuck point contacts within the engaged soft domain. We chose ρ(x) = e^{-x} and g(x) = 1 - ρ(x) because we want the re-dispersion rate to decrease and the arrest rate to increase upon aging. The initial conditions of the system are P_r(0) = P_a(0) = 0 and P_b(0) = 1. The central issue of this work is to find out how the parameters of the model, Eqs. 11, depend on f. Following MGM, we suppose that the aging time τ* scales as n^-1, but with an extended meaning of n as the effective surface density of sticky tethers visited by the contact domain during rolling, Fig. 3b. Sticky tethers for us are the dangling polymers that are able to reach the opposite bare surface by overcoming the steric shield. This is possible in the immediate vicinity of the spots containing collapsed PNIPAM. The effective number of available sticky tethers is therefore proportional to the number of PNIPAM spots visited by the contact domain. For low f, we expect the average width of the contact domain to be smaller than the typical distance between PNIPAM spots. The number of tethers involved is then proportional to the linear density of PNIPAM patches along the rolling path, i.e. n ∼ f^{1/2}, while for higher f, n becomes proportional to f, since the inter-patch distance becomes smaller than the width of the searching trail. Notice that the present argument does not depend on the details of the PNIPAM disposition over the surface, as long as PLL-g-PNIPAM is distributed over a discrete number of spots. In particular, the PLL-g-PNIPAM molecules can be grouped in patches with some size distribution. In order to confirm that the present argument makes sense, we assumed that all parameters of Eqs. 11 depend on f through a single, monotonically increasing, scaling function α(f), proportional to the number of available tethers within the searching area. Accordingly, for the aging time we set τ* = τ*_0/α(f). Since the stopping rate constant k is supposed to increase with α(f), we put k = k_0 α(f). The detachment rate constant should decrease with an increasing number of sticking points: we use k_off = k_off,0/α(f). The P_a(t) calculated using Eqs. 11 is fitted to the adsorption kinetics by adjusting solely the value of α for each f, see Figure 1b. The absolute scale for α being arbitrary, we choose α(f = 2%) = 1. The fitting parameters are k_0 = k_off,0 = 0.05 s^-1 and τ*_0 = 350 s for all curves. Figure 3a shows the best fitting scaling function α(f). It is consistent with a ∼√f tendency for low f and is steeper for high f, which confirms our picture based on the interplay between the discreteness of the PNIPAM spots and the finite width of the contact domain. We now want to reproduce the measured in-plane diffusion D_eff, Fig. 2a.
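Before turning to D_eff, here is a minimal numerical sketch of the kinetics fit just described: Eqs. 11 integrated with ρ(x) = e^{-x}, g = 1 - ρ, and the f dependence carried entirely by α(f) through τ* = τ*_0/α, k = k_0 α, k_off = k_off,0/α. The default parameter values are those quoted above; the simple Euler integrator and the function interface are illustrative choices.

```python
import numpy as np

def adsorption_kinetics(alpha, kappa=0.17, k0=0.05, koff0=0.05, tau0=350.0,
                        T=300.0, dt=0.05):
    """Integrate Eqs. 11 and return t, P_a(t) for a given scaling factor alpha(f)."""
    tau_star = tau0 / alpha
    k, k_off = k0 * alpha, koff0 / alpha
    n = int(T / dt)
    P_b, P_r = 1.0, 0.0
    P_a = np.zeros(n)
    for i in range(n):
        t = i * dt
        rho = np.exp(-t / tau_star)        # effective re-dispersion aging, rho(t/tau*)
        g = 1.0 - rho                      # effective arrest aging, g(t/tau*)
        dP_b = -kappa * P_b + k_off * rho * P_r
        dP_r = kappa * P_b - (k_off * rho + k * g) * P_r
        P_b += dP_b * dt
        P_r += dP_r * dt
        P_a[i] = 1.0 - P_b - P_r
    return np.arange(1, n + 1) * dt, P_a

# Fitting alpha for each coverage f then amounts to a one-parameter least-squares
# match between adsorption_kinetics(alpha) and the measured P_a(t) curve of Fig. 1b.
```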
Since the process is non-stationary and irreversible, we must know at what moment (t_1) and for how long (∆t) the tracks are recorded, because the average fraction of time that beads spend in the "b" and "r" states evolves in time. We assume that the most probable time t_1 is equal to t_max(f), the time of maximal adsorption rate, corresponding to the maximum slope of the kinetics, Fig. 1b. The tracking duration was ∆t = 16 s, a compromise between the number of particle excursions between "b" and "r" and the lifetime of a moving particle starting from t = 0. D_eff is calculated as the time average of the instantaneous diffusion d_inst(t) over ∆t centred at t_max, which mimics the way in which the experimental D_eff is extracted from the tracking. Formally, it reads

D_eff ≈ (1/∆t) ∫_{t_max-∆t/2}^{t_max+∆t/2} d_inst(t) dt,    (12)

where the instantaneous diffusion constant is

d_inst(t) = [D_b P_b(t) + D_r P_r(t)] / [P_b(t) + P_r(t)],

D_b and D_r being effective diffusion constants for untethered and rolling particles respectively, while P_b(t) and P_r(t) are calculated from Eqs. 11. It is important not to confound the quantity d_inst(t) with the instantaneous diffusion "constant" of a single particle, which can be extracted from the tracking and is a fast fluctuating quantity. In fact, d_inst(t) is interpreted as the ensemble average of the diffusion constant at time t. (Equation 12 ignores the temperature variation within the observed T-ramp segment; the temperature dependence enters via D_0(T) = k_B T/(6πµ(T)R) and µ(T) decreases from 0.77 to 0…) The calculated time evolution of d_inst(t) for a range of coverage values f is shown as dashed lines in Figure 4. The portions drawn in bold correspond to the most probable time intervals of the tracking recording. According to Eq. 12, the resulting effective in-plane diffusion constant D_eff is the average over these intervals. Notice that for the very highest values of f the acquisition interval spreads even over times before the PNIPAM collapse transition at t = 0. The resulting calculated profile of D_eff(f)/D_0, shown in Figure 2a, is fitted to the experimental data by adjusting D_b and D_r, while we used the same dependence α(f) (Fig. 3a) that fits the adsorption kinetics, Fig. 1b. The best fitting values are D_b = 0.73 D_0 and D_r = 0.15 D_0. In order to fit the residence time distribution (RTD), Fig. 2b, to our theory, we associate the RTD with a probability distribution function (PDF), supposed to have the form

W(z) = P̄_b W_b(z) + (1 - P̄_b) W_r(z),    (13)

where P̄_b is the average of P_b(t) over the observation time ∆t centred at t = t_max:

P̄_b = (1/∆t) ∫_{t_max-∆t/2}^{t_max+∆t/2} P_b(t) / [P_b(t) + P_r(t)] dt.    (14)

It is in fact the average fraction of time that a particle spends away from the surface during the recording time interval. We take the equilibrium barometric law for the PDF of detached particles,

W_b(z) ∼ e^{-mgz/k_B T},    (15)

where m is the buoyant mass, and we assume that rolling particles have a phenomenological distribution

W_r(z) ∼ e^{-a_r z/k_B T},    (16)

a_r being the apparent weight of rolling particles, a_r ≫ mg. The calculated distributions W(z) are shown in Figure 2b, together with P̄_b(f) in the inset. We see that the asymptotic part of the RTD, corresponding to the untethered particles, is fairly well reproduced by our model, i.e. the untethered population decreases, as predicted, with decreasing f.
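The quantities compared with the measurements in Figs. 2a and 2b follow directly from the P_b(t), P_r(t) solutions of Eqs. 11. A minimal sketch, assuming t, P_b and P_r are NumPy arrays produced by the integrator above and that t_max has been read off the kinetics; the numerical normalisation of W_b and W_r over the observation window is a choice made for the example.

```python
import numpy as np

def effective_diffusion(t, P_b, P_r, D_b, D_r, t_max, dt_window=16.0):
    """Eq. 12: average d_inst(t) over the window [t_max - dt/2, t_max + dt/2]."""
    d_inst = (D_b * P_b + D_r * P_r) / (P_b + P_r)
    sel = (t >= t_max - dt_window / 2) & (t <= t_max + dt_window / 2)
    return d_inst[sel].mean()

def average_Pb(t, P_b, P_r, t_max, dt_window=16.0):
    """Eq. 14: average fraction of time spent away from the surface."""
    sel = (t >= t_max - dt_window / 2) & (t <= t_max + dt_window / 2)
    return np.mean(P_b[sel] / (P_b[sel] + P_r[sel]))

def residence_pdf(z, P_b_bar, mg_over_kT, ar_over_kT):
    """Eqs. 13-16: two-population PDF of heights z, normalised over the window."""
    W_b = np.exp(-mg_over_kT * z)
    W_r = np.exp(-ar_over_kT * z)
    W_b /= np.trapz(W_b, z)
    W_r /= np.trapz(W_r, z)
    return P_b_bar * W_b + (1.0 - P_b_bar) * W_r
```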
IV. DISCUSSION

In the light of the rate-equation-based theory, Eqs. 11, we understand why the samples with the highest f also have the highest D_eff. At high f a large majority of the moving particles are in the suspension, far from the surface, during tracking, because the rolling regime is very short (i.e. the stopping is faster than the free sedimentation, k ≫ κ, so that the beads get arrested as soon as they touch the surface), see the inset of Fig. 2b. For low f this is not the case any more: the beads spend most of their time engaged in rolling motion, which is much slower. In that sense, the most interesting regime for us is the one of moderately low f, since rolling and aging are the signatures of soft adhesion. For f ≲ 5%, the theory predicts a rapid increase in D_eff(f)/D_0 with decreasing f, as expected, because particles without PNIPAM never stick nor roll on the surface. This regime is controlled by the stopping rate constant k, which scales as √f for small f. The points measured at f = 5% and 2% confirm this tendency. Another instructive point to discuss concerns the value D_b = 0.73 D_0 that fits the experimental data. One expects it to correspond to the equilibrium near-wall hindered diffusion of untethered particles. The corresponding stationary PDF, taking only the gravitational potential into account, is W_b(z), given by Eq. 15, which indeed fits the RTD, see Figure 2. The diffusion constant at distance z for parallel (in-plane) motion is D_0/φ_∥(z), where φ_∥(z) is the hindrance factor due to hydrodynamic interactions of a spherical particle moving near a flat surface. The analytic form of φ_∥(z) is a standard result reported in the literature [O'Neill; 26; Russel, Colloidal Dispersions]. Taking the average over z we get

D_b = D_eff|_{f=0} = D_0 ∫ W_b(z)/φ_∥(z) dz = 0.72 D_0,    (17)

which is fairly close to the value that fits the experiment. Interestingly, taking into account also the Van der Waals interaction we get D_b = 0.44 D_0, which is too low. This indicates that untethered colloids live in the gravitational potential only, since the steric shield prevents them from approaching the surface and feeling the VdW forces. One could wonder whether our whole interpretation in terms of rolling and aging is over-complicated, and whether the experimental findings could instead be understood through concentration depletion in the vicinity of the partially absorbing sink at z = 0, considering only two populations, the untethered and the arrested one. Indeed, it sounds plausible that higher adsorption implies a lower concentration near the surface, and consequently some shift of the overall population upwards, where the diffusion hindrance is less effective. The consequence would be that the average ⟨D⟩ increases with increasing f, just as we want. For that reason, we calculated ⟨D⟩, which under the present assumptions equals D_b, given by Eq. 17. The calculation of W_b(z) for a partially absorbing wall implies a finite-flux solution of the Fokker-Planck equation [15]. We find that D_eff increases linearly with f, as shown in Figure 2a. The cases with the gravitational potential alone and including also the Van der Waals part are both in disagreement with the experimental points, which shows that a model that ignores the possibility of rolling cannot explain D_eff(f). For a similar system, the increase of ⟨D⟩ upon raising the temperature above T_c has been reported in [Tu et al.] under equilibrium conditions, where the phenomenon was attributed to electrostatic effects.
This interpretation cannot be applied to our case in which, for high f, the particles spend most of their time away from the surface, at distances much larger than the Debye length. Finally, let us discuss the assumption made in our model that the f dependence is contained simply in the parameter α(f), with the meaning of an effective surface density of available sticking tethers during the soft sticking. This amounts to assuming that individual "stick" and "release" events between one PEG end and the adjacent surface are independent of the PNIPAM coverage f. In other words, when collapsing at T_c, the PNIPAM patches simply allow a finite number of PEGs to reach the adjacent surface, without affecting the sticking event of each PEG with the surface, which includes the diffusive search of the surface by the PEG end. A priori, this is a reasonable assumption because within the PLL-g-PEG coating the surface density of PEG chains is well below the over-crowded coverage [6], which would otherwise affect the sticking/release dynamics per PEG chain at low f. The agreement between the model and the experimental findings confirms that the present interpretation makes sense.

V. CONCLUSION

In conclusion, the soft adsorption kinetics depends on f through a single function α(f), which is a measure of the number of discrete sticky patches within the soft contact area during Brownian rolling, extending the theory of Mani, Gopinath and Mahadevan [22]. At low/high f the effective inter-patch distance is larger/smaller than the contact diameter. From the point of view of the PNIPAM spots (Figure 3b) the trail of the rolling contact domain crosses over from 1D to 2D, implying a crossover of α(f) from ∼√f to ∼f. The two most remarkable effects are (i) a characteristic sigmoidal profile of the adsorption kinetics due to aging, and (ii) a decrease of the in-plane diffusion constant in the pre-arrest Brownian dynamics with decreasing f, due to the longer pre-adhesion rolling time at low coverage.

FIG. 1. a) Typical 3D tracking record. The bead was captured irreversibly at t = 15.2 s. Inset: schematic visualisation of the temperature-ramp experiment. b) Fraction of adsorbed particles as a function of time in a T-ramp of 10 °C/min between 26 °C and 38 °C, for a range of PNIPAM ratios f. Solid lines are the calculated best-fitting adsorption profiles P_a(t).

FIG. 2. a) Experimental points: effective in-plane diffusion D_eff(f) obtained by fitting the MSDs immediately before stopping. Bold line: result of our theory. Dotted lines: estimation of the near-wall diffusion hindrance by hydrodynamic effects near a partially absorbing wall in the gravitational potential. Dashed line: same estimation, including the Van der Waals potential. b) Residence time distribution (RTD) extracted from the z(t) tracking records, compared to the calculated PDF W(z). Inset: P̄_b(f), the calculated average fraction of time that particles spend away from the surface during the recording time interval.

FIG. 3. a) Dot symbols: scaling parameter α(f) fitting the experiments. The low-f and high-f regimes are visible. b) Schematic interpretation of the two regimes. The shaded trail is the bead surface portion visited by the contact domain during Brownian rolling. At low f the number of tethers involved is proportional to the linear density of PNIPAM patches along the rolling path, while at high f it crosses over to the surface density.

FIG. 4. Dashed lines: d_inst(t)/D_0 within the present model based on aging rolling particles, for a range of coverages f. The values of α(f) used here are the ones that fit the adsorption kinetics, Figure 1b. Bold portions: most probable time intervals of particle tracking. The effective diffusion constant D_eff is obtained as the average over the bold portions, see Eq. 12.
ACKNOWLEDGMENTS

We thank Maurizio Nobili for critical reading of the manuscript and Ken Sekimoto for stimulating and helpful discussions. This work was supported by ANR DAPPlePur 13-BS08-0001-01 and the program "investissement d'avenir" ANR-11-LABX-0011-01.
01772810
en
[ "sde", "sdu", "sdu.stu", "sdu.stu.ag" ]
2024/03/05 22:32:18
2018
https://brgm.hal.science/hal-01772810/file/NOAFrance_Cagnard.pdf
Naturally Occurring Asbestos in France: Geological Mapping, Mineral Characterization and Regulatory Developments
Cagnard, Florence, French Geological Survey, [email protected]; Lahondere, Didier, French Geological Survey, [email protected]

In France, the ban on asbestos is governed by a national decree (n° 96-1133), published in 1996. The regulatory texts and standards adopted to enforce this ban concern in particular asbestos-bearing manufactured products, but remain difficult to apply to asbestos-bearing natural materials (i.e. rocks, soils). Considering the problems related to such asbestos-bearing natural materials, the Ministry of Ecological and Solidary Transition has mandated the French Geological Survey to locate the impacted areas. Mapping was carried out as a priority in geological domains where NOA was expected (French Alps, Corsica). These studies integrated field expertise, sampling and laboratory analyses, in order to characterize the potential of geological units to contain NOA. Furthermore, expert assessments were carried out on geological formations exploited in France to produce aggregates. These studies concerned quarries exploiting massive basic or ultrabasic rocks, likely to contain NOA, and quarries exploiting alluvium likely to contain asbestos-bearing rock pebbles. These studies highlight the difficulty of establishing robust diagnoses for natural materials. Indeed, the distinction between cleavage fragments resulting from the fragmentation of non-asbestos particles and proper asbestos fibers is particularly problematic for laboratories. Thus, a recent study by the National Agency for Health Safety, Food, Environment and Work (2015) recommends applying the asbestos regulation to elongated mineral particles (L/D > 3:1, L > 5 μm, D < 3 μm) whose chemical composition corresponds to one of the five regulated amphibole species, irrespective of their mode of crystallization (asbestiform or non-asbestiform). The upcoming regulatory changes are part of a decree published in 2017, which includes the prior identification of asbestos in natural soils or rocks likely to be impacted by the execution of works. Specific protocols will be defined for the sampling, analysis and characterization of natural materials that may contain asbestos.
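For illustration, the dimensional part of the recommended criterion (L/D > 3:1, L > 5 µm, D < 3 µm) can be encoded as a simple screening function. This is only a sketch: the mineralogical condition (composition matching one of the five regulated amphibole species) is represented by a placeholder flag that would, in practice, come from a separate compositional analysis.

```python
def is_regulated_emp(length_um, diameter_um, is_regulated_amphibole):
    """Screen an elongated mineral particle against the recommended criterion.

    length_um, diameter_um: particle dimensions in micrometres.
    is_regulated_amphibole: True if the composition matches one of the five
    regulated amphibole species (determined separately, e.g. by microanalysis).
    """
    dimensional = (length_um / diameter_um > 3.0
                   and length_um > 5.0
                   and diameter_um < 3.0)
    return dimensional and is_regulated_amphibole

# Example: a 12 um x 0.8 um regulated amphibole particle would be counted,
# irrespective of whether it is asbestiform or a cleavage fragment.
print(is_regulated_emp(12.0, 0.8, True))   # True
```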
01772827
en
[ "spi.signal" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-01772827/file/TrASP_TF_CUBSS_C2_vf.pdf
Abdeldjalil Aïssa-El-Bey Karim Abed-Meraim Yves Grenier Blind Separation of Underdetermined Convolutive Mixtures using their Time-Frequency Representation Keywords: blind source separation, underdetermined/overcomplete representation, vector clustering, subspace projection, speech signals, convolutive mixture, time-frequency distribution, sparse signal decomposition/representation This paper considers the blind separation of nonstationary sources in the underdetermined convolutive mixture case. We introduce, two methods based on the sparsity assumption of the sources in the time-frequency (TF) domain. The first one assumes that the sources are disjoint in the TF domain; i.e. there is at most one source signal present at a given point in the TF domain. In the second method, we relax this assumption by allowing the sources to be TF-nondisjoint to a certain extent. In particular, the number of sources present (active) at a TF point should be strictly less than the number of sensors. In that case, the separation can be achieved thanks to subspace projection which allows us to identify the active sources and to estimate their corresponding time-frequency distribution (TFD) values. Another contribution of this paper is a new estimation procedure for the mixing channel in the underdetermined case. Finally, numerical performance evaluations and comparisons of the proposed methods are provided highlighting their effectiveness. I. INTRODUCTION T HE OBJECTIVE of blind source separation (BSS) is to extract the original source signals from their mixtures and possibly to estimate the unknown mixing channel using only the information of the observed signal with no, or very limited, knowledge about the source signals and the mixing channel. The BSS problem arises in many fields of study including speech processing, data communication, biomedical signal processing, etc [START_REF] Cichocki | Adaptive Blind Signal and Image Processing[END_REF]. Most approaches to blind source separation assume the sources are statistically independent and thus are often seek solutions of separation criteria using higher-order statistical information [START_REF] Cardoso | Blind signal separation: statistical principles[END_REF] or using only second order statistical information in the case where the sources have temporal coherency [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], are nonstationary [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF], or eventually are cyclostationary [START_REF] Abed-Meraim | Blind source separation using second order cyclostationary statistics[END_REF]. Although the plethora of existing BSS algorithms, the underdetermined case (UBSS for underdetermined blind source separation) where the number of sources is greater than the number of sensors remains relatively poorly treated especially in the convolutive case, and its resolution is one of the challenging problems of blind source separation. In the instantaneous mixture case, some methods exploiting the sparseness of the sources in certain transform domain have been proposed for UBSS [START_REF] Bofill | Underdetermined blind source separation using sparse representations[END_REF]- [START_REF] Abrard | A time-frequency blind signal separation method applicable to underdetermined mixtures of dependent sources[END_REF]. Other methods consider Manuscript received July 1, 2006; revised January 31, 2007. A. Aïssa-El-Bey, K. Abed-Meraim and Y. 
Grenier are with the TSI Department, ENST-Paris, 46 rue Barrault 75634, Paris Cedex 13, France. Email: {elbey, abed, grenier}@tsi.enst.fr similarly underdetermined mixtures of delayed sources [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF], [START_REF] Rosca | Generalized sparse signal mixing model and application to noisy blind source separation[END_REF]. All these methods proceed 'roughly' as follows: The mixtures are first transformed to an appropriate representation domain; the transformed sources are then estimated using their sparseness, and finally one recovers their time waveforms by source synthesis (for more information, see the recent survey work [START_REF] O'grady | Survey of sparse and nonsparse methods in source separation[END_REF]). The UBSS methods for nonstationary sources have been proposed, given that these sources are sparse in the timefrequency (TF) domain [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. The first method uses timefrequency distributions (TFDs), whereas the second one uses a linear TFD. The main assumption used in these methods is that the sources are TF-disjoint. In other words, there is at most one source present at any point in the TF domain. This assumption is rather restrictive, though the methods have also showed that they worked well under a quasi sparseness condition, i.e. sources are TF-almost-disjoint. In this paper, we focus on the UBSS in convolutive mixtures case and target the relaxation of the TF-disjoint condition by allowing the sources to be nondisjoint in the TF domain; that is, multiple sources are possibly present at any point in the TF domain. This case has been considered in [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF] for the separation of instantaneous mixtures, in [START_REF] Rosca | Generalized sparse signal mixing model and application to noisy blind source separation[END_REF] for the deconvolution of single-path channels with non-zero delays, in [START_REF] Araki | A novel blind source separation method with observation vector clustering[END_REF] where a priori information about the location of the considered sources as well as an approximation of the filter impulse response are considered, and in [START_REF] Araki | Blind separation of more speech than sensors with less distortion by combining sparseness and ICA[END_REF] where binary TF-masking (or directivity pattern based masking [START_REF] Araki | Underdetermined blind separation of convolutive mixtures of speech with directivity pattern based mask and ICA[END_REF], [START_REF] Araki | Underdetermined blind separation of convolutive mixtures of speech by combining time-frequency masks and ICA[END_REF]) and ICA technique are jointly used. In particular, we limit ourselves to the scenario where the number of sources present at any point is smaller than the number of sensors. Under this assumption, the separation of TF-nondisjoint sources is achieved thanks to subspace projection. Subspace projection allows us to identify at any point the active sources, and then to estimate their corresponding TFD values. The main contribution of this paper consists in two new algorithms for UBSS in the TF domain; the first one uses vector clustering while the other uses subspace projection. 
Another side contribution of the paper is an estimation method for the mixing channel matrix. The paper is organized as follows. Section II-A formulates the UBSS problem, introduces the underlying TF tools, and states some TF conditions necessary for the separation of nonstationary sources in the TF domain. In Section III-A, we propose a new method for the blind estimation of mixing channel. Section III-B deals with the TF-disjoint sources. It proposes a cluster-based TF-CUBSS (Time-frequency convolutive underdetermined blind source separation) algorithm. Section III-C proposes the subspace-based TF-CUBSS algorithm for TF-nondisjoint sources. Some comments and remarks on the proposed methods are provided in Section IV. Finally, the performance of the above methods are numerically evaluated in Section V while Section VI is devoted for the concluding remarks. II. PROBLEM FORMULATION A. Data model Let s 1 (t), . . . , s N (t) be the desired sources to be recovered from the convolutive mixtures x 1 (t), . . . , x M (t) given by: x(t) = K k=0 H(k)s(t -k) + η(t) (1) where s(t) = [s 1 (t), . . . , s N (t)] T is the source vector with the superscript T denoting the transpose operation, x(t) = [x 1 (t), . . . , x M (t)] T is the mixture vector, η(t) is the observation noise, and H(k ) def = [h 1 (k), . . . , h N (k)] are M × N matrices for k ∈ [0, K] representing the impulse response coefficients of the channel that satisfies: Assumption 1: The channel is such that each column vector of H(z) def = K k=0 H(k)z -k def = [h 1 (z), . . . , h N (z)] is irreducible, i.e. the entries of h i (z) denoted h ij (z), j = 1, . . . , M , have no-common zeros ∀i. Moreover, any M column vectors of H(z) form a polynomial matrix H(z) that it full rank over the unit-circle, i.e. rank( H(f )) = M ∀f . The sources are nonstationary, that is their frequency spectra vary in time. Often, nonstationarity gives rise to more difficulties in a problem, however, in this case it actually offers certain diversity that allows us to achieve the BSS without using higher-order approaches by directly exploiting the additional information of this TF diversity across the spectra [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF]. In that case, we often use the powerful tool of time-frequency signal analysis which basic concept is introduced next. B. Time-frequency distributions TF signal processing provides effective tools for analyzing nonstationary signals, whose frequency content varies in time. This concept is a natural extension of both the time domain and the frequency domain processing that involves representing signals in a two-dimensional space, the joint TF domain, hence providing a distribution of signal energy versus time and frequency simultaneously. For this reason, a TF representation is commonly referred to as a time-frequency distribution (TFD). Well-known TFD 1 and most used in practice is the shorttime Fourier transform (STFT): S x (t, f ) ∞ -∞ x(τ )w(τ -t) e -j2πf τ dτ, ( 2 ) where w(t) is a windowing function and x(t) a given nonstationary signal. Note that the STFT is a linear TFD and thus has the advantage of simplicity compared to other nonlinear (quadratic) TFDs, e.g. Wigner-Ville and Cohen's class distributions [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]. C. 
TF conditions on the sources In order to deal with UBSS, one often seeks for a sparse representation of the sources [START_REF] Bofill | Underdetermined blind source separation using sparse representations[END_REF]. In other words, if the sources can be sparsely represented in some domain, then their separation can be carried out in that domain by exploiting their sparseness. 1) TF-disjoint sources: Recently, there have been several UBSS methods, notably those in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] and [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF], in which the TF domain has been chosen to be the underlaying sparse domain. These two papers have based their solutions on the assumption that the sources are disjoint in the TF domain. Mathematically, if Ω 1 and Ω 2 are the TF supports of two sources s 1 (t) and s 2 (t) then the sources are said TF-disjoint if Ω 1 ∩ Ω 2 = ∅. However, this is a rather strict assumption. A more practical assumption is that the sources are almostdisjoint in the TF domain [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], allowing some small overlapping in the TF domain, for which the above two methods (in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] and [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]) also worked. 2) TF-nondisjoint sources: In this paper, we want to relax the TF-disjoint condition by allowing the sources to be nondisjoint in the TF domain. This is motivated by a drawback of the methods in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. Although these methods worked under the TF-almostdisjoint condition, they did not explicitly treat the TF regions (points) where the sources were overlapping. A point at the overlapping of two sources was assigned 'by chance' to belong to only one of the sources. As a result, the source that picks up this point will have some information of the other source while the latter loses some information of its own. The loss of information can be recovered to some extent by the interpolation at the intersection point using TF synthesis. However, for the other source, there is an interference at this point, hence the separation performance may degrade if no treatment is provided. If the number of overlapping points increases (i.e. the TF-almost-disjoint condition is violated), the performance of the separation is expected to degrade unless the overlapping points are properly treated. This paper will give such a treatment using subspace projection. Therefore, we will allow the sources to be nondisjoint in the TF domain; that is, multiple sources are allowed to be present at any point in the TF domain. However, instead of being inevitably nondisjoint, we limit ourselves by making the following constraint: Assumption 2: The number of active sources (i.e. sources that overlap) at any TF point is strictly less than the number of sensors. In other words, for the configuration of M sensors, there exists at most (M -1) sources at any point in the TF domain. For the special case when M = 2, Assumption 2 reduces to the disjoint condition. 
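In practice, the sparseness conditions above are checked on the linear TFD introduced in Eq. 2, which can be computed for all sensors at once with an off-the-shelf STFT. The sketch below is a minimal illustration; the sampling rate, window length and overlap are arbitrary example values, not the settings used in the simulations of this paper.

```python
import numpy as np
from scipy.signal import stft

def mixture_stft(x, fs=8000, nperseg=1024):
    """Compute S_x(t, f) for an M x T array of sensor signals.

    Returns (f, t, S) where S has shape (M, n_freq, n_frames): one complex
    STFT per sensor, i.e. S[m, k, l] is sensor m at frequency bin k, frame l.
    """
    f, t, S = stft(x, fs=fs, window="hamming", nperseg=nperseg,
                   noverlap=nperseg // 2)
    return f, t, S
```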
Note that in [START_REF] Araki | Blind separation of more speech than sensors with less distortion by combining sparseness and ICA[END_REF]- [START_REF] Araki | Underdetermined blind separation of convolutive mixtures of speech by combining time-frequency masks and ICA[END_REF], the case of M overlapping sources has been treated thanks to additional strong assumptions that we do not consider in our work. More specifically, the channels are assumed to be of single-path with a given direction of arrival, and the sources are such that one of them is present alone at certain time instant and can be removed by binary masking. III. TF-CUBSS ALGORITHM In order to solve the UBSS problem in the convolutive case, we propose to identify first the impulse response of the channels (see the algorithm's diagram in Figure 1). This problem in overdetermined case is very difficult and becomes almost impossible in the underdetermined case without side information on the considered sources. In this work and similarly to [START_REF] Huang | A blind channel identification-based two-stage approach to separation and dereverberation of speech signals in a reverberant environment[END_REF], we exploit the sparseness property of the audio sources by assuming that from time to time only one source is present. In other words, we consider the following assumption: x 2 (t) x 1 (t) Diagram of proposed TF-CUBSS algorithm combining channel identification and UBSS technique in TF domain. Assumption 3: There exists, periodically, time intervals where only one source is present in the mixture. This occurs for all source signals of the considered mixtures (see Figure 2). To detect these time intervals, we propose to use informationcriteria based testing for the estimation of the number of sources present in the signal (see Section III-A for more details). A. Channel estimation Based on assumption 3, we propose here to apply SIMO (Single Input Multiple Output) based techniques to blindly estimate the channel impulse response. Regarding the problem at hand, we have to solve 3 different problems: first, we have to select time intervals where only one source signal is effectively present; then, for each selected time interval one should apply an appropriate blind SIMO identification technique to estimate the channel parameters; finally, the way we proceed, the same channel may be estimated several times and hence one has to group together (cluster) the channel estimates into N classes corresponding to the N source channels. 1) Source number estimation: Let define the spatiotemporal vector: x d (t) = [x T (t), . . . , x T (t -d + 1)] T = N k=1 H k s k (t) + η d (t), (3) where H k are block-Sylvester matrices of size dM × (d + K): H k =    h k (0) • • • h k (K) 0 . . . . . . 0 h k (0) • • • h k (K)    s k (t) def = [s k (t), . . . , s k (t -K -d + 1) ] T and d is a chosen processing window size. Under the no-common zeros assumption (Assumption 1) and for large window sizes (see [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] for more details), matrices H k are full column rank. Hence, in the noiseless case, the rank of the data covariance matrix R def = E[x d (t)x H d (t) ] is equal to min(p(d + K), dM ) where p is the number of sources present in the considered time interval over which the covariance matrix is estimated. In particular, for p = 1, one has the minimum rank value equal to (d + K). 
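The construction just described (Eq. 3) can be sketched numerically as follows: stack the spatio-temporal vectors over one time slot, form the sample covariance, and infer p from its effective rank. For brevity the sketch replaces the information-theoretic criterion used below by a crude eigenvalue-gap test; the slot handling, threshold and variable names are illustrative assumptions.

```python
import numpy as np

def detect_num_sources(x_slot, d, K, rel_tol=1e-2):
    """Estimate p from the rank of the spatio-temporal covariance (Eq. 3).

    x_slot: M x Ts array of sensor samples in one time slot.
    Returns the estimated number of sources, p ~ round(rank / (d + K)).
    """
    M, Ts = x_slot.shape
    # columns [x(t)^T, x(t-1)^T, ..., x(t-d+1)^T]^T for t = d-1, ..., Ts-1
    cols = [x_slot[:, t - d + 1:t + 1][:, ::-1].T.reshape(-1)
            for t in range(d - 1, Ts)]
    X = np.stack(cols, axis=1)                       # dM x (Ts - d + 1)
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]
    rank = int(np.sum(lam > rel_tol * lam[0]))       # crude effective-rank estimate
    return max(1, int(round(rank / (d + K))))
```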
Therefore, our approach consists in estimating the rank of the sample averaged covariance matrix R over several time slots (intervals) and select those corresponding to the smallest rank value r = d + K. In the case where p sources are active (present) in the considered time slot, the rank would be r = p(d + K) and hence p can be estimated by the closest integer value to r d+K . The estimation of the rank value is done here by Akaike's criterion [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] according to: r = arg min k 2 6 6 6 6 4 -2 log 0 B B B @ M d Q i=k+1 λ 1/(M d-k) i 1 M d-k M d P i=k+1 λ i 1 C C C A (M d-k)T s + 2k(2M d -k) , (4) where λ 1 ≥ . . . ≥ λ M d represent the eigenvalues of R and T s is the time slot size. This criterion represents the maximum likelihood estimate of the system parameters (given here by the signal eigenvectors and eigenvalues of the covariance matrix) penalized by the number of free adjusted parameters under the asymptotic Gaussian distribution of the latter (see [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] for more details). Note that it is not necessary at this stage, to know exactly the channel degree K as long as d > K (i.e. an over-estimation of the channel degree is sufficient) in which case the presence of one signal source is characterized by: d < r < 2d . Histogram representing the number of time intervals for each estimated number of sources for 4 audio sources and 3 sensors in convolutive mixture case. 2) Blind channel identification: To perform the blind channel identification, we have used in this paper the Cross-Relation (CR) technique described in [START_REF] Xu | A least-squares approach to blind channel identification[END_REF], [START_REF] Aïssa-El-Bey | Blind system identification using cross-relation methods: Further results and developments[END_REF]. Consider a time interval where we have only the source s i present. In this case, we can consider a SIMO system of M outputs given by: x(t) = K k=0 h i (k)s i (t -k) + η(t), ( 5 ) where h i (k) = [h i1 (k) . . . h iM (k)] T , k = 0, • • • , K. From ( 5), the noise-free outputs x j (k), 1 ≤ j ≤ M are given by: x j (k) = h ij (k) * s i (k), 1 ≤ j ≤ M, (6) where " * " denotes the convolution. Using commutativity of convolution, it follows: h il (k) * x j (k) = h ij (k) * x l (k), 1 ≤ j = l ≤ M. ( 7 ) This is a linear equation satisfied by every pair of channels. It was shown that reciprocally, the previous M (M -1)/2 cross-relations characterize uniquely the channel parameters. We have the following theorem [START_REF] Xu | A least-squares approach to blind channel identification[END_REF]: Theorem 1: Under the no-common zeros assumption (Assumption 1), the set of cross-relations (in the noise free case): [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF] where h (z) = [h 1 (z) . . . h M (z)] T is a M × 1 polynomial vector of degree K, is satisfied if and only if h (z) = αh i (z) for a given scalar constant α. By collecting all possible pairs of M channels, one can easily establish a set of linear equations. In matrix form, this set of equations can be expressed as: x l (k) * h j (k) -x j (k) * h l (k) = 0, 1 ≤ l < j ≤ M, X M h i = 0, (9) where h i def = [h i1 (0) . . . h i1 (K), . . . , h iM (0) . . . h iM (K)] T and X M is defined by: X 2 = [X (2) , -X (1) ], X n =      X n-1 0 X (n) 0 -X (1) . . . . . . 
0 X (n) -X (n-1)      , ( 10 ) with n = 3, . . . , M and: X (n) =    x n (K) . . . x n (0) . . . . . . x n (T s -1) . . . x n (T s -K -1)    . ( 11 ) In the presence of noise, equation ( 9) can be naturally solved in the least-squares (LS) sense according to: h CR = arg min h i =1 h H i X H M X M h i ( 12 ) which solution is given by the least unit-norm eigenvector of matrix X H M X M . It is shown in [START_REF] Xu | A least-squares approach to blind channel identification[END_REF] that the noise term in the quadratic form [START_REF] Rosca | Generalized sparse signal mixing model and application to noisy blind source separation[END_REF] has a mean value proportional to the identity matrix. Consequently, the channel estimates remains unbiased under white additional noise assumption. Remark: We have presented here a basic version of the CR method. In [START_REF] Ahmad | Proportionate frequency domain adaptive algorithms for blind channel identification[END_REF] an improved version of the method (introduced in the adaptive scheme) is proposed exploiting the quasi-sparse nature of acoustic impulse responses. Other channel estimation techniques in the overcomplete case, e.g. [START_REF] Winter | Overcomplete BSS for convolutive mixtures based on hierarchical clustering[END_REF], can be used as well at this stage. 3) Clustering of channel vector estimates: The first step of our channel estimation method consists in detecting the time slots where only one single source signal is 'effectively' present. However, the same source signal s i may be present in several time intervals (see Figure 2 and Figure 3) leading to several estimates of the same channel vector h i . We end up, finally, with several estimates of each source channel that we need to group together into N classes. This is done by clustering the estimated vectors using k-means algorithm [START_REF] Frank | The data analysis handbook[END_REF]. The i th channel estimate is evaluated as the centroid of the i th class. [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] to detect the number of source, then application of the blind identification algorithm in [START_REF] Xu | A least-squares approach to blind channel identification[END_REF], [START_REF] Aïssa-El-Bey | Blind system identification using cross-relation methods: Further results and developments[END_REF] followed by vector clustering. 2) Mixture STFT computation by [START_REF] Araki | Blind separation of more speech than sensors with less distortion by combining sparseness and ICA[END_REF] and noise thresholding by ( 16) 3) Vector clustering by [START_REF] Araki | Underdetermined blind separation of convolutive mixtures of speech by combining time-frequency masks and ICA[END_REF] and [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]. 4) Source STFT estimation by [START_REF] Huang | A blind channel identification-based two-stage approach to separation and dereverberation of speech signals in a reverberant environment[END_REF]. 5) Source TF synthesis by [START_REF] Griffin | Signal estimation from modified shorttime fourier transform[END_REF]. B. UBSS algorithm with TF-disjoint assumption As we have seen before, the STFT is often used for speech/audio signals because of its low computational cost. Therefore, in this section we propose a new cluster-based TF-CUBSS algorithm using the STFT for convolutive mixture case. 
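Before moving on, the cross-relation identification step of Section III-A just summarised can be sketched compactly: for a time slot where a single source is active, build the pairwise cross-relations of Eqs. 9-10 from Toeplitz convolution blocks and take the least singular vector, which realises the least-squares solution of Eq. 12. The data layout and the way single-source slots are selected are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(x, K):
    """Toeplitz T such that T @ h is the 'valid' part of x * h, h of length K+1."""
    return toeplitz(x[K:], x[K::-1])                 # (Ts - K) x (K + 1)

def cross_relation_identify(x_slot, K):
    """Estimate the SIMO channel (Eqs. 9-12) from one single-source time slot.

    x_slot: M x Ts sensor signals.  Returns h, an M x (K+1) array (up to scale).
    """
    M, Ts = x_slot.shape
    T = [conv_matrix(x_slot[m], K) for m in range(M)]
    rows = []
    for l in range(M):
        for j in range(l + 1, M):
            # x_l * h_j - x_j * h_l = 0  ->  one block row of X_M
            blocks = [np.zeros((Ts - K, K + 1)) for _ in range(M)]
            blocks[j] = T[l]
            blocks[l] = -T[j]
            rows.append(np.hstack(blocks))
    X_M = np.vstack(rows)
    _, _, Vh = np.linalg.svd(X_M, full_matrices=False)
    h = Vh[-1]                                       # least right-singular vector
    return h.reshape(M, K + 1)
```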
Note that the STFT is a particular form of wavelet transforms which have been used in [START_REF] Zibulevsky | Independent Component Analysis: Principles and Practice[END_REF] for the UBSS of image signals. After transformation into the TF domain using the STFT, the model in ( 1) becomes (in the noiseless case): S x (t, f ) = H(f )S s (t, f ), (13) where S x (t, f ) is the mixture STFT vector, S s (t, f ) is the source STFT vector and H(f ) = [h 1 (f ) . . . h N (f )] is the channel Fourier Transform matrix. Under the assumption that all sources are disjoint in the TF domain, (13) reduces to S x (t, f ) = h i (f )S si (t, f ), ∀(t, f ) ∈ Ω i , ∀i ∈ N , ( 14 ) where N = {1, . . . , N } and Ω i is the TF support of the i th source. Consequently, two TF points (t 1 , f 1 ) and (t 2 , f 2 ) belonging to the same region Ω i (i.e. corresponding to the source signal s i ) are 'associated' with the same channel h i . It is this observation that is used to derive the separation algorithm summarized in Table I and detailed next. First, we compute the STFT of the mixtures, S x (t, f ), by applying (2) for each of the mixture in x(t), as follows: S x i (t, f ) = m=(L-1)/2-1 m=-(L-1)/2 w(t -m)x i (m)e -j2πf m , i = 1, . . . , M, (15a) S x (t, f ) = [S x1 (t, f ), . . . , S x M (t, f )] T . ( 15b ) where w(t) is a chosen window (in our simulations we chose Hamming window) of length L. Then, we apply a noise thresholding procedure which mitigates the noise effect and reduces the computational cost as only the selected TF points are further treated by our algorithm. In particular, for each frequency f 0 , we apply the following criterion for all the time points t k belonging to the frequencyslice (t, f 0 ) If S x (t k , f 0 ) max t { S x (t, f 0 ) } > 1 , then keep (t k , f 0 ), ( 16 ) where 1 is a small threshold (typically, 1 = 0.01). Then, the set of all selected points, Ω, is expressed by Ω = N i=1 Ω i , where Ω i is the TF support of the source s i (t). Note that, the effects of spreading the noise energy while localizing the source energy in the time-frequency domain amounts to increasing the robustness of the proposed method with respect to noise (see Part IV of [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]). Hence, by equation ( 16), we would keep only time-frequency points where the signal energy is non-negligible, the other time-frequency points are rejected, i.e. not further processed, since considered to represent noise contribution only. Also, due to the noise energy spreading, the contribution of the noise in the source time-frequency points is relatively, negligible at least for moderate and high SNRs. On the other hand, note that the noise thresholding as well as TF masking induce non-linear distortion in the reconstructed signal. Now, how this distortion affects the source estimates is an open problem that still raises many questioning and research works including those which try to mitigate this distortion in the TF domain, e.g. [START_REF] Rosca | Statistical inference of missing speech data in the ICA domain[END_REF]. After noise thresholding, the clustering procedure can be done as follows: For each TF point, we obtain the spatial direction vectors by: v(t, f ) = S x (t, f ) S x (t, f ) , (t, f ) ∈ Ω, (17) and force them, without loss of generality, to have the first entry real and positive. 
Next, we cluster these vectors into N classes {C i | i ∈ N } by minimizing the criterion: v(t, f ) ∈ C i ⇐⇒ i = arg min k v(t, f ) - h k (f )e -jθ k h k (f ) 2 (18 ) where h k (f ) is the Fourier Transform of the k th channel vector estimate (given by [START_REF] Rosca | Generalized sparse signal mixing model and application to noisy blind source separation[END_REF] and the proposed clustering procedure) and θ k is the phase argument of h k1 (f ) (this is to force the first entry to be real positive). The collection of all points, whose vectors belong to the class C i , now forms the TF support Ω i of the source s i (t). Therefore, we can estimate the STFT of each source s i (t) by: S si (t, f ) = b h H i (f ) b h i (f ) 2 S x (t, f ), ∀ (t, f ) ∈ Ω i , 0, otherwise, (19) since, from ( 14), we have b h H i (f ) b h i (f ) 2 Sx(t, f ) = b h H i (f )h i (f ) b h i (f ) 2 Ss i (t, f ) ≈ Ss i (t, f ), ∀ (t, f ) ∈ Ω i . C. UBSS algorithm with TF-nondisjoint assumption We have seen the cluster-based TF-CUBSS methods, using the STFT, as summarized in Table I. This method relies on the assumption that the sources were TF-disjoint, which led to the TF-transformed structure in [START_REF] Araki | A novel blind source separation method with observation vector clustering[END_REF]. The latter is no longer valid, when the sources are nondisjoint in the TF domain. Under the TF-nondisjoint condition, stated in Assumption 2, we propose in this section an alternative method using subspace projection. Recall that the first two steps of the cluster-based quadratic TF-CUBSS algorithm do not rely on the assumption of TFdisjoint sources (see Table I). Therefore, we can reuse these steps to obtain the channel estimation and all the TF points of Ω. Under the TF-nondisjoint condition, consider a TF point (t, f ) ∈ Ω at which there are J sources s α1 (t), . . . , s α J (t) present, with J < M where α 1 , . . . , α J ∈ N denote the indices of the sources present at (t, f ). Our goal is to identify the sources that are present at (t, f ), i.e. α 1 , . . . , α J , and to estimate the STFT of each of these contributing sources. We define the following: s = [s α 1 (t), . . . , s α J (t)] T , (20a) Hα (f ) = [h α 1 (f ), . . . , h α J (f )]. (20b) Then, ( 13) is reduced to the following S x (t, f ) = Hα (f )S s(t, f ). ( 21 ) Let Hβ (f ) = [h β 1 (f ), . . . , h β J (f ) ] and Q β (f ) be the orthogonal projection matrix onto the noise subspace of Hβ (f ) expressed by: Q β (f ) = I -Hβ (f ) HH β (f ) Hβ (f ) -1 HH β (f ). (22) We have the following observation: Q β (f )h i (f ) = 0, i ∈ {β 1 , . . . , β J } Q β (f )h i (f ) = 0, i ∈ N \{β 1 , . . . , β J } . ( 23 ) Consequently, as S x (t, f ) ∈ Range{ Hα (f )}, we have Q β (f )S x (t, f ) = 0, if {β 1 , . . . , β J } = {α 1 , . . . , α J } Q β (f )S x (t, f ) = 0, otherwise . (24) If H(f ) has already been estimated by the method presented in Section III-A, then this observation gives us the criterion to detect the indices α 1 , . . . , α J ; and hence, the contributing sources at the considered TF point. In practice, to take into account noise, one detects the column vectors of Hα (f ) by minimizing: {α 1 , . . . , α J } = arg min β 1 ,...,β J { Q β (f )S x (t, f ) } . ( 25 ) Next, TFD values of the J sources at TF point (t, f ) are estimated by: S s(t, f ) ≈ H# α (f )S x (t, f ), (26) where the superscript ( # ) represents the Moore-Penrose's pseudo-inversion operator. In the simulation, the optimization problem of ( 25) is solved using exhaustive search. 
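The core of this subspace-based step (Eqs. 22, 25 and 26) can be sketched as follows for a fixed number J of sources assumed active at each selected TF point; the exhaustive search over index subsets and the pseudo-inverse follow the description above, while the data layout is an illustrative choice.

```python
import numpy as np
from itertools import combinations

def separate_tf_point(S_x, H_f, J):
    """Detect the J active sources at one TF point and estimate their STFT values.

    S_x : length-M vector of mixture STFT values at (t, f).
    H_f : M x N estimated channel frequency response at frequency f.
    Returns (indices of the active sources, their estimated STFT values).
    """
    M, N = H_f.shape
    best, best_norm = None, np.inf
    for beta in combinations(range(N), J):
        H_b = H_f[:, beta]
        # Orthogonal projector onto the noise subspace of H_beta (Eq. 22)
        Q = np.eye(M) - H_b @ np.linalg.pinv(H_b)
        r = np.linalg.norm(Q @ S_x)                  # residual of Eq. 25
        if r < best_norm:
            best, best_norm = beta, r
    H_a = H_f[:, best]
    S_s = np.linalg.pinv(H_a) @ S_x                  # Eq. 26
    return best, S_s
```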
This is computationally tractable for small array sizes but would be prohibitive if M is very large. Table II provides a summary of the subspace projection based TF-CUBSS algorithm using STFT. [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF] to detect the number of source, then application of the blind identification algorithm in [START_REF] Xu | A least-squares approach to blind channel identification[END_REF], [START_REF] Aïssa-El-Bey | Blind system identification using cross-relation methods: Further results and developments[END_REF] followed by vector clustering. 2) STFT computation and noise thresholding. 3) For all selected TF points, detect the active sources by [START_REF] Aïssa-El-Bey | Blind system identification using cross-relation methods: Further results and developments[END_REF] and ( 25). 4) Source STFT estimation by [START_REF] Zibulevsky | Independent Component Analysis: Principles and Practice[END_REF]. 5) Source TF synthesis by [START_REF] Griffin | Signal estimation from modified shorttime fourier transform[END_REF]. IV. DISCUSSION We discuss here certain points relative to the proposed TF-CUBSS algorithms and their applications. 1) Number of sources: The number of sources N is assumed known in the clustering method that we have used. However, there exist clustering methods [START_REF] Frank | The data analysis handbook[END_REF] which perform the class estimation as well as the estimation of the number N . In our simulation, we have observed that most of the time the number of classes is overestimated, leading to poor source separation quality. Hence, robust estimation of the number of sources in the UBSS case remains a difficult open problem that deserves particular attention in future works. 2) Number of overlapping sources: In the subspace-based approach, it is also possible to consider a fixed (maximum) value of J that is used for all TF points. Indeed, if the number of overlapping sources is less than J , we would estimate close-to-zero source STFT values. For example, if we assume J = 2 sources are present at a given TF point while only one source is effectively contributing, then we estimate one closeto-zero source STFT value. This approach increases slightly the estimation error of the source signals (especially at low SNRs) but has the advantage of simplicity compared to using information theoretic-based criteria for estimating the value of J . 3) Separation quality versus number of sources: Although we are in the underdetermined case, the number of sources N should not exceed too much the number of sensors. Indeed, when N increases, the level of source interference increases, and hence, the source quasi-disjointness assumption is illsatisfied. Moreover, for a large number of sources, the likelihood of having two closely spaced sources, i.e. such that the spatial directions h i and h j are 'close' to linear dependency, increases. In that case, vector clustering performance degrades significantly. In brief, sparseness and spatial separation are the two limiting factors against increasing the number of sources. 4) Overdetermined case: Our algorithm can be further simplified in the overdetermined case where M ≥ N . In that context, the algorithm can be reduced to the channel estimation step, the STFT computation and noise thresholding then source STFT estimation using the channel matrix pseudo-inversion at each frequency: S s (t, f ) = H # (f )S x (t, f ). V. 
SIMULATION RESULTS In the simulations, we have considered an array of M = 3 sensors, that receives signals from N = 4 independent audio sources (3 speech signals corresponding to 2 men and 1 woman plus a guitar signal). The filter coefficients are chosen randomly and the channel order is K = 6. The sample size is T = 8192 samples (corresponding approximately to 1 second recording of speech signals sampled at 8 KHz). The separation quality is measured by the normalized mean squares estimation errors (N M SE) of the sources evaluated over N r = 200 Monte-Carlo runs and defined as: N M SE i def = 1 N r Nr r=1 min α α s i,r -s i 2 s i 2 (27) N M SE i = 1 N r N r r=1 1 - s i,r s H i s i,r s i 2 ( 28 ) N M SE = 1 N N i=1 N M SE i . ( 29 ) where s i def = [s i (0), . . . , s i (T -1)], s i,r (defined similarly) represents the r th estimate of source s i and α is a scalar factor that compensate for the scale indeterminacy of the BSS problem. In Figure 4, the top four plots represent the TF representation of the original source signals, the middle three plots represent the TF representation of the M mixture signals and the bottom four plots represent the TF representation of the source estimates by the subspace-based algorithm (Table II) using STFT of length 1024. Figure 5 represents the same disposition of signals but in the time domain. In Figure 6, we compare the separation performance obtained by the subspace-based algorithm with J = 2 and the cluster-based algorithm (Table I). It is observed that subspacebased algorithm provides much better separation results than those obtained by the cluster-based algorithm. This is mainly due to the high occurrence of overlapping sources in the TF domain for this type of signals so that the 'TF-disjointness' assumption used by the TF-CUBSS algorithm is poorly satisfied. This can be observed also from Figure 4, where we can see that the TFD supports of the 4 audio sources are clearly overlapping. In Figure 7, we present the performance of channel identification obtained by using SIMO identification algorithm (in this case we choose only the time intervals where only one source is present using AIC criterion) with SIMO and MIMO identification algorithms2 (in this case we choose the time intervals where we are in the overdetermined case; i.e. where p = 1 or p = 2). It is observed that SIMO based identification provides better results than those obtained by SIMO and MIMO identification algorithms. Indeed, the advantage of considering overdetermined MIMO system identification resides in the fact that the occurrence of MIMO (i.e. number of time intervals where this situation occurs as shown in Figure 3) is much higher than that of SIMO case. However, as we observe it, this does not compensate for the higher estimation error of MIMO systems compared to SIMO systems. The plot in Figure 8 (respectively in Figure 9) presents the separation performance when using the exact matrix H compared to that obtained with the proposed estimate H using the cluster-based method (respectively the subspacebased method). The observed performance loss is due to the channel estimation error which is relatively high for low SNRs and becomes negligible for high SNRs. a) S s 1 (t, f ) (b) S s 2 (t, f ) (c) S s 3 (t, f ) (d) S s 4 (t, f ) (e) Sx 1 (t, f ) (f) Sx 2 (t, f ) (g) Sx 3 (t, f ) (h) S ŝ1 (t, f ) (i) S ŝ2 (t, f ) (j) S ŝ3 (t, f ) (k) S ŝ4 (t, f ) In Figure 10, we compare the performance obtained with the subspace-based method for J = 2 and J = 3. 
In that experiment, we have used M = 4 sensors and N = 5 source signals. One can observe that, for high SNRs, the case of J = 3 leads to a better separation performance than for the case of J = 2. However, for low SNRs, a large value of J increases the estimation noise (as mentioned in Section IV) Figure 12 illustrates the algorithm's performance when we consider long impulse response channels. More specifically, the plots represent the separation performance for channels of length 50, 100 and 200 respectively. The channel taps are generated randomly using Gaussian law. We observe a slight performance degradation when the channel order increases but the separation quality remains quite good. In Figure 13, we compare the separation performance of our algorithm, Deville's algorithm in [START_REF] Albouy | Alternative structures and power spectrum criteria for blind segmentation and separation of convolutive speech mixtures[END_REF] and Parra's algorithm in [START_REF] Parra | Convolutive blind separation of nonstationary sources[END_REF] in the overdetermined case of 2 sensors and 2 speech signals of one man and one woman (selected among the four previous sources). The algorithms in [START_REF] Albouy | Alternative structures and power spectrum criteria for blind segmentation and separation of convolutive speech mixtures[END_REF], [START_REF] Parra | Convolutive blind separation of nonstationary sources[END_REF] separate the sources only up to an unknown filter and hence we use in this experiment the interference to signal ratio (ISR) criterion defined in [START_REF] Parra | Convolutive blind separation of nonstationary sources[END_REF] instead of the NMSE. We observe a significant performance gain in favor of the proposed method especially at high SNR values. Moreover, our method has the following advantages : (i) it can treat the underdetermined case, (ii) it estimates the sources up to a constant not to an unknown filter like in [START_REF] Albouy | Alternative structures and power spectrum criteria for blind segmentation and separation of convolutive speech mixtures[END_REF], [START_REF] Parra | Convolutive blind separation of nonstationary sources[END_REF], (iii) the proposed frame selection procedure does not involve any thresholding (the choice of an appropriate threshold value is a difficult problem as it is strongly dependent on the context) or ad-hoc selection of frequency range like in [START_REF] Albouy | Alternative structures and power spectrum criteria for blind segmentation and separation of convolutive speech mixtures[END_REF]. VI. CONCLUSION This paper introduces new methods for the UBSS of TFdisjoint and TF-nondisjoint nonstationary sources in the convolutive mixture case using their time-frequency representations. The first proposed method has the advantage of simplicity while the second uses a weaker assumption on the source 'sparseness', i.e. the sources are not necessarily TF-disjoint, and proposes an explicit treatment of the overlapping points using subspace projection, leading to significant performance improvements. Simulation results illustrate the effectiveness of our algorithms in different scenarios. Fig. 1.Diagram of proposed TF-CUBSS algorithm combining channel identification and UBSS technique in TF domain. 4 Fig. 2 . 42 Fig. 2. Time representation of 4 audio sources: this representation illustrates the audio signal sparsity (i.e. there exists time intervals where only one source is present). 
Figure 3 illustrates the effectiveness of the proposed method, where a recording of 6 seconds of M = 3 convolutive mixtures of N = 4 sources is considered. The sampling frequency is 8 kHz and the time-slot size is T_s = 200 samples. The sources consist of 3 speech signals corresponding to 2 men and 1 woman, plus a guitar signal. The convolutive channel is of order K = 6 and its coefficients are generated randomly using a Gaussian law. One can observe that the case p = 1 (a single source) occurs approximately 10% of the time in the considered context.
Fig. 3. Histogram representing the number of time intervals for each estimated number of sources, for 4 audio sources and 3 sensors in the convolutive mixture case.
Fig. 4. Simulated example (viewed in the TF domain) for the subspace-based TF-CUBSS algorithm in the case of 4 speech sources and 3 sensors. The top four plots represent the original source signals, the middle three plots represent the 3 mixtures, and the bottom four plots represent the source estimates.
Fig. 5. Simulated example (viewed in the time domain) for the subspace-based TF-CUBSS algorithm in the case of 4 speech sources and 3 sensors.
Fig. 6. Comparison between subspace-based and cluster-based TF-CUBSS algorithms: normalized MSE (NMSE) versus SNR for 4 speech sources and 3 sensors.
Fig. 7. NMSE versus SNR for 4 audio sources and 3 sensors in the convolutive mixture case: comparison of the identification algorithm using only SIMO systems with the algorithm using SIMO and MIMO systems.
Fig. 8. Comparison, for the cluster-based TF-CUBSS algorithm, when the mixing channel H is known or unknown: NMSE of the source estimates.
Fig. 9. Comparison, for the subspace-based TF-CUBSS algorithm, when the mixing channel H is known or unknown: NMSE of the source estimates.
Fig. 10. Comparison between subspace-based and cluster-based TF-CUBSS algorithms: NMSE of the source estimates for different ranks of the projection subspace, for the case of 5 sources and 4 sensors.
Figure 11 illustrates the rapid degradation of the separation quality when the number of sources is increased from N = 4 to N = 7. This confirms the remarks made in Section IV.
Fig. 11. Comparison between subspace-based and cluster-based TF-CUBSS algorithms: NMSE versus number of sources.
Fig. 12. NMSE versus SNR for 4 audio sources and 3 sensors: comparison, for the subspace-based TF-CUBSS algorithm, of different filter sizes K.
Fig. 13. ISR versus SNR for 2 audio sources and 2 sensors: comparison between the subspace-based TF-CUBSS algorithm, Parra's algorithm and Deville's algorithm.
In fact, the STFT does not represent an energy distribution of the signal in the TF plane. However, for simplicity, we still refer to it as a TFD.
For the identification of MIMO systems, we have used the subspace method [START_REF] Abed-Meraim | A subspace algorithm for certain blind identification problems[END_REF] for the equalization step, followed by the SOBI algorithm [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF] for the separation step.
01772863
en
[ "spi.signal" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-01772863/file/tf-ubss_subspace_tsp05_vf.pdf
Abdeldjalil Aïssa-El-Bey Nguyen Linh-Trung email: [email protected] Karim Abed-Meraim Adel Belouchrani email: [email protected] Yves Grenier email: [email protected] Underdetermined Blind Separation of Nondisjoint Sources in the Time-Frequency Domain Keywords: blind source separation, underdetermined/overcomplete representation, spatial time-frequency representation, vector clustering, subspace projection, speech signals, sparse signal decomposition/representation This paper considers the blind separation of nonstationary sources in the underdetermined case, when there are more sources than sensors. A general framework for this problem is to work on sources that are sparse in some signal representation domain. Recently, two methods have been proposed with respect to the time-frequency (TF) domain. The first uses quadratic timefrequency distributions (TFDs) and a clustering approach, and the second uses a linear TFD. Both of these methods assume that the sources are disjoint in the TF domain; i.e. there is at most one source present at a point in the TF domain. In this paper, we relax this assumption by allowing the sources to be TF-nondisjoint to a certain extent. In particular, the number of sources present at a point is strictly less than the number of sensors. The separation can still be achieved thanks to subspace projection that allows us to identify the sources present and to estimate their corresponding TFD values. In particular, we propose two subspace-based algorithms for TF-nondisjoint sources, one uses quadratic TFDs and the other a linear TFD. Another contribution of this paper is a new estimation procedure for the mixing matrix. Finally, then numerical performance of the proposed methods are provided highlighting their performance gain compared to existing ones. I. INTRODUCTION S OURCE SEPARATION aims at recovering multiple sources from multiple observations (mixtures) received by a set of linear sensors. The problem is said to be 'blind' when the observations have been linearly mixed by the transfer medium, while having no a priori knowledge of the transfer medium or the sources. Blind source separation (BSS) has applications in several areas, such as communication, speech/audio processing, and biomedical engineering [START_REF] Nandi | Blind estimation using higher-order statistics[END_REF]. A fundamental and necessary assumption of BSS is that the sources are statistically independent and thus are often sought solutions using higher-order statistical information [START_REF] Cardoso | Blind signal separation: statistical principles[END_REF]. If some information about the sources is available at hand, such as temporal coherency [START_REF] Belouchrani | A blind source separation technique using second-order statistics[END_REF], source nonstationarity [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF], or source cyclostationarity [START_REF] Abed-Meraim | Blind source separation using second order cyclostationary statistics[END_REF] then one can remain in the second-order statistical scenario. The BSS is said to be underdetermined if there are more sources than sensors. In that case, the mixing matrix is not invertible and, consequently, a solution for source estimation must also be found even if the mixing matrix has been estimated. 
A general framework for underdetermined blind source separation (UBSS) is to exploit the sparseness, if it exists, of the sources in a given signal representation domain [START_REF] Bofill | Underdetermined blind source separation using sparse representations[END_REF]. The mixtures are then transformed to this domain; one may then, estimate the transformed sources using their sparseness, and finally recover their time waveforms by source synthesis. For more information on BSS and UBSS methods, see for example a recent survey [START_REF] O'grady | Survey of sparse and nonsparse methods in source separation[END_REF]. Recently, several UBSS methods for nonstationary sources have been proposed, given that these sources are sparse in the time-frequency (TF) domain [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF]- [START_REF] Barkat | Algorithms for blind components separation and extraction from the time-frequency distribution of their mixture[END_REF]. The first method uses quadratic time-frequency distributions (TFDs), whereas the second one uses a linear TFD. The main assumption used in these methods is that the sources are TF-disjoint; in other words, there is at most one source present at any point in the TF domain. This assumption is rather restrictive, though the methods have also showed that they worked well under a quasi sparseness condition, i.e. sources are TF-almost-disjoint. In this paper, we want to relax the TF-disjoint condition by allowing the sources to be nondisjoint in the TF domain; that is, multiple sources are possibly present at any point in the TF domain. This case has been considered in [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF] (which corresponds to part of this work) and in [START_REF] Rickard | Desprit -histogram based blind source separation of more sources than sensors using subspace methods[END_REF] for the parametric mixing matrix case. In particular, we limit ourselves to the scenario where the number of sources present at any point is smaller than the number of sensors. Under this assumption, the separation of TF-nondisjoint sources is achieved thanks to subspace projection. Subspace projection allows us to identify at any point the sources present, and hence, to estimate the corresponding TFD values of these sources. The main contribution of this paper is proposing two subspace-based algorithms for UBSS in the TF domain; one uses quadratic TFDs while the other uses linear TFD. In line with the cluster-based quadratic algorithm proposed in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], we also propose here a cluster-based algorithm but using a linear TFD, which is not a block-based technique like the quadratic one. Therefore, its low cost computation is useful for processing speech and audio sources. Another contribution of the paper is a method of estimation for the mixing matrix. The paper is organized as follows. Section II-A formulates the UBSS problem, introduces the underlying TF tools, and states some TF conditions necessary for the separation of nonstationary sources in the TF domain. Section III deals with the TF-disjoint sources. 
It reviews the cluster-based quadratic TF-UBSS algorithm [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], and from that, proposes a cluster-based linear TF-UBSS algorithm. Section IV proposes two subspace-based TF-UBSS algorithms for TF-nondisjoint sources, using quadratic and linear TFDs. In this section, we propose also a method for the blind estimation of mixing matrix. There is some discussion of the proposed methods in Section V. The performance of the above methods are numerically evaluated in Section VI. II. PROBLEM FORMULATION A. Data model Let s 1 (t), . . . , s N (t) be the desired sources to be recovered from the instantaneous mixtures x 1 (t), . . . , x M (t) given by: x(t) = As(t), (1) where s(t) = [s 1 (t), . . . , s N (t)] T is the source vector with the superscript T denoting the transpose operation, x(t) = [x 1 (t), . . . , x M (t)] T is the mixture vector, and A = [a 1 , . . . , a N ] is the mixing matrix of size M ×N that satisfies: Assumption 1: The column vectors of A are pair-wise linearly independent. That is, for any index pair i, j ∈ N , where N = {1, . . . , N }, and i = j, we have a i and a j linearly independent. This assumption is necessary because if otherwise, we have a 1 = αa 2 for example, then the input/output relation (1) can be reduced to x(t) = [a 1 , a 3 , . . . , a N ] [s 1 (t) + αs 2 (t), s 3 (t), . . . , s N (t)] T , and hence the separation of s 1 (t) and s 2 (t) is inherently impossible. It is known that BSS is only possible up to some scaling and permutation. We take advantage of these indeterminacies to further assume, without loss of generality, that the column vectors of A all have unit norm, i.e. a i = 1 for all i ∈ N . The sources are nonstationary, that is their frequency spectra vary in time. Often, nonstationarity imposes more difficulties on a problem, however, in this case it actually offers a solution: one can solve the BSS problem without using higher-order approaches by directly exploiting the additional information of this TF diversity across the spectra; this solution was proposed in [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF]. We defer to a little later making TF assumptions on the sources, and for now we introduce the concept of TF signal processing. B. Time-frequency distributions TF signal processing provides effective tools for analyzing nonstationary signals, whose frequency content varies in time. This concept is a natural extension of both the time domain and the frequency domain processing that involve representing signals in a two-dimensional space the joint TF domain, hence providing a distribution of signal energy versus time and frequency simultaneously. For this reason, a TF representation is commonly referred to as a time-frequency distribution (TFD). The general class of quadratic TFDs of an analytic signal z(t) is defined as [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]: ρ zz (t, f ) ∞ -∞ e j2πν(u-t) Γ(ν, τ ) × z(u + τ 2 )z * (u - τ 2 ) e -j2πf τ dν du dτ, (2) where Γ(ν, τ ) is a two-dimensional function in the so-called ambiguity domain and is called the Doppler-lag kernel, and the superscript ( * ) denotes the conjugate operator. We can design a TFD with certain desired properties by properly constraining Γ. 
When Γ(ν, τ ) = 1 we have the following famous Wigner-Ville distribution (WVD): ρ wvd zz (t, f ) ∞ -∞ z(t + τ 2 )z * (t - τ 2 ) e -j2πf τ dτ. ( 3 ) The WVD is the most widely studied TFD. It achieves maximum energy concentration in the TF plane around the instantaneous frequency for linear frequency-modulated (LFM) signals. However, it is in general non-positive and it introduces the so-called "cross-terms" when multiple frequency laws (e.g. two LFM components) exist in the signals, due to the quadratic multiplication of shifted versions of the signals. Another well-known TFD and most used in practice is the short-time Fourier transform (STFT): S z (t, f ) ∞ -∞ z(τ )h(τ -t) e -j2πf τ dτ, (4) where h(t) is a window function. Note that the STFT is a linear TFD1 , and its quadratic version, called the spectrogram (SPEC), is defined as: ρ spec zz (t, f ) |S z (t, f )| 2 . ( 5 ) Clearly, from the definition, there is no cross-terms effect present in STFT, hence in the SPEC. However, these distributions have very low TF resolution in comparison with the WVD. The low cost of implementation for the STFT, hence for the SPEC, in comparison with that for the WVD and, together with the advantage of being free of cross-terms, justifies the fact that the STFT is most used in practice, especially for speech or audio signals. But when it comes to FM-like signals, the WVD is preferred. To combine the high resolution of the WVD while using the free cross-term property of the SPEC, the masked Wigner-Ville distribution (MWVD) is derived so that: ρ mwvd zz (t, f ) ρ wvd zz (t, f ) • ρ spec zz (t, f ). (6) There are many other useful TFDs in the literature, notably those that give high TF resolution while effectively minimizing the cross-terms, for example the B distribution [START_REF] Barkat | A high-resolution quadratic time-frequency distribution for multicomponent signal analysis[END_REF]. However, we only introduce here the TFDs above since they will be used in the later sections. C. TF conditions on sources Now, as we have introduced the concept of TF signal processing as a useful tool for analyzing nonstationary signals, some TF conditions need to be applied to the sources. Note that the TF method in [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF] does not work for UBSS because the mixing matrix is not invertible. In order to deal with UBSS, one often seeks for a sparse representation of the sources [START_REF] Bofill | Underdetermined blind source separation using sparse representations[END_REF]. In other words, if the sources can be sparsely represented in some domain, then the separation is to be carried out in that domain to exploit the sparseness. 1) TF-disjoint sources: Recently, there have been several UBSS methods, notably those in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] and [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF], in which the TF domain has been chosen to be the underlaying sparse domain. These two papers have based their solutions on the assumption that the sources are disjoint in the TF domain. Mathematically, if Ω 1 and Ω 2 are the TF supports of two sources s 1 (t) and s 2 (t) then Ω 1 ∩ Ω 2 = ∅. This condition can be illustrated in Figure 1. However, this is a rather strict assumption. 
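Before discussing how this assumption can be relaxed, the linear TFD of (4)-(5) can be made concrete. The sketch below is ours: it computes the STFT of each mixture channel with SciPy, where the Hann window and segment length are illustrative choices rather than values prescribed above; the spectrogram of (5) is then simply the squared magnitude, and the MWVD of (6) would multiply a WVD by that spectrogram pointwise.

```python
import numpy as np
from scipy.signal import stft

def mixture_stft(x, fs=8000, nperseg=1024):
    """STFT of each mixture channel, eq. (4); rows of x are the M sensor signals.

    Returns freqs (F,), frame times (L,), and the stacked STFTs S_x of shape
    (M, F, L). The spectrogram of (5) is np.abs(S_x)**2.
    """
    channels = []
    for channel in np.atleast_2d(x):
        freqs, times, Z = stft(channel, fs=fs, window='hann', nperseg=nperseg)
        channels.append(Z)
    return freqs, times, np.stack(channels, axis=0)
```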
A more practical assumption is that the sources are almost-disjoint in the TF domain [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF], allowing some small overlapping in the TF domain, for which the above two methods also worked. 2) TF-nondisjoint sources: In this paper, we want to relax the TF-disjoint condition by allowing the sources to be nondisjoint in the TF domain; as illustrated in Figure 2. Ω 2 Ω 1 f frequency t Fig. 2. TF nondisjoint condition: Ω 1 ∩ Ω 2 = ∅ This is motivated by a drawback of the method in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF]. Although this method worked well under the TF-almost-disjoint condition, it did not explicitly treat the TF regions where the sources were allowed to have some small overlapping. A point at the overlapping of two sources was assigned 'by chance' to belong to only one of the sources. As a result, the source that picks up this point will have some information of the other source while the latter loses some information of its own. The loss of information can be recovered to some extent by the interpolation at the intersection point using TF synthesis. However, for the other source, there is an interference at this point, hence the separation performance may degrade if no treatment is provided. If the number of overlapping points increases (i.e. the TF-almost-disjoint condition is violated), the performance of the separation is expected to degrade unless the overlapping points are treated. This paper will give such a treatment using subspace projection. Therefore, we will allow the sources to be nondisjoint in the TF domain; that is, multiple sources are allowed to be present at any point in the TF domain. However, instead of being inevitably nondisjoint, we limit ourselves by making the following constraint: Assumption 2: The number of sources that contribute their energy at any TF point is strictly less than the number of sensors. In other words, for the configuration of M sensors, there exist at most (M -1) sources at any point in the TF domain. For the special case when M = 2, Assumption 2 reduces to the disjoint condition. We also make another assumption on the TF conditioning of the sources. Assumption 3: For each source, there exists a region in the TF domain, where this source exists alone. Note that, this assumption is easily met and hence not restrictive for audio sources and FM-like signals. Also, it should be noted that this last assumption is, however, not a restriction on the use of subspace projection, because it will only be used later for the estimation of the mixing matrix. If otherwise, the mixing matrix can be obtained by another method, for example the one in [START_REF] Lathauwer | ICA techniques for more sources than sensors[END_REF], then Assumption 3 can be omitted. III. CLUSTER-BASED TF-UBSS APPROACH FOR DISJOINT SOURCES A. Quadratic TFD approach In this section, we review a method proposed in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] based on the idea of clustering; hence, it is now referred to as the cluster-based quadratic TF-UBSS algorithm. For a signal vector z(t) = [z 1 (t), . . . , z N (t)] T , the Spatial Time Frequency Distribution (STFD) matrix is given by [START_REF] Belouchrani | Blind source separation based on time-frequency signal representations[END_REF]: D zz (t, f )    ρ z 1 z 1 (t, f ) . . . ρ z 1 z N (t, f ) . 
. . . . . . . . ρ z N z 1 (t, f ) . . . ρ z N z N (t, f )    , (7) where, for i, j ∈ N , ρ z i z j (t, f ) is the quadratic cross-TFD between z i (t) and z j (t) as obtained by ( 2), but with the first z being replaced by z i and the second by z j . By definition, the STFD takes into account the spatial diversity. By applying the STFD defined in [START_REF] O'grady | Survey of sparse and nonsparse methods in source separation[END_REF] on both sides of the BSS model in [START_REF] Nandi | Blind estimation using higher-order statistics[END_REF], we obtain the following TF-transformed structure: D xx (t, f ) = AD ss (t, f )A H (8) where D ss (t, f ) and D xx (t, f ) are, respectively, the source STFD matrix and mixture STFD matrix. [START_REF] Barkat | Algorithms for blind components separation and extraction from the time-frequency distribution of their mixture[END_REF]; noise thresholding by [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF]. 2) Noise thresholding and auto-source point selection by [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF]. 3) Vector clustering by [START_REF] Rickard | Desprit -histogram based blind source separation of more sources than sensors using subspace methods[END_REF] and k-means algorithm; source TFD estimation by [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]. 4) Source TF synthesis by [START_REF] Boudreaux-Bartels | Time-varying filtering and signal estimation using Wigner distributions[END_REF]. Let us call an auto-source TF point a point at which there is a true energy contribution/concentration of source or sources in the TF domain, and a cross-source point a point at which there is a 'false' energy contribution (due to the cross-term effect of quadratic TFDs). Note that, at other points with no energy contribution, the TFD value is ideally equal to zero. Under the assumption that all sources are disjoint in the TF domain, there is only one source present at any auto-source point. Therefore, the structure of D xx (t, f ) is reduced to D xx (t a , f a ) = ρ s i s i (t a , f a ) a i a H i , ∀(t a , f a ) ∈ Ω i , (9) where Ω i denotes, hereafter, the TF support of source s i (t). The observation [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF] suggests that for all (t a , f a ) ∈ Ω i , the corresponding set of STFD matrices {D xx (t a , f a )} will have the same principal eigenvector a i . It is this observation that leads to the general separation method using quadratic TFDs in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF]. Indeed, [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] proposed several algorithms and pointed out that the choice of the TFD should be made carefully in order to have a 'clean' (cross-term free) TFD representation of the mixture, and chose the MWVD as a good candidate. This algorithm is summarized in Table I, and further detailed below for later use. 
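Before detailing those steps, the rank-one observation (9) can be illustrated directly: at a single-source auto-source point, the dominant eigenvector of the mixture STFD matrix recovers the corresponding mixing column up to a phase. The sketch below is ours; it assumes the STFD matrix is numerically Hermitian (otherwise the leading left singular vector should be used), and the ratio of the largest eigenvalue to the trace anticipates the single-source detector used later for the nondisjoint case.

```python
import numpy as np

def principal_direction(D_xx, eps=0.1):
    """Spatial direction from the STFD matrix at a single-source point, eq. (9).

    At such a point D_xx is (up to a real scalar) a_i a_i^H, so its dominant
    eigenvector gives the column a_i up to a phase, which is fixed by making
    the first entry real and positive.
    """
    eigval, eigvec = np.linalg.eigh(D_xx)            # ascending eigenvalues
    is_single_source = eigval[-1] / eigval.sum() > 1.0 - eps
    v = eigvec[:, -1]
    v = v * np.exp(-1j * np.angle(v[0]))             # first entry real and positive
    return v, is_single_source
```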
1) STFD mixture computation and noise thresholding: The STFD of the mixtures using the MWVD is computed by the following: D wvd xx (t, f ) k,l = ρ wvd x k x l (t, f ) (10a) D stft xx (t, f ) k,l = S x k (t, f ), for k = l, 0, otherwise, (10b) D mwvd xx (t, f ) = D wvd xx (t, f ) D stft xx (t, f ) 2 (10c) In [START_REF] Barkat | Algorithms for blind components separation and extraction from the time-frequency distribution of their mixture[END_REF], k, l ∈ N , and denotes the Hadamard product. 2) Noise thresholding and auto-source point selection: A 'noise thresholding' procedure is used to keep only those points having sufficient energy, i.e. auto-source points. One way to do this is: for each time-slice (t p , f ) of the TFD representation, apply the following criterion for all the frequency points f q belonging to this time-slice: If D mwvd xx (t p , f q ) max f { D mwvd xx (t p , f ) } > 1 , keep (t p , f q ), ( 11 ) where 1 is a small threshold (typically, 1 = 0.05). This 'hard thresholding' procedure has been preferred to the 'soft thresholding' using power-weighting of [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF] as it contributes also to reducing the computation complexity. The set of all the auto-source points is denoted by Ω. Since sources are TFdisjoint, we have Ω = N i=1 Ω i . This partition is found in the following way: 3) Vector clustering and source TFD estimation: For each point (t a , f a ) ∈ Ω, compute its corresponding spatial direction a(t a , f a ) a(t a , f a ) = diag D stft xx (t a , f a ) diag D stft xx (t a , f a ) , ( 12 ) and force it, without loss of generality, to have the first entry real and positive. Having the set of spatial direction {a(t a , f a )|(t a , f a ) ∈ Ω} one can cluster them into N classes using any unsupervised clustering algorithm (see [START_REF] Frank | The data analysis handbook[END_REF] for different clustering methods). The clustering algorithm used in [START_REF] Linh-Trung | Separating more sources than sensors using time-frequency distributions[END_REF] is rather sensitive due to the threshold in use; a robust method should be investigated, and this deserves another contribution. If the number of sources has been well estimated, one can use the so-called k-means clustering algorithm [START_REF] Frank | The data analysis handbook[END_REF] to achieve a good clustering performance. The output of the clustering algorithm is a set of N classes {C i |i ∈ N }. Also, the collection of all the points that correspond to all the vectors in the class C i forms the TF support Ω i of the source s i (t). Then, estimate the TFD of the source s i (t) (up to a scalar constant) as: ρwvd si (t, f ) = trace D wvd xx (t, f ) , (t, f ) ∈ Ω i , 0, otherwise. (13) 4) Source TF synthesis: Having obtained the source TFD estimate ρwvd s i (t, f ), the estimation of the source s i (t) can be done through a TF synthesis algorithm. The method in [START_REF] Boudreaux-Bartels | Time-varying filtering and signal estimation using Wigner distributions[END_REF] is used for TF synthesis from a WVD estimate, based on the following inversion property of the WVD [START_REF] Boashash | Time Frequency Signal Analysis and Processing: Method and Applications[END_REF]: x(t) = 1 x * (0) ∞ -∞ ρ wvd x ( t 2 , f ) e j2πf t df , which implies that the signal can be reconstructed to within a complex exponential constant e jα = x * (0)/|x(0)| given |x(0)| = 0. 
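The clustering and masking core of steps 2)-3) can be sketched in code. Note that (12) reduces to normalized mixture STFT vectors, since the diagonal of the D_stft matrices in (10b) holds the channel STFTs, so the same routine applies to the linear variant described next. The threshold value, the stacking of real and imaginary parts as k-means features, and all function names are our choices, not prescriptions of the algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_separation(S_x, n_sources, eps=0.05):
    """Clustering/masking core of the cluster-based algorithm (steps 2-3).

    S_x: complex array (M, F, L) of mixture STFTs. Keeps TF points passing a
    per-time-slice threshold as in (11)/(18), clusters their normalized
    spatial directions with k-means, and returns the estimated mixing
    columns (class centroids) and the masked source STFTs.
    """
    M, F, L = S_x.shape
    energy = np.linalg.norm(S_x, axis=0)                     # (F, L)
    keep = energy > eps * energy.max(axis=0, keepdims=True)  # threshold per time slice
    X = S_x.reshape(M, -1)[:, keep.ravel()]
    V = X / np.linalg.norm(X, axis=0)                        # spatial directions (12)/(19)
    V = V * np.exp(-1j * np.angle(V[0]))                     # first entry real, positive
    feats = np.concatenate([V.real, V.imag], axis=0).T       # k-means needs real features
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(feats)
    A_hat = np.stack([V[:, labels == i].mean(axis=1) for i in range(n_sources)], axis=1)
    A_hat = A_hat / np.linalg.norm(A_hat, axis=0)            # centroids, eq. (20)
    # masked source STFTs, eq. (21): S_hat_i = a_i^H S_x on its own class only
    S_hat = np.zeros((n_sources, F * L), dtype=complex)
    idx = np.flatnonzero(keep.ravel())
    for i in range(n_sources):
        pts = idx[labels == i]
        S_hat[i, pts] = A_hat[:, i].conj() @ S_x.reshape(M, -1)[:, pts]
    return A_hat, S_hat.reshape(n_sources, F, L)
```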
It can be observed that in this version of the quadratic TF-UBSS algorithm, the STFD matrices are not fully needed as only their diagonal entries are used in the algorithm. This should be taken into account to reduce the computational cost. B. Linear TFD approach As we have seen before, the STFT is often used for speech/audio signals because of its low computational cost. Therefore, in this section we briefly review the STFT method in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF], and propose simultaneously a cluster-based linear TF-UBSS algorithm using the STFT to avoid some of the drawbacks in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. First, under the transformation into the TF domain using the STFT, the model in (1) becomes: S x (t, f ) = AS s (t, f ), (14) where S x (t, f ) is the mixture STFT vector and S s (t, f ) is the source STFT vector. Under the assumption that all sources are disjoint in the TF domain, ( 14) is reduced to [START_REF] Frank | The data analysis handbook[END_REF]; noise thresholding by [START_REF] Griffin | Signal estimation from modified shorttime fourier transform[END_REF] 2) Vector clustering by [START_REF] Zibulevsky | Independent Component Analysis: Principles and Practice[END_REF] and [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF]. S x (t a , f a ) = a i S s i (t a , f a ), ∀(t a , f a ) ∈ Ω i , ∀i ∈ N . ( 15 ) 3) Source STFT estimation by (21). 4) Source TF synthesis by [START_REF] Griffin | Signal estimation from modified shorttime fourier transform[END_REF]. Now, in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF], the structure of the mixing matrix is particular in that it has only 2 rows (i.e. the method uses only 2 sensors) and the first row of the mixing matrix contains all 1. Then, ( 15) is expanded to S x 1 (t a , f a ) S x2 (t a , f a ) = 1 a 2,i S s i (t a , f a ), which results in a 2,i = S x2 (t a , f a ) S x1 (t a , f a ) . ( 16 ) Therefore, all the points for which the ratios on the right-hand side of ( 16) have the same value form the TF support Ω i of a single source, say s i (t). Then, the STFT estimate of s i (t) is computed by: Ŝs i (t, f ) = S x1 (t, f ), ∀(t, f ) ∈ Ω i , 0, otherwise. The source estimate ŝi (t) is then obtained by converting Ŝs i (t, f ) to the time domain using inverse STFT [START_REF] Griffin | Signal estimation from modified shorttime fourier transform[END_REF]. Note that, the extension of the UBSS method in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF] to more than two sensors is a difficult task. Second, the division on the right-hand side of ( 16) is prone to error if the denominator is close to zero. To avoid the above mentioned problems, we propose here a modified version of the previous method referred to as the cluster-based linear TF-UBSS algorithm. In particular, from the observation [START_REF] Lathauwer | ICA techniques for more sources than sensors[END_REF], we can deduce the separation algorithm as shown next, and summarized in Table II. 1) Mixture STFT computation and noise thresholding: Compute the STFT of the mixtures, S x (t, f ), by applying (4) for each of the mixture in x(t), as follows: S x i (t, f ) = ∞ -∞ x i (τ )h(τ -t)e -j2πf τ dτ, i = 1, . . . , M, (17a) S x (t, f ) = [S x 1 (t, f ), . . . , S x M (t, f )] T . 
( 17b ) Since the STFT is totally free of cross-terms, a point with a nonzero TFD value is ideally an auto-source point. Practically, we can select all auto-source points by only applying a noise thresholding procedure as that in the cluster-based quadratic TF-UBSS algorithm. In particular, for each time-slice (t p , f ) of the TFD representation, apply the following criterion for all the frequency points f k belonging to this time-slice If S x (t p , f k ) max f { S x (t p , f ) } > 2 , then keep (t p , f k ), ( 18 ) where 2 is a small threshold (typically, 2 = 0.05). Then, the set of all selected points, Ω, is expressed by Ω = N i=1 Ω i , where Ω i is the TF support of the source s i (t). Note that, the effects of spreading the noise energy while localizing the source energy in the time-frequency domain amounts to increasing the robustness of the proposed method with respect to noise. Hence, by equation ( 18) (or equation ( 11)), we would keep only time-frequency points where the signal energy is significant, the other time-frequency points are rejected, i.e. not further processed, since considered to represent noise contribution only. Also, due to the noise energy spreading, the contribution of the noise in the source time-frequency points is relatively, negligeable at least for moderate and high SNRs. 2) Vector clustering and source TFD estimation: The clustering procedure can be done in a similar manner as in the quadratic algorithm. First, we obtain the spatial direction vectors by: v(t a , f a ) = S x (t a , f a ) S x (t a , f a ) , (t a , f a ) ∈ Ω, (19) and force them, without loss of generality, to have the first entry real and positive. Next, we cluster these vectors into N classes {C i | i ∈ N }, using the k-means clustering algorithm. The collection of all points, whose vectors belong to the class C i , now forms the TF support Ω i of the source s i (t). Then, the column vector a i of A is estimated as the centroid of this set of vectors: âi = 1 |C i | (t,f )∈Ω i v(t, f ), (20) where |C i | is the number of vectors in this class. Therefore, we can estimate the STFT of each source s i (t) by: Ŝsi (t, f ) = âH i S x (t, f ), ∀ (t, f ) ∈ Ω i , 0, otherwise. (21) since, from (15), we have âH i S x (t, f ) = âH i a i S si (t, f ) ≈ S si (t, f ), ∀ (t, f ) ∈ Ω i . Note that the STFT is a particular form of wavelet transforms which have been used in [START_REF] Zibulevsky | Independent Component Analysis: Principles and Practice[END_REF] for the UBSS of image signals. IV. SUBSPACE-BASED TF-UBSS APPROACH FOR NONDISJOINT SOURCES We have seen the cluster-based TF-UBSS methods, using either quadratic TFDs such as the MWVD or linear TFDs such as the STFT, as summarized in Table I or Table II, respectively. These methods relied on the assumption that the sources were TF-disjoint, which has led to the enabling TFtransformed structures in [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF] or [START_REF] Lathauwer | ICA techniques for more sources than sensors[END_REF]. When the sources are nondisjoint in the TF domain, then these equations are no longer true. Under the TF-nondisjoint condition, stated in Assumption 2, we propose in this section two alternative methods, one for quadratic TFDs and the other for linear TFDs, for the UBSS problem using subspace projection. A. 
Subspace-based quadratic TF-UBSS algorithm Recall that the first two steps of the cluster-based quadratic TF-UBSS algorithm do not rely on the assumption of TFdisjoint sources (see Table I). Therefore, we can reuse these steps to obtain the set of auto-source points Ω. Now, under the TF-nondisjoint condition, consider an auto-source point (t b , f b ) ∈ Ω such that there are K sources, K < M , present at this point. Our goal is to identify the sources present at (t b , f b ) and to estimate the energy each of these sources contributes. Denote α 1 , . . . , α K ∈ N the indices of the sources present at (t b , f b ), and define the following: s = [s α1 (t), . . . , s α K (t)] T , (22a) Ã = [a α1 , . . . , a α K ]. (22b) Then, under Assumption 2, ( 8) is reduced to D wvd xx (t b , f b ) = ÃD ss (t b , f b ) ÃH , ( 23 ) Consequently, given that D ss is of full rank, we have Range {D xx (t b , f b )} = Range{ Ã}. ( 24 ) Let P be the orthogonal projection matrix onto the noise subspace of D wvd xx (t b , f b ). Then, from (24), we obtain: P = I -VV H , ( 25 ) and Pa i = 0, ∀ i ∈ {α 1 , . . . , α K } , Pa i = 0, ∀ i ∈ N \ {α 1 , . . . , α K } (26) In ( 25), V is the matrix formed by the K principal singular eigenvectors of D xx (t b , f b ). Assuming that A has been estimated by some method, the observation in (26) enables us to identify the indices α 1 , . . . , α K ; and hence, the sources present at (t b , f b ). In practice, to take into account the estimation noise, one can detect these indices by detecting the K smallest values from the set { Pa i | i ∈ N }, as mathematically expressed by: {α 1 , . . . , α K } = arg min K { Pa i | i ∈ N } , ( 27 ) where min K denotes the minimization to obtain the K smallest values. The TFD values of the K sources at (t b , f b ) are estimated as the diagonal elements of the following matrix: Dss (t b , f b ) ≈ Ã# D xx (t b , f b ) Ã# H , ( 28 ) where the superscript ( # ) is the Moore-Penrose's pseudoinversion operator. Here, we propose also an estimation method for A by using Assumption 3. This assumption states that, for each source s i (t), there exists a TF region R i where s i (t) exists alone. In other words, R i contains all the single-source auto-source points of s i (t). Therefore, we can reuse the observation (9) in the TF-disjoint case, but for some TF regions, as below: [START_REF] Barkat | Algorithms for blind components separation and extraction from the time-frequency distribution of their mixture[END_REF]. D xx (t, f ) = ρ s i s i (t, f )a i a H i , ∀(t, f ) ∈ R i , ∀i ∈ N . The union of these regions, R = N i=1 R i , is detected by the following: If λ max {D wvd xx (t, f )} trace{D wvd xx (t, f )} -1 < 3 , then (t, f ) ∈ R, ( 29 ) 2) Noise thresholding and auto-source point selection by [START_REF] Linh-Trung | Underdetermined blind source separation of non-disjoint nonstationary sources in time-frequency domain[END_REF]. 3) Single-source auto-source point selection by (29); mixing matrix estimation by ( 30) and (31) 4) For all auto-source points, perform subspace-based TFD estimation of sources by ( 25), ( 27) and (28) 5) Source TF synthesis by [START_REF] Boudreaux-Bartels | Time-varying filtering and signal estimation using Wigner distributions[END_REF]. where 3 is a small threshold value (typically, 3 ≤ 0.1) and λ max {D wvd xx (t, f )} denotes the maximum eigenvalue of D wvd xx (t, f ). Then, we can apply the same vector clustering procedure as in Section III-A.3 to estimate A. 
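In code, the subspace step (25)-(28) at a single auto-source point can be sketched as below, assuming the mixing matrix has already been estimated and that K is known or fixed; the use of an SVD to obtain the K principal vectors and the variable names are our choices.

```python
import numpy as np

def subspace_tfd_estimate(D_xx, A_hat, K):
    """Subspace step (25)-(28) at one auto-source TF point.

    D_xx: (M, M) mixture STFD matrix at the point; A_hat: (M, N) estimated
    mixing matrix; K < M: assumed number of overlapping sources. Returns the
    indices of the detected sources and their estimated TFD values.
    """
    U, _, _ = np.linalg.svd(D_xx)
    V = U[:, :K]                                    # K principal singular vectors
    P = np.eye(D_xx.shape[0]) - V @ V.conj().T      # noise-subspace projector, eq. (25)
    scores = np.linalg.norm(P @ A_hat, axis=0)      # ||P a_i|| for each column
    active = np.sort(np.argsort(scores)[:K])        # K smallest values, eq. (27)
    A_sub_pinv = np.linalg.pinv(A_hat[:, active])
    D_ss = A_sub_pinv @ D_xx @ A_sub_pinv.conj().T  # eq. (28)
    return active, np.real(np.diag(D_ss))           # diagonal = source TFD estimates
```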
In particular, we first obtain all the spatial direction vectors: a(t, f ) = diag D stft xx (t, f ) diag D stft xx (t, f ) , ∀(t, f ) ∈ R. ( 30 ) Next, we cluster these vectors into N classes {D i |i ∈ N } using the k-means clustering algorithm. The collection of all points, whose vectors belong to the class D i , now forms the TF region R i of the source s i (t). Finally, the column vectors A are estimated as the centroid vectors of these classes as: âi = 1 |D i | (t,f )∈R i a(t, f ), ∀i ∈ N (31) where D i is the number of points in R i . Table III gives a summary of the subspace-based quadratic TF-UBSS algorithm. B. Subspace-based linear TF-UBSS algorithm Similarly, we propose here a subspace-based linear TF-UBSS algorithm for TF-nondisjoint sources using STFT. We also use the first step of the cluster-based linear TF-UBSS algorithm (see Table II) to obtain all the auto-source points Ω. Under the TF-nondisjoint condition, consider an autosource point (t b , f b ) ∈ Ω at which there are K sources s α 1 (t), . . . , s α K (t) present, with K < M . Then, ( 8) is reduced to the following S x (t b , f b ) = ÃS s(t b , f b ), ∀(t b , f b ) ∈ Ω ( 32 ) where à and s are as previously defined in (22). Let Q represent the orthogonal projection matrix onto the noise subspace of Ã. Then, Q can be computed by: Q = I -Ã ÃH Ã -1 ÃH . ( 33 ) We have the following observation: Qa i = 0, i ∈ {α 1 , . . . , α K } Qa i = 0, i ∈ N \{α 1 , . . . , α K } . ( 34 ) If A has already been estimated by some method, then this observation gives us the criterion to detect the indices 20) and (37), and k-means algorithm. 4) For all auto-source points, perform subspace-based TFD estimation of sources by (33), ( 35) and (36). 5) Source TF synthesis by [START_REF] Griffin | Signal estimation from modified shorttime fourier transform[END_REF]. α 1 , . . . , α K ; and hence, the contributing sources at the autosource point (t b , f b ). In practice, to take into account noise, one detects the column vectors of à minimizing: {α 1 , . . . , α K } = arg min β 1 ,...,β K QS x (t, f ) | Ãβ (35) where Ãβ = [a β1 , . . . , a β K ]. Next, TFD values of the K sources at TF point (t, f ) are estimated by: Ŝs (t, f ) ≈ Ã# S x (t, f ). (36) Here we propose a method for estimating the mixing matrix A. This is performed by clustering all the spatial direction vectors in [START_REF] Zibulevsky | Independent Component Analysis: Principles and Practice[END_REF] as for the preview TF-UBSS algorithm. Then within each class C i we eliminate the far-located vectors from the centroid (in the simulation we estimate vectors v(t, f ) such that: v(t, f ) -âi > 0.8 max v(t,f )∈Ωi v(t, f ) -âi , (37) leading to a size-reduced class Ci . Essentially this is to keep the vectors corresponding to the TF region R i , which are ideally equal to the spatial direction a i of the considered source signal. Finally, the i th column vector of A is estimated as the centroid of Ci . Table IV provides a summary of the subspace projection based TF-UBSS algorithm using STFT. V. DISCUSSION We discuss here certain points relative to the proposed TF-UBSS algorithms and their applications. 1) Number of sources: The number of sources N is assumed known in the clustering method (k-means) that we have used. However, there exist clustering methods [START_REF] Frank | The data analysis handbook[END_REF] which perform the class estimation as well as the estimation of the number N . 
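A direct, brute-force reading of (33)-(36) at one TF point is sketched below: every subset of K columns of the estimated mixing matrix is tested and the one minimizing the projection residual (35) is retained. The exhaustive search over the C(N, K) subsets is only reasonable for a small number of sources; the function and variable names are ours.

```python
import numpy as np
from itertools import combinations

def subspace_stft_estimate(S_x_point, A_hat, K):
    """Subspace step (33)-(36) of the linear (STFT) variant at one TF point.

    S_x_point: length-M mixture STFT vector at the point; A_hat: (M, N)
    estimated mixing matrix; K < M: assumed number of contributing sources.
    """
    M, N = A_hat.shape
    best_subset, best_residual = None, np.inf
    for subset in combinations(range(N), K):
        A_sub = A_hat[:, list(subset)]
        Q = np.eye(M) - A_sub @ np.linalg.pinv(A_sub)   # noise projector, eq. (33)
        residual = np.linalg.norm(Q @ S_x_point)        # criterion (35)
        if residual < best_residual:
            best_subset, best_residual = subset, residual
    S_hat = np.linalg.pinv(A_hat[:, list(best_subset)]) @ S_x_point  # eq. (36)
    return best_subset, S_hat
```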
In our simulation, we have observed that most of the time the number of classes is overestimated, leading to poor source separation quality. Hence, robust estimation of the number of sources in the UBSS case remains a difficult open problem that deserves particular attention in future works. 2) Number of overlapping sources: In the subspace-based approach, we have to evaluate the number K of overlapping sources at a given TF point. This can be done by finding out the number of non-zero eigenvalues of D wvd xx (t, f ) using criteria such as Minimum Description Length (MDL) or Akaike Information Criterion (AIC) [START_REF] Wax | Detection of signals by information theoretic criteria[END_REF]. It is also possible to consider a fixed (maximum) value of K that is used for all auto-source TF points. Indeed, if the number of overlapping sources is less than K, we would estimate close-to-zero source STFT values. For example, if we assume K = 2 sources are present at a given TF point while only one source is effectively contributing, then we estimate one close-to-zero source STFT value. This approach increases slightly the estimation error of the source signals (especially at low SNRs) but has the advantage of simplicity compared to using information theoretic-based criterion. In our simulation, we did choose this solution with K = 2 or K = 3. 3) Quadratic versus linear TFDs: We have proposed two algorithms using quadratic and linear TFDs. The one using the quadratic TFD should be preferred when dealing with FM-like signals and for small or moderate sample sizes. For audio source separation often the case the sample size is large, and hence, to reduce the computational cost one should prefer the linear TFD based UBSS algorithm. Overall, the quadratic version performs slightly better than the linear one but costs much more in computations. 4) Separation quality versus number of sources: Although we are in the underdetermined case, the number of sources N should not exceed too much the number of sensors. Indeed, when N increases, the level of source interference increases, and hence, the source disjointness assumption is ill-satisfied. Moreover, for a large number of sources, the likelihood of having two sources closely spaced, i.e. such that the spatial directions a i and a j are 'close' to linear dependency, increases. In that case, vector clustering performance degrades significantly. In brief, sparseness and spatial separation are the two limiting factors against increasing the number of sources. Figure 8 illustrates the performance degradation of source separation versus the number of sources. VI. SIMULATION RESULTS A. Simulation results of subspace-based TF-UBSS algorithm using STFT In the simulations, we use a uniform linear array of M = 3 sensors. It receives signals from N = 4 independent speech sources in the far field from directions θ 1 = 15, θ 2 = 30, θ 3 = 45 and θ 4 = 75 degrees respectively. The sample size is T = 8192 samples. In Figure 3, the top four plots represent the TF representation of the original sources signal, the middle three plots represent the TF representation of the M mixture signals and the bottom four plots represent the TF representation of the estimate of sources by the subspacebased algorithm using STFT (Table IV). Figure 4 represents the same disposition of signals but in the time domain. In Figure 5, we compare the separation performance obtained by the subspace-based algorithm with K = 2 and the clusterbased algorithm (Table II). 
It is observed that subspace-based algorithm provides much better separation results than those obtained by the cluster-based algorithm. In the subspace-based method, one first needs to estimate the mixing matrix A. This is done by the cluster-based method presented previously. The plot in Figure 6 represents the normalized estimation error of A versus the SNR in dB. Clearly, the proposed estimation method of the mixing matrix provides satisfactory performance, while the plot in Figure 7 presents the separation performance when using the exact matrix A compared to that obtained with the proposed estimate Â. Figure 8 illustrates the rapid degradation of the separation quality when we increase the number of sources from N = 4 to N = 7. This confirms the remarks made in Section V. (a) S s 1 (t, f ) (b) S s 2 (t, f ) (c) S s 3 (t, f ) (d) S s 4 (t, f ) (e) Sx 1 (t, f ) (f) Sx 2 (t, f ) (g) Sx 3 (t, f ) (h) S ŝ1 (t, f ) (i) S ŝ2 (t, f ) (j) S ŝ3 (t, f ) (k) S ŝ4 (t, f ) In Figure 9, we compare the performance obtained with the subspace-based method for K = 2 and K = 3. In that experiment, we have used M = 4 sensors and N = 5 source signals. One can observe that, for high SNRs, the case of K = 3 leads to a better separation performance than for the case of K = 2. However, for low SNRs, a large value of K increases the estimation noise (as mentioned in Section V) and hence degrades the separation quality. B. Simulation results of subspace-based TF-UBSS algorithm using STFD In this simulation, we use a uniform linear array of M = 3 sensors with half wavelength spacing. It receives signals from N = 4 independent LFM sources, each has 256 samples, in the presence of additive Gaussian noise where the SNR=20 dB. We compare the cluster-based (Table I) and the proposed subspace-based (Table III) TF-UBSS algorithms. Fig- ures 10-(a,d,g,j) represent the TFDs (using WVD) of the four sources. Figures e,h,k) show the estimated source TFDs using the cluster-based algorithm, whereas Figures f,i,l) are those obtained by the subspace-based algorithm. From Figures 10-(b,e) we can see that the overlapping points between source s 1 (t) and source s 2 (t) were picked up by source s 2 (t) with the cluster-based algorithm. On the other hand, using the subspace-based algorithm, the intersection points have been redistributed to the two sources (Figure 10-(c,f)). In general, the overlapping points in the nondisjoint case have been explicitly treated. This provides a visual performance comparison. In Figure 11, we compare the statistical separation performance between the subspace-based algorithm and the clusterbased algorithm using STFD, evaluated over 1000 Monte-Carlo runs. One can also notice that the gain here is smaller than the one obtained previously for audio sources. This is due to the fact that the overlapping region of the considered signals is smaller. This result confirms the previous visual observation with respect to the performance gain in favor of our subspacebased method. VII. CONCLUSIONS This paper introduces new methods for the UBSS of TFnondisjoint nonstationary sources using time-frequency repre- sentations. The main advantages over the proposed separation algorithms are, first, a weaker assumption on the source 'sparseness', i.e. the sources are not necessarily TF-disjoint, and second, an explicit treatment of the overlapping points using subspace projection, leading to significant performance improvements. 
Simulation results illustrate the effectiveness of our algorithms in different scenarios compared to those existing in the literature.
Fig. 1. Source TF-disjoint condition: Ω_1 ∩ Ω_2 = ∅ (when Ω_1 ∩ Ω_2 ≈ ∅, sources are said to be TF-almost-disjoint).
Fig. 3. Simulated example (viewed in the TF domain) for the subspace-based TF-UBSS algorithm with STFT in the case of 4 speech sources and 3 sensors. The top four plots represent the original source signals, the middle three plots represent the 3 mixtures, and the bottom four plots represent the source estimates.
Fig. 4. Simulated example (viewed in the time domain) for the subspace-based TF-UBSS algorithm with STFT in the case of 4 speech sources and 3 sensors. The top four plots represent the original source signals, the middle three plots represent the 3 mixtures, and the bottom four plots represent the source estimates.
Fig. 5. Comparison between subspace-based and cluster-based TF-UBSS algorithms using STFT: normalized MSE (NMSE) versus SNR for 4 speech sources and 3 sensors.
Fig. 6. Mixing matrix estimation: normalized MSE versus SNR for 4 speech sources and 3 sensors.
Fig. 7. Comparison, for the subspace-based TF-UBSS algorithm using STFT, when the mixing matrix A is known or unknown: NMSE of the source estimates.
Fig. 8. Comparison between subspace-based and cluster-based TF-UBSS algorithms using STFT: NMSE versus number of sources.
Fig. 9. Comparison between subspace-based and cluster-based TF-UBSS algorithms using STFT: NMSE of the source estimates for different sizes of the projector, for the case of 5 sources and 4 sensors.
Fig. 10. Simulated example (viewed in the TF domain) for the subspace-based TF-UBSS algorithm with STFT in the case of 4 LFM sources and 3 sensors. From left to right, the figures respectively represent the original source TF signatures, the estimated source TF signatures using the cluster-based algorithm, and the estimated source TF signatures using the subspace-based algorithm.
Fig. 11. Comparison between subspace-based and cluster-based TF-UBSS algorithms using STFD: normalized MSE (NMSE) versus SNR for 4 LFM sources and 3 sensors.
TABLE I. Cluster-based quadratic TF-UBSS algorithm using STFD. 1) Mixture STFD computation by
In fact, the STFT does not represent an energy distribution of the signal in the TF plane. However, for simplicity, we still refer to it as a TFD.
Abdeldjalil Aïssa-El-Bey was born in Algiers, Algeria, in 1981. He received the State Engineering degree from École Nationale Polytechnique (ENP), Algiers, Algeria, in 2003, and the M.S. degree in signal processing from Supélec and Paris XI University, Orsay, France, in 2004. Currently he is working towards the Ph.D. degree at the Signal and Image Processing Department of École Nationale Supérieure des Télécommunications (ENST), Paris, France. His research interests are blind source separation, blind system identification and equalization, statistical signal processing, wireless communications and adaptive filtering.
01773015
en
[ "sdv.neu" ]
2024/03/05 22:32:18
2018
https://inserm.hal.science/medihal-01773015/file/Fiche_consentement_donnees_partagees.pdf
CONFIDENTIALITY OF THE DATA CONCERNING YOU
As part of the research involving human subjects in which [name of the sponsor] invites you to participate, your personal data will be processed in order to analyze the results of the research with respect to its objective. To this end, medical data concerning you and data relating to your lifestyle, as well as, insofar as these data are necessary for the research, your ethnic origins or data relating to your sexual life, will be transmitted to the research Sponsor or to persons or companies acting on its behalf, in France or abroad. These data will be identified by a code number and/or your initials or the first three letters of your name.
You may also access all of your medical data, directly or through a physician of your choice, pursuant to the provisions of Article L 1111-7 of the Code de la Santé Publique (French Public Health Code). These rights are exercised through the physician who follows you in the context of the research and who knows your identity.
We inform you that you will be registered in the national file of persons taking part in research involving human subjects provided for in Article L.1121-16 of the Code de la Santé Publique. You may verify with the minister responsible for health the accuracy of the data concerning you held in this file, and request the destruction of these data at the end of the period provided for by the Code de la Santé Publique. [to be specified as appropriate].
These data may also, under conditions ensuring their confidentiality, be transmitted to French or foreign health authorities and to other entities of [name of the Sponsor]. These data may also be used in subsequent research for scientific purposes; in the event of withdrawal of consent, and unless you specify otherwise, the data collected up to that date may still be used.
In accordance with the provisions of the French law on data processing, files and freedoms (loi Informatique et Libertés), the sponsor has filed a declaration with the Commission Nationale de l'Informatique et des Libertés (CNIL). You have a right of access and rectification. You also have the right to object to the transmission of data covered by professional secrecy that may be used and processed in the context of this research.
The data are stored [specify location and organization]. The sponsor will collect these data [specify the procedures].
01773092
en
[ "phys", "phys.astr", "phys.astr.co", "phys.hthe", "phys.hphe", "phys.nexp" ]
2024/03/05 22:32:18
2016
https://hal.science/hal-01773092/file/Highlights_and_Conclusions_CIAS2016.pdf
Héctor J. De Vega, Norma G. Sanchez email: [email protected], Sinziana Paduroiu email: [email protected], Peter L. Biermann email: [email protected]
Warm Dark Matter Astrophysics in Agreement with Observations and keV Sterile Neutrinos: Synthesis of Highlights and Conclusions of the Chalonge - de Vega Meudon Workshop 2016. In Memoriam Héctor de Vega.
Norma G. Sanchez (a), Sinziana Paduroiu (b), Peter L. Biermann (c,d,e,f) (a) LERMA, CNRS UMR 8112, Observatoire de Paris PSL, Sorbonne Universités UPMC Univ Paris 6, 61 Avenue de l'Observatoire, 75014 Paris, France
The mass distribution observed in galaxies is found to be incompatible with the CDM predictions, falsifies the baryonically fine-tuned CDM scenarios and leads to keV Warm Dark Matter (WDM). WDM is a hot topic in galaxies and cosmology and implies novelties in the astrophysical, cosmological, particle and nuclear physics context. WDM research is progressing fast because it essentially works, naturally reproducing the observations at all scales. A turning point has recently occurred in Dark Matter research: Warm Dark Matter emerged impressively over Cold Dark Matter (CDM) as the leading Dark Matter candidate. WDM naturally solves the problems of CDM and CDM + baryons. ΛWDM provides the same successful large-scale and CMB results as ΛCDM and agrees with the observations at the galactic and small scales as well. The Chalonge - de Vega Workshop 'Warm Dark Matter in Astrophysics in Agreement with Observations and keV Sterile Neutrinos' addressed the latest WDM achievements, including its distribution function and equation of state (the Eddington-like approach to galaxies), the quantum mechanical framework for galaxy structure reproducing in particular the observed galaxy cores and their sizes, and the properties of dwarf galaxies. This workshop summary puts together astrophysical, cosmological, particle and nuclear WDM, astronomical observations, theory, and the WDM analytical and numerical frameworks which reproduce the observations. The Workshop addressed the exciting ongoing theoretical and experimental developments in the search for the leading WDM particle candidate: keV sterile neutrinos. The recent impact of WDM astrophysics, its signatures and constraints from high-redshift galaxies, clusters, cosmic recombination, the 21 cm line, with or for the JWST, HST, SKA, X-ray astronomy and gravitational lensing, was presented. News from KATRIN, ECHo, and ASTRO-H was presented by members of the respective collaborations. Peter L. Biermann, Isabella P. Carucci, Pier-Stefano Corasaniti, Loredana Gastaldo, Anton Huber, Daniel Maier, Nicola Menci, Eloisa Menegoni, Sinziana Paduroiu, Paolo Salucci, Norma G. Sanchez and Matthieu Vivier presented their lectures. A discussion session on the present and future of Dark Matter research and galaxies allowed new inputs, an overall vision, and working plans: "Où va la Science ?". The Héctor de Vega medal, in honor of Héctor de Vega, was introduced and awarded. This Workshop is the seventh of a Chalonge series in Meudon, started with Héctor J. de Vega, dedicated to Warm Dark Matter; it is now the Chalonge - de Vega series. The first WDM Workshop of this series (June 2010) made it possible to identify and understand the serious problems faced by Cold Dark Matter (CDM) and CDM + baryons in reproducing the galactic observations.
The 2010 and 2011 Workshops served also to verify and better understand the endless confusion situation in the CDM research, namely the increasing number of cyclic arguments, and ad-hoc mechanisms and recipes introduced in the CDM + baryon galaxy simulations over most of thirty years, in trying to deal with the CDM + baryons small scale crisis: cusped profiles and overabundance of substructures (too many satellites) are predicted by CDM. In contrast, cored profiles and not so overabundant substructures are seen in astronomical observations. The so many galaxy formation and evolution models of CDM + baryons are plagued with ever increasing tailoring or fine-tuning and recipes. Such a type of circular, "never-ending" -increasing and sustained -confusion over the years takes today such research out of the science context. Students and first comers in the subject ask the question: Why then does such research continue ? Why are such WIMP experiments still planned for the future again and again... The answer is not within a pure scientific context. But the real WIMP scientific research is in decline. On the CDM particle physics side, the situation is no less critical. So far, all the dedicated experimental searches after more than thirty years to find the theoretically proposed CDM particle candidate (WIMP) have repeatedly failed. The CDM indirect searches (invoking CDM annihilation) to explain cosmic ray positron excesses, are in crisis as well, as WIMP annihilation models are plagued with growing tailoring or fine tuning, and in any case, such cosmic rays excesses are well explained and reproduced by natural astrophysical process and sources. The so-called and continuously invoked 'wimp miracle' is nothing but being able to solve one equation with three unknowns (mass, decoupling temperature, and annihilation cross section) within WIMP models theoretically motivated by the SUSY model built twenty five years ago when such models were fashionable. After more than thirty-five years, and as often in big-sized science, CDM research (CDM+ baryon simulations, direct and indirect WIMP experimental research and model building) has by now its own internal inertia and own organized community, without reproducing the astronomical observations and failing to provide any experimental signal of WIMPs (except signals compatible with experimental noise). Growing CDM + baryon galaxy simulations involve ever increasing large super-computers and large number of people; CDM particle WIMP search involved (and involves) large and long-time planned experiments, huge number of people and huge budgets. One should not to be surprised then if a strategic scientific change has not yet operated in the CDM + baryon research and in the WIMP research, given the way in which the organization operates, although their real scientific situation is of decline. The New Dark Matter Situation Today and WDM State-Of-The-Art Warm Dark Matter (WDM) research is progressing fast, the subject is new and WDM essentially works, naturally reproducing the astronomical observations over all scales: small (galactic) and large (cosmological) scales (ΛWDM). Astronomical evidence that Cold Dark Matter (ΛCDM) and its proposed tailored baryonic cures do not work at small scales is staggering. ΛWDM is a more complete, correct and general theory than ΛCDM, it contains CDM as a limiting case (in the limit of the high mass of the particle), reproduces ΛCDM at large scales and solves all the known problems of CDM at small and intermediate scales. A. 
The fermionic quantum pressure of WDM ensures the observed small scale structures as the cores of galaxies and their right sizes (including the dwarf galaxies). The Thomas-Fermi Theory naturally takes it into account and produces the correct cored density profiles and their correct sizes. N-body simulations in classical (non-quantum) physics present in the literature do not take into account the fermionic WDM quantum pressure and produce unreliable results at small scales: That is the reason of the "too small core size" problem in classical (non-quantum) N-body WDM simulations present in the literature and the similar dwarf galaxies problem. The WDM simulations to address the core density profiles of the right core size must be quantum simulations or take into account in some effective way the quantum WDM pressure. B. Two observed quantities crucially constrain the DM nature in an inescapable way independently of the particle physics model: the average DM density ρ and the phase space density Q. The observed values of ρ and Q in galaxies today robustly point to a keV scale DM particle (WDM) and exclude CDM as well as axion Bose-Einstein condensate DM. C. Lyman alpha bounds on the WDM particle mass apply to specific sterile neutrino models and many sterile neutrino models are available today for which the Lyman alpha bounds are unknown. Therefore, WDM cannot be disfavored in general on the grounds of the Lyman alpha bounds only valid for specific models, as erroneously stated and propagated in the literature. Also, Lyman alpha bounds on the WDM particle mass depends on astrophysical uncertainties and the baryonic modelling. Astrophysical constraints put the sterile neutrino mass m in the range 1 < m < 10 keV. For a dark matter particle decoupling at thermal equilibrium (thermal relic), all evidences point to a 2 keV particle. Remarkably enough, sterile neutrinos decouple out of thermal equilibrium with a primordial power spectrum similar to a 2 keV thermal relic when the sterile neutrino mass is about 7 keV, and therefore, WDM can be formed by 7 keV sterile neutrinos. KATRINextensions, ECHo and others experiments could detect such keV sterile neutrinos. It will be a fantastic discovery to detect dark matter in beta decay or in electron capture. Exciting WDM work to perform is ahead of us. This Workshop addresses the last and fast steps of progress made in Warm Dark Matter Galaxies in Agreement with Observations. In the tradition of the Chalonge -de Vega School, an effort of clarification and synthesis is made by combining in a conceptual framework, theory, analytical, observational and numerical simulation results. The subject is approached in a fourfold way: (I) Conceptual context: Dark Matter in cosmology and astrophysics: perspective and prospective of the research in the subject: Theory and observations. The emergence of Warm (keV scale) Dark Matter from theory and observations. (II) Astronomical observations: galaxy structural properties, the universal and non universal properties of galaxies, high quality rotation curves, kinematics, density profiles, gravitational lensing, small and large structures, deep surveys, clusters of galaxies. (III) Computational framework with the equations of physics. Analytical and numerical frameworks. The new important physical ingredient in galaxy structure: quantum mechanics. Classical (non quantum) numerical simulations with Warm Dark Matter and resulting structures. Results versus observations. 
(IV) Experiments and detection: experimental constraints on the DM particle, detection techniques, status of present experiments and results, experiments in development and future prospects. Topics Covered Included: • Astrophysical and cosmological observational signatures of Warm Dark Matter, sterile neutrinos and their experimental search. • Warm (keV scale) dark matter N-body simulations in agreement with observations at large and intermediate scales. • The phase-space density of dark matter. • Particle model independent analysis of astrophysical dark matter. • Baryonic model independent analysis of astrophysical dark matter. • The radial profiles and the Dark Matter distribution; observed galactic cored DM profiles. • The keV scale Dark Matter (Warm Dark Matter): Observational and theoretical progress. • Large and small scale structure formation in agreement with observations at small galactic and at large scales. • The serious dark matter candidate: Sterile neutrinos at the keV scale. • Active and sterile neutrinos mass bounds from cosmological data, from astrophysical and X-ray data and from high precision beta decay experiments. • News on neutrinos and eV scale sterile neutrinos. News from reactor and accelerator experiments on neutrinos and their science implications. • Signatures and constraints on Warm Dark Matter scenarios from Reionization, 21-cm line, First Galaxies. • The impact of the mass of the dark matter particle on the small scale structure formation and the choice of the initial conditions. • The Thomas-Fermi framework to describe the structure and physical states of galaxies in agreement with observations. • The Eddington like framework to obtain the DM distribution function and the equation of state of galaxies. • Universal and non-universal profiles. Cored density profiles with WDM core sizes in agreement with observations. • Supermassive Black Holes : Theory and Observations. Ecole Internationale Daniel Conceptual Context and Theoretical Considerations Dark Matter is the dominant component of galaxies and is an essential ingredient in understanding the formation of galaxies and their properties. WDM thermal particles with a thermal Fermi-Dirac distribution and a free streaming length corresponding to a mass in the few keV range explain the cosmological structures at all scales: large, intermediate and small scales. The large scale structure is reproduced by both CDM and WDM in the keV scale simulations, in agreement with the CMB observations. In addition, the free streaming length of WDM particles suppresses the power at small scales, thus preventing a high number of small structures to form -in agreement with observations. At small scales, at high densities, in the inner halo regions and for the smallest galaxies, the quantum properties of the fermionic WDM become important and quantum calculations (Thomas-Fermi theory of de Vega-Sanchez) give galaxy core sizes, galaxy surface density, phase space density, scaling mass-radius relations, in agreement with observations. Dwarf galaxies, which are dark matter dominated, are supported against gravity by the fermionic quantum pressure of WDM. Depending on the particle physics model, the mechanism of production, the temperature at decoupling, there are several hypothesized WDM particles, with free streaming length/velocity dispersions corresponding to different masses. 
In order to give a precise constraint on the WDM particle mass, one needs to distinguish between these models and use the appropriated conversion factor between these masses computed corresponding to the different primordial power spectra. The keV scale is the common meeting point for all WDM particles in general, that means particle masses between 2 and 10 keV. The thermal relic mass is around 2 keV corresponding to the reference or minimal WDM particle mass. Connecting the physics of warm dark matter particle candidates with some observations, sterile neutrinos may explain the presence of early supermassive black holes distributed along well-contoured semicircles -arcs and early star formation. Talks by: Norma Sanchez, Peter L. Biermann, Sinziana Paduroiu Observations While both ΛWDM and ΛCDM agree with the CMB data and the large scale structure (LSS), only ΛWDM agrees with the small scale structure (SSS), the scale of galaxies. Several properties of galaxies have been discussed and explained in the context of keV WDM models. The WDM abundance of structures agrees with observations. The mass distribution observed in galaxies is found to be incompatible with the CDM predictions, falsifies the baryonically fine-tuned CDM scenarios and leads to WDM. Rotation curves from 3200 galaxies show the presence of cores, not cusps. Using data from I-band photometry and from HI observations, the rotation curves are fitted by the Universal Rotation Curve (URC). For the first time, data on recent observations of disk dwarfs properties from a sample of 36 objects have been presented: they are another indication of WDM fitted by a generalized URC. Fermions always provide a non-vanishing pressure of quantum nature. Using the Thomas-Fermi approach, the theoretical rotation curves and density profiles reproduce very well the observational curves for galaxy masses from 5 × 10 9 to 10 11 M . Thus, gravity and quantum physics -Newton, Fermi and Dirac 'meet' together in galaxies via keV WDM. This is consistent with the expectations; since the fraction of dark matter over the total mass of galaxies varies from 95% for large dilute galaxies to 99.99% for dwarf compact galaxies, while the baryon fraction can only reach up to 5% in large galaxies. New results based on the Eddington like approach extended to galaxies show robustly that the self-gravitating DM can thermalize in the inner halo region, despite of being collisionless, due to the gravitational interaction between DM particles, which is important in the inner region. In the outer region, particles are too dilute to thermalize, even if they are virialized. Indeed, the local temperature in the outer region is lower than in the inner region of a halo, where thermalization is achieved. Thermalization has been also linked to ergodicity, the self-gravitating DM gas is an ergodic system. More constraints on fundamental physics are coming from CMB and galaxy clustering. Planck data improve the constraints on the fine structure constant with respect to those from WMAP-9 by a factor of 5. Analysis of the Planck data limits the variation in the fine structure constant from redshift z=1000 to present day to be less than approximately 0.4%. Dark Energy could be zero at recombination for all we know. Tighter constraints on the fine structure constant and the dark energy density are expected from future experiments, like the Euclid mission. 
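As a concrete illustration of the conversion factors mentioned above (an editorial sketch, not part of the workshop material), the live minutes later quote, as producing equivalent primordial power spectra, a 2.5 keV thermal Fermi-Dirac relic, a 9.67 keV Dodelson-Widrow sterile neutrino, a 6.38 keV Shi-Fuller sterile neutrino, and a 4.75 keV νMSM particle; the ratios to the thermal mass give rough model-to-model conversion factors. Note that the mapping is mass dependent, so these ratios are indicative only.

# Equivalent masses (keV) quoted in the live minutes for the same primordial
# power spectrum; ratios to the thermal-relic mass give rough conversion factors.
thermal_keV = 2.5
equivalents = {"Dodelson-Widrow": 9.67, "Shi-Fuller": 6.38, "nuMSM": 4.75}
for model, mass_keV in equivalents.items():
    print(f"{model}: m = {mass_keV} keV, factor vs thermal ~ {mass_keV / thermal_keV:.2f}")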
Talks by: Paolo Salucci, Norma Sanchez, Eloisa Menegoni Simulations, Modelling and WDM constraints No CDM simulations can produce pure big disk galaxies; there is no way not to have a bulge in mergers; dwarf population is very different in CDM vs WDM. Simulations of ΛWDM show distinctive features that can be compared with observations. The free-streaming length of the WDM particles, which gives a suppression of the power on small scales (cut-off in the power spectrum) and a velocity dispersion, has as an immediate obvious effect a lower number of small structures -satellites. Exploring a wide range of particle velocities, one can see the mechanism of structure formation is a hybrid mechanism of top-down and bottom-up at different scales. While this is obvious with the present resolution in a regime otherwise excluded by different observations (hundreds of eV), to a certain extent this phenomenology is describing the structure formation in the whole regime of warm dark matter. High resolution warm dark matter halos exhibit caustics and shells, which are permanent structures in real space and phase space. As expected, their size and density depends on the velocity of the particle. Some technical difficulties are encountered when simulating WDM particles. The artificial mass segregation, which results in the formation of spurious halos is hard to overcome. However, using several structural properties of halos like the spin (spurious halos have large spin) and halos dynamical state -virialization, helps to distinguish the spurious halos and to eliminate them from the studied samples and mass functions. In WDM the star formation rate appears higher as compared to CDM, at 10 9 M it could be a factor of 100 which depends on the DM particle mass, consistent too with results by Biermann & Kusenko (2006). Using hydrodynamical simulations in the few keV range, the impact of WDM on the 21cm intensity mapping in the post-ionization era (z=3-5) has been investigated. For a 3 -4 keV WDM thermal mass, a 20 -40 % suppression of low mass halos of order 10 9 M is found (including photo-ionization, self-shielding and molecular Hydrogen). This implies an increase of power in HI and hence, the 21cm power spectra, testable with the SKA forecasts. WDM Semi-analytic models are less expensive tools. A Monte Carlo realization of collapse and merging histories links the physics of baryons to the DM halos through scaling laws and allows a fast spanning of parameter space and even though they do not contain spatial information, they confirm general constraints on the properties of WDM. The tightest constraints to date on the WDM particle masses independent of the baryonic physics come from the abundance of ultra-faint lensed galaxies of z = 6 galaxies using the recently measured UV luminosity functions in the Hubble Frontier Fields, which yield: m > 2.1 keV at 3 σ, m > 2.4 keV at 2 σ (thermal). This sets m sterile > 6.1 keV for Shi-Fuller model and firmly rules out Dodelson-Widrow mechanism (in the case of the 3.5 keV line). Talks by: Sinziana Paduroiu, Pier-Stefano Corasaniti, Isabella P. Carucci, Nicola Menci Detection and Experiments Members of different collaborations have presented current experiments, the setups, results from analyzing the data and future prospects. 
For the eV sterile neutrino range, an experimental overview has emphasized the uncertainties and anomalies in results from LSND and MiniBooNE (FNAL) experiments, reactor neutrinos, and the Gallium anomaly, and the solutions proposed for future neutrino experiments. In the present framework, there is no full consistency in fitting all anomalies, but they are suggestive for sterile neutrinos. The 163 Ho Electron Capture experiment (ECHo) is designed to investigate the electron neutrino mass in the sub-eV range, giving an upper limit of 10 eV, by analyzing the calorimetrically measured energy spectrum following the electron capture process of 163 Ho. In investigating the existence of keV sterile neutrinos, the presence of resonances complicates the analysis, but preliminary tests on how the keV sterile neutrino would affect the electron capture spectrum have been done and presented, electrons and photons of the excited state of the isotope are observed. Other isotopes are also proposed to study. The main goal of the KATRIN experiment is to measure the neutrino mass, but the setup can be modified to detect the imprint of a keV scale sterile neutrino. Two planned measurements will be performed, the first one in 2017. A new detector system, TRISTAN is currently in development. Three years of KATRIN will get close to the DM limit in a m s -sin 2 θ diagram; systematics are critical and recent developments were presented. The ASTRO-H International X-ray satellite was designed to acquire data for new insights on large scale structure, matter in strong gravitational fields, cosmic rays acceleration and dark matter. Simulations demonstrate that ASTRO-H could have been very good. After it was launched this year successfully, the satellite broke down and it cannot be recovered. Still, it was possible for the satellite to collect 38 days of data (instead of 3 years) on the Perseus cluster. These data will be analyzed and will be made available. Plans to take profit of the capabilities achieved in the Astro-H instruments should be pursued, in particular, for the search of the potential keV sterile neutrino decay emission lines. Talks by: Mathieu Vivier, Loredana Gastaldo, Anton Huber, Daniel Maier IV. CONCLUSIONS The evidence for keV dark matter particles, commonly referred to as Warm Dark Matter (WDM) is originally derived from galaxies; and galaxies still provide the strongest quantitative argument (Norma Sanchez). The structure of small disk galaxies (Paolo Salucci) and the number counts of small galaxies at very high redshifts (Nicola Menci) provide strong constraints now: The thermal equivalent of the WDM particle is between about 2 and 3 keV, and correspondingly higher for, e.g., the Shi-Fuller mechanism (a factor of about 2.5 higher). Other evidence does show evidence for very early strong star formation, supporting the predictions (Peter L. Biermann) made for the effect of sterile neutrinos (Pier Stefano Corasaniti). Cosmological simulations now give predictions (Sinziana Paduroiu), that show caustic structure possibly making observation tests for X-rays challenging. Other simulations on what neutral Hydrogen observations with high spatial resolution might see were also explored (Isabella Paola Carucci), and the prediction is that WDM and CDM universes would look very different. The hopes for the ASTRO-H satellite X-ray mission were severely curtailed, as just a month of data rather than three years are available now (Daniel Maier). 
The existing MWBG fluctuation data (Planck) permit the parameters of fundamental constants of nature such as the fine-structure constant to be strongly constrained (Eloisa Menegoni). Many of the existing neutrino experiments suggest anomalies (M. Vivier for Thierry Lasserre) are present, but do not lend themselves to an easy solution. Future dedicated experiments such as ECHo (Loredana Gastaldo) and KATRIN (Anton Huber) may allow the parameter range for a postulated sterile neutrino to be seriously restricted, but this is expected to take many years of work. In conclusion, the strongest push comes from observations of galaxies at high redshift, and predictions of what we might detect at yet higher redshifts; the allowed range of the WDM particle mass is more restricted than ever before. Sessions lasted for three full days in the beautiful Meudon campus of Observatoire de Paris, where CIAS 'Centre International d'Ateliers Scientifiques' is located. All sessions took place in the historic Chateau building, (built in 1706 by the great architect Jules-Hardouin Mansart in orders by King Louis XIV for his son the Grand Dauphin). The meeting was open to all scientists interested in the subject. All sessions were plenary followed by discussions. The format of the Meeting was intended to allow easy and fruitful mutual contact and communication. Large time was devoted to discussions. All informations about the meeting are displayed at: June 15: Norma Sanchez greets everybody, comments on the strikes that make it difficult for people to get here. PLB: comments after my talk from Paolo Salucci on the LF of QSOs, and from Norma Sanchez about growing BHs from DM directly (as we did with Faustin Munyaneza); also a comment by Pier Stefano Corasaniti, saying that there is evidence observationally about very early star formation in the universe; argument about activity rate of active SMBHs; Norma Sanchez prefers the direct growth of SMBHs; however, the agglomeration of massive stars requires near-zero heavy element abundance, so can be tested. http:// Norma Sanchez (results obtained with Héctor de Vega before 2015): DM particles freeze out at decoupling, about T d 100 GeV; defining keV scale as between 1 and 10 keV; other arguments show that the thermal equivalent mass must be between 2 and 4 keV; structures in the Universum such as galaxies and clusters of galaxies grow out of small primordial quantum fluctuations; WDM cuts the fluctuation spectrum at 73 kpc(keV/m s ) 1.45 ; defines a transfer fct going from CDM to WDM; e.g., thermal FD particle 2.5 keV, Dodelson-Widrow 9.67 keV, Shi-Fuller 6.38 keV, and νMSM 4.75 keV; all analogous; for small scales quantum effects are necessary to explain galaxies; she repeats that compact dwarf galaxies are quantum objects for WDM; Q = ρ/v 3 , phase density; thermal relic mass limited to about 4 keV, from halo mass versus galaxy halo radius; also phase density Q versus galaxy halo mass; mentions the universal rotation curve by Paolo Salucci (URC) with proper scaling "collapses" into a common curve for r/r h ∼ 2; uses also the Burkert profile; mentions 1401.0726 Héctor de Vega + Norma Sanchez, and1401.1214;PRD 77, 043518 (2008); suggests that simulations will be necessary for Q/m 4 < 0.1. Sinziana Paduroiu: CDM versus WDM: she emphasizes that no CDM simulations can produce pure big disk galaxies; there is no way not to have a bulge in mergers; dwarf population very different in CDM vs WDM; mentions Bode, Turok, and Ostriker 2001;Viel et al. 
2005; most WDM simulations just cut the power-spectrum, and do not worry about initial velocities; runs through the argument that the Bode et al. case to correspond to 1000 degrees of freedom; discusses the starting conditions with thermal or non-thermal velocities; she used 3 keV, matching the latest arguments by Norma Sanchez and Nicola Menci; shows some movies of structure formation in WDM, illustrating both top-down and bottom-up mechanisms; she keeps mentioning the quantum pressure for compact galaxies; mentions the dependencies of phase density Q on the parameters in the simulations; in the discussion Paolo Salucci mentions that in M87 they found a core radius of about 150 kpc; S.P. says, that kind of thing is difficult in DM only N-body simulations; P.S. says that M87 has about 10 14 M ; mentions that someone else has determined a cored distribution of the globular cluster system around M87. Matthieu Vivier (prepared with Thierry Lasserre): Sterile neutrino experiments and the keV case; runs through the standard case for neutrino physics, including the recent Nobel prizes; then focuses on neutrino anomalies, or "new" oscillations? i) LSND anomaly, anomaly on the electron anti-neutrino rate; anti-electron-neutrino excess detected; one interpretation is a 4th neutrino state; one option is to extend the standard model by another one or two more neutrinos; they would have to be sterile neutrinos; M. Shaposhnikov 2005 Phys.L.B. 620, 15, model to explain all properties; one neutrino is then around 10 keV, and two others with GeV; ii) MiniBooNe (FNAL) anomaly: they run in both neutrino and anti-neutrino mode: MiniBooNe was not conclusive checking the LSND anomaly; they found at lower energy an excess; iii) reactor neutrinos: found more neutrinos than expected after improved predictions; iv) the Gallium neutrino anomaly: deficit observed; also suggests a new sterile state; no-oscillation hypothesis disfavored at > 99.9 % C.L. (> 3 sigma level); describes new reactor experiments as well as experiments where the neutrino sources are brought to detectors; most recent results from NUCIFER are all consistent with predictions; summary: no full consistency in fitting all anomalies, and consistency experiments. Anton Huber: KATRIN experiment; extension to measuring the keV-scale sterile neutrino; three years of KATRIN will get close to the DM limit in a m s -sin 2 θ diagram; however, systematics are critical; he cites Norma Sanchez, who argues that a phase space limit is m s > 1.86 keV; then the X-ray limits versus the assumption, that all DM is just one particle, the sterile neutrino, suggests about 3 keV; allowing for all systematics allows a limit of 5 10 -7 in sin 2 θ at the "best" case mass of about 10 keV; several papers by S. Mertens et al.; one systematic problem is back-scattered energetic electrons in keV range; another is due to very high energy electrons (enough energy to just pass through); results in reflection and trapping of electrons; solved via a regular external magnetic field; detector problems, backscattering, charge sharing, pile-up (work by Kai Dolde); two measurements, one 7 days pre-KATRIN, and then in five years, a post-KATRIN stage; he says that KATRIN will determine the neutrino mass if it is 300 meV, and get an upper limit if it is at or below 200 meV. 
Jun 16: Eloisa Menegoni (Roma): Constraints on fundamental physics from CMB data and galaxy clustering: starts with discovery of MBWBG; explains the derivation of C l ; shows the success of the simple model fit to the Planck data; primary anisotropies are due to i) gravity (Sachs-Wolfe-effect), ii) adiabatic density perturbations, and iii) Doppler effect, from velocity perturbations; the resulting visibility function is connected to the fine-structure constant α; rate of scattering τ = n e σ T c ...; recombination; see Avelino et al. 2001 PRD 64, 103505; changing the fine structure constant changes the redshift of recombination pretty significantly; there is "cosmic degeneracy" between the Hubble constant and the fine structure constant; Menegoni et al. 2009 PRD 80, 08/302, 0909.3584; using also HST data reduces the allowed error range in a plot of H 0 versus α; Planck gave a very much reduced error budget as compared to WMAP9, so the data confirm that α is at its canonical value; also degeneracy between the fine structure constant and the equation of state of DE; using CMB + HST + SN-Ia, then both α and w are reasonable, α/alpha 0 = 0.996 ± 0.009, and w very close to -1; Calabrese, Menegoni et al. 2011 PRD 84, 023518; find no insight really from early DE models; Euclid will help, to be launched 2020; 0501174 Cole et al.; then talks about clustering; summary: the fine structure constant may have varied by less than 0.4 % from recombination to today; she confirms that DE could be zero at recombination for all we know; Norma Sanchez emphasizes in discussion that the values of H 0 from Planck should be taken with caution. Daniel Maier (Saclay): ASTRO-H and the search for the keV sterile neutrino; he says that ASTRO-H is broken beyond repair; they have data on the Perseus cluster from the satellite for the first 38 days (instead of the planned 3 yrs); these data will be published in a few weeks; describes the satellite, very many instruments, with an extended optical bench for the hard X-rays; explains the detectors; temperature stability is 2 µK in orbit, mostly due to CRs, since on the ground the stability was 0.4 µK; lots of technology; Paolo Salucci mentions that we now know the distribution of DM in the Galactic Center region better than in galaxy clusters, at the worst just as good; shows a plot of the expected signal, and the Milky Way is best, but forgot the galaxy M31 and also the best dwarfs (Paolo Salucci states this); shows simulations which demonstrate that ASTRO-H could have been very good; all data taken during the check-out phase were not expected to be used for science, but now they have to be used; in emergency mode the Solar Array Paddles and the Extended Optical Bench separated from the satellite; incorrect rotation modes also in safe mode; a publication is coming on the possible 3.5 keV line; a second paper will discuss the velocity field of the gas in Perseus; he then says that it would be relatively easy to build the satellite again, but he is not sure the funding exists; so a more optimistic view than what was expressed at the Vulcano meeting; he mentioned that a number of Japanese satellites had serious failures, ASTRO-H is not the first; long discussion about the options i) to build another satellite, and ii) reanalyze the existing data. 
Nicola Menci (Roma): WDM astrophysics, galaxy and star formation: M f reestream = 10 15.6 M (m s /30eV ) -2 -> keV scale as well; feedback can bring models back to reality; however, critical issues remain, such as at high redshift; Guo et al. 2011; it also suppresses the L/M ratio in small galaxies, see Brook & Di Cintio;Papastergis et al. 2011Papastergis et al. , 2015;; argues for delay of star formation in WDM models, but of course ignores the ionization and extra cooling; the fraction of quiescent galaxies is far too high in CDM models; in WDM models that fraction is much closer to reality; uses CDM Somerville, CDM de Lucia, and his own models; m s = 2.9 m x for ..; m x > 4 keV indistinguishable from CDM in terms of galaxy formation; the most powerful probe is the formation of high redshift galaxies; using clusters as lenses one can reach galaxies a factor of 10 -20 fainter using HST; Alavi et al. 2015; -> deepest LF ever measured; Livermore et al. 2016 even deeper; 164 galaxies at redshift 6 and beyond; reaching quite large densities; NM and NM et al. 2016; give number density of galaxies -> 3 keV; NM et al. 2016b;Alavi et al. 2015, Parsa et al. 2015, Livermore et al. 2016; Merle 2016; he plots comoving density, and finds galaxies with a density beyond 1 Mpc -3 ; Marsch et al. 2015;Schive et al. 2016; he says after questioning, that the lower limit is 2.1 keV for a thermal relic; Norma Sanchez emphasizes that an upper limit is 4 keV for the thermal relic mass from the abundance of small galaxies; so the window is now 2.1 (3 sigma) to 4 keV; apparently their paper is out today; assuming the Shi-Fuller mechanism allows then a real mass which is larger: At coffee Nicola Menci says that UV is strongly obscured, and the corresponding less obscured X-rays are an important test; he was not certain whether these models used the mass function of the quiescent BHs and not just the active BHs; he also said that there is a "massive seed theory"; in the competition between accretion, and making new BHs, the mass function can be modified obviously. Pier Stefano Corasaniti (Meudon): Non-linear structure formation in non-standard cosmologies; Pontzen & Governato 2014, Klypin et al. 1999;Boylan-Kolchin et al. 2012;Hahn, Abel, Kaehler 2013, Agarwal & Corasaniti .. 2015 ; spurious halo contamination; so he proposed (A & C '15) a physical criterion; he uses the spin parameter of halos; in spin parameter J/(2 1/2 M V R) = λ ; in terms of this spin parameter all have spuriously high spin; also spurious non-sphericity; virial condition as well violated for spurious halos; most spurious halos are low mass; so eliminating all spurious halos recovers log-normality in all these three measures, spin, non-sphericity, and viriality; cleaning up gives better mass function; Atek et al. 2015LF at high redshift, Bouwens et al. 2015,..; points out problems with UV LF due to extinction; they use a conversion to star formation rate; Mashian,.. Loeb 2015; then using matching on star formation densities; in WDM higher star formation rate as compared to CDM; at 10 9 M a factor of 100 possibly; this depends on the DM particle mass; upon questioning he says, that only from my lecture did he realize that there is a mechanism to have star formation really early; later he talked a lot about how to relate the SMBH density (Caramete & PLB 2010) to the density which he determines for small galaxies at high redshift; he showed comoving densities up to about 10 +0.5 M pc -3 . 
PLB comment: the SMBH original density is higher than 10 -2 Mpc -3 , using growth by merging gives about 10 -1 Mpc -3 , so a fair fraction of all small galaxies must produce a SMBH originally; and to explain the great arc many of the original SMBHs must merge in 5 or 6 binary generations to produce what we observe. Loredana Gastaldo (U. HD): ECHo experiment to measure keV sterile neutrino 163 67 Ho; uses Penning trap; she says that she observes electrons and photons of the excited state of this isotope after that it is hit by a sterile neutrino; statistics at end point of decay spectrum only a few 10 -13 of all events; the basic system is to measure the normal neutrinos; Gastaldo et al. NIMA 711, 150 (2013), Ranitzsch et al. 1409.0071;Faessler et al. JPhysG 42, 015108 (2015), PRC 91, 045505 (2015) Jun 17: Isabella Paola Carucci (SISSA, Trieste): Exploring the 21cm power spectrum forecasts for SKA on WDM: she starts with the Current Power spectrum extending it with the Lyman α forest from Tegmark et al 2004; Lyman α constrains the scales larger then 300 kpc; she says we need to look for damped Lyman α systems from galaxies; then use "intensity mapping", not focusing on any one source, just map what you see; initially about a degree resolution; use simulations, study CDM and WDM; for 3 to 4 keV WDM particle (thermal equivalent) mass they find a 20 -40 % suppression of low mass halos, of order 10 9 M ; Bagla 2010, and Dave 2013, connected to halos, and the other based just on "particles"; uses photo-ionization, self-shielding, and molecular Hydrogen; the simulations show much smoother structures for WDM; huge differences in WDM structures; reduction of power in the matter power spectra results in an increase of power in the terms of the HI and hence the 21cm power spectra; the effect is quite drastic; she tests the HI modelling using the HI column density distribution, Noterdaeme 2012, and Zafar 2013. Paolo Salucci (SISSA): Observed structural properties of galaxies and cored density profiles lead to WDM; for the first time presents data on disks in dwarf galaxies; argues against a WIMP particle; Sinziana Paduroiu asks him about where all the DM is, and he answers that integrating the galaxy mass function, multiplying with the proper proportion of DM, gives 80 % of all DM; clusters of galaxies give the rest 20 %; disks have exponential profile (Freeman 1970); M33 has truncation at 4 exponential scales, NGC300 at 10 exponential scales; at low luminosity almost linear rotation curve. 
High luminosity galaxies have a rapid rise of V rot and then a flat rotation curve; introduces his "Universal Rotation Curve (URC)"; mentions paper with Hector de Vega and Norma Sanchez in MNRAS 2014/15; smallest galaxies most numerous, more DM dominated, densest objects, first born, immune to baryonic physics arguments in CDM; states dSph complex dynamics, dwS simple disk; updates nearby galaxy catalogue by Karachentsev et al.; looking for dwarf disks he ends up with 36 objects; doubly normalizes the RC, in radius and in velocity; this normalization is very important; most extreme is UGC4483; stellar mass is proportional to light in K-band; states that the compactness of the luminous matter depends on the compactness of the DM; this RC he calls DDURC, and then the scatter is very small; new relation between baryonic mass and total mass, an extension of the old relation; these small galaxies do not follow the URC, but again lie on a "fundamental plane", so seen in projection, lie on a line in a cube of concentration, mass and length scale; he can make a similar argument with mass, concentration and central density; in the URC he needs to add another parameter, the concentration; he ends by saying that these dwarf disks do not contribute much to the WDM discussions; he says upon questioning that the total mass of these dwarf disks is larger than the dwarf spheroidals. Norma Sanchez (Paris): WDM: She emphasizes that the nature of DM is critical for the understanding of galaxies and galaxy formation; on large scales CDM and WDM coincide; she summarizes "CMB data confirm the ΛWDM model on large scales"; then adds the new point of quantum mechanics in galaxies; Newton + Fermi + Dirac meet soto-speak in galaxies via keV WDM; she argues against MOND; keV WDM solves the core/cusp problem, the satellite problem, the non-observation of WIMPs, DM-annihilation, axions, DM bosons; Destri, Hector de Vega, & Norma Sanchez New Astronomy 2013, Hector de Vega & Norma Sanchez PRD 2013, Hector de Vega, Salucci, & Norma Sanchez MNRAS 442, 2717(2014), Norma Sanchez IJMP 2016; uses minimal mass of dwarf galaxies -> minimum mass of DM particle, of 1.91 keV; mentions that the fraction of baryons even in large galaxies reaches only 5 %; shows from the work with Paolo Salucci the URC; very similar to the Burkert profile; M min = 3 •10 4 M (2 keV/m s ) 16/5 for galaxies, in the extreme limit; this applies to the smallest and most compact galaxies; emphasizes that cusped density profiles produce distribution functions which are divergent at the center; in the outside region of halos temperature is lower, since virialization starts before thermalization; Harvey et al. Science 2015: collisions between clusters show that self-interaction between DM particles is extremely limited; result cross-section divided by mass < 0.47 cm 2 /g; DM verified at 7.6 σ; needs DM to be Fermionic; including Shi-Fuller DM is between 2 and 9 keV; just thermal case mass between 2 and 4 keV; she points to sterile neutrinos, but says that many possibilities exist in particle physics; Norma Sanchez + 2015 suggests that SMBHs are the limit, so argues that SMBHs grow from Fermionic DM. 
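As a quick numerical illustration of the minimal-galaxy-mass scaling quoted in these notes, M_min = 3 x 10^4 M_sun (2 keV / m_s)^(16/5), the following sketch (for orientation only, using the relation exactly as stated above) evaluates it for a few particle masses:

# Minimal (most compact) galaxy mass versus WDM particle mass, as quoted above.
def m_min_solar(m_keV):
    return 3.0e4 * (2.0 / m_keV) ** (16.0 / 5.0)

for m in (2.0, 4.0, 7.0):  # keV
    print(f"m = {m} keV  ->  M_min ~ {m_min_solar(m):.2e} M_sun")
# 2 keV gives 3.0e4 M_sun; larger particle masses give smaller minimal galaxies.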
Special General discussion session: In the discussion Norma Sanchez states that in the work with Héctor they found direct solutions in a degenerate configuration including a SMBH as a proper solution, she calls a rich family of solutions, as a direct extension of the degenerate cores in galaxies, then also finding no SMBHs in small galaxies; Nicola Menci says that with low efficiency accretion one can yield SMBHs from accretion; I mention the work by Jan van Paradijs about super-Eddington accretion; Nicola Menci says that at z > 1 CDM simulations give too many AGN, and calls it a crisis; by more than one order of magnitude at z > 5; comment then to Nicola Menci that Giant Radio Galaxies show that the efficiency overall is quite high in BH accretion; analysis on the subject DM and galaxies through the 25 years of intense activity of the school versus the clarification in cosmology i.e. CMB for instance as done from COBE and WMAP, which established the Standard cosmological model with inflation; Nicola Menci suggests an optimistic view of the situation of the field, after Norma Sanchez points out the inertia, in the DM research in general, not for WDM of course which is going fast! saying "from 40 years" and more (from the times of Chalonge and Szicky); Paolo Salucci voices optimism; I also agree mentioning CR physics. Héctor de Vega medal; she also mentions the previous two presentations of the Medal; mentions Poincare wrote about GWs, he called them "ondes gravifiques", with a number of important papers from 1902 to 1906. *** We thank all again, both lecturers and participants, for having contributed so much to this exciting meeting and look forward to seeing you again in the next Workshop of this series. We thank the CIAS Observatoire de Paris for its support and all those who contributed so efficiently to the successful organization of this meeting, particularly Nicole Letourneur, Djilali Zidani, Sylvain Cnudde, Jean-Pierre Michel, Emmanuel Vergnaud and Jerome Berthier. Warm Dark Matter Astrophysics in Agreement with Observations and keV Sterile Neutrinos: Highlights and Conclusions of the Chalonge -de Vega Meudon Workshop 2016 In Memoriam Héctor de Vega Ecole Internationale d'Astrophysique Daniel Chalonge -Héctor de Vega Meudon, CIAS, 15-17 June 2016. Medal: The Héctor de Vega Medal VI. Programme Héctor de Vega 2016 VII. Live Minutes of the Workshop by Peter Biermann I. PURPOSE OF THE WORKSHOP, CONTEXT AND INTRODUCTION FIG. 1: Poster of the Workshop (b) Geneva Observatory, University of Geneva, CH-1290 Sauverny, Switzerland † and (c) Max-Planck Institute for Radioastronomy, Auf dem Hügel 69, Bonn, Germany (d) Dept. Phys., Karlsruher Institut für technologie, Karlsruhe, Germany (e) Dept. of Phys. & Astr., Univ. of Alabama, Tuscaloosa, AL, USA (f ) Dept. of Phys. & Astr., Univ. of Bonn, Germany * ‡ (Dated: July 18, 2016) www.chalonge.obspm.fr/Cias Meudon2016.html The presentations by the lecturers are available on line (in .pdf format) in 'Programme and Lecturers' at: http://www.chalonge.obspm.fr/Programme CIAS2016.html VI. PROGRAMME H ÉCTOR DE VEGA 2016 VII. LIVE MINUTES OF THE WORKSHOP BY PETER BIERMANN , PRC 91, 064302 (2015); Rujula et al. 1510.054462; Robertson et al. PRC 91, 035504 (2015); problems with higher order excitations; now move on to sterile neutrinos; Filianin et al. 
1402.4400; mass of sterile neutrino distance from kink to end-point of decay spectrum; 1602.04816; identification of sterile neutrino signatures could be limited by the complex structure of the 163 Ho spectrum! Filianin et al. JPhysG 41, 095004 (2014), other Electron Capture (EC) isotopes; 123 Te, 157 Tb, 163 Ho, 179 Ta, 193 Pt, 235 Np; these other EC candidates can measure other masses of sterile neutrinos. V. A NEW MEDAL: THE H ÉCTOR DE VEGA MEDAL In the honor of Héctor de Vega, the Scientist and the Human Person, a Medal with his portrait and his name was created, coined and edited: The Héctor de Vega Medal. Science with a great intellectual exigency and a human face. FIG. 2: The Héctor de Vega Medal The first side of the medal shows the name, dates and an artistic engravure of Héctor de Vega's portrait inspired by the outstanding picture taken by Nadia Charbit Blumenfeld. In the reverse side of the Medal the following text is engraved in french:
01591015
en
[ "phys.cond.cm-msqhe" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01591015/file/ARXIV_FDT_V2.pdf
Adeline Crépieux Out-of-equilibrium fluctuation-dissipation relations verified by the electrical and thermoelectrical ac-conductances in a quantum dot The electrical and heat currents flowing through a quantum dot are calculated in the presence of a time-modulated gate voltage with the help of the out-of-equilibrium Green function technique. From the first harmonics of the currents, we extract the electrical and thermoelectrical trans-admittances and ac-conductances. Next, by a careful comparison of the ac-conductances with the finite-frequency electrical and mixed electrical-heat noises, we establish the fluctuation-dissipation relations linking these quantities, which are thus generalized out-of-equilibrium for a quantum system. It is shown that the electrical ac-conductance associated to the displacement current is directly linked to the electrical noise summed over reservoirs, whereas the relation between the thermoelectrical ac-conductance and the mixed noise contains an additional term proportional to the energy step that the electrons must overcome when traveling through the junction. A numerical study reveals however that a fluctuation-dissipation relation involving a single reservoir applies for both electrical and thermoelectrical ac-conductances when the frequency dominates over the other characteristic energies. I. INTRODUCTION The fluctuation-dissipation theorem (FDT) is a relation which states that the time-correlation function of an unperturbed system is equal to the response function of the perturbed system [START_REF] Kubo | Statistical Physics II[END_REF] . For example, in a conductor, the current fluctuations are directly related to the acconductance. This means that the response of the system to the action of an external force is closely connected to the way their eigenstates can fluctuate. If they can not, the system will not react to the perturbation. The FDT was first evidenced in electrical conductors by Johnson 2 and Nyquist 3 . It is often thought that the FDT applies only at equilibrium and for linear response but in reality its validity domain is wider. The FDT has been discussed far and wide for over sixty years [4][START_REF] Sh | Electronic noise and fluctuations in solids[END_REF][START_REF] Hartnagel | Microwave Noise in Semiconductor Devices[END_REF][START_REF] Marconi | [END_REF] and continues to be a pivotal issue, notably concerning its generalization to non-linear, non-equilibrium, non-perturbative, interacting, and nano-scale systems [8][9][10][11][12][13][14][15][16][17][18][19][20][21] . In some other works, this relation is used to deduce the electrical ac-conductance from the calculation of noise without having to include ac-voltage in the calculation 22,23 , and constitutes a useful ingredient in the theoretical studies of electrical time-dependent transport in quantum systems [24][25][26][27][28][29][30] , which are fully accessible experimentally [31][32][33][34][35][36][37][38][39][40] . In the last years, these theoretical studies have been extended to the heat and thermoelectrical ac-transport in quantum systems [41][42][43][44][45][46][47][48][49][50][51][52] but no direct connection has been established until now between the thermoelectrical trans-admittance and the fluctuations mixing the electrical and heat currents in a quantum system. 
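For orientation, and independently of the out-of-equilibrium generalization derived in this paper, the equilibrium form of the statement for an electrical conductor is the standard Callen-Welton / Johnson-Nyquist relation (quoted here from standard references, with the symmetrized-noise convention in which the classical limit is 4 k_B T G):
\[
S_I(\omega) \;=\; 2\hbar\omega\,\coth\!\left(\frac{\hbar\omega}{2k_BT}\right)\mathrm{Re}\,G(\omega)
\;\;\xrightarrow{\;\hbar\omega\ll k_BT\;}\;\; S_I \;=\; 4k_BT\,G ,
\]
where \(S_I(\omega)\) is the symmetrized current-noise spectral density and \(G(\omega)\) the ac-conductance.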
In this paper, using the out-of-equilibrium Keldysh Green function formalism, we perform a direct calculation of the time-dependent electrical and heat currents associated to a quantum dot (QD) submitted to an ac-gate voltage. Next, we derive the exact expressions of the electrical and thermoelectrical trans-admittances and ac-conductances, and compare them to the expressions of the electrical and mixed noises in order to establish whether the FDT is verified. This paper is organized as follows: the model and the formal expression of the electrical and heat currents are given in Sec. II. Sections III and IV present respectively the calculation of both currents for a time-independent and a time-dependent gate voltage. The expressions of the trans-admittance are given in Sec. V, and those of the ac-conductances in Sec. VI. The derivation of the FDT is exposed in Sec. VII, and we conclude in Sec. VIII. II. MODEL We consider a non-interacting QD with a single energy level, ε dot (t), which can be driven in time by a gate voltage, connected to left (L) and right (R) reservoirs (see Fig. 1). To describe this system, we use the Hamiltonian H = H L + H R + H dot + H T , with H α=L,R (t) = k∈α ε k c † k (t)c k (t) , (1) H dot (t) = ε dot (t)d † (t)d(t) , (2) H T (t) = α=L,R k∈α V k c † k (t)d(t) + h.c. , (3) where c † k (c k ) is the creation (annihilation) operator of one electron in the reservoirs, d † (d) is the creation (annihilation) operator of one electron in the QD, ε k is the band energy of the reservoir, and V k is the transfer amplitude of one electron from the QD to the reservoirs and vice-versa. We set = e = 1 in all the intermediate results, and restore these constants in the final results. The electrical and heat current operators from the α reservoir to the central region through the α barrier are respectively defined as Î0 α (t) = -Ṅα (t), and Î1 to α (t) = -Ḣα (t) + µ α Ṅα (t), where N α (t) = c † k (t)c k (t), which lead Îη α (t) = i k∈α (ε k -µ α ) η × V k c † k (t)d(t) -V * k d † (t)c k (t) , (4) where η = 0 gives the electrical current, and η = 1 gives the heat current. Their average values are thus given by Îη α (t) = 2Re k∈α (ε k -µ α ) η V k G < c k d (t, t) , (5) where G < c k d (t, t ′ ) = i c † k (t ′ )d(t) is the Keldysh Green function mixing c and d operators, which is equal to [START_REF] Kadanoff | Quantum statistical mechanics: Green's function methods in equilibrium and nonequilibrium problems[END_REF][START_REF] Langreth | Linear and non-linear electrons transport in solids[END_REF][START_REF] Jauho | [END_REF][START_REF] Haug | Quantum Kinetics in Transport and Optics of Semiconductors[END_REF] G < c k d (t, t ′ ) = V * k ∞ -∞ dt 1 G r dot (t, t 1 )g < k (t 1 , t ′ ) +G < dot (t, t 1 )g a k (t 1 , t ′ ) , (6) where G < dot (t, t ′ ) = i d † (t ′ )d(t) is the Keldysh Green function associated to the QD, and G r dot (t, t ′ ), its retarded counterpart. g < k (t, t ′ ) = i c † k (t ′ )c k (t) 0 is the Keldysh Green function associated to the disconnected reservoir, and g a k (t, t ′ ), its advanced counterpart. These two latter Green functions are given by 57 g < k (t, t ′ ) = if α (ε k )e iε k (t ′ -t) , (7) g a k (t, t ′ ) = iΘ(t ′ -t)e iε k (t ′ -t) , (8) where Θ is the Heaviside function and f α , the Fermi-Dirac distribution function of the electrons in the reservoir α. When we report Eqs. (6-8) in Eq. 
( 5), we get Îη α (t) = - 2 h Γ α Im ∞ -∞ dε 2π (ε -µ α ) η ∞ -∞ dt 1 e iε(t-t1) × f α (ε)G r dot (t, t 1 ) + Θ(t -t 1 )G < dot (t, t 1 ) , (9) where Γ α = 2π|V | 2 ρ α in the wide band approximation (energy dependency is neglected in the reservoir density of states ρ α , and in the hopping amplitude V ≡ V k ). Thus, the knowledge of the dot Green function, G r,< dot , allows us to fully determine the time-dependent current. We have [START_REF] Jauho | [END_REF][START_REF] Haug | Quantum Kinetics in Transport and Optics of Semiconductors[END_REF] G r,a dot (t, t ′ ) = g r,a dot (t, t ′ )e ±(ΓL+ΓR)(t ′ -t)/2 , (10) with g r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e -i t t ′ dt1ε dot (t1) , and G < dot (t, t ′ ) = i ∞ -∞ dt 1 ∞ -∞ dt 2 G r dot (t, t 1 )G a dot (t 2 , t ′ ) × α Γ α ∞ -∞ dε 2π f α (ε)e iε(t2-t1) . ( 11 ) Given a time-variation of the dot energy level ε dot (t), we have all the ingredients to calculate the time-dependent current. In the following, we first remind the expressions of the currents in the time-independent case, and next treat the case where the gate voltage is modulated in time. III. STATIONARY ELECTRICAL AND HEAT CURRENTS In the time-independent case, we have ε dot (t) = ε dc , which leads to g r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e -iε dc (t-t ′ ) , thus G r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e [iε dc ±Γ](t ′ -t) , (12) with Γ = (Γ L + Γ R )/2, and G < dot (t, t ′ ) = i α Γ α ∞ -∞ dε 2π f α (ε)e -(iε dc +Γ)t+(iε dc -Γ)t ′ × t -∞ dt 1 e (iε dc -iε+Γ)t1 t ′ -∞ dt 2 e (-iε dc +iε+Γ)t2 , (13) which leads after calculation to G < dot (t, t ′ ) = i α Γ α ∞ -∞ dε 2π f α (ε)e iε(t ′ -t) (ε -ε dc ) 2 + Γ 2 . ( 14 ) We remark that, as it should be in the time-independent case, the Green function at times t and t ′ depends of the time difference t -t ′ only. Inserting Eqs. (12) and (14) in Eq. ( 9), we get the Landauer formula for the electrical and heat currents Îη α = 1 h ∞ -∞ dε 2π (ε -µ α ) η T (ε) f α (ε) -f α (ε) , (15) where α = R for α = L, and α = L for α = R. T (ε) = Γ L Γ R /[(ε -ε dc ) 2 + Γ 2 ] is the transmission coefficient through the double barrier. IV. TIME-MODULATED ELECTRICAL AND HEAT CURRENTS When a gate-voltage modulated in time is applied, i.e., when ε dot (t) = ε dc + ε ac cos(ωt), the bare retarded and advanced Green functions of the QD defined as g r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e -i t t ′ dt1ε dot (t1) are equal to g r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e -iε dc (t-t ′ ) × exp -i(ε ac /ω) sin(ωt) -sin(ωt ′ ) . Using the relation e ix sin(y) = ∞ n=-∞ J n (x)e iny , where J n is the Bessel function, we get for the retarded and advanced bare Green functions of the QD g r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e iε dc (t ′ -t) × ∞ n=-∞ ∞ m=-∞ J n ε ac ω J m ε ac ω e inωt ′ -imωt , (17) and for the retarded and advanced Green functions of the QD G r,a dot (t, t ′ ) = ∓iΘ(±t ∓ t ′ )e (iε dc ±Γ)(t ′ -t) × ∞ n=-∞ ∞ m=-∞ J n ε ac ω J m ε ac ω e inωt ′ -imωt . ( 18 ) We calculate now the QD Keldysh Green function starting from Eq. ( 11), we insert the expressions of the retarded and advanced Green functions of Eq. ( 18), and we perform the double integration over time. We obtain G < dot (t, t ′ ) = i α Γ α × n,m,p,q J n ε ac ω J m ε ac ω J p ε ac ω J q ε ac ω × ∞ -∞ dε 2π f α (ε)e (-iε+i(n-m)ω)t e (iε+i(p-q)ω)t ′ (iε dc + Γ -iε + inω)(-iε dc + Γ + iε -iqω) . ( 19 ) We now calculate the currents, given by Eq. ( 9), by reporting the expressions of the retarded Green function given by Eq. ( 18) and of the Keldysh Green function given by Eq. 
( 19), and performing the integration over time, we get where f M (ε) = α=L,R f α (ε)/2 is the average distribution function over the two reservoirs, and where we have introduced the transmission amplitude defined as Îη α (t) = 2 h n,m J n ε ac ω J m ε ac ω ×Re e i(n-m)ωt ∞ -∞ dε 2π (ε -µ α ) η f α (ε)τ (ε -nω) - 2 h n,m,p,q J n ε ac ω J m ε ac ω J p ε ac ω J q ε ac ω ×Re e i(n-m+p-q)ωt ∞ -∞ dε 2π (ε -µ α ) η f M (ε + (q -p)ω) ×τ * (ε -pω)τ (ε -(n + p -q)ω) , (20) τ (ε) = iΓG r dot (ε) = iΓ/(ε -ε dc + iΓ), assuming sym- metrical barriers Γ L = Γ R = Γ. For clarity, all the characteristic energies of the problem are summarized in Table 1. FIG. 2: Time-evolution of the electrical current Î0 L,R (t) and the heat current Î1 L,R (t) in the left/right reservoir (purple/orange curved lines) for kBT / ω = 0.1, Γ/ ω = 0.1, eV / ω = 1, εac/ ω = 0.2 when the time-independent potential profile through the junction is symmetrical (ε dc / ω = 0.5) and non-symmetrical (ε dc / ω = 1). The black curve lines correspond to the displacement currents Îη d (t) = Îη L (t) + Îη R (t) . The dashed lines indicates the currents in the stationary case, i.e. when εac = 0. The right reservoir is grounded: µR = 0. To illustrate this result, we plot in Fig. 2 the timeevolution of the electrical and heat currents. When the potential profile through the junction is symmetrical, i.e., when ε dc = (µ L + µ R )/2, the left and right electrical cur-rents oscillate in phase around their time-independent values, given by Eq. ( 15) taking η = 0, which are of opposite sign since the averaged displacement electrical current 40 , Î0 d (t) , equals to the sum of left and right currents, cancels in the stationary case due to charge conservation. In the presence of time-modulation, the displacement current is non-zero (see black curved lines in Fig. 2). The left and right heat currents oscillate in phase opposition around the same stationary value, given by Eq. ( 15) taking η = 1, since the heat transferred from the left and right reservoirs to the QD is the same when the potential is symmetrical (indeed, the energy distances µ L -ε dc and ε dc -µ R are equal as depicted in the top left corner of Fig. 2). On the contrary, when the potential profile is non-symmetrical, i.e., when ε dc = (µ L + µ R )/2, we observe that the left and right electrical currents are out of phase. Moreover, the stationary heat currents from the left and right reservoirs are different in that case (dashed orange and purple straight lines): the left stationary heat current vanishes since the energy difference between the left reservoir and the QD is zero, whereas the stationary right heat current is almost unchanged in comparison to the symmetrical case. We observe also that the stationary electrical currents are half reduced due to the face that the energy barrier between the QD and the right reservoir is the double compared to the symmetrical case. For both symmetrical and non-symmetrical profiles, the amplitude of oscillations of the displacement heat current, Î1 d (t) , is attenuated in comparison to the amplitude of the left and right heat currents. No such attenuation is observed for the displacement electrical current. V. ELECTRICAL AND THERMOELECTRICAL TRANS-ADMITTANCES From the expression of Îη α (t) given by Eq. ( 20), we deduce the trans-admittance Y η α (ω) = dI η(1) α (ω)/dV ac , with V ac = ε ac /e, and I η(1) α (ω) the first harmonic of the current defined through the relation Îη α (t) = I η(0) α + 2 ∞ N =1 Re I η(N ) α (ω)e -iN ωt . 
(21) To identify the N th harmonic of the current, I η(N ) α (ω), we rewrite Eq. ( 20) making the change of index m = n + N in the first contribution, and the change of index m = n + p -q + N in the second contribution. We get I η(N ) α (ω) = 1 h ∞ -∞ dε 2π (ε -µ α ) η f α (ε) × n J n ε ac ω J n-N ε ac ω τ * (ε -nω) +J n ε ac ω J n+N ε ac ω τ (ε -nω) - 1 h n,p,q ∞ -∞ dε 2π (ε -µ α ) η f M (ε + (q -p)ω) × J n ε ac ω J n+p-q-N ε ac ω J p ε ac ω J q ε ac ω ×τ (ε -pω)τ * (ε -(n + p -q)ω) +J n ε ac ω J n+p-q+N ε ac ω J p ε ac ω J q ε ac ω ×τ * (ε -pω)τ (ε -(n + p -q)ω) . (22) We assume at this stage that ε ac → 0, and we keep only the contributions proportional to ε ac / ω, since to get the trans-admittance we have to take the derivative of the first harmonic of the current, I η( 1) α (ω), according to ε ac . To get these contributions, we consider the Taylor expansion of the products of Bessel functions. Concerning the product J n J n∓1 , its Taylor expansion gives a contribution proportional to ε ac provided that one of the Bessel function index is equal to ±1 and the other is equal to 0. Concerning the product J n J n+p-q∓1 J p J q , it gives a contribution proportional to ε ac / ω provided that one of the Bessel function index is equal to ±1 and the others are equal to 0. Keeping only these contribution, we find that the trans-admittance reads as Y η α (ω) = 1 2h ω ∞ -∞ dε 2π (ε -µ α ) η × f α (ε) τ (ε) -τ * (ε) -τ (ε -ω) + τ * (ε + ω) +f M (ε -ω) T (ε -ω) -τ * (ε)τ (ε -ω) +f M (ε + ω) τ * (ε + ω)τ (ε) -T (ε + ω) +f M (ε) τ * (ε)τ (ε -ω) -τ * (ε + ω)τ (ε) . ( 23 ) This is the key result of this paper which is valid for any values of the source-drain voltage, temperature, frequency and coupling strength to the reservoirs. It will be used in the next section to deduce the ac-conductances. Figure 3 shows the profiles of the trans-admittances Y 0 α (ω) and Y 1 α (ω) as a function of left/right voltage eV and dc-gate voltage ε dc . The dashed blue lines indicated the place where the left and right trans-admittances are equal. We can see that this is the case at equilibrium, i.e., at V = 0 (see the vertical blue line presents in each graph) and also out-of-equilibrium. In particular, we explain in the next section why the real parts of left and right electrical trans-admittances are equal each other when ε dc = eV /2, i.e. for symmetrical potential profile. The dashed red lines indicated the place where the left and right trans-admittances are opposite. This occurs for the thermoelectric trans-admittance but never for the electrical trans-admittance. The important point to notice at this stage is that Y η L (ω) and Y η R (ω) take distinct absolute values except in some particular situations which are: at equilibrium (as expected) and out-ofequilibrium on the blue and red dashed curved lines. VI. ELECTRICAL AND THERMOELECTRICAL AC-CONDUCTANCES The electrical ac-conductance, G 0 α (ω), is given by the real part of the trans-admittance Y 0 α (ω) associated to the electrical current. From Eq. ( 23), making the change of variable ε → ε -ω for terms involving the argument ε + ω, we get G 0 α (ω) = e 2 2h ω × ∞ -∞ dε 2π f α (ε -ω)T (ε) -f α (ε)T (ε -ω) +f M (ε -ω) T (ε -ω) -2Re τ (ε)τ * (ε -ω) +f M (ε) 2Re τ (ε)τ * (ε -ω) -T (ε) . (24) The thermoelectrical ac-conductance, G 1 α (ω), is given by the real part of the trans-admittance Y 1 α (ω) associated to the heat current. From Eq. 
( 23), making the change of variable ε → ε -ω for terms involving the argument ε + ω, we get G 1 α (ω) = e 2h ω ∞ -∞ dε 2π × (ε -µ α ) T (ε -ω) f M (ε -ω) -f α (ε) +Re τ (ε)τ * (ε -ω) f M (ε) -f M (ε -ω) +(ε -ω -µ α ) T (ε) f α (ε -ω) -f M (ε) +Re τ (ε)τ * (ε -ω) f M (ε) -f M (ε -ω) . ( 25 ) From Eqs. ( 24) and ( 25), it can be checked that the conductances G 0 α (ω) and G 1 α (ω) are both even function in frequency. In order to identify the conditions to get identical left and right ac-conductances, it is needed to calculate the following differences using Eq. ( 24) G 0 L (ω) -G 0 R (ω) = e 2 2h ω × ∞ -∞ dε 2π f R (ε) -f L (ε) T (ε -ω) -f R (ε -ω) -f L (ε -ω) T (ε) , (26) and G 1 L (ω) -G 1 R (ω) = e 2h ω ∞ -∞ dε 2π × (ε -ω -µ L )f L (ε -ω) -(ε -ω -µ R )f R (ε -ω) T (ε) -(ε -µ L )f L (ε) -(ε -µ R )f R (ε) T (ε -ω) . (27) Both differences cancel at equilibrium (small voltage V and large temperature T ) since in that case we have f L (ε) = f R (ε). Moreover, G 0 L (ω) -G 0 R ( ω) cancels also out-of-equilibrium when the profile of the potential through the junction is perfectly symmetric. Indeed, Eq. ( 26) can be written alternatively using T (ε) = Γ 2 /(ε 2 + Γ 2 ), as G 0 L (ω) -G 0 R (ω) = e 2 h ω ∞ -∞ dε 2π T (ε) × F (ω, ε -ε dc -µ L ) -F (ω, ε -ε dc + µ R ) ,( 28 ) where F (ω, ε) = [1 + sinh(ε/k B T )/ sinh( ω/k B T )] -1 . The above difference vanishes when the electron-hole symmetry point is reached, here when ε dc = (µ L +µ R )/2, i.e, ε dc = eV /2 since we take µ R = 0, in full agreement with the two first upper graphs of Fig. 3. In Fig. 4, we plot the ac-conductances spectrum associated to the left and right parts of the junction for several dc-gate voltage values. We see that as expected from Fig. 3 and explained by Eq. ( 28), the left and right electrical ac-conductances coincide when the potential profile is symmetrical, whereas the left and right thermoelectrical ac-conductances take opposite values. The physical justification is the following: for a symmetrical potential profile, the energy differences are the same in absolute value but opposite in sign when the electrons flow from the QD to the left and right reservoirs (energy gain for one direction of propagation and energy loss for the other). VII. OUT-OF-EQUILIBRIUM FDT To establish the FDT, we need to compare the expressions of the ac-conductances to the difference S η1η2 αβ (-ω)-S η2η1 βα (ω), where the finite-frequency currentcurrent correlator S η1η2 αβ (ω) is defined as S η1η2 αβ (ω) = dte iωt δ Îη1 α (t)δ Îη2 β (0) , ( 29 ) with δ Îη1 α (t) = Îη1 α (t) -Îη1 α . The reason for considering the difference S η1η2 αβ (-ω) -S η2η1 βα (ω) is double: it allows to suppress the terms involving the product of two functions f α shifted with energy ω, which are not present in the conductances G 0 α (ω) and G 1 α (ω), and it allows at the same time to be fully consistent with the Kubo formula 1 . Note that the noise depends on the reservoir indexes when the following conditions are all filled: non-zero frequency, energy dependent transmission coefficient and asymmetry of the potential profile across the system [START_REF] Zamoum | [END_REF] . It was confirmed by two recent experiments on carbon nanotube quantum dot 59 and on tunnel junction 60 . 
We start with the comparison between the electrical ac-conductance, G 0 α (ω), and the difference S 00 αβ (-ω) -S 00 βα (ω) involving the electrical noise, and continue next with the comparison between the thermoelectrical acconductance, G 1 α (ω), and the difference S 10 αβ (-ω) -S 01 βα (ω) involving the mixed noise, i.e., the correlator between the electrical and heat currents. A. Direct comparison between electrical ac-conductance and electrical noise The electrical noise S 00 αβ (ω) was calculated in Ref. 58 for a similar system in the absence of gate-voltage modulation (i.e., for ε ac = 0). Considering the difference S 00 αβ (-ω)-S 00 βα (ω), we get for the auto-correlator (α = β) S 00 αα (-ω) -S 00 αα (ω) = e 2 h ∞ -∞ dε 2π × f α (ε -ω) T (ε) + |τ (ε) -τ (ε -ω)| 2 -f α (ε) T (ε -ω) + |τ (ε) -τ (ε -ω)| 2 +f α (ε -ω)T (ε -ω) -f α (ε)T (ε) , (30) and for the cross-correlator (α = β) S 00 αα (-ω) -S 00 αα (ω) = e 2 h ∞ -∞ dε 2π × f α (ε) -f α (ε -ω) τ * (ε)τ (ε -ω) + f α (ε) -f α (ε -ω) τ (ε)τ * (ε -ω) . (31) Comparing Eqs. ( 24) and ( 30), we show that the electrical ac-conductance and auto-correlator are related together through the exact relation 4 ωG 0 α (ω) = S 00 αα (-ω) -S 00 αα (ω) + 2e 2 h ∞ -∞ dε 2π f α (ε) -f α (ε -ω) Re τ (ε)τ * (ε -ω) . (32) The additional contribution appearing in the second line of Eq. ( 32) is related to the cross-correlator given by Eq. (31). Moreover, we notice that the sum of the left and right electrical ac-conductances calculated from Eq. ( 24) gives ω α G 0 α (ω) = e 2 h ∞ -∞ dε 2π f M (ε -ω) -f M (ε) × τ (ε -ω) -τ (ε) 2 , (33) which coincides exactly with the sum over reservoirs of the difference S 00 αβ (-ω) -S 00 βα (ω) through the relation 4 ω α G 0 α (ω) = α,β S 00 αβ (-ω) -S 00 βα (ω) . ( 34 ) This result is a generalization of the FDT to on out-ofequilibrium situation. It is valid at any frequency, voltage, temperature and coupling strength between the QD and the reservoirs. The important point to underline here is the need to sum over reservoirs to get a simple relation between the ac-conductance and the noise. Indeed, an additional term is present when the sum over reservoirs is not taken (see in Eq. ( 32)). The justification for taking the sum over reservoirs is the following: since the time-modulation is applied to the gate-voltage which acts on the QD, i.e. on the central part of the junction, the relevant current here is the displacement current defined as Îη d (t) = Îη L (t) + Îη R (t). Thus, these are the fluctuations of the total current which is formally related to the total ac-conductance. It is important to underline that it is the double sum over the anti-symmetrized noises, i.e., the difference between the absorption noise and the emission noise: S 00 αβ (-ω)-S 00 βα (ω), which is related to the ac-conductance. Such a relation could not be obtained for symmetrized noise since in that case we would have on one hand, α,β [S 00 αβ,sym (-ω) -S 00 βα,sym (ω)] = 0, and on the other hand, 4 ω α [G 0 α (ω) -G 0 α (-ω)] = 0, since the total conductance is an even function with frequency (see Eq. ( 24)). Finally, we want to underline that even if the QD is placed in an out-of-equilibrium situation, the left and right reservoirs stay at equilibrium, this is very probably the reason why Eq. ( 34) is verified. At this stage, it is important to understand how the equilibrium limit (zero-voltage) can be reached from these results. 
In that limit, using the fact that f M (ε) = f α (ε) = f α (ε), the auto-correlator and the cross-correlator of Eqs. (30) and (31) simplify to S 00 αα (-ω) -S 00 αα (ω) = e 2 h ∞ -∞ dε 2π f M (ε -ω) -f M (ε) × T (ε) + T (ε -ω) + |τ (ε) -τ (ε -ω)| 2 , S 00 αα (-ω) -S 00 αα (ω) = e 2 h ∞ -∞ dε 2π f M (ε -ω) -f M (ε) × -T (ε) -T (ε -ω) + |τ (ε) -τ (ε -ω)| 2 , (35) and the ac-conductance of Eq. ( 33) gives 2 ωG 0 α (ω) = e 2 h ∞ -∞ dε 2π f M (ε -ω) -f M (ε) × τ (ε -ω) -τ (ε) 2 . ( 36 ) All these three quantities gained the particularity to become independent of the reservoir index α, as expected at equilibrium, and are related through the relation 4 ωG 0 α (ω) = β S 00 αβ (-ω) -S 00 βα (ω) . (37) At high frequency, the FDT simplifies even more since we have S 00 αα (-ω) -S 00 αα (ω) ≈ 0, and the KMS relation 61,62 : S 00 αα (-ω) = e ω/kBT S 00 αα (ω), thus S 00 αα (ω) = 4 ωN (ω)G 0 α (ω) , (38) where N (ω) = [exp( ω/k B T ) -1] -1 (ω) = 2e h α ∞ -∞ dε 2π × (ε -µ α ) f M (ε -ω) -f α (ε) × T (ε -ω) -Re{τ (ε)τ * (ε -ω)} +(ε -ω -µ α ) f α (ε -ω) -f M (ε) × T (ε) -Re{τ (ε)τ * (ε -ω)} . ( 39 ) The objective is to compare this expression to the sum of the left and right thermoelectric ac-conductances, calculated from Eq. ( 25), and given by 4 ω α G 1 α (ω) = 2e h α ∞ -∞ dε 2π × (ε -µ α ) f M (ε -ω) -f α (ε) T (ε -ω) + f M (ε) -f M (ε -ω) Re τ (ε)τ * (ε -ω) +(ε -ω -µ α ) f α (ε -ω) -f M (ε) T (ε) + f M (ε) -f M (ε -ω) Re τ (ε)τ * (ε -ω) . (40) Comparing Eqs. ( 39) and ( 40), we get 4 ω α G 1 α (ω) = Re αβ S 10 αβ (-ω) -S 01 βα (ω) + e h α ∞ -∞ dε 2π Re{τ (ε)τ * (ε -ω)} × (ε -µ α )[f α (ε) -f α (ε)] +(ε -ω -µ α )[f α (ε -ω) -f α (ε -ω)] . (41) The additional term is proportional to the energy that the electrons must overcome when they travel through the double barrier. At equilibrium, since we have f α (ε) = f α (ε), the above relation reduces to 4 ω α G 1 α (ω) = Re αβ S 10 αβ (-ω) -S 01 βα (ω) . (42) Moreover, it can be shown that we have a KMS-type relation between positive frequency and negative frequency mixed noises: αβ S 10 αβ (-ω) = e ω/kB T αβ S 01 αβ (ω), thus Re αβ S 01 αβ (ω) = 4 ωN (ω) α G 1 α (ω) , (43) which corresponds to a FDT between the sum over mixed noises to the total thermoelectrical ac-conductance. Outof-equilibrium, we have an additional term in the relation connecting the mixed noise and the thermoelectrical acconductance, which however vanishes at large frequency as shown in the next section. C. Numerical comparison between ac-conductances and noises We have seen in the previous subsections that the FDT holds out-of-equilibrium for the electrical ac-conductance provided that the sum over reservoirs is taken, but not for the thermoelectrical ac-conductance since an additional term is present. However, in some situations, the two thereafter relations are notwithstanding verified 4 ωG 0 α (ω) = S 00 αα (-ω) -S 00 αα (ω) , (44) 4 ωG 1 α (ω) = Re S 01 αα (-ω) -S 10 αα (ω) . (45) To discuss that point, we plot in Fig. 5 the acconductances and the noises as a function of frequency assuming a symmetrical potential profile through the junction, for which we have shown in Fig. 4 that G 0 L (ω) = G 0 R (ω) and G 1 L (ω) = -G 1 R (ω). Playing with the values of temperature, frequency, voltage and coupling strength, we notice that Eqs. (44) and (45) do not apply when these energies are of the same order of magnitude (compare the purple and blue curves and the green and red curves in the graphs on the right side of Fig. 5). 
On the contrary, when the frequency is the highest energy, all the graphs of Fig. 5 show the remarkable feature that Eqs. ( 44) and ( 45) are verified, since the purple and blue lines coincide in the upper graphs and the green and red lines coincide in the bottom graphs at high frequency. This allows to conclude that the FDT involving a single reservoir is verified for both electrical and thermoelectrical ac-conductances in that limit. For completeness, we want to underline that the results presented here are obtained in case of non-interacting QD, and could be altered in the presence of electron-phonon interaction 64 or electron-electron interaction 13 . VIII. CONCLUSION The calculation of electrical and thermoelectrical acconductances associated to a QD and the comparison to finite-frequency noises have allowed to check whether the FDT holds out-of-equilibrium. We have established a generalized FDT for electrical ac-conductance which requires a summation over reservoirs, and we have shown that an additional term (which cancels at equilibrium) is present in the relation linking the thermoelectrical acconductance and the mixed noise. With the help of nu-merical calculation, we have shown that the standard FDT, i.e. without the sum over reservoirs, is indeed valid out-of-equilibrium for both electrical and thermoelectrical ac-conductances provided that the frequency is higher than the other characteristic energies of the system. FIG. 1 : 1 FIG. 1: Schematic picture of the QD connected to left and right reservoirs with a gate modulated voltage. The green arrows indicate the convention chosen for the definition of left and right currents. Notation Designation ε dc = eV dc dc-gate voltage amplitude εac = eVac ac-gate voltage amplitude ω Gate voltage modulation frequency Γ Coupling strength between the QD and the leads µL, µR Chemical potentials of the left and right leads eV = µL -µR Voltage gradient between the left and right leads kBT Temperature of the leads FIG. 3 : 3 FIG. 3: Real part and imaginary part of the electrical transadmittance Y 0 L,R (ω) and the thermoelectric trans-admittance Y 1 L,R (ω) as a function of voltage eV / ω (horizontal axis) and dc-gate voltage ε dc / ω (vertical axis), at kBT / ω = 0.5 and Γ/ ω = 0.5. The amplitudes vary from negative values (black and purple colors) to positive values (orange and white colors). The dashed blue lines indicated the place where Re{Y η L (ω)} = Re{Y η R (ω)} or Im{Y η L (ω)} = Im{Y η R (ω)}, and the red ones the place where Re{Y η L (ω)} = -Re{Y η R (ω)} or Im{Y η L (ω)} = -Im{Y η R (ω)}. The right reservoir is set to the ground: µR = 0. FIG. 4 : 4 FIG. 4: Ac-conductances spectrum for kBT /eV = 0.1, Γ/eV = 0.1, and varying values of the dc-gate energy ε dc from 0 to eV /2. The blue and red curved lines corresponds to a symmetrical potential profile with the value ε dc = eV /2, for which we have G 0 L (ω) = G 0 R (ω) (blue curved lines) and G 1 L (ω) = -G 1 R (ω) (red curved lines). The right reservoir is set to the ground: µR = 0. G 0 α (ω) is in units of e 2 /h, the quantum of conductance, and G 1 α (ω) is in units of e 2 V /h. FIG. 5 : 5 FIG. 5: Comparison between ac-conductances and noises for two distinct couple of values {kBT, Γ} at ε dc = eV /2 (symmetrical potential profile). The blue curves stand for the electrical ac-conductance G 0 L (ω), and the red curves for the thermoelectrical ac-conductance G 1 L (ω). The purple and magenta curves stand for [S 00 LL (-ω) -S 00 LL (ω)]/4 ω, and [S 00 LR (-ω) -S 00 RL (ω)]/4 ω. 
Note that in the upper left graph, the purple curve is not visible since it coincides exactly with the blue curve. The green and orange curves stand for Re S 01 LL (-ω) -S 10 LL (ω) /4 ω, and Re S 01 LR (-ω) -S 10 RL (ω) /4 ω. G 0 L (ω) is in units of e 2 /h, the quantum of conductance, and G 1 L (ω) is in units of e 2 V /h. TABLE I : I List of characteristic energies. is the Bose-Einstein distribution function. This last relation corresponds to the standard FDT.The mixed noises S 01 αβ (ω) and S 10 αβ (ω) were calculated in Ref. 63 for a similar system in the absence of gatevoltage modulation (i.e., for ε ac = 0). Considering the double sum over reservoirs, its real part reads as Re B. Direct comparison between thermoelectrical ac-conductance and mixed noise αβ S 10 αβ (-ω) -S 01 βα Acknowledgments. The author wants to acknowledge R. Deblock, R. Delagrange, P. Eyméoud, J. Gabelli, P. Joyez, M. Lavagna, T. Martin, F. Michelini and R. Zamoum for discussions on time-dependent transport and finite-frequency noise.
01681604
en
[ "sde", "sdu.stu", "sdu.envi" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01681604/file/Dehghani-et-al_2017HAL.pdf
Mehdi Dehghani Morteza Djamali Emmanuel Gandouin Hossein Akhani email: [email protected] A pollen rain-vegetation study along a 3600 m mountain-desert transect in the Irano-Turanian region; implications for the reliability of some pollen ratios as moisture indicators Keywords: Chenopodiaceae/Artemisia pollen ratio Desert steppe, Iranian flora, Montane steppes, Pollen assemblage, Vegetation A set of 42 modern pollen samples has been investigated to determine the relationship between pollen percent-ages and vegetation composition along a 3600 m elevational mountain-desert transect in central Iran. The studied transect shows three main vegetation groups including a "high altitude zone" (embracing subnival, alpine and montane subzones), a "xerophytic desert steppe zone", and a "halophytic zone", correlated with the groups defined in Correspondence Analysis (CA) of vegetation dataset and Principal Component Analysis (PCA) of pollen dataset. The subnival subzone is characterized by high values of Asteraceae, Brassicaceae and Cyperaceae pollen, while alpine and montane subzones are characterized by the highest pollen diversity with a predominance of grass pollen along the whole transect. The halophytic zone is dominated by Chenopodiaceae pollen while xerophytic desert steppe shows a high occurrence of Artemisia pollen. The comparison of pollen percentages with the corresponding vegetation plots shows a high congruency between pollen and vegetation compositions of alpine subzone and undisturbed xerophytic desert steppe but a weak correlation between those of the subnival and montane subzones and human affected xerophytic desert steppe. In addition, pollen representation of frequently encountered or important plant taxa in the Irano-Turanian region is provided. The widely used Chenopodiaceae/ Artemisia = C/A pollen ratio, as an aridity index, is shown to be unreliable in the Irano-Turanian steppes. Our results suggest that a combined graph of all four indices (C/A pollen ratio, Poaceae/Artemisia = P/A, Poaceae/ Chenopodiaceae = P/C and (A + C)/P ratios) can represent the vegetation and climate relationships more accurately. In conclusion, surface pollen composition can reflect the actual vegetation zones/subzones in Irano-Turanian steppes. Together, P/A and P/C ratios are more confident to differentiate mesic from arid steppes, while C/A and (A + C)/P ratios provide a useful tool to differentiate halophytic desert vegetation developed in endorheic depressions with saline soils from xerophytic desert steppe developed in well-drained soils. Introduction A reliable regional calibration scheme of different plant taxa contributing in modern pollen assemblages is essential to correctly interpret fossil pollen spectra. 
In spite of expanding fossil pollen studies in the Middle Eastern countries since sixties [START_REF] Wright | Pleistocene glaciation in Kurdistan[END_REF][START_REF] Van Zeist | Late Quaternary vegetation history of western Iran[END_REF][START_REF] Van Zeist | Palynological investigations in western Iran[END_REF][START_REF] El-Moslimany | History of Climate and Vegetation in the Eastern Mediterranean and the Middle East from the Pleniglacial to the Mid-Holocene[END_REF][START_REF] Bottema | A late Quaternary pollen diagram from Lake Urmia (northwestern Iran)[END_REF][START_REF] Bottema | Anthropogenic indicators in the pollen diagrams of the Eastern Mediterranean[END_REF][START_REF] Tzedakis | Vegetation change through glacial-interglacial cycles: a long pollen sequence perspective[END_REF][START_REF] Ramezani | The late-Holocene vegetation history of the Central Caspian (Hyrcanian) forests of northern Iran[END_REF][START_REF] Djamali | An Upper Pleistocene long pollen record from the Near East, the 100 m-long sequence of Lake Urmia, NW Iran[END_REF]Djamali et al., , 2009b[START_REF] Djamali | Olive cultivation in the heart of the Persian Achaemenid Empire: new insights into agricultural practices and environmental changes reflected in a late Holocene pollen record from Lake Parishan, SW Iran[END_REF], modern surface pollen studies are quite scarce in the region [START_REF] Mcandrews | Modern pollen rain in western Iran, and its relation to plant geography and Quaternary vegetational history[END_REF][START_REF] El-Moslimany | Ecological significance of common nonarboreal pollen: exam-ples from drylands of the Middle East[END_REF]; [START_REF] Davies | Modern pollen precipitation from an elevational transect in central Jordan and its relationship to vegetation[END_REF]. In Iran, only four studies on modern pollen vegetation calibration have been published, two in the Euro-Siberian and two in the Irano-Turanian floristic regions. In the Euro-Siberian floristic region in northern Iran, comparison of surface pollen percentages with vegetation composition along a forest steppe transect in Golestan National Park in north-eastern Iran using descriptive and numerical approaches, helps to distinguish different vegetation types, with the worst correspondence found in transitional zones or ecotones (Djamali et al., 2009a). [START_REF] Ramezani | Pollen-vegetation relationships in the central Caspian (Hyrcanian) forests of northern Iran[END_REF] studied a 20 km long altitudinal transect in the South Caspian region which included only forest communities, providing some data on pollen production and dispersal of common place trees in central Hyrcanian forest in northern Iran. In the Irano-Turanian region a set of sixty samples along four transects in the Zagros Mountains of western Iran was studied by [START_REF] Mcandrews | Modern pollen rain in western Iran, and its relation to plant geography and Quaternary vegetational history[END_REF]. This was a first attempt to interpret the fossil pollen data obtained from a few sediment cores from lakes Zaribar (also Zeribar) and Mirabad. They found that pollen rain assemblages taken from Mesopotamian steppes and piedmont pseudo-savannas contained high amounts of Plantago pollen, while those of oak woodlands and plateau steppes were characterized by high percentages of Quercus, and Artemisia, and chenopod pollen, respectively. 
However, their study suffered from two main limitations including lacking of numerical analysis and poor floristic data which was available at that time. Another study was done 15 years later in an arid region in northeastern corner of Central Iran, as a complementary study to palynological investigation of a late Holocene alluvial sediment core by [START_REF] Moore | Pollen studies in dry environments[END_REF]. They distinguished five vegetation types on limestone outcrops, Ephedra zone, Zygophyllum zone, saline areas and disturbed areas in which 14 plots were taken subjectively, rather than a real transect along an ecological gradient. They showed that pollen of Artemisia and Chenopodiaceae dominates the pollen percentages and that C/A ratio is variable in different vegetation types with the highest value in saline areas with dominance of members of Chenopodiaceae. They also noted that arboral pollen is very scarce in the region with interesting presence of some tree pollen coming from south Caspian temperate forests by long distance dispersal. Pollen ratios of Chenopodiaceae, Artemisia and Poaceae have been widely used as ecological indices in palynological studies [START_REF] El-Moslimany | Ecological significance of common nonarboreal pollen: exam-ples from drylands of the Middle East[END_REF][START_REF] Van Campo | Pollen-and diatom-inferred climatic and hydrological changes in Sumxi Co Basin (Western Tibet) since 13,000 yr BP[END_REF]Davis and Fall, 2001;[START_REF] Djamali | An Upper Pleistocene long pollen record from the Near East, the 100 m-long sequence of Lake Urmia, NW Iran[END_REF][START_REF] Zhao | Sensitive response of desert vegetation to moisture change based on a near-annual resolution pollen record from Gahai Lake in the Qaidam Basin, northwest China[END_REF]. The Chenopodiaceae/Artemisia (C/A) pollen ratio as an aridity index in open vegetation types [START_REF] El-Moslimany | Ecological significance of common nonarboreal pollen: exam-ples from drylands of the Middle East[END_REF] has received the highest attention by many authors, although its reliability in different environments has rarely been evaluated [START_REF] Djamali | An Upper Pleistocene long pollen record from the Near East, the 100 m-long sequence of Lake Urmia, NW Iran[END_REF][START_REF] Zhao | Application and limitations of the Chenopodiaceae/Artemisia pollen ratio in arid and semiarid China[END_REF]. A semiquantitative aridity index ((A + C)/P) has also been proposed to determine moisture variability and delimitation of steppe from desert steppe. This palynological index is suggested to be supported as a trustworthy moisture index by sedimentological data and palaeoclimate reconstructions at millennial scale [START_REF] Fowell | Mid to late Holocene climate evolution of the Lake Telmen Basin, North Central Mongolia, based on palynological data[END_REF]. Presenting a detailed study on quantitative pollen-vegetation relationships in the Irano-Turanian floristic region is the subject matter of this paper. This region forms one of the richest floristic regions of the world with unique and remarkably diversified steppe vegetation covering an area of N6 million km 2 , housing more than 17,000 species in the Middle East and Central Asia [START_REF] Davis | Centres of Plant Diversity[END_REF]. 
Here, we will study both vegetation and modern pollen assemblages along an exceptionally long mountain-desert transect of 3600 m elevation range and N150 km horizontal distance from Alborz mountains in northern Iran to the central Iranian deserts crossing a range of vegetation and bioclimatic zones (see Fig. 1B) in the heart of the Irano-Turanian region (sub-region IT2 sensu [START_REF] White | Phytogeographical links between Africa and Southwest Asia[END_REF]. We aim at: 1) Determining pollen contribution of different Irano-Turanian floristic elements and vegetation communities in surface samples to provide a calibration scheme for more accurate interpretation of fossil pollen diagrams in the region. 2) Verifying the ecological significance of Chenopodiaceae, Artemisia and Poaceae pollen values and ratios which are used by palynologists as aridity indices [START_REF] El-Moslimany | History of Climate and Vegetation in the Eastern Mediterranean and the Middle East from the Pleniglacial to the Mid-Holocene[END_REF][START_REF] El-Moslimany | The late Pleistocene climates of the Lake Zeribar region (Kurdi-stan, western Iran) deduced from the ecology and pollen production of nonarboreal vegetation[END_REF][START_REF] El-Moslimany | Ecological significance of common nonarboreal pollen: exam-ples from drylands of the Middle East[END_REF][START_REF] Fowell | Mid to late Holocene climate evolution of the Lake Telmen Basin, North Central Mongolia, based on palynological data[END_REF][START_REF] Djamali | An Upper Pleistocene long pollen record from the Near East, the 100 m-long sequence of Lake Urmia, NW Iran[END_REF][START_REF] Zhao | Application and limitations of the Chenopodiaceae/Artemisia pollen ratio in arid and semiarid China[END_REF]. Study area Physical setting Our study area (Fig. 1) stretches from subnival zone close to upper vegetation limit of Damavand Volcano (35° 56′ 0.60″N, 52° 6′ 21″E, 4327 m) to Salt Playa Lake (Daryacheye Namak) in western Kavir National Park in central Iran (34° 40′ 42″N, 52° 4′ 21″E, 835 m). Damavand is a potentially active stratovolcano, located in the Central Alborz Mountains, in 60 km north-east of Tehran. The Damavand peak with 5671 m above sea level elevation is the highest summit in the Middle East and also the highest volcano in Asia separating interior Iranian deserts from the Caspian Sea. Alborz Mountain Range, 650 km long, separates the south Caspian lowland (down to 26 m below sea level) with montane temperate deciduous forests on the northern flanks [START_REF] Akhani | Plant biodiversity of Hyrcanian relict forests, N Iran: an overview of the flora, vegetation, palaeoecology and conservation[END_REF] from Irano-Turanian mountain steppes on the southern flanks [START_REF] Akhani | Vegetation patterns of the Irano-Turanian steppe along a 3,000 m altitudinal gradient in the Alborz mountains of northern Iran[END_REF]. The central Iranian plateau contains the Kavir National Park/Kavir Protected Area with a surface area of 670,000 ha, located in the eastern edge of the Salt Playa Lake, which has been protected since 1964 [START_REF] Rechinger | Plants of the Kavir Protected Region, Iran. Iran[END_REF][START_REF] Firouz | Environmental and nature conservation in Iran[END_REF]. 
About 75% of this area which is part of an elevated plateau of 650 to 850 m a.s.l., is almost equally composed of interfingering peneplain surface, saline soils, and salt-encrusted depressions [START_REF] Krinsley | Climatic and anthropogenic control of surface pollen assemblages in East Asian steppes[END_REF]. While the flanks of Damavand are mainly composed of trachyandesitic lava flows and pyroclastic material with nutrient-rich soils, the Alborz Mountains are of various geological compositions from Tertiary volcaniclastic sediments to Mesozoic limestone and marl formations [START_REF] Allen | Accommodation of late Cenozoic oblique shortening in the Alborz range, northern Iran[END_REF][START_REF] Davidson | The geology of Damavand volcano, Alborz Mountains, northern Iran[END_REF]. According to regional weather stations along the selected transect (Rineh, Larijan, Abali, Damavand, Garmsar and Siah Kuh), mean annual rainfall decreases steadily towards low elevations, ranging from 548, 538, 373, 117 to 68 mm/year respectively; while mean annual temperature increases in the same direction from 9.89, 9, 13, 19 and 19 accordingly. The meteorological stations along the studied transect as well as the climate diagrams showing the length of dry and humid periods are presented in Fig. 1. There is no meteorological data available for high elevations of our sampling sites but obviously the pattern of temperature and precipitation fluctuations is consistent along the whole transect. Similarly, [START_REF] Khalili | Precipitation patterns of central Elburz[END_REF] showed that high altitudes of Alborz Mountain Range show strong continentality and that the mean annual precipitation increases with altitude in southern slopes of Alborz. In a bioclimatic point of view, our study transect crosses the Mediterranean Pluviseasonal-Continental, Mediterranean Xeric-Continental, and Mediterranean Desertic-Continental moving from north to south according to Global Bioclimatic Classification System [START_REF] Djamali | Application of the global bioclimatic classification to Iran: implications for understand-ing the modern vegetation and biogeography[END_REF]. The bioclimate is thus continental everywhere with increasing aridity and longer dry seasons southward. Material and methods Vegetation measurements and surface pollen sampling Field works From August to November 2014, an altitudinal study transect was defined along N150 km aerial distance from Damavand Volcano to Salt Playa Lake in central Iran across which the vegetation composition was quantified and the surface pollen samples were collected within 42 plots (Figs. 1A, B andD, and2). All vegetation plots were of the same size (20 × 20 m) except for samples 14 (10 × 10), 23 (10 × 10), 24 (10 × 10 m) and 35 (10 × 30 m) in which the physical conditions did not permit following exactly the protocol (Table 1). Since the selected transect was cut by roads and residential areas, sampling was done at 50-2000 m distances from the road side to avoid disturbances on the sampled vegetation plots. Vegetation measurements and zonation Vegetation measurements were performed following the methodology of Zürich-Montpellier school of plant sociology [START_REF] Braun-Blanquet | Pflanzensoziologie. Grundzüge der Vegetationskunde[END_REF] along elevational gradient from 4327 m to 765 m a.s.l. 
For each vegetation plot all possible vegetation data were gathered and all taxa were identified based on regional references especially Flora Iranica (Rechinger, 1963(Rechinger, -2015)). The nomenclature follows mostly the Flora Iranica with updates of family Chenopodiaceae in [START_REF] Akhani | Diversification of the old world Salsoleae s.l. (Chenopodiaceae): molecular phylogenetic analysis of nuclear and chloroplast data sets and a revised classification[END_REF]. All 42 plots data were stored in TURBOVEG 2.15 for windows database [START_REF] Hennekens | TURBOVEG, a comprehensive data base management system for vegetation data[END_REF]. The final table from TURBOVEG was imported to JUICE 7.0.84 software [START_REF] Tichý | JUICE, software for vegetation classification[END_REF]. Using TWINSPAN method [START_REF] Hill | TWINSPAN: a FORTRAN program for arranging multivariate data in an ordered two-way table by classification of the individuals and attributes[END_REF] the halophytic communities are separated from the rest of plant communities by cut level 2. The montane to subnival main vegetation groups were subsequently separated from desert steppe plots by cut level 3. We continued clustering up to 9 cut levels and finally determined 5 vegetation zones. Furthermore, in each zone, communities/community groups were separated manually based on dominated and/or characteristic species. Because of the absence of a reliable syntaxonomical system and the fact that our plots could not cover all plant communities existing from the top of Damavand to the lowland desert, we did not classify all plant communities following Braun-Blanquet's nomenclature, but rather we named each community or community group according to representative dominant and physiognomically important species. Surface pollen sampling Surface pollen samples were taken randomly at least from 20 points within the vegetation plots, either from moss polsters, plant detritus or soil samples according to the availability of each material. When all materials were present, the priority was given to moss polsters, plant detritus and soil in order of importance. The collected modern pollen samples were then thoroughly mixed and kept in paper envelopes for subsequent laboratory treatments. Pollen analysis of surface samples Pollen sample preparation followed the modified procedure outlined by [START_REF] Faegri | Textbook of Pollen Analysis[END_REF] and that of [START_REF] Moore | Pollen Analysis[END_REF]. The samples were treated in 10% NaOH for an hour followed by multiple washing in water, sieving with a 160 µm mesh and treating with 37% HCl for 24 h in room temperature. Subsequently, samples were treated with 40% HF followed by retreatment with 37% HCl. Finally the samples were acetolysed and sieved at 10 µm mesh filter before being mounted between slides and coverslips in glycerol as a mounting medium. Pollen grains were identified and counted using a Nikon Eclipse E200 microscope at a magnification of × 500. On the average, 490 pollen grains per sample were tallied including the aquatic plants encountered in a few sampling quadrats (samples 4, 9, 12, 16, 19-20, 25-27). 
Pollen identifications were performed using the European and Mediterranean pollen collections as well as the "Middle East Pollen Reference Collection" (MEPRC) hosted at Institut Méditerranéen de Biodiversité et d'Ecologie with occasional use of pollen atlases of Europe and North Africa [START_REF] Moore | Pollen Analysis[END_REF][START_REF] Reille | Pollen et Spores d'Europe et d'Afrique du Nord[END_REF][START_REF] Reille | Pollen et Spores d'Europe et d'Afrique du Nord[END_REF][START_REF] Reille | Pollen et Spores d'Europe et d'Afrique du Nord[END_REF][START_REF] Beug | Leitfaden der Pollenbestimmung für Mitteleuropa und Angrenzende Gebiete[END_REF]. Pollen percentages of each pollen taxon were then calculated based on the total pollen sum of terrestrial and aquatic taxa. Undetermined and damaged pollen and fern spores were excluded from the total pollen sum. A pollen diagram was then created (Fig. 4) based on calculated pollen percentages in TILIA-TGView software [START_REF] Grimm | TILIA and TGView software. Ver 2.0.2[END_REF][START_REF] Grimm | TILIA and TGView software. Ver 2.0.2[END_REF]. Multivariate statistical analyses We applied a number of multivariate analyses to two matrices of pollen and vegetation data. First and second runs of Correspondence Analysis (CA) on vegetation data revealed that samples taken in saline soils (plots D33-37) as well as plot number 14, sampled from an abandoned sand exploitation area, act as outliers and mask proper clustering of other plots (Fig. 1S). However, Principal Component Analysis (PCA) on pollen data is not congruent with CA graph (Fig. 2S). Therefore, we decided to remove plot numbers D33-37, and D14 from both pollen and vegetation datasets. Finally, the matrix of pollen percentages versus vegetation plots was composed of 61 pollen types and 36 samples. Rare taxa (occur in only one sample) were removed from the analyses. The matrix was square-root transformed in order to stabilize the variance. The vegetation matrix (72 plant taxa and 36 samples) was composed of plant abundances of Zürich-Montpellier vegetation-scale technique against plot numbers. In our multivariate analyses of vegetation data, we deliberately deleted most species including those with low abundances and those with occurrence in a limited number of plots. In our data matrix we used the same values recorded for Braun-Blanquet cover abundance scales (1, 2, 3, 4 and 5). The value 0 was arbitrarily given to species with less than 1% contribution in the vegetation, which are traditionally shown by a "+" instead of a figure. PCA and CA were applied to 61 (variables) × 36 (samples) matrix of pollen and 72 (variables) × 36 (samples) matrix of vegetation data, respectively, using the "Ade-4" and "factoextra" packages from the R software version 3.2.2 (R Development Core Team, 2012). Co-Inertia Analysis (CoIA) [START_REF] Dolédec | Co-inertia analysis: an alternative method for studying spe-ciesenvironment relationships[END_REF][START_REF] Dray | Co-inertia analysis and the linking of ecological data tables[END_REF] was performed with the same software using Ade-4 package. CoIA allows studying the common structure of a pair of data tables and to measure the adequacy between two data sets. CoIA is very flexible and is suitable for quantitative and/or qualitative or fuzzy environmental variables [START_REF] Dray | Co-inertia analysis and the linking of ecological data tables[END_REF]. 
A Monte Carlo permutation test, where the rows of one matrix are randomly permuted followed by a recomputation of the total inertia [START_REF] Thioulouse | The use of permutation tests in co-inertia analysis: application to the study of nematode-soil relationships[END_REF] was used to check the significance of the co-structure of this CoIA. Results and discussion Vegetation communities and their floristic composition A total of 312 plant taxa were encountered and identified (33 species only to generic level) in the 42 vegetation plots along the selected transect. The TWINSPAN clustering resulted in three main vegetation groups including a "high altitude zone" from 1700 to 4400 m a.s.l. (zone A), a "xerophytic desert steppe" from 770 to 1700 m (zone B) and finally a "halophytic zone" from 770 to 800 m (zone C) (Table 1). Zone A: High altitude zone. This zone includes subnival (plots 1-4, Fig. 2A), alpine (plots 5-18, Fig. 2B) and montane subzones (plots 19-22) and is further divided into 13 communities or 7 community groups: A1) This includes subnival communities (4327-3974 m a.s.l., D1-4) starting from somewhere close to upper vegetation line of Damavand volcano and contains two communities. Draba siliquosa-Erysimum caespitosum community (A1.1) occupies poor soils largely covered by scree, gravel and rocky substrate. By decreasing altitude, Carex pseudofoetida-Astragalus macrosemius community (A1.2) occurs on slightly developed soil layer. These subnival communities are equivalent to the association Dracocephaletum aucheri described from the area [START_REF] Noroozi | Phytosociology and ecology of the highalpine to subnival scree vegetation of N and NW Iran (Alborz and Azerbaijan Mts[END_REF]. A2) The thorny cushion form alpine community group (2970-3613 m. a.s.l., D5-10) is dominated largely by Onobrychis cornuta and is divided into two communities of Cousinia harazensis and Astragalus ochroleucus. A3) The six plots from alpine zone form a group of communities of perennial grasses and perennial herbs with few thorny species on well-developed soil layer containing higher moisture (2437-2722 m, D11-13, 15). This zone is subjected to intensive grazing and is locally harvested for fodder plants. Zone B: Xerophytic desert and semi-desert steppe zone. This zone includes three main vegetation types. The first is the riverine vegetation on saline soils with high water table dominated by Tamarix species (B1, B2, plots D25 and D30). These belong to the Irano-Turanian class Tamaricetea arceuthoidis [START_REF] Akhani | The Tamaricetea arceuthoidis: a new class for the continental riparian thickets of the Middle East, Central Asia and the subarid regions of the Lower Volga valley[END_REF]. The second and third zones comprised characteristic desert and semi-desert Artemisia steppe, in which Artemisia aucheri and Artemisia sp. occur in higher elevations (B3, mostly in 8541520 m, plots D24, D26-29, D31) but Artemisia inculta (formerly known in literatures as Artemisia sieberi) dominates extreme xerophytic desert steppes of interior Iran (B4, 767-1040 m, plots D37-41, Fig. 2C). The soil type is a major determining factor in formation of plant communities in this area. Kaviria aucheri dominates in marly hills (B3.5, plot D23), Pteropyrum aucheri occurs in dry river bed (B4.1) and Xylosalsola richteri on fixed sandy soils (B4.2). Zone C: Halophytic communities. The halophytic communities occur in two parts of our transect. 
One is located in a playa near Garmsar and the second near the margin of Salt Playa Lake. The four plots (D33-36, elevations 765-787 m, Fig. 2D) are sampled from hypersaline soils in which Halocnemum strobilaceum is present in very species poor communities (species richness of only 2-4). Three community types (C1, C2, and C3) differ by their local habitat changes favoured by species such as Phragmites australis, Halostachys belangeriana, Alhagi maurorum, Tamarix androsovii and Tamarix pycnocarpa. Correspondence Analysis of vegetation data Fig. 3 illustrates the CA scatter plots of simplified vegetation plots. First tests of CA revealed that samples taken in saline soils (plots D33-37) act as outliers in CA scatter plots due to high coordination among them masking the distinctive patterns among other communities, almost all developed on well drained soils. This group of plots (removed from the CA) encompasses predominantly halophyte taxa of Zone C, described above (Table 1). The grouping of the remaining plots resulted from CA analysis of simplified vegetation data set and TWINSPAN analysis of complete data set largely matched each other (Fig. 3, Table 1). By removing the outliers from the vegetation dataset, the resulted CA diagram displays two main groups corresponding to Zones A, and B (Table 1). Three subzones including subnival, alpine and montane groups are less clearly distinguished in CA diagram (Fig. 3). Zone A Surface pollen assemblages Pollen percentage diagram Altogether 80 pollen types were encountered in the 42 pollen samples of which only 65 types were depicted in a summarized percentage diagram (Fig. 4). Based on CONISS analysis of pollen percentages verified by visual inspection of the ecologically important taxa, five Pollen Assemblage Zones (PAZ) were recognized: Pollen assemblage zone A (plots 1-3). This pollen zone, encompassing samples taken from high elevations of subnival zone (N4100 m; Fig. 1D) is characterized by high values of Achillea/ Matricaria-type pollen (up to 65%). These high values are not very surprising as Achillea aucheri is the dominant plant in the vegetation community around plots 1 and 2 (Table 1, Fig. 2A). Pollen percentages of Artemisia, Poaceae, and Amaranthaceae/Chenopodiaceae remain low throughout the zone except for spectrum 3 with values of up to 43.5% of Artemisia have been encountered, produced by Artemisia melanolepis (Table 1). Other noticeable pollen types in this narrow zone especially in spectrum 3 belong to Cyperaceae (Carex pseudofoetida) and Brassicaceae (Draba siliquosa, Erysimum caespitosum) (Table 1). There is no significant amount of tree pollen including the cultivated trees with the exception of Pinus pollen which keeps almost the same values all along the transect. In summary, in this zone only few taxa are present but with very sporadically high pollen values with herbaceous pollen predominating the spectra. Pollen assemblage zone B (plots 4-21). This pollen zone constitutes the widest segment of the transect corresponding to alpine subzone (4100-2500 m; Fig. 1D,Table 1) and montane subzone (2500-1800 m; Fig. 1D). This PAZ remarkably displays the most diversified pollen assemblage along the whole transect. The most significant feature of this zone is the predominance of grass pollen (Poaceae) with values Fig. 5. PCA scatter plots (pollen data) for both samples and the most contributing pollen taxa into the PCA axis 1 and PCA axis 2. PCA axes 1 and 2 explain respectively 16.4% and 9.5% of the total variance. 
ranging from 9.2% to 68.8%. Family-level pollen types of Fabaceae, Caryophyllaceae, Asteraceae-Cichorieae, Apiaceae, and Lamiaceae (Menthatype) and to some extent Polygonaceae (Polygonum aviculare-type) are also well represented with highest values at the transition of alpine to montane Irano-Turanian zones (~2500 m). The dung-associated spores of Sporormiella show a constant presence with their highest values all along the zone suggesting a strong grazing pressure. Once, the cultivated/cultivationassociated plants (e.g. Cerealia-type and Centaurea solstitialis-type) and ruderal species (Plantago, Rumex, Cichorieae, and Euphorbia) are also taken into account, the whole zone indicates the strong human activities in the form of agro-pastoralism. This agro-pastoral pressure is more visible in lower part of the zone corresponding to montane Irano-Turanian subzone (2500-1800 m). To the lower part of the zone we also note the increasing values of cultivated trees (Juglans, Platanus, Pinus, and possibly Cupressus/Thuja (Cupressaceae)) showing the proximity of permanent habitations in villages and small towns and artificial plantations in nearby areas. Cousinia pollen is also well present in the zone but is more frequent in the spectra corresponding to upper alpine subzone. Of particular interest is the presence of Eremurus-type (or Sisyrinchium-type) pollen which was already present in subnival (samples D1-3) but reappears in the lower alpine and montane Irano-Turanian sub-zone (samples D15-22). The absence of producers of this pollen in our plots shows that this pollen type has a good production and dispersal in these two vegetation belts (most probably belonging to different species) since some Eremurus species are widely distributed in southern slopes of Alborz mountains [START_REF] Wendelbo | Liliaceae I[END_REF]. Exceptionally high values of Plumbaginaceae and Mentha-type in spectrum 18 as well as C. solstitialis-type in spectrum 21 can be attributed to high values of Acantholimon erinaceum (~30%), Thymus kotschyanus (~20%), and Centaurea virgata (~30%), respectively, in the vegetation community surrounding them. The latter comes from an abandoned farmland. Samples 19 and 20 are taken from a zone, well-known for walnut cultivation, namely Damavand, Tehran. Pollen assemblage zone C (plots 22-32). This pollen zone covers the submontane Irano-Turanian xerophytic steppe (Fig. 1D). In this PAZ, grass pollen decreases drastically down to 0.8% (sample 26) while Artemisia and Amaranthaceae/Chenopodiaceae pollen percentages increase so that one or the other takes advantage in different spectra depending on their contributions in corresponding sampled vegetations. In general, this zone is characterized by dominating presence of Artemisia aucheri and different species of Chenopodiaceae (e.g. Kaviria aucheri, Haloxylon ammodendron, Noaea mucronata, Salsola rosmarinus, Halothamnus subaphyllus and so on) in the sampling points (Table 1). There are also high values of Filago/Senecio-type pollen and a significant presence of Apiaceae. Dominance of plants belonging to Asteraceae subfamily Asteroideae (e.g. Senecio glaucus L. and Pulicaria gnaphaloides) explains the increased values of Filago/Senecio-type pollen. Sporormiella spores exist in the whole zone although with lower values compared to PAZ-B, showing the grazing activities throughout the zone. Lower values of Cerealia-type pollen suggest that pastoral activities are more important than cereal cultivation. 
Cultivated trees (Juglans, Platanus, and possibly Cupressus/Thuja) also indicates the proximity of urban zones and habitations. Exceptionally high pollen percentages of Ephedra distachyatype (sample 24) and Tamarix (samples 25 and 30) can be explained by presence of Ephedra and abundant shrubs of Tamarix, respectively, in the surrounding vegetation communities (Table 1). Pollen assemblage zone D (plots 33-37). This pollen zone falls within halophytic vegetation zone but inside a relatively endorheic depression (Fig. 1D). Chenopodiaceae pollen predominates the pollen spectra of PAZ-D at expense of Artemisia and grass pollen. This zone has the least pollen diversity over the study transect. Predominance of Chenopodiaceae can be explained by dominance and high representation of species such as Halocnemum strobilaceum, Halostachys belangeriana, and Salsola rosmarinus developed on saline soils and mud flats (Fig. 2D). Pollen assemblage zone E (plots 38-42). Like PAZ-D, this zone falls within xerophytic desert steppe zone but on well drained soils (Fig. 1D). Chenopodiaceae pollen falls instantly from 94.7% down to 4.9% while Artemisia pollen increases abruptly up to 71.8% except for the last spectrum (sample 42) in which high amount of Ephedra pollen illustratively compresses the frequency of Artemisia pollen since the sample was taken in an Ephedra strobilacea community. As for two previous PAZs, grass pollen remains low while the pollen percentages of Asteroideae and Brassicaceae increase. High amount of Calligonum-type pollen in spectrum 38 reflects the major role of Pteropyrum aucheri in the vegetation community inside and outside the sampled plot (Table 1). Considerable amounts of Filago/Senecio pollen type in this PAZ comes most likely from the annual desertic species of Senecio glaucus which grows in the region. Likewise, the PAZ-D, Sporormiella spores are almost absent from most of the spectra in this pollen assemblage zone indicating low grazing and fairly absence of livestock due to long term conservation and remoteness of the region. PCA of pollen data Fig. 5 presents the PCA scatter plots for both samples (plots) and the most contributing plant taxa into the PCA axis 1 and PCA axis 2, in which the contributions of axes 1 and 2 to the total variance are 16.4% and 9.5% respectively. PCA analysis of pollen data set, delimits four distinct groups. Group A (plots 1-3) includes subnival samples which are located in negative sides of both PCA axes (Fig. 5A, D1-D3). The most important pollen types in group A are Asteraceae and Cyperaceae (Table 1, Fig. 4). Group B (plots 4-18), with highest pollen diversity along the transect is located in negative side of PCA axis 1 and positive side of PCA axis 2 corresponding with well diversified mesic elevation steppes dominated by thorny cushion taxa and Poaceae. The most characteristic pollen types in forming group B are Poaceae, Caryophyllaceae, Lamiaceae, Fabaceae and Cichorieae, yet the role of Polygonum aviculare-type pollen is of secondary importance comparing the afore-mentioned taxa. Group C (plots 19-22) comprises four samples sitting in positive side of both PCA axes. This group of samples has been taken in vicinity of residential places and contains pollen of cultivated taxa or pollen indicating human activities including that of Juglans regia, Centaurea virgata (Centaurea solstitialis-type), Pinus, Euphorbia and Rosaceae. (Not all shown in Fig. 5). 
Pollen-vegetation relationships
Co-inertia analysis of pollen and vegetation data
Co-inertia analysis was carried out to reveal the degree of similarity between the vegetation composition data and the corresponding pollen percentages. Shorter arrows indicate a higher correlation between the vegetation composition and pollen data sets (Fig. 6). The best correlation between the pollen and vegetation data sets was found in plots 5-10, 12-13, 16-18, 21, 24, 38-39, and 42, whereas the weakest correlation was obtained in plots 1-4, 11, 15, 19-20, 22-23, 25-30 (Fig. 6). The weak correlation between the vegetation and pollen assemblages of plots 1-4 (subnival communities) is due to the low pollen production of many plant species in this zone and to a significant contribution of background long-distance transported pollen originating from a vast region. The vegetation at this altitude is mostly open and the area is exposed to strong multidirectional winds. With the exception of plots 11 and 15, dominated by Astragalus spp. and Chaerophyllum macrospermum (both extremely under-represented in pollen assemblages), the remaining alpine plots show a good correlation between vegetation and pollen assemblages. In contrast, most plots in the montane zone and at the higher elevations of the xerophytic desert steppe zone show a weak correlation between plant and pollen assemblages. This is mainly a reflection of human activities (e.g. mining, agriculture, animal husbandry and the planting of economic and ornamental plants) and of the higher vegetation diversity resulting from habitat heterogeneity. More detailed studies are necessary to relate the actual vegetation composition to its reflection in the pollen assemblages. At the lowest elevations of the xerophytic desert steppe zone, there is a relatively good correlation between plant and pollen assemblages (Figs. 1D and 6). As the samples from saline environments in the depressions were excluded from the analyses (see above), the co-inertia analysis could not be used to test the correlation between pollen assemblages and vegetation composition in this zone. The final results of the co-inertia analysis are based on maximally covariant co-inertia axes, which are derived from principal components analysis (PCA) or correspondence analysis (CA). Since the co-inertia axes in this paper were derived from the CA approach, the clustering of samples in the co-inertia diagram follows the same pattern as that of the CA in Fig. 3.
4.5.2. Pollen representation of plant taxa
4.5.2.1. Trees. Although the studied transect was almost free of trees and large shrubs within and around the quadrats (except for Tamarix, and also Juglans regia in the vicinity), arboreal pollen transported in by wind was often found, but rarely in significant amounts (Fig. 4). A well-represented case is Alnus pollen, coming from the Hyrcanian forests and indicating its high pollen production and dispersal (Djamali et al., 2009a).
Other well-dispersed arboreal pollen grains are those of Quercus, Carpinus, and Platanus. Tamarix seems to be a moderately dispersed, under-represented taxon, supplying only 6-13% of the pollen assemblages where it comprises about 50% of the coverage (plots 25 and 30). The anemophilous cultivated tree Juglans regia shows over-representation and effective dispersal. According to our findings, where the Juglans regia proportion in fossil pollen reaches 10%, one can conclude that walnut cultivation has taken place in the vicinity of the region, while extensive walnut cultivation can lead to a pollen representation of up to 48% in fossil pollen diagrams from the Irano-Turanian region (see plots 19, 20 and 23). Pinus pollen is found in almost all plots along the studied transect, but its frequency rises in the vicinity of cities and residential areas (Fig. 4). The application of our finding to the interpretation of fossil pollen spectra is that a representation of Pinus pollen from 6 to 11% indicates its occurrence in the original vegetation, whereas below 6% it more likely originates from remote dispersal. In spite of the absence of Pinus trees in the natural vegetation of Iran, some species (e.g. P. eldarica) are widely cultivated as ornamentals and for artificial forestation in most parts of Iran, in particular in residential areas on the southern and northern slopes of the Alborz mountains [START_REF] Djamali | Olive cultivation in the heart of the Persian Achaemenid Empire: new insights into agricultural practices and environmental changes reflected in a late Holocene pollen record from Lake Parishan, SW Iran[END_REF][START_REF] Akhani | Vegetation patterns of the Irano-Turanian steppe along a 3,000 m altitudinal gradient in the Alborz mountains of northern Iran[END_REF].
4.5.2.2. Shrubs. Ephedra shows by far the highest pollen production and dispersal among the shrubby species and was found in significant amounts in almost all plots (Fig. 4). This extreme pollen production and dispersal has previously been shown by several authors [START_REF] Welten | Über das glaziale und spätglaziale Vorkommen von Ephedra am nordwestlichen Alpenrand[END_REF][START_REF] Bortenschlager | Pollenanalytische Ergebnisse einer Firnprofiluntersuchung am Kesselwandferner (3240 m, Ötztal, Tirol)[END_REF][START_REF] Yuecong | Pollen indication to source plants in the eastern desert of China[END_REF][START_REF] Zhao | Modern pollen representation of source vegetation in the Qaidam Basin and surrounding mountains, north-eastern Tibetan Plateau[END_REF]. When Ephedra is present within the sampling site, its pollen dominates the pollen spectra (Fig. 4, plots 24 and 42). Prosopis farcta is a ruderal species and a good indicator of severely overgrazed areas of the xerophytic desert zone; it displays a moderate representation in the modern pollen rain. This pollen type, found exclusively in the xeric steppes, is encountered in small numbers in the absence of the parent plant (Fig. 4). When the plant is present in considerable amount inside or around the sampled plots (less than 5%), P. farcta pollen can reach up to 7.6% of the pollen sum (Fig. 4, plot 30). Lycium has about six species in Iran, of which only Lycium ruthenicum was found in our transect, inhabiting river beds, roadsides and well-drained to saline soils. Our pollen data suggest that Lycium pollen is poorly dispersed and relatively under-represented in the modern pollen rain. Lycium pollen is found only in plot 30, where it produces 1.1% of the total pollen while the parent plant comprises about 7% of the corresponding vegetation (Fig. 4, plot 30).
Pteropyrum species are desert shrubs often growing along seasonal rivers and water runnels in desert areas or on gypsum hills of the southern parts of Iran, showing a low to moderately low pollen representation. When Pteropyrum is present in the sampled vegetation, its pollen contributes substantially to the pollen sum (e.g. Fig. 4, plot 38); otherwise its proportion in the pollen assemblages is negligible (Fig. 4, compare plot 38 with the rest).
4.5.2.3. Herbs and dwarf shrubs. Entomophilous taxa of the highlands, including Apiaceae, Astragalus, Onobrychis, Thymus, Verbascum and Plumbaginaceae, which play fundamental roles in the high-elevation vegetation, contribute only meagre percentages to the modern pollen assemblages (see Fig. 4). The family Fabaceae, the largest family of the Iranian flora [START_REF] Akhani | Flora Iranica: facts and figures and a list of publications by KH Rechinger on Iran and adjacent areas[END_REF], is poorly represented both in the modern vegetation and in the modern pollen assemblages of the desert steppes along our transect (Fig. 4, plots 27-42). Fabaceae are, however, well represented in the vegetation of the montane to alpine steppes, where their role in the pollen rain is nevertheless highly insignificant (Fig. 4, e.g. plots 3-13 and 15-16). For instance, in several cases where Onobrychis cornuta forms more than 30% coverage in the vegetation, its contribution to the modern pollen assemblage varies only between zero and around 1% (Fig. 4, plots 6-9). In the case of Astragalus spp., even a coverage of up to 80% in the sampled vegetation is represented by very low pollen percentages (e.g. 0.2% in plot 11 and 9.5% in plot 15 (Table 1 and Fig. 4)). Apiaceae is also highly under-represented in the studied transect. In a local stand of Chaerophyllum macrospermum with over 80% coverage, only 13% of the total pollen belongs to Apiaceae (Fig. 4, plot 15). However, the pollen contribution of Apiaceae in the dry steppes dominated by Artemisia-Chenopodiaceae is still far lower than that in the high-elevation steppes (see Apiaceae pollen percentages in Fig. 4). Achillea aucheri subsp. aucheri, an endemic taxon of the high altitudes of the central Alborz, produces Matricaria-type pollen. Although encountered sporadically throughout the studied transect, this pollen type is highly represented (90% of the pollen percentages in plots with 20% vegetation cover) in subnival plots dominated by Achillea aucheri subsp. aucheri (Fig. 4, plots 1-2). Apparently, this plant, over-represented in the pollen assemblages, has a limited range of dispersal, since its pollen suddenly vanishes in adjacent plots without individuals of the species. Several species of Cousinia grow along the transect, including Cousinia harazensis, an endemic of Damavand Mt., Cousinia pterocaulis, Cousinia multiloba, Cousinia eryngioides and Cousinia cylindracea. Cousinia pollen is found in small amounts along the transect, but it can reach up to 11% where the producing plant forms about 20% coverage (Fig. 4, plot 7). Apparently, Cousinia pollen dispersal is limited. This can be critically important since different species of Cousinia are rather geographically distinct; pollen identification to species level would therefore help climatic interpretation in fossil pollen sequences [START_REF] Djamali | Ecological implications of Cousinia Cass. (Asteraceae) persistence through the last two glacial-interglacial cycles in the continental Middle East for the Irano-Turanian flora[END_REF].
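A simple way to make the over- and under-representation discussed above explicit is to divide a taxon's pollen percentage by its cover percentage in the corresponding plot (an R-value in the sense of classical pollen-representation studies). The sketch below is only illustrative; the numbers are hypothetical, loosely echoing the cases mentioned in the text.

```python
# Pollen percentage and vegetation cover (%) for a few taxa in a single plot;
# the values are hypothetical, chosen only to echo the cases discussed above.
observations = {
    "Onobrychis cornuta": {"pollen": 1.0,  "cover": 30.0},
    "Astragalus spp.":    {"pollen": 0.2,  "cover": 80.0},
    "Achillea aucheri":   {"pollen": 90.0, "cover": 20.0},
    "Cousinia spp.":      {"pollen": 11.0, "cover": 20.0},
}

for taxon, obs in observations.items():
    r_value = obs["pollen"] / obs["cover"]   # >1: over-represented, <1: under-represented
    tendency = "over-represented" if r_value > 1 else "under-represented"
    print(f"{taxon:20s} R = {r_value:5.2f} ({tendency})")
```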
Euphorbia is represented by around 74 species in Iran [START_REF] Pahlevani | Seed and capsule morphology of Iranian perennial species of Euphorbia (Euphorbiaceae) and its phylogenetic applica-tion[END_REF], growing in various habitats. Only Euphorbia cheiradenia was found in our transect, occupying disturbed or overgrazed areas. The results of our study suggest that Euphorbia cheiradenia is highly under-represented and that its pollen never exceeds 2%, even in samples in which the plant coverage is over 20% (Fig. 4, plot 19). Pollen of Poaceae is found in all plots except one (Fig. 4, plot 35), reflecting the ubiquity of this diversified family. At present, pollen identification to lower taxonomic ranks is not feasible in grasses. Presumably, different taxa differ in their ability to produce pollen, making it difficult to speculate about their pollen representation. For instance, in plot 34, where Phragmites australis covers more than 70% of the area, only 1.7% of the pollen sum belongs to Poaceae (Fig. 4). This may be partly due to intensive grazing or to unsuitable conditions diminishing the flowering of reed plants in the area. In plots 18 and 21, in spite of the relative dominance of Poaceae, only 10% of the total pollen belongs to this family, while in plots 4, 5 and 9, with almost the same coverage, 63-68% of the total pollen belongs to Poaceae (see Fig. 4). We suggest that not only in Poaceae but also in other taxa (including Artemisia and Chenopodiaceae), different species have different pollen production and dispersal. Furthermore, associated species could affect the pollen representation in pollen rain assemblages.
Chenopodiaceae/Artemisia pollen ratio application
The recent classification of the Angiosperm Phylogeny Group, APG IV [START_REF] Chase | An update of the Angiosperm Phy-logeny Group classification for the orders and families of flowering plants: APG IV[END_REF], suggested the inclusion of Chenopodiaceae in a broadly circumscribed Amaranthaceae. The similarity of the pollen grains of some groups of Amaranthaceae (Amaranthoideae) to those of traditional Chenopodiaceae, and their distinctiveness from other groups (Gomphrenoideae), has long been known in the literature [START_REF] Borsch | Pollen types in the Amaranthaceae. Morphology and evolutionary significance[END_REF][START_REF] Borsch | Structure and evolution of metareticulate pollen[END_REF]. In this study, we preferred to keep the name Chenopodiaceae, not only following most recent classifications of the Caryophyllales (such as [START_REF] Hernández-Ledesma | A taxo-nomic backbone for the global synthesis of species diversity in the angiosperm order Caryophyllales[END_REF]), but also because of the rarity of Amaranthaceae s. str. in the Irano-Turanian flora. Pollen of Artemisia and Chenopodiaceae is found in all studied plots, which confirms their high pollen production and good pollen dispersal, as already noticed by many authors (e.g. [START_REF] Mcandrews | Modern pollen rain in western Iran, and its relation to plant geography and Quaternary vegetational history[END_REF][START_REF] Moore | Pollen studies in dry environments[END_REF][START_REF] El-Moslimany | Ecological significance of common nonarboreal pollen: exam-ples from drylands of the Middle East[END_REF]Djamali et al., 2009c;[START_REF] Zhao | Application and limitations of the Chenopodiaceae/Artemisia pollen ratio in arid and semiarid China[END_REF]).
Regardless of their high pollen production and dispersal, the Artemisia and Chenopodiaceae pollen frequencies in the pollen rain spectra seem to reflect well their abundance at the sampled locations (Fig. 4, Table 1). The high-elevation belt (Zone A, including the subnival, alpine and montane parts of our transect) is represented by a rather uniform low average percentage of 6.8% of chenopod pollen, in spite of the fact that species of this family are absent or rarely occur in the vegetation. Artemisia shows a similar pattern in terms of pollen percentage (7.9%), but contrary to the chenopods, several species of Artemisia are associated with different plant communities in the high-altitude vegetation. An interesting pattern we found in the Artemisia pollen diagram is its higher pollen percentages in the desert steppes than in the high-altitude plant communities, even in plots with similar coverage. This might represent different pollen production and dispersal capacities of lowland versus highland species of Artemisia, or the masking of Artemisia by the pollen of adjacent plants in the higher-altitude vegetation. Another interesting result of our study is the different patterns of Chenopodiaceae pollen percentages in the xeric and halophytic zones. The extraordinarily high pollen production of euhalophytic chenopods compared to species of the ruderal or xeric habitats might be interpreted as a result of different pollen production rates in different taxonomic groups. Our own field observations show that species of Salicornioideae, the dominant species in hygrohalophytic communities, usually produce large amounts of pollen. The available data from a few coastal Mediterranean species belonging to the genera Atriplex and Halimione (Chenopodioideae), Sarcocornia and Arthrocnemum (Salicornioideae), Suaeda vera (Suaedoideae) and Salsola vermiculata (Salsoloideae) show a complex pattern that cannot be used to explain the pollen representation of halophytic communities in the Irano-Turanian inland areas [START_REF] Fernández-Illescas | Pollen production of Chenopodiaceae species at habitat and landscape scale in Mediterranean salt marshes: an ecological and phenological study[END_REF]. Inspection of the pollen count datasets reveals that three pollen types, Poaceae, Artemisia and Chenopodiaceae, are constantly present in the plots, offering the possibility of using them as ecological indicators. El-Moslimany (1990) introduced the Chenopodiaceae/Artemisia pollen ratio as an index of aridity in open habitats, based on the hypothesis that Artemisia species require higher moisture than chenopods.
In spite of the wide use of this ratio in palynological studies [START_REF] El-Moslimany | Ecological significance of common nonarboreal pollen: exam-ples from drylands of the Middle East[END_REF][START_REF] Davies | Modern pollen precipitation from an elevational transect in central Jordan and its relationship to vegetation[END_REF][START_REF] Zhao | Sensitive response of desert vegetation to moisture change based on a near-annual resolution pollen record from Gahai Lake in the Qaidam Basin, northwest China[END_REF][START_REF] Van Campo | Pollen-and diatom-inferred climatic and hydrological changes in Sumxi Co Basin (Western Tibet) since 13,000 yr BP[END_REF][START_REF] Zhao | Modern pollen representation of source vegetation in the Qaidam Basin and surrounding mountains, north-eastern Tibetan Plateau[END_REF], some authors have doubted its reliability [START_REF] Yasuda | The earliest record of major anthropogenic deforestation in the Ghab Valley, northwest Syria: a palynological study[END_REF][START_REF] Herzschuh | Reliability of pollen ratios for environmental reconstructions on the Tibetan Plateau[END_REF][START_REF] Djamali | An Upper Pleistocene long pollen record from the Near East, the 100 m-long sequence of Lake Urmia, NW Iran[END_REF][START_REF] Zhao | Application and limitations of the Chenopodiaceae/Artemisia pollen ratio in arid and semiarid China[END_REF]. Based on the analysis of large data sets by [START_REF] Zhao | Application and limitations of the Chenopodiaceae/Artemisia pollen ratio in arid and semiarid China[END_REF], the C/A ratio corresponds with annual precipitation only in steppe areas with annual precipitation below 450-500 mm. Additionally, they insisted on the requirement of establishing careful pollen-vegetation-climate relationships prior to applying this index. In studies on fossil pollen from hypersaline lakes in NW and SW Iran, Djamali et al. [START_REF] Djamali | An Upper Pleistocene long pollen record from the Near East, the 100 m-long sequence of Lake Urmia, NW Iran[END_REF] and Djamali et al. (2009a) doubted the reliability of the C/A ratio for the reconstruction of climate because they found that Artemisia and Chenopodiaceae can coexist in the same region. They mentioned the importance of local edaphic conditions, particularly soil moisture and salinity, and of local human effects in the formation of halophytic or ruderal communities of Chenopodiaceae, which are not related to macro-climatic conditions. Furthermore, chenopod-dominated communities are important features of many coastal halophytic vegetation types, and some cultivated plants like sugar beet (Beta vulgaris) and spinach (Spinacia oleracea), or many weedy species of the genus Chenopodium s.l., are indicators of crop cultivation. In such cases, high amounts of chenopod pollen are by no means an index of aridity. Therefore, some authors suggest the Poaceae/Artemisia pollen ratio as a better index of moisture availability than C/A (Liu et al., 2006; Djamali et al., 2009a). The C/A index has further been criticized in the Artemisia-dominated hyperarid deserts of Badain Jaran, Taklamakan and Tengger in western China [START_REF] Yang | Hydrological and climatic changes in deserts of China since the late Pleistocene[END_REF]. Our study shows that in plots from which Artemisia and chenopods are absent, small amounts of their pollen still occur in the pollen assemblages, but the C/A ratio in such plots does not show any particular pattern and changes randomly.
However, in those plots in which Artemisia or Chenopodiaceae are dominant, the pollen assemblages and the C/A ratios of the surface samples reflect their relative contribution to the actual vegetation (Figs. 4 and 7, Table 1). In Fig. 7 the ratios C/A, P/C (Poaceae/Chenopodiaceae), P/A (Poaceae/Artemisia) and (A + C)/P ((Artemisia + Chenopodiaceae)/Poaceae) are compared. The C/A and (A + C)/P ratios in the xerophytic desert steppes are much lower than those of the neighbouring halophytic vegetation, despite the fact that the macro-climatic conditions of both communities are similar. Further, the soil moisture of the halophytic communities is much higher than that of the xerophytic desert steppes. It is important to note that soil moisture in halophytic communities cannot necessarily be interpreted as reflecting higher precipitation. The water table in low endorheic depressions and playa environments is usually high due to the intersection of the regional groundwater with the ground surface. The (A + C)/P aridity index was proposed by [START_REF] Fowell | Mid to late Holocene climate evolution of the Lake Telmen Basin, North Central Mongolia, based on palynological data[END_REF] to differentiate steppes from desert steppes. Based on this index, values above 5 are considered an indication of arid conditions, while values below 5 are interpreted as indicating relatively moist steppes or forest steppes. Contrary to the C/A ratio, the (A + C)/P aridity index distinguishes well the highland mesic steppes from the lowland xerophytic and halophytic steppes in the studied transect (Fig. 7). Similarly to the C/A ratio, high values of the (A + C)/P index correspond to the halophytic formations, rather than indicating drier conditions as previously interpreted by other authors ([START_REF] Fowell | Mid to late Holocene climate evolution of the Lake Telmen Basin, North Central Mongolia, based on palynological data[END_REF][START_REF] Zhao | Modern pollen representation of source vegetation in the Qaidam Basin and surrounding mountains, north-eastern Tibetan Plateau[END_REF], see also Fig. 7). We suggest that the C/A ratio is more reliable than (A + C)/P as an index to distinguish halophytic from non-halophytic desert vegetation (compare plot 34 in the C/A and (A + C)/P graphs in Fig. 7). We suggest depicting the pollen ratios of all three ubiquitous pollen types (Poaceae, Artemisia and Chenopodiaceae) in a combined graph as a better tool to reflect vegetation and moisture. Fig. 7 shows that higher P/A and P/C ratios are good indicators of high-elevation mesic steppes, while higher C/A and (A + C)/P ratios are good indicators of xerophytic desert steppes and chenopod-dominated saline soils. The subnival subzone is of particular interest because it shows low P/A and P/C values but variable C/A ratios and (A + C)/P values around 5. The poor grass vegetation and the occurrence of alpine species of Artemisia (Artemisia melanolepis and Artemisia chamaemelifolia) and of the orophilous chenopod species Blitum virgatum explain this deviation well. This finding is of great interest for locating this vegetation zone in fossil pollen diagrams from the Irano-Turanian mountain regions. It also shows that very local floristic conditions may change the general trends of the palynological ratios used as aridity or moisture indices.
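To make these indices concrete, the sketch below computes C/A, P/A, P/C and (A + C)/P from pollen counts of the three ubiquitous types and applies the (A + C)/P threshold of 5 cited above. The input counts and the function name are hypothetical and serve only as an illustration of the definitions.

```python
def pollen_ratios(poaceae, artemisia, chenopodiaceae):
    """Compute the four indices discussed in the text from pollen counts
    (or percentages) of the three ubiquitous pollen types; counts are
    assumed to be non-zero."""
    a, c, p = artemisia, chenopodiaceae, poaceae
    return {
        "C/A": c / a,                # Chenopodiaceae / Artemisia
        "P/A": p / a,                # Poaceae / Artemisia
        "P/C": p / c,                # Poaceae / Chenopodiaceae
        "(A+C)/P": (a + c) / p,      # aridity index, threshold ~5
    }

# Hypothetical counts (Poaceae, Artemisia, Chenopodiaceae) for two plots.
for label, counts in [("highland mesic steppe", (260, 30, 25)),
                      ("lowland desert steppe", (15, 180, 120))]:
    r = pollen_ratios(*counts)
    verdict = "arid" if r["(A+C)/P"] > 5 else "relatively moist"
    print(label, {k: round(v, 2) for k, v in r.items()}, "->", verdict)
```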
Conclusions
Along the 3600 m elevational gradient transect from Damavand Mountain to the Salt Playa Lake located in the Irano-Turanian desert of Dashte Kavir we found:
1. The pollen assemblage of the subnival vegetation belt is characterized by Achillea/Matricaria-type pollen followed by Artemisia, Cyperaceae and Brassicaceae, showing a weak correlation with the local vegetation. This might be due to the different representation of plant taxa in pollen assemblages and to a significant contribution of background long-distance transported pollen brought in by the strong winds of the region.
2. The alpine and montane vegetation subzones (1800-4100 m) show the highest pollen diversity along the whole transect and are characterized by a predominance of grass pollen. Pollen types of Fabaceae, Caryophyllaceae, Asteraceae-Cichorioideae, Apiaceae, and Lamiaceae (Mentha-type) are also well represented. The stronger correlation between modern vegetation and pollen assemblages in the alpine subzone decreases in heavily disturbed places, in particular in the montane subzone.
3. The xerophytic desert steppe vegetation zone is well characterized by a drastic reduction of grass pollen and a conspicuous increase of Artemisia and Chenopodiaceae pollen percentages, corresponding well with the actual vegetation. Human activities in the non-protected parts of the transect have weakened the correlation between surface pollen assemblages and actual vegetation, evidently due to agro-pastoral activities.
4. The halophytic vegetation zone, showing the lowest species richness and the least pollen diversity over the entire transect, is characterized by the absolute dominance of Chenopodiaceae pollen because of the dominant chenopod vegetation.
5.
6. Entomophilous shrubs encountered in the studied transect, including Prosopis farcta, Lycium ruthenicum, Pteropyrum aucheri and Tamarix spp., were poorly to moderately represented in the pollen assemblages. Members of the families Apiaceae, Lamiaceae, Scrophulariaceae, Plumbaginaceae and Fabaceae, which dominate the high elevations, are highly under-represented in modern pollen samples in the Irano-Turanian region.
7. The C/A pollen ratio, which has been widely used as an aridity index in open vegetation, is less reliable in the Irano-Turanian region. This ratio is informative about the contribution of these two taxa to the parent vegetation only when their pollen percentages are high; otherwise it reflects neither the parent vegetation composition nor the moisture conditions. We found that depicting and comparing combined graphs of all four indices (C/A, P/A, P/C and (A + C)/P) better represents the vegetation and climate relationships. In summary, P/A and P/C ratios are more reliable for differentiating mesic from arid steppes, while C/A and (A + C)/P ratios provide useful tools to distinguish halophytic from non-halophytic desert vegetation.
Fig. 1. A. Location of the pollen-vegetation transect from Damavand Volcano to the Salt Playa Lake (Daryacheye Namak) in Central Iran. B. Topographic map of the studied transect as well as locations of the meteorological stations along the transect (NPL: Namak Playa Lake). C. Climatic diagrams of meteorological data showing mean monthly temperature and mean monthly precipitation curves as well as the length of dry and humid periods. D. Topographic profile representing the elevational position of each sampling point against the main vegetation belts along the studied transect.
Fig. 2. Photographs of the vegetation belts along the studied transect. A. Subnival community near the peak of Damavand Mt. dominated by Achillea aucheri and Dracocephalum aucheri (Dracocephaletum aucheri). B.
Alpine communities dominated by grasses and cushion-like plants. C. Artemisia steppe near Ghasre Bahram, Kavir National Park, dominated by Artemisia inculta. D. Halophytic community near Mobarakieh dominated by Halostachys belangeriana with scattered Tamarix shrubs. Photo credit: A, C, D (H. Akhani), B (M. Dehghani).
Galium verum, Stachys lavandulifolia, Chondrilla juncea and Eryngium billardieri are characteristic species of this community group; the two latter species are indicators of overgrazed habitats. The five communities (A3.1, A3.2, A3.3, A3.4, and A3.5) distinguished in this group reflect the heterogeneity of the area, mostly related to slope direction, substrate variability, disturbance and water supply. This group, with an average species richness of 37, represents the most diverse zone along the studied transect. A4) Plot 16 represents a water runnel in the alpine area and its surroundings at an elevation of 2310 m, largely covered by the ruderal species Sophora alopecuroides on moist soil, together with species such as Thymus kotschyanus, Papaver bracteatum, Astragalus microcephalus, Elymus hispidus, and Marrubium anisodon. A5) The community of Dysphania botrys forms on sandy soils in the remnants of a former sand mine, with some other ruderal species such as Heliotropium europaeum and Kali tragus, at an elevation of 2402 m and with a very low species richness of 4. A6) Two plots from elevations of 1907 and 2055 m (D19-20) are located near the city of Damavand on hills of montane steppe which have largely been overgrazed. Taeniatherum caput-medusae, an invasive annual grass species not favored by herbivores (Clausnitzer et al., 1999), and thorny ruderal species such as Gundelia tournefortii and Cirsium congestum are indicators of large parts of the montane steppes affected by overgrazing and land degradation. A7) The two plots sampled at elevations of 2089 m and 1699 m (D21-22) are also representative of wastelands in the montane steppes which are on the verge of vegetation restoration. Accordingly, many species are Irano-Turanian indicators of wastelands, such as Echinophora platyloba, Cousinia eryngioides, Centaurea virgata and Scariola orientalis.
Fig. 3. Correspondence analysis scatter plots (vegetation data) for the samples and the plant taxa contributing most to CA axis 1 and CA axis 2. Contrib: contribution color scale of samples and taxa to the CA axes 1-2 factorial plane. CA axes 1 and 2 explain 15.2% and 10.1% of the total variance, respectively.
Fig. 4. Pollen percentage diagram of a selection of pollen types encountered along the studied transect.
Fig. 6. Co-inertia scatter plots displaying the degree of correlation between vegetation and pollen data. Shorter arrows show stronger similarities between vegetation and pollen assemblages.
Fig. 7. Comparison of pollen ratios of Artemisia, Chenopodiaceae and Poaceae in different vegetation zones of a 3600 m elevational transect in central Iran, along with a climatic curve showing the changes of mean annual temperature and precipitation at five meteorological stations along the studied transect. Arrows beside each index indicate the correlation of moisture with the respective index.
Table 1. A summary description of plant communities, location of sampled plots and the materials from which the pollen rain samples were taken. D = Debris, L = Lichen, M = Moss polster, S = Soil. (The table body records, for each of the 42 plots, the sampling date, plot area, altitude, total cover, slope, aspect, species richness, pollen-source material and Shannon diversity index, followed by the cover-abundance values of the recorded species, arranged by vegetation zone.)
Acknowledgments
This paper is part of the Ph.D. thesis of the first author in the School of Biology, University of Tehran, supported by the Iranian Ministry of Science and Technology, the Franco-German ANR-DFG project entitled "PALEO-PERSEPOLIS" (ANR-14-CE35-0026-01) and the Iran National Science Foundation (INSF-940087). The Cultural Service of the French Embassy in Iran is acknowledged for facilitating the mobility of the first author to France. Furthermore, the constructive suggestions of two anonymous referees are much appreciated.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.revpalbo.2017.08.004.
01355318
en
[ "info.info-rb" ]
2024/03/05 22:32:18
2016
https://hal.science/hal-01355318/file/liu2016.pdf
Shao Liu email: [email protected] Binbin Chen email: [email protected] Stéphane Caro email: [email protected] Sébastien Briot email: [email protected] Laurence Harewood email: [email protected] Chao Chen email: [email protected]
A cable linkage with remote centre of motion
Introduction
In a minimally-invasive surgery (MIS), the surgical instrument is constrained to have four degrees-of-freedom (DOF) through the incision port [START_REF] Kuo | Kinematic design considerations for minimally invasive surgical robots: an overview[END_REF]: pitch, yaw, translation along the longitudinal axis and roll. The first three DOF combined function as a spherical coordinate system to define the position of the end-effector inside the patient's body. Remote centre of motion (RCM) mechanisms provide the two rotational DOF while permitting the surgical instrument to pivot around the incision port, hence eliminating potential damage to the incision port and promoting the safety of MIS procedures [START_REF] Taylor | Medical robotics in computer-integrated surgery[END_REF]. The RCM function refers to the capability of a mechanism to rotate its link(s) around a remote point without having a physical revolute joint at that point [START_REF] Kuo | Robotics for minimally invasive surgery: a historical review from the perspective of kinematics[END_REF][START_REF] Zong | Classification and type synthesis of 1-DOF remote center of motion mechanisms[END_REF]. A remote centre (RC) can be constrained virtually or mechanically [START_REF] Chen | Novel linkage with remote center of motion[END_REF]. Mechanical RCM mechanisms are more reliable and are considered suitable for clinical applications [START_REF] Kuo | Robotics for minimally invasive surgery: a historical review from the perspective of kinematics[END_REF]. Mechanical RCM mechanisms that generate a single RC and are applied in robotic MIS systems include isocentres [START_REF] Ghodoussi | Robotic surgery -the transatlantic case[END_REF], circular tracking arcs [START_REF] Guerrouad | SMOS: stereotaxical microtelemanipulator for ocular surgery[END_REF][START_REF] Hempel | An MRI-compatible surgical robot for precise radiological interventions[END_REF], parallelograms [START_REF] Madhani | The black falcon: a teleoperated surgical instrument for minimally invasive surgery[END_REF][START_REF] Taylor | Remote Center-of-Motion Robot for Surgery[END_REF][START_REF] Blumenkranz | Manipulator for Positioning Linkage for Robotics Surgery[END_REF][START_REF] Feng | Development of a medical robot system for minimally invasive surgery[END_REF][START_REF] Kim | Design and evaluation of a teleoperated surgical manipulator with an additional degree of freedom for laparoscopic surgery[END_REF][START_REF] Zhu | Motion/Force/Image control of a diagnostic ultrasound robot[END_REF], synchronous transmissions [START_REF] Stoianovici | Remote Center of Motion Robotic System and Method[END_REF] and spherical linkages [START_REF] Lum | The REVAN: design and validation of a telesurgery system[END_REF]. In addition, there are RCM mechanisms that generate multiple RCs [START_REF] Bai | Kinematics and synthesis of a type of mechanisms with multiple remote centers of motion[END_REF].
A commonly used approach to synthesise a two-DOF RCM mechanism is to combine a planar RCM mechanism with a revolute joint [START_REF] Kuo | Robotics for minimally invasive surgery: a historical review from the perspective of kinematics[END_REF]. The axis of the revolute joint coincides with the one-DOF RC to add the second DOF. Such an approach results in fully decoupled rotational DOF, whose benefits include reduced complexity in control, a higher level of confidence in safety, as well as rapid and intuitive manual positioning of the entire mechanism or of individual DOF [START_REF] Kuo | Kinematic design considerations for minimally invasive surgical robots: an overview[END_REF]. The translational DOF required in MIS applications is often achieved by mounting an independent translational mechanism on the two-DOF RCM mechanism. A typical example of such a three-DOF mechanism is the clinically-approved da Vinci series robotic surgical system [START_REF] Blumenkranz | Manipulator for Positioning Linkage for Robotics Surgery[END_REF][START_REF] Solomon | Multi-ply Strap Driver Trains for Robotic Arms[END_REF][START_REF] Haber | Novel robotic da Vinci instruments for laparoendoscopic single-site surgery[END_REF]. In other approaches, various types of RCM mechanisms that also provide the translational DOF have been explored [START_REF] Long | Type synthesis of 1R1T remote center of motion mechanisms based on pantograph mechanisms[END_REF][START_REF] Li | Kinematic design of a novel spatial remote center-of-motion mechanism for minimally invasive surgical robot[END_REF][START_REF] Hadavand | A novel remote center of motion mechanism for the force-reflective master robot of haptic tele-surgery systems[END_REF]. However, these mechanisms have coupled DOF [START_REF] Long | Type synthesis of 1R1T remote center of motion mechanisms based on pantograph mechanisms[END_REF], or are relatively bulky in terms of the transverse dimension [START_REF] Li | Kinematic design of a novel spatial remote center-of-motion mechanism for minimally invasive surgical robot[END_REF], or have a large sweeping volume upon rotation of the planar RCM mechanism around the revolute joint, due to the large area enclosed by the outer boundary of the planar RCM mechanism [START_REF] Hadavand | A novel remote center of motion mechanism for the force-reflective master robot of haptic tele-surgery systems[END_REF]. The parallelogram-based structure is widely used as the planar RCM mechanism in robotic MIS systems [START_REF] Kuo | Robotics for minimally invasive surgery: a historical review from the perspective of kinematics[END_REF]. However, there are footprint issues associated with the parallelogram-based linkage (PB-linkage), the consequences being poor access for bedside assistance [START_REF] Haber | Novel robotic da Vinci instruments for laparoendoscopic single-site surgery[END_REF] and a compromise in optimal surgical functioning [START_REF] Taylor | Is smaller workspace a limitation for robot performance in laparoscopy?[END_REF]. The term "footprint" refers mainly to the sweeping volume of the RCM mechanism, which is generated by the rotation of the planar RCM mechanism around the revolute joint. The sweeping volume is thus related to the area enclosed by the outer boundary of the planar RCM mechanism. When the output link of the parallelogram is short, the output joint of the PB-linkage is positioned close to the incision ports.
Given that the space around an incision port is often crowded with robotic or manual surgical tools, the collision-free workspace is reduced and the chance of interference is increased. The transmission for the translational mechanism mounted on the output link of the PB-linkage goes through the output joint, causing a further expansion in size. In the opposite case, where a longer output link is used to displace the output joint away from the mechanism, the size of the parallelogram, and thus the enclosed area, is increased. The consequence is an increase in the sweeping volume, which again leads to an increased chance of interference. Apart from the sweeping volume, longer links occupy more space even when the linkage is stationary. They also increase the weight and inertia of the system. A quantitative analysis of the footprint of the PB-linkage through three approaches is presented in Section 5. This paper proposes a cable linkage with RCM, in an attempt to address the footprint issue associated with the PB-linkage. The entire RCM mechanism is kept relatively far away from the RC, when the distance between the input joint and the RC is given. A cantilever is rigidly mounted onto the output link. It is the only part of the entire RCM mechanism that operates near the RC. Therefore, the cable system can leave more collision-free workspace for the neighbouring robotic surgical arms or human surgeons to operate in. In addition, the enclosed area of the planar RCM mechanism is relatively small, resulting in a smaller sweeping volume. Further, the links are relatively short, which reduces the space taken by the links when stationary and potentially reduces the weight and inertia of the mechanism. A comparison between the footprints of the proposed linkage and the PB-linkage is presented in Section 5. Cable-pulley mechanisms provide advantages such as structural simplicity, compactness, light weight, low friction and low backlash [START_REF] Hong | A method for representing the configuration and analyzing the motion of complex cable-pulley systems[END_REF]. They are therefore widely applied in serial and parallel robotic manipulators, as summarised in [START_REF] Tsai | Design of tendon-driven manipulators[END_REF] and [START_REF] Tang | An overview of the development for cable-driven parallel manipulator[END_REF], respectively. In MIS applications, the evolution from the linkage-based da Vinci system [START_REF] Blumenkranz | Manipulator for Positioning Linkage for Robotics Surgery[END_REF] to the cable-based da Vinci systems [START_REF] Solomon | Multi-ply Strap Driver Trains for Robotic Arms[END_REF][START_REF] Haber | Novel robotic da Vinci instruments for laparoendoscopic single-site surgery[END_REF] shows a significant reduction in the size of the linkage. As such, the proposed planar RCM mechanism is developed based on a cable-pulley mechanism. Cable tension analysis is essential for the proof of functioning of the cable linkage. Approaches for describing the cable tension of a cable-constrained open-chain linkage and of a multi-link parallel manipulator are available in [START_REF] Tsai | Kinematic analysis of tendon-driven robotic mechanisms using graph theory[END_REF] and [START_REF] Lau | Generalized modeling of multilink cable-driven manipulators with arbitrary routing using the cable-routing matrix[END_REF], respectively. However, a more generalised approach based on mechanical constraints [START_REF] Chen | Power analysis of epicyclic transmissions based on constraints[END_REF] is used in the analysis.
The reason being that the proposed linkage is based on four-bar linkage and is affected by the singularity of an unconstrained four-bar linkage. The constraint approach provides indication on the constraint status of the mechanism, thus enable justification on the removal of singularity. The cable tension is solved as generalised force along mechanical constraints. Numerical solution of cable tension is obtained using QR decomposition and verified with static finite element simulation in ANSYS. The rest of the paper is arranged as follows. Section 2 describes the design of the cable linkage and the proof of the RCM function. Section 3 applies the constraint-based analysis. Constraint equations are derived. Cable tension is solved and verified with finite element analysis. The functioning of the cable loops is hence proven. Section 4 calculates the minimum required cable stiffness to achieve a given overall stiffness of the linkage. Section 5 compares the footprints of the cable linkage and PB-linkage in a simplified surgical scenario. Section 6 introduces the prototype of the cable linkage. The cable linkage with RCM In this section, the design of cable linkage with RCM is presented, along with the proof of RCM under the condition that the cable is in tension. Design of cable linkage The design of the cable linkage with RCM is illustrated in Figs. 1 and2, where the schematic diagram of links (without cable loops) and the full schematic diagram are presented, respectively. The cable linkage consists of eight links and seven pulleys that are arranged in three cable loops. As illustrated in Fig. 1, the links are AF, AC, CE, EH, BG, DG, CI and IG, respectively. Joint I is a passive prismatic joint while all other joints (indicated by the small circles) are revolute joints. Note that joints B and D do not divide links AC and CE, respectively. For convenience, links CI, IG and passive prismatic joint I are grouped and termed "diagonal link CG" in the later descriptions. Different configurations of the cable linkage are presented in Figs. 3 and4. ROM in the figures stands for range of motion. The limits in ROM above and below the ground (AO), and the centre configuration where all links are overlapped with the ground are illustrated in Fig. 3. Two mid-point configurations in between the centre and two limits in ROM are illustrated in Fig. 4. Note that pulleys and cable loops are not drawn for clarity of figures. In Fig. 3, all labelled joints are for the centre configuration only, where all the links overlap with the ground while D, E and H are coincident with B, A and F, respectively. Point O is the remote centre. Links AF, AC, CE and EH are the ground, input, connector and output links, respectively. Links AC and CE, with virtual links AO and EO, form virtual four-bar linkage ACEO. Links BG and DG constraint the motion of the linkage. Link CG is the diagonal link of the virtual four-bar linkage. It contains a passive prismatic joint, Joint I, to accommodate for the change in length of CG with respect to movement of linkage. The distance between Joints C and I is constant while the distance between Joints I and G changes. The shape and size of output Link EH can be optimised freely to suit specific MIS applications. The three cable loops are called Loops PL, FB and PU, for the lower, middle and upper loops in Fig. 2, respectively. Loop PL connects Pulleys P1 and P2, which are rigidly attached to Links AF and BG, respectively. Loop FB connects Pulleys P3 to P5. 
Pulley P3 is rigidly attached to Link AC. Pulley P4 rotates freely at I. Pulley P5 is rigidly attached to Link CE. The cable connecting P4 and P5 has crossed configuration. Loop PU connects Pulleys P6 and P7, which are rigidly attached to Links DG and EH, respectively. Note that although there exist two forms of cable loop, which are the end-less tendon drive and the open-ended tendon drive, as classified in [START_REF] Tsai | Design of tendon-driven manipulators[END_REF], they are not distinguished in the 2D design of the cable linkage. The reason is that the synchronised rotation of pulleys, which is the essential mechanical constraint of the cable linkage, can be achieved equivalently using two forms of cable loop under the assumptions of zero cable slippage elongation. Loops PL and PU ensure Links BG and EH are parallel to AF and DG, respectively. Slip of cable on pulleys in Loops PL and PU ruins the parallel constraint, thus cause failure in maintaining the RCM function. Loop FB permits the cable linkage to pass the configuration where all links overlap with the ground, which is the singular configuration of an unconstrained four-bar linkage. Slip of cable in Loop FB does not affect the RCM function. In non-overlapped configurations, Loop FB is redundant. In the overlap configuration, slip of cable in Loop FB can cause the cable linkage to turn into an undesired configuration, where the output link stays coincident with the ground regardless of the input angle. Thus the position of the remote centre is not affected, despite the lost of mobility. More details regarding the singularity are described in the later paragraphs of this section. The outer shape of the cable linkage, which is virtual four-bar linkage ACEO, is symmetrical to simplify the geometry. Links AC and CE have equal link length. Virtual links AO and CO have another equal link length, which is longer than that of AC and CE. Such structure ensures that the link length of the RCM mechanism is always smaller than the distance between the input joint A and the remote centre O. Hence the RCM mechanism is considered to be relatively faraway from the remote centre. The geometry of the linkage is fully defined by two parameters, v and r, expressed mathematically as v = L AC L AO = L CE L EO (1) and r = L AC L AB = L CE L DE (2) To ensure Links AC and CE are shorter than AO and CO, v is smaller than one. With a given length AO, the range of motion (ROM) of the RCM mechanism is fully defined by v. As shown in Fig. 3, the centre (diagonal) line CO is perpendicular to ACE at two ends of ROM. Therefore, the ROM is ROM = 4 arcsin v ( 3 ) Parameter r is greater than one. It defines the geometry of four-bar linkage BCDG, where the length of Links BG and DG is L BG = L DG = 1 - 1 r L AO (4) Eq. ( 4) indicates that the larger the r, the longer Links BG and DG are. Four-bar linkage BCDG occupies additional space between Links AC, CE and the RC, thus a smaller r is desired to leave more clearance around the RC. The configuration where all links overlap with ground is a bifurcation (singularity) configuration of an unconstrained four-bar linkage, where two possible configurations can be achieved upon crossing and ∠BCD = ∠BGD = 0 (5) ∠BCD = 2∠BCG (6) where Eq. ( 6) can be equivalently expressed as ∠BGD = 2∠BGC (7) The first configuration shown in Eq. ( 5) is an undesired configuration where Links BG, DG and EH stay overlapped with ground (AF/AO) regardless of the input angle. In this case the mobility of the linkage is lost. Therefore, Eq. 
( 6) or [START_REF] Guerrouad | SMOS: stereotaxical microtelemanipulator for ocular surgery[END_REF] needs to be enforced to fully constrain the linkage. The cable linage achieves Eq. ( 6). In Loop FB, ∠BCG and ∠DCG are related to the relative rotations of Pulleys P3 and P5 with respect to diagonal link CG, respectively. Pulley P4 allows the two angles to be synchronised hence eliminates the chance of turning into Eq. [START_REF] Chen | Novel linkage with remote center of motion[END_REF]. In this design, P3 and P5 are placed at Joint C to maximise the clearance between Links AC, CE and the RC. The cable on one side of a loop is in tension in one direction of motion of the linkage. The sides of loops that are in tension in the upward and downward motions of the linkage are shown in Figs. 5 and6, respectively. Proof of RCM function The proof of RCM function of the cable linkage is conducted based on the condition that the cable loops are functioning correctly hence the linkage is in the desired configuration. Such condition is proven in the Section 3. Given that an RCM mechanism requires a link to rotate around the remote centre, the proof is conducted through two steps: 1. Joint E rotates around O with a constant radius. Point H rotates around O with a constant radius. where E and H form output link EH. To simplify the expressions, the following link lengths are assigned. L 1 = L AC = L CE = vL AO L 2 = L AB = L DE = L AC r = vL AO r L 3 = L AF = L BG = L DG = L EH = 1 - 1 r L AO (8) Fig. 6. Cables in tension in downward motion. Generalised coordinates To conduct the proof, ten generalised coordinates are assigned to fully describe the linkage. The generalised coordinates q are listed in Table 1. h 1 is the actuator input angle thus it is the independent generalised coordinate. All other generalised coordinates are dependent ones. The angles are measured with respect to the "previous" link, as shown in the "Reference", and in counter-clockwise (CCW) direction. There is no h 3 and h 5 since Pulleys P3 and P5 are rigidly attached to links and do not have their own independent rotation. The graphical representation of the generalised coordinates is shown in Fig. 7. Step 1 -position of joint E The distance between Joint E and O is L EO = (xE -L AO) 2 + y 2 E ( 9 ) where x E and y E are the horizontal and vertical positions of Joint E with respect to Joint A, respectively x E = L 1 cos h 1 + L 1 cos (h1 + h 8 + p + h 9) y E = L 1 sin h 1 + L 1 sin (h1 + h 8 + p + h 9) (10) where the p term is added as h 9 is measured with respect to GC instead of CG. The objective of Step 1 is to prove that L EO is a constant. h 8 and h 9 need to be eliminated when Eq. ( 10)is substituted into Eq. ( 9). h 8 is calculated by subtracting the angle of Links AC with respect to ground from the angle of Link CG with respect to ground, i.e. h 8 = ∠ CG -∠ AC (11) Since Loop PL ensures that triangles BCG and ACO are similar triangles, the angle of Link CG with respect to ground is the same as the angle of diagonal line CO with respect to ground ∠ CG = ∠ CO = -arctan L 1 sin h 1 L AO -L 1 cos h 1 (12) Fig. 7. Generalised coordinates. The angle of Link AC with respect to ground is simply h 1 ,thus h 8 = ∠ CG -∠ AC = -arctan L 1 sin h 1 L AO -L 1 cos h 1 -h 1 (13) Due to the symmetrical structure of four-bar linkage BCDG h 9 = h 8 = -arctan L 1 sin h 1 L AO -L 1 cos h 1 -h 1 ( 14 ) Substituting Eqs. ( 10), ( 13) and ( 14) into Eq. ( 9) yields L EO = L AO ( 15 ) Hence Joint E is proven to rotate around O. 
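Equations (9)-(15) can also be checked numerically: sweeping the input angle and evaluating h8, h9 and the position of E from the expressions above should give a constant distance to O. A minimal sketch of such a check is given below, using arbitrary illustrative dimensions rather than the prototype values.

```python
import numpy as np

# Numerical check of Step 1 (Eqs. (9)-(15)): the distance EO stays equal to L_AO.
L_AO, v = 1.0, 0.3                       # illustrative values only
L1 = v * L_AO                            # L_AC = L_CE, Eq. (8)
rom = 4 * np.arcsin(v)                   # range of motion, Eq. (3)

h1 = np.linspace(-rom / 2, rom / 2, 201)                          # input angle
h8 = -np.arctan2(L1 * np.sin(h1), L_AO - L1 * np.cos(h1)) - h1    # Eq. (13)
h9 = h8                                                           # Eq. (14)

xE = L1 * np.cos(h1) + L1 * np.cos(h1 + h8 + np.pi + h9)          # Eq. (10)
yE = L1 * np.sin(h1) + L1 * np.sin(h1 + h8 + np.pi + h9)
L_EO = np.hypot(xE - L_AO, yE)                                    # Eq. (9)

print(np.max(np.abs(L_EO - L_AO)))       # ~1e-16: Joint E stays on a circle about O
```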
Step 2 -position of point H The position of Point H is determined through a similar approach as that of Joint E. The distance between Point H and O is L HO = (xH -L AO) 2 + y 2 H ( 16 ) where x H and y H are x H = L 1 cos h 1 + L 1 cos (h1 + h 8 + p + h 9) + L 3 cos (h1 + h 8 + p + h 9 + h 7) y H = L 1 sin h 1 + L 1 sin (h1 + h 8 + p + h 9) + L 3 sin (h1 + h 8 + p + h 9 + h 7) (17) Given cable loop PU, the angles of Links EH and DG with respect to the ground are equal. Therefore, h 7 and h 6 are equal. From Eq. ( 7), the angle of Link DG with respect to ground is given by ∠ DG = 2∠ CG = -2 arctan L 1 sin h 1 L AO -L 1 cos h 1 ( 18 ) Hence h 7 = h 6 = ∠ DG -∠ CE = -2 arctan L 1 sin h 1 L AO -L 1 cos h 1 -(h1 + h 8 + p + h 9) = h 1 -p (19) Substituting Eqs. ( 12), ( 13), ( 17) and ( 19) into Eq. ( 16) yields L HO = L AO r ( 20 ) Therefore L HO is a constant. Since both L EO and L HO are constant, Link EH is proven to rotate around the remote centre and the cable linkage is proven to be an RCM mechanism. Cable tension analysis The RCM function proven in Section 2 is conducted under the condition that the cable loops are functioning. To achieve such condition, the following criteria must be achieved: 1. The cable loops (especially Loop FB) must not obstruct the movement of the links. 2. The two sides of a cable loop must be in tension in each direction of motion of the linkage, respectively. The functioning of cable linkage is proven through constraint approach analysis. Criterion 1 is proven based on the constraint equations derived from cable loops. Criteria 2 is proven based on the solutions of cable tension. In addition, the cable linkage is also proven to be fully-constrained (singularity-free) based on the number of constraint equations derived. The full proof is presented in this section. The constraint approach Such constraint-based analysis is based on a generalised constraint approach [START_REF] Chen | Power analysis of epicyclic transmissions based on constraints[END_REF], which is briefed below. The dynamics of a constrained physical system is described as [START_REF] Shabana | Computational Dynamics[END_REF] Q i = Q c + Q e (21) where Q i , Q c and Q e are the generalised inertia, constraint and external forces applied on the generalised coordinates q, respectively. The generalised constraint force Q c applied on q is related to the mechanical constraints of the systems through Q c = -C T q k ( 22 ) where k is the Lagrange multiplier, which represents the generalised constraint force acting along the mechanical constraints. C q is the derivative of the constraint matrix c q with respect to the generalised coordinates q, which is C q = ∂c q ∂q (23) c q represents the mechanical constraints of the system. Assuming that c q and q have dimensions of m and n respectively, the dimensions of Q c , C q and k are n×1, m×n, and m×1, respectively. The cable linkage (one-DOF) is fully constrained when n equals (m + 1). In the cable linkage, the constraints from cable loops are contained in c q . Hence, c q can be used to prove Criterion 1. C q and c q also satisfy ∂c ∂q d qd + ∂c ∂q i qi = C qd qd + C qi qi = 0 (24) where q i and q d are the independent and dependent generalised coordinates, respectively. C qi and C qd are the derivative matrices determined using q i and q d , respectively. In static analysis, the generalised inertia force Q i vanishes. Therefore, combining Eqs. 
( 21) and [START_REF] Hadavand | A novel remote center of motion mechanism for the force-reflective master robot of haptic tele-surgery systems[END_REF] gives Q e = C T q k (25) or equivalently Q ei = C T qi k Q ed = C T qd k ( 26 ) where Q ei and Q ed are the generalised external forces applied on q i and q d , respectively. Eq. ( 25) or ( 26) is used to determine the generalised constraint force k acting along the mechanical constraints, when external load Q e is given. Cable tension is contained in k and thus proves Criterion 2. One constraint equation for each pair of pulleys needs be derived to determine the cable tension, for each direction of motion of the linkage. The direction of such constraint force is dependent on the way in which the constraint equation c q is written in. An example is given below. Consider an arbitrary constraint equation that is defined as c q = p a -p b ( 27 ) where p a and p b are the physical quantities of Bodies A and B, respectively, which are used to construct the constraint equation. In this case the k determined using such constraint equation is the generalised constraint force acting on Body B by Body A. In the opposite case, if the positions of p a and p b are swapped in Eq. ( 27), k will give the generalised constraint force acting on Body A by Body B. Analysis on cable loops In this analysis, it is assumed that the cable is inextensible and there is no slip between the cable and pulleys. All the bodies are assumed to be rigid bodies with zero mass. Constraint equations from cable loops Given constant length of cable section connecting two pulleys and constant distance between pulleys, the sum of lengths of cable sections wrapping on two pulleys is also a constant. This is expressed mathematically as L PA + L PB = L const. ( 28 ) where L PA and L PB are the lengths of cable sections wrapping on arbitrary driving pulley PA and driven pulley PB. L const. is a constant that contains the overall length of cable section and the distance between two pulleys. Since the lengths of cable wrapping on pulleys are related to the rotation of pulleys and their adjacent links, generalised coordinates are embedded in Eq. ( 28). Rearranging Eq. ( 28) into the form of Eq. ( 27) yields (Lconst. -L PB) -L PA = 0 (29) thus according to the definition of Eq. ( 27), Eq. ( 29) gives the force applied by the cable section attached to Pulley PB on that attached to PA. Since the cable can wrap on a pulley in either clockwise or counter-clockwise direction, the expressions for L PA and L PB need to take into account the direction. The cases corresponding to clockwise and counter-clockwise cable wrapping are illustrated in Figs. 8 and9, respectively. In Figs. 8 and9, "Ref" represents an arbitrary reference where the angles are measured with respect to. h in , h q and h out are the angle where the cable starts to wrap on the pulley, generalised coordinate and angle where cable leaves the pulley, respectively. In the clockwise case, h in is the largest angle, followed by h q and h out . The length of cable from h in to h q is L CW = R (h in -h q) ( 30 ) and the length of cable from h q to h out is L CW = R (hq -h out ) (31) In the counter-clockwise case, h out is the largest angle, followed by h q and h in . The length of cable from h in to h q is L CCW = R (hq -h in ) (32) Fig. 9. Counter-clockwise cable wrapping. and the length of cable from h q to h out is L CCW = R (hout -h q) ( 33 ) The h in , h q and h out for all the pulleys are summarised in Tables 2 and3. 
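The four wrap-length cases of Eqs. (30)-(33) can be bundled into a single helper; the packaging below is our own, but the expressions are exactly those of the equations above, written so that the loop constraint of Eq. (28) can be assembled directly from Tables 2 and 3.

```python
def wrap_lengths(R, h_in, h_q, h_out, direction):
    """Cable lengths wrapped on a pulley of radius R, split at the
    generalised coordinate h_q (Eqs. (30)-(33)).

    direction: 'CW'  -> h_in is the largest angle, then h_q, then h_out
               'CCW' -> h_out is the largest angle, then h_q, then h_in
    Returns (length from h_in to h_q, length from h_q to h_out).
    """
    if direction == "CW":
        return R * (h_in - h_q), R * (h_q - h_out)
    if direction == "CCW":
        return R * (h_q - h_in), R * (h_out - h_q)
    raise ValueError("direction must be 'CW' or 'CCW'")

# Example: a clockwise wrap from h_in = 1.2 rad down to h_out = 0.2 rad on a
# 15 mm pulley, split at h_q = 0.5 rad; the two parts sum to R * (h_in - h_out).
seg_a, seg_b = wrap_lengths(15.0, 1.2, 0.5, 0.2, "CW")
```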
Tables 2 and3 correspond to the cable sections for upward and downward motions of the cable linkage, respectively, as illustrated in Figs. 5 and6, respectively. 0 is the additional constant angle introduced by crossed cable between P4 and P5. CW and CCW indicate clockwise and counter-clockwise directions, respectively. Substituting the h terms from Tables 2 and3 into Eq. ( 28) yields four constraint equations for each of the upward and downward motions of the cable linkage, respectively, which are between P1 and P2, P3 and P4, P4 and P5, and P6 and P7, respectively. Constraint equation from joint positions Apart from the cable loops, there are constraint equations from joint positions. Such constraint equations are derived based on shared joint position of two links. For example, position of Joint G can be derived from two paths: Links AB-BG and Links AC-CG, respectively. The position derived from the two paths must be identical x G|BG -x G|CG = 0 y G|BG -y G|CG = 0 ( 34 ) where x G|BG and y G|BG are the positions derived from Links AB-BG, and x G|CG and y G|CG are the positions derived from Links AC-CG. Six constraint equations are obtained, the paths are summarised in Table 4. Note that for Point H, the positions derived from path AC-CE-EH are equated to the generalised coordinates x H and y H . Proof of Criterion 1 Criterion 1 states that with given cable length, the rotation of pulleys must be consistent with the rotation of the links they are attached to, otherwise it will jam the linkage. It can be readily seen that Criterion 1 is met in Loops PL and PU. The proof for Loop FB based on constraint approach is presented below. The aim is to prove that the output rotation of the cable loop (h 9 ) is consistent with the h 9 derived from the links, as shown in Eq. ( 13), when a constant cable length from P3 to P5 is given. Substituting the h for P3 to P5 from Table 2 into Eq. ( 28) yields two constraint equations R h 4 -- p 2 + R h 8 - p 2 -0 = L const.34 (35) for P3 and P4, and R p 2 + 0 -h 9 + R p 2 + 0 -h 4 = L const R (p + h 8 -h 9 + 20) = L const.34 + L const.45 (37) Eq. ( 37) indicates that when given the correct cable length between P3 and P5 to start with, h 9 equals h 8 within full ROM of the cable linkage. Such outcome is consistent with Eq. ( 13), which means that the rotation of P5 is consistent with that of Link CE, thus Loop FB is not obstructing the movement of the linkage. Constraint matrix c q and derivative matrix C q Assembling the constraint equations derived from cable loops and joint positions yields the constraint matrices c q . Individual constraint equations within the matrix are arranged in the following order 1. c qP12 for Pulleys P1 and P2 in Loop PL. c q1 = ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ -R (h1 + h 2) R (h4 + h 8) R (p -h 4 -h 9 + 20) R (h6 -h 7) (L1 -L 2) cos h 1 -L 3 cos (h1 + h 2) + L CG cos (h1 + h 8) (L1 -L 2) sin h 1 -L 3 sin (h1 + h 2) + L CG sin (h1 + h 8) (L1 -L 2) (cos h 1 -cos (h1 + h 8 + h 9)) -L 3 (cos (h1 + h 2) + cos(h 1 + h 8 + h 9 + h 6 )) (L 1 -L 2 ) (sin h 1 -sin (h1 + h 8 + h 9)) -L 3 (sin (h1 + h 2) + sin (h1 + h 8 + h 9 + h 6)) x H -L 1 cos h 1 + L 1 cos (h1 + h 8 + h 0) + L 3 cos (h1 + h 8 + h 9 + h 7) y H -L 1 sin h 1 + L 1 sin (h1 + h 8 + h 0) + L 3 sin (h1 + h 8 + h 9 + h 7) ⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ (38) for the upward motion of cable linkage. Note that for the constraint equations derived from cable loops, the constants associated with cable lengths are not included, as they do not appear in the derivative matrix C q . 
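Once the constraint vector of Eq. (38) is coded as a function of the generalised coordinates, its derivative matrix can be generated numerically rather than differentiated by hand. The sketch below shows that pattern; only two representative rows of c_q are written out, and the remaining rows are placeholders to be filled in from Eq. (38).

```python
import numpy as np

R = 15.0   # pulley radius [mm]

def c_q(q):
    """Constraint vector of Eq. (38); q follows the Table 1 order:
    [h1, h2, h4, h6, h7, h8, h9, L_CG, xH, yH]."""
    h1, h2, h4, h6, h7, h8, h9, L_CG, xH, yH = q
    c = np.zeros(10)
    c[0] = -R * (h1 + h2)            # Loop PL (pulleys P1-P2), upward motion
    c[1] = R * (h4 + h8)             # Loop FB (pulleys P3-P4)
    # c[2] ... c[9]: remaining cable-loop and joint-position rows of Eq. (38)
    return c

def C_q(q, h=1e-7):
    """Derivative matrix of Eq. (23) by central finite differences."""
    q = np.asarray(q, dtype=float)
    J = np.zeros((10, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = h
        J[:, j] = (c_q(q + dq) - c_q(q - dq)) / (2 * h)
    return J
```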
For downward motion of the cable linkage, the joint constraint equations (last six) are the same. The cable constraint equations (first four) have negative signs to those in Eq. (38). The derivative matrix C q is given by where each column in the matrix contains the derivative of constraint matrix c q with respect to one of the generalised coordinates. The derivative matrices C qi and C qd for independent and dependent generalised coordinates, respectively, are which the two matrices are used for Eq. ( 26). C q = ∂cq ∂h 1 ∂cq ∂h C qi = ∂c q ∂h 1 ( Constraint status of cable linkage With ten generalised coordinates and ten constraint equations, the cable linkage seems to be over-constrained. However, certain retain constraint equations are redundant or vanish, resulting in fully-constrained one-DOF RCM mechanism. The cases where the links in non-overlapped and overlapped configurations are analysed separately. In non-overlapped configurations, Loop FB is redundant. Loop FB introduces one generalised coordinate h 4 and two constraint equations c qP34 and c qP45 . By excluding the generalised coordinate and constraint equations, the cable system is described by nine generalised coordinates with eight constraint equations. The cable linkage is fully constrained and has one DOF. The overlapped configuration is the singular configuration of a four-bar linkage, thus c qGx|DG from four-bar linkage BCDG vanishes, resulting in fully-constrained cable linkage. According to the constraint approach, the elimination of c qGx|DG can be proven mathematically by observing zeros for its corresponding terms in C q . Writing h 6 to h 9 , and L CG in terms of h 1 , and substitute into C q yields ∂ ∂q c qGx|DG = 0 0 0 0 0 0 0 0 0 (42) where h 6 to h 9 are given by Eqs. ( 13), ( 14)and ( 19), respectively, and L CG = 1 - 1 r (LAO -L 1 cos h 1) 2 + (L1 sin h 1) 2 (43) Thus the cable linkage is described by ten generalised coordinates with night constraint equations. The cable linkage is fully constrained and has one DOF. Calculation of cable tension In the analysis, no external load is applied on Pulleys P2 to P6 and the passive prismatic joint I. Further, the upward and downward are simulated by applying vertical force F Hy at point H. The torque on Joint E (t 7 ) and horizontal force on Point H (F Hx ) are zero. The generalised external force Q e applied on the generalised coordinates is given by Q e = t 1 t 2 • • • t 6 t 7 F CG F Hx F Hy = t 1 0 • • • 0 0 0 0 F Hy (44) where t 1 is the actuator input torque. Since the actuator input torque t 1 is unknown, Eq. ( 26) is used instead of Eq. ( 25), such that the constraint force can be calculated based on given t 7 , F Hx and F Hy , and then determine t 1 using the constraint force k. The corresponding Q ei and Q ed are Q ei = t 1 ( 45 ) and Q ed = 0 • • • 0 0 0 0 ∓F Hy (46) According to Eq. ( 26), the constraint force under given generalised external force is k = C -T qd Q ed (47) Following Section 3.5, Eq. ( 47) at the overlapped and non-overlapped configurations is solved separately, which yields analytical and numerical solutions, respectively. Analytical solution of k at overlapped configuration At the overlapped configuration, Eq. ( 47) which is solved with c qGx|DG is excluded from C qd . The dimension of C qd is 9 × 9 hence can be inversed directly. 
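At this overlapped configuration Eq. (47) therefore reduces to a plain square solve; a minimal sketch follows. The 9 x 9 matrix itself has to be assembled from Eqs. (39)-(41), and the sign of the tip load follows the convention of Eqs. (44)-(46), which flips with the direction of motion.

```python
import numpy as np

def constraint_force_overlapped(C_qd, F_Hy):
    """Eq. (47) at the overlapped configuration, where C_qd is 9 x 9:
    k = C_qd^{-T} Q_ed, with Q_ed carrying only the vertical tip load."""
    Q_ed = np.zeros(9)
    Q_ed[-1] = F_Hy              # last dependent coordinate is y_H, Eq. (46)
    return np.linalg.solve(C_qd.T, Q_ed)
```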
The analytical solution of cable tension, which are the first four elements in the constraint force k, is given by k 1 = ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ - F Hy L AO (-1+r+v+rv) rR(1+v) - F Hy vL AO rR - F Hy vL AO rR - F Hy L AO (-1+r) rR ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ (48) for the upward direction of motion, and k 2 = -k 1 ( 49 ) for the downward direction of motion. In both k, the first to fourth rows correspond to cable tension between P1 and P2, P3 and P4, P4 and P5, and P6 and P7, respectively. In Eq. ( 48), since F Hy is negative, r is greater than 1, and R and v are positive, all the elements are positive. As such, the cable forces within Loops PL, FB and PU are all tension. Similarly, in Eq. ( 49), since F Hy is positive, the cable forces are all positive. Hence, the cable loops are in tension at the overlapped configuration in both directions of motion of the cable linkage. Numerical solution of k at non-overlapped configuration At the non-overlapped configurations, the redundant constraints cannot be removed, as cable tension needs to be solved. The dimension of C qd is 10 × 9 and cannot be inversed directly. QR decomposition is applied to obtain the numerical solution C qd = Q 1 Q 2 R 1 0 ( 50 ) where h 6 to h 9 , and L CG in C qd are represented in terms of h 1 before QR decomposition is applied. Eq. ( 47) is written with Q 1 and R 1 as k = Q 1 R -T 1 Q ed (51) The numerical solution of constraint force is plotted in Fig. 10, for full ROM of a sample cable linkage whose dimensions are listed in Table 5. The numerical solutions are identical for the upward and downward motions of the cable linkage. In Fig. 10, P12, P34, P45 and P67 represent the cable tension in between Pulleys P1 and P2, P3 and P4, P4 and P5, and P6 and P7, respectively. The maximum cable tension is observed at the overlapped configuration. The magnitudes in Loops PL (P12), FB (P34 and P45) and PU (P67) are 119.1 N, 52.48 N, and 38.4 N, respectively, which agree with the analytical solutions. The cable tension P34 and P45 are identical, which also agrees with the analytical solution. The minimum tension are observed at both ends of ROM. The numerical solution indicates that the cable tension in all loops is positive within the full ROM. Verification of cable tension and proof of Criterion 2 The numerical solution obtained from the constraint approach is verified with static rigid body simulation conducted using ANSYS Rigid Dynamics package, with a 2D model of the same sample cable linkage. The model used is shown in Fig. 11. In the 2D model, links are used to mimic cable sections. The links are connected to pulleys with revolute joints, such that only tension and compression are allowed. The cable tension in one side of the cable loop is obtained by summing the magnitudes of tensile and compressive of the two links which are connected to the same pulley. To justify the direction of cable force, the joint force at the link corresponding to the cable that is expected to be in tension must be tension as well. The external force F Hy is simulated by a remote force applied at the tip of Link EH. The cable tension obtained through simulation is compared with the calculated results in Figs. 12 and13. The legends with "S" correspond to the simulation results. The comparison indicates that the cable tension in Loops PL and PU, as well as the actuator input torque t 1 obtained through the constraint approach is accurate. In Loop FB, exact solution is obtained at the overlapped configuration. 
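Both solution paths are easy to reproduce. Evaluating Eq. (48) with the Table 5 dimensions and F_Hy = -8 N gives peak tensions that match the values quoted for Fig. 10 to within rounding, and the QR route of Eqs. (50)-(51) is a one-line factorisation in NumPy. The sketch below shows both; it is meant as a cross-check, not as the original computation.

```python
import numpy as np

# Analytical peak tensions at the overlapped configuration, Eq. (48),
# evaluated for the sample linkage of Table 5 (upward motion, F_Hy = -8 N).
L_AO, v, r, R, F_Hy = 400.0, 0.3, 1.22, 15.0, -8.0
k_PL = -F_Hy * L_AO * (-1 + r + v + r * v) / (r * R * (1 + v))   # P1-P2
k_FB = -F_Hy * v * L_AO / (r * R)                                # P3-P4 and P4-P5
k_PU = -F_Hy * L_AO * (r - 1) / (r * R)                          # P6-P7
print(k_PL, k_FB, k_PU)   # ~119.2 N, 52.5 N, 38.5 N, cf. the peaks in Fig. 10

def constraint_force_qr(C_qd, Q_ed):
    """Eqs. (50)-(51) away from the overlapped configuration:
    C_qd is 10 x 9, so k = Q1 R1^{-T} Q_ed from a reduced QR factorisation."""
    Q1, R1 = np.linalg.qr(C_qd)               # C_qd = Q1 @ R1
    return Q1 @ np.linalg.solve(R1.T, Q_ed)
```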
The errors are relatively large towards two ends of the ROM, but the trend in the change in cable tension with respect to input angle is well captured. Therefore, the cable tension calculated from the constraint approach is verified, and it can be concluded that Criterion 2 for functioning of cable loops is proven. Calculation on cable stiffness The stiffness of the cable linkage is defined as the deformation of Point H under external load k = F Hy dy H = 8N 5mm = 1.6N/mm (52) The stiffness of the cable must be sufficient to achieve the overall stiffness. Since cable tension and elongation change with respect to the movement of the linkage, the maximum value is determined as the minimum required cable stiffness k = max k i dL i ( 53 ) where k is the minimum required cable stiffness in the cable loop. k i and dL i are the cable tension at different actuator input angle and its corresponding elongation, respectively. Due to the assumption of inextensible cable, conventional definition of elongation, which is defined as deformation of cable, cannot be applied in the analysis. Here, the elongation is defined as the infinitesimal travel distance of cable with respect to infinitesimal displacement at Point H dy H . The derivation of cable elongation is presented as follows. The travel distance of cable is related length of cable wrapping on pulley and hence can be written in terms of generalised coordinates where the function is the suitable equation among Eqs. ( 30) to (33). h angles are listed in Table 2 or 3. The mathematical expressions are (use P1, P3 and P6 for upward motion as examples) dL = function (R, h in , h q , h out ) (54) dL PL = Rdh 1 dL FB = Rdh 8 dL PU = Rdh 6 (55) where dL PL , dL FB and dL PU are the infinitesimal cable travel in Loops PL, FB and PU, respectively. dh 1 , dh 8 and dh 6 are the infinitesimal change in generalised coordinates. Eq. (55) needs to be further related to dy H using Eq. ( 24). Rearranging yields dq d = -C -1 qd C qi dq i ( 56 ) where dq d and dq i are the infinitesimal change in dependent and independent generalised coordinates, respectively. dy H , dh 8 , and h 6 are contained in dq d , while dh 1 is dq i . Eq. ( 56) writes dh 1 in terms of dy H . All other dependent generalised coordinates are written in terms of dh 1 and hence related to dy H . Eq. ( 56) is solved numerically using QR decomposition as dq d = -R -1 1 Q T 1 C qi dq i ( 57 ) where Q 1 and R 1 are obtained from Eq. ( 50). Substituting Eq. (57) into Eq. ( 55) yields the infinitesimal travel distance of cable in the cable loops. Further substituting such infinitesimal travel distance of cable into Eq. ( 53) gives the minimum required cable stiffness corresponding to the overall system stiffness. The results are illustrated in Fig. 14. The minimum required cable stiffness are 240.5 N/mm, 137.7 N/mm and 77.51 N/mm for Loops PL, FB and PU, respectively. In all three cable loops, the largest k are observed at the overlapped configuration. The pattern of change in k with respect to input angle in Fig. 14 is very similar to the pattern of change in tension with respect to input angle in Fig. 10, which means the cable tension is the dominating factor in stiffness. Comparison on device footprint For a 2-DOF RCM mechanism based on the cable linkage, the footprint is described through three approaches 1. The sweeping volume of linkage. 2. The space behind the input joint to permit the rotation of input link. The volume of links. 
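Before turning to the footprint comparison itself, note that the stiffness requirement of Section 4 can be transcribed almost literally. Reading the flattened Eq. (53) as the ratio max(k_i / dL_i), with dL_i the cable travel per unit displacement of Point H as defined above, a minimal sketch is:

```python
import numpy as np

def min_cable_stiffness(tension, dtheta_dyH, R):
    """Transcription of Eq. (53): k = max_i (k_i / dL_i), with
    dL_i = R * |d(theta)/d(y_H)| the cable travel per unit tip displacement
    (Eqs. (55)-(57)).  'tension' and 'dtheta_dyH' are sampled over the ROM."""
    dL = R * np.abs(np.asarray(dtheta_dyH))
    return np.max(np.asarray(tension) / dL)
```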
In this section, the footprint of cable-linkage is compared to that of a PB-linkage through the above-mentioned approaches. The comparison of footprint is conducted mathematically under a simplified surgical scenario, as shown in Figs. 15 and16, for the cable linkage and PB-linkage, respectively. Five configurations of both RCM mechanisms are shown, which are the two extremes in ROM, centre and two mid-point configurations as presented in Figs. 3 and4. In the simplified surgical scenario, the incision port is Point O. The ROM to be achieved is symmetric around the ground axis. R C is a coefficient less than one. It gives the radius of the circular region in terms of distance L AO , where the links (except for the output link) of the RCM mechanisms are not allowed to enter such region. The region with R C is set, such that more clearance near the incision port is left for the output links of multiple surgical arms and human surgeons to occupy. L AO is the same for both RCM mechanism. The ROM is changed from 30 to 120 • . R C is changed from 0.5 to 0.825. The dimensions and hence the footprints of the RCM mechanisms are calculated based on the given set of ROM and R C . For the cable linkage, the lengths of input and connector links (AC and CE) are determined from the ROM using Eq. ( 3). The lengths of BG and DG are calculated from Eq. ( 4), where r is assumed to be 1.22 and not changed in the simulation. For the PBlinkage, the length of input link AC is the same as the radius defined by R C and L AO , such that the output joint C is just on the edge of the circle. The length of connector link BC equals L AO . The results of the comparison are presented as the percentage difference of the footprint of cable linkage, over that of the PB-linkage, expressed as d = f CL -f PB f PB × 100% (58) where f CL and f PB are the footprints of the cable linkage and PB-linkage, respectively. Approach 1 In MIS applications, a planar RCM mechanism with smaller enclosed area is desired, such that the sweeping volume and the chance of collision between surgical arms are reduced. In Approach 1, the footprint is described as the characteristic enclosed area. Such characteristic enclosed area is calculated by summing the four out of five configurations in Figs. 15 and16, such that the change in enclosed area with respect to movement of linkage is taken into account. The overlapped configuration is not considered, as the enclosed area is zero. The mathematical expression is f 1 = f top + f mid-top + f mid-bottom + f bottom (59) where f 1 is the characteristic enclosed area and the four remaining f terms are the enclosed area at four configurations, respectively. The enclosed area of the cable linkage and PB-linkage is defined by four-bar linkage ACEO and parallelogram ABCO in Figs. [START_REF] Stoianovici | Remote Center of Motion Robotic System and Method[END_REF] and 16, respectively. The results are illustrated as a contour plot in Fig. 17. The blank region at the top right corner of Fig. 17 is where the cable link fails to stay outside the radius with given R C . The edge of such region corresponds to the configuration of the cable linkage, where Joint G is on the edge of the circle, as illustrated in Fig. 15. The result reveals that the enclosed area of the cable linkage is smaller than that of the PB-linkage in about half of the design points (sets of ROM and R C ). The cable linkage has smaller enclosed area at large R C (>0.5). 
The smallest enclosed area is achieved at the maximum R C of 0.81, and the corresponding percentage difference is -35 %. Cable linkage has a relatively smaller enclosed area because its geometry is fully defined by the ROM (when r is given). In the contrast, the length of Links AB and CO and hence the footprint of the PB-linkage increase with R C . Approach 2 Both RCM linkage require additional clearance behind input joint A to allow the movement of linkage. As such, the region needs to be kept clear during the operation. A large region will increase the space needed to maneuver the linkage. In addition, a large region requires a larger base to hold the RCM mechanism, thus increases the overall dimension of the robotic manipulator. The footprint in terms of the aforementioned region is represented by the sweeping L 4 in Figs. 18 and 19 around Joint A and behind Joint A. For cable linkage, L 4 equals L AC . Link AC is behind Joint A when actuator input angle is within p/2 to 3p/2. For PB-linkage, L 4 equals the length Link AB and thus the radius of circle. Link AB sweeps behind Joint A in the full ROM. The mathematical expressions are f CL2 = 1 2 pL 2 4 ( 60 ) f PB2 = ROM 2p pL 2 4 The results are illustrated as a contour plot in Fig. 20. The result shows that the footprint (Approach 2) of the cable linkage is smaller than that of the PB-linkage in more than half of the design points. The maximum reduction in footprint is 83% at ROM of 30 • and R C equals 0.8. The footprint of the cable linkage is smaller in the cases where ROM is small and/or R C is large. On the other hand, since the dimension of the cable linkage is defined by the ROM, the footprint is large at large ROM and/or small R C . Approach 3 There are cases in MIS applications where a surgical manipulator stays stationary while the other robotic arms or human surgeons are conducting manipulations. Therefore, the footprint in terms of the volume taken by the stationary links themselves must be taken into account. Since the volume of the links is proportional to the length of links, the sum of lengths of links is used as the measurement of footprint in Approach 3. For cable linkage, the links involved are Links AC, CE, BG and DG. Diagonal link CG is not considered as it is enclosed by BG and DG. For PB-linkage, the links involved are Links AB and BC. The mathematical expressions are f CL3 = L AC + L BG + L DG + L CE (61) f PB3 = L AB + L BC The results are illustrated as a contour plot in Fig. 21. - The figure shows that the footprint (Approach 3) is smaller than that of the PB-linkage in all the achievable design points. The percentage reduction in footprint varies between -50% and -80 %, which is significant. Again, the cause of the difference is the fact that the footprints of two RCM mechanisms are dependent on ROM and R C , respectively, while irrelevant to the other. The comparison in footprint through three approach shows that the cable linkage has a smaller sweeping volume and requires less space behind the input joint in more than half of the design points. In addition, the overall lengths of the links are shorter in all design points, resulting in smaller and lighter linkage. Therefore it can be concluded that the cable linkage has advantages over the PB-linkage in terms of footprint in MIS applications. The scenario that suits the cable link best is where the required ROM is relatively small while the RCM mechanism has to be kept faraway from the incision port. 
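The Approach 2 and Approach 3 measures can be tabulated directly from Eqs. (58), (60) and (61); the sketch below does so for one (ROM, R_C) design point. The cable-linkage lengths are reconstructed from Eqs. (3) and (4), and the PB-linkage lengths R_C*L_AO and L_AO are taken from the scenario description, so the numbers should be read as indicative only.

```python
import numpy as np

L_AO, r = 1.0, 1.22                      # normalised ground length; r as in Section 5

def footprint_difference(rom_deg, Rc):
    """Percentage difference d of Eq. (58) for Approaches 2 and 3."""
    rom = np.radians(rom_deg)
    v = np.sin(rom / 4)                  # from ROM = 4 arcsin(v), Eq. (3)
    L_AC = v * L_AO
    L_BG = (1 - 1 / r) * L_AO            # Eq. (4)

    f_CL2 = 0.5 * np.pi * L_AC**2                         # Eq. (60), cable linkage
    f_PB2 = (rom / (2 * np.pi)) * np.pi * (Rc * L_AO)**2  # Eq. (60), PB-linkage
    f_CL3 = 2 * L_AC + 2 * L_BG                           # Eq. (61), cable linkage
    f_PB3 = Rc * L_AO + L_AO                              # Eq. (61), PB-linkage

    d = lambda cl, pb: 100.0 * (cl - pb) / pb             # Eq. (58)
    return d(f_CL2, f_PB2), d(f_CL3, f_PB3)

print(footprint_difference(30, 0.80))    # Approach 2: ~ -84 %, near the quoted -83 %
```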
Prototype of cable linkage A passive concept-proving prototype of the cable linkage is built. The dimensions of the linkage are given in Table 5. The ROM is 69 • , which satisfies the requirement of 60 • ROM in most of the abdominal MIS applications [START_REF] Lum | The REVAN: design and validation of a telesurgery system[END_REF]. The additional 9 • are added such that v is 0.3 and L 1 is an integer. r is selected to be 1.22 to minimise rounding of L 2 and L 3 . The selection of v and r helps reducing machining error and thus the potential positioning error of the remote centre. Note that due to the limit on the available cable, the minimum required stiffness presented in Fig. 14 is not applied on the cable in Loop FB. The mechanical constraint of a cable loop can be achieve identically through end-less tendon drive or open-ended tendon drive, as classified in [START_REF] Tsai | Design of tendon-driven manipulators[END_REF]. End-less tendon drive is literally closed-loop cable. It is structurally simpler and can theoretically be fully controlled by less actuators comparing to the open-ended tendon. On the other hand, open-ended tendon drive, which actively controls a pulley with two antagonistic tendons that are connected to their own actuators, respectively, yields zero or small backlash in the expanse of complexity. As the prototype is built for proof of concept, closed-loop cable is implemented in the prototype. To minimise the backlash, timing belt is used and the pre-tension is applied. The CAD model of the cable linkage, showing two limits and the centre of the ROM, is illustrated in Fig. 22. The centre and lower limit configurations are combined in the figure by photoshop hence are semi-transparent. The sphere at the bottom right corner of the figure indicates the RC. The design of output link is arbitrary and yet to be optimised for specific MIS applications. The output ROM is 105 to 175 • from the positive x-direction. The belts and cable sections are not shown. A side view of the prototype is illustrated in Fig. 23. In the prototype, the widths of Links AC and CE are 16 mm. The radius of pulleys in Loops PL and PU is reduced to 12 mm due to the available pulley size. It can be observed from the figure that the area enclosed by Loops PL and PU is not significantly larger than the sizes of Links AC and CE, despite the outstanding tensioners located in the middle of the links. In addition, the pulleys in Loop FB are mostly enclosed by Links BG and DG in the side view, thus do not cause significant increase in the size of mechanism. Top views of the prototype are illustrated in Figs. 24 and 25. Fig. 24 annotates the links and Loops PL and PU. In both loops, timing belts are used to minimise backlash. It also eliminates the chance of slip of cable on pulleys in Loops PL and PU, where the consequence of such cable slip is the lost of RCM function. To increase the rigidity of the mechanism, a duplicated pair of Links AC and AF is introduced. Both links are located above Links CE, P6 and P7 in Fig. 24. Fig. 25 annotates Loop FB. In this prototype, Loop FB is achieved by using two cable loops, one between P3 and P4 and the other between P4 and P5. Two pulleys, P4 and P4 are rigidly connected to each other and mounted at Joint I. P4 is connected to P3, and P4 is connected to P5 with crossed cable. Pulleys P3 and P5 are rigidly connected to Links AC and CE, respectively, by 3D-printing the pulley and the link as one part. 
To enable the crossed configuration of cable between P4 and P5, a round belt is used. The presence of tension in the correct sides of cable loops is vital for verifying the design concept. On the other hand, the exact magnitude of tension is of less importance, as it is primarily used as a prototyping guideline. The sign of cable tension can be readily observed through higher transverse cable stiffness on the driving side of a cable loop than that on the driven side. By manually applying force at the output link and comparing the cable stiffness, it is found that the presence of tension within the full range of motion is consistent with Figs. 5 and6, as well as the outcomes in Section 3 . Apart from that, the prototype is manipulated to the centre configuration, where loads are applied manually in the attempt to force the prototype into the undesired configuration. Jamming of linkage is observed, thus verifies that the linkage can only turn into the desired configuration. Conclusion This paper introduces a cable linkage with RCM. The design of the cable linkage is presented, and proof on RCM function is conducted mathematically. The tension in all cable sections is determined by means of a constraint approach. The analytical solution at the overlapped configuration is derived. The numerical solution at non-overlapped configurations are obtained through QR decomposition. The tensions are positive within the full working range, validating the function of the linkage. The cable tension calculated is verified through finite element simulation. The constraint approach yields exact solution of cable tension in Loops PL and PU within full ROM, and in Loop FB at overlapped configuration. The error in Loop FB at nonoverlapped configurations is small. The constraint approach is further used to determine the minimum required cable stiffness, corresponding to a given overall system stiffness at the output link. Both the tension and infinitesimal displacement of cable are derived from the constraint approach and solved with QR decomposition. Quantitative comparison between the footprints of the cable linkage and PB-linkage is conducted in a simplified surgical scenario. The footprint is described in three approaches. In two of the approaches, the cable linkage yields smaller footprint in half of the design points. In the last one the cable linkage achieves 50% to 80% reduction in footprint. A prototype is constructed for proof of concept. Closed-loop cable is used to simplify the architecture. Timing belts are used in Loops PL and PU to prevent slip of cable on pulley. Loop FB is further divided into two loops, in between P3 and P4, and P4 and P5, respectively. Round belt is implemented between P4 and P5 to enable crossed cable configuration. The correct presence of tension within full range of motion, as calculated in Section 3, is validated. The prototype is also proved to be prevented from turning into the undesired configuration. Fig. 1 . 1 Fig. 1. Configuration of links. Fig. 2 . 2 Fig. 2. Configuration of links and cable loops. Fig. 3 . 3 Fig. 3. Configurations at two extremes and centre of ROM. Fig. 4 . 4 Fig. 4. Configurations at mid-points. Fig. 5 . 5 Fig. 5. Cables in tension in upward motion. Fig. 8 . 8 Fig. 8. Clockwise cable wrapping. 2 . 2 c qP34 for Pulleys P3 and P4 in Loop FB. 3. c qP45 for Pulleys P4 and P5 in Loop FB. 4. c qP67 for Pulleys P6 and P7 in Loop PU. 5. c qGx|CG for x position of Joint G from paths AB-BG and AC-CG. 6. 
c qGy|CG for y position of Joint G from paths AB-BG and AC-CG. 7. c qGx|DG for x position of Joint G from paths AB-BG and AC-CD-DG. 8. c qGy|DG for y position of Joint G from paths AB-BG and AC-CD-DG. 9. c qHx|EH for x position of Point H from generalised coordinate x H and path AC-CE-EH. 10. c qHy|EH for y position of Point H from generalised coordinate y H and path AC-CE-EH. The constraint matrices are Fig. 10 . 10 Fig. 10. Numerical solution of k for sample cable linkage. Fig. 11 . 11 Fig. 11. 2D model in ANSYS. Fig. 12 . 12 Fig. 12. Comparison of cable tension in Loops PL and PU. Fig. 13 . 13 Fig. 13. Comparison of cable tension in Loop FB. Fig. 14 . 14 Fig. 14. Cable stiffness. Fig. 15 . 15 Fig. 15. Cable linkage in surgical scenario. Fig. 16 . 16 Fig.16. PB-linkage in surgical scenario. Fig. 17 . 17 Fig. 17. Percentage difference in footprint -Approach 1. Fig. 18 . 18 Fig.18. Cable linkage in surgical scenario. Fig. 19 .Fig. 20 . 1920 Fig. 19. PB-linkage in surgical scenario. Fig. 21 . 21 Fig. 21. Percentage difference in footprint -Approach 3. Fig. 22 . 22 Fig. 22. CAD model of cable linkage. Fig. 23 . 23 Fig. 23. Right side view of prototype. Fig. 24 . 24 Fig. 24. Top view of prototype 1. Fig. 25 . 25 Fig. 25. Top view of prototype 2. Table 1 1 List of generalised coordinates. q Definition Reference h 1 Angle of Link AC AF h 2 Angle of Link BG and P2 AC h 4 Angle of P4 CG h 6 Angle of Link DG and P6 CE h 7 Angle of Link EH and P7 CE h 8 Angle of Link CG AC h 9 Angle of Link CE GC L CG Distance between CG n/a x H Horizontal position of H A y H Vertical position of H A Table 2 2 Cable and pulley angles for upward motion. Pulley h in (rad) hq (rad) hout (rad) Direction Reference P1 n/a 0 h 1 + p/2 C W AF P2 p/2 h 2 n/a CW AC P3 n/a 0 h 8 -p/2 CCW AC P4 -p/2 h 4 p/2 + 0 CCW CG P5 p/2 + 0 h 9 n/a CW GC P6 n/a h 6 p/2 C W CE P7 p/2 h 7 n/a CW CE Table 3 3 Cable and pulley angles for downward motion. Pulley h in (rad) hq (rad) hout (rad) Direction Reference P1 n/a 0 h 1 -p/2 CCW AF P2 -p/2 h 2 n/a CCW AC P3 n/a 0 h 8 + p/2 C W AC P4 p/2 h 4 -p/2 -0 CW CG P5 -p/2 -0 h 9 n/a CCW GC P6 n/a h 6 -p/2 CCW CE P7 -p/2 h 7 n/a CCW CE Table 4 4 Joints for constraint equations. Joint Path 1 Path 2 G AB-BG AC-CG G AB-BG AC-CD-DG H x H and y H AC-CE-EH Table 5 5 Dimensions of sample cable linkage. Parameters Magnitude L AO (mm) 400 v 0.3 r 1.220 L 1 (mm) 120 L 2 (mm) 98.4 L 3 (mm) 72 R (mm) 15 F Hy (N) ∓8
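As a quick consistency check, the Table 5 link lengths and range of motion follow directly from Eqs. (3), (4) and (8):

```python
import numpy as np

L_AO, v, r = 400.0, 0.3, 1.22            # Table 5
print(v * L_AO)                          # L1 = 120 mm
print(v * L_AO / r)                      # L2 ~ 98.4 mm (as rounded in the table)
print((1 - 1 / r) * L_AO)                # L3 ~ 72 mm (as rounded in the table)
print(np.degrees(4 * np.arcsin(v)))      # ROM ~ 69.8 deg (quoted as 69 deg in the text)
```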
01773525
en
[ "scco.psyc", "scco.neur" ]
2024/03/05 22:32:18
2006
https://hal.science/hal-01773525/file/2006_Bidet-Ildeietal_CPL_visusalperception.pdf
Christel Bidet-Ildei email: [email protected] David Méary Jean-Pierre Orliaguet Visual Perception of Elliptic Movements in 7-to-11-year-old Children: Influence of Motor Rules: from the age of 7, perceptual adjustments conform to the isochrony principle. These results make it possible to discuss the links between motor processes and perception. INTRODUCTION The human visual system is extremely sensitive to human motion. In the past, several studies showed that, even when movement is represented by a simplified point-light display, observers can discriminate human body movements from moving objects [START_REF] Bingham | Dynamics and the orientation of kinematic forms in visual event recognition[END_REF]. They identify actions such as walking or dancing [START_REF] Johansson | Visual perception of biological motion and a model for its analysis[END_REF][START_REF] Johansson | Visual motion perception[END_REF], the gender and the identity of a person (Cutting & Kozlowski, 1977; Kozlowski & Cutting, 1977) and even the properties of handled objects such as the weight of a lifted object [START_REF] Runeson | Visual perception of lifted weight[END_REF], 1983). Other results showed that when subjects have to visually evaluate the velocity of human movements (i.e., pointing movements, handwriting, drawing an ellipse) they prefer those that conform to motor laws [START_REF] Meary | Visual perception of writing and pointing movements[END_REF][START_REF] Viviani | The effect of movement velocity on form perception: geometric illusions in dynamic displays[END_REF]. For example, when a subject is asked to adjust the velocity of a reaching movement, a writing movement or an elliptic movement, he/she tends to choose movement durations which are respectively in line with Fitts' law, the isochrony principle and the two-thirds power law. To explain this high sensitivity to human movement, it has been suggested that visual identification of human movement could be based not only on visual experience but also on motor experience (i.e., [START_REF] Jeannerod | Neural simulation of action: a unifying mechanism for motor cognition[END_REF][START_REF] Jeannerod | Mental imaging of motor activity in humans[END_REF]). In other words, the recognition of human movements would be the result of motor-perceptual interactions. Several results are in accord with this view. Observers recognise point-light displays representing their own movement better than movements of their friends [START_REF] Beardsworth | The ability to recognize oneself from a video recording of one's movements without seeing one's body[END_REF][START_REF] Loula | Recognizing people from their movement[END_REF]. Patients with motor deficits often have difficulty recognising human movements, as is the case for dysgraphic [START_REF] Chary | Influence of motor disorders on the visual perception of human movements in a case of peripheral dysgraphia[END_REF] and apraxic [START_REF] Heilman | Two forms of ideomotor apraxia[END_REF] patients. Finally, neuroimaging studies show that both observation and execution of movements activate common brain regions [START_REF] Hari | Activation of human primary motor cortex during action observation: a neuromagnetic study[END_REF][START_REF] Peuskens | Specificity of regions processing biological motion[END_REF][START_REF] Saygin | Point-light biological motion perception activates human premotor cortex[END_REF].
Some developmental studies also indicate that the motor competence of the observer could be involved, at least in part, in the visual perception of human movements. For example, children with articulatory disorders (i.e., d/b confusion) have more difficulties in lips reading than children without such motor difficulties [START_REF] Desjardins | An exploration of why preschoolers perform differently than do adults in audiovisual speech perception tasks[END_REF]. In the same way, reading deficits is often associated with motor disorders [START_REF] Felmingham | Visual and visuomotor performance in dyslexic children[END_REF]. Finally, an experiment carried out by Louis-Dam, [START_REF] Louis-Dam | Anticipation motrice et anticipation perceptive[END_REF] showed that in children the ability to visually anticipate the forthcoming movement in a motor sequence is directly influenced by their level of motor competence. In this theoretical context, the aim of the present research is to bring additional evidence that, in children, visual perception of human movement is influenced by motor rules. To this end, 7 to 11 year-old children were asked to evaluate and to adjust the velocity of a dot depicting an elliptic movement. In motor production, [START_REF] Viviani | A developmental study of the relationship between geometry and kinematics in drawing movements[END_REF] showed that from 7 years, the duration of elliptic movements conforms to the isochrony principle: the duration of the movement tended to be constant irrespective of the perimeter of the ellipse. If, as showed by former studies that visual perception of human movement tends to conform to motor rules, the visual evaluation of time movement will also tend to conform to the isochrony principle from 7 years of age. METHOD Participants Forty-five children participated in the experiment. They were divided into 3 age groups of 15 children each: 7 years (mean age 6 years 11 months), 9 years (mean age 8 years 9 months) and 11 years (mean age 10 years 9 months). A group of 15 adults, students at the university, was used as control. All subjects were right-handed and had normal or corrected to normal vision. Stimuli The visual stimuli consisted of 6 elliptic movements. The perimeter of the ellipse was respectively 2. 94, 5.25, 9.36, 16.69, 29.74 or 53 cm long. The semi axis ratio b/a (0.425) and the eccentricity of the ellipse ∑ (0.9) were constant whatever the perimeter of the ellipse (see figure 1). The major semi axis of the ellipse a was rotated by 45 degrees counter-clockwise. A software permitted to modify the trajectory perimeter (Pe) or the movement period (P). For all period values, the velocity profile of the movement respected the twothird power law (covariation velocity-curvature) observed in the production of movement [START_REF] Lacquaniti | The law relating the kinematic and figural aspects of drawing movements[END_REF]. Procedure Participants seated at a distance of 50 cm from a computer screen (17', resolution 1024*768 pixels, sampling rate 85 Hz) in a dimly illuminated room (see figure 2). Each trial consisted in the presentation of a black spot (Ø = 0.4 cm) depicting an elliptic movement on a white background area (22*22 cm). Participants were asked to adjust the period of movement (that is to find their "preferred velocity") by pressing the arrows (← or →) of the keyboard. The adjustments allowed increasing or decreasing of the duration of the period by step of 25 ms. 
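For an ellipse traced as simple harmonic motion, the tangential velocity covaries with curvature as v proportional to C^(-1/3), so the two-thirds power law holds exactly; a stimulus of the kind described can therefore be generated as in the sketch below. The perimeter calibration and the conversion to screen coordinates are our own assumptions, not details given in the text.

```python
import numpy as np

def ellipse_stimulus(perimeter_cm, period_s, ratio=0.425, fps=85, tilt_deg=45):
    """Dot positions (in cm) for one cycle of an elliptic movement whose
    kinematics follow the two-thirds power law (harmonic parametrisation)."""
    # Scale the semi-axes so that the numerically integrated perimeter matches.
    t = np.linspace(0.0, 2 * np.pi, 4096)
    speed = np.hypot(-np.sin(t), ratio * np.cos(t))
    unit_perimeter = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
    a = perimeter_cm / unit_perimeter
    b = ratio * a

    n = int(round(period_s * fps))                     # samples per cycle at 85 Hz
    phase = 2 * np.pi * np.arange(n) / n
    x, y = a * np.cos(phase), b * np.sin(phase)        # harmonic -> 2/3 power law
    rot = np.radians(tilt_deg)                         # major axis rotated 45 deg CCW
    return x * np.cos(rot) - y * np.sin(rot), x * np.sin(rot) + y * np.cos(rot)

x, y = ellipse_stimulus(perimeter_cm=16.69, period_s=1.5)
```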
The experiment was run in a single session including 6 blocks of trials. Each block comprised 6 trials corresponding to each perimeter of the ellipse. Therefore each subject performed 36 trials. The order of presentation of blocks and trials, and the initial period value of each trial (Pi) were randomized. Short periods of rest separated each block of trial. Data analysis The results were analysed by using the same formalisation and the same procedure than those used in Viviani & Schneider's experiment (1991). The relation between the perimeter of the ellipse (Pe) and the final period (Pf) chosen by the participants was formalised by the power function Pf = P0 * Pe γ where γ represents the exponent of the function and P0 a baseline period which depends of each subject (see figure 3). Then, we evaluated for each subject the coherence of the results, the degree of isochrony and the movement speed by measuring the correlation coefficient r), the exponent γ) and the baseline period (P0) according to age. RESULTS The mean of the correlation coefficient, of the exponent, and of the baseline period were calculated and statistically evaluated with an ANOVA with age as between factor. Because the correlation coefficients and the baseline period (P0) did not follow a normal distribution, data has been transformed. For the coefficients of correlation we used the Fisher hyperbolic tangent transform [START_REF] Kendall | The advanced theory of statistics[END_REF]) and the baseline period was analysed by using the logarithm of P0. Results obtained in children and in adults were analysed separately. Correlation coefficient (r): the statistical analysis revealed a significant effect of age [F (2,42) = 6.09, p < 0.01]. We observed an increase of r value between 7 (0.71) and 9 years (0.88), a stabilisation between 9 and 11 years (0.85) and again an increase between 11 year-old children and adults (0.95) [t (28) = 3.67, p < 0.01]. This result indicates that the within variability is higher at 7 years and tends to decrease with age. However, it should be noted that whatever the age, the values of the coefficient correlations are very high. This result shows that the link between the final period and the perimeter of the ellipse is well approximated by the power function. Exponent γ): there was no effect of age [F (2,42) = 1.54, p = 0.22]. It is noteworthy that the mean value of γ (0.41 ± 0.14) is not different from the value (0.44) obtained by [START_REF] Viviani | A developmental study of the relationship between geometry and kinematics in drawing movements[END_REF] in the motor task [t (44) = 1.61, p = 0.11]. In addition, we did not observe any significant difference between children (0.41 ± 0.14) and adults (0.46 ± 0.19) [t (58) = 0.98, p = 0.33]. Baseline period (P0): age has no effect on the performances [F (2,42) = 0.40, p = 0.67]. Whatever the age, the mean baseline period was 0.43 s and no significant difference was observed between children (0.42 ± 0.29) and adults (0.37 ± 0.20) [t (58) = 0.73, p = 0.46]. DISCUSSION The purpose of this experiment was to know if the visual perception of human movements was influenced by the rules of motor production. Our results showed a very high similarity between the performances observed in the visual perception of elliptic movements and those obtained by Viviani & Schneider in the motor production of ellipses (1991). 
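The per-subject procedure described under Data analysis reduces to a straight-line fit in log-log coordinates plus Fisher's hyperbolic-tangent transform of the correlation; the sketch below illustrates it on made-up adjustment data (the values are not the children's actual settings).

```python
import numpy as np

def fit_isochrony(perimeter_cm, final_period_s):
    """Fit Pf = P0 * Pe**gamma by least squares in log-log coordinates and
    return (P0, gamma, r, z), with z the Fisher-transformed correlation."""
    x, y = np.log(perimeter_cm), np.log(final_period_s)
    gamma, logP0 = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return np.exp(logP0), gamma, r, np.arctanh(r)

Pe = np.array([2.94, 5.25, 9.36, 16.69, 29.74, 53.0])   # the six perimeters used
Pf = np.array([0.93, 1.15, 1.41, 1.74, 2.14, 2.61])     # illustrative settings [s]
print(fit_isochrony(Pe, Pf))   # gamma ~ 0.36 for these values; the study found ~0.41
```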
Indeed, the exponents of the power function that defines the relation between perimeter and movement time (the final period chosen by the subject) are not different in the perceptual (0.41) and motor (0.44) tasks. These findings therefore indicate that in children, from 7 years of age, the isochrony principle is observable both in production and in perception. Although the perimeter was multiplied by 18 (2.94 to 53 cm), the final period chosen by the participants was only multiplied by about 3 (930 to 2606 ms). These findings thus demonstrate that a similar principle determines motor and perceptual performances, and suggest that the perceptual abilities of children may depend on their level of motor development. Such similarities between perceptual and motor behaviours may be interpreted within the motor simulation theory [START_REF] Jeannerod | Neural simulation of action: a unifying mechanism for motor cognition[END_REF]. According to this theory, the motor system is considered, at a covert stage, as a simulation system that is activated in self-intended action and also in the recognition of others' actions. At this covert stage, the action is not executed, but the way to reach the goal and the consequences of the action on both the organism and the external world are simulated. It can therefore be hypothesised that when perceiving a movement, children base their decision on an internalised simulation of the elliptic movement, which leads them to prefer stimuli that share common kinematic properties with their own motor productions, that is, those which conform to the isochrony principle. This assumption is in accordance with clinical observations showing the role of motor competence in the visual perception of human movement (e.g., [START_REF] Chary | Influence of motor disorders on the visual perception of human movements in a case of peripheral dysgraphia[END_REF]) and with neuroimaging studies showing the activation of neuronal motor structures during the visual perception of human movements [START_REF] Chaminade | Is perceptual anticipation a motor simulation? A PET study[END_REF][START_REF] Decety | Brain activity during observation of actions. Influence of action content and subject's strategy[END_REF][START_REF] Hari | Activation of human primary motor cortex during action observation: a neuromagnetic study[END_REF][START_REF] Nishitani | Temporal dynamics of cortical representation for action[END_REF][START_REF] Rizzolatti | Localization of grasp representations in humans by PET: 1. Observation versus execution[END_REF]. Despite the numerous pieces of evidence for a connection between perception and action, a few authors consider that the visual perception of dynamic events could be determined by general invariants which emerged from the evolution of the species and which would be present from the very beginning of development, that is, long before the emergence of motor competences [START_REF] Shepard | Path-guided apparent motion[END_REF][START_REF] Vallortigara | Visually inexperienced chicks exhibit spontaneous preference for biological motion patterns[END_REF]. This hypothesis is supported by some results obtained in infants and in patients with impaired motor functions. [START_REF] Fox | The perception of biological motion by human infants[END_REF] demonstrated that 8-week-old infants exhibit a preference for a point-light walker over the same configuration inverted by 180 degrees.
Moreover, it has been shown that by 3-5 months of age, infants discriminate a point-light walker from displays with perturbed local rigidity [START_REF] Bertenthal | Perception of biomechanical motions by infants: implementation of various processing constraints[END_REF] or with scrambled spatial relations between the dots [START_REF] Bertenthal | Infant sensitivity to figural coherence in biomechanical motions[END_REF]. In addition, Pavlova and her colleagues (2003) have shown that adolescents with congenitally impaired locomotion can exhibit high sensitivity to a point-light walker. These results therefore show that motor competences are not necessary to perceive and recognize human movements. Animal studies point to a similar conclusion by suggesting that the perception of biological movement could be an intrinsic capacity of the vertebrate visual system. For example, [START_REF] Vallortigara | Visually inexperienced chicks exhibit spontaneous preference for biological motion patterns[END_REF] reported that chicks, hatched and reared in darkness, exhibit a preference for biological movements from the first presentation of the stimuli after birth: they tend to prefer the biological movement of a hen over a rigid or random motion. It is noteworthy that a similar behaviour is observed when the stimuli represent the movement of a cat. This latter result suggests that the preference for biological motion in newborn chicks is not species-specific and therefore does not depend on motor ability. Taken together, these results suggest that the isochrony principle observed in visual perception could be explained either by an activation of the motor system or by a genetic predisposition. Our experiment, carried out in 7-to-11-year-old children, does not allow us to decide between these two hypotheses, though it seems hardly plausible that all the motor rules are available at birth. It does, however, demonstrate that in children the visual perception of an elliptic movement is directly influenced by some intrinsic properties of the motor system, i.e., the isochrony principle. Figure 1: Kinematic characteristics of the stimulus. A) Form of the elliptic trajectory. B) Velocity profile for one period (one cycle), with Vt the tangential velocity and t the movement time. Figure 2: Experimental set-up. The subject sat in front of a screen on which a dot depicted an elliptic trajectory; the task consisted of adjusting its velocity by using the arrow keys of the keyboard. Figure 3: Example of the power approximation Pf = P0 * Pe^γ for one participant chosen randomly in the adult group. Figure 4: Means of the correlation coefficients (r), of the exponents (γ) and of the baseline periods (P0) as a function of age.
01592822
en
[ "spi", "spi.meca.mefl" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01592822/file/ijnmf_preprint.pdf
Shang-Gui Cai email: [email protected] Abdellatif Ouahsine Julien Favier Yannick Hoarau Moving immersed boundary method Keywords: Immersed boundary method, Projection method, Fractional step method, Implicit scheme, Lagrange multiplier, Incompressible viscous flow INTRODUCTION The immersed boundary method (IBM) has emerged in recent years as an alternative to traditional body-conforming mesh methods for simulating fluid flows over complex and moving objects. By adding an appropriate boundary force to the fluid equations to account for the presence of immersed solid boundaries, the simulations can be performed on a very simple Cartesian mesh. This significantly eases complicated mesh generation and eliminates moving-boundary related issues, such as mesh distortion and mesh interpolation errors due to mesh deformation and re-meshing. Since it was first introduced by Peskin [START_REF] Peskin | Flow patterns around heart valves: A numerical method[END_REF] for modeling blood flow through a beating heart, the IBM has been extended to various applications in scientific and engineering fields. In the original method, the immersed elastic membrane is represented by a series of massless Lagrangian markers, where the boundary force is evaluated by using constitutive laws. Discretized delta functions are employed as kernel functions for the data exchange between the two independent meshes of fluid and solid. The immersed finite element method (IFEM) [START_REF] Wang | Extended immersed boundary method using FEM and RKPM[END_REF][START_REF] Zhang | Immersed finite element method[END_REF][START_REF] Liu | Immersed finite element method and its applications to biological systems[END_REF] was later developed in finite element formulations for general structures that occupy finite volumes within the fluid domain. The previous methods are well suited for deformable solids owing to their physical basis, but the constitutive laws are generally not well posed when solids reach the rigid limit. Beyer and LeVeque [START_REF] Beyer | Analysis of a one-dimensional model for the immersed boundary method[END_REF] provided a solution by using a spring to attach the solids to an equilibrium location with a restoring force. Goldstein et al. [START_REF] Goldstein | Modeling a no-slip flow boundary with an external force field[END_REF] and Saiki and Biringen [START_REF] Saiki | Numerical simulation of a cylinder in uniform flow: Application of a virtual boundary method[END_REF] also proposed a feedback forcing strategy to control the velocity near the objects, which behaves as a system of springs and dampers. Nevertheless, artificial constants are introduced, which are ad hoc and must be chosen large enough in order to accurately impose the no-slip boundary condition. However, large values make the system very stiff and result in instabilities. The time step is severely limited, leading to a CFL number several orders of magnitude smaller than the usual one [START_REF] Goldstein | Modeling a no-slip flow boundary with an external force field[END_REF][START_REF] Fadlun | Combined immersed-boundary finite-difference methods for threedimensional complex flow simulations[END_REF]. Mohd-Yosuf [START_REF] Mohd-Yosuf | Combined immersed Boundary/B-spline methods for simulation of flow in complex geometries[END_REF] and Fadlun et al.
[START_REF] Fadlun | Combined immersed-boundary finite-difference methods for threedimensional complex flow simulations[END_REF] proposed the direct forcing immersed boundary method to avoid the use of artificial constants via modifying the discrete momentum equation. No additional constraints are introduced to the time step. Instead of using the discrete delta function for velocity interpolation and force distribution, local velocity reconstruction approaches were employed to enforce the boundary condition. However Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF] observed strong oscillations towards the boundary force. He attributed this problem to insufficient smoothing and re-used the discrete delta function in his direct forcing immersed boundary method. Although other strategies have also been proposed to enhance the local velocity reconstruction, special treatment should be taken for the phase change of cells near the moving boundaries [START_REF] Tseng | A ghost-cell immersed boundary method for flow in complex geometry[END_REF][START_REF] Wang | Algorithms for interface treatment and load computation in embedded boundary methods for fluid and fluid-structure interaction problems[END_REF][START_REF] Lakshminarayan | An embedded boundary framework for compressible turbulent flow and fluid-structure computations on structured and unstructured grids[END_REF]. In fact the boundary force is an unknown that is strongly coupled to the fluid velocity field. In the work of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF], the boundary force is calculated explicitly by a tentative fluid velocity. The no-slip boundary condition can never be satisfied and large errors occur near the immersed boundaries. Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] reduced the error by adding a forcing loop within a few iterations. Further improvement of the boundary condition imposition, however, requires numerous iterations for convergence as the multidirect forcing immersed boundary method [START_REF] Luo | Full-scale solutions to particle-laden flows: Multidirect Forcing and immersed boundary method[END_REF][START_REF] Breugem | A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows[END_REF]. Taira and Colonius [START_REF] Taira | The immersed boundary method: A projection approach[END_REF] proposed the implicit immersed boundary projection method (IBPM) by formulating the boundary force and the pressure into a modified Poisson equation and solving them simultaneously in an enlarged system with sophisticated solvers. Despite the mathematical completeness and rigour, IBPM may have convergence problems when an immersed boundary point is very close to a fluid grid point, as the singular property of the interpolation and distribution functions deteriorates significantly the condition number of the coefficient matrix of the original well-defined pressure Poisson equation (PPE) [START_REF] Ji | A novel iterative direct-forcing immersed boundary method and its finite volume applications[END_REF]. In this paper we propose the moving immersed boundary method (MIBM) to optimally maintain the accuracy of the implicit IBPM and the efficiency of the explicit direct forcing IBM. 
The projection method serves as the basic fluid solver, into which the proposed MIBM is integrated as a plug-in. Analogous to the role of the pressure in the projection method, which enforces the divergence-free condition, the boundary force is regarded as another Lagrange multiplier for the no-slip constraint in the proposed MIBM. The global scheme follows the fractional step fashion, and the fluid velocity, the pressure and the boundary force are solved sequentially through the idea of operator splitting [START_REF] Cai | Improved implicit immersed boundary method via operator splitting[END_REF][START_REF] Cai | Implicit immersed boundary method for fluid-structure interaction[END_REF][START_REF] Cai | Computational fluid-structure interaction with the moving immersed boundary method[END_REF]. We follow the derivation of the PPE in the projection method and derive an additional moving force equation for the boundary force. Therefore, the PPE is unchanged and immune from the convergence problem. Moreover, the force coefficient matrix is formulated to be symmetric and positive-definite, so that generic linear system solvers can be applied directly. The organization of this paper is as follows. First the fluid Navier-Stokes equations are discretized and a second order projection method is introduced as our fundamental fluid solver. Then the MIBM is presented in detail and compared to other immersed boundary methods. In Section 5 a number of numerical simulations are performed to validate the proposed MIBM. Finally, conclusions are drawn in Section 6. FLUID EQUATIONS AND DISCRETIZATION Consider the non-dimensionalized Navier-Stokes equations for an incompressible viscous fluid: ∂u/∂t + ∇·(u ⊗ u) = -∇p + (1/Re) ∇²u in Ω × [0, T], ∇·u = 0 in Ω × [0, T], u|Γ = w in [0, T], u|t=0 = u0 in Ω, (1) where u and p are the fluid velocity vector and the pressure. Re = UL/ν designates the Reynolds number, based on the reference velocity U, the reference length L and the kinematic viscosity ν. Directly solving the above equations is very difficult. First, the equations are non-linear owing to the convective terms. Secondly, there is no equation to compute the pressure directly. Moreover, the pressure and the velocity are coupled through the continuity (incompressibility, or divergence-free) condition, and the pressure is often regarded as a Lagrange multiplier enforcing this constraint. Besides, the solution for the pressure is not unique and is determined only up to an additive constant. Numerical strategies for overcoming these difficulties are discussed in this paper. The above equations are discretized in space on a staggered mesh in order to prevent the so-called even/odd decoupling or checkerboard effect, as shown in Figure 1. The spatial derivatives are approximated by second-order central differences. To discretize the equations in time, a fully implicit scheme is superior to an explicit one in terms of stability, but it requires non-linear iterations, which can be expensive and whose convergence is not always ensured. Non-linear iterations can be avoided by linearizing the convective terms, resulting in a non-symmetric coefficient matrix for the velocity; this matrix needs to be re-computed at each time step, which becomes very costly as the number of grid points increases. A fully explicit formulation seems very efficient since no iterations are needed, but the time step must be kept small enough to maintain stability.
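As a concrete illustration of the staggered-mesh discretization just described, a minimal sketch of the cell-centred divergence and face-centred pressure-gradient operators is given below; the array layout and function names are our own assumptions, not the solver used in this work.

```python
import numpy as np

def divergence(u, v, dx, dy):
    """Cell-centred divergence of a staggered (MAC) velocity field.
    u has shape (nx+1, ny) on vertical faces, v has shape (nx, ny+1) on
    horizontal faces; the result lives at the nx*ny cell centres."""
    return (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dy

def gradient(p, dx, dy):
    """Second-order pressure gradient evaluated at the interior faces."""
    dpdx = (p[1:, :] - p[:-1, :]) / dx   # shape (nx-1, ny)
    dpdy = (p[:, 1:] - p[:, :-1]) / dy   # shape (nx, ny-1)
    return dpdx, dpdy

# Example: a 32 x 32 cell grid on the unit square.
nx, ny = 32, 32
dx, dy = 1.0 / nx, 1.0 / ny
u = np.zeros((nx + 1, ny))   # x-velocity on vertical faces
v = np.zeros((nx, ny + 1))   # y-velocity on horizontal faces
p = np.zeros((nx, ny))       # pressure at cell centres
div = divergence(u, v, dx, dy)   # shape (nx, ny)
```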
In two dimensions the constraints on the time step are the diffusive stability condition ∆t Re 2 1 ∆x 2 min + 1 ∆y 2 min -1 , (2) and the convective stability condition of the usual CFL (Courant-Friedrichs-Lewy) type ∆t min ∆x min u max , ∆y min v max . (3) It is easy to see that the diffusive constraint is more severe. Reducing the mesh size by half requires a four times smaller time step and it becomes more severe as the dimension increases. At low Reynolds number regime, the time constraint due to (2) dominates (3). It might be thought that for moderate to high Reynolds number flows, the diffusive stability condition is less restrictive. However in practice, the grid spacing is usually kept small under these circumstances for capturing small turbulence and the time step constraint of ( 2) is proportional to the square of the minimal mesh size. In the present work, a semi-implicit time discretization scheme is employed, namely the convective terms are treated explicitly for avoiding the nonlinearity while the diffusive terms are treated implicitly for circumventing the severe diffusive time constraint. As a result, the entire system is linear and stable under the standard CFL condition. The velocity coefficient matrix remains symmetric and constant. To obtain a second order accurate system, we employ the second order Adams-Bashforth (AB2) scheme for the non-linear terms and the Crank-Nicolson (CN) scheme for the linear terms. The system now can be written as ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ u n+1 -u n ∆t + 3 2 N (u n ) - 1 2 N (u n-1 ) = -Gp n+1 + 1 2Re L(u n+1 + u n ), Du n+1 = 0, u n+1 | Γ = w n+1 , (4) where L, N , G, D, are the discretized linear, non-linear, gradient, divergence operators, respectively. The superscript n + 1 and n represent the current time level and the past time level. The initial condition is hereafter omitted for convenience. The projection method, also refereed to fractional step method or time-splitting method, emerged in late 1960s as an effective tool to solve the pressure-velocity coupling problem, by splitting the system into a serial decoupled elliptic equations. The projection method is rooted in the Helmholtz-Hodge decomposition, which states that any smooth vector field v could be decomposed into the sum of a divergence-free part and a gradient of a potential field v = v d + Gφ, (5) where φ is often related to the pressure in the projection method. By taking the divergence of ( 5) and applying Dv d = 0, φ is the solution of the following Poisson equation Lφ = Dv. ( 6 ) Once φ is calculated, the solenoidal velocity can be recovered by v d = v -Gφ. (7) Previous projection methods The original projection method proposed by Chorin [START_REF] Chorin | Numerical solution of the Navier-Stokes equations[END_REF] and Témam [START_REF] Témam | Sur l'approximation de la solution des équations de Navier-Stokes par la méthode des pas fractionnaires (II)[END_REF] decouples the dynamic momentum equation from the kinematic incompressibility constraint by first estimating a tentative velocity û regardless of the pressure term, and then using the pressure to project the predicted velocity û into its solenoidal part u n+1 . The two sub-steps of prediction and projection are performed as ⎧ ⎪ ⎨ ⎪ ⎩ û -u n ∆t + 3 2 N (u n ) - 1 2 N (u n-1 ) = 1 2Re L(û + u n ), û| Γ = w n+1 , (8) ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ u n+1 - û ∆t = -Gφ n+1 , Du n+1 = 0, u n+1 • n| Γ = w n+1 • n. 
(9) By taking divergence of the first equation of ( 9) along with the incompressibility constraint, the actual realization of the projection step follows ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ Lφ n+1 = 1 ∆t Dû, ∂φ n+1 ∂n | Γ = 0, u n+1 = û -∆tGφ n+1 , (10) which is the same as ( 6) and [START_REF] Saiki | Numerical simulation of a cylinder in uniform flow: Application of a virtual boundary method[END_REF] when ∆t is absorbed to φ n+1 . In the projection method of Chorin [START_REF] Chorin | Numerical solution of the Navier-Stokes equations[END_REF] and Témam [START_REF] Témam | Sur l'approximation de la solution des équations de Navier-Stokes par la méthode des pas fractionnaires (II)[END_REF], the final pressure is set to p n+1 = φ n+1 . In spite of its efficiency, the original projection method suffers an irreducible splitting error of O(∆t), which deteriorates the original second order time discretization and prevents its extension to a higher-order method [START_REF] Perot | An Analysis of the Fractional Step Method[END_REF][START_REF] Liu | Projection method I: Convergence and numerical boundary layers[END_REF][START_REF] Guermond | An overview of projection methods for incompressible flows[END_REF]. The error term can be found by adding the two sub-steps [START_REF] Fadlun | Combined immersed-boundary finite-difference methods for threedimensional complex flow simulations[END_REF] and [START_REF] Mohd-Yosuf | Combined immersed Boundary/B-spline methods for simulation of flow in complex geometries[END_REF], and then comparing to (4) 1 2Re L(û -u n+1 ) = ∆t 2Re LGp n+1 , (11) which is due to the time splitting scheme with the implicit treatment of the diffusive terms. Explicit treatment, however, would result in a severe limitation on the time step. It is rather natural to apply the physical boundary condition to the intermediate velocity û in the prediction step [START_REF] Fadlun | Combined immersed-boundary finite-difference methods for threedimensional complex flow simulations[END_REF]. As a result, an artificial Neumann boundary condition ∂p n+1 /∂n| Γ = 0 is enforced on the pressure. This artificial homogeneous Neumann boundary condition introduces a numerical boundary layer to the solution, which prevents the method to be fully first-order [START_REF] Liu | Projection method I: Convergence and numerical boundary layers[END_REF][START_REF] Guermond | On stability and convergence of projection methods based on pressure Poisson equation[END_REF][START_REF] Guermond | An overview of projection methods for incompressible flows[END_REF]. Improvements to the original projection method have been proposed in [START_REF] Goda | A multistep technique with implicit difference schemes for calculating two-or three-dimensional cavity flows[END_REF][START_REF] Braza | Numerical study and physical analysis of the pressure and velocity fields in the near wake of a circular cylinder[END_REF][START_REF] Van Kan | A second-order accurate pressure-correction scheme for viscous incompressible flow[END_REF] to achieve a higher order time accuracy by using an incremental scheme, which can be summarized as ⎧ ⎪ ⎨ ⎪ ⎩ û -u n ∆t + 3 2 N (u n ) - 1 2 N (u n-1 ) = 1 2Re L(û + u n ) -Gp n , û| Γ = w n+1 , (12) ⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩ u n+1 - û ∆t = -Gφ n+1 , Du n+1 = 0, u n+1 • n| Γ = w n+1 • n, (13) where an old value of pressure is retained in the prediction step, and φ n+1 here represents the pseudo pressure. 
The second sub-step is often performed as ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ Lφ n+1 = 1 ∆t Dû, ∂φ n+1 ∂n | Γ = 0, u n+1 = û -∆tGφ n+1 , (14) and the final pressure is updated by p n+1 = p n + φ n+1 . ( 15 ) To study the spitting error, we sum up ( 12) and ( 13) and compare to [START_REF] Liu | Immersed finite element method and its applications to biological systems[END_REF]. Considering that the pseudo pressure is the approximation of φ n+1 = p n+1p n = ∆tp t , the splitting error is found to be of second order [START_REF] Perot | An Analysis of the Fractional Step Method[END_REF][START_REF] Armfield | An analysis and comparison of the time accuracy of fractional-step methods for the navierstokes equations on staggered grids[END_REF] 1 2Re L(û -u n+1 ) = ∆t 2Re LGφ n+1 = ∆t 2 2Re LGp t . Note that the physical boundary condition is still assigned to the intermediate velocity in the prediction step [START_REF] Wang | Algorithms for interface treatment and load computation in embedded boundary methods for fluid and fluid-structure interaction problems[END_REF] ∂n | Γ = ∂p n ∂n | Γ = • • • = ∂p 0 ∂n | Γ , (17) is enforced on the final pressure. This pressure boundary condition is not physical, thus it introduces a numerical boundary layer and prevents the scheme to be fully second order [START_REF] Guermond | An overview of projection methods for incompressible flows[END_REF]. This error is irreducible, hence using a higher order time stepping scheme will not improve the overall accuracy. Rotational incremental pressure-correction projection method To obtain a solution of second order accuracy with consistent boundary conditions, we propose to use the rotational incremental pressure-correction projection method of [START_REF] Timmermans | An approximate projection scheme for incompressible flow using spectral elements[END_REF][START_REF] Guermond | An overview of projection methods for incompressible flows[END_REF]. The essential idea of this method is to absorb the splitting error into the pressure so that the sum of the substeps is consistent with the original discretized momentum equation [START_REF] Liu | Immersed finite element method and its applications to biological systems[END_REF]. By considering the identity ∇ 2 u = ∇(∇ • u) -∇ × ∇ × u, the error term ( 16) can be rewritten as 1 2Re L(û -u n+1 ) = 1 2Re G(Dû), (18) where ∇ × ∇ × û = ∇ × ∇ × u n+1 is used, which can be verified by the Helmholtz-Hodge decomposition. Now the error term in this form can be absorbed into the pressure p n+1 = p n + φ n+1 - 1 2Re Dû. Most importantly, the pressure boundary condition is consistent with the original system. Therefore, no numerical boundary layer will be generated with this scheme. Higher than second order accuracy can be achieved if a higher-order time-stepping scheme is used. In the present work, the second order accuracy is found to be sufficient with the AB2 scheme and the CN scheme. The overall rotational incremental pressure-correction projection method can be summarized as follows ⎧ ⎪ ⎨ ⎪ ⎩ û -u n ∆t + 3 2 N (u n ) - 1 2 N (u n-1 ) = 1 2Re L(û + u n ) -Gp n , û| Γ = w n+1 , (20) ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ Lφ n+1 = 1 ∆t Dû, ∂φ n+1 ∂n | Γ = 0, u n+1 = û -∆tGφ n+1 , (21) p n+1 = p n + φ n+1 - 1 2Re Dû. Pressure Poisson equation solver and Parallel computing The aforementioned discretized equations lead to a set of linear systems to be solved. 
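Before turning to the linear solvers, the rotational incremental pressure-correction scheme summarized above can be sketched in algorithmic form as follows; the operator bundle `ops` and the two solver routines are placeholders standing for the discrete operators and linear solves of this section, not routines from the actual code.

```python
def projection_step(u_n, u_nm1, p_n, dt, Re, ops):
    """One step of the rotational incremental pressure-correction scheme.
    `ops` bundles the discrete operators: N (convection), L (Laplacian),
    G (gradient), D (divergence), plus the two linear solvers."""
    # Predictor: AB2 convection, Crank-Nicolson diffusion, old pressure.
    rhs = (u_n / dt
           - 1.5 * ops.N(u_n) + 0.5 * ops.N(u_nm1)
           - ops.G(p_n)
           + 0.5 / Re * ops.L(u_n))
    u_hat = ops.solve_helmholtz(rhs, dt, Re)     # (I/dt - L/(2Re)) u_hat = rhs

    # Pressure correction: Poisson solve and projection of the velocity.
    phi = ops.solve_poisson(ops.D(u_hat) / dt)   # L phi = D(u_hat) / dt
    u_np1 = u_hat - dt * ops.G(phi)

    # Rotational pressure update that absorbs the splitting error.
    p_np1 = p_n + phi - ops.D(u_hat) / (2.0 * Re)
    return u_np1, p_np1
```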
Among these, the pressure Poisson equation is the most time-consuming part because of its high condition number; it is generally solved iteratively to save computational time and storage. Krylov subspace methods, such as the conjugate gradient (CG), the biconjugate gradient stabilized (Bi-CGSTAB) and the generalized minimum residual (GMRES) methods, are very efficient for this problem. In addition, more efficiency can be achieved if preconditioning is applied, such as the incomplete Cholesky (IC) factorization, the incomplete lower-upper (ILU) decomposition and the approximate inverse (AINV) [START_REF] Chow | Approximate inverse preconditioners via sparse-sparse iterations[END_REF]. The multigrid (MG) method is found to be more efficient when used as a preconditioner in conjunction with Krylov solvers than as a pure solver. To further improve this work, the code is extended to allow parallel computing. In the first mode, we integrate our method into the PETSc library [START_REF] Balay | PETSc users manual[END_REF], which employs MPI for communication between the CPU cores. In the second mode, we parallelize the code by using the CUDA CUSP library on the GPU [START_REF] Dalton | Cusp: Generic parallel algorithms for sparse matrix and graph computations[END_REF]. In fact, the CPU consists of a few cores optimized for sequential serial tasks, while the GPU offers a massive number of smaller cores at a comparable price, which is extremely efficient for handling multiple tasks simultaneously. Therefore, we send the parallelizable and computationally intensive parts of the application to the GPU and run the remainder on the CPU. From the practical point of view, the second mode runs significantly faster. Table I illustrates the performance of the two parallelization modes. The test is performed by solving the PPE on a 400 × 400 grid with the Neumann boundary condition applied at all boundaries. This system does not possess a unique solution (the pressure is determined only up to an additive constant), so we pin a fixed value at one cell to remove the zero eigenvalue, as suggested in [START_REF] Taira | The immersed boundary method: A projection approach[END_REF][START_REF] Kassiotis | Nonlinear fluid-structure interaction problem. Part I: implicit partitioned algorithm, nonlinear stability proof and validation examples[END_REF]. The calculation is done on the platform PILCAM2 with the CPU Intel Xeon X7542 and the GPU Quadroplex 2200 S4. In the test with the CG solver, the processing time decreases approximately by half each time the number of CPU cores is doubled from 1 to 16. An acceleration of about 1.5-2.8 times is achieved when the MG method is applied as a preconditioner in the CPU parallelization. The GPU parallelization greatly accelerates the calculation, by up to 40 times in the test with the MG preconditioner. Different preconditioners, such as AINV and MG, are compared in Table II. The 4-point-width function of Peskin [START_REF] Peskin | The immersed boundary method[END_REF] reads φ4(r) = (1/8)(3 - 2|r| + √(1 + 4|r| - 4r²)) for |r| < 1, (1/8)(5 - 2|r| - √(-7 + 12|r| - 4r²)) for 1 ≤ |r| < 2, and 0 otherwise, (30) which is widely used in the literature. Roma et al. [START_REF] Roma | An adaptive version of the immersed boundary method[END_REF] also designed a 3-point-width function specially adapted to the staggered mesh,
φ3(r) = (1/3)(1 + √(1 - 3r²)) for |r| < 0.5, (1/6)(5 - 3|r| - √(1 - 3(1 - |r|)²)) for 0.5 ≤ |r| < 1.5, and 0 otherwise. (31) Figure 4. Comparison of the one-dimensional function φ(r): the 2-point-width hat function [START_REF] Beyer | Analysis of a one-dimensional model for the immersed boundary method[END_REF], the 3-point-width function of Roma et al. [START_REF] Roma | An adaptive version of the immersed boundary method[END_REF] and the 4-point-width function of Peskin [START_REF] Peskin | The immersed boundary method[END_REF]. The functions are plotted and compared in Figure 4. The one-dimensional function of Roma et al. has a relatively smaller support than the four-point version of Peskin, providing a sharper interface and a better numerical efficiency while maintaining good smoothing properties. The discrete delta functions used in the present work have the following properties: • δh has a narrow support, to reduce the computational cost and to obtain a better resolution of the immersed boundary. • δh is second order accurate for smooth fields. • δh satisfies certain moment conditions to meet the translation-invariant interpolation rule, namely the total force and torque are equivalent between the Lagrangian and Eulerian locations: Σ_{x∈g_h} δh(x - X) h² = 1 (zeroth moment condition). (32) ũ = u* + ∆t f^{n+1}. (38) (6) Implicit treatment of the viscous term: (1/∆t) û - (1/(2Re)) L û = (1/∆t) ũ. (39) (7) Project the fluid velocity onto the divergence-free field and update the pressure: L φ^{n+1} = (1/∆t) D û, (40) u^{n+1} = û - ∆t G φ^{n+1}, (41) p^{n+1} = p^n + φ^{n+1} - (1/(2Re)) D û. (42) Here u*, ũ, û and u^{n+1} represent the fluid velocity at each stage of the fractional step method, i.e., the prediction step of the explicit terms, the immersed boundary forcing step, the viscous prediction step and the projection step. U_b(X_l) is the solid velocity of the l-th element of the immersed boundary. The method of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF] is favored in the literature as it is computationally inexpensive owing to its explicit treatment of the boundary force. However, numerical simulations have shown that it fails to impose the velocity boundary condition exactly on the immersed boundary [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF]. A forcing error is introduced which is irreducible and depends on the time step and the Reynolds number Re. This error comes from the fact that the tentative fluid velocity u* is used for the boundary force evaluation. The ideal velocity would be the final fluid velocity u^{n+1}, which is, however, unknown at the immersed boundary forcing step; obtaining it implicitly would require iterating the whole system, which could be too cumbersome. This nevertheless suggests one way of reducing the forcing error: choosing the velocity value closest to the final one. Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] suggested performing the viscous prediction step first and then using the intermediate velocity û to compute the boundary force. To further improve the accuracy, a forcing loop is added in the immersed boundary forcing step. This additional loop is performed within a few iterations, without requiring convergence.
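Before writing out these forcing schemes, the interpolation and spreading operations that all of them share can be sketched as below, using the 3-point kernel of Eq. (31); the vectorized implementation and the variable names are our own assumptions.

```python
import numpy as np

def phi3(r):
    """3-point regularized delta kernel of Roma et al., Eq. (31)."""
    r = np.abs(r)
    inner = (1.0 + np.sqrt(np.maximum(1.0 - 3.0 * r**2, 0.0))) / 3.0
    outer = (5.0 - 3.0 * r
             - np.sqrt(np.maximum(1.0 - 3.0 * (1.0 - r)**2, 0.0))) / 6.0
    return np.where(r < 0.5, inner, np.where(r < 1.5, outer, 0.0))

def delta_h(x, y, X, Y, h):
    """Two-dimensional kernel delta_h = phi((x-X)/h) phi((y-Y)/h) / h^2."""
    return phi3((x - X) / h) * phi3((y - Y) / h) / h**2

def interpolate(field, xg, yg, Xl, Yl, h):
    """Interpolate an Eulerian field to one Lagrangian point."""
    w = delta_h(xg[:, None], yg[None, :], Xl, Yl, h)
    return np.sum(field * w) * h**2

def spread(F, xg, yg, Xl, Yl, h, dV):
    """Spread a Lagrangian force value F back onto the Eulerian grid."""
    return F * delta_h(xg[:, None], yg[None, :], Xl, Yl, h) * dV

# Example: a linear field is interpolated (almost) exactly, as expected
# for a kernel satisfying the zeroth and first moment conditions.
h = 0.05
xg = (np.arange(40) + 0.5) * h
yg = (np.arange(40) + 0.5) * h
field = xg[:, None] + 2.0 * yg[None, :]
print(interpolate(field, xg, yg, 1.0, 1.0, h))   # close to 3.0
```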
The method of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] can be expressed as (1) Prediction of the explicit terms u * = u n + ∆t - 3 2 N (u n ) - 1 2 N (u n-1 ) -Gp n + 1 2Re Lu n . (43) (2) Viscous prediction step 1 ∆t û - 1 2Re Lû = 1 ∆t u * . (44) (3) Immersed boundary forcing loop Loop for k = 1 to 3 with û(0) = û Û(k) (X l ) = nx i=1 ny j=1 û(k-1) δ h (x i,j -X l )h 2 , (45) (k) (X l ) = U n+1 b (X l ) -Û(k) (X l ) ∆t , (46) f (k) (x i,j ) = nb l=1 F (k) (X l )δ h (x i,j -X l )∆V l , (47) ũ(k) = û(k) + ∆tf (k) , (48) û(k) = ũ(k) . (49) End loop (4) Projection step and update of the final fields Lφ n+1 = 1 ∆t Dũ, (50) u n+1 = ũ -∆tGφ n+1 , (51) p n+1 = p n + φ n+1 - 1 2Re Dû. If full convergence of the forcing loop is required more iterations are needed, such as the multidirect forcing scheme of [START_REF] Luo | Full-scale solutions to particle-laden flows: Multidirect Forcing and immersed boundary method[END_REF][START_REF] Breugem | A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows[END_REF]. However, the convergence rate of this iteration becomes very slow after several iterations. The computational cost increases hugely when more Lagrangian points are involved in the additional forcing loop. Therefore the number of iteration is usually kept low for the computational efficiency. Even though the error is reduced, the method of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] is still explicit. The exact no-slip boundary condition can never be satisfied. To impose the no-slip boundary condition exactly, Taira and Colonius [START_REF] Taira | The immersed boundary method: A projection approach[END_REF] proposed the implicit immersed boundary projection method (IBPM) by combining the boundary force and the pressure into a modified Poisson equation and solving them simultaneously in one single projection step. However convergence problem may occur as one boundary point is very close to a fluid grid point [START_REF] Ji | A novel iterative direct-forcing immersed boundary method and its finite volume applications[END_REF], because the singular property of the interpolation and distribution functions undermines the coefficient matrix condition number of the PPE. Ji et al. [START_REF] Ji | A novel iterative direct-forcing immersed boundary method and its finite volume applications[END_REF] proposed to iterate each part to get rid of the convergence problem, which is inevitable computational expensive. Novel implicit immersed boundary method In this subsection we present a novel implicit but efficient IBM variant, termed as the moving immersed boundary method (MIBM) in this paper. The objective of MIBM to maintain the efficiency of the explicit direct forcing IBM but with an improved accuracy like the multidirect forcing IBM and the IBPM. 
To this end, we first take the immersed boundary forcing part from the explicit IBM of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] for consideration, i.e., [START_REF] Tritton | Experiments on the flow past a circular cylinder at low Reynolds numbers[END_REF], [START_REF] Wang | An immersed boundary method based on discrete stream function formulation for two-and three-dimensional incompressible flows[END_REF], [START_REF] Lai | An immersed boundary method with formal second-order accuracy and reduced numerical viscosity[END_REF] and [START_REF] Williamson | Oblique and parallel modes of vortex shedding in the wake of a circular cylinder at low Reynolds numbers[END_REF]. By dropping the superscripts for convenience, the immersed boundary forcing part is written as Û = T û, (53) F = U b - Û ∆t , (54) ũ = û + ∆tf . ( (55) ) 56 We require that the interpolated velocity satisfies the no-slip wall boundary condition on the immersed interface after the immersed boundary forcing, namely T ũ = U b , then T (û + ∆tf ) = U b . (57) Substituting ( 55) into [START_REF] Wang | Two dimensional mechanism for insect hovering[END_REF] gives T (û + ∆tSF) = U b , (58) which can be rearranged in order to separate the boundary force (T S)F = U b -T û ∆t . ( 59 ) We donate M = T S the moving force coefficient matrix. M is a function of the solid position, which changes its value as the boundary moves. Thus the force is redistributed just like the boundary force moves. The moving force equation can be rewritten in a more concise form MF = F e , (60) where F e = (U b -T û)/∆t is exactly the explicit forcing value used in [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF]. Compared to the modified Poisson equation in the IBPM of [START_REF] Taira | The immersed boundary method: A projection approach[END_REF], the moving force equation ( 60) is much smaller in size and easier to work with. At each dimension (x or y), the size of the force coefficient matrix is n b × n b since T ∈ R nb×nxny and S ∈ R nxny×nb . Therefore, for moving boundaries, its update is computational less expensive than the modified Poisson equation. Note that S = (∆V l /h 2 )T T if the same function is used for interpolation and spreading, where ∆V l /h 2 ≈ 1 is the volume ratio between the fluid and the solid cell. As a result, the moving force coefficient matrix M = (∆V l /h 2 )T T T is symmetric. It is also found that M is positive-definite irrespective of the time step and the approximation order as in the IBPM [START_REF] Taira | The immersed boundary method: A projection approach[END_REF]. Moreover, the moving force equation is well conditioned, which converges quickly by using the conjugate gradient method. Now we incorporate this moving force equation into the rotational incremental pressurecorrection projection method. For the sake of simplicity, we rewrite the governing equations [START_REF] Liu | Projection method I: Convergence and numerical boundary layers[END_REF] as u n+1 -u n ∆t = H + P + F, (61) Du n+1 = 0, (62) T u n+1 = U n+1 b , (63) where H, P and F are the operators defined as H := - 3 2 N (u n ) - 1 2 N (u n-1 ) + 1 2Re L(u n+1 + u n ) -Gp n , (64) P := -Gφ n+1 , (65) F := SF n+1 . 
( 66 ) To decouple the momentum equation (61) from the divergence free condition (62) and the no-slip wall condition on the interface (63), we perform the following operator splitting algorithm: (1) Prediction step by ignoring the immersed objects û -u n ∆t = H(û). (67) (2) Immersed boundary forcing step for satisfying the no-slip wall condition on the interface ũ - û ∆t = F, (68) T ũ = U n+1 b . (69) Applying ( 69) to (68) gives the moving force equation that we have defined previously MF n+1 = U n+1 b -T û ∆t . ( 70 ) Once the boundary force is determined, we correct the fluid velocity with ũ = û + ∆tSF n+1 . (71) (3) Projection step for obtaining the divergence free velocity u n+1 and the final pressure p n+1 u n+1 - ũ ∆t = P, (72) Du n+1 = 0. ( 73 ) Applying the divergence operator to (72) and using the divergence free condition (73) gives Lφ n+1 = 1 ∆t Dũ, (74) u n+1 = ũ -∆tGφ n+1 . ( 75 ) The final pressure is advanced by p n+1 = p n + φ n+1 - 1 2Re Dû. (76) Figure 6 shows the global structure of MIBM. The overall scheme follows the regular fractional step method so that the velocity, the pressure and the force are decoupled. Even though the interface velocity condition is enforced before the projection step, we have found that the velocity on the immersed boundary is essentially unchanged after the projection step. The same observation has also been made by Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] and Fadlun et al. [START_REF] Fadlun | Combined immersed-boundary finite-difference methods for threedimensional complex flow simulations[END_REF]. It is worth noting that the present MIBM recovers to the explicit method of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] with one iteration in the forcing loop, if M is set to the identity matrix. However it is not the case, hence our method is implicit. Comparison of performance To demonstrate the accuracy and efficiency of present moving immersed boundary method, we perform the following test Given u 0 (x, y) = e x cos y -2, 0 x, y 1, Find F such that u(x, y) = u 0 (x, y) + ∆tSF = U b on Γ s , where Γ s is described with a circle of a radius of 0.2 at (0.52, 0.54) and U b = 0. The domain is covered by 64 × 64 nodes with around 81 Lagrangian points on the circle surface. ∆t is set to 1. In this test, the fluid equations are not solved and only the immersed boundary forcing part is considered. The initial field u 0 (x, y) can be seen as a predicted fluid velocity component in one direction. This test is to examine different forcing strategies for imposing the desired velocity U b at the interface Γ s via a boundary force F . To facilitate the accuracy study, we define the velocity error norms of L 2 and L ∞ as follows for i = 1, . . . , n x , j = 1, . . . , n y where u ref represents the reference value. It is worth noticing that the L 2 -norm is a good measure of the global error while the L ∞ -norm provides a good indicator for the local error. Figure 7a displays the result of the explicit direct forcing IBM of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF], where u is far away from zero over the immersed boundary compared to Figure 7c. 
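To make the moving force equation concrete before discussing the comparisons of Figures 7 and 8 further, the sketch below assembles M = (ΔV/h²) TᵀT from the interpolation weights and solves MF = (U_b - T û)/Δt, one velocity component at a time, with a plain conjugate gradient iteration; the dense matrix storage and the reuse of the `delta_h` helper from the kernel sketch above are simplifications made for clarity, not the paper's implementation.

```python
import numpy as np

def build_T(xg, yg, Xl, Yl, h):
    """Interpolation matrix T (n_b x n_grid): row l holds
    delta_h(x - X_l) * h^2 for every Eulerian node x."""
    nb = len(Xl)
    T = np.zeros((nb, len(xg) * len(yg)))
    for l in range(nb):
        w = delta_h(xg[:, None], yg[None, :], Xl[l], Yl[l], h) * h**2
        T[l, :] = w.ravel()
    return T

def solve_moving_force(T, u_hat, U_b, dt, dV, h, tol=1e-12, maxit=200):
    """Solve M F = (U_b - T u_hat)/dt by conjugate gradients (M is SPD)."""
    M = (dV / h**2) * (T @ T.T)
    rhs = (U_b - T @ u_hat.ravel()) / dt
    F = np.zeros_like(rhs)
    r = rhs - M @ F
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        F += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return F
```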
The accuracy is improved after 3 iterations with the method of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF], as shown in Figure 7b. Figure 7d reveals that the results are nearly the same for present MIBM with the iterative multidirect forcing IBM of Luo et al. [START_REF] Luo | Full-scale solutions to particle-laden flows: Multidirect Forcing and immersed boundary method[END_REF] and Breugem [START_REF] Breugem | A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows[END_REF]. Table III compares the computational time and velocity error on the interface of these immersed boundary methods. The error is measured in L 2 -norm and the tolerance is 1 × 10 -15 . The method of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF] is the quickest due to its explicit nature, but it suffers a large error of 3.01 × 10 -1 on the immersed interface. The forcing loop of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF] reduces the error by a factor of 4 with 3 iterations. However, the error of 7.41 × 10 -2 is still considered large. ||e u || 2 = 1 n x n y nx i=1 ny j=1 (u i,j -u ref i,j ) 2 1/2 , ( 77 ) ||e u || ∞ = max|u i,j -u ref i,j |, (78) The iterative multiforcing IBM of Luo et al. [START_REF] Luo | Full-scale solutions to particle-laden flows: Multidirect Forcing and immersed boundary method[END_REF] and Breugem [START_REF] Breugem | A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows[END_REF] is required to converge towards the machine precision, but it takes approximately 606 times more additional computational effort than the explicit method of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF]. Actually, the convergence rate in the multiforcing IBM decreases dramatically after about 10 iterations, as shown in Figure 8. In order to reduce the error to 1 × 10 -6 around 1000 iterations are needed and 4443 iterations for the machine precision. The present MIBM converges to the same machine precision only with 60 iterations by using the conjugate gradient solver. The iteration can be further reduced if preconditioning is taken, but we find that the conjugate gradient solver is sufficient for fast convergence. The computation is not increased considerably compared to the explicit method of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF], as we can see that the present method only takes twice the amount of computational time of the direct forcing IBM of Uhlmann [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF]. It also worth noticing that present MIBM is almost as efficient as the method of Kempe and Fröhlich [START_REF] Kempe | An improved immersed boundary mehod with direct forcing for the simulation of particle laden flows[END_REF]. Iteration Velocity Error Figure 8. Comparison of convergence between present MIBM (--) and the multidirect forcing IBM of Luo et al. 
[START_REF] Luo | Full-scale solutions to particle-laden flows: Multidirect Forcing and immersed boundary method[END_REF] and Breugem [START_REF] Breugem | A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows[END_REF] (----). RESULTS Taylor-Green vortices We first consider the two-dimensional unsteady case of an array of decaying vortices to assess the accuracy of the fluid solver. The analytical solution of the Taylor-Green vortices is given by u(x, y, t) = -cos(πx) sin(πy) e^{-2π²t/Re}, v(x, y, t) = sin(πx) cos(πy) e^{-2π²t/Re}. This simulation is performed on a square domain Ω = [-1.5, 1.5] × [-1.5, 1.5] and the Reynolds number Re is set to 10. The initial and boundary conditions are provided by the exact solution. We advance the equations for 0 ≤ t ≤ 0.2. To study the temporal accuracy, we compare the results at t = 0.2 to a reference solution obtained with a very fine time step ∆t = 1 × 10^-4 at a spatial resolution of ∆x = ∆y = 9.375 × 10^-3. The errors in the velocity component u are computed by subtracting the reference solution from the other numerical solutions (∆t ∈ [0.00125, 0.01]), in order to cancel out the error due to the spatial discretization. The L2 and L∞ error norms are then displayed in Figure 9a on a log-log plot. Second order temporal accuracy is observed, which confirms the error estimation analysis given above for the rotational incremental pressure-correction projection method. We also expect second order spatial accuracy, since the second order central differencing scheme is used for all the derivatives in this case. We use a small time step ∆t = 1 × 10^-4 to ensure that the temporal discretization error is negligible compared to the spatial one, and then vary the computational grids (nx × ny = 20 × 20, 40 × 40, 80 × 80 and 160 × 160). The error is obtained by comparing the results to the analytical solution. Figure 9b shows the spatial discretization error, indicating second order spatial accuracy. It is well known that the discrete delta function undermines the spatial accuracy of the original fluid solver. We therefore embed a circular cylinder of unit radius in the center of the computational domain to study the accuracy of our MIBM. The time-dependent no-slip boundary condition at the immersed cylinder surface is enforced by the present MIBM. Figure 9b shows the variation of the velocity error as a function of the mesh size. It is evident that the present MIBM introduces additional errors with respect to the original fluid solver, but it still retains second order accuracy, which corresponds to the interpolation properties of the discrete delta function for smooth fields. Lid-driven cavity flow with an embedded cylinder In this test, we compare the present immersed boundary method with the traditional body-conforming mesh method. The domain configuration and the boundary conditions are the same as in the classical lid-driven cavity flow case, namely the top wall moves with a constant velocity u∞ = 1 while the other walls are stationary, except that we place a cylinder in the domain center. In order to compare with Vanella and Balaras [START_REF] Vanella | A moving-least-squares reconstruction for embedded-boundary formlations[END_REF], the diameter of the cylinder is set to D = 0.4L, with L being the cavity length. The Reynolds number, based on the cavity length, is 1000 in this study. A uniform mesh of 200 × 200 is employed in the immersed boundary method, and the same mesh size is used for the body-conforming mesh method for comparison.
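The grid-convergence studies reported here and in the following test cases amount to estimating an observed order of accuracy from the errors on successively refined grids; a minimal sketch of that estimate, with illustrative numbers rather than the paper's data, is:

```python
import numpy as np

# Illustrative L2 errors on grids of spacing h (NOT the paper's values).
h   = np.array([1/20, 1/40, 1/80, 1/160])
err = np.array([2.1e-3, 5.3e-4, 1.3e-4, 3.4e-5])

# Least-squares slope of log(err) versus log(h) gives the observed order.
order, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed order of accuracy ~ {order:.2f}")   # about 2 for these numbers
```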
Body-conforming mesh method (0.6906, 0.6872) (0.0791, 0.0721) (0.8849, 0.1063) Table IV. Comparison of vortices center positions for the proposed immersed boundary method and the body-conforming mesh method, where (x 1 , y 1 ), (x 2 , y 2 ), (x 3 , y 3 ) are the vortices centers at the upper right to the cylinder, at the lower left corner and at the lower right corner respectively. The flow reaches a final steady state as the time advances. Figure 10 shows the vorticity contours and streamlines for the flow at Re = 1000, which are similar to the results of [START_REF] Vanella | A moving-least-squares reconstruction for embedded-boundary formlations[END_REF]. As we can see, three vortices emerge in the flow. One at the upper right position of the cylinder and two near the bottom at each corners. It is noteworthy that the upper vortex is generated by the presence of the cylinder. The flow fields outside the cylinder are essentially the same for current MIBM and the body-conforming mesh method. The only difference is that there is a flow inside the cylinder in the immersed boundary method, which however is the key idea of the immersed boundary method to replace the solid domain with fluid. The velocity component u at the vertical midline x = 0.5 and the velocity component v at the horizontal midline y = 0.5 are plotted in Figure 11. The velocity profiles of both methods match pretty well. The location of the three vortices centers are also listed in Table IV. Very close results have been obtained. Next we study the grid convergence for assessing the accuracy of present method for nonsmoothed field. A series of computations are performed on a hierarchy of grids (70 × 70, 90 × 90, 126 × 126, 210 × 210 and 630 × 630). The variation of error of the velocity component u along with the grid spacing is displayed in Figure 12, showing a convergence rate of about 1.13. This is because the flow becomes not smooth near the immersed surface in this case, and the discrete [START_REF] Beyer | Analysis of a one-dimensional model for the immersed boundary method[END_REF] analysed various discrete delta functions and pointed out that the second order accuracy can be recovered through using different functions for interpolation and spreading. This results in non-symmetric coefficient matrix of the boundary force in MIBM, which can be solved with the GMRES or Bi-CGSTAB methods. Flow over a stationary circular cylinder The flow past a stationary circular cylinder is considered as a canonical test case to validate current method, since a great amount of experimental and numerical studies at different Reynolds numbers are available for comparison. The flow characteristics depend on the Reynolds number Re = u ∞ D/ν, based on the inflow velocity u ∞ , the cylinder diameter D = 1 and the fluid kinematic viscosity ν. The simulation is performed in a rectangular domain, where the fluid flows from the left to the right (see Figure 13). At left boundary, a uniform velocity of u ∞ = 1 is imposed; The free slip boundary conditions are applied at lateral boundaries; At outlet, the convective boundary condition ∂u/∂t + u ∞ ∂u/∂x = 0 is employed for reducing the reflection effects because of the finite artificially truncated domain. The cylinder is placed at the center of the computational domain. The fluid domain is covered with a uniform mesh, and the cylinder surface is represented by a set of uniformly distributed Lagrangian points with δs ≈ h. 
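The uniform distribution of Lagrangian markers on the cylinder surface with spacing δs ≈ h mentioned above can be generated as in the sketch below; the choice ∆V_l = ∆s · h for the forcing volume is a common convention and is stated here as an assumption.

```python
import numpy as np

def circle_markers(xc, yc, D, h):
    """Lagrangian points on a circle of diameter D, spaced by roughly h."""
    nb = int(np.ceil(np.pi * D / h))          # number of markers, ds ~ h
    theta = 2.0 * np.pi * np.arange(nb) / nb
    X = xc + 0.5 * D * np.cos(theta)
    Y = yc + 0.5 * D * np.sin(theta)
    dV = (np.pi * D / nb) * h                  # surface element times thickness h
    return X, Y, dV

# Example: unit-diameter cylinder centred at the origin, grid spacing h = 0.02.
X, Y, dV = circle_markers(0.0, 0.0, 1.0, 0.02)
```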
For comparison the drag and lift coefficients are defined as C D = F D 1 2 ρu 2 ∞ D , C L = F L 1 2 ρu 2 ∞ D , (80) where F D , F L are the drag and lift forces on the cylinder exerted by the fluid, respectively. The fluid density ρ is set to 1 here. As a matter of fact, the spreading and interpolation operators constructed from the regularized delta function conserve the total force, hence F D and F L can be computed directly by summing up the forces over all the Lagrangian points F D F L = - nb l=1 F(X l )∆V l . (81) D Inflow u ∞ Convective outlet Free-slip Free-slip The time-averaged values of the wall vorticity ω z and the wall pressure coefficient C P are shown in Figure 21 for Re = 100. Good agreements have been found compared to the results of Braza et al. [START_REF] Braza | Numerical study and physical analysis of the pressure and velocity fields in the near wake of a circular cylinder[END_REF]. The effects of different discrete delta functions on the results are also tested in for Re = 100, 200, where a domain of Ω = 30D × 30D is used and the mesh resolution is set to h = 0.029D. (Ω = 30D × 30D, h = 0.04D) 1.355 ±0.042 ±0.677 0.200 Present (Ω = 30D × 30D, h = 0.029D) 1.365 ±0.044 ±0.696 0.200 Present (Ω = 30D × 30D, h = 0. A careful grid convergence study is also performed to examine the order of accuracy in this case. Since the exact solution does not exist, we use the solution calculated on a highly resolved grid of 630 × 630 as our reference for computing the error. The computation domain is taken as [-2D, 2D] × [-2D, 2D] with the Reynolds number Re = 100. The equations are advanced until 0.2 and a relative small time step of 5 × 10 -4 is chosen such that the time discretization error will not influence the results. Same computations but on different grids are performed and compared the reference solution, namely 45 × 45, 70 × 70, 90 × 90, 126 × 126 and 210 × 210. The distribution of velocity error in the x-direction for the 90 × 90 grid is shown in Figure 22. Large magnitudes of error in velocity are located near the cylinder. Figure 23 displays the L 2 norm of this error on a log-log plot. A convergence rate of around 1.21 is observed. Re = 1000 We further extend our method to a higher Reynolds number flow Re = 1000. At this regime, the convection effects become predominant and the boundary layer thickness decreases, which can be estimated by δ ≈ D/ √ Re = 0.032. To capture the thin boundary layer, a fine grid resolution of h = 0.01D is taken, as recommended in [START_REF] Mittal | A versatile sharp interface immersed boundary method for incompressible flows with complex boundaries[END_REF][START_REF] Apte | A numerical method for fully resolved simulation (FRS) of rigid particle-flow interactions in complex flows[END_REF]. Note that the grid resolution is only marginal for resolving the boundary layer at this Reynolds number. Nevertheless, the results are satisfactory and the essential features of the flow are well captured. The computational domain is chosen to be [-20D, 20D] × [-20D, 20D]. The two-point-width hat function φ 2 is employed in this case as it provides a sharp interface. Figure 25 shows the instantaneous vorticity field. The coefficients of drag and lift are plotted in Figure 24. Note that the flow is inherently three-dimensional at this Reynolds number. We compare our simulations with other two-dimensional results available in the literature. The properties of the drag and lift coefficients are summarized in Table VIII. Good agreements have been found. 
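Given the Lagrangian boundary force, the drag and lift coefficients defined in (80) and (81) can be evaluated as sketched below; the array layout (one force vector per marker) is our own assumption.

```python
import numpy as np

def force_coefficients(F, dV, rho, u_inf, D):
    """Drag and lift coefficients from the Lagrangian boundary force:
    (F_D, F_L) = -sum_l F(X_l) dV_l and C = F / (0.5 rho u_inf^2 D)."""
    weights = np.atleast_1d(dV)[:, None]      # scalar or per-marker volumes
    F_D, F_L = -np.sum(F * weights, axis=0)   # F has shape (n_b, 2)
    q = 0.5 * rho * u_inf**2 * D
    return F_D / q, F_L / q
```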
The computational domain is chosen to be 14D × 14D, as shown in Figure 26. The cylinder is initially located at the center of the computational domain. The outflow boundary condition ∂u/∂n = 0 is applied at the domain contours. A uniform mesh of 560 × 560 is adopted for the fluid domain and the cylinder is represented by 126 points due to δ s ≈ h. The transient no-slip velocity boundary condition at the cylinder surface is enforced by present MIBM at each time level u(t) = -2πfA cos(2πft). (83) The pressure and vorticity contours at four different phases (φ = 2πft = 0 • , 96 • , 192 • , 288 • ) are shown in Figure 27, where two counter-rotating vortices are formulated during the oscillation. The vortices contours are drawn from -3 to 3 with an increment of 0.4, which display the same structure as in [START_REF] Dütsch | Low-Reynolds-number flow around an oscillating circular cylinder at low Keulegan-Carpenter numbers[END_REF]. Figure 28 shows the profiles of the velocity components u and v at four different streamwise locations (x = -0.6D, 0D, 0.6D, 1.2D) for three phase (φ = 2πft = 180 • , 210 • , 330 • ). The experimental results of [START_REF] Dütsch | Low-Reynolds-number flow around an oscillating circular cylinder at low Keulegan-Carpenter numbers[END_REF] by LDA measurements are also plotted for comparison. The velocity profiles outside the cylinder agree well those of [START_REF] Dütsch | Low-Reynolds-number flow around an oscillating circular cylinder at low Keulegan-Carpenter numbers[END_REF]. The only discrepancy is the velocity inside the cylinder. Since the present IBM treats the solid domain as fluid, the velocity is non-zero inside the cylinder. From Figure 28 we can see that this treatment, however, does not influence the flow field outside the solid. Various internal treatments of the body have been discussed in the work of [START_REF] Iaccarino | Immersed boundary technique for turbulent flow simulations[END_REF], such as applying the force inside the body and thus changing the velocity distribution. Iaccarino and Verzicco [START_REF] Iaccarino | Immersed boundary technique for turbulent flow simulations[END_REF] also concluded that for direct forcing IBM, there is essentially no difference. Therefore, for simple implementation we just leave the interior of the solid free to develop a flow without imposing anything. Flow around a flapping wing In this example, we investigate the flow induced by a flapping wing, in order to demonstrate the ability of current method for handling non-circular object in both translational and rotational motions. The configuration of this problem is shown in Figure 30. The hovering wing is a geometrical 2D ellipse with major axis c (chord length) and minor axis b. The aspect ratio is defined as e = c/b. The wing is initially located at the origin with an angle of attack of θ 0 , then shifts along a stroke plane inclined at an angle β. The translational and rotational motions of the hovering wing are described as follows A(t) = A 0 2 cos( 2πt T ) + 1 , (84) θ(t) = θ 0 1 -sin( 2πt T + φ 0 ) , (85) where A 0 is the translational amplitude, 2θ 0 the rotational amplitude, T the flapping period and φ 0 the phase difference. The chord length c and the maximum velocity U max = πA 0 /T along the flapping path are used as the length and the velocity scales, respectively. The Reynolds number is defined as Re = U max c/ν. 
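As a small illustration of how these prescribed kinematics can be evaluated at each time step, the sketch below computes the displacement along the stroke plane, the attack angle and their time derivatives from Eqs. (84)-(85). The struct and function names, and the mapping of A(t) to a displacement measured along the stroke plane inclined at β, are our assumptions for the example.

#include <cmath>

struct WingState {
    double xc, yc;   // centre position along the stroke plane
    double theta;    // attack angle
    double uc, vc;   // centre velocity
    double omega;    // angular velocity
};

// Evaluate Eqs. (84)-(85) and their time derivatives at time t.
WingState hoveringKinematics(double t, double A0, double theta0,
                             double T, double beta, double phi0) {
    const double kPi = 3.141592653589793;
    const double w = 2.0 * kPi / T;
    double A    = 0.5 * A0 * (std::cos(w * t) + 1.0);   // Eq. (84)
    double Adot = -0.5 * A0 * w * std::sin(w * t);
    WingState s;
    s.xc = A * std::cos(beta);    s.yc = A * std::sin(beta);
    s.uc = Adot * std::cos(beta); s.vc = Adot * std::sin(beta);
    s.theta = theta0 * (1.0 - std::sin(w * t + phi0));   // Eq. (85)
    s.omega = -theta0 * w * std::cos(w * t + phi0);
    // Each Lagrangian point X then moves with u = (uc, vc) + omega x (X - Xc),
    // which is the velocity imposed through the no-slip condition.
    return s;
}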
We employ the same parameters as used in [START_REF] Wang | Two dimensional mechanism for insect hovering[END_REF][START_REF] Xu | An immersed interface method for simulating the interaction of a fluid with moving boundaries[END_REF][START_REF] Yang | A simple and efficient direct forcing immersed boundary framework for fluid-structure interactions[END_REF] As suggested by Yang and Stern [START_REF] Yang | A simple and efficient direct forcing immersed boundary framework for fluid-structure interactions[END_REF], this simulation is performed on a large square domain of [-24c, 24c] × [-24c, 24c] to obtain a better periodicity for the results. A uniform mesh of 2400 × 2400 is employed to cover the computational domain and the mesh spacing around the wing is 0.02c, which is slightly finer than the grid resolution used in [START_REF] Xu | An immersed interface method for simulating the interaction of a fluid with moving boundaries[END_REF][START_REF] Yang | A simple and efficient direct forcing immersed boundary framework for fluid-structure interactions[END_REF]. A larger time step is selected in the present study (∆t = 0.01) based on the CFL number (CFL max = 0.72), while a much smaller time step ∆t = 0.001 is used in the immersed interface method (IIM) of Xu and Wang [START_REF] Xu | An immersed interface method for simulating the interaction of a fluid with moving boundaries[END_REF] to reduce the body shape distortion. : c = 1, e = 4, A 0 = 2.5c, θ 0 = π/4, T = πA 0 /c, β = π/3, φ 0 = 0, Re = 157. Figure 31 shows the vorticity fields near the flapping wing in one flapping period at four different positions, which are very similar to those given in [START_REF] Wang | Two dimensional mechanism for insect hovering[END_REF][START_REF] Xu | An immersed interface method for simulating the interaction of a fluid with moving boundaries[END_REF][START_REF] Yang | A simple and efficient direct forcing immersed boundary framework for fluid-structure interactions[END_REF]. A pair of leading and trailing edge vortices of opposite rotation is formed into a dipole. The dipole moves downward, generating the The time history of the drag and lift coefficients are plotted in Figure 32 and compared to the results of [START_REF] Wang | Two dimensional mechanism for insect hovering[END_REF][START_REF] Xu | An immersed interface method for simulating the interaction of a fluid with moving boundaries[END_REF][START_REF] Yang | A simple and efficient direct forcing immersed boundary framework for fluid-structure interactions[END_REF]. Good agreements have been found. Note that in order to maintain the shape of the rigid body in the immersed interface method of [START_REF] Xu | An immersed interface method for simulating the interaction of a fluid with moving boundaries[END_REF], a feedback control technique is employed and the time step is kept small to reduce the shape distortion. The present immersed boundary method is found to be much more satisfactory, since no additional springs for feedback control are needed and the no-slip boundary condition is exactly imposed at the interface. A grid convergence study is also conducted to assess the accuracy of current MIBM in this case. A domain size of [-4D, 4D] × [-4D, 4D] is chosen and the grid spacing varies sequentially. The numerical solution after one flapping period is used for the analysis. A fine time step of 10 -4 is selected in order to ensure the analysis is not influenced by the temporal discretization error. 
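Before quoting the observed rate, it may help to spell out how such a rate is typically extracted from a grid study: fit log E against log h by least squares over the available grids and read off the slope. The routine below is a generic post-processing sketch, not code from the paper.

#include <cmath>
#include <vector>

// Fit log(E) = p*log(h) + c over grid spacings h and L2 errors E and return
// the observed order p (assumes at least two distinct spacings).
double observedOrder(const std::vector<double>& h, const std::vector<double>& E) {
    const std::size_t n = h.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        double x = std::log(h[i]), y = std::log(E[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);   // slope of the fit
}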
Figure 33 shows the error of the horizontal velocity in L 2 norm as a function of the grid spacing. A convergence rate of around 1.29 is observed. Flow past an impulsively started cylinder As our last example we present results of a suddenly accelerated circular cylinder in a quiescent fluid at different Reynolds numbers Re = U 0 D/ν ranging from 40 to 3000, with U 0 being the cylinder moving velocity. Initially we place the cylinder with unit diameter (D = 1) at the origin and suddenly set it into motion to the left at a constant velocity U 0 = -1, as illustrated in Figure 34. We first consider the Reynolds number Re = 40 and compare our results to the IBPM of Taira and Colonius [START_REF] Taira | The immersed boundary method: A projection approach[END_REF]. A uniform grid is used to cover the computational domain with no-slip boundary condition applied at all outer boundaries. The grid resolution is h = 0.01D and the time step is set to ∆t = 0.001. Two computational domains are employed to examine the effect of finite domain size on the results, namely a large domain of [-16.5D, 13.5D] × [-15D, 15D] as used by Taira and Colonius [START_REF] Taira | The immersed boundary method: A projection approach[END_REF] and a relative smaller domain [-8D, 4D] × [-5D, 5D] as used by Mimeau et al. [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF]. The time history of the drag coefficient is plotted in Figure 36a results are in excellent agreement with the immersed boundary projection method [START_REF] Taira | The immersed boundary method: A projection approach[END_REF] on the large computational domain. When the computational domain is reduced the resulting drag coefficient is increased, which has also been observed in previous test cases. The snapshots of the vorticity field are shown in Figure 35a. Good agreements have been found compared to IBPM of Taira and Colonius [START_REF] Taira | The immersed boundary method: A projection approach[END_REF]. At this regime, a grid convergence study has been performed on a domain [-2D, 2D] × [-2D, 2D]. The time step is set to ∆t = 0.0001 and the grid spacing changes sequentially. The numerical errors are computed at t = 0.5 based on a very fine grid. Figure 37 shows the variation of the L 2 norm error for the horizontal velocity as a function of the grid spacing. A little better than first order spatial accuracy is observed. Next we increase the Reynolds number to Re = 550 and compare our results to the vortex methods of Koumoutsakos and Leonard [START_REF] Koumoutsakos | High-resolution simulations of the flow around an impulsively started cylinder using vortex methods[END_REF] and Mimeau et al. [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF]. In this case, the computational domain [-8D, 4D] × [-5D, 5D] is used and the mesh resolution is set to h = 0.005D as suggested by Mimeau et al. [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF]. The time step ∆t = 0.001 is used. The time evolution of drag coefficient is displayed in Figure 36b. The current method has difficulties in drag prediction at early times of impulsive motion, which is also encountered by the immersed boundary projection method of Taira and Colonius [START_REF] Taira | The immersed boundary method: A projection approach[END_REF] and the vortex penalization method of Mimeau et al. [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF]. 
At later stage, our results are comparable to those using vortex method. The corresponding vorticity fields are shown in Figure 35b, which compare well with the simulation results of [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF][START_REF] Mittal | A versatile sharp interface immersed boundary method for incompressible flows with complex boundaries[END_REF][START_REF] Koumoutsakos | High-resolution simulations of the flow around an impulsively started cylinder using vortex methods[END_REF][START_REF] Ploumhans | Vortex methods for high-resolution simulations of viscous flow past bluff bodes of general geometry[END_REF]. At Re = 1000, the grid is further refined to h = 0.0025D in order to solve the very thin boundary layer, while the computational domain [-8D, 4D] × [-5D, 5D] is kept unchanged. The time step is reduced to ∆t = 0.0005. As mentioned by Mimeau et al. [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF], the two-dimensional simulation performed here is valid since only the impulsive start of the flow is considered before the onset of three-dimensional instabilities. Figure 36c and Figure 35c show the drag time evolution and the snapshots of vortex structures at different stages, respectively. We notice that the predicted drag coefficient with present method is slightly higher than that with vortex methods [START_REF] Mimeau | Vortex penalization method for bluff body flows[END_REF][START_REF] Koumoutsakos | High-resolution simulations of the flow around an impulsively started cylinder using vortex methods[END_REF]. This can be attributed to the finite domain size used in the present study. Finally we increase the Reynolds number to Re = 3000. At this Reynolds number, the simulation is quite challenging as it requires a very fine grid to capture the boundary layer. We reduce the grid size to h = 0.00125D and adjust the time step respectively to ∆t = 0.0002. Due to memory limits, we select a much smaller computational domain CONCLUSIONS We presented a new implicit but very efficient formulation of immersed boundary method for simulating incompressible viscous flow past complex stationary or moving boundaries. The current method treats the boundary force and the pressure as Lagrange multipliers for satisfying the no-slip and the divergence-free constraints. The fractional step method is applied to decouple the pressure as well as the boundary force from the fluid velocity field, and the two Lagrange multipliers are solved separately within their own systems. The main advantages of current approach are the accurate imposition of the no-slip condition and the efficiency in computation. The system matrices are well conditioned and generic solvers can be used directly. Especially for moving boundaries, only the boundary force coefficient matrix is updated while the coefficient matrices of velocity and pressure remain unchanged. Even though we have only dealt with rigid boundary in this article, deformable body with its motion known a priori can also be handled. A variety of distinct two dimensional flows are simulated and the results are in excellent agreement with available data sets in the literature, demonstrating the fidelity of the proposed method. G. CAI ET AL. Figure 1 . 1 Figure 1. Staggered mesh arrangement for the pressure and the velocity. 6 S 6 .-G. CAI ET AL. not smooth in the support domain. 
Peskin [START_REF] Peskin | The immersed boundary method[END_REF] constructed a smoothed 4-point-width function, given in Eq. (5).
Correct the fluid velocity with the boundary force to account for the immersed objects.
Figure 6. Global structure of the moving immersed boundary method.
Figure 7. Contour of the scalar field after the boundary forcing: (a) the explicit direct forcing IBM of Uhlmann [10]; (b) the improved explicit direct forcing IBM of Kempe and Fröhlich [14]; (c) the multidirect forcing IBM of Luo et al. [15] and Breugem [16]; (d) present MIBM.
Figure 9. Temporal (a) and spatial (b) convergence analysis of the current fluid solver and moving immersed boundary method for the decaying vortices problem.
Figure 10. Vorticity contours and streamlines of the lid-driven cavity flow with a cylinder at Re = 1000; contour values range from -3 (blue) to 3 (red) with an increment of 0.4. Present MIBM on the left, body-conforming mesh method on the right.
Figure 11. Comparison of velocity profiles of the lid-driven cavity flow with a cylinder at Re = 1000: (a) velocity component u along x = 0.5; (b) velocity component v along y = 0.5. Solid lines: current method; dashed lines: body-conforming mesh method.
Figure 12. L2 error norm of the horizontal velocity component u as a function of grid spacing for the lid-driven cavity flow with an embedded cylinder.
...in the present work can no longer maintain the second order accuracy. Beyer and LeVeque
Figure 13. Sketch of the flow over a stationary circular cylinder.
Figure 17. Wall pressure coefficient Cp and wall vorticity Wz for flow over a stationary cylinder at Re = 40: --, results of boundary-fitted grid of Braza et al. [29]; present h = 0.04D; △, present h = 0.029D; +, present h = 0.02D.
Figure 21. Wall pressure coefficient Cp and wall vorticity Wz (time-averaged values) for flow over a stationary cylinder at Re = 100: --, results of boundary-fitted grid of Braza et al. [29]; present h = 0.04D; △, present h = 0.029D; +, present h = 0.02D.
Figure 22. Distribution of the horizontal velocity error on the 90 × 90 grid for the flow over a stationary circular cylinder.
Figure 23. L2 error norm of horizontal velocity u versus the computational grid size for the flow over a stationary circular cylinder.
Figure 24. Time evolution of drag and lift coefficients for the flow over a stationary cylinder at Re = 1000.
Figure 29. L2 norm of the horizontal velocity component u versus grid spacing for the oscillating cylinder problem.
Figure 29 shows the results of the convergence study on a domain of [-2D, 2D] × [-2D, 2D]. A time step of 10^-4 is selected and the calculation is performed for 2000 time steps. A slightly better than first order accuracy is found in this case.
Figure 30. Configuration for flow over a flapping wing.
Figure 33. L2 norm error of the horizontal velocity component u as a function of grid spacing for the flapping wing problem.
Figure 34. Sketch of the flow past an impulsively started cylinder.
[-4D, 2D] × [-3D, 3D]. The temporal
Figure 35. Computed vorticity contours for a suddenly started cylinder at different stages in the start-up process. Contour levels are set from -3 to 3 in increments of 0.4.
Figure 37. L2 norm error of velocity u for the impulsively started cylinder problem.

ACKNOWLEDGEMENT
The first author greatly acknowledges the financial support of the China Scholarship Council. The calculations were performed on the platform PILCAM2 at the Université de Technologie de Compiègne and the HPC at the Université de Strasbourg.

The resulting homogeneous Neumann boundary condition of φ^{n+1} implies that ∂p^{n+1}

Table I. Time consumption and speed-up of the CPU and GPU parallelization for solving the pressure Poisson equation on a 400 × 400 grid. The tolerance is set to 1 × 10^-10.
Parallelization | Cores | CG time (s) | CG speed-up | CG+MG time (s) | CG+MG speed-up
CPU | 1 | 63.00 | 1.0 | 25.25 | 1.0
CPU | 2 | 30.68 | 2.05 | 10.59 | 2.38
CPU | 4 | 15.26 | 4.13 | 4.56 | 5.23
CPU | 8 | 7.79 | 8.09 | 2.13 | 11.84
CPU | 16 | 4.20 | 15.00 | 1.10 | 22.92
CPU | 20 | 8.88 | 7.09 | 1.11 | 22.71
GPU | 240 | 10.36 | 6.08 | 0.63 | 40.08

Table II. Comparison of different preconditioners in GPU parallelization with the CUSP library, where the CG solver is used for solving the PPE on a 400 × 400 grid. The tolerance is set to 1 × 10^-10.
Preconditioner | Construction time (s) | Application time (s) | Speed-up | Total time (s) | Speed-up
None | 0 | 10.36 | 1.0 | 10.36 | 1.0
AINV | 1.26 | 6.52 | 1.59 | 7.78 | 1.33
MG | 0.51 | 0.12 | 86.33 | 0.63 | 16.44

Table III. Comparison of the computational time and the velocity error. The iteration number is fixed for the explicit methods of Uhlmann [10] and Kempe and Fröhlich [14], while the others are solved until convergence under a tolerance of 1 × 10^-15.
Method | Interpolation (s) | Forcing (s) | Distribution (s) | Total (s) | Iter. | Error
Uhlmann [10] | 2.77 × 10^-3 | 1.00 × 10^-6 | 3.23 × 10^-3 | 6.02 × 10^-3 | 1 | 3.01 × 10^-1
Kempe and Fröhlich [14] | 8.15 × 10^-3 | 1.00 × 10^-6 | 8.92 × 10^-3 | 1.71 × 10^-2 | 3 | 7.41 × 10^-2
Luo et al. [15] and Breugem [16] | 1.16 × 10^1 | 1.17 × 10^-3 | 1.31 × 10^1 | 3.65 × 10^1 | 4443 | 9.96 × 10^-16
Present | 4.32 × 10^-4 | 1.19 × 10^-4 | 4.41 × 10^-4 | 1.33 × 10^-2 | 60 | 8.29 × 10^-16

Table VI. Comparison of the drag, lift coefficients and the Strouhal number for the flow around a stationary cylinder at Re = 100, 200. The experimental results are marked with (⋆). Columns: CD, C′D, C′L, St.
Re = 100:
Williamson [48] ⋆ | - | - | - | 0.164
Uhlmann [10], Ji et al. [18], Braza et al. [29], Liu et al. [49], Mimeau et al. [50], Xu and Wang [51], Present (Ω = 30D × 30D, h = 0.04D), Present (Ω = 30D × 30D, h = 0.029D):
1.377 ±0.010 ±0.337 0.160   1.453 ±0.011 ±0.339 0.169   1.376 ±0.010 ±0.339 0.169   1.359 ±0.019 ±0.293 0.16   1.350 ±0.012 ±0.339 0.165   1.40 0.165 ±0.010 ±0.32 0.171   1.423 ±0.013 ±0.34   1.380 ±0.010 ±0.343 0.160
Present (Ω = 30D × 30D, h = 0.02D) | 1.379 | ±0.010 | ±0.346 | 0.160
Present (Ω = 40D × 40D, h = 0.029D) | 1.366 | ±0.010 | ±0.342 | 0.160
Present (Ω = 60D × 60D, h = 0.029D) | 1.353 | ±0.010 | ±0.335 | 0.160
Re = 200:
Williamson [48] ⋆ | - | - | - | 0.197
Taira and Colonius [17], Ji et al. [18], Braza et al. [29], Liu et al. [49], Mimeau et al. [50], Xu and Wang [51], Present:
1.35 1.354 ±0.044 ±0.682 0.20 0.196 ±0.048 ±0.68   1.386 ±0.040 ±0.766 0.20   1.31 0.192 ±0.049 ±0.69   1.44 ±0.05 0.200 ±0.75   1.42 ±0.04 0.202 ±0.66

Table VII. Effects of different discrete delta functions on the drag, lift coefficients and the Strouhal number for the flow around a stationary cylinder at Re = 100 and 200.
Delta function | CD | C′D | C′L | St
Re = 100
φ2 | 1.388 | ±0.010 | ±0.346 | 0.166
φ3 | 1.377 | ±0.010 | ±0.339 | 0.166
φ4 | 1.379 | ±0.011 | ±0.343 | 0.166
Re = 200
φ2 | 1.391 | ±0.047 | ±0.709 | 0.198
φ3 | 1.365 | ±0.044 | ±0.696 | 0.200
φ4 | 1.358 | ±0.045 | ±0.688 | 0.195
01549601
en
[ "info.info-ar" ]
2024/03/05 22:32:18
2017
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01549601v3/file/cancellation.pdf
David Defour email: [email protected] FP-ANR: A representation format to handle floating-point cancellation at run-time When dealing with floating-point numbers, there are several sources of error which can drastically reduce the numerical quality of computed results. One of those error sources is the loss of significance or cancellation, which occurs during for example, the subtraction of two nearly equal numbers. In this article, we propose a representation format named Floating-Point Adaptive Noise Reduction (FP-ANR). This format embeds cancellation information directly into the floating-point representation format thanks to a dedicated pattern. With this format, insignificant trailing bits lost during cancellation are removed from every manipulated floating-point number. The immediate consequence is that it increases the numerical confidence of computed values. The proposed representation format corresponds to a simple and efficient implementation of significance arithmetic based and compatible with the IEEE Standard 754 standard. Introduction Floating-point numbers, which are normalized by the IEEE Standard 754 standard [START_REF]IEEE Standard for Floating-Point Arithmetic[END_REF], correspond to a bounded discretization of real numbers. Therefore, a floating-point number corresponds to the representation of an exact number combined with errors due to discretization, accumulation of rounding errors or cancellation. In other words, a floatingpoint number embeds useful information along with noise linked to those errors. When numerical noise becomes dominant, for example during catastrophic cancellation, there are no more useful bits of information in the representated numbers. Unfortunately, the occurence of this situation is undetectable just by looking at the representation. This is due to the fact that with the widely used IEEE Standard 754 representation format, there is no way of distinguishing useful numerical information from noise. This problem has been identified and addressed since the late 1950s with significance arithmetic [START_REF] Goldstein | Significance arithmetic on a digital computer[END_REF]. Significance arithmetic addressed these issues by tailoring the number of digits to their needs. Significance arithmetics is regaining interest thanks to the Unum proposal [START_REF] Gustafson | The end of numerical error[END_REF] or indirectly through numerous problems encountered with exascale computers and the lack of confidence in numerical results [START_REF] Collange | Numerical reproducibility for the parallel reduction on multi-and many-core architectures[END_REF]. If the Unum system is based on real problems, the proposed solution is subject to criticism for numerous reasons as pointed out by W. Kahan [START_REF] Kahan | A critique of John L. Gustafson's. the end of error -Unum computation and his a radical approach to computation with real numbers[END_REF]. On the other hand, indirect solutions based on software solutions to detect cancellation [START_REF] Denis | Verificarlo: Checking floating point accuracy through monte carlo arithmetic[END_REF], [START_REF] Jézéquel | CADNA: a library for estimating round-off error propagation[END_REF], or avoiding rounding errors [START_REF] Collange | Numerical reproducibility for the parallel reduction on multi-and many-core architectures[END_REF] are not meant to be efficient nor effective for real time execution. This article proposes a new way to represent significant information in floating-point numbers. 
The solution consists of an altered IEEE Standard 754 representation format of the mantissa. That information is stored using a simple pattern that replaces insignificant digits. This makes such number representation almost as accurate as original IEEE Standard 754 numbers. Therefore, the proposed solution corresponds to a simple, efficient and IEEE Standard 754 compliant implementation of significance arithmetic. Preliminaries Floating-point numbers are approximations of real numbers. The concept of approximation is associated with the concept of errors. Digits of a floating-point representation number can be split into two parts; a significant and an insignificant part. This section provides some background on IEEE Standard 754 floating-point arithmetic, errors and significance arithmetic. The IEEE Standard 754 standard The current version of the floating-point standard, the IEEE Standard 754 [-2008] [START_REF]IEEE Standard for Floating-Point Arithmetic[END_REF] published in August 2008, includes the original binary formats along with three new basic formats (one binary and two decimal). Definition 1 (Floating-Point Numbers). A IEEE Standard 754 representation format is a "set of representations of numerical values and symbols" made of finite numbers, two infinities and two kinds of NaN (Not A Number). The set of finite numbers are described by a set of three integers (s,m,e) corresponding respectively to the sign, the mantissa and the exponent. The numerical value associated with this representation is (-1) s × m × b e . Values that can be represented are determined by the base or radix b (2 or 10), the number (p) of digits in the mantissa and the exponent parameter emax such that: 0 ≤ m ≤ b p -1 and 1 -emax ≤ e + p -1 ≤ emax It should be pointed out that the number e + p -1 is called the "exponent" in some literature. The value Zero is represented with a 0 mantissa and a sign bit specifying a positive or negative zero. In the case of binary formats, representation of finite numbers is made unique by choosing the smallest representable exponent. Numbers with an exponent in the normal range have the leading bit set to 1. It corresponds to an implicit bit as it is not present in the memory encoding, allowing the memory format to have one more bit of precision. This extra bit is not present for subnormal numbers which have an exponent outside the normal exponent range. For example, the IEEE Standard 754 double precision format (or binary64) is represented with 64 bits which are split into 1 sign bit, p = 52 bits of mantissa and e = 11 bits of exponent, whereas single precision format (or binary32) is represented with 32 bits split into 1 sign bit, p = 23 bit of mantissa and e = 8 bits of exponent. Floating-Point Errors Floating-point numbers representation format differs by their radix and the number of bits used for their encoding. The 2008 revision of the IEEE Standard 754 defines formats for radix 2, ranging from 16 to 128 bits. For each of these formats, the number of bits that represent the exponent and the mantissa is fixed. Therefore, the floatingpoint representation of numerical value have to be either rounded or padded with zeros in the least significant digits of the mantissa. This means that by construction, FP numbers embed errors in their representation. These errors can be separated into three groups: data uncertainty, rounding and cancellation. 2.2.1. Uncertainty. 
Uncertainty [START_REF] Denker | Uncertainty as applied to measurements and calculations[END_REF] in data is linked to initial input values produced by measurements, experimentations using physical sensors, or numerical model such as polynomial approximation [START_REF] Funaro | Polynomial approximation of differential equations[END_REF]. For example, a physical sensor producing the twenty digit value x = 12345.678901234567890 with a process exhibiting an uncertainty of U = 10 -5 , corresponds to a real value in the interval [x • (1 -U ); x • (1 + U )] = [12345.555; 12345.802]. This translates into 5 significant digits, the rest of the information corresponds solely to noise or insignificant digits. As floating-point numbers are of a fixed size, noise is kept in the representation of those numbers and remains present in all computation that follows. As those extra digits do not carry any numerical meaning, it may lead to an overconfidence in the numerical quality of the result. Rounding. Because floating-point numbers have a limited number of digits, they cannot represent real numbers accurately. When there are more digits than the format allows, the number is rounded and the leftovers are omitted. The standard defines five rounding rules, two rounding to the nearest (ties to even, ties away from zero) and three directed rounding (toward 0, -∞, +∞). Floating-point operations in IEEE Standard 754 satisfy: f l(a • b) = (a • b) • (1 + ) | | ≤ u • ∈ {+, -, ×, /} Where u = b/2•b -p depends on the radix b and the precision p , and f l() denote the result of a floating-point computation. 2.2.3. Cancellation. Cancellation occurs when two nearby quantities are subtracted and the most significant digits cancel each other. Cancellations are very common but when many digits are lost, the effect can be severe as the number of informative digits is reduced. In that case, this results in catastrophic cancellation that has a dramatical impact on the sequel of the computation. For example, let x = 1.5 × 2 0 and y = 1.0 × 2 26 be two floating-point numbers stored in binary32 format. Then the sequence of operations r = f l(f l(x + y) -y) produces the result r = 0.0 which has no correct digits, as the correct real result should be 1.5. This is due to the catastrophic cancellation which occurred during the subtraction. Such cancellations cannot be detected without additional examination of the source and destination of data elements, leaving no trace of the fact that r = 0.0 was completely incorrect. Such sequences are used for example in numerical algorithms that compute errors such as the 2sum algorithm [START_REF] Møller | Quasi double-precision in floating point addition[END_REF], [START_REF] Knuth | The Art of Computer Programming[END_REF]. Significance arithmetic Significance arithmetic [START_REF] Goldstein | Significance arithmetic on a digital computer[END_REF], [START_REF] Gray | Normalized floating-point arithmetic with an index of significance[END_REF], [START_REF] Bond | Significant digits in computation with approximate numbers[END_REF] brings a solution to the problem of representing an approximation of the error along with floating-point numbers. It relies on the concept of significant and insignificant digits. Definition 2 (Significant and insignificant digits). Significant digits of a number are digits that carry meaning contributing to a number. The number of significant digits for a p-digits number X is represented by α X and the number of insignificant digits p -α X . 
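This sequence is small enough to run directly. The sketch below evaluates fl(fl(x + y) - y) in binary32 with the values of the example (variable names and output formatting are ours) and prints the fully cancelled result.

#include <cstdio>

// With x = 1.5 * 2^0 and y = 1.0 * 2^26 in binary32, fl(fl(x + y) - y)
// returns 0.0 although the exact result is 1.5. 'volatile' keeps the
// compiler from folding the expression at higher precision.
int main() {
    volatile float x = 1.5f;
    volatile float y = 67108864.0f;   // 2^26
    float s = x + y;                  // 67108865.5 rounds to 2^26 (ulp is 8 here)
    float r = s - y;                  // all significant bits of x are lost
    std::printf("r = %g (exact result would be 1.5)\n", r);
    return 0;
}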
Significance arithmetic sets two methods to calculate a bound for the propagated and generated error called normalized significance and unnormalized significance. The normalized significance always keeps the floating-point number normalized and provides an index of significance. The unnormalized significance does not normalize floating-point numbers and uses the count of digits remaining after leading zeros as an indication of their significance. The normalized method allows as many digits as possible of a number to be retained. This requires an added index that defines the number of significant digits. There exists software implementations of significance arithmetic such as for the FORSIG [START_REF] Hyman | Forsig: an extension of fortran with significance arithmetic[END_REF] library written in Fortran, or Python [START_REF] Johansson | Basic implementation of significance arithmetic[END_REF]. With the unnormalized method [START_REF] Ashenhurst | Unnormalized floating point arithmetic[END_REF], only digits considered significant are retained. The integration of a specific pattern in the mantissa to categorize significant and insignificant digit has already been proposed for decimal computer in the BCD format [START_REF] Langdon | Method and means for tracking digit significance in arithmetic operations executed on decimal computers[END_REF]. It relies upon unused bit patterns in the BCD format which are bit-field 1010 and 1011 corresponding to respectively digits 10 and 11. More recently, Gustafson [START_REF] Gustafson | The end of numerical error[END_REF] extended significance arithmetic by proposing the Unum representation format which is able to represent exact and approximate numbers with varying mantissa and exponent field length. Even though significance arithmetic offers an approximation of the error, it is not suitable for every numerical problem related to the management of error. In particular, significance arithmetic is not meant for self-correcting numerical algorithm. A format to embed cancellation information The proposed representation format: Floating-Point Adaptive Noise Reduction (FP-ANR) is detailed in this section. It allows the user to split the mantissa in two: the significant and insignificant part. Insignificant digits, or noise, can come from initial uncertainty, or cancellation generated during computation. This format corresponds to an implementation of significance arithmetics based on the existing IEEE Standard 754 format. In this article, we will consider the radix-2 arithmetic, where bit or digit will refer to the same notion. The representation format Our goal is to propose a non-intrusive solution while being able to keep track of uncertainty due to cancellation. By non-intrusive, we mean that the proposed solution must be compatible with existing floating-point representation format without exhibiting a large overhead. This discards any solutions relying on shadow memory, or extra fields. The proposed format, named FP-ANR, is based on a modification of the mantissa that integrates information on cancellation. The modification of the mantissa consists in replacing uninformative bits, or bits lost during cancellation, by a given pattern. This pattern must be self-detectable to avoid using extra fields as in the Unum. There are two possible patterns: First, a 1 followed by as many 0s as needed, second a 0 followed by as many 1s as needed. 
With any one of these solutions, one can easily deduce the number of cancelled bits by scanning the mantissa from right to left to detect the first 1 (or 0 respectively). The assembly instruction that performs this operation is usually named Count Trailing Zero/One. The rest of this article will focus on the first pattern (1 followed by 0s). The first 1 encountered from right to left in the mantissa will be called the significant flag. With FP-ANR, one bit of the mantissa is used to represent the significant flag. It means that a number with a p-bit mantissa will have at most p -1 informative bits which is 1 bit less than the corresponding IEEE Standard 754 representation format which FP-ANR is built upon. For example, the value 1.0 which corresponds to the binary32 IEEE Standard 754 representation number The rightmost bit equal to 1 and corresponding to the significant flag, indicates the position between significant and insignificant bit in the mantissa. In other words, this representation corresponds to the floating-point number 1.0 accurate up to 23 bits. Alternatively, the FP-ANR representation string 0 01111111 00000000000010000000000 corresponds to the floating-point number 1.0 as well, but accurate to 13 bits. This slight modification affects the set of finite numbers as defined by the IEEE Standard 754 standard including normal and subnormal numbers. The representation format of special values which includes infinities, NaN and 0 remains unchanged as no significant flag is embedded. The major difference between the IEEE Standard 754 representation format and the FP-ANR format is that IEEE Standard 754 can manipulate exact values such as 1.0 whereas FP-ANR deals solely with approximation (except for 0). This is a drawback as discussed in section 2.3, which is why the proposed format cannot be considered as a universal format. Managing uncertainties With FP-ANR, uncertainty is integrated directly in the mantissa. For example, let us consider a physical process which produces the value x = 1234.56 with an uncertainty U = 10 -5 and its representations (Table 1). With the IEEE Standard 754, there is no direct solution to integrate the information on uncertainty in the representation number. It is still possible to circumvent this problem by using interval arithmetic, but this will requires at least 2 numbers. With FPANR, the information on uncertainty is integrated by evaluating the number of significant digit, which corresponds to |log 2 (U )| = 16 bits. As we can observe, with the IEEE Standard 754 format the value will be translated directly into its binary format where the last 8 insignificant bits correspond to noise. Whereas with FP-ANR we can distinguish significant and insignificant bits. When all signicant bits are lost we can keep track of that information, which is not the case with the other representation format. In that case we have information on the order of magnitude of the insignificance. This concept is similar to the concept of informatical zero represented by @.0 in CADNA [START_REF] Jézéquel | CADNA: a library for estimating round-off error propagation[END_REF]. For example, let us consider the following number where all bits of the mantissa are set to 0 and only the implicit bit is set to 1. 0 01111111 00000000000000000000000 This representation number means that there are no significant bits in the mantissa. However, there is still useful embedded information which is the order of magnitude of the error stored in the exponent. 
This information can be used in further computation involving such a number: For example, in an addition to discard bits of weight less than the one corresponding to insignificant bit. It can potentially avoid a division by zero resulting from an unwanted catastrophic cancellation where all bits are lost. Addition of FP-ANR As we have seen in section 2.2.3, the least significant bits of the mantissa are usually uninformative as they corresponds to noise due to cancelation or discretization. The information on insignificant bit has to be propagated during operations. This can be done by updating the position of the significant flag found in the result of an addition between two FP-ANR as follow. Let A, B and R be three FP-ANR numbers with respectively α A , α B and α R significant bits. The number of significant bits α R of the results R = A•B with • ∈ {+, is determined by: α R = exp R -M AX((exp A -α A ), (exp B -α B )) where exp X corresponds to the exponents of the FP-ANR number X with X ∈ {A, B, R}. One can notice that the quantities (exp A -α A ) and (exp B -α B ) correspond to the absolute error. Multiplication and division of FP-ANR Propagation of significant information during multiplication corresponds to the simplest case. The number of significant bits resulting from a multiplication between two FP-ANR numbers is estimated as follows: Let X with X ∈ A, B, R be a FP-ANR representation of the number x with x ∈ {a, b, r} respectively, with α X significant bits. The number α R of significant bits in the results R = A • B can be approximated by α R = M IN (α A , α B ). This approximation corresponds to the minimal number of bit lost during cancellation and does not consider the accumulation of it. We chose to not consider the accumulation of uncertainty as its estimation would grow too fast during uncertainty propagation. This differs from interval arithmetic that will always overestimate the error. For comparison purposes, lets consider the case of accumulation of uncertainty. The number of significant bit α X corresponds to an error e X in X is such that X = x • (1 + e X ) with |e X | ≤ 2 -α X . |e r | ≤ 2 -α A + 2 -α B + 2 -α A -α B . The error e r is maximal when α A = α B . A more accurate approximation that considers the accumulation of uncertainty for the multiplication is α R = M IN (α A , α B ) -2 when α A = α B M IN (α A , α B ) -1 when α A = α B (1) For similar reasons, we decided to not consider the accumulation of uncertainty for the division R = A/B. The number α R of significant bits in the results R = A/B is set to α R = M IN (α A , α B ). This differs from a solution considering the accumulation of uncertainty as follows. The error for the division can be expressed as R = |e r | ≤ 2 -α A + 2 -α B + 2 -α A -α B . Therefore, equation 1 corresponds to a more accurate approximation that consider the accumulation of uncertainty for the division. Other operations using FP-ANR We can consider the propagation of uncertainty in the case of more complex operations as well (e.g. exponential, logarithms or trigonometric function). Such functions have already been considered in previous work on significant arithmetic [START_REF] Goldstein | Significance arithmetic on a digital computer[END_REF]. Let R and X be FP-ANR representations of the number r and x respectively, with α R and α X significant bits. We would like to estimate the number of significant bits α R when R = f (X) with f a function of X. 
The number of significant digits can be approximated using results on the propagation of uncertainty. It can be done by looking at the extremum on the interval of values corresponding to the initial uncertainty interval [X • (1 - e X ); X • (1 + e X )] with e X the error in X such that |e X | ≤ 2 -α X . This uncertainty can be estimated using a first-order Taylor series expansion. It consists in replacing the function f by its local tangent: f (X • (1 + e X )) = f (X) + f (X) • X • e X + o(X • e X ) with o(x) a function which quickly tends toward 0. Therefore, the uncertainty in the result R can be estimated by: e R ≈ |f (X) • X • e X | This estimation is valid only if the function is considered quasi-linear and quasi-Gaussian on the interval [X • (1 - e X ); X • (1 + e X )] . This corresponds to an estimation of the number of significant bits of the result α R : α R = log 2 f (X) f (X) • X • e X Table 2. APPROXIMATION OF THE NUMBER OF SIGNIFICANT DIGITS α R FOR SOME FUNCTIONS f (X) . R = f (X) f (X) α R ≈ √ X -1 2• √ X α X + log 2 |2| = α X + 1 exp(X) exp(X) α X + log 2 |1/X| = α X -log 2 |X| ln(X) 1/X α X + log 2 |ln(X)| sin(X) cos(X) α X + log 2 sin(X) X•cos(X) cos(X) -sin(x) α X + log 2 cos(X) X•sin(X) Combining this equation with the estimation of the number of significant digits α X = -log 2 |e X |, we get: α R -α X ≈ log 2 f (X) f (X) • X where log 2 is approximated using the exponent part of its floating-point representation format. Table 2 summarizes some of these approximation of the number of significant digits α R for some functions f (X). Rounding in FP-ANR It should be noticed that the presence of the significant flag is independent of the rounding problem. Therefore, we propose to use similar rounding strategies with FP-ANR as done with the IEEE Standard 754 format. The only difference being the bit position, where rounding will be done. With FP-ANR, rounding is operated on the last bit of the significant part, whereas it is done on the last bit of the mantissa for the IEEE Standard 754 representation format. However, the exact impact of rounding remains to be evaluated. FP-ANR and the Table Maker's Dilemma In addition to the propagation of the significant flag, there is another problem regarding elementary functions: the Table Maker's Dilemma [START_REF] Lefèvre | Towards correctly rounded transcendentals[END_REF]. The Table Maker's Dilemma corresponds to the problem of computing approximations of elementary functions with enough bits to ensure correct rounding. This problem is known to be difficult with the IEEE Standard 754 representation format since there is no bound on the number of bits required for every function and every format. With FP-ANR, the Table Maker's Dilemma is circumvented as follows. One can set a target accuracy t function of the number of significant bits α X of the input number X mandatory to evaluate the results of an elementary function. For example, one can set t = 2 • α X . If rounding can be done, then the process ends. If rouding is not possible, this corresponds to a hard to round case meaning that we are not sure of the last bit in the significant part. This uncertainty due to the Table Maker's Dilemma can be integrated in the FP-ANR format by left-shifting the significant flag one position. This way reproducibility and portability of the results provided by correct rounding is preserved. Interaction between FP-ANR and IEEE Standard 754 One major advantage of FP-ANR is that it is compatible with the IEEE Standard 754 representation format. 
As with any format, compatibility can be assured thanks to conversion. Conversion between those two formats is straightforward as only the mantissa must be modified. From FP-ANR to IEEE Standard 754 format, this can be done by replacing the significant flag with a 0. From IEEE Standard 754 to FP-ANR format, this can be done by replacing the right-most bit of the mantissa by the significant flag (a 1 in the last position). In addition to conversion, one can notice that the IEEE Standard 754 operators can process FP-ANR numbers. This will not lead to a crash or irrelevant results: it merely modifies the meaning of the insignificant bits. Nevertheless, it should not be considered as a serious issue as those bits correspond to noise. However when FP-ANR operators process IEEE Standard 754 numbers, the situation becomes more problematic, as the meaning of the resulting number depends on the position of the last bit set to 1. Implementations Software implementation In this section, we describe a simplified software emulation of the proposed format. This section focuses on basic operations on the FP-ANR format related to the IEEE Standard 754 binary32 format. Two C++ classes have been implemented to deal with single and double precision formats. These two classes are based on the header file of the CADNA library [START_REF] Jézéquel | CADNA: a library for estimating round-off error propagation[END_REF], where the code related to stochastic arithmetic is replaced with operations on significance arithmetic. This library can advantageously replace the IEEE Standard 754 formats (float, double) and major operations for those formats. It is available for download at http://perso.univ-perp.fr/david.defour/ One can notice that the biggest advantage of the FP-ANR over other solutions that require extra memory (shadow memory or extra fields), is that it could be easily integrated in a compiler pass. Indeed, memory allocation, bit manipulations (such as extraction of exponent, sign,...), tricky pointer manipulation are straightforward with the proposed format. However, such implementations is out of the scope for this article and will be developed in future work. Conversion. Programs in Listing 1 rely on the ieee754.h header file provided by many Linux distributions. This header file defines the type ieee754 float that eases access to the bitfield of floating-point numbers. The two functions convert a number between binary32 and the FP-ANR format by managing the significant flag according to the rules defined in section 3.8. / / Remove t h e s i g n i f i c a n t f l a g d . i e e e . s i g n i f i c a n d ˆ= 1<<c ; } * p = 22-c ; r e t u r n ( d . f ) ; } 4.1.2. Operations. We wrote a set of operations over FP-ANR numbers. Listing 2 describes how information on cancellation is propagated during addition and multiplication. Listing 2. 
Functions to perform addition and multiplication over FP-ANR format

float FpAnrAdd(float a1, float a2) {
    float res;
    int e1, e2, er;
    int p1, p2;
    res = FpAnr2Float(a1, &p1) + FpAnr2Float(a2, &p2);
    frexp(a1, &e1);
    frexp(a2, &e2);
    frexp(res, &er);
    return Float2FpAnr(res, er - MAX((e1 - p1), (e2 - p2)));
}

float FpAnrMul(float a1, float a2) {
    float res;
    int p1, p2;
    res = FpAnr2Float(a1, &p1) * FpAnr2Float(a2, &p2);
    return Float2FpAnr(res, MIN(p1, p2));
}

One can notice that this simplified version implements truncation as the rounding mode. There are two solutions to implement other roundings. The first and easiest solution consists of adding a given quantity to the mantissa followed by a truncation. However, this solution is subject to the double rounding problem [START_REF] Martin-Dorel | Some issues related to double rounding[END_REF]. The second solution consists of allowing the hardware to perform the rounding at the right position in the mantissa. It can be done by right shifting the mantissa so that the least significant bit of the significant part is aligned with the position where rounding is applied.

Hardware implementation

Hardware implementation of the FP-ANR is more straightforward and simpler than the software solution. As the FP-ANR and the IEEE Standard 754 formats are similar, FP-ANR can rely on the existing IEEE Standard 754 hardware implementation. The only difference is the introduction of the necessary hardware to manage the position of the significant flag and rounding. This requires the introduction of a trailing zero count at the input and mantissa shifters for the rounding. This operation can be done with a priority encoder corresponding to a chain of elements with a ripple signal, scanning the bits of the mantissa from right to left. The ripple signal signifies that "nothing before it" is valid, and it could be replaced with a tree of OR gates to split the mantissa between its significant and insignificant parts. This could be done using a carry lookahead implementation. Figure 4.2 exhibits a simple implementation of this operation based on a tree of OR gates.

Example

Cancellation can affect the convergence and accuracy of iterative numerical algorithms. As an example, let us consider Archimedes' formulae to compute an approximation of π. The iteration is given by:

t_0 = 1/√3 ,   t_{i+1} = (√(t_i^2 + 1) - 1) / t_i ,   π ≈ 6 × 2^i × t_i

We have implemented this iteration using IEEE-754 double precision and the 64-bit FP-ANR format. Results for iterations up to i = 27 are given in Table 3. One can notice that the accuracy of the approximations based on the IEEE-754 format increases up to the 13th iteration and then slowly degrades until the 26th iteration. Moreover, starting from the 27th iteration, a problematic Not-a-Number appears. The rightmost columns of Table 3 report, for the FP-ANR format, both the value and the number of bits considered significant. This valuable information helps to avoid invalid operations resulting in a NaN. On this example, one can notice that with IEEE-754 there is no way to determine when the result is wrong and how wrong it is, whereas with FP-ANR the number of bits that can be considered valid is known at each iteration and invalid operations can be avoided.
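For reference, a minimal IEEE-754 double-precision implementation of this recurrence is sketched below; it reproduces the behaviour of the IEEE-754 column of Table 3 (names and output format are ours).

#include <cmath>
#include <cstdio>

// Archimedes recurrence: the subtraction sqrt(t*t + 1) - 1 cancels more and
// more leading bits as t -> 0, which is exactly the loss of significance
// that FP-ANR tracks (compare with Table 3).
int main() {
    double t = 1.0 / std::sqrt(3.0);               // t_0
    for (int i = 1; i <= 27; ++i) {
        t = (std::sqrt(t * t + 1.0) - 1.0) / t;    // cancellation-prone form
        double pi_approx = 6.0 * std::ldexp(t, i); // 6 * 2^i * t_i
        std::printf("%2d  %.15e\n", i, pi_approx);
    }
    // The algebraically equivalent update t = t / (sqrt(t*t + 1) + 1) avoids
    // the cancellation; the naive form is kept on purpose to exhibit it.
    return 0;
}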
Comparisons with Other Methods Performance We have tested the overhead for the addition, multiplication and division of the proposed format compared to hardcoded IEEE Standard 754 operations and CADNA [START_REF] Eberhart | High performance numerical validation using stochastic arithmetic[END_REF] operations on an 2,4 Ghz Intel Core i5, with LLVM version 8.1.0. Results are reported in table 4. These results correspond to the implementation of the prototype library available at http://perso.univ-perp.fr/david.defour/. One can notice that the overhead of the FP-ANR over hardcoded operations range between 8.5 for the multiplication and 21 for the addition. If this overhead is higher than the one of CADNA, we should recall that the FP-ANR is intended to be implemented in hardware and therefore available at no cost. Comparison with Unum Recently [START_REF] Gustafson | The end of numerical error[END_REF], Gustafson proposed a modified version of significance arithmetic with an extra field (unum field) which indicates if a number is exact. However, according to William Kahan, the principal architect of IEEE 754-1985, this format presents several drawbacks [START_REF] Kahan | A critique of John L. Gustafson's. the end of error -Unum computation and his a radical approach to computation with real numbers[END_REF]. Among them, he states: • The Unum computation does not always deliver correct results. • The Unums can be expensive in terms of time and power consumption. • The bit length of Unum format can change during computation, which make its hardware implementation harder than with fixed-size format especially regarding memory allocation, de-allocation and accesses. The last two points are serious issues that the FP-ANR format does not exhibit. However, the Unum possesses some properties that the FP-ANR does not, such as being able to handle exact numbers. Comparison with Stochastic arithmetic Stochastic arithmetic provides an estimation of the numerical confidence of computed results. The CESTAC method formalizes a simplified version of discrete stochastic arithmetic using randomized rounding for each floatingpoint operation. This method is implemented using C++ overloaded operators in the CADNA library [START_REF] Jézéquel | CADNA: a library for estimating round-off error propagation[END_REF]. This library detects the number of significant digits with a high degree of confidence. It also detects instability such as cancellation, branching instability and mathematical instability. It consists of replacing each floating-point number by a set of 3 floating-point numbers plus an integer, on which stochastic operation are performed. Thanks to those extra fields, such systems provide a tighter bound than the FP-ANR format. However, similarly to the Unum format, those extra fields manipulated with the CADNA hinder memory management and performance. Comparison with Monte-Carlo arithmetic Another alternative to estimate numerical quality of computed result can be achieved by using the Monte-Carlo arithmetic suggested by Parker [START_REF] Parker | Monte Carlo arithmetic: exploiting randomness in floating-point arithmetic[END_REF]. Monte-Carlo arithmetic gathers rounding and catastrophic cancellation errors by applying randomization on input and output operands at a given virtual precision. A recent implementation of this solution has been proposed with Verificarlo [START_REF] Denis | Verificarlo: Checking floating point accuracy through monte carlo arithmetic[END_REF]. 
Verificarlo implement a LLVM pass which replaces every floating-point operation automatically with the Monte Carlo Arithmetic. Even though Verificarlo is implemented directly as a compiler pass, which makes it very efficient, the large number of execution samples necessary to collect qualitative results remains a major drawback. The solution proposed by the authors consists of running those numerous execution in parallel. Although this solution reduces the global execution time, it does not reduce the total amount of work to gather this information. Conclusions and Perspectives New representation formats for floating-point numbers were introduced in the IEEE Standard 754 [-2008] revision. This is an attempt to adapt the format to the real need of applications. However, dealing with various formats require a numerical analysis of the program, which is a tedious task that can be solely executed by the expert. Some recent work has been proposed to automate this analysis and/or the benefit of formats changes. In this article, we have presented a solution that brings the significance arithmetic up-to-date, and makes it compatible with the IEEE Standard 754 [-2008]. Significance arithmetic is a concept that adds information on significant digits to each floating-point number. It can provide information on cancellation errors, and if sufficiently accurate, on rounding error. It consists of a representation format with rules for the propagation of error. The proposed solution is a simple pattern embedded in the mantissa of floating-point numbers. This pattern is self-sufficient and does not require extra fields or memory. This solution presents numerous advantages as it is a simple concept to understand, simple to implement and proves to be memory efficient. Tests on a preliminary version shows that the cost for the detection in software of the proposed pattern is higher compared to other solutions. However, the simplicity of the solution suggests that the performance could be improved using hardware support similar to the management of the rounding modes (e.g. specific instructions or execution flag). If implemented in hardware, this solution can definitely help developers gain confidence in their code by providing an estimation of the number of significance digits at no cost or help achieve reproducibility. However, it is not meant to solve all problems related to floating-point arithmetic. Significance arithmetics is similar to the interval arithmetic, produces over-pessimistic bound as results and is unable to solve the loss of correlation between variables. For example, error computation as used in compensated algorithm cannot be evaluated with the significance arithmetic, whereas it works perfectly with the IEEE Standard 754 floating-point arithmetic. For these reasons, we suggest that the FP-ANR format should be used as a complement to the traditional IEEE Standard 754 floatingpoint arithmetic. The error for the multiplication R = A • B corresponds to R = (a • b) • (1 + e a + e b + e a • e b ) and the error term e r = e a + e b + e a • e b is such that 1+e b . This formula can be rewritten by expressing the denominator term for the error as an infinite series R = a b • (1 + e a ) • (1 -e b + e 2 b + ...). Since the error e b is required by the number format to be less than 1, e 2 b and all the higher order terms can be neglected. The error term for the division e r = e a -e b -e a • e b is such that Listing 1 . 
Listing 1. Functions to convert between FP-ANR and binary32 format.

#include <ieee754.h>
// Convert a binary32 number f
// to a p-bit FP-ANR number
float Float2FpAnr(float f, int p){
    union ieee754_float d;
    d.f = f;
    int prec = MIN(22, p);                               // clamp the precision to the mantissa width
    d.ieee.significand &= (0x7FFFFF << (23 - prec));     // drop the non-significant bits
    // Set the significant flag
    d.ieee.significand |= 1 << (22 - prec);
    return d.f;
}
// Convert a p-bit FP-ANR number
// to a binary32 number
float FpAnr2Float(float f, int *p){
    union ieee754_float d;
    int c;
    d.f = f;                                             // load the FP-ANR encoded value
    if (d.ieee.significand != 0) {
        c = count_trailing_zeros(d.ieee.significand);    // locate the significant flag

Figure 1. Generation of the significant flag from a mantissa in FP-ANR format, based on a tree of OR gates.

Table 1. Binary representation of the value 1234.56 with an uncertainty of 10^-3 %.
binary32   0 10001001 00110100101000111101100
FP-ANR     0 10001001 00110100101000110000000

The binary32 value 0 01111111 00000000000000000000000 will be represented in the FP-ANR format by 0 01111111 00000000000000000000001.

Table 3. Comparison between the IEEE-754 double precision format and the FP-ANR format for the computation of π decimals using Archimedes' formulae. Bold numbers correspond to valid decimals.
Iter.  IEEE-754                 FP-ANR value             α_R
0      3.464101615137755e+00    3.464101615137754e+00    52
1      3.215390309173475e+00    3.215390309173465e+00    49
2      3.159659942097510e+00    3.159659942097420e+00    47
3      3.146086215131467e+00    3.146086215131277e+00    45
4      3.142714599645573e+00    3.142714599644023e+00    43
5      3.141873049979866e+00    3.141873049977221e+00    41
6      3.141662747055068e+00    3.141662747046212e+00    39
7      3.141610176599522e+00    3.141610176535323e+00    37
8      3.141597034323337e+00    3.141597034060396e+00    35
9      3.141593748816856e+00    3.141593747306615e+00    33
10     3.141592927873633e+00    3.141592921689153e+00    31
11     3.141592725622592e+00    3.141592703759670e+00    29
12     3.141592671741545e+00    3.141592592000961e+00    27
13     3.141592618900886e+00    3.141592383384705e+00    25
14     3.141592671741545e+00    3.141592025756836e+00    23
15     3.141591935881973e+00    3.141588211059570e+00    21
16     3.141592671741545e+00    3.141571044921875e+00    19
17     3.141581007579364e+00    3.141540527343750e+00    17
18     3.141592671741545e+00    3.141357421875000e+00    15
19     3.141406154737622e+00    3.140625000000000e+00    13
20     3.140543492401100e+00    3.136718750000000e+00    11
21     3.140006864690968e+00    3.125000000000000e+00     9
22     3.134945375658852e+00    3.093750000000000e+00     7
23     3.140006864690968e+00    3.000000000000000e+00     5
24     3.224515243534819e+00    3.000000000000000e+00     3
25     2.791117213058638e+00    2.000000000000000e+00     1
26     0.000000000000000e+00    ERR(2^1)                  0
27     NaN                      ERR(2^1)                  0

Table 4. Execution time of common operations in the FP-ANR and the CADNA format, normalized to the IEEE Standard 754 operations.
                 double              float
Operations     FP-ANR   CADNA     FP-ANR   CADNA
Addition        11       7.5       11.3     15
Multiplication   4.7     3.5        3.96     4.0
Division         6.1     5.0        7.8     14.2
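As a usage illustration of the conversion functions in Listing 1 and of the propagation rules discussed earlier, the fragment below sketches how an FP-ANR multiplication could be emulated in software. It is not part of the original library: the helper fp_anr_precision, the MIN macro and the simplifying rule that the result keeps the smaller of the two operand precisions are assumptions made for this sketch only.

/* Hypothetical helper: returns the number of significant mantissa bits
   encoded in an FP-ANR binary32 value (position of the significant flag).
   A real implementation would reuse the logic of FpAnr2Float above.     */
extern int fp_anr_precision(float x);
extern float Float2FpAnr(float f, int p);

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Software sketch of an FP-ANR multiplication: compute the product with
   full binary32 precision, then truncate the significance of the result
   to the smaller of the two operand precisions, a deliberately simplified
   version of the propagation rule e_r = e_a + e_b + e_a*e_b.            */
float fp_anr_mul(float a, float b)
{
    int   pa = fp_anr_precision(a);   /* significant bits carried by a  */
    int   pb = fp_anr_precision(b);   /* significant bits carried by b  */
    float r  = a * b;                 /* correctly rounded product      */
    int   pr = MIN(pa, pb);           /* pessimistic significance of r  */
    return Float2FpAnr(r, pr);        /* re-embed the significance flag */
}

A production implementation would refine pr to account for the accumulated term e_a + e_b rather than simply taking the minimum; the sketch only illustrates how the encode and decode functions fit together.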
01773664
en
[ "math.math-ap", "phys.mphy" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01773664/file/NonIntegrableFragmentDistribution_20180423.pdf
Philippe Laurençot email: [email protected] MASS-CONSERVING SOLUTIONS TO COAGULATION-FRAGMENTATION EQUATIONS WITH NON-INTEGRABLE FRAGMENT DISTRIBUTION FUNCTION Keywords: 1991 Mathematics Subject Classification 45K05; coagulation - multiple fragmentation - non-integrable fragment distribution - conservation of matter. Existence of mass-conserving weak solutions to the coagulation-fragmentation equation is established when the fragmentation mechanism produces an infinite number of fragments after splitting. The coagulation kernel is assumed to increase at most linearly for large sizes and no assumption is made on the growth of the overall fragmentation rate for large sizes. However, they are both required to vanish for small sizes at a rate which is prescribed by the (non-integrable) singularity of the fragment distribution. Introduction A mean-field description of the dynamics of a system of particles varying their sizes by pairwise coalescence and multiple fragmentation is provided by the coagulation-fragmentation equation

∂_t f(t,x) = Cf(t,x) + Ff(t,x) , (t,x) ∈ (0,∞)² , (1.1a)

where the coagulation and fragmentation reaction terms are given by

Cf(x) := (1/2) ∫_0^x K(x−y,y) f(y) f(x−y) dy − f(x) ∫_0^∞ K(x,y) f(y) dy (1.1b)

and

Ff(x) := −a(x) f(x) + ∫_x^∞ a(y) b(x,y) f(y) dy (1.1c)

for x ∈ (0,∞). We supplement (1.1) with an initial condition

f(0,x) = f^in(x) , x ∈ (0,∞) . (1.2)

In (1.1), f = f(t,x) denotes the size distribution function of the particles of size x ∈ (0,∞) at time t > 0 and K is the coagulation kernel which describes how likely particles of respective sizes x and y are to merge. The first term in the coagulation term (1.1b) accounts for the formation of particles of size x resulting from the coalescence of two particles of respective sizes y ∈ (0,x) and x−y, while the second term describes the disappearance of particles of size x as they merge with other particles of arbitrary size. The first term in the fragmentation term (1.1c) involves the overall fragmentation rate a(x) and accounts for the loss of particles of size x due to breakup, while the second term describes the contribution to particles of size x of the fragments resulting from the splitting of a particle of arbitrary size y > x, the distribution of fragments being given by the daughter distribution function b(x,y). Since no matter is lost during fragmentation events, b is required to satisfy

∫_0^y x b(x,y) dx = y , y > 0 . (1.3)

As the merging of two particles of respective sizes x and y results in a particle of size x+y, it is then expected that conservation of matter holds true during time evolution, that is,

∫_0^∞ x f(t,x) dx = ̺ := ∫_0^∞ x f^in(x) dx , t ≥ 0 , (1.4)

provided ̺ is finite.
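As a quick formal check (ignoring all integrability issues, and not spelled out in this form in the paper), the conservation of matter (1.4) follows from symmetrizing the coagulation term and from using (1.3) in the fragmentation term:

\[
\int_0^\infty x\,Cf(x)\,\mathrm{d}x
 = \frac{1}{2}\int_0^\infty\!\int_0^\infty \bigl[(x+y)-x-y\bigr]\,K(x,y)\,f(x)\,f(y)\,\mathrm{d}y\,\mathrm{d}x = 0,
\]
\[
\int_0^\infty x\,Ff(x)\,\mathrm{d}x
 = \int_0^\infty a(y)\,f(y)\left(\int_0^y x\,b(x,y)\,\mathrm{d}x - y\right)\mathrm{d}y = 0,
\]

so that, formally, the derivative of ∫_0^∞ x f(t,x) dx vanishes. The difficulty addressed in the paper is to justify this formal computation for the weak solutions that are actually constructed.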
Since the pioneering works by Melzak [START_REF] Melzak | A scalar transport equation[END_REF], McLeod [START_REF] Mcleod | On the scalar transport equation[END_REF], and Stewart [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF], the existence of weak solutions to (1.1)-(1.2) is investigated in several papers under various assumptions on the coagulation and fragmentation coefficients K, a, and b and the initial condition f in [START_REF] Banasiak | Global strict solutions to continuous coagulation-fragmentation equations with strong fragmentation[END_REF][START_REF]Analytic fragmentation semigroups and continuous coagulation-fragmentation equations with unbounded rates[END_REF][START_REF] Banasiak | Strong fragmentation and coagulation with powerlaw rates[END_REF][START_REF] Dubovskiȋ | Existence, uniqueness and mass conservation for the coagulation-fragmentation equation[END_REF][START_REF] Escobedo | Gelation and mass conservation in coagulation-fragmentation models[END_REF][START_REF] Giri | The continuous coagulation equation with multiple fragmentation[END_REF][START_REF] Giri | Weak solutions to the continuous coagulation equation with multiple fragmentation[END_REF][START_REF] Ph | On a class of continuous coagulation-fragmentation equations[END_REF][START_REF] Laurenc | From the discrete to the continuous coagulationfragmentation equations[END_REF], see also [START_REF] Ball | The discrete coagulation-fragmentation equations: existence, uniqueness, and density conservation[END_REF][START_REF] Da Costa | Existence and uniqueness of density conserving solutions to the coagulationfragmentation equations with strong fragmentation[END_REF][START_REF]The discrete coagulation equations with multiple fragmentation[END_REF][START_REF] Leyvraz | Singularities in the kinetics of coagulation processes[END_REF][START_REF] White | A global existence theorem for Smoluchowski's coagulation equations[END_REF] for the discrete coagulation-fragmentation equation. Assuming in particular the initial condition f in to be a non-negative function in L 1 ((0, ∞), xdx), the solutions to (1.1)-(1.2) constructed in the above mentioned references are either mass-conserving, i.e. satisfy (1.4), or not, according to the growth of the coagulation kernel K for large sizes or the behaviour of the overall fragmentation rate a for small sizes. Indeed, it is by now well-known that, if the growth of the coagulation kernel for large sizes is sufficiently fast (such as K(x, y) ≥ K 0 (xy) λ/2 for some λ > 1 and K 0 ), then a runaway growth takes place in the system of particles and a giant particle (or particle of infinite size) is formed in finite time, a phenomenon usually referred to as gelation [START_REF] Escobedo | Gelation and mass conservation in coagulation-fragmentation models[END_REF][START_REF] Escobedo | Gelation in coagulation and fragmentation models[END_REF][START_REF] Hendriks | Coagulation equations with gelation[END_REF][START_REF] Leyvraz | Existence and properties of post-gel solutions for the kinetic equations of coagulation[END_REF][START_REF] Leyvraz | Singularities in the kinetics of coagulation processes[END_REF]. Since all particles accounted for by the size distribution have finite size, the occurrence of gelation results in a loss of matter in the dynamical behaviour of (1.1). 
A somewhat opposite phenomenon takes place when the overall fragmentation rate a blows up as x → 0 (such as a(x) = a 0 x γ for some γ < 0 and a 0 > 0). In that case, the smaller the particles, the faster they split, leading to the instantaneous appearance of dust (or particles of size zero) and again a loss of matter takes place, usually referred to as the shattering transition [START_REF] Arlotti | Strictly substochastic semigroups with application to conservative and shattering solutions to fragmentation equations with mass loss[END_REF][START_REF] Filippov | On the distribution of the sizes of particles which undergo splitting[END_REF][START_REF] Mcgrady | Shattering" transition in fragmentation[END_REF]. We shall exclude the occurrence of these phenomena in the forthcoming analysis and focus on a different feature of the fragmentation mechanism, namely the possibility that an infinite number of fragments is produced during breakup. Specifically, a common assumption on the daughter distribution function b in the above mentioned references is its integrability, that is, n 0 (y) := y 0 b(x, y) dx < ∞ for almost every y > 0 , (1.5) which amounts to require that the splitting of a particle of size y produces n 0 (y) daughter particles, a particularly important case being the so-called binary fragmentation corresponding to n 0 (y) = 2 for all y > 0. Clearly, the integrability property (1.5) fails to be true when b is given by [START_REF] Mcgrady | Shattering" transition in fragmentation[END_REF] b ν (x, y) = (ν + 2) x ν y ν+1 , 0 < x < y , ν ∈ (-2, -1] . (1.6) Observe that the restriction ν > -2 guarantees that b ν satisfies (1.3). As far as we know, the existence of weak solutions to the coagulation-fragmentation equation (1.1)-(1.2) when the daughter distribution function b is given by (1.6) and the coagulation kernel is unbounded has not been considered so far, except the particular case K(x, y) = xy, a(x) = x, and ν = -1 which is handled in [START_REF] Laurenc | Absence of gelation and self-similar behavior for a coagulation-fragmentation equation[END_REF] by an approach exploiting fully the specific structure of the coefficients. The purpose of this note is to fill this gap, at least for coagulation kernels growing at most linearly for large sizes. In fact, the main difficulty to be overcome is the following: due to the non-integrability of b ν , the natural functional setting, which is the space L 1 ((0, ∞), (1 + x)dx), can no longer be used. As we shall see below, it might be replaced by the smaller space L 1 ((0, ∞), (x m + x)dx) for some m ∈ (0, 1) such that y 0 x m b(x, y) dx = (ν + 2)y m 1 0 z m+ν dz < ∞ , y > 0 , that is, m > -1 -ν ≥ 0. This choice requires however the coagulation kernel K and the overall fragmentation rate a to vanish in an appropriate way for small sizes. More precisely, we assume that there are K 0 > 0 and m 0 ∈ (-1 -ν, 1) such that 0 ≤ K(x, y) = K(y, x) ≤ K 0 (2 + x + y) , (x, y) ∈ (0, ∞) 2 , (1.7a) and, for all R > 0, L R := sup (x,y)∈(0,R) 2 K(x, y) min{x, y} m 0 < ∞ . (1.7b) We also assume that, for all R > 0, there is A R > 0 such that a(x) ≤ A R x m 0 +ν+1 , x ∈ (0, R) . (1.8) Roughly speaking, K and a are required to vanish faster for small sizes as the singularity of b ν . For instance, the coagulation kernel K and the homogeneous overall fragmentation rate a given by K(x, y) = x α y β + x β y α , a(x) = x γ , (x, y) ∈ (0, ∞) 2 , (1.9) with -1 -ν < α ≤ β ≤ 1 -α and γ > 0 satisfy (1.7) and (1.8), respectively, for any m 0 ∈ (-1 -ν, max{α, γ -1 -ν}]. 
Observe in particular that no growth constraint is required on a for large sizes. Before stating our results, let us introduce some notation: given m ∈ R, we define the space X m and its positive cone X + m by X m := L 1 ((0, ∞), x m dx) , X + m := {f ∈ X m : f ≥ 0 a.e. in (0, ∞)} , respectively, and denote the space X m endowed with its weak topology by X m,w . We also put M m (f ) := ∞ 0 x m f (x) dx , f ∈ X m . We begin with the existence of mass-conserving weak solutions to (1.1)-(1.2) when the initial condition f in lies in X + m 0 ∩ X 1 for some m 0 ∈ (-ν -1, 1) and decays in a suitable way for large sizes. Theorem 1.1. Let ν ∈ (-2, -1]. Assume that the coagulation kernel K and the overall fragmentation rate a satisfy (1.7) and (1.8), respectively, and that the daughter distribution function b = b ν is given by (1.6). Given an initial condition f in ∈ X + m 0 ∩ X 1 such that ∞ 0 x ln(ln(x + 5))f in (x) dx < ∞ , (1.10) there exists at least one weak solution f ∈ C([0, ∞); X m 0 ,w ∩ X 1,w ) on [0, ∞) to (1.1)-(1.2) such that M 1 (f (t)) = M 1 (f in ) , t ≥ 0 . (1.11) Moreover, if f in ∈ X m for some m > 1, then f ∈ L ∞ (0, T ; X m ) for all T > 0. When b = b ν and ν > -1, the existence of mass-conserving weak solutions on [0, ∞) to (1.1)-(1.2) is already known for f in ∈ X + 0 ∩ X 1 , the assumptions (1.7b) and (1.8) being replaced by the local boundedness of K and a [START_REF] Laurenc | From the discrete to the continuous coagulationfragmentation equations[END_REF][START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF]. In particular, it does not require the additional integrability condition (1.10) on f in and it is yet unclear whether Theorem 1.1 is valid under the sole assumption f in ∈ X + 0 ∩ X 1 . In fact, the condition (1.10) can be replaced by f in ∈ L 1 ((0, ∞), w(x)dx) for any weight function w enjoying the properties listed in Lemma 2.4 below, which includes weights involving multiple iterates of the logarithm function. We nevertheless restrict our analysis to the specific choice of the weight function in (1.10) as we have not yet identified an "optimal" class of initial conditions for which Theorem 1.1 is true. The proof of Theorem 1.1 proceeds along the lines of the existence proofs performed in [START_REF] Laurenc | From the discrete to the continuous coagulationfragmentation equations[END_REF][START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF], the main differences lying in the control of the behaviour for small sizes and its consequences on the behaviour for large sizes. It relies on a weak compactness method in X m 0 , the starting point being the construction of a truncated version of (1.1) for which well-posedness can be established by a classical Banach fixed point argument. The compactness approach involves five steps: we first show that the additional integrability assumption (1.10) is preserved throughout time evolution and additionally provides a control on the behaviour of the fragmentation term for large sizes whatever the growth of the overall fragmentation rate a. We next turn to the behaviour for small sizes and prove boundedness in X m 0 , the outcome of the previous step being used to control the contribution of the fragmentation gain term. The third and fourth steps are more classical and devoted to the uniform integrability and the time equicontinuity in X m 0 , respectively. 
In the last step, we gather the outcome of the four previous ones to show the convergence of the solutions of the truncated problem as the truncation parameter increases without bound and that the limit thus obtained is a weak solution on [0, ∞) to (1.1)-(1.2) which satisfies the conservation of matter (1.11). We finally use the same argument as in the first step to prove the stability of X m for m > 1. Remark 1.2. For simplicity, the analysis carried out in this paper is restricted to the daughter distribution functions given by (1.6). The proof of Theorem 1.1 can however be adapted to encompass a broader class of daughter distribution functions b featuring a non-integrable singularity for small fragment sizes, see [START_REF] Banasiak | Analytic methods for coagulationfragmentation models[END_REF]. In the same vein, existence of weak solutions (not necessarily mass-conserving) can be proved by a similar approach when the coagulation kernel K features a faster growth than (1.7a) and we refer to [START_REF] Banasiak | Analytic methods for coagulationfragmentation models[END_REF] for a more complete account. We supplement the existence result with a uniqueness result, which is however only valid for a smaller class of coagulation kernels K and initial conditions f in . Theorem 1.3. Let ν ∈ (-2, -1] and δ ∈ (0, 1). Assume that the coagulation kernel K and the overall fragmentation rate a satisfy (1.7) and (1.8), respectively, and that the daughter distribution function b = b ν is given by (1.6). Assume also that there is K 1 > 0 such that K(x, y) ≤ K 1 x m 0 y , (x, y) ∈ (0, 1) × (1, ∞) . (1.12) Given an initial condition f in ∈ X + m 0 ∩ X 2+δ , there is a unique mass-conserving weak solution f on [0, ∞) to (1.1)-(1.2) such that f ∈ L ∞ (0, T ; X 2+δ ) for all T > 0. Observing that any function f in ∈ X + m 0 ∩ X 2+δ satisfies (1.10), the existence part of Theorem 1.3 readily follows from Theorem 1.1. As Theorem 1.1, Theorem 1.3 applies to (1.1)-(1.2) when the coagulation kernel K and the overall fragmentation rate a are given by (1.9) with -1 -ν < α ≤ β ≤ 1 -α, γ > 0, and m 0 ∈ (-1 -ν, max{α, γ -1 -ν}]. We also mention that, when b = b ν and ν > -1, Theorem 1.3 is valid without the assumption (1.12) on K and for f in ∈ X + 0 ∩ X 2 [START_REF] Laurenc | On coalescence equations and related models[END_REF][START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, nonuniqueness and a hydrodynamic limit for the stochastic coalescent[END_REF]. As in [START_REF] Escobedo | On self-similarity and stationary problem for fragmentation and coagulation models[END_REF][START_REF] Laurenc | On coalescence equations and related models[END_REF][START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, nonuniqueness and a hydrodynamic limit for the stochastic coalescent[END_REF][START_REF]A uniqueness theorem for the coagulation-fragmentation equation[END_REF], the uniqueness proof relies on a control of the distance between two solutions in a weighted L 1 -space, the delicate point being the choice of an appropriate weight which turns out to be ξ(x) := max{x m 0 , x 1+δ }, x > 0, here. The superlinearity of ξ for large sizes compensates the sublinearity of ξ for small sizes which gives a positive contribution of the fragmentation term. 
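Before entering the proofs, it may help to record explicitly why b_ν conserves mass and which of its moments are finite; the elementary computation below is consistent with (1.3), (1.6) and the moment identity quoted above, and is added here only for the reader's convenience.

\[
\int_0^y x\,b_\nu(x,y)\,\mathrm{d}x = (\nu+2)\,y^{-\nu-1}\int_0^y x^{\nu+1}\,\mathrm{d}x = y,
\qquad
\int_0^y x^m\,b_\nu(x,y)\,\mathrm{d}x = \frac{\nu+2}{m+\nu+1}\,y^m \quad\text{for } m > -1-\nu,
\]

while the second integral diverges when m ≤ −1−ν. In particular n_0(y) = ∫_0^y b_ν(x,y) dx = ∞ for every ν ∈ (−2,−1], which is precisely the non-integrability of the fragment distribution that forces the functional setting X_{m_0} ∩ X_1 used below.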
Existence Throughout this section, the parameter ν ∈ (-2, -1] is fixed and we assume that the coagulation kernel K and the overall fragmentation rate a satisfy (1.7) and (1.8), respectively, while the daughter distribution function b = b ν is given by (1.6). Also, f in is a function in X + m 0 ∩ X 1 enjoying the additional integrability property (1.10) and we set ̺ := M 1 (f in ). Preliminaries. Let us first begin with the definition of a weak solution to (1.1)-(1.2). Definition 2.1. Let T ∈ (0, T ]. A weak solution on [0, T ) to (1.1)-(1.2) is a non- negative function f ∈ C([0, ∞); X m 0 ,w ∩ X 1,w ) on [0, ∞) such that, for all t ∈ (0, T ) and ϑ ∈ Θ m 0 , ∞ 0 (f (t, x) -f in (x))ϑ(x) dx = 1 2 ∞ 0 ∞ 0 K(x, y)χ ϑ (x, y)f (t, x)f (t, y) dydx - ∞ 0 a(x)N ϑ (x)f (t, x) dx , (2.1) where χ ϑ (x, y) := χ(x + y) -χ(x) -χ(y) , (x, y) ∈ (0, ∞) 2 , N ϑ (y) := ϑ(y) - y 0 ϑ(x)b ν (x, y) dx , y > 0 , and Θ m 0 := {ϑ ∈ C m 0 ([0, ∞)) ∩ L ∞ (0, ∞) : ϑ(0) = 0} . Observe that, for ϑ ∈ Θ m 0 and (x, y) ∈ (0, ∞) 2 , |χ ϑ (x, y)| ≤ 2 ϑ C m 0 min{x, y} m 0 and |N ϑ (y)| ≤ 1 + ν + 2 ν + m 0 + 1 ϑ C m 0 y m 0 , so that the right-hand side of (2.1) is well-defined. We next define a particular class of convex functions on [0, ∞) which proves useful in the forthcoming analysis. Definition 2.2. A non-negative and convex function ϕ ∈ C ∞ ([0, ∞)) belongs to the class C V P,∞ if it satisfies the following properties: (a) ϕ(0) = ϕ ′ (0) = 0 and ϕ ′ is concave; (b) lim r→∞ ϕ ′ (r) = lim r→∞ ϕ(r)/r = ∞; (c) for p ∈ (1, 2), S p (ϕ) := sup r≥0 ϕ(r) r p < ∞ . For instance, r → (r + 1) ln(r + 1) -r lies in C V P,∞ . Functions in the class C V P,∞ enjoy several properties which we list now. Lemma 2.3. Let ϕ ∈ C V P,∞ . Then (a) r → ϕ(r)/r is concave on [0, ∞); (b) for r ≥ 0, 0 ≤ ϕ(r) ≤ rϕ ′ (r) ≤ 2ϕ(r) ; (c) for (r, s) ∈ [0, ∞) 2 , sϕ ′ (r) ≤ ϕ(r) + ϕ(s) , (2.2) and 0 ≤ ϕ(r + s) -ϕ(r) -ϕ(s) ≤ 2 sϕ(r) + rϕ(s) r + s , (2.3) 0 ≤ ϕ(r + s) -ϕ(r) -ϕ(s) ≤ ϕ ′′ (0)rs . (2.4) Proof. Since ϕ(r + s) -ϕ(r) -ϕ(s) = r 0 s 0 ϕ ′′ (r * + s * ) ds * dr * , (r, s) ∈ [0, ∞) 2 , the inequality (2.4) readily follows from the concavity of ϕ ′ . The other properties listed in Lemma 2.3 are consequences of the convexity of ϕ and the concavity of ϕ ′ and are proved, for instance, in [19, Proposition 14]. We finally collect some properties of the weight involved in the assumption (1.10). Lemma 2.4. Define W (x) := x ln(ln(x+5))-x ln(ln(5)) for x ≥ 0. Then W ∈ C V P,∞ and, for all m ∈ [0, 1), lim x→∞ xW ′ (x) -W (x) x m = ∞ . 2.2. Approximation. Let j ≥ 2 be an integer and set K j (x, y) := K(x, y)1 (0,j) (x + y) , a j (x) := a(x)1 (0,j) (x) , (2.5) and f in j (x) := f in (x)1 (0,j) (x) (2.6) for (x, y) ∈ (0, ∞) 2 . We denote the coagulation and fragmentation operators with K j and a j instead of K and a by C j and F j , respectively. Simple computations along with (1.7b) and the boundedness of a j resulting from (1.8) show that both C j and F j are locally Lipschitz continuous from L 1 ((0, j), x m 0 dx) into itself [START_REF] Banasiak | Analytic methods for coagulationfragmentation models[END_REF]. Thanks to these properties, we are in a position to apply the Banach fixed point theorem to prove that there is a unique non-negative function f j ∈ C 1 ([0, ∞), L 1 ((0, j), x m 0 dx)) solving ∂ t f j (t, x) = C j f j (t, x) + F j f j (t, x) , (t, x) ∈ (0, ∞) × (0, j) , (2.7a ) f j (0, x) = f in j (x) , x ∈ (0, j) . 
(2.7b) Introducing the space Θ m 0 ,j := {ϑ ∈ C m 0 ([0, j]) : ϑ(0) = 0}, it readily follows from (2.5) and (2.7) that f j satisfies the following weak formulation of (2.7): for all ϑ ∈ Θ m 0 ,j and t > 0, d dt j 0 ϑ(x)f j (t, x) dx = 1 2 j 0 j-x 0 K(x, y)χ ϑ (x, y)f j (t, x)f j (t, y) dydx - j 0 a(x)N ϑ (x)f j (t, x) dx , (2.8) the functions χ ϑ and N ϑ being defined in Definition 2.1. Choosing ϑ(x) = x, x ∈ (0, j), in (2.8) readily gives the conservation of matter j 0 xf j (t, x) dx = j 0 xf in (x) dx , t ≥ 0 . Extending f j to [0, ∞) × (j, ∞) by zero (i.e. f j (t, x) = 0 for (t, x) ∈ [0, ∞) × (j, ∞)), the previous identity reads M 1 (f j (t)) = M 1 (f in j ) ≤ ̺ = M 1 (f in ) , t ≥ 0 . (2.9) In addition, introducing C 0 := M m 0 (f in ) + ∞ 0 W (x)f in (x) dx < ∞ , (2.10) which is finite according to the integrability properties of f in and in particular (1.10), we infer from (2.6), (2.10), and the non-negativity of W that M m 0 (f in j ) + ∞ 0 W (x)f in j (x) dx ≤ C 0 . (2.11) We now investigate the weak compactness features of the sequence (f j ) j≥2 . In the sequel, C and C i , i ≥ 1, denote positive constants depending on K, a, ν, m 0 , f in , ̺, and C 0 . Dependence upon additional parameters is indicated explicitly. Moment Estimates. We start with a control on the behaviour for large sizes. Lemma 2.5. Let T > 0. There is C 1 (T ) > 0 depending on T such that, for t ∈ [0, T ], ∞ 0 W (x)f j (t, x) dx + t 0 ∞ 0 a(x)[xW ′ (x) -W (x)]f j (s, x) dxds ≤ C 1 (T ) . Proof. On the one hand, since W ∈ C V P,∞ by Lemma 2.4, we infer from (1.7a), (2.3), and (2.4) that, K(x, y)χ W (x, y) ≤ 2K 0 W ′′ (0)xy + 2K 0 [xW (y) + yW (x)] (2.12) for (x, y) ∈ (0, ∞) 2 . On the other hand, the function W 1 : x → W (x)/x is concave according to Lemma 2.3 (a) and it follows from (1.6) that, for y > 0, N W (y) = y 0 [W 1 (y) -W 1 (x)]xb ν (x, y) dx ≥ y 0 W ′ 1 (y)x(y -x)b ν (x, y) dx ≥ yW ′ (y) -W (y) y 2 y 0 x(y -x)b ν (x, y) dx = yW ′ (y) -W (y) ν + 3 . (2.13) Combining (2.8) with ϑ = W , (2.12), and (2.13) gives, for t > 0, d dt ∞ 0 W (x)f j (t, x) dx ≤ K 0 j 0 j-x 0 [W ′′ (0)xy + xW (y) + yW (x)] f j (t, x)f j (t, y) dydx - 1 ν + 3 j 0 a(y)[yW ′ (y) -W (y)]f j (t, y) dy . We further deduce from Lemma 2.3 (b) and (2.9) that d dt ∞ 0 W (x)f j (t, x) dx + 1 2 ∞ 0 a(y)[yW ′ (y) -W (y)]f j (t, y) dy ≤ K 0 W ′′ (0)̺ 2 + 2K 0 ̺ ∞ 0 W (x)f j (t, x) dx , Integrating the previous differential inequality and using (2.11) complete the proof. Exploiting the properties of W for large sizes along with the outcome of Lemma 2.5 provides additional information on the fragmentation term. Corollary 2.6. Let T > 0 and m ∈ (0, 1). There is C 2 (m, T ) > 0 depending on m and T such that T 0 P m,j (s)ds ≤ C 2 (m, T ) , P m,j (s) := ∞ 1 x m a(x)f j (s, x) dx , s ∈ [0, T ] . Proof. Owing to Lemma 2.4, there is x m > 1 such that xW ′ (x) -W (x) ≥ x m for x > x m . Therefore, by (1.8), (2.9), Lemma 2.3 (b), and Lemma 2.5, T 0 ∞ 1 x m a(x)f j (s, x) dxds ≤ T 0 xm 1 x m a(x)f j (s, x) dxds + T 0 ∞ xm x m a(x)f j (s, x) dxds ≤ A xm x m 0 +ν+1 m T 0 xm 1 x m f j (s, x) dxds + T 0 ∞ xm [xW ′ (x) -W (x)]a(x)f j (s, x) dxds ≤ A xm x m 0 +ν+1 m ̺T + C 1 (T ) , and the proof is complete. We now study the behaviour for small sizes. Lemma 2.7. Let T > 0. There is C 3 (T ) > 0 depending on T such that M m 0 (f j (t)) ≤ C 3 (T ) , t ∈ [0, T ] . Proof. 
Setting ϑ 0 (x) := min{x, x m 0 } for x > 0, we observe that χ ϑ 0 (x, y) ≤ 0 , (x, y) ∈ (0, ∞) 2 , and N ϑ 0 (y) = - 1 -m 0 ν + m 0 + 1 y m 0 , y ∈ (0, 1) , N ϑ 0 (y) = - 1 -m 0 ν + m 0 + 1 1 y ν+1 ≥ - 1 -m 0 ν + m 0 + 1 y m 0 , y ∈ (1, ∞) . We then infer from (1.8) with R = 1 and (2.8) with ϑ = ϑ 0 that d dt M m 0 (f j (t)) ≤ 1 -m 0 ν + m 0 + 1 ∞ 0 y m 0 a(y)f j (t, y) dy ≤ A 1 ν + m 0 + 1 1 0 y m 0 f j (t, y) dy + P m 0 ,j (t) ν + m 0 + 1 ≤ A 1 ν + m 0 + 1 M m 0 (f j (t)) + P m 0 ,j (t) ν + m 0 + 1 . We integrate the previous differential inequality and complete the proof with the help of Corollary 2.6. 2.4. Uniform Integrability. The previously established estimates guarantee that there is no unlimited growth for small sizes nor escape towards large sizes during the evolution of (f j ) j≥2 . To achieve weak compactness in X m 0 , it remains to prevent concentration near a finite size. For that purpose, we recall that, since f in ∈ X m 0 , a refined version of the de la Vallée Poussin theorem ensures that there is Φ ∈ C V P,∞ depending only on f in such that I := ∞ 0 x m 0 Φ(f in (x)) dx < ∞ , (2.14) see [START_REF] Banasiak | Analytic methods for coagulationfragmentation models[END_REF][START_REF] De | Sur l'intégrale de Lebesgue[END_REF][START_REF] Hoàn | Etude de la classe des opérateur m-accrétifs de L 1 (Ω) et accrétif dans L ∞ (Ω)[END_REF][START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF]. Lemma 2.8. Let T > 0 and R > 1. There is C 4 (T, R) > 0 depending on T and R such that R 0 x m 0 Φ(f j (t, x)) dx ≤ C 4 (T, R) , t ∈ [0, T ] , the function Φ being defined in (2.14). Proof. We combine arguments from [START_REF] Laurenc | From the discrete to the continuous coagulationfragmentation equations[END_REF] with the subadditivity of x → x m 0 . Let T > 0, R > 1, and t ∈ [0, T ]. On the one hand, I 1,j (R, t) := R 0 x m 0 Φ ′ (f j (t, x))C j f j (t, x) dx ≤ 1 2 R 0 x 0 x m 0 K(x -y, y)Φ ′ (f j (t, x))f j (t, x -y)f j (t, y)dydx , and, by Fubini's theorem, I 1,j (R, t) ≤ 1 2 R 0 R-y 0 (x + y) m 0 K(x, y)Φ ′ (f j (t, x + y))f j (t, x)f j (t, y)dxdy . Owing to the subadditivity of x → x m 0 and the symmetry of K, we further obtain I 1,j (R, t) ≤ 1 2 R 0 R-y 0 x m 0 K(x, y)Φ ′ (f j (t, x + y))f j (t, x)f j (t, y)dxdy + 1 2 R 0 R-y 0 y m 0 K(x, y)Φ ′ (f j (t, x + y))f j (t, x)f j (t, y)dxdy ≤ 1 2 R 0 R-x 0 y m 0 K(x, y)Φ ′ (f j (t, x + y))f j (t, x)f j (t, y)dydx + 1 2 R 0 R-x 0 y m 0 K(x, y)Φ ′ (f j (t, x + y))f j (t, x)f j (t, y)dydx ≤ R 0 R-x 0 y m 0 K(x, y)Φ ′ (f j (t, x + y))f j (t, x)f j (t, y)dydx . Since Φ ∈ C V P,∞ , we infer from (2.2) with r = f j (t, x + y) and s = f j (t, y) that I 1,j (R, t) ≤ R 0 R-x 0 y m 0 K(x, y) [Φ(f j (t, x + y)) + Φ(f j (t, y))] f j (t, x)dydx ≤ R 0 R x (y -x) m 0 K(x, y -x)Φ(f j (t, y))f j (t, x)dydx + R 0 R-x 0 y m 0 K(x, y)Φ(f j (t, y))f j (t, x)dydx . We finally use (1.7b) to conclude that I 1,j (R, t) ≤ 2L R M m 0 (f j (t)) R 0 y m 0 Φ(f j (t, y)) dy . (2.15) On the other hand, we fix p 0 > 1 satisfying 1 < p 0 < m 0 + 1 -ν ≤ 1 + m 0 -ν < 2 , which is possible as m 0 + 1 > -ν. Using again Fubini's theorem, I 2,j (R, t) := R 0 x m 0 Φ ′ (f j (t, x))F j f j (t, x) dx ≤ R 0 ∞ x x m 0 a(y)Φ ′ (f j (t, x))b ν (x, y)f j (t, y)dydx ≤ ∞ 0 a(y)f j (t, y) min{y,R} 0 x m 0 b ν (x, y)Φ ′ (f j (t, x)) dxdy . 
Since Φ ∈ C V P,∞ , it follows from (1.6), Definition 2.2 (c) with p = p 0 , and (2.2) with r = f j (t, x) and s = x ν that I 2,j (R, t) ≤ (ν + 2) ∞ 0 a(y)y -ν-1 f j (t, y) min{y,R} 0 x m 0 [Φ(x ν ) + Φ(f j (t, x))] dxdy ≤ (ν + 2)S p 0 (Φ) ∞ 0 a(y)y -ν-1 f j (t, y) y 0 x m 0 +νp 0 dxdy + (ν + 2) ∞ 0 a(y)y -ν-1 f j (t, y) R 0 x m 0 Φ(f j (t, x)) dxdy ≤ (ν + 2)S p 0 (Φ) m 0 + νp 0 + 1 ∞ 0 a(y)y m 0 +ν(p 0 -1) f j (t, y) dy + (ν + 2) ∞ 0 a(y)y -ν-1 f j (t, y) dy R 0 x m 0 Φ(f j (t, x)) dx . Since m 0 + ν(p 0 -1) and -ν -1 both belong to [0, 1) and m 0 + νp 0 + 1 > 0, we deduce from (1.8) and Lemma 2.7 that I 2,j (R, t) ≤ C A 1 1 0 y 2m 0 +1+νp 0 f j (t, y) dy + P m 0 +ν(p 0 -1) (t) + A 1 1 0 y m 0 f j (t, y) dy + P -ν-1,j (t) R 0 x m 0 Φ(f j (t, x)) dx ≤ C A 1 C 3 (T ) + P m 0 +ν(p 0 -1) (t) + [A 1 C 3 (T ) + P -ν-1,j (t)] R 0 x m 0 Φ(f j (t, x)) dx . Combining the previous inequality with (2.7), (2.15), and Lemma 2.7 gives d dt R 0 x m 0 Φ(f j (t, x)) dx = I 1,j (R, t) + I 2,j (R, t) ≤ [(2L R + A 1 )C 3 (T ) + (ν + 2)P -ν-1,j (t)] R 0 x m 0 Φ(f j (t, x)) dx + C A 1 C 3 (T ) + P m 0 +ν(p 0 -1) (t) . Owing to Corollary 2.6, we are in a position to apply Gronwall's lemma to complete the proof. Owing to (2.9), Lemma 2.8, and the superlinearity of Φ at infinity, see Definition 2.2 (b) , we deduce from Dunford-Pettis' theorem that, for all T > 0, there is a weakly compact subset K(T ) of X m 0 depending on T such that f j (t) ∈ K(T ) , t ∈ [0, T ] . (2.16) 2.5. Time Equicontinuity. Having established the (weak) compactness of the sequence (f j ) j≥2 with respect to the size variable x, we now focus on the time variable. Our aim being to apply a variant of Arzelà-Ascoli's theorem, we study time equicontinuity properties of (f j ) j≥2 in the next result. Lemma 2.9. Let T > 0 and R > 1. There are C 5 (T, R) > 0 depending on T and R and C 6 (T ) > 0 depending on T such that, for t 1 ∈ [0, T ) and t 2 ∈ (t 1 , T ], R 0 x m 0 |f j (t 2 , x) -f j (t 1 , x)| dx ≤ C 5 (R, T )(t 2 -t 1 ) + C 6 (T )R (m 0 -1)/2 . Proof. Let t ∈ [0, T ]. First, by Fubini's theorem, I 3,j (R, t) := R 0 x m 0 |C j f j (t, x)| dx ≤ 1 2 R 0 R-x 0 (x + y) m 0 K(x, y)f j (t, x)f j (t, y) dydx + R 0 R 0 x m 0 K(x, y)f j (t, x)f j (t, y) dydx + R 0 ∞ R x m 0 K(x, y)f j (t, x)f j (t, y) dydx . Owing to (1.7) and the elementary inequality (x + y) m 0 ≤ 2 max{x, y} m 0 , (x, y) ∈ (0, ∞) 2 , we further obtain I 3,j (R, t) ≤ 2L R R 0 R 0 x m 0 y m 0 f j (t, x)f j (t, y) dydx + K 0 R 0 ∞ R x m 0 (2 + x + y)f j (t, x)f j (t, y) dydx ≤ 2L R M m 0 (f j (t)) 2 + 4K 0 M m 0 (f j (t))M 1 (f j (t)) , hence, thanks to (2.9) and Lemma 2.7, I 3,j (R, t) ≤ 2L R C 3 (T ) 2 + 4̺K 0 C 3 (T ) . (2.17) Next, using once more Fubini's theorem, I 4,j (R, t) := R 0 x m 0 |F j f j (t, x)| dx ≤ R 0 x m 0 a(x)f j (t, x) dx + R 0 a(y)f j (t, y) y 0 x m 0 b ν (x, y) dxdy + ∞ R a(y)f j (t, y) R 0 x m 0 b ν (x, y) dxdy . We now infer from (1.6), (1.8), and Lemma 2.7 that I 4,j (R, t) ≤ A R R m 0 +ν+1 M m 0 (f j (t)) + ν + 2 m 0 + ν + 1 R 0 a(y)y m 0 f j (t, y) dy + ν + 2 m 0 + ν + 1 R m 0 +ν+1 ∞ R a(y)y -ν-1 f j (t, y) dy ≤ CA R R m 0 +ν+1 C 3 (T ) + CR (m 0 -1)/2 ∞ R a(y)y (m 0 +1)/2 f j (t, y) dy , hence I 4,j (R, t) ≤ CA R R m 0 +ν+1 C 3 (T ) + CR (m 0 -1)/2 P (m 0 +1)/2,j (t) . (2.18) It then follows from (2.7), (2.17), and (2.18) that R 0 x m 0 |∂ t f j (t, x)| dx ≤ I 3,j (R, t) + I 4,j (R, t) ≤ C 5 (R, T ) + CR (m 0 -1)/2 P (m 0 +1)/2,j (t) . 
Consequently, since (m 0 + 1)/2 ∈ (0, 1), Corollary 2.6 entails R 0 x m 0 |f j (t 2 , x) -f j (t 1 , x)| dx ≤ t 2 t 1 R 0 x m 0 |∂ t f j (t, x)| dxdt ≤ C 5 (R, T )(t 2 -t 1 ) + C 6 (T )R (m 0 -1)/2 , and the proof is complete. Combining (2.9) and Lemma 2.9 allows us to improve the equicontinuity with respect to time of the sequence (f j ) j≥2 . and, for t 1 ∈ [0, T ) and t 2 ∈ (t 1 , T ], ∞ 0 x m 0 |f j (t 2 , x) -f j (t 1 , x)| dx ≤ ω(T, t 2 -t 1 ) . Proof. Let 0 ≤ t 1 < t 2 ≤ T and R > 1. By (2.9) and Lemma 2.9, ∞ 0 x m 0 |f j (t 2 , x) -f j (t 1 , x)| dx ≤ R 0 x m 0 |f j (t 2 , x) -f j (t 1 , x)| dx + R m 0 -1 ∞ R x[f j (t 1 , x) + f j (t 2 , x)] dx ≤ C 5 (R, T )(t 2 -t 1 ) + C 6 (T )R (m 0 -1)/2 + 2̺R m 0 -1 . Introducing ω(T, s) := inf R>1 C 5 (R, T )s + C 6 (T )R (m 0 -1)/2 + 2̺R m 0 -1 , s ≥ 0 , we deduce from the previous inequality which is valid for all R > 1 that ∞ 0 x m 0 |f j (t 2 , x) -f j (t 1 , x)| dx ≤ ω(T, t 2 -t 1 ) and observe that ω(T, •) enjoys the property (2.19) as m 0 < 1. 2.6. Convergence. According to a variant of Arzelà-Ascoli's theorem [START_REF] Vrabie | C 0 -semigroups and applications[END_REF]Theorem A.3.1], it readily follows from (2.16) and Corollary 2.10 that the sequence (f j ) j≥2 is relatively compact in C([0, T ]; X m 0 ,w ) for all T > 0. Using a diagonal process, we construct a subsequence of (f j ) j≥2 (not relabeled) and f ∈ C([0, ∞); X m 0 ,w ) such that, for all T > 0, f j -→ f in C([0, T ]; X m 0 ,w ) . (2.20) A first consequence of (2.20) is that f (t) is a non-negative function for all t ≥ 0 and that f (0) = f in . It further follows from Lemma 2.5, the superlinearity of W for large sizes, and (2.20) that the latter can be improved to f j -→ f in C([0, T ]; X 1,w ) . (2.21) A straightforward consequence of (2.9) and (2.21) is the mass conservation M 1 (f (t)) = M 1 (f in ) , t ≥ sup j≥2 T 0 ∞ R a(x)f j (s, x) dxds = 0 , and thereby allows us to perform the limit j → ∞ in the fragmentation term. We have thus completed the proof of Theorem 1.1, except for the propagation of moments of higher order which is proved in Proposition 2.11 in the next section. 2.7. Higher Moments. We supplement the above analysis with the study of the evolution of algebraic moments of any order. Let f be the mass-conserving weak solution on [0, ∞) to (1.1) constructed in the previous section. Proposition 2.11. Consider m > 1 and assume further that f in ∈ X m . Then f ∈ L ∞ (0, T ; X m ) for all T > 0. Proof. Let T > 0, t ∈ (0, T ), and j ≥ 2. Setting ϑ m (x) := x m for x ∈ (0, ∞), it readily follows from (1.6) that N ϑm (y) = m -1 m + ν + 1 y m ≥ 0 , y > 0 , while there is C 7 (m) > 0 depending only on m such that (x + y)χ ϑm (x, y) ≤ C 7 (m) (x m y + xy m ) , (x, y) ∈ (0, ∞) 2 , by [7, Lemma 2.3 (ii)]. Consequently, by (1.7), χ ϑm (x, y)K(x, y) ≤ 2L 1 min{x, y} m 0 max{x, y} m ≤ 2L 1 (xy) m 0 , (x, y) ∈ (0, 1) 2 , and χ ϑm (x, y)K(x, y) ≤ K 0 (2 + x + y)χ ϑm (x, y) ≤ 3K 0 (x + y)χ ϑm (x, y) ≤ 3K 0 C 7 (m) (x m y + xy m ) , (x, y) ∈ (0, ∞) 2 \ (0, 1) 2 . We then infer from (2.8) with ϑ = ϑ m and the previous inequalities that d dt M m (f j (t)) ≤ L 1 M m 0 (f j (t)) 2 + 3K 0 C 7 (m)M 1 (f j (t))M m (f j (t)) . Recalling (2.9) and Lemma 2.7, we end up with d dt M m (f j (t)) ≤ L 1 C 3 (T ) 2 + 3K 0 C 7 (m)̺M m (f j (t)) , and integrate the previous differential inequality to deduce that M m (f j (t)) ≤ e 3K 0 C 7 (m)̺t M m (f in j ) + L 1 C 3 (T ) 2 3K 0 C 7 (m)̺ , t ∈ [0, T ] . 
(2.22) Since M m (f in j ) ≤ M m (f in ) < ∞, Lemma 2.9 readily follows from (2.22) after letting j → ∞ with the help of (2.21) and Fatou's lemma. Uniqueness Proof of Theorem 1.3. Consider two weak solutions f 1 and f 2 on [0, ∞) to (1.1)-(1.2) on [0, ∞) enjoying the properties listed in Theorem 1.3. We set F := f 1 -f 2 , σ := sign(f 1 -f 2 ), ξ(x) := max{x m 0 , x 1+δ }, and Ξ(t, x, y) := ξ(x + y)σ(t, x + y) -ξ(x)σ(t, x) -ξ(y)σ(t, y) for (t, x, y) ∈ (0, ∞) 3 . Arguing as in the proof of [13, Theorem 2.9] and [START_REF]A uniqueness theorem for the coagulation-fragmentation equation[END_REF] with Ξ 0 (x, y) := ξ(x + y) -ξ(x) + ξ(y) , (x, y) ∈ (0, ∞) 2 . On the one hand, we infer from (1.7), (1.12), and the subadditivity of x → x m 0 and x → x δ that: (c1) for (x, y) ∈ (0, 1) 2 , K(x, y)Ξ 0 (x, y) ≤ L 1 min{x, y} m 0 [(x + y) m 0 -x m 0 + y m 0 ] ≤ 2L 1 min{x, y} m 0 y m 0 ≤ 2L 1 ξ(x)y m 0 ; (c2) for (x, y) ∈ (0, 1) × (1, ∞), K(x, y)Ξ 0 (x, y) ≤ K 1 x m 0 y x 1+δ + (1 + δ)y(x + y) δ -x m 0 + y 1+δ ≤ K 1 x m 0 y (1 + δ)2 δ y 1+δ + y 1+δ Collecting the estimates in (c1)-(c4) and (f1)-(f2), we infer from (3.1) that there is a positive constant κ > 0 depending only on L 1 , K 1 , δ, K 0 , ν, m 0 , and a such that d dt Since M m 0 (f i ) and M 2+δ (f i ) both belong to L ∞ (0, T ) for i ∈ {1, 2} and f 1 (0) = f 2 (0) = f in , we use Gronwall's lemma to complete the proof. ∞ 0 K 0 (x, y)f (x)f (y) dy (1.1b) and F f (x) := -a(x)f (x) + ∞ x a(y)b(x, y)f (y) dy (1.1c) Corollary 2 . 10 . 210 Let T > 0. There is a function ω(T, •) : [0, ∞) → [0, ∞) depending on T such that lim s→0 ω(T, s) = 0 ,(2.19) y)Ξ(t, x, y)(f 1 + f 2 )(t, y)F (t, x) dydx+ )σ(t, x)b ν (x, y) dx -ξ(y)σ(t, y) dy . Since Ξ(t, x, y)F (t, x) = ξ(x + y)σ(t, x + y)F (t, x) -ξ(x)|F (t, x)| -ξ(y)σ(t, y)F (t, x) ≤ ξ(x + y)|F (t, x)| -ξ(x)|F (t, x)| + ξ(y)|F (t, x)| and F (t, y) y 0 ξ(x)σ(t, x)b ν (x, y) dx -ξ(y)σ(t, y) ≤ y 0 ξ(x)b ν (x, y) dx|F (t, y)| -ξ(y)|F (t, y)| ≤ -N ξ (y)|F (t, y)| , , y)Ξ 0 (x, y)(f 1 + f 2 )(t, y)|F (t, x)| dydx-∞ 0 a(y)N ξ (y)|F (t, y)| dy , (3.1) ≤ K 1 1 + 1 (1 + δ)2 δ ξ(x)y 2+δ ;(c3) for (x, y) ∈ (1, ∞) × (0, 1), K(x, y)Ξ 0 (x, y)≤ K 0 (2 + x + y) (1 + δ)y(x + y) δ + y m 0 ≤ 4K 0 x (1 + δ)2 δ x δ y + x δ y m 0 ≤ 4K 0 1 + (1 + δ)2 δ ξ(x)y m 0 ; (c4) for (x, y) ∈ (1, ∞) 2 , K(x, y)Ξ 0 (x, y) ≤ K 0 (2 + x + y) (1 + δ)y(x + y) δ + y 1+δ ≤ 2K 0 (x + y) (1 + δ)yx δ + (2 + δ)y 1+δ ≤ 2(2 + δ)K 0 x 1+δ y + xy 1+δ + x δ y 2 + y 2+δ ≤ 8(2 + δ)K 0 ξ(x)y 2+δ .On the other hand, owing to (1.6) and (1.8), (f1) for y ∈ (0, 1),-a(y)N ξ (y) = 1 -m 0 ν + 1 + m 0 y m 0 a(y) ≤ A 1 ν + 1 + m 0 ξ(y) ; (f2) for y > 1, -N ξ (y) = ν + 2 ν + 1 + m 0 y -ν-1 + ν + 2 ν + 2 + δ y 1+δ -y -ν-1 -y 1+δ = (ν + 2)(1 + δ -m 0 ) (ν + 1 + m 0 )(ν + 2 + δ) y -ν-1 -δ ν + 2 + δ y 1+δ = δ ν + 2 + δ y -ν-1 Y 2+δ+ν -y 2+δ+ν , with Y 2+δ+ν := (ν + 2)(1 + δ -m 0 ) δ(ν + 1 + m 0 ) . Either y ≥ max{1, Y } and -a(y)N ξ (y) ≤ δ ν + 2 + δ a(y)y -ν-1 Y 2+δ+ν -y 2+δ+ν ≤ 0 , or y ∈ (1, max{1, Y }) and -a(y)N ξ (y) ≤ δ ν + 2 + δ a(y)y -ν-1 Y 2+δ+ν ≤ A Y Y 2+δ+ν y m 0 ≤ A Y Y 2+δ+ν ξ(y) . ) y m 0 + y 2+δ (f 1 + f 2 )(t, y)|F (t, x)| dydx + κ ∞ 0 ξ(y)|F (t, y)| dy ≤ κ [1 + M m 0 ((f 1 + f 2 )(t)) + M 2+δ ((f 1 + f 2 )(t))] ∞ 0 ξ(y)|F (t, y)| dy . 0 . Owing to (1.7), (1.8), Corollary 2.6, (2.20), and (2.21), it is by now a standard argument to pass to the limit in (2.8) and deduce that f is a weak solution on [0, ∞) to (1.1) in the sense of Definition 2.1, see [6, 23, 31] for instance. 
It is worth pointing out that the behaviour of the fragmentation term for large sizes is controlled by Corollary 2.6, which guarantees that, for all T > 0, lim_{R→∞} sup_{j≥2} ∫_0^T ∫_R^∞ a(x) f_j(s,x) dx ds = 0.
01773708
en
[ "info", "info.info-mo" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01773708/file/FLINS2018%20VF.pdf
Faiza Ajmi Sarah Ben Othman Zgaya Hayfa Slim Biau Hammadi Hayfa Zgaya Biau Slim Hammadi Scheduling Approch to Control the Execution of the Patient Pathway Workflow in the Emergency Department The majority of developed countries are interested in enhancing their healthcare information system in order to expect the overcrowding situations and improve the patient quality of care. In this paper, we focus on the patient pathway workflow modeling using the business process management notation (BPMN) graphical language in the adult emergency department (AED). The goal of our work is to identify the dysfunction situations and improve the performance indicators such as the waiting time. The idea is to optimize this criterion using a real-time scheduling algorithm applied simultaneously on each decision point (gateway) of the patient pathway workflow model. The objective of our algorithm is to assign a dynamic priority to the patients according to their health state evolution. In addition, taking into account the uncertainty of patient arrival an adequate a real time scheduling algorithm is proposed. Our approach is tested on a set of real database from the AED of the regional university hospital (RUH) of Lille (in the north of France). We conducted interviews, performed healthcare tasks analysis, and validated results with the AED medical staff. The goal is to have the workflow model as close as possible to the real functioning of the AED. The simulation results show that the average waiting time of the patients at the AED drop by 11% thanks to our approach. Introduction The patient medical treatment may require a number of various interactions between several services (MRI, biological tests, surgical operations, etc.). At each step of the care process, each patient is associated to a care activity and to a medical resource allocation. However, the functioning of this care process always faces to various problem such as: the lack of coordination between the various services, the complexity of the patient pathway management, financial resources limitations, the unpredictability of the arrival time of patients and their pathologies. Consequently, the health organizations systems are more and more aware of the need to use their resources as efficiently as possible, which requires health authorities to increase emphasis on process optimization in order to control and minimize operating costs and improve the quality services levels. Indeed, because the lack of medical information circulation leads to delays resulting in poor quality of care and increased cost, a connected Workflow system should be set up [1]. In fact, in this article, we use a Workflow model to represent patient pathway with different bottleneck points generating long Waiting Time (WT)s. We propose then to develop a Real-Time Scheduling Algorithm (RTSA) applied in parallel way in each decision point of patient pathway Workflow in order to orchestrate them and reduce the WT during the execution of the workflow instances. Our paper is organized as follows: a state of the art will be presented in the second section. The third section describes the patient pathway Workflow model. The proposed RTSA is presented in sections 4, followed by simulations and results in the section 5. A conclusion and perspectives are presented at the end of this paper. State of the art In the literature, the application of Workflow model in the hospital system has been successful, which rendered the health system a very active area of research [2], [3], [4]. 
The majority of this research is about providing an efficient health care system for the patient and ensuring its continuity for the next generation [2], [5], [6], [7]. In fact, many research works focus on assessing the ED performance. In [START_REF] Salmon | A structured literature review of simulation modelling applied to Emergency Departments: Current patterns and emerging trends[END_REF], [START_REF] Et | Patient flow within UK emergency departments: a systematic review of the use of computer simulation modelling methods[END_REF] authors present a state of the art about the computer simulation approaches used for understanding the causes of ED overcrowding. In [START_REF] Harzi | Scheduling Patients in Emergency Department by Considering Material Resources[END_REF] authors presented a mixed integer linear programming (MILP) approach to minimize the total waiting time of patient's in the emergency department. Their model is not applicable on a large population size and does not take into account the uncertainty aspect of patient arrival. Real-time optimization approaches have been developed in order to take rapid decisions especially when an unexpected event occurs in the system such as the random arrival of patients. A state of the art in real-time optimization methods is presented in [START_REF] Jaillet | Risk and Optimization in an Uncertain World[END_REF], the computation time is the most important criterion for these methods. For example, appointment-planning optimization in radiotherapy presents a major difficulty because of uncertainty, which is the main characteristic of patient arrival. In this context, the majority of researchers usually propose off-line provisional schedule approaches but the proposed solutions can explode in terms of deadlines and costs. In [START_REF] Erdogan | [END_REF], authors propose a real time approach in order to calculate the best appointment for each patient arrival with a small size problem. In [13], authors propose a Markov decision process to solve the same problem. They obtain an appropriate solution for several instances generated randomly. However, they do not take into account all information generated from the beginning of the resolution process. This information is important to optimize in real-time patient pathway, especially in overcrowding situations.Table 1. Summary of the related research on patient scheduling in the healthcare systempresents a comparison between the different works cited above. The originality of our paper over existing works is the take into account new patient arrival and the health status evolution of the ongoing treating patients. Our objective is to adapt the best solution according to what really happen in the AED. Patient Pathway workflow model The workflow model, implemented using BPMN language (Business Process Management Notation) and presented in Figure 1, represents the patient pathway in the AED. This model has been realize thanks to observations made during several visits over a period of 6 months. The analysis done by the researchers and AED staff medical conducted us to construct a Workflow models. An instance of this model triggers with a single start-event representing the patient arrival and ends with 6 possible end-events representing the different exit-ways of the patient. This workflow model includes several decision points. At each point, a patient real-time scheduling is needed. 
Decision variables Yk The time that pathway workflow instances assignment is made for patient k ; Zko The starting time for the treatment task Pk,o,h. A solution to the AED workflow model is a feasible schedule of patients' pathway workflow instances assignment and health care task-resource allocations. The goal is to assign each patient to an appropriate pathway workflow instance and schedule the health care treatment tasks. The object here is to minimize the weighted response time between arrival and pathway workflow instances assignment and the objective of the task-resource allocation is to minimize the total care time of all patients. The weights ωk are selected according to priority class of the patients, it is penalized in the objective and this accounts for the priority class assigned during triage. These weights are dynamic and are updated according to the evolution of patient pathology gravity. Waiting Time before the first consultation () k k k kK WT w Y b     Length Of Stay () kk kK LOS c Y    Completion time k c { }, { ,.,.} k kn kn k c Max Y p n o    After all the treatment tasks   ,.,. k o are completed for a patient k, the pathway workflow instances assignment to this patient is released. The goal of our RTSA proposed algorithm is to reduce the average waiting time of the patients. So, when two patients have the same priority level and respectively waiting in the WS and WP rooms, the switch function allows to take care firstly of the patient in the WP. This solution reduces the waiting time in the WP, called primary waiting time. The RTSA algorithm uses the Priority function, which calculates the priority order of each patient in the AED, basing on the application of the three dynamic rules applied on each patient (Table 32). R1 R2 R3 ( ) / k k s Y b r  [( ) / ]* k k s k Y b r w    ( ) , k Max w t k  Simulation and results In order to validate our algorithm we did interviews with the AED medical staff and observations during visits in the RUH of Lille. We defined a sample of 20 patients with different gravity degrees (Table 3). For simulation we consider that the treatment is achieved by a single doctor in a single box available at that time and two types of waiting rooms: WS (secondary waiting room) and WP (primary waiting room). We are here faced to a resource constraints scheduling problem. These patients pathway represented by the Workflow with different decision points are controlled by the RSTA during its execution. This table shows that the WT in the real treatment process functioning (8th column) of these 20 patients (according to visits) is often high compared to the WT with RTSA (10th column). The real functioning of the AED does not take into account the patient consultation time; it considers only the severity of the patients in the WP, that is to say, the patient with the greatest severity is scheduled first. However, if there are two patients who have the same gravity the priority is given to the one who is in the WP and if they have the same WT then sorting is done randomly. This reflects the inconvenient of this functioning and explains an excessive total WT. While applying our RTSA approach in each decision point of the Workflow model the average WT of patient at AED drop of 11% (Figure 2). The originality and effectiveness of our RTSA is shown at the application of the rule R2, thanks to this rule the 3 patients (P7, P10, P2) where their health care state evolution dynamically changed respectively from 3-5-3 to 5-6-7. 
The RTSA takes this evolution of gravity into account and updates the order of the patients' treatment sequence accordingly (13th column). Discussion Thanks to our workflow model and to our algorithm for dynamically (re-)orchestrating workflow instances (RTSA), the patient waiting time drops by 11% during the execution of the workflow. The results demonstrate the usefulness of the proposed methodology for optimizing the use of human and material resources so as to treat patients as quickly as possible, while taking into account the evolution of the gravity of the patients' health state. The main cause of overcrowding is that demand exceeds supply, which stems from the very way this department operates. The situation becomes worse when several patients arrive at the AED at the same time. In order to anticipate and avoid overcrowding situations in the AED, we therefore need, in future work, to develop a decision support system with a computerized resolution architecture based on a multi-agent system, allowing communication, coordination and negotiation between the different actors of the AED so as to ensure a collaborative optimization that generates appropriate actions. Conclusion and prospects In this paper, we have modeled the AED patient pathway using a process-oriented approach (Workflow). We have chosen the Waiting Time as the most crucial Performance Indicator. In order to control the patient pathway workflow during its execution, we have developed a real-time scheduling algorithm (RTSA) applied at each decision point of the workflow model. The originality of this work is to take into account the evolution of the gravity of the patient's health state at each decision point, thanks to the dynamic rules driving the RTSA. To evaluate our approach, a simulation of the WT before and after scheduling was carried out and shows a decrease of 11% in the WT within the AED.
Fig. 1. Global patient pathway Workflow model in the AED using the BPMN modeling language.
Fig. 2. Algorithm 1: RTSA.
Fig. 3. Comparison of WT with and without RTSA.
Table 3. Priority rules.
Table 4. Comparison of WT with and without RTSA.
Table 5. Comparison of WT with and without RTSA.
Acknowledgments This work was supported and funded by the Federative Research Structures Technologies for Health and Drugs (SFR-TSM): http://sfr-tsm.ec-lille.fr/
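To make the scheduling policy described in this record more concrete, the fragment below sketches one possible encoding of the dynamic priority and of the WP/WS switch used for tie-breaking. It is an illustration only: the data structure, the field names and the priority formula (elapsed waiting time weighted by the current gravity w_k, with ties resolved in favour of the primary waiting room) are assumptions inspired by the description of rules R1-R3 and of the switch function; they are not code taken from the paper.

typedef enum { WS = 0, WP = 1 } waiting_room_t;   /* secondary / primary waiting room */

typedef struct {
    double arrival;       /* b_k: arrival time at the AED                   */
    double gravity;       /* w_k(t): current gravity weight, may be updated */
    waiting_room_t room;  /* room where the patient is currently waiting    */
} patient_t;

/* Dynamic priority: the longer the wait and the higher the current gravity,
   the sooner the patient should be treated (in the spirit of rule R2).     */
static double priority(const patient_t *p, double now)
{
    return p->gravity * (now - p->arrival);
}

/* Comparison used to re-sort the queue at each decision point of the workflow.
   Returns a negative value when a should be treated before b.
   Ties are broken in favour of the primary waiting room (switch function).    */
int compare_patients(const patient_t *a, const patient_t *b, double now)
{
    double pa = priority(a, now);
    double pb = priority(b, now);
    if (pa != pb)
        return (pa > pb) ? -1 : 1;                 /* higher priority first */
    if (a->room != b->room)
        return (a->room == WP) ? -1 : 1;           /* favour the WP patient */
    return 0;                                      /* otherwise keep order  */
}

At each decision point the waiting queue would be re-sorted with this comparison after updating the gravity weights, which is how the re-ordering of patients such as P7, P10 and P2 reported above would be reproduced.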
01773738
en
[ "info", "info.info-mo" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01773738/file/article_Rairo2018vf.pdf
Sarah Ben Othman email: [email protected] Faten Ajmi Hayfa Zgaya email: [email protected] Slim Hammadi A Cubic Chromosome Representation for Patient Scheduling in the Emergency Department Keywords: emergency department, dynamic scheduling, scheduled and unscheduled patients, genetic algorithm, three-dimensional cubic algorithm In healthcare institution management, hospital flow control and the prediction of overcrowding are major issues. The objective of the present study is to develop a dynamic scheduling protocol that minimizes interference between scheduled and unscheduled patients arriving at the emergency department (ED) while taking account of disturbances that occur in the ED on a daily basis. The ultimate goal is to improve the quality of care and reduce waiting times via a two-phase scheduling approach. In the first phase, we used a genetic algorithm (based on a three-dimensional cubic chromosome) to manage scheduled patients. In the second phase, we took account of the dynamic, uncertain nature of the ED environment (the arrival of unscheduled patients) by continuously updating the schedule. Introduction Controlling hospital flows and anticipating overcrowding phenomena are major challenges in the management of healthcare production systems. Due to fluctuations in patient flow, healthcare stakeholders have to manage congestion and peaks of activity. Long patient waiting times now constitute a key problem in healthcare institutions in general and emergency departments (EDs) in particular. In France, this is often because the arrival of unscheduled patients at the ED interferes with the treatment of scheduled patients (i.e. patients to whom a scheduled consultation time has been already given and who are being treated or are in the waiting room). These unscheduled, real-time perturbations in the ED mean that rescheduling is then required. However, EDs lack procedures and tools for decision-making and appropriate rescheduling. The present study was performed in the ED at Lille University Medical Centre (Lille, France), which is particularly concerned with the issue of scheduling. As elsewhere in France, many patients wait in the ED for hoursas many as 10 hours, in some cases -before seeing a physician. These delays can even endanger the patient's life. The problem of long waiting times highlights the need to review the ED management process and implement measures to improve the quality of patient care. In the present study, we focused on optimizing the care process. We had noticed that the unscheduled arrival of patientsparticularly those requiring emergency treatment -perturbs the treatment process in the ED. If the ED is overcrowded, the arrival of unscheduled patients may interrupt the treatment of scheduled patients and/or require rescheduling around the more urgent cases. We have developed a novel, dynamic approach to patient scheduling based on two complementary processes. The first step concerns the management of scheduled patients in the ED, and is based on a genetic algorithm (GA) with a three-dimensional, cubic chromosome. The second phase involves updating the schedule after the arrival of an unscheduled patient, while taking account of staff availability and skills. The priorities here are to save patients' lives, minimize the overall waiting time for both scheduled and unscheduled patients, and optimize resource use. This approach has proved its effectiveness in improving healthcare processes. 
It optimizes patient treatment while taking account of the various perturbations that can occur in the ED. Performance indicators (such as total workload of medical staff, overall patient waiting time and response time for healthcare tasks) are then generated and analyzed as a guide to the effectiveness of patient management. State of the art 2.1 Optimization of resource allocation and patient scheduling in healthcare organizations Many studies have focusing on helping health system managers to make decisions and then evaluate their choices' impacts on the system's efficiency and effectiveness [START_REF] Flessa | Where efficiency saves lives: a linear programme for the optimal allocation of health care resources in developing countries[END_REF] [START_REF] Zon | Patient flows and optimal health-care resource allocation at the macro-level: A dynamic linear programming approach[END_REF]. Managers have to make the best possible decisions when faced with the constraints imposed by the environment within which they operate. Furthermore, managers must optimize cost and performance. To this end, optimization systems [START_REF] Baubeau | Les passages aux urgences de 1990 à 1998: une demande croissante de soins non programmés[END_REF] [4] [START_REF] Jacobson | An integer programming model for vaccine procurement and delivery for childhood immunization: A pilot study[END_REF] have been used to evaluate alternatives [6] [7]. The effectiveness of optimization systems is often measured in terms of the cost and the quality of services (a reduction in waiting times, the avoidance of a lack of resources, etc.). Scheduling problems in health facilities are usually linked to the services required by various categories of users. Each service involves several types of resource (e.g. physicians, beds, instruments, etc.), each of which has its own costs. Hence, a range of different data must be gathered, and resources must be assigned to care tasks [START_REF] Burke | Opening the Black Box: Measuring Hospital Information Technology Capability[END_REF]. In this context, the notion of scheduling in healthcare organizations is becoming increasingly complex: (i) staff should have the diverse skills required to meet the patients' needs, (ii) it is not possible to predict a patient's pathway into a healthcare organizations because factors such as the pathology and the institution's management approach are involved, and (iii) the hospital environment is highly stochastic, which complicates resource planning. Allocation of resources A general problem in healthcare is the allocation of scarce medical resources (such as operating theatres or medical staff) so that waiting times are as short as possible. A major difficulty lies in the fact that this distribution must be implemented several months in advance -even when the exact number of patients for each specialty remains uncertain. Another problem arises for cyclical schedules, where the allocation is defined over a short period (a week, for example) and then repeated over the time horizon. In most cases, however, demand varies from week to week: even when the exact demand for each week is known in advance, the weekly schedule cannot be adapted accordingly. Resource optimization Mathematical optimization is increasingly relevant in healthcare management. As Belien (2006) pointed out: "In the near future of public health, resources will become insufficient. 
Therefore, we need to find effective ways to plan, prioritize and make decisions" [START_REF] Jeroen | Exact and Heuristic Methodologies for Scheduling in Hospitals: Problems, Formulations and Algorithms[END_REF]. The hospital administration's main task is therefore to efficiently distribute the available medical services and resources. A wide variety of assignment and scheduling problems can arise [START_REF] Abdur | Operations Research in Healthcare: a survey[END_REF] [11]. Resource allocation is directly linked to a planning problem which consists in establishing the sequence for patient admission. As a general rule, patients requiring specific therapy are first placed on a waiting list and then admitted to hospital. Performance indicators related to the length of these lists are used to determine effectiveness. Long queues are to be avoided, as they represent an enormous cost to the healthcare system [12] [13]. The cost of queuing is a design parameter that must be established by the hospital board for each specialty. In a typical case, the cost is represented by a convex function in which marginal costs increase as the tail lengthens. Shorter lists are obviously preferred, although it is usually impossible (and sometimes perhaps not even desirable) to avoid a certain degree of queueing. Indeed, the absence of a queue for certain specialties might reveal the inefficient allocation of certain scarce resources. Hence, the basic scheduling problem in healthcare is the allocation of resources to medical specialties so as to minimize queuing costs. Clearly, the attribution process must be determined in advance, and may involve negotiations. Thus, resources are allocated at the beginning of a time horizon which can be quite long, ranging from a few months to several years. The number of patients for each specialty is therefore estimated in advance, and the actual number may differ considerably from the initial estimate. Furthermore, schedules are often created with reference to the planning horizon (e.g. one to four weeks), and then repeated cyclically. Actual demand may vary from one period to another, even when it is known in advance. A schedule must ensure that queues are as short as possible when the demand for care is maximal (relative to the selected schedule). Staff assignment Staff assignment is defined as an optimized construction process for the execution of care tasks. It is generally necessary to assign appropriately qualified staff to specific tasks, in order to meet the service's organization demands while complying with work regulations and seeking to satisfy individual preferences. This method has been adapted and applied to different fields, such as transport systems, healthcare systems, manufacturing, emergency services, and public services. Jaumard et al. (1998) presented a generalized linear programming model (based on the branch and bound algorithm) for the assignment of nurses with different skills [START_REF] Jaumard | A generalised linear programming model for nurse scheduling[END_REF]. The main problem is to find a set of individual schedules that satisfy demand-side constraints while minimizing wage costs and maximizing nursing preferences and quality of care. 
Millar and Kiragu (1998) used a network model for the cyclical and noncyclical planning of nurse schedules, in which the network's nodes represented a feasible model of work-stretch and off-stretch patterns [START_REF] Millar | Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming[END_REF]. The resulting problem was essentially a model of the shortest path with lateral constraints. According to [START_REF] Blöchliger | Modeling staff scheduling problems. A tutorial[END_REF], construction of a practical model must provide a detailed analysis and a description of the basic elements [START_REF] Blöchliger | Modeling staff scheduling problems. A tutorial[END_REF]. [START_REF] Ernst | Staff scheduling and rostering: A review of applications, methods and models[END_REF] provided a detailed review of applications, models and algorithms for staff assignment, including the assignment of medical residents in hospitals [START_REF] Ernst | Staff scheduling and rostering: A review of applications, methods and models[END_REF]. [START_REF] Musa | Scheduling nurses using goal-programming techniques[END_REF] focused on a single-phase algorithm that took account of scheduling policies for hospital nurses and their preferences for the weekend [START_REF] Musa | Scheduling nurses using goal-programming techniques[END_REF]. [START_REF] Arthur | Multiple objective nurse scheduling[END_REF] were the first to use this method with the following four objectives: taking account of staff preferences, and decreasing the number of staff, the minimum number of employees, and staff dissatisfaction [START_REF] Arthur | Multiple objective nurse scheduling[END_REF]. In the first phase of their approach, a goal-based programming model was used to assign days-on and days-off to nurses over the two-week planning horizon. The second phase dealt with specific changes to nurse assignment via a heuristic procedure. Lastly, [START_REF] Bard | Cyclic preference scheduling of nurses using a Lagrangian-based heuristic[END_REF] developed a dual heuristic to solve nurses' cyclical preference schedules [START_REF] Bard | Cyclic preference scheduling of nurses using a Lagrangian-based heuristic[END_REF]. Formulation of the problem Emergency services are permanently confronted by interference between the care of scheduled patients and the arrival of unscheduled patients (particularly those requiring urgent treatment). At present, there is no satisfactory solution to this problem. The term "emergency" covers two distinct phenomena: recurring flows and sanitary crises. Firstly, recurring flows may be seasonal but the average short-or medium-term trends are known (i.e. per month or per year). However, even when the flows are known, the establishment of an efficient, effective, short-term management structure is a major challenge for healthcare production systems. Secondly, flows due to sanitary crises (flu epidemics, heat waves, cold waves, etc.) cannot be foreseen in terms of their magnitude and nature. In the present study, we considered that a given patient's treatment can be "splittable" or "non-splittable". In fact, a patient's treatment can be interrupted in order to deal with a patient requiring treatment more urgently. A patient may be treated at different times in different places. We next introduce the mathematical model used to formulate the problem, and then assess the set of solutions obtained with our approach. Parameters NP: a set of N patients to be treated, NP={P1, P2,…,PN}. 
MS: a set of M medical staff members, MS={m1, m2,..,mM}. Ns: the number of scheduled patients in the ED. Nns: the expected number of unscheduled patients. k: the medical staff member index mk. s j P : the subset of patients corresponding to "splittable" treatments. C : Boolean, set to 1 if medical staff members ml and mk are managing patients in common and are placed in healthcare rooms located at different sites during two periods with a gap between them, and set to 0 if not. For this, travel T is necessary. Institutional parameters T w : penalty weighting for patient travel between different sites within the healthcare organization. c w : penalty weighting for using the Emergency Department (ED) corridor c. r p BC : penalty weighting for exceeding the capacity of room r at the period p. k p MS : penalty weighting for exceeding medical staff member mk's workload during period p. PS G : gap of the treatment period spread penalty. The objective function Minimize:   Cw +   T Cw +   c Cw + () r p C BC + ) ( k p C MS +   PS CG (1) where   Cw : the cost generated by the waiting times of both scheduled and unscheduled patients in the ED. W is calculated as follows: , , 1 1, ( ) ns N Ns s j ns k j k k j W Min W W       , where Ws,j is the scheduled patients' waiting time and Wns,k is the unscheduled patients' waiting time.   T Cw : the cost generated by patient travel between the different sites within the healthcare organization. The objective function is a sum of penalty terms. Each of the terms refers to a specific, flexible constraint (see section 3.6). Strong constraints The following strong constraints influence the solution's feasibility. p r SSPT : the sum of splittable patient treatments (or portions of treatments) allocated to treatment room r at period p should not exceed the treatment room's capacity: , € s jj jpr PP A  ≤ (2) Linking the variables jpr X and jpr A related to splittable treatments: € s j j PP  , rR  , p  ,   * s jpr j jpr jpr jpr A Card P X AX        (3) The two parts of the above equations are required to check whether jpr X = 1, 0 jpr A  . p r SPS : the sum of patients with splittable treatments should be equal to:   s j Card P , ∀𝑃 𝑗 ∈ 𝑃 𝑗 𝑆 , () s jpr j r R p H A Card P    (4) p r NSPT : a non-splittable patient treatment should be assigned to a single treatment room: ns jj PP  , 1 jpr p H r R X    (5) pq r RP : a room cannot be used by two patients in two overlapping periods p and q: 1 2 1 2 1, , , j pr j qr j j X X P P NP r R       (6) pq r MSP : two medical staff members managing patients in common cannot be allocated during the same period or during two overlapping periods p and q: 1 2 1 2 1, , j p j q j j X X P P NP     (7) Flexible constraints The solution's quality is determined by the following flexible constraints. 
SP lk CPP : whenever two medical staff members managing patients in common are placed on different sites in two consecutive periods, a patient travel penalty is applied: , lk SP SP T m l mS k M Cw C    (8) p MPC : whenever a medical staff member is allocated to treat one or more patients in the corridor at period p, a corridor penalty is applied: , j c c jpc p NP p H U w U    (9) r p Cap : whenever at least two patients are treated at the same period p in the same room r specially in the overcrowding situation, a capacity penalty is applied: , r R p     R r S ,, rr p jpr p NP r R p H BC BC U      (10) In the following section, we solve the above-described problem while meeting the different constraints. To optimize the solution, we decided to adopt an aggregative approach without seeking to apply appropriate weightings. In real-life healthcare situations, it is very difficult to define suitable weights for these criteria. The present study assessed the results of simulations that generated some of these criteria separately or (in some cases) together. 4 The rolling-horizon approach to scheduling The scheduling environment Figure 1 shows the scheduling environment with three kinds of patients: urgent patients (UPs), scheduled patients (SPs) and non-scheduled patients (NSPs). Assumptions  A medical staff member is present in the scheduling horizon in the ED. The number of scheduled patients in a scheduling horizon is Ns, whereas the expected number of unscheduled patients is Nns. All the unscheduled patients arrive randomly at the ED; on arrival, they must be assigned with a theoretical scheduled consultation time.  In France, EDs never close. Each arriving patient j should be registered at the reception desk at time tarj. None of the patients who arrive at the ED are rejected, and all patients should be treated during the current scheduling horizon or the next scheduling horizon.  Each patient corresponds to a set of healthcare operations to be executed in a parallel or in a sequential manner by one or more medical staff members (staff physicians, nurses, interns, etc.).  Medical staff members are organized into teams. Each team contains at least one physician. Some teams contain additional staff members (nurses, paediatricians, etc.), depending on the patient's pathology.  The scheduling horizon H starts at time DH and ends at the time FH. In the present study, we consider that the duration of the scheduling horizon is 4 hours.  The scheduling horizon is divided into several periods whose durations are not necessarily equivalent. If two periods have the same duration, the number and the duration of slots in each period may differ. In general, a period contains multiple slots. A slot is allocated to a scheduled patient. Each period contains at least one slot. A slot's scheduled consultation time is given by the start time of the period to which it belongs. Hence, if two or more slots are included in a period, the scheduled patients assigned to the same slot have the same scheduled consultation time.  When the medical staff member becomes available, the waiting patient with the earliest scheduled consultation time is called. If the waiting room is full and it is not possible to call all the patients during the same scheduling horizon, the remaining patients and the new arrivals will receive a scheduled consultation time in the next scheduling horizon.  In the ED, the most urgent cases are given the highest priority. 
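To make the horizon/period/slot structure just described concrete, the following is one possible data model, given purely as an illustration (the class and field names are ours and not those of the ED information system; the 4-hour horizon and the unequal period and slot lengths are the assumptions stated above).

import java.util.ArrayList;
import java.util.List;

// Illustrative model of a scheduling horizon divided into periods of unequal
// length, each period containing one or more slots. A slot's theoretical
// scheduled consultation time is the start time of the period it belongs to.
class Slot {
    final int lengthMinutes;        // slots of the same period may have different lengths
    boolean occupied;               // true once a patient is assigned to this slot
    Slot(int lengthMinutes) { this.lengthMinutes = lengthMinutes; }
}

class Period {
    final int startMinute;          // all slots of the period share this consultation time
    final List<Slot> slots = new ArrayList<>();
    Period(int startMinute) { this.startMinute = startMinute; }
    int scheduledConsultationTime() { return startMinute; }
}

class Horizon {
    final int startMinute, endMinute;   // e.g. a 4-hour consultation window
    final List<Period> periods = new ArrayList<>();
    Horizon(int startMinute, int endMinute) {
        this.startMinute = startMinute;
        this.endMinute = endMinute;
    }
}

This static structure remains, of course, subordinate to the absolute priority given to urgent cases.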
Hence, when patients requiring urgent treatment arrive, the current scheduling can be interrupted and rescheduling is required because these patients should not have to wait for a consultation. Performance measures Let the waiting time for a scheduled patient j (Ws,j) be the sum of the patients' waiting times between the registration and the theoretically scheduled consultation time War, and the waiting time before the first consultation Wfc, where: ) fc k fc k s k W t t   (16) The two equations ( 11) and ( 14) are mathematically equivalent but semantically different. In The scheduling procedure The sequential treatment of patients is dynamically scheduled, which requires the real-time generation of activity plans for each medical staff member. A multidisciplinary medical team is formed, and a treatment role is assigned to each team member. The schedule is updated whenever the patient input stream changes. Figure 2: the scheduling approach Our approach is based on an offline phase and an online phase. The offline phase consists in scheduling the scheduled patients who arrive at the ED. A GA is applied in this phase, and the scheduled patients correspond to the constraints of the GA. The online phase takes account of a dynamic feature; the arrival of unscheduled patients at the ED. These patients are treated with regard to their pathologies and their emergency status. Hence, the online phase uses a shifting and insertion method based on the notion of periods and horizons. The offline phase: scheduling with a GA The treatment plan is generated by applying a dynamic, responsive GA. The algorithm is designed to (i) optimize the assignment of patients to medical staff with the skills needed to treat them, and (ii) minimize patient waiting times and overall costs while maintaining the quality of care. The scheduling algorithm selects the appropriate medical staff member for the treatment of a given patient, according to the staff's availability and skills. An emergency alert resulting from the need for a medical staff member triggers an updating process by the scheduling algorithm. This situation may lead to the interruption of a patient's treatment and the initiation of treatment of a more urgent case or a shift in one or more treatment processes to make space for unscheduled patients requiring urgent treatment. The goal is to minimize the overall waiting time that the patient spends in the ED and the costs described in the third section of this article. The purpose of the GA is to provide an approximate solution to the optimization problem, insofar as an exact method cannot solve the problem within a reasonable time. The potential solution(s) provided by the GA necessarily requires the involvement of the different medical staff members in the ED. In the following section, we will describe our patient scheduling approach in detail. Definition of the chromosome We chose to use a three-dimensional cube chromosome with the following three axes: "medical staff", "patients", and "time". The time axis is divided into intervals of different sizes. The scheduling horizon is divided into several periods that do not necessarily have the same duration. If two periods have the same duration, the number and the duration of slots in each period may differ. In general, a period contains more than one slot. In view of the division of the time axis into many slots, each medical staff member is assigned to a patient in a specific slot from a specific period. 
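As an illustration of this encoding, one possible realization of the cubic chromosome is sketched below as a Boolean occupancy cube over the three axes; the class is ours and kept deliberately minimal, with the time axis flattened into a global slot index.

// Illustrative sketch of the three-dimensional cubic chromosome:
// a Boolean cube indexed by (medical staff, patient, time slot).
// cube[m][p][t] == true means staff member m treats patient p in slot t.
class CubicChromosome {
    final boolean[][][] cube;           // [staff][patient][slot]
    final int staffCount, patientCount, slotCount;

    CubicChromosome(int staffCount, int patientCount, int slotCount) {
        this.staffCount = staffCount;
        this.patientCount = patientCount;
        this.slotCount = slotCount;
        this.cube = new boolean[staffCount][patientCount][slotCount];
    }

    // A staff member can treat at most one patient in a given slot.
    boolean staffFreeAt(int m, int t) {
        for (int p = 0; p < patientCount; p++)
            if (cube[m][p][t]) return false;
        return true;
    }

    // Assign patient p to staff member m in slot t if that staff member is free;
    // a multi-skill treatment simply sets the same (p, t) entry for several m.
    boolean assign(int m, int p, int t) {
        if (!staffFreeAt(m, t)) return false;
        cube[m][p][t] = true;
        return true;
    }
}

With this representation, the assignments shown in the figures cited next correspond to setting single entries of the cube.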
For example, Figure 4 shows a medical staff member M1 treating patient P1 in the second slot of period 3. The Figure 5 shows the sequential assignment of medical staff members My1 and My2, to patients Pz1 and Pz2 in periods 3 and 4, respectively. The Figure 6 shows a multiskill cubic assignment, which is made possible by our choice of the type of chromosome. Here, patient PZ1 needs two different skill sets for his/her treatment, medical staff members My1 and My3 are assigned to the same period 3 and the same slot. Patient PZ2 needs a pair of different skills, and medical staff members My2 and My3 are assigned to the same period 4 and the same slot. The initial chromosome population The first step is the formation of an initial population as the starting point for execution of the algorithm. We used two methods to build the initial population:  The first method consists in recovering the initial population solutions (IniPopL) generated by a list algorithm with dynamic priority rules.  The second method consists in generating initial population solutions at random (IniPopR) but which are viable because they comply with the strong constraints. The details of the GA used in the present study are as follows: The controlled crossover schema This used in order to move the start time of the patient treatment process forward or backward for a given medical staff member. It does not change the assignment of patients, i.e. which medical staff member treats which patient). Only the "time" and "patient" axes are considered. Example The time axis is divided into 5 min intervals. Each slot in the time axis is a Boolean equal to 1 if the patient is assigned to the slot, or 0 if not. The crossover yields two viable offspring chromosomes, so no correction is needed. The viability is checked on the "medical staff" axis. In fact, we need two different medical staff members in the slots [START_REF] Abdur | Operations Research in Healthcare: a survey[END_REF][START_REF] Lee | Mediator approach to direct workflow simulation[END_REF][START_REF] Patrick | Dynamic multipriority patient scheduling for a diagnostic resource[END_REF][START_REF] Hans | Robust surgery loading[END_REF][START_REF] Jaumard | A generalised linear programming model for nurse scheduling[END_REF][START_REF] Millar | Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming[END_REF] This phenomenon shows the value of using a three-dimensional cubic chromosome to check compliance with strong constraints. Chromosome A: The controlled mutation schema The mutation is a partially random operation that enables us to modify the solutions and move towards an optimum or perhaps move out of a local optimum. In our case, the mutation modifies the Booleans present in our chromosomes. Not all chromosomes are mutated; the probability of mutation is <1. If the chromosome is selected, it will then go through the slots (according to the "medical staff", "patient to treat", "time" axes) and change their values in accordance with simple rules. The slots are changed at random. Each slot has a predetermined probability of being mutated. If the selected slot is to be changed from 1 to 0, there are no additional conditions; only one patient is treated at a given time by a medical staff member. If the selected slot is to be changed from 0 to 1, we have to check that the medical staff member has the requisite skills and is available to treat the patient in the slot. If the condition is checked, the slot is mutated. 
This first phase of the mutation can thus remove patients from a medical staff member or assign them to him/her if he has the needed skills and is available. However, the durations of the patient treatment processes may be inaccurate, and the treatment is divided into several slots. This mutation is controlled by the "medical staff" axis. In order to comply with the viability of the final set of generated solutions (resulting from the application of the GA with controlled crossover and mutation operators), we set a number of constraints to be complied with by these operators. These constraints guide us in the search for the optimal solution and accelerate the convergence. Selection After crossover, our population increases as the offspring chromosomes join the parent chromosomes. It is then necessary to select the chromosomes that will be part of the new population before rescheduling. We first evaluated our set of solutions by calculating the value of the objective function (see section 3.3). We calculated its strength of each solution and normalized it as a percentage of the total strength. Selecting only the strongest solutions would not guarantee a great diversity in our solutions, and selecting solutions at random would perhaps remove good solutions. We decided to select a percentage of the best solutions, and then select those that remain on the roulette wheel. The probability of selection corresponds to the normalized strength. This ensures the selection of varied, strong solutions. The online phase: real-time rescheduling This phase deals with the interference between scheduled and unscheduled patients arriving at the ED, which prompts real-time rescheduling. The goal is to reduce the waiting time of both scheduled and unscheduled patients. The process looks at whether an unscheduled patient can be inserted into the schedule generated by the offline phase without affecting his/her neighbouring patients. To this end, the process first seeks medical staff members who have Mutation: changes are in shown grey the appropriate skills for treating the patient to be inserted. To take account of interference between scheduled and unscheduled patients, we need to consider the inter-period waiting time in each horizon. This work assumes that the scheduling horizon which represents consultation time window is divided into several periods as already mentioned above. A consultation for an unscheduled patient is scheduled in the first empty slot in the period, as shown in Figure 7. In principle, the start time of the first empty slot gives the patient's theoretical scheduled consultation time. The maximum acceptable number of unscheduled patients in the period is difficult to estimate because slots in the same period can have different lengths. For nonurgent patients, the real-time rescheduling is performed by the algorithms described below. The end of the consultation window max,H t is used as the scheduled consultation time for patients who are not included in any of the periods in the horizon H. The present approach assumes that each period p has its own length Δp, and that the start time of each period is x minutes behind the start time of its first free slot (Figure 7). The maximum workload level per period is Simulations and results Prior to our simulations, we collected data in the ED at Jeanne de Flandre Hospital (part of Lille University Medical Centre). 
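For illustration, one way to realize this selection step — elitism on a fraction of the best solutions, followed by a roulette wheel weighted by normalized strength — is sketched below. For simplicity, the strength is taken here as the inverse of the penalty-based objective value of Eq. (1); this is one possible choice rather than a prescription.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Illustrative selection: keep a fraction of the best chromosomes (elitism),
// then fill the rest of the new population by roulette wheel, where the
// selection probability is the normalized strength of each solution.
class Selection {
    final Random rng = new Random();

    // Lower objective value (cost) means a stronger solution.
    static double strength(double objectiveValue) { return 1.0 / (1.0 + objectiveValue); }

    List<CubicChromosome> select(List<CubicChromosome> pool, double[] cost,
                                 int newSize, double eliteFraction) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < pool.size(); i++) order.add(i);
        order.sort(Comparator.comparingDouble((Integer i) -> cost[i]));   // best first

        List<CubicChromosome> next = new ArrayList<>();
        int elites = (int) (eliteFraction * newSize);
        for (int i = 0; i < elites; i++) next.add(pool.get(order.get(i)));

        double total = 0;
        for (double v : cost) total += strength(v);
        while (next.size() < newSize) {              // roulette wheel on normalized strength
            double r = rng.nextDouble() * total, acc = 0;
            for (int i = 0; i < pool.size(); i++) {
                acc += strength(cost[i]);
                if (acc >= r) { next.add(pool.get(i)); break; }
            }
        }
        return next;
    }
}

The data collected in the ED at Jeanne de Flandre Hospital, introduced above, serve to evaluate this whole pipeline.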
This ED receives about 24,000 visits per year (an average of 458 per week and 66 per day), of which 20% take place in a short-stay hospitalization unit (SSHU) and 80% take place in an outpatient unit. It has 10 beds in the SSHU, 10 consultation boxes in the outpatient unit, a suturing room, a plaster room, an emergency room, and two waiting rooms. In the event of overcrowding, vacant beds in the SSHU can be transformed into consultation boxes. In the present section, we describe the effectiveness and efficiency of our approach to scheduling. We first describe the real data collected in the ED. Next, we generated realistic random instances of the real data and studied dynamic, rolling-horizon scheduling in more detail. With a view to investigating the interactions between the objective functions and determining how the patients' waiting times affect costs, we carried out several different computational experiments. Description of the data We analyzed a sample of data collected over a period of almost three years, from January 2011 to November 2013. The weekly variations in Figure 10 show troughs for holiday periods and peaks for flu epidemics. Outside the summer holidays, the mean data were very similar from one week to another. In contrast, we observed regular variations over the seven days of the week; this can be seen as recurring peaks in correlogram with a period of 7 (Figure 14). The value of managing overcrowding is emphasized by refining the time horizon. For effective decision-making, it is best to adopt a time scale that enables patient rescheduling. Following our observations in the ED and interviews with the medical staff, we noted that waiting times in the ED could be as long as 5 hours. The ED at Jeanne de Flandre Hospital did not have a decision support system or information system capable of managing overcrowding. Medical staff members gave the highest consultation priority to the most urgent patients and then to previously scheduled patients. Unscheduled patients in the ED had to wait in the waiting room and sometimes in corridors without obtaining a scheduled first consultation time, which increased their level of anxiety. Our approach's level of performance was compared with that of the conventional method used in the ED. A database analysis enabled us to simulate the patients' waiting times, which appeared to be excessive in some cases. Computational results and discussion As emphasized in section 4, our two-phase, rolling-horizon patient scheduling is revised whenever unscheduled patients requiring urgent treatment arrive in the ED, in order to optimize the schedule for patients whose waiting times are longer or shorter than expected. The waiting list varies over time as patients arrive and as patients are treated. We analyzed our computational results in two main steps. Firstly, we investigated the effectiveness of the GA-based algorithm and validated our chromosome model. Secondly, we implemented our approach in a real ED, and evaluated its applicability and performance. In order to evaluate our approach's level of performance, extend our computational results and generalize our method, we applied our two scheduling phases to solve 10 randomly generated problem instances with different numbers of patients. The results were compared with those obtained in practice (according to the ED database used by the medical staff) and those generated by the list algorithm (implemented in Java). These instances were generated from real data. 
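For indication only, the sketch below shows one way such an instance can be built: the unscheduled arrivals are drawn as a Poisson process over the horizon (the same assumption made by the rescheduling algorithm given later), while the scheduled patients are placed in the slots of the horizon. The rate and figures are synthetic; this does not reproduce the exact generation procedure used for our tests.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative generation of unscheduled arrival times for one test instance:
// a Poisson process of rate lambda (patients per minute) over the horizon,
// simulated through exponential inter-arrival times.
class InstanceGenerator {
    final Random rng = new Random();

    List<Integer> unscheduledArrivalTimes(double lambdaPerMinute, int horizonMinutes) {
        List<Integer> arrivals = new ArrayList<>();
        double t = 0;
        while (true) {
            t += -Math.log(1.0 - rng.nextDouble()) / lambdaPerMinute;
            if (t >= horizonMinutes) break;
            arrivals.add((int) t);
        }
        return arrivals;
    }
}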
In the ED, emergencies can be treated with the list algorithm in order to find a quick (but not necessary optimal) solution. The list algorithm's dynamic priority rules mean that it is particularly suited to the scheduling problem. It is flexible and is easily implemented in real time. Our problem is solved by dynamic priority rules: the patients' arrival time and level of urgency. The algorithm maintains a list of all the ready-to-be-scheduled tasks after registration at the reception desk. 1 shows the real ED data related to the test problem scheduling, together with the results obtained with the list algorithm, the GA-based approach, and the practical case. The gap between the solutions (related to the mean total waiting time per patient per instance (a day) is shown in Figure 15. Figure 15: the waiting time as a function of the number of patients Table 1 shows that minimization of the waiting time is associated with an increase in the medical staff's workload -especially when using the GA-based approach (see Figure 16). In fact, adjusting the physicians' total idle time minimizes the average total waiting time. Figure 16: the medical staff's workload on day instances As can also be seen from Table 1, the solutions obtained with the list algorithm for the first seven test instances are close to those obtained by the practical solution. For instances 8, 9 and 10, the list algorithm is markedly better than the practical solution. The gap between the solutions was low and never exceeded 5.41%. In fact, the list algorithm uses dynamic priority rules to schedule the patients. These rules depend on the care tasks that have yet to be performed. As the tasks do not have the same care pathway, the waiting time can be reduced for some patients (for the same scenario), while the workload of the medical staff remains the same. Table 2 shows the computation times for the list algorithm and the GA-based approach. In order to avoid the blind aspect of the genetic operators, we designed a controlled genetic crossover and mutation operators for the cubic representation of the chromosome. Furthermore, we integrated the solutions found by the list algorithm in the initial population into the GA, in order to accelerate convergence on the best solution. As can be seen from Table 2, the GA-based approach performs far better than the list algorithm in terms of the computation time and performs better than both the current system and the list algorithm in terms of the waiting time. The computation time are compared in Figure 17. The GA-based approach's computation time is significantly shorter than that of the other methods. In Table 2, the difference in execution time between the GA and the list algorithm is less than one minute. If this time difference is sufficient for the GA to generate high-quality solutions minimizing the patients' waiting time, then the solutions are relevant for clinical practice. As discussed above, the GA-based method can address real-life scheduling problems in EDs, so that the patients' waiting time can be minimized as a function on the urgency of the required treatment. The second phase of our scheduling method reschedules the medical staff's tasks whenever a new patient arrives in the ED; the goal is to minimize the patients' waiting times by optimizing the use of resources (medical staff members) and ensuring that a patient with a more severe condition is prioritized. 
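The dynamic priority rules of the list algorithm — and the priority given to the most severe cases during rescheduling — can be realized with a simple ready list ordered by urgency first and arrival time second, as sketched below for illustration (the urgency scale and field names are ours).

import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative ready list: patients registered at the reception desk are
// ordered by urgency (most urgent first), ties broken by earliest arrival.
class ReadyList {
    static class WaitingPatient {
        final int urgencyLevel;     // higher value = more urgent (illustrative scale)
        final int arrivalMinute;
        WaitingPatient(int urgencyLevel, int arrivalMinute) {
            this.urgencyLevel = urgencyLevel;
            this.arrivalMinute = arrivalMinute;
        }
    }

    final PriorityQueue<WaitingPatient> queue = new PriorityQueue<>(
        Comparator.comparingInt((WaitingPatient p) -> -p.urgencyLevel)
                  .thenComparingInt(p -> p.arrivalMinute));

    void register(WaitingPatient p) { queue.add(p); }
    WaitingPatient nextToTreat()    { return queue.poll(); }
}

In practice, such rules are only as good as the information available on each patient at registration.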
Hence, to better address scheduling in realworld EDs, the GA-based scheduling approach requires a reliable information system and an adequate amount of training for scheduling staff. In addition, we have already studied and developed a multi agent system to model the communication and the interaction between the different medical staff member and the software agents [START_REF] Ben Othman | Agents endowed with uncertainty management behaviors to solve a multiskill healthcare task scheduling[END_REF]. The scheduled and re-scheduled approaches proposed in this paper are integrated in the agent behaviour in order to communicate to the medical staff member, the new care tasks to realize. Conclusion In the present study, we developed an innovative GA-based approach for scheduling both scheduled and unscheduled patients in an ED. The GA-based method assigns a theoretical consultation time to each patient on arrival. The goal of patient scheduling is to minimize the total waiting time and the overall cost. The GA-based approach grant a higher priority to the most urgent patients, while optimizing the medical staff's workload. The GA's performance has been enhanced by the incorporation of a cubic chromosome representation with novel, controlled genetic operators. In order to demonstrate the superiority of our approach, we applied it to a real ED. Simulations revealed that the GA-based approach improves the performance of patient scheduling in the ED and makes efficient use of the available resources. The computational results of our approach exceeded those of the practical approach. In the future, we intend to improve our approach to multiskilled healthcare task scheduling in the ED by combining a GA with multi-agent systems. cost generated by treating a patient in the corridor of the ED. cost generated by exceeding the capacity of room r at the period p. cost generated by exceeding medical staff member mk's workload at the period p. PS CG : the cost generated by the gap of the treatment period spread penalty. Figure 1 : 1 Figure 1: the scheduling environment  According to the stochastic behaviour of the medical staff's consultation time, let W fact, the scheduling method developed in the present study assigns a theoretical scheduled consultation time to each unscheduled patient at his/her time of arrival and then guides him/her to the waiting room at the scheduled consultation time. On the basis of the scheduled consultation time, War is calculated for each registered patient. The objective is to comfort patients and reduce their stress by keeping them informed of their waiting time prior to the first consultation. If the first consultation time is equal to the scheduled consultation time, are equal to 0. In the event of perturbations (overcrowding, a lack of medical staff, worsening in the patient's health status, etc.), the consultation times are rescheduled, the first consultation time increases, and so the patient's waiting time lengthens.Most of patients prefer early scheduled consultation times -especially when they arrive. To satisfy these preferences, the waiting time (based on the arrival time , ar k W ) should be reduced by assigning available medical staff with appropriate skills as quickly as possible. 
Figure 3 :Figure 4 : 34 Figure 3: a representation of a cubic chromosome (a patient/medical staff/time cube) Algorithm 1 : 1 construction of the IniPopL Input: Fixed NL, the size of the chromosome population InitPopL Fixed NR, the size of the chromosome population IniPopR Output: IniPopL of NL chromosomes generated by applying the list algorithm IniPopR of NR chromosomes generated by applying a random process Begin i:=0; IniPopL = Ø; while i<=NL do Find a cubic chromosome i as a feasible (or suboptimal) solution of a single objective optimization model by applying a list algorithm; end while IniPopL = IniPopL  {chromosome i}; i:=i+1; end Algorithm 2: the cubic GA approach Input InitPopL, InitPopR, N is the global size of initial population Output: a set of N good scheduling solutions Begin Construction of IniPopL: find NL feasible cubic chromosomes Construction of IniPopR: find NR=N-NL partial feasible solutions at random Merging of IniPopL and IniPopR (N cubic chromosomes) while (stop criterion are not reached) do  Evaluate individuals  Select 2 parents P1 and P2 at random  Apply a controlled crossover algorithm with a probability pc, in order to obtain offspring1 and offspring2  Apply a controlled mutation algorithm with a probability pm  Select N new individuals and build a new population  Update the stopping criterion end while end the mask is 0110, then the chromosome resulting from the crossover will be as follows: , [55-60] and [60-65] on offspring chromosome 2 because the same medical staff member cannot treat two different patients at the same time (a constraint related to the equation 7). Furthermore, patient 2 and patient 4 in offspring chromosome 2 must be treated by two different medical staff members in the slot [80-95]. Figure 7 : 7 Figure 7: an example in which an unscheduled patient's consultation time is determined on the basis of free slots. C  <HWp,h,max ) then Assign-patient w to the period p; End_If Tw= Debp: the slot's start time For i=1 to Nbsp do If FREE[i] ==0 then return Tw Else Tw = Tw + TAB[i] End_For Tw = Finp //the period p is overloaded Return Tw End_ Search-First-Free_Slot Algorithm: Scheduling_new_arrivals (Patient w, Time t, Horizon H) Input: DebH: the Start time of the horizon H represents the start time of the consultation; We consider that the arrival of unforeseen patient follows poisson distribution t: Current arrival time of patients FinH: The end of the horizon H; NBp,H: Number of periods in the horizon H; TABH: Table [1…. NBp,H] contains the different lengths of each period. Output: Tw the start time of the first consultation specifying the horizon, period and slot. Begin If Urgent_patient then No-wait-consultation If t<= DebH then p = 1 (the first period in the horizon) Else For (i = 1 to NBp,H ) do If t<= DebH + TABH [i] then p = i; Save the start of the period Debp Save the end of the period Finp End_If End_For End_If Tw = Search-First-Free_Slot (p, Horizon H, Patient w, Debp, Finp) End Scheduling_new_arrivals Figure 8 : 8 Figure 8: Database from the ED in Jeanne de Flandre Hospital Figure 9 : 9 Figure 9: The number of patients per month at the ED in Jeanne de Flandre Hospital Figure 10 : 10 Figure 10: the number of patients per week at the ED in Jeanne de Flandre Hospital Figure 11 : 11 Figure 11: the number of patients per day at the ED in Jeanne de Flandre Hospital Figure 12 : 12 Figure 12: Correlogram related to the monthly period. ACF: autocorrelation function. Figure 13 : 13 Figure 13: Correlogram relating to the daily period. 
ACF: autocorrelation function. Figure 14 : 14 Figure 14: Correlogram related to the hourly period. ACF: autocorrelation function. Medical staff Workload in practice (%)Medical staff Workload by GA (%) Figure 17 : 17 Figure 17: Execution times using the list algorithm and the GA-based approach Algorithm: Search-First-Free_Slot (Period p, Horizon H, Patient w, Debp, Finp) Inputs: Nbsp: Number of slots in the period p. TAB [1… Nbsp] a table contains the length of each slot. FREE[1… Nbsp] a table contains 1 or 0. FREE(i) = 0 if the slot i is free, otherwise the slot i is full. Output: Tw the start time of the first free slot Begin Calculate HWp,h ; If (HWp,h + Table 1 : 1 a comparison of the GA-based approach, the list algorithm and the practical case Day Number of patients GA based approach List algorithm Practical Number of no-wait Scheduled patients Unscheduled patients W (min) Medical staff's W Medical staff's workload (%) W Medical staff's consultation s (%) workload workload (%) (%) 1 8 48 205.3 80.2 242.2 82 245.72 80 0.55 2 17 50 212.6 71.3 216.4 73.5 218.47 72 0.5 3 20 44 230.5 67 267.8 65 266.39 65 0.48 4 28 31 298.5 88.2 358.8 87.2 359.08 85 0.58 5 6 58 197.5 80.5 215.2 79 215.65 78 0.52 6 12 105 290.2 88.3 303.4 90.2 305.05 89 0.50 7 12 70 222.6 75 230.2 76 231.04 76 0.64 8 14 38 223.5 85 265.4 90 270.18 90 0.68 9 10 29 278.4 93.2 296.2 92 305.26 92 0.58 10 18 24 187.6 83.2 198.5 82.5 209.63 81 0.78 Table Table 2 : 2 Computation time Day instance GA-based approach List algorithm W (min) Computation time (s) W (min) Computation time (s) 1 205.3 9.3 242.2 10.2 2 212.6 10.5 216.4 11.4 3 230.5 77.9 267.8 65.7 4 298.5 120.6 358.8 119.3 5 197.5 14.2 215.2 15.2 6 290.2 120.2 303.4 140.2 7 222.6 80.9 230.2 76.9 8 223.5 92.6 265.4 146.7 9 278.4 70.2 296.2 89.3 10 187.6 9.6 198.5 10.4 Acknowledgments This work was supported and funded by the Federative Research Structures Technologies for Health and Drugs (SFR-TSM): http://sfr-tsm.ec-lille.fr/ The three components: CRIStAL CNRS UMR 9189 (http://www.cristal.univ-lille.fr/), the EA2694 of the Public Health Laboratory of Lille University (http://ea2694.univ-lille2.fr) and the adult emergency department of CHRU de Lille are full partners of the SFR-TSM
00590815
en
[ "sdv.mp" ]
2024/03/05 22:32:18
2011
https://pasteur.hal.science/pasteur-00590815/file/PNAS_merged_file_25Nov10.pdf
Irina Gutsche Fasséli Coulibaly James E Voss Jérôme Salmon Jacques D'alayer Myriam Ermonval Eric Larquet Pierre Charneau Thomas Krey Françoise Mégret Eric Guittet Félix A Rey email: [email protected] Marie Flamand email: [email protected] Secreted dengue virus nonstructural protein NS1 is an atypical barrel-shaped high-density lipoprotein Keywords: Glycoprotein, secretion, electron cryomicroscopy, HDL, triglycerides INTRODUCTION DENV (genus Flavivirus, family Flaviviridae) is responsible for the major arthropodborne viral human disease of the tropics [START_REF] Mackenzie | Emerging flaviviruses: the spread and resurgence of Japanese encephalitis, West Nile and dengue viruses[END_REF]. It is estimated that 50-100 million dengue cases occur annually, ranging from mild fever to life-threatening dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS) [START_REF]Dengue: Guidelines for diagnosis, treatment, prevention and control[END_REF]. The number of severe forms can exceed half a million per year, and lead to tens of thousand deaths [START_REF] Mackenzie | Emerging flaviviruses: the spread and resurgence of Japanese encephalitis, West Nile and dengue viruses[END_REF]. DHF is associated to thrombocytopenia, coagulopathy, acute inflammation, frequent hepatomegaly, and most importantly, plasma leakage to which the risk of fatal hypovolemic shock (DSS) is associated [START_REF] Lei | Immunopathogenesis of dengue virus infection[END_REF][START_REF] Halstead | Dengue[END_REF][START_REF] Mairuhu | Treating viral hemorrhagic fever[END_REF]. It has been proposed that an inadequate immune response is the major cause of severe clinical manifestations [START_REF] Fink | Role of T cells, cytokines and antibody in dengue fever and dengue haemorrhagic fever[END_REF][START_REF] Green | Immunopathological mechanisms in dengue and dengue hemorrhagic fever[END_REF][START_REF] Lin | Autoimmune pathogenesis in dengue virus infection[END_REF]. Soluble mediators produced during the acute phase of the disease likely play a pivotal role in vascular permeability, as suggested by the rapid recovery of most DHF patients [START_REF] Basu | Vascular endothelium: the battlefield of dengue viruses[END_REF][START_REF] Seneviratne | Pathogenesis of liver involvement during dengue viral infections[END_REF][START_REF] Lisman | Haemostatic abnormalities in patients with liver disease[END_REF]. There is accumulating evidence that flavivirus NS1, a ~50 kDa nonstructural glycoprotein, participates to different stages of the virus life cycle. Part of NS1 resides in virally-induced intracellular organelles where it plays an essential role in viral replication [START_REF] Welsch | Composition and three-dimensional architecture of the dengue virus replication and assembly sites[END_REF][START_REF] Mackenzie | Immunolocalization of the dengue virus nonstructural glycoprotein NS1 suggests a role in viral RNA replication[END_REF][START_REF] Lindenbach | trans-Complementation of yellow fever virus NS1 reveals a role in early RNA replication[END_REF][START_REF] Muylaert | Genetic analysis of the yellow fever virus NS1 protein: identification of a temperature-sensitive mutation which blocks RNA accumulation[END_REF][START_REF] Westaway | Ultrastructure of Kunjin virus-infected cells: Colocalization of NS1 and NS3 with double-stranded RNA, and of NS2B with NS3, in virus-induced membrane structures[END_REF]. 
The protein, possibly modified by glycosylphosphatidylinositol (GPI), also associates to lipid rafts at the plasma membrane and mediates a signaling pattern common to GPI-anchored proteins in the presence of specific antibodies [START_REF] Noisakran | Association of dengue virus NS1 protein with lipid rafts[END_REF][START_REF] Noisakran | Characterization of dengue virus NS1 stably expressed in 293T cell lines[END_REF][START_REF] Jacobs | Dengue virus nonstructural protein 1 is expressed in a glycosyl-phosphatidylinositol-linked form that is capable of signal transduction[END_REF]. NS1 is eventually secreted by DENV-infected mammalian cells [START_REF] Flamand | Dengue virus type 1 nonstructural glycoprotein NS1 is secreted from mammalian cells as a soluble hexamer in a glycosylation-dependent fashion[END_REF][START_REF] Pryor | The effects of site-directed mutagenesis on the dimerization and secretion of the NS1 protein specified by dengue virus[END_REF][START_REF] Winkler | Evidence that the mature form of the flavivirus nonstructural protein NS1 is a dimer[END_REF] and released in the blood stream of infected individuals [START_REF] Alcon | Enzyme-linked immunosorbent assay specific to Dengue virus type 1 nonstructural protein NS1 reveals circulation of the antigen in the blood during the acute phase of disease in patients experiencing primary or secondary infections[END_REF][START_REF] Young | An antigen capture enzymelinked immunosorbent assay reveals high levels of dengue virus protein NS1 in the sera of infected patients[END_REF]. The protein is detectable in plasma from the onset of fever up to the first days of convalescence, at concentrations that can exceed several µg/mL [START_REF] Alcon | Enzyme-linked immunosorbent assay specific to Dengue virus type 1 nonstructural protein NS1 reveals circulation of the antigen in the blood during the acute phase of disease in patients experiencing primary or secondary infections[END_REF][START_REF] Young | An antigen capture enzymelinked immunosorbent assay reveals high levels of dengue virus protein NS1 in the sera of infected patients[END_REF][START_REF] Alcon-Lepoder | Secretion of flaviviral non-structural protein NS1: from diagnosis to pathogenesis[END_REF]. The amount of NS1 circulating in human sera appears significantly higher in patients who developed DHF rather than dengue fever [START_REF] Libraty | High circulating levels of the dengue virus nonstructural protein NS1 early in dengue illness correlate with the development of dengue hemorrhagic fever[END_REF], although it is not clear whether this effect is a cause or a consequence of plasma leakage. In vitro, the protein binds cell surface glycosaminoglycans [START_REF] Avirutnan | Secreted NS1 of dengue virus attaches to the surface of cells via interactions with heparan sulfate and chondroitin sulfate E[END_REF] and is targeted to late endosomes upon entry into target cells [START_REF] Alcon-Lepoder | The secreted form of dengue virus nonstructural protein NS1 is endocytosed by hepatocytes and accumulates in late endosomes: implications for viral infectivity[END_REF]. Pre-incubation of hepatocytes with soluble NS1 enhances subsequent infection by a homologous strain of DENV [START_REF] Alcon-Lepoder | The secreted form of dengue virus nonstructural protein NS1 is endocytosed by hepatocytes and accumulates in late endosomes: implications for viral infectivity[END_REF]. 
In addition, both soluble and cell-surfaceassociated NS1 are capable of modulating complement activation pathways through the formation of immune complexes or binding to host proteins such as the regulatory protein factor H, complement factor C4 or clusterin [START_REF] Avirutnan | Antagonism of the complement component C4 by flavivirus nonstructural protein NS1[END_REF][START_REF] Avirutnan | Vascular leakage in severe dengue virus infections: a potential role for the nonstructural viral protein NS1 and complement[END_REF][START_REF] Chung | West Nile virus nonstructural protein NS1 inhibits complement activation by binding the regulatory protein factor H[END_REF][START_REF] Kurosu | Secreted complement regulatory protein clusterin interacts with dengue virus nonstructural protein 1[END_REF]. In this paper, we investigated the structure/function relationship of DENV NS1. We determined a low-resolution three-dimensional reconstruction of the hexamer, which appears as an open barrel with a wide central channel. We identified that specific lipids associate to the NS1 particle, in particular triglycerides, cholesteryl esters and phospholipids that likely fit into the central cavity. These results point to striking similarities between DENV NS1 and high density lipoproteins (HDL) involved in vascular homeostasis. 6 RESULTS Three-dimensional (3D) organization of the DENV NS1 hexamer To get insights into the NS1 structure/function relationship, we sought to characterize the 3D organization of the secreted hexamer. We analyzed an authentic NS1 protein purified from the extracellular medium of Vero cells infected with DENV serotype 1 (DENV-1). Numerous crystallization trials failed to yield diffraction quality crystals. Using cryo-electron microscopy (cryo-EM), we obtained a 3D reconstruction of the DENV-1 NS1 hexamer. The resolution of the reconstruction (at about 3.0 nm) allows to visualize the protein as an open barrel with a 32 point-symmetry, approximately 10 nm in diameter and 9 nm in height, featuring a prominent central channel running along the molecular 3-fold axis (Fig. 1). Three 2-fold symmetric twisted rods, corresponding to the dimeric subunits, form the walls lining the channel. The lateral interactions between the dimeric building blocks take place along a fairly thin area representing at most 5 nm 2 . Each rod is made of two ellipsoidal lobes, which most likely correspond to the individual protomers (Fig. 1 and Fig. S2). The central channel has an estimated volume of 80 nm 3 with triangular openings of about 9 nm 2 at each end, rotated by 40 degrees about the 3-fold axis (Fig. 1C). The DENV NS1 dimeric subunits behave as membranous proteins in a Triton X-114 detergent phase partitioning assay. The very narrow interfaces observed between the dimeric subunits, together with the previously reported instability of the NS1 hexamer in nonionic detergents [START_REF] Flamand | Dengue virus type 1 nonstructural glycoprotein NS1 is secreted from mammalian cells as a soluble hexamer in a glycosylation-dependent fashion[END_REF][START_REF] Crooks | The NS1 protein of tick-borne encephalitis virus forms multimeric species upon secretion from the host cell[END_REF] and its resistance to high molarities of salt or to chelating agents (Fig. S3), indicated that the dimers are essentially held together by weak hydrophobic interactions. 
Further elements localizing within the channel are likely to be necessary to hold the dimeric rods together, for instance amphiphilic molecules such as lipids, as inferred by the dual behavior of the protein in a TX-114 detergent phase partitioning assay (Fig. 2). Whereas soluble and membranous proteins segregate in the aqueous and detergent phases, respectively, the detergent-treated NS1 partitions into both phases, with a higher proportion of protein retained in the detergent fraction (Fig. 2A). The DENV E protein, which contains a transmembrane anchor, remains exclusively in the detergent phase, as expected (Fig. 2A). NS1 recovered from the detergent-rich phase is essentially dimeric, as observed by treatment of the corresponding fraction with chemical cross-linker dimethylsuberimidate (DMS) and analysis of the resulting products by SDS/PAGE and Coomassie blue staining or mass spectrometry (Fig. 2B and 2C, respectively). In contrast, NS1 from the aqueous phase maintains its characteristic hexameric pattern with DMS treatment (Fig. 2B,2C). Thus, the soluble hexamer is composed of amphipathic dimeric subunits that behave as membrane proteins upon dissociation. This indicates that dimeric precursors likely interact with lipid membranes prior to hexamer assembly, possibly dragging lipids out of the membrane during the oligomeric transition. The NS1 protein is secreted as a lipoprotein particle rich in triglycerides. chromatography (TLC). A predominant species, well resolved on the TLC plate (Fig. 3A, arrow), was recovered and analyzed by nuclear magnetic resonance (NMR, Fig. 3B). The 1H NMR spectrum displays the characteristic signals of triglycerides (TG), including peaks at 5.17, 4.17 and 4.03 ppm related to protons of the glycerol moiety, and the corresponding cross peaks in 2D double quantum filtered correlation spectroscopy (DQF-COSY) (Fig. 3B). This was corroborated by gaz-liquid chromatography (GLC) analysis of NS1-associated TG (Fig. 3C). TG molecules are formed by three fatty acid chains that can all be different or all alike. Their length and degree of saturation is variable, although aliphatic chains with 16, 18 and 20 carbon atoms, which may contain one or two double bonds, are most frequently observed. GLC analysis of fatty acids derived from DENV-1 NS1 TG shows two major peaks corresponding to saturated palmitic acid (16:0) and unsaturated oleic acid (18:1), as well as minor polyunsaturated palmitoleic acid (16:1) and linoleic acid (18:2), and other peaks that likely correspond to background signals (Fig. 3C). Compared to authentic NS1 recovered from the supernatant of DENV-1 infected Vero cells, recombinant DENV-2 NS1 produced in Drosophila S2 cells also contains TG that display a homogeneous fatty acid profile composed of palmitic acid and stearic acid (18:0) (Fig. 3C). TG molecules extracted from control HDL particles are formed by palmitic, oleic and linoleic acid (Fig. 3C). The NS1 lipid moiety is similar to the lipid cargo of high density lipoproteins. In addition to TG, we were able to isolate cholesteryl ester (CE) molecules from the NS1 hexamer of DENV type-1 and -2. Sterol esters were isolated from recombinant NS1 preparations, separated on TLC plate and saponified. The corresponding sterol fractions were found to be identical to standard cholesterol both by GLC and mass spectrometry (Fig. 4). Other lipid species were identified by TLC and GLC, including mono-and diacylglycerol, phosphatidylcholine (PC) and phosphatidyl-ethanolamine (PE). 
We estimated the number of lipid molecules by comparing the amount of fatty acid chains (fatty acid methyl esters recovered by transesterification of NS1-associated lipids) to a defined amount of a C17 standard on GLC. By this method, we found that each NS1 hexamer binds 6 TG molecules (i.e. one per protomer), twice as many mono- and diacylglycerol molecules, 16-33 CE and 18-27 phospholipids (Table 1). Cholesterol and sphingomyelin (SM) were also observed on TLC but not quantified. Overall, the NS1 lipid composition is very similar to that of HDL, although HDL show a higher lipid:protein weight ratio (Table 1). Accordingly, the NS1 hexamer is denser than would be expected for an HDL particle of similar size (1.20-1.23 g/mL for the NS1 hexamer in comparison to 1.063-1.12 g/mL for 9-12 nm wide HDL) and rather fits within the class of very high density lipoproteins (VHDL, 1.21-1.25 g/mL), close to soluble proteins (1.26-1.28 g/mL). Of note, we found that the hexameric organization and lipid content are identical for the NS1 protein of two different DENV serotypes (Table 1 and Fig. 3C).
Modeling lipid organization within the DENV NS1 channel. We calculated that about 70 lipid molecules could be extracted from a single NS1 hexamer particle, representing a total volume of roughly 75 nm³ based on the specific volumes of the various lipid species [START_REF] Kumpula | Reconsideration of hydrophobic lipid distributions in lipoprotein particles[END_REF][START_REF] Nagle | Structure of lipid bilayers[END_REF]. These numbers do not take into account molecules of cholesterol and SM that are part of the lipid cargo. Depending on the lipid composition, and the presence of cholesterol in particular [START_REF] Nagle | Structure of lipid bilayers[END_REF], lipid compaction events can occur, suggesting that the whole NS1 lipid cargo can fit well into the 80 nm³ channel. TG and CE probably constitute the central lipid core while charged lipids such as PC, PE and SM rather occupy the outer layers, at either end of the NS1 channel. As described above, we estimate that there are between 18 and 27 phospholipids in total (i.e. 9-13 per channel opening). Considering an average surface area of 0.5 nm² per phospholipid [START_REF] Kumpula | Reconsideration of hydrophobic lipid distributions in lipoprotein particles[END_REF][START_REF] Nagle | Structure of lipid bilayers[END_REF], these can fill about 60% of the 9 nm² triangular surface present at each end of the NS1 channel (Fig. 1), leaving extra space for other polar lipids, such as sphingolipids and glycolipids. Altogether, the amount of lipids extracted from the NS1 particle is compatible with the dimensions of the central channel.
DISCUSSION
The flavivirus nonstructural protein NS1 has long been reported to undergo a complex maturation process. On the one hand, it is attached to intracellular membranes and the surface of infected cells; on the other, it is secreted into the extracellular medium and circulates in the serum of infected patients. In this study, we used a combination of biochemical and structural approaches to investigate the organization and composition of NS1 released by DENV-infected cells. We obtained a cryo-electron microscopy reconstruction of the secreted form of NS1, which reveals a barrel-like hexameric particle of about 10 nm in diameter, in which the three dimeric rods interact along narrow lateral surfaces and form a wide central channel (Fig. 1).
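To make the packing arithmetic from the modeling section above concrete, the following sketch reproduces the order-of-magnitude estimates. The molecule counts are mid-range values from Table 1; the per-molecule volumes are approximate values assumed here for illustration only and are not taken verbatim from the cited references:

```python
# Illustrative packing estimate for the NS1 lipid cargo. Counts are mid-range
# values from Table 1; per-molecule volumes are rough assumed values.

counts = {                       # molecules per NS1 hexamer
    "TG": 6,                     # triglycerides (one per protomer)
    "mono_diacylglycerol": 12,
    "CE": 24,                    # cholesteryl esters (reported range 16-33)
    "phospholipid": 22,          # reported range 18-27
}
approx_volume_nm3 = {            # assumed per-molecule volumes (illustrative)
    "TG": 1.6,
    "mono_diacylglycerol": 0.9,
    "CE": 1.1,
    "phospholipid": 1.3,
}

cargo_volume = sum(counts[k] * approx_volume_nm3[k] for k in counts)
print(f"estimated cargo volume: {cargo_volume:.0f} nm^3 (channel ~80 nm^3)")  # ~75 nm^3

# Headgroup coverage of each triangular opening, assuming ~0.5 nm^2 per phospholipid.
per_opening = counts["phospholipid"] / 2
coverage = per_opening * 0.5 / 9.0
print(f"phospholipid coverage of each 9 nm^2 opening: {coverage:.0%}")        # ~60%
```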
The channel was a most unexpected finding and we investigated its possible contribution to NS1 structure/function. As the contact areas between the dimers appeared insufficient to maintain the hexameric state of NS1 in solution [START_REF] Wodak | Structural basis of macromolecular recognition[END_REF], and were not consistent with the high stability of the protein in an aqueous environment, we searched for the presence of stabilizing elements that would localize within the channel. The dual behavior of the NS1 protein in a TX-114 detergent phase partitioning assay, in which the dimeric subunits partition to the detergent phase just like membrane proteins while the NS1 hexamer remains in the aqueous phase, indicated that amphiphilic molecules such as lipids could possibly be present (Fig. 2). Lipids could indeed be isolated from purified hexamer preparations and we observed a heterogeneous population by TLC with one predominant species identified by NMR as TG (Fig. 3). Other NS1-associated lipid species included mono- and diacylglycerol, cholesterol, CE, PC, PE and SM, an overall lipid composition thus very close to that of endogenous HDL circulating in plasma. Despite the notable homology of their lipid content, DENV NS1 and HDL particles have a fundamentally different protein organization. While HDL particles are composed of narrow ribbons of apolipoprotein A1 that tie up a large lipid bundle [START_REF]Biochemistry of Lipids, Lipoproteins and Membranes[END_REF][START_REF] Silva | Structure of apolipoprotein A-I in spherical high density lipoproteins of different sizes[END_REF][START_REF] Catte | Structure of spheroidal HDL particles revealed by combined atomistic and coarse-grained simulations[END_REF], the NS1 hexamer consists of a thick protein shell organized as an open barrel that can only accommodate a much smaller lipid cargo compared to HDL of similar size. Accordingly, the density of the NS1 hexamer (1.20-1.23 g/mL) rather corresponds to the smallest subclass of HDL particles (i.e., less than 7 nm in diameter). We estimate that over 70 lipid molecules associate with an NS1 hexamer, which is in agreement with the amount of lipids that could theoretically fit into the central channel [START_REF] Nagle | Structure of lipid bilayers[END_REF]. According to a model of lipid distribution in lipoprotein particles [START_REF] Kumpula | Reconsideration of hydrophobic lipid distributions in lipoprotein particles[END_REF], we hypothesize that TG, along with CE, preferentially constitute the central core of the lipid cargo, around which polar lipids (PC, PE, SM in particular) can pack, their charged heads facing the aqueous environment at either opening of the NS1 channel. The mechanism by which NS1 acquires its lipid cargo appears to involve an initial interaction of NS1 dimers with intracellular membranes. Since recombinant NS1, lacking the putative glycosylphosphatidylinositol (GPI)-anchor signal present in the downstream NS2A coding region [START_REF] Noisakran | Association of dengue virus NS1 protein with lipid rafts[END_REF][START_REF] Jacobs | Dengue virus nonstructural protein 1 is expressed in a glycosyl-phosphatidylinositol-linked form that is capable of signal transduction[END_REF], shows no difference in lipid binding capacity in comparison to native NS1, a GPI modification cannot account for NS1 attachment to membranes.
Early studies following the folding process of NS1 in the endoplasmic reticulum (ER) indicated that the NS1 monomer is water-soluble and becomes membrane-associated once the protein dimerizes [START_REF] Winkler | Newly synthesized dengue-2 virus nonstructural protein NS1 is a soluble protein but becomes partially hydrophobic and membrane-associated after dimerization[END_REF]. We propose that lipid-binding sites form during the dimerization process itself, possibly at the dimer interface (i.e. between the two lobes of the dimer, Fig. 1). The 1:1 molar ratio of TG to NS1 (i.e., six TG molecules per NS1 hexamer, Table 1) suggests that NS1 dimers bind specifically to the hydrophilic head of the lipid. As intracellular TG essentially accumulates between the two leaflets of the ER membrane, from which cytosolic lipid droplets (LD) arise [START_REF] Murphy | Mechanisms of lipid-body formation[END_REF][START_REF] Fujimoto | Cytoplasmic lipid droplets: rediscovery of an old structure as a unique platform[END_REF], one possibility is that NS1 dimers insert into the hemimembrane, thus gaining access to the neutral lipid pool. Such insertion would locally destabilize the organization of lipids, favoring the association of three NS1 dimers around a lipid cargo and its release from the membrane by pinching off, as pictured in Fig. S1. The association of secreted NS1 with TG links the DENV cycle to the biogenesis of LD within the infected cell. This corroborates a recent report indicating that the interaction of the DENV capsid (C) protein with LD is essential for viral replication and virus particle assembly [START_REF] Samsa | Dengue virus capsid protein usurps lipid droplets for viral particle formation[END_REF], thus raising the question as to whether C and NS1 cooperate in any way to hijack nascent LD and support intracellular viral processes. The discovery that DENV NS1 carries lipids in the extracellular milieu (Fig. S1) also has important pathophysiological implications. Lipoproteins are known to play a key role in vascular homeostasis and defects in lipoprotein functions can affect coagulation and predispose to vascular inflammation and thrombosis [START_REF] Stemerman | Lipoprotein effects on the vessel wall[END_REF][START_REF] Byrne | Triglyceride-rich lipoproteins: are links with atherosclerosis mediated by a procoagulant and proinflammatory phenotype?[END_REF]. By diverting certain lipids from their original fate, NS1 has the potential to interfere with the biogenesis of endogenous lipoprotein particles [START_REF] Blasiole | The physiological and molecular regulation of lipoprotein assembly and secretion[END_REF] or to deregulate the intracellular lipid sensing machinery during entry into target cells [START_REF] Glatz | Lipid sensing and lipid sensors[END_REF][START_REF] Huwiler | Lipids as targets for novel anti-inflammatory therapies[END_REF]. In line with our observations, several reports show that DENV-infected patients who developed DHF/DSS present a decrease of high- and low-density lipoprotein content in plasma, and altered levels of cholesterol and triglycerides [START_REF] Van Gorp | Changes in the plasma lipid profile as a potential predictor of clinical outcome in dengue hemorrhagic fever[END_REF][START_REF] Villar-Centeno | Biochemical alterations as markers of dengue hemorrhagic fever[END_REF][START_REF] Suvarna | Serum lipid profile: a predictor of clinical outcome in dengue infection[END_REF].
Moreover, the nutritional status of children appears to impact the risk of developing a fatal DHF/DSS [START_REF] Kalayanarooj | Is dengue severity related to nutritional status? Southeast Asian[END_REF] (53). The NS1 lipoprotein particle can also directly modulate the host response by binding to factors of the complement system [START_REF] Avirutnan | Antagonism of the complement component C4 by flavivirus nonstructural protein NS1[END_REF][START_REF] Kurosu | Secreted complement regulatory protein clusterin interacts with dengue virus nonstructural protein 1[END_REF], either as part of a viral escape mechanism or as a means to exacerbate inflammation [START_REF] Falgarone | Chapter 8: Clusterin, a multifacet protein at the crossroad of inflammation and autoimmunity[END_REF][START_REF] Heinecke | The HDL proteome: a marker--and perhaps mediator--of coronary artery disease[END_REF]. In conclusion, our study identifies a novel class of lipoprotein particle, exemplified by the barrel-like DENV NS1 protein. The organization of the NS1 hexamer around a "soft" lipid core explains at least in part the lack of success of the crystallization efforts and the limited resolution of the cryo-EM reconstruction. The striking similarity of the NS1 lipid moiety to that of HDL suggests that NS1 has the potential to interfere with the vascular system and induce important physiological disorders during its circulation in infected patients. The NS1 properties reported here are common to different DENV serotypes, opening promising new therapeutic avenues to fight dengue disease, such as interfering with NS1 secretion or targeting its hydrophobic channel.
EXPERIMENTAL PROCEDURES
Full experimental procedures and associated references are available as Supporting Information.
Three-dimensional reconstruction of the DENV NS1 hexamer. Cryo-electron microscopy analysis of the NS1 protein sample was carried out with a Philips CM12 transmission electron microscope using a LaB6 filament at 120 kV. Images were analyzed with the EMAN and EM software packages. Characteristic class averages representing different orientations of the particles on the micrograph were used to calculate an initial 3D reconstruction, which was further refined, as described in the Supporting Information section, to obtain a final electron density map at a resolution of about 3 nm.
Triton X-114 detergent phase partitioning assay. Triton X-114 detergent phase partitioning was performed on purified preparations of NS1 using a pre-condensed preparation of Triton X-114 at a 1% final concentration. Following an overnight incubation at 4°C, potential aggregates were pelleted by high speed centrifugation (insoluble fraction) before separating the detergent from the aqueous phase at 30°C. In a set of experiments, proteins from the aqueous and detergent phases were further cross-linked with dimethylsuberimidate (DMS) at 4°C. Proteins were analyzed by SDS-PAGE and Coomassie Blue staining or by mass spectrometry on a PBS II mass reader (Ciphergen Biosystems, Inc.). The DENV envelope protein E was used as a control transmembrane protein and detected by Western blotting.
Characterization of the NS1 lipid moiety. Potential lipid components were extracted using a standard solvent extraction procedure. Lipids were analyzed by TLC and stained with iodine or dichlorofluorescein. Different lipid classes were trans-esterified for their characterization by GLC.
Fatty acid methyl esters (FAMEs) were analyzed on an Agilent Technologies chromatograph model 6890 equipped with a BPX 70 fused silica capillary column (60 m x 0.25 mm i.d., 0.25 µm film thickness). One of the predominant lipid species was also subjected to NMR (see Supplemental Methods). The NMR spectra were acquired on a Bruker 600 MHz spectrometer equipped with a triple resonance z-axis gradient cryoprobe at 298 K, in CDCl3.
Figure legends:
Fig. 1. Cryo-EM analysis of DENV-1 secreted NS1.
Fig. 2. The NS1 hexamer is composed of amphipathic dimeric subunits.
Fig. 3. The NS1 hexamer carries a lipid cargo rich in triglycerides.
Fig. 4. Identification of NS1-associated cholesteryl ester by GLC-MS.
Table 1: Quantification by gas-liquid chromatography of different lipid species (triglycerides, mono-/diglycerides, cholesteryl esters) associated to the DENV-1 NS1 hexamer. (a) nmol, relative values using a C17 fatty acid calibration standard; (b) lipids recovered from 200 µg purified protein preparations (approximately 4.5 nmol of the 50 kDa NS1 protomer; see Supplemental Material); (c) values reported for two different purified NS1 preparations.
ACKNOWLEDGEMENTS
We thank Michel Guichardant (INSA Lyon/Institut Multi-disciplinaire de Biochimie des Lipides, Villeurbanne, France) and Paulette Hervé (Institut de Biologie Physico-Chimique, Paris, France) for their contribution to lipid characterization, Claire Huang and Rich Kinney (Center for Disease Control and Prevention, Fort Collins, CO) for providing the DENV-2 16681 cDNA clone, and Michèle Bouloy (Institut Pasteur, Paris, France) and Vincent Deubel (Institut Pasteur, Cambodia) for their support. We acknowledge funds from the Institut Pasteur, Paris, France (DARRI 27265).
Author Information. Correspondence and requests for materials should be addressed to F.A.R. ([email protected]) or M.F. ([email protected]).
01771766
en
[ "sde" ]
2024/03/05 22:32:18
2017
https://amu.hal.science/hal-01771766/file/TDTH-SpecialIssue-SouthAsia_Review_final_submission.pdf
Kira Vinke email: [email protected] María J Martín Sophie Adams email: [email protected] Florent Baarsch email: [email protected] Dim Coumou Alberte Bondeau email: [email protected] Reik V Donner email: [email protected] Arathy Menon email: [email protected] Mahé Perrette email: [email protected] Kira Rehfeld email: [email protected] Maria A Martin email: [email protected] Alexander Robinson email: [email protected] Marcia Rocha email: [email protected] Michiel Schaeffer email: [email protected] Susanne Schwan email: [email protected] Olivia Serdeczny email: [email protected] Anastasia Svirejeva- Hopkins Climatic risks and impacts in South Asia: extremes of water scarcity and excess Keywords: South Asia, Climate Change, Climate Impacts, Water, Agriculture de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction South Asia, here referring to a region comprising the seven countries Bangladesh, Bhutan, India, the Maldives, Nepal, Pakistan and Sri Lanka, has a total population of about 1.6 billion people as of 2010, projected to rise to over 2.2 billion by 2050 (World Bank 2013b). The region has seen robust economic growth in recent years, yet poverty remains widespread and projected changes in the climate could severely affect the rural economy and agriculture. Dense urban populations meanwhile are especially vulnerable to heat extremes, flooding, and disease. This paper aims at providing a condensed overview of the scientific findings on the physical and biophysical impacts in South Asia based on the Turn Down the Heat Report (2013). Both the methodology and scope of this analysis reflect those of the report. The framing of this overview, however, diverges from the original form as can be seen from the structure outlined below. The evaluation of the presented findings thus also follows a different approach. The first section (Climate) gives a concise, state of the art overview of the physical aspects of climate change, such as temperature or precipitation changes, to be expected in South Asia in a 2°C and a 4°C world, respectively. While current warming levels have already led to observable, non-negligible impacts, they are not the focus of this analysis. This paper rather aims at highlighting the difference between a 2°C and 4°C world, in order to clarify the consequences of current choices in climate policy. The climate section will provide the basis for the analysis of climate change impacts on different sectors presented in the next section (Impacts). Here, the focus lies on the combined and multiple, interacting physical and biophysical impacts that climatic changes have on human systems, organized into two sections that are also intertwined: water and agriculture-related impacts. This is followed by an analysis of how these resulting impacts interact with human livelihoods. An overview of the results is given in tabular form in Online Resource 1, a short version thereof in Figure 1. Climate The results on temperature and precipitation in South Asia in this section are, if not referenced otherwise, based on our own analysis (compare [START_REF] Coumou | Historic and future increase in the global land area affected by monthly heat extremes[END_REF] of five bias-corrected CMIP5 models as in the ISIMIP effort [START_REF] Warszawski | The Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP): project framework[END_REF]. 
The terms "2°C world" and "4°C world" therein refer to the scenarios RCP2.6 and RCP8.5 by the end of the century (which refers to the average of the time period 2071-2099, if not mentioned otherwise). The baseline period is 1951-80. Our understanding of the physical aspects of climate change presented in this section is different for each type of climatic change: For example, in contrast to the processes behind temperature responses to increased greenhouse gas emissions, which are fairly well understood, projecting the hydrological cycle poses inherent difficulties because of the higher complexity of the physical processes and the scarcity of long-term, high-resolution rainfall observations [START_REF] Allen | Constraints on future changes in climate and the hydrologic cycle[END_REF]. Precipitation projections hence have a much larger spread and uncertainty than temperature projections, both for strength and localization. Temperature A warming trend has begun to emerge over South Asia in the last few decades, particularly in India, and appears to be consistent with the signal expected from human-induced climate change [START_REF] Kumar | The once and future pulse of Indian monsoonal climate[END_REF]. As for the 21 st century, a 2°C world shows substantially lower average warming over the South Asian land area than would occur in a 4°C world. In a 4°C world, South Asian summer temperatures are projected to increase by 3°C to nearly 6°C above the baseline by 2100, with the warming most pronounced in Pakistan (see Figure OR1, in Online Resource 1). While that pattern is the same in a 2°C world, the warming by the end of the century is limited to 2°C in the North West and to 1°C to 2°C in the remaining regions. In absolute terms, inland regions in India warm somewhat more than the coast. Relative to the local year-to-year natural variability -which is the relevant measure for adaptation capacity -the pattern is reversed, especially in the southwest. In a 4°C world, the west coast and southern India, as well as Bhutan and northern Bangladesh, even shift to new climatic regimes, with the monthly temperature distribution moving 5-6 standard deviations toward warmer values. These projections are consistent with other assessments based on CMIP3 models (see, e.g. [START_REF] Kumar | The once and future pulse of Indian monsoonal climate[END_REF]. Heat Extremes The exposure to an increase in heat extremes could be substantially limited by holding warming below 2°C compared to the prospects of a 4°C world [START_REF] Coumou | Historic and future increase in the global land area affected by monthly heat extremes[END_REF]. In a 4°C world, our model analysis for South Asia shows a strong increase in the frequency of boreal summer months hotter than 5-sigma (with respect to the historical mean) over the Indian subcontinent, especially in the south and along the coast as well as for Bhutan and parts of Nepal (Figure OR2, right bottom panel). By 2100, there is an approximately 60-percent chance that a summer month will be hotter than 5-sigma in the multimodel mean, very close to the global average percentage. The limitation of surface area for averaging to South Asia, however, implies that there is larger uncertainty about the timing and magnitude of the increase in frequency of extremely hot months. 
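The n-sigma exceedance statistics quoted here can be illustrated with a minimal sketch of the underlying threshold calculation. This is not the published analysis pipeline, which uses bias-corrected multi-model CMIP5 fields and area aggregation; the temperature arrays below are hypothetical placeholders:

```python
import numpy as np

def heat_extreme_frequency(baseline_months, future_months, n_sigma=3.0):
    """Fraction of future summer months exceeding the baseline mean by n_sigma.

    baseline_months, future_months: 1-D arrays of monthly-mean temperatures
    for the same calendar months and location, e.g. JJA values for 1951-1980
    and 2071-2099. Sketch only; the published analysis additionally involves
    bias correction, multi-model averaging and spatial aggregation.
    """
    mu = np.mean(baseline_months)
    sigma = np.std(baseline_months, ddof=1)   # baseline interannual variability
    threshold = mu + n_sigma * sigma
    return np.mean(future_months > threshold)

# Hypothetical example: a strongly warm-shifted end-of-century distribution.
rng = np.random.default_rng(0)
baseline = rng.normal(28.0, 0.7, size=90)      # 30 years x 3 summer months
future = rng.normal(28.0 + 4.0, 0.8, size=87)  # placeholder 4°C-world warming
print("months beyond 3-sigma:", heat_extreme_frequency(baseline, future, 3.0))
print("months beyond 5-sigma:", heat_extreme_frequency(baseline, future, 5.0))
```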
Still, by the end of the 21 st century, most summer months in the north of the region (>50 percent) and almost all summer months in the south (>90 percent) would be hotter than 3-sigma under RCP8.5 (Figure OR2, right top panel). In a 2°C world, in contrast, most of the high-impact heat extremes projected by RCP8.5 for the end of the century would be avoided. Extremes beyond 5-sigma would be virtually absent, except for the southernmost tip of India and Sri Lanka (Figure OR2, bottom left panel). The less extreme months (i.e., beyond 3-sigma), however, would still increase substantially and cover about 20 percent of the surface area of the Indian subcontinent (Figure OR2, top left panel). The increase in frequency of these events would occur in the near term and level off by mid-century. Thus, irrespective of the future emission scenario, the frequency of extreme summer months beyond 3-sigma in the near term would increase several fold. By the second half of the 21 st century, mitigation would have a strong effect on the number and intensity of extremes. For the Indian subcontinent, the multi-model mean of all CMIP5 models projects that warm spells, with consecutive days beyond the 90 th percentile, will lengthen to 150-200 days under RCP8.5, but only to 20-45 days under RCP2.6 [START_REF] Sillmann | Climate extremes indices in the CMIP5 multimodel ensemble: Part 2. Future climate projections[END_REF]. Precipitation A warmer atmosphere can carry significantly more water than a cooler one based on thermodynamic considerations. Taking into account energy balance considerations, climate models generally project an increase in global mean precipitation of about 2 percent per degree of warming. In the 5 bias-corrected GCMs analyzed here, annual mean precipitation increases under both emissions of greenhouse gases and aerosols in the RCP2.6 and RCP8.5 scenarios over most areas of the region (Figure OR3, top row). The notable exception is western Pakistan. The percentage increase in precipitation is enhanced under RCP8.5, and the region stretching from the northwest coast to the southeast coast of the Indian peninsula will experience the highest percentage (~30 percent) increase in annual mean rainfall. The percentage change in summer (JJA) precipitation (i.e., during the wet season) resembles that of the change in annual precipitation (Figure OR3, bottom row). The winter (DJF) precipitation (Figure OR3, middle row) shows a relative decrease in Pakistan and the central and northern regions of India, whereas the rest of the regions show inter-model uncertainty in the direction of change under the RCP8.5 scenario. This is in agreement with previous studies based on the IPCC AR4 (CMIP3) models (e.g., [START_REF] Chou | Asymmetry of tropical precipitation change under global warming[END_REF], which suggest that the wet season gets wetter and the dry season gets drier. Under RCP2.6 the direction of the percentage change in winter rainfall shows large inter-model uncertainty over almost all regions of India. 
In addition to these patterns, there are observed increases in the frequency of the most extreme precipitation events [START_REF] Gautam | Climate Change and Conflict in South Asia[END_REF][START_REF] Gautam | Aerosol and rainfall variability over the Indian monsoon region: Distributions, trends and coupling[END_REF] , with more extreme events occurring over the west coast and central and northeast India [START_REF] Ajayamohan | Indian Ocean Dipole Modulates the Number of Extreme Rainfall Events over India in a Warming Environment[END_REF][START_REF] Goswami | Increasing trend of extreme rain events over India in a warming environment[END_REF][START_REF] Singh | On climatic fluctuations and environmental changes of the indo-gangetic plains, India[END_REF]. Also, the frequency of short drought periods increases [START_REF] Deka | Trends and fluctuations of rainfall regime in the Brahmaputra and Barak basins of Assam, India[END_REF]. [START_REF] Deka | Trends and fluctuations of rainfall regime in the Brahmaputra and Barak basins of Assam, India[END_REF] attribute this to a superposition of the effects of global warming on the normal monsoon system. They argue that these changes "indicate a greater degree of likelihood of heavy floods as well as short spell droughts. This is bound to pose major challenges to agriculture, water, and allied sectors in the near future." Monsoon Depending on the skill metric, most models are not able to resolve elementary aspects of the monsoon (onset, duration, break/active phases). However, model projections in general show an increase in the Indian monsoon rainfall under future emission scenarios of greenhouse gases and aerosols. The latest generation of models (CMIP5) confirms this picture, projecting an overall increase of approximately 2.3% per degree of warming for summer monsoon rainfall (Menon et al. 2013). The increase in precipitation simulated by the models is attributed to an increase in moisture availability in a warmer world. It is, somewhat paradoxically, found to be accompanied by a weakening of the monsoonal circulation [START_REF] Bollasina | Anthropogenic aerosols and the weakening of the South Asian summer monsoon[END_REF][START_REF] Krishnan | Will the South Asian monsoon overturning circulation stabilize any further?[END_REF][START_REF] Turner | Climate change and the South Asian summer monsoon[END_REF], which is explained by energy balance considerations [START_REF] Allen | Constraints on future changes in climate and the hydrologic cycle[END_REF]. Compared to the pre-industrial period, selected CMIP5 models show an increase in mean monsoon rainfall of 5-20 percent in a 4 °C world [START_REF] Jourdain | The Indo-Australian monsoon and its relationship to ENSO and IOD in reanalysis data and the CMIP3/CMIP5 simulations[END_REF]. A significant uncertainty remains (see also hashed areas in Figure OR3), compare [START_REF] Collins | Observational challenges in evaluating climate models[END_REF] and Sperber et al. (2012). Recent observations of total rainfall amounts during the monsoon period indicate a decline in the last few decades [START_REF] Bollasina | Anthropogenic aerosols and the weakening of the South Asian summer monsoon[END_REF][START_REF] Srivastava | Assessment on vulnerability of sorghum to climate change in India[END_REF][START_REF] Turner | Climate change and the South Asian summer monsoon[END_REF][START_REF] Wang | Recent change of the global monsoon precipitation (1979-2008)[END_REF]. 
While the observed decline is inconsistent with the projected effects of global warming, there are indications that the decline could be (at least in part) due to the effects of black carbon and other anthropogenic aerosols [START_REF] Bollasina | Anthropogenic aerosols and the weakening of the South Asian summer monsoon[END_REF][START_REF] Turner | Climate change and the South Asian summer monsoon[END_REF]. Also, although most studies agree on the existence of this decrease, its magnitude and significance are highly dependent on the subregion on which the analysis is performed and the dataset that is chosen. While most modeling studies project an increase in average annual mean monsoonal precipitation on decadal timescales, they also project significant increases in inter-annual and intra-seasonal variability [START_REF] Endo | Future changes and uncertainties in Asian precipitation simulated by multiphysics and multi-sea surface temperature ensemble experiments with high-resolution Meteorological Research Institute atmospheric general circulation models (MRI-AGCMs)[END_REF][START_REF] Kumar | The once and future pulse of Indian monsoonal climate[END_REF][START_REF] May | The sensitivity of the Indian summer monsoon to a global warming of 2°C with respect to preindustrial times[END_REF]; Menon, Levermann, and Schewe 2013; [START_REF] Sabade | Projected changes in South Asian summer monsoon by multi-model global warming experiments[END_REF][START_REF] Turner | Climate change and the South Asian summer monsoon[END_REF]:
- An increase in the frequency of years with above-normal monsoon rainfall and of years with extremely deficient rainfall [START_REF] Endo | Future changes and uncertainties in Asian precipitation simulated by multiphysics and multi-sea surface temperature ensemble experiments with high-resolution Meteorological Research Institute atmospheric general circulation models (MRI-AGCMs)[END_REF][START_REF] Kripalani | South Asian summer monsoon precipitation variability: Coupled climate model simulations and projections under IPCC AR4[END_REF].
- An increase in the seasonality of rainfall, with more rainfall during the wet season [START_REF] Fung | Water availability in +2°C and +4°C worlds[END_REF][START_REF] Turner | Climate change and the South Asian summer monsoon[END_REF], and an increase in the number of dry days [START_REF] Gornall | Implications of climate change for agricultural productivity in the early twenty-first century[END_REF] and droughts [START_REF] Dai | Increasing drought under global warming in observations and models[END_REF][START_REF] Kim | Future pattern of Asian drought under global warming scenario[END_REF].
- An increase in the number of extreme precipitation events [START_REF] Endo | Future changes and uncertainties in Asian precipitation simulated by multiphysics and multi-sea surface temperature ensemble experiments with high-resolution Meteorological Research Institute atmospheric general circulation models (MRI-AGCMs)[END_REF][START_REF] Kumar | The once and future pulse of Indian monsoonal climate[END_REF].
Changes in monsoon variability are expected to pose major challenges to human communities which depend on precipitation and river runoff as major sources of freshwater (see Water-Related Impacts). There are particularly large uncertainties in projections of spatial distribution and magnitude of the heaviest extremes of monsoon rainfall [START_REF] Turner | Climate change and the South Asian summer monsoon[END_REF].
A potential abrupt change in the monsoon [START_REF] Schewe | A statistically predictive model for future monsoon failure in India[END_REF] caused by global warming, toward a much dryer, lower rainfall state could cause major droughts which would likely precipitate a major crisis in South Asia. At this stage such a risk remains speculative -but clearly demands further research given the significant consequences of such an event. Glacial Loss and River Flow Most of the Himalayan glaciers, where 80 percent of the moisture is supplied by the summer monsoon, have been retreating over the past century. The Indus and the Brahmaputra basins depend heavily on snow and glacial melt water, which make them extremely susceptible to climate-change-induced glacier melt and snowmelt [START_REF] Immerzeel | Climate change will affect the Asian water towers[END_REF]. Very substantial reductions in the flow of the Indus and Brahmaputra in late spring and summer are projected for the coming few decades with a shift toward high winter and spring runoff likely well before a 2°C warming. These trends are projected to become quite extreme in a 4°C warming scenario [START_REF] Diffenbaugh | Response of snow-dependent hydrologic extremes to continued global warming[END_REF]. The Ganges, due to high annual downstream precipitation during the monsoon season, is less dependent on melt water [START_REF] Immerzeel | Climate change will affect the Asian water towers[END_REF]). The differences between the river basins are analyzed in some detail by [START_REF] Vliet | Global river discharge and water temperature under climate change[END_REF]. Combined with precipitation changes, loss of glacial ice and a changing snowmelt regime could lead to substantial changes in downstream flow extremes. For example, the Brahmaputra may less frequently experience extreme low flow conditions in the future [START_REF] Gain | Impact of climate change on the stream flow of the lower Brahmaputra: trends in high and low flows based on discharge-weighted ensemble modelling[END_REF]). However, there could be a strong increase in peak flow, which is associated with flood risks [START_REF] Ghosh | Impact of climate change on flood characteristics in Brahmaputra basin using a macroscale distributed hydrological model[END_REF]. Combined with projected sea-level rise, this could have serious implications for Bangladesh and other low-lying areas in the region [START_REF] Gain | Impact of climate change on the stream flow of the lower Brahmaputra: trends in high and low flows based on discharge-weighted ensemble modelling[END_REF]). Sea-level Rise Current sea levels and projections of future sea-level rise are not uniform across the world and projections of local sea-level rise in South Asia show a stronger increase compared to higher latitudes [START_REF] Perrette | A scaling approach to project regional sea level rise and its uncertainties[END_REF]. Using a compilation of IPCC AR5 and other recent studies (see methods in World Bank 2014), regional sea-level rise in South Asia by the end of the 21 st century is projected to be approximately 0.65m (0.4m to 1.2m) in a 4°C and 0.4m (0.2m to 0.7m) a 2°C world (relative to 1986-2005). This is generally around 5-10 percent higher than the global mean. 
The projections of sea-level rise for a 2°C and a 4°C world (corresponding to RCP 2.6 and RCP 8.5, respectively) start diverging significantly only in the second half of the 21st century, each roughly 10 years ahead of the corresponding global mean development. The rate of sea-level rise, however, is an indicator with an even more pronounced difference between high- and low-emission scenarios, with implications for adaptation processes and commitment to a larger divergence of sea-level rise between 2°C and 4°C worlds post 2100. It features end-of-century rates of 13 mm/yr (7 to 24 mm/yr) in a 4°C world, down to 4 mm/yr (1 to 7 mm/yr) in a 2°C world. Note that the projections presented here include only the effects of human-induced global climate change and not those of local land subsidence due to natural or human influences; these factors need to be accounted for in projecting the local and regional risks, impacts, and consequences of sea-level rise.
Tropical Cyclones
For the northern Indian Ocean, recent changes in the total annual tropical cyclone (TC) frequency have not been significant [START_REF] Knutson | Tropical cyclones and climate change[END_REF]. However, the frequency of strong cyclones (SC) of category 4 and 5 has considerably increased over the last decades (1975-89: only 1 SC; 1990-2004: 7 SCs; cf. Webster et al. 2005). In parallel, the maximum wind speeds (within individual TCs) also displayed a significant upward trend [START_REF] Elsner | The increasing intensity of the strongest tropical cyclones[END_REF]. Most of the recent projections of future changes in the regional TC characteristics still suffer from an insufficient representation of TC generating mechanisms in state-of-the-art general circulation models (GCMs) [START_REF] Emanuel | Hurricanes and Global Warming: Results from Downscaling IPCC AR4 Simulations[END_REF]. As a consequence, there has been no full quantitative (and partly even no qualitative) consensus among different studies regarding expected changes in the (still generally low) annual TC frequency in the northern Indian Ocean [START_REF] Murakami | Future Changes in Tropical Cyclone Activity Projected by the New High-Resolution MRI-AGCM*[END_REF][START_REF] Tory | Projected Changes in Late-Twenty-First-Century Tropical Cyclone Frequency in 13 Coupled Climate Models from Phase 5 of the Coupled Model Intercomparison Project[END_REF]. However, some general tendencies have been identified recently:
- For a moderate 2°C warming scenario by 2100, [START_REF] Gualdi | Changes in Tropical Cyclone Activity due to Global Warming: Results from a High-Resolution Coupled General Circulation Model[END_REF] projected a systematic decrease of TC frequencies, while the average TC duration remains almost unchanged in comparison to present day. In turn, the cyclogenesis potential under the B1 emission scenario has been found to increase by 6% [START_REF] Caron | Analysing present, past and future tropical cyclone activity as inferred from an ensemble of Coupled Global Climate Models[END_REF].
- Under the A1B emission scenario (about 3.5°C warming in comparison to the pre-industrial level by 2100), ensemble simulations with a high-resolution model have shown decreasing TC frequencies over the Bay of Bengal by 31% until 2100, but a 46% increase over the Arabian Sea [START_REF] Murakami | Future changes in tropical cyclone activity in the North Indian Ocean projected by high-resolution MRI-AGCMs[END_REF].
For the entire northern Indian Ocean [START_REF] Emanuel | Hurricanes and Global Warming: Results from Downscaling IPCC AR4 Simulations[END_REF], this results in an overall increasing frequency. There is, however, a considerable uncertainty (J.-H. [START_REF] Kim | Future changes in tropical cyclone genesis in fully dynamic ocean-and mixed layer ocean-coupled climate models: a low-resolution model study[END_REF]) related to the considerable spatial heterogeneity of trends as well as strong intra-annual variability [START_REF] Murakami | Future changes in tropical cyclone activity in the North Indian Ocean projected by high-resolution MRI-AGCMs[END_REF][START_REF] Murakami | Future Changes in Tropical Cyclone Activity Projected by the New High-Resolution MRI-AGCM*[END_REF]. Regarding future TC intensities, [START_REF] Murakami | Future Changes in Tropical Cyclone Activity Projected by the New High-Resolution MRI-AGCM*[END_REF] revealed a general upward tendency which is in accordance with a previously reported 10% increase in the cyclogenesis potential [START_REF] Caron | Analysing present, past and future tropical cyclone activity as inferred from an ensemble of Coupled Global Climate Models[END_REF].
- Under the even more severe A2 scenario, the possible number of TCs in the northern Indian Ocean would increase even further by about 16% in comparison to present day [START_REF] Caron | Analysing present, past and future tropical cyclone activity as inferred from an ensemble of Coupled Global Climate Models[END_REF], which is accompanied by a moderate increase in cyclogenesis potential [START_REF] Chattopadhyay | On the variability of projected tropical cyclone genesis in GCM ensembles[END_REF].
- Recent CMIP5 model results under RCP8.5 are in qualitative agreement with the aforementioned findings [START_REF] Tory | Projected Changes in Late-Twenty-First-Century Tropical Cyclone Frequency in 13 Coupled Climate Models from Phase 5 of the Coupled Model Intercomparison Project[END_REF].
Impacts
As climate change impacts are often closely intertwined with one another, a clear demarcation between them poses a methodological challenge. While many classifications may be valid, this paper deploys the following approach in order to provide an overview of the most important physical and biophysical impacts possibly affecting human life in South Asia: We provide a focus on physical and biophysical impacts on human systems, categorized into water- and agriculture-related impacts. Whereas these are closely interconnected (for instance, impacts on water influence agriculture, but agriculture also influences the atmospheric water cycle), they provide a working structure to form a representative description of the most significant impacts and their concomitant effects on livelihoods in South Asia. The complex interactions of impacts with livelihoods are divided into hunger and poverty, health, migration, and conflict. Naturally, these topics are also interlinked and there is a need to further connect the research, in order to fully assess aggregated risks from different sectors for the region.
Water-Related Impacts
Many of the climate risks and impacts that pose potential threats to populations in South Asia are associated with changes in the hydrological cycle: extreme rainfall, droughts, and declining snowfall and glacial loss in the Himalayas leading to changes in river flow. In the coastal regions these are combined with the consequences of sea-level rise and increased tropical cyclone intensity.
The climate of South Asia is dominated by the monsoon: the timely arrival of the summer monsoon, and its regularity, are critical for the rural regions and food production in South Asia. The Indus, the Ganges, and the Brahmaputra basins provide water to approximately 750 million people (209 million, 478 million, and 62 million respectively in the year 2005; [START_REF] Immerzeel | Climate change will affect the Asian water towers[END_REF]). In fact, a fifth of the world's population depends on the ecosystem of the Greater Himalaya region. An increasing occurrence of extremely low snow years and a shift toward extremely high winter/spring runoff and extremely low summer runoff would increase the flood risk during the winter/spring, and decrease the availability of freshwater during the summer [START_REF] Giorgi | Higher Hydroclimatic Intensity with Global Warming[END_REF].
Floods
The flooding events influenced or caused by climate change include glacial lake outbursts [START_REF] Bates | of the Intergovernmental Panel on Climate Change[END_REF][START_REF] Lal | Implications of climate change in sustained agricultural productivity in South Asia[END_REF][START_REF] Mirza | Climate change, flooding in South Asia and implications[END_REF], flash floods, inland river floods, extreme precipitation-causing landslides, and coastal river flooding, combined with the effects of sea-level rise and storm-surge-induced coastal flooding. Precipitation is the major cause of flooding [START_REF] Mirza | Climate change, flooding in South Asia and implications[END_REF] (see, for the example of the 2010 flash flood in Pakistan, [START_REF] Webster | Were the 2010 Pakistan floods predictable?[END_REF]). Since 1980, the risks from flooding have grown mainly due to population and economic growth in coastal regions and low-lying areas. In South Asia, almost 45 million people were exposed to floods in 2010, accounting for approximately 65 percent of the global population exposed to floods in that year (UNISDR 2011). The proportion of the population prone to river flooding increases rapidly with higher levels of warming [START_REF] Arnell | The impacts of climate change on river flow regimes at the global scale[END_REF]: globally about twice as many people are predicted to be prone to flooding in 2100 in a 4°C world compared to a 2°C scenario, and by the 2050s increases in the risk of flooding are particularly large for South Asia. Deltaic regions in particular are vulnerable to more severe flooding, loss of wetlands, and a loss of infrastructure and livelihoods as a consequence of sea-level rise and climate-change-induced extreme events [START_REF] Douglas | Climate change, flooding and food security in south Asia[END_REF][START_REF] Syvitski | Sinking deltas due to human activities[END_REF] (World Bank 2010). Climate change is not the only driver of an increasing vulnerability to floods and sea-level rise. Human activities inland (such as upstream damming, irrigation barrages, and diversions) as well as activities on the delta (such as water withdrawal) can significantly affect the rate of aggradation and local subsidence in the delta. Subsurface mining is another driver [START_REF] Syvitski | Sinking deltas due to human activities[END_REF]. Bangladesh is one of the most densely populated countries in the world, with a large population living within a few meters of sea level. Flooding of the Ganges-Brahmaputra-Meghna Delta occurs regularly and is part of the annual cycle of agriculture and life in the region.
However, the fact that up to two-thirds of the land area of Bangladesh is flooded every three to five years already causes substantial damage to infrastructure, livelihoods, and agriculture, and especially to poor households (World Bank 2010). Projections consistently show substantial and growing risks for the country. [START_REF] Mirza | Climate change, flooding in South Asia and implications[END_REF] estimates the flooded area could increase by as much as 29 percent for a 2.5°C increase in warming above pre-industrial levels. At higher levels of warming, the rate of increase in the extent of mean flooded area per degree of warming is estimated to be lower [START_REF] Mirza | Climate change, flooding in South Asia and implications[END_REF].
Tropical Cyclones
More intense tropical cyclones, combined with sea-level rise, would increase the depth and risk of inundation from floods and storm surges. Although only 15 percent of all tropical cyclones affect South Asia, India and Bangladesh alone account for 86 percent of global deaths from cyclones (UNISDR 2011). Furthermore, the highest risk of inundation is projected to occur in areas with the largest shares of poor people (Mearns and Norton 2009). In Bangladesh, for example, a projected 27 cm sea-level rise by 2050, combined with a storm surge induced by an average 10-year return-period cyclone such as Sidr (NASA 2007; Wassmann et al. 2009b), could under certain conditions inundate an area 88 percent larger than the area inundated by current cyclonic storm surges (Mearns and Norton 2009). Besides deaths and injuries, further indirect effects of floods and cyclones on health result from disruptions to food supply and access to safe drinking water.
Droughts
According to our own analysis, droughts are expected to pose an increasing risk in parts of the region, particularly Pakistan, while increasing wetness is projected for southern India (Figure OR3). The direction of change is uncertain for northern India. This is consistent with other estimates using projections of precipitation and warming [START_REF] Dai | Increasing drought under global warming in observations and models[END_REF]: for a global mean warming of 3°C by the end of the 21st century, the drought risk expressed by the Palmer Drought Severity Index becomes higher across much of northwestern India, Pakistan (and also Afghanistan) but becomes lower across southern and eastern India. It should be noted that such projections are uncertain, not only due to the spread in model projections but also to the choice of drought indicator [START_REF] Taylor | Contributions to uncertainty in projections of future drought under climate change scenarios[END_REF]. The projected increase in seasonality of precipitation is associated with an increase in the number of dry days and droughts, with adverse consequences for human lives.
Water Security
Future water security under climate change is a growing concern. It is dependent on the complex relationship among population growth, increases in agricultural and economic activity, increases in total precipitation, and the ultimate loss of glacier-fed water and snow cover, combined with regional variations and changes in seasonality across South Asia. The assessment of water security threats is undertaken using differing metrics across the studies, making a comprehensive assessment difficult.
Several studies find that South Asia is already a highly water-stressed region [START_REF] Fung | Water availability in +2°C and +4°C worlds[END_REF][START_REF] Vörösmarty | Global threats to human water security and river biodiversity[END_REF]). It has very low levels of water storage capacity per capita, which increases vulnerability to fluctuations in water flows and changing monsoon patterns (Ministry of Environment and Forests 2012). Projections show that in most cases climate change aggravates the situation (De Fraiture and Wichelns 2010; ESCAP 2011; [START_REF] Green | Beneath the surface of global change: Impacts of climate change on groundwater[END_REF], particularly for the agricultural sector [START_REF] Sadoff | Water Management, Water Security and Climate Change Adaptation: Early Impacts and Essential Responses[END_REF], compare section Impacts on Agriculture. An example of the complexity of such prognoses can be seen in the work of [START_REF] Fung | Water availability in +2°C and +4°C worlds[END_REF], who project the effects of global warming on river runoff in the Ganges basin. A warming of about 2.7°C above pre-industrial levels is projected to lead to a 20-percent increase in runoff, and a 4.7°C warming to approximately a 50-percent increase. While an increase in annual runoff sounds promising for a region in which many areas suffer from water scarcity [START_REF] Bates | of the Intergovernmental Panel on Climate Change[END_REF][START_REF] Döll | Vulnerability to the impact of climate change on renewable groundwater resources: a global-scale assessment[END_REF]ESCAP 2011), it has to be taken into account that the changes are unevenly distributed across wet and dry seasons. In projections by [START_REF] Fung | Water availability in +2°C and +4°C worlds[END_REF], annual runoff increases in the wet season while further decreasing in the dry season -with the amplification increasing at higher levels of warming. This increase in seasonality implies severe flooding in high-flow seasons and aggravated water stress in dry months in the absence of large-scale infrastructure construction [START_REF] Fung | Water availability in +2°C and +4°C worlds[END_REF]World Bank 2012). For global warming of approximately 3°C above pre-industrial levels and the SRES A2 population scenario for 2080, [START_REF] Gerten | Global Water Availability and Requirements for Future Food Production[END_REF] project that it is very likely (>90 percent confidence) that per capita water availability in South Asia (except for Sri Lanka) will decrease by more than 10 percent. While the population level plays an important role in these estimates, there is a 10-30 percent likelihood that climate change alone is expected to decrease water availability by more than 10 percent in Pakistan. The likelihood of water scarcity driven by climate change alone is as high as >90 percent for Pakistan and Nepal and as high as 30-50 percent for India. In a scenario of 2°C warming by 2050, [START_REF] Rockström | Future water availability for global food production: The potential of green water for increasing resilience to global change[END_REF] project that food and water requirements in India would exceed the availability of green water (rainwater stored in the soil as soil moisture) by more than 150 percent, indicating that the country will be highly dependent on blue water (water from rivers and aquifers) for agricultural production. 
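The direction of these per-capita figures follows from simple accounting: even where total runoff increases, population growth can dominate. The sketch below illustrates the calculation behind such estimates; the input values are hypothetical placeholders, not the figures used by the cited studies:

```python
def per_capita_availability(renewable_water_km3, population_millions):
    """Annual renewable water per person, in m^3 per capita per year."""
    return renewable_water_km3 * 1e9 / (population_millions * 1e6)

# Hypothetical illustration: a modest increase in runoff combined with
# projected population growth still reduces per-capita availability.
today = per_capita_availability(1900.0, 1600.0)          # placeholder values
future = per_capita_availability(1900.0 * 1.05, 2200.0)  # +5% runoff, larger 2050s population
print(f"per-capita change: {100 * (future / today - 1):.0f}%")   # roughly -24% in this example
```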
As early as 2050, water availability in Pakistan and Nepal is projected to be too low for self-sufficiency in food production when taking into account a total availability of water below 1300m 3 per capita per year as a benchmark for the amount of water required for a balanced diet [START_REF] Rockström | Future water availability for global food production: The potential of green water for increasing resilience to global change[END_REF]). Impacts on Agriculture Agriculture contributes approximately 18 percent to South Asia's GDP (2011 data based on World Bank, 2013a); more than 50 percent of the population is employed in the sector (2010 data based on World Bank, 2013a) and directly dependent on it. Productivity growth in agriculture is an important driver of poverty reduction. In spite of the paramount importance of this sector, even explaining the observed yields in South Asia remains a non-trivial task (Auffhammer, Ramanathan, andVincent 2006, 2011;[START_REF] Kalra | Effect of increasing temperature on yield of some winter crops in northwest India[END_REF][START_REF] Lin | Reckoning wheat yield trends[END_REF][START_REF] Lobell | Extreme heat effects on wheat senescence in India[END_REF][START_REF] Pathak | Trends of climatic potential and on-farm yields of rice and wheat in the Indo-Gangetic Plains[END_REF]. Projecting agricultural output for the future is even more challenging: it could be expected that future improvements may occur due to technological changes, cultivar breeding and optimization, production efficiencies, and improved farm management practices. However, declining soil productivity, groundwater depletion [START_REF] Green | Beneath the surface of global change: Impacts of climate change on groundwater[END_REF], and declining water availability, as well as increased pest incidence and salinity, already threaten sustainability and food security in South Asia [START_REF] Wassmann | Chapter 3 Regional Vulnerability of Climate Change Impacts on Asian Rice Production and Scope for Adaptation[END_REF]. The effects of climate change have the potential to further significantly aggravate the situation; however, due to the complexity of the issue, projections remain difficult. Extreme Heat Effects Heat stress, which can be particularly damaging during some development stages and may occur more frequently with climate change, is not yet widely included in crop models and projections. Compared to calculations of potential yields without historic trends of temperature changes since the 1980s, rice and wheat yields have declined by approximately 8 percent for every 1°C increase in average growing-season temperatures [START_REF] Lobell | Climate trends and global crop production since 1980[END_REF]. If temperatures increase beyond the upper temperature for crop development (e.g., 25-31°C for rice and 20-25°C for wheat, depending on genotype), rapid decreases in the growth and productivity of crop yields could be expected, with greater temperature increases leading to greater production losses [START_REF] Wassmann | Chapter 3 Regional Vulnerability of Climate Change Impacts on Asian Rice Production and Scope for Adaptation[END_REF]. By introducing the response to heat stress within different crop models, Challinor, Wheeler, Garforth, Craufurd, and Kassam (2007) simulate significant yield decreases for rice (up to -21 percent under double CO2) and groundnut (up to -50 percent). 
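As a rough illustration of how the observed sensitivity quoted above (roughly 8 percent yield loss per 1°C of growing-season warming) scales with warming, it can be applied as a simple linear response. This is an extrapolation for illustration only, not the process-based crop modeling used in the cited studies, and it ignores temperature thresholds, CO2 fertilization and adaptation:

```python
def relative_yield_change(delta_t_growing_season, sensitivity_per_degC=-0.08):
    """Linear yield response: fractional change per degree of growing-season warming.

    Illustration only; beyond modest warming, threshold effects (e.g. the
    optimal temperature ranges for rice and wheat quoted above) dominate.
    """
    return sensitivity_per_degC * delta_t_growing_season

for dT in (1.0, 2.0, 4.0):
    print(f"+{dT:.0f}°C growing season -> {relative_yield_change(dT):+.0%} yield")
```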
Water Constraints
Agricultural productivity is highly dependent on the hydrological cycle and freshwater availability [START_REF] Jacoby | Distributional Implications of Climate Change in India[END_REF]. In turn, agriculture and the food demands of a growing population are expected to be the major drivers of water usage in the future [START_REF] De Fraiture | Satisfying future water demands for agriculture[END_REF][START_REF] Douglas | Climate change, flooding and food security in south Asia[END_REF]. At present, agriculture accounts for more than 91 percent of the total freshwater withdrawal in South Asia (including Afghanistan). Nepal (98 percent), Pakistan (94 percent), Bhutan (94 percent) and India (90 percent) have particularly high levels of water withdrawal through the agricultural sector (2011 data, World Bank, 2013a). [START_REF] Immerzeel | Climate change will affect the Asian water towers[END_REF] demonstrate how changes in water availability in the Indus, Ganges, and Brahmaputra rivers may impact food security. The authors estimate that with a temperature increase of 2-2.5°C compared to pre-industrial levels, reduced water availability for agricultural production may, by the 2050s, result in more than 63 million people no longer being able to meet their caloric demand through production in the river basins. Recent statistical analyses by [START_REF] Auffhammer | Climate change, the monsoon, and rice yield in India[END_REF] also confirm that changes in monsoon rainfall over India, with less frequent but more intense rainfall in the recent past, have contributed to reduced rice yields. This decrease in production is due both to direct drought impacts on yields and to the reduction of the planted areas for some water-demanding crops (e.g., rice) as farmers observe that the monsoon may arrive too late [START_REF] Gadgil | The Asian monsoon -agriculture and economy[END_REF]. South Asia, and especially India and Pakistan, are highly sensitive to decreases in groundwater recharge, a situation that is expected to become more critical with climate change [START_REF] Döll | Vulnerability to the impact of climate change on renewable groundwater resources: a global-scale assessment[END_REF][START_REF] Green | Beneath the surface of global change: Impacts of climate change on groundwater[END_REF]. The changing variability of the monsoon season poses a severe risk to agriculture because farming systems in South Asia are highly adapted to the local climate, particularly the monsoon. Observations indicate the agricultural sector's vulnerability to changes in monsoon precipitation: with a 19-percent decline in summer monsoon rainfall in 2002, Indian food grain production was reduced by 10-15 percent compared to the previous decadal average [START_REF] Mall | Impact of Climate Change on Indian Agriculture: A Review[END_REF]. Without adequate water storage facilities, the potential increase of peak monsoon river flow would not be usable for agricultural productivity; increased peak flow may also cause damage to farmland due to river flooding [START_REF] Gornall | Implications of climate change for agricultural productivity in the early twenty-first century[END_REF]. Observations of agricultural production during ENSO (El Niño-Southern Oscillation) events confirm strong responses to variations in the monsoon regime. ENSO events play a key role in determining agricultural production [START_REF] Iglesias | Climate change in Asia: A review of the vulnerability and adaptation of crop production[END_REF].
Several studies, using historical data on agricultural statistics and climate indices, have established significant correlations between summer monsoon rainfall anomalies, strongly driven by ENSO events, and crop production anomalies (e.g., [START_REF] Webster | Monsoons: Processes, predictability, and the prospects for prediction[END_REF]).
Drought
The droughts of 1987 and 2002-2003 affected more than 50 percent of the crop area in India [START_REF] Wassmann | Chapter 3 Regional Vulnerability of Climate Change Impacts on Asian Rice Production and Scope for Adaptation[END_REF]; in 2002, food grain production declined by 29 million tons compared to the previous year (UNISDR 2011). Local droughts in rainfed agricultural areas in northwest Bangladesh cause yield losses higher than those from flooding and submergence [START_REF] Wassmann | Chapter 3 Regional Vulnerability of Climate Change Impacts on Asian Rice Production and Scope for Adaptation[END_REF].
Salinization
Soil salinity has been hypothesized to be one possible reason for observed yield stagnations and decreases in the Indo-Gangetic Plain [START_REF] Ladha | How extensive are yield declines in long-term rice-wheat experiments in Asia?[END_REF]. Deltaic regions and wetlands are exposed to the risks of sea-level rise and increased inundation causing salinity intrusion into irrigation systems and groundwater resources. Also, higher temperatures would lead to excessive deposits of salt on the surface, further increasing the percentage of brackish groundwater (Wassmann, Jagadish, and Heuer 2009). However, similar to diminished groundwater availability, which is largely due to rates of extraction exceeding rates of recharge and is, in this sense, human-induced [START_REF] Bates | of the Intergovernmental Panel on Climate Change[END_REF], groundwater and soil salinization are also caused by the excessive use of groundwater in irrigated agriculture. Salinity stress through brackish groundwater and salt-affected soils reduces crop yields; climate change is expected to aggravate the situation (Wassmann, Jagadish, and Heuer 2009).
Flooding, Sea-level Rise and Tropical Cyclones
Flooding poses a particular risk to deltaic agricultural production. Even today, food shortages are a persistent problem in Bangladesh [START_REF] Douglas | Climate change, flooding and food security in south Asia[END_REF][START_REF] Wassmann | Chapter 3 Regional Vulnerability of Climate Change Impacts on Asian Rice Production and Scope for Adaptation[END_REF]. In this region, large amounts of productive land could be lost to sea-level rise, with 40-percent area losses projected in southern Bangladesh for a 65 cm rise by the 2080s [START_REF] Yu | Climate Change Risks and Food Security in Bangladesh[END_REF]. Tropical cyclones already lead to substantial damage to agricultural production, particularly in the Bay of Bengal region, yet very few assessments of the effects of climate change on agriculture in the region include estimates of the likely effects of increased tropical cyclone intensity.
Uncertain CO2 Fertilization Effect
Despite the different representations of some specific biophysical processes, simulations generally show that the positive fertilization effect of the increasing atmospheric CO2 concentration may counteract the negative impacts of increased temperature (e.g., [START_REF] Challinor | Crop yield reduction in the tropics under climate change: Processes and uncertainties[END_REF]).
Uncertainties associated with the representation or parameterization of the CO2 fertilization effect, however, lead to a large range of results across different crop models. For example, large parts of South Asia are projected to experience significant declines in crop yield without CO2 fertilization, while increases are projected when taking the potential CO2 fertilization effect into account [START_REF] Müller | Development and Climate Change Background Note -Climate Change Impacts on Agricultural Yield[END_REF]. However, controversy remains as to the strength of the effect, and there is considerable doubt that the full benefits can be obtained [START_REF] Müller | Development and Climate Change Background Note -Climate Change Impacts on Agricultural Yield[END_REF]. [START_REF] Nelson | The Costs of Agricultural Adaptation to Climate Change[END_REF] estimate the direct effects of climate change (changes in temperature and precipitation for rainfed crops and temperature increases for irrigated crops) on the production of different crops with and without the effect of CO2 fertilization under a global mean warming of about 1.8°C above pre-industrial levels by 2050. They find that South Asia is affected particularly hard by climate change, especially when the potential benefits of the CO2 fertilization effect are not included. By analyzing heat stress in Asian rice production for the period 1950-2000, Wassmann, Jagadish, and Heuer (2009) show that large areas in South Asia already exceed maximum average daytime temperatures of 33°C. [START_REF] Auffhammer | Integrated model shows that atmospheric brown clouds and greenhouse gases have reduced rice harvests in India[END_REF], in agreement with, e.g., [START_REF] Pathak | Trends of climatic potential and on-farm yields of rice and wheat in the Indo-Gangetic Plains[END_REF] and [START_REF] Kalra | Effect of increasing temperature on yield of some winter crops in northwest India[END_REF], show that increasing minimum temperatures caused more than half of the total observed yield decline over the past decade and before. Present crop models may underestimate by as much as 50 percent the yield loss from local warming of 2°C [START_REF] Lobell | Extreme heat effects on wheat senescence in India[END_REF]. Without climate change, overall crop production is projected to increase significantly (by about 60 percent by 2050), although, per capita, crop production will likely not quite keep pace with projected population growth. Under climate change, however, a significant (about one-third) decline in per capita South Asian crop production is projected if the CO2 fertilization effect does not persist and increase above present levels. Per capita calorie availability is projected to decline under climate change, while it rises in the scenario without climate change. The same analysis expects the proportion of malnourished children to be substantially reduced by the 2050s without climate change.
However, climate change is likely to partly offset this reduction, as the number of malnourished children is expected to increase by 7 million compared to the case without climate change [START_REF] Nelson | The Costs of Agricultural Adaptation to Climate Change[END_REF].
Projected Changes in Food Production
A meta-analysis of the impact of temperature increase on crop yields in the South Asia region from 9 different studies is presented in Online Resource 2 (originally prepared for the World Bank Report, 2013b). Particular groups in society, such as the poor, are most vulnerable to the threats posed by climate change. Climate-change impacts are projected to have immediate as well as long-term consequences for livelihoods, especially for the poorest households, as well as for poverty reduction policies and efforts [START_REF] Hallegatte | Climate Change and Poverty -An Analytical Framework[END_REF].
Hunger and Poverty
Per capita calorie availability and child malnutrition, which are determinants of long-term growth and health, may be severely affected by climate change and its effect on the agricultural sector [START_REF] Nelson | The Costs of Agricultural Adaptation to Climate Change[END_REF]. Furthermore, the uneven distribution of the impacts of climate change is expected to have adverse effects on poverty reduction. [START_REF] Hertel | The poverty implications of climate-induced crop yield changes by 2030[END_REF] show that, by 2030, rising food prices in response to productivity shocks would have the strongest adverse effects on a selected number of social strata. In a low-productivity scenario, described as a world with rapid temperature increases and crops highly sensitive to warming, higher earnings result in declining poverty rates for self-employed agricultural households; this is due to price increases following production shocks. Non-agricultural urban households, in turn, are expected to be most affected by food price increases. As a result, the poverty rate among this subpopulation rises by up to a third in Bangladesh in this scenario. Other means by which climate change can affect poor households, beyond the consequences of increasing food prices and decreased calorie availability, still need to be investigated [START_REF] Hallegatte | Climate Change and Poverty -An Analytical Framework[END_REF]. These channels could, for example, include the effects on assets and physical capital (e.g., a tropical cyclone destroying living premises), the effects on productivity (e.g., high temperature reducing labor productivity), and opportunities (e.g., the overall effect of climate variability and change on economic growth) [START_REF] Hallegatte | Climate Change and Poverty -An Analytical Framework[END_REF].
Health
Childhood Stunting: The negative effects of climate change on food production may have direct implications for malnutrition and undernutrition, increasing the risk of both poor health and rising death rates [START_REF] Lloyd | [END_REF]. At present, more than 31 percent of children under the age of five in South Asia are underweight (2011 data based on World Bank 2013a). Using estimates of changes in calorie availability attributable to climate change, and particularly to its impact on crop production, Lloyd et al. (2011) estimate that climate change may lead to a 62-percent increase in severe childhood stunting and a 29-percent increase in moderate stunting in South Asia by 2050 for a warming of approximately 2°C above pre-industrial levels.
As the model is based on the assumption that within-country food distribution remains at baseline levels, it would appear that better distribution could to some extent mitigate the projected increase in childhood stunting.
Diarrheal and Vector-Borne Diseases: Diarrhea is at present a major cause of child mortality in Asia and the Pacific, with 13.1 percent of all deaths under age five in the region caused by diarrhea (2008 data from ESCAP 2011). [START_REF] Pandey | Costs of Adapting to Climate Change for Human Health in Developing Countries[END_REF] investigates the impact of climate change on the incidence of diarrheal disease in South Asia and finds a declining trend between 2010 and 2050. However, the author estimates a climate-change-induced increase in the relative risk of disease from the baseline of 6.2 percent by 2030 and of 1.1 percent by 2050, the latter being lower than the 2010 increase of 4.1 percent. Across the world, climate-change-induced incidence risk increases by an average of 3 percent in 2030 and 2 percent in 2050 [START_REF] Pandey | Costs of Adapting to Climate Change for Human Health in Developing Countries[END_REF]. Noteworthy in this context is the finding by [START_REF] Pandey | Costs of Adapting to Climate Change for Human Health in Developing Countries[END_REF] that, in the absence of climate change, cases of diarrheal disease in South Asia (including Afghanistan) would decrease earlier, as the expected increase in income would allow South Asian countries to invest in their health services. Climate change is expected to affect the distribution of malaria in the region, causing it to spread into areas at the margins of the current distribution where colder climates had previously limited transmission of the vector-borne disease [START_REF] Ebi | Climate Change-related Health Impacts in the Hindu Kush-Himalayas[END_REF]. [START_REF] Pandey | Costs of Adapting to Climate Change for Human Health in Developing Countries[END_REF] finds that the relative risk of malaria in South Asia is projected to increase by 5 percent in 2030 (174,000 additional incidents) and 4.3 percent in 2050 (116,000 additional incidents) in the model with higher precipitation (NCAR). The drier scenario (CSIRO) does not project an increase in risk; this may be because calculations of the relative risk of malaria consider the geographical distribution and not the extended duration of the malarial transmission season [START_REF] Pandey | Costs of Adapting to Climate Change for Human Health in Developing Countries[END_REF]. As in the case of diarrheal disease, malaria cases are projected to decrease significantly in the absence of climate change, from 4 million cases in 2030 to 3 million cases in 2050 [START_REF] Pandey | Costs of Adapting to Climate Change for Human Health in Developing Countries[END_REF]. In a global study on the distribution of malaria, [START_REF] Béguin | The opposing effects of climate change and socio-economic development on the global distribution of malaria[END_REF] find that GDP growth per capita would have a stronger influence on the distribution of the disease than climate change, although the effects of climate change are still significant. Salinity intrusion into freshwater resources constitutes another health risk.
About 20 million people in the coastal areas of Bangladesh are already exposed to salinity in their drinking water [START_REF] Khan | Drinking water salinity and maternal health in coastal Bangladesh: Implications of climate change[END_REF][START_REF] Khan | Climate Change, Sea-Level Rise, & Health Impacts in Bangladesh[END_REF]. With rising sea levels and more intense cyclones and storm surges, the contamination of groundwater and surface water is expected to intensify. Contamination of drinking water by saltwater intrusion may cause an increasing number of cases of diarrhea [START_REF] Khan | Drinking water salinity and maternal health in coastal Bangladesh: Implications of climate change[END_REF][START_REF] Khan | Climate Change, Sea-Level Rise, & Health Impacts in Bangladesh[END_REF]. Cholera outbreaks may also become more frequent, as the bacterium that causes cholera, Vibrio cholerae, survives longer in saline water [START_REF] Khan | Drinking water salinity and maternal health in coastal Bangladesh: Implications of climate change[END_REF][START_REF] Khan | Climate Change, Sea-Level Rise, & Health Impacts in Bangladesh[END_REF].
Heat Stress and Heat-Related Mortality: In South Asia, unusually high temperatures pose severe threats to health. Heat exhaustion can lead to heatstroke and, in severe cases, death. In Andhra Pradesh, India, for example, heat waves caused 3,000 deaths in 2003 (Ministry of Environment and Forests 2012). In recent years, the death toll from heat waves has increased continuously in the Indian states of Rajasthan, Gujarat, Bihar, and Punjab [START_REF] Lal | Implications of climate change in sustained agricultural productivity in South Asia[END_REF]. In their global review, [START_REF] Hajat | Heat-related mortality: a review and exploration of heterogeneity[END_REF] find that increasing population density, lower city gross domestic product, and an increasing proportion of people aged 65 or older were all independently linked to increased rates of heat-related mortality. Moreover, air pollution, which is a considerable problem in South Asia, interacts with high temperatures and heat waves to increase fatalities. A study by [START_REF] Takahashi | Assessing Mortality Risk from Heat Stress due to Global Warming[END_REF] further found that most South Asian countries are likely to experience a very substantial increase in excess mortality due to heat stress by the end of the 21st century, based on a global mean warming for the 2090s of about 3.3°C above pre-industrial levels under the SRES A1B scenario and an estimated increase in the daily maximum temperature over South Asia in the range of 2-3°C. [START_REF] Takahashi | Assessing Mortality Risk from Heat Stress due to Global Warming[END_REF] assume constant population densities. [START_REF] Sillmann | Climate extremes indices in the CMIP5 multimodel ensemble: Part 2. Future climate projections[END_REF] project, based on the CMIP5 models, an increase in the annual average maximum daily temperature in the summer months of approximately 4-6°C by 2100 for the RCP8.5 scenario.
Migration
The potential for migration, including permanent relocation as well as short-term or seasonal migration, is expected to be heightened by climate change, particularly due to sea-level rise and erosion.
There is a lack of consensus on the estimates of future migration patterns resulting from climate-change-related risks [START_REF] Gemenne | Why the numbers don't add up: A review of estimates and predictions of people displaced by environmental changes[END_REF][START_REF] Bierbaum | World development report 2010: development and climate change[END_REF]. Inland migration of households has already been observed in Bangladesh, where exposed coastal areas are characterized by lower population growth rates than the rest of the country (World Bank 2010). A sea-level rise of one meter is expected to affect 13 million people in Bangladesh [START_REF] Huq | [END_REF](World Bank 2010). However, this would not necessarily mean that all people affected would be permanently displaced [START_REF] Gemenne | Why the numbers don't add up: A review of estimates and predictions of people displaced by environmental changes[END_REF]. Impacts on agriculture may cause impoverishment of rural populations, who in turn could either be more likely to migrate in order to diversify their income, or more likely to stay if resources for resettlement are depleted. [START_REF] Brecht | Sea-Level Rise and Storm Surges: High Stakes for a Small Number of Developing Countries[END_REF] estimate that in a 4°C world, a possible sea-level rise of more than one meter could lead to storm surges with a 15% wave height increase, putting 20.1% of India's population under risk of exposure. [START_REF] Hugo | Future demographic change and its interactions with migration and climate change[END_REF] identifies South Asia as a hotspot for both population growth and future international migration as a consequence of demographic changes, poverty, and the impacts of climate change. As migration is a multicausal phenomenon, the propensity for large-scale displacement depends on a variety of factors and the way they will interact. These include future regional trends of population growth and economic development in rural areas, as well as the severity of impacts and the scale of adaptive measures. More transdisciplinary research on these complex interactions is needed. In the context of the declining quality and quantity of water supplies in the Indus and Ganges-Brahmaputra-Meghna Basins, the increasing demand for water is already causing tensions over water sharing [START_REF] Stefano | Climate change and the institutional resilience of international river basins[END_REF][START_REF] Uprety | Legal aspects of sharing and management of transboundary waters in South Asia: preventing conflicts and promoting cooperation[END_REF]. [START_REF] Uprety | Legal aspects of sharing and management of transboundary waters in South Asia: preventing conflicts and promoting cooperation[END_REF] indicate that sharing and managing water resources in South Asia have become more complex due to the high vulnerability of the region to climate change. Based on the projections for water and food security presented above, it is likely that the risk of conflict over water resources may increase with the severity of the impacts. The estimated reduction of per capita water availability by 10 percent in a 3°C scenario by 2080 [START_REF] Gerten | Global Water Availability and Requirements for Future Food Production[END_REF] could mean that reductions for low-income households may be significantly higher than 10 percent, whereas more economically resilient communities could pay higher prices for additional water supply and thereby sustain their water usage.
Conclusion and Implications for Development
Global climate change will manifest itself in various ways in the South Asian region, among them heat extremes, monsoon variability, changes in river flow, tropical cyclones, and sea-level rise. The projected impacts are considerable in a 2°C world and significantly higher in a 4°C world, pointing to the need to avoid the latter in particular (World Bank 2012). Many of the climate change impacts in the region, which appear quite severe even with relatively modest warming of 1.5-2°C, pose significant challenges to development. The majority of the climatic risk factors are ultimately related to changes in the hydrological regime; these would affect populations via changes to precipitation patterns and river flow. One of the most immediate areas of impact resulting from changes in the hydrological regime is agriculture, which is highly dependent on the regularity of monsoonal rainfall. However, agriculture in the region is also sensitive to temperature increases, for which projections can be made with higher confidence than for changes in precipitation and hydrology. Should the trend of negative effects on crop yields persist, substantial yield reductions can be expected in the near and medium term. The poor in South Asia are particularly vulnerable to the impacts of climate change. Disruptions in agriculture would undermine livelihoods and cause food price shocks. The risks to health associated with inadequate nutrition or unsafe drinking water are significant: childhood stunting, transmission of water-borne diseases, and disorders associated with excess salinity. Other health threats are associated with flooding, heat waves, or tropical cyclones. Population displacement is likely to increase in case of more frequent and severe flooding and may also be a coping strategy for other impacts on livelihoods. Bangladesh emerges as an impact hotspot, with increasing and compounding challenges occurring in the same timeframe from extreme river floods, more intense tropical cyclones, rising sea levels, extraordinarily high temperatures, and declining crop yields. Increased river flooding combined with tropical cyclone surges poses a high risk of inundation in areas with the largest shares of poor populations. Moreover, coastal agglomerations such as the megacities Kolkata and Mumbai on the shores of South Asia are highly vulnerable to potentially cascading risks resulting from a combination of climatic changes such as sea-level rise, increased temperatures, increasingly intense tropical cyclones, and riverine flooding. Major adaptation measures would be needed to cope with the projected impacts of climate change.
Figure 1: Warming levels are relative to pre-industrial temperatures. The impacts shown here are a subset of those summarized in Table A of Online Resource 1. The arrows indicate the range of warming levels assessed in the underlying studies, but do not imply any gradation of risk unless noted explicitly. In addition, observed impacts or impacts occurring at lower or higher levels of warming that are not covered by the key studies highlighted here are not presented. Adaptation measures are not assessed here, but they can be crucial to alleviating the impacts of climate change. The layout of the figure is adapted from [START_REF] Parry | Copenhagen number crunch[END_REF]. The superscript letters indicate the relevant references for each impact. If there is no letter, the results are based on additional analyses for this review.
1 RVD was financially supported by the German Federal Ministry for Education and Research (BMBF) via the Young Investigator's Group CoSy-CC 2 (grant no. 01LN1306A). For the calculation of the warming levels referred to in Figure 1 and Online Resource 1, a special tool developed and programmed at PIK has been used: http://54.72.92.
Impacts in Bangladesh
While the risks for South Asia as a whole emerge as quite serious, the risks and impacts for Bangladesh are arguably amongst the highest in the region. [START_REF] Yu | Climate Change Risks and Food Security in Bangladesh[END_REF] conducted a comprehensive assessment of future crop performance and of the consequences of production losses for Bangladesh. Taking into account the impact of changes in temperature and precipitation, the uncertain benefits of CO2 fertilization, mean changes in floods and inundation, and rising sea levels, the authors estimate that climate change will cause a reduction of about 2-6.5 percent in annual rice production from 2005-50, depending on the scenario (World Bank 2010; [START_REF] Yu | Climate Change Risks and Food Security in Bangladesh[END_REF]).
Interactions of Physical and Biophysical Impacts with Livelihoods
The human impacts of climate change will be determined by the socioeconomic context in which they occur. The following sections outline some of these expected implications, drawing attention to how particular groups in society, such as the poor, are most vulnerable to the threats posed by climate change.
Conflict
Although there is likewise a lack of consensus on the causal connection between climate change and violent conflicts, there is evidence that impacts like water and food scarcity may increase the likelihood of conflict [START_REF] Stefano | Climate change and the institutional resilience of international river basins[END_REF][START_REF] Gautam | Climate Change and Conflict in South Asia[END_REF]. A reduction in water availability from rivers could cause conflict over access to this critical resource and thereby further threaten the water security of South Asia [START_REF] Gautam | Climate Change and Conflict in South Asia[END_REF].
01773799
en
[ "info.info-dc" ]
2024/03/05 22:32:18
2018
https://inria.hal.science/hal-01773799/file/ICDCS_2018_paper_732.pdf
Ovidiu-Cristian Marcu, Alexandru Costan (email: [email protected]), Gabriel Antoniu (email: [email protected]), María S. Pérez-Hernández, Bogdan Nicolae (email: [email protected]), Radu Tudoran (email: [email protected]), Stefano Bortoli (email: [email protected])
KerA: Scalable Data Ingestion for Stream Processing
Keywords: Stream processing, dynamic partitioning, ingestion
I. INTRODUCTION
Big Data real-time stream processing typically relies on message broker solutions that decouple data sources from applications. This translates into a three-stage pipeline, described in Figure 1. First, in the production phase, event sources (e.g., smart devices, sensors, etc.) continuously generate streams of records. Second, in the ingestion phase, these records are acquired, partitioned and pre-processed to facilitate consumption. Finally, in the processing phase, Big Data engines consume the stream records using a pull-based model. Since users are interested in obtaining results as soon as possible, there is a need to minimize the end-to-end latency of the three-stage pipeline. This is a non-trivial challenge when records arrive at a fast rate and there is a simultaneous need to support high throughput. To this purpose, Big Data engines are typically designed to scale to a large number of simultaneous consumers, which enables processing of millions of records per second [START_REF] Venkataraman | Drizzle: Fast and Adaptable Stream Processing at Scale[END_REF], [START_REF] Miao | Streambox: Modern Stream Processing on a Multicore Machine[END_REF]. Thus, the weak link of the three-stage pipeline is the ingestion phase: it needs to acquire records with a high throughput from the producers, serve the consumers with a high throughput, scale to a large number of producers and consumers, and minimize both the write latency of the producers and the read latency of the consumers in order to facilitate low end-to-end latency. Achieving all these objectives simultaneously is challenging, which is why Big Data applications typically rely on specialized ingestion runtimes to implement the ingestion phase. One such popular runtime is Apache Kafka [START_REF]Apache Kafka[END_REF]. It quickly rose as the de-facto industry standard for record brokering in end-to-end streaming pipelines. It follows a simple design that allows users to manipulate streams of records similarly to a message queue. More recent ingestion systems (e.g., Apache Pulsar [START_REF]Apache Pulsar[END_REF], DistributedLog [START_REF]Apache DistributedLog[END_REF]) provide additional features such as durability, geo-replication or strong consistency, but leave little room to take advantage of trade-offs between strong consistency and high performance.
State-of-the-art ingestion systems typically achieve scalability using static partitioning: each stream is broken into a fixed set of partitions where the producers write the records according to a partitioning strategy, whereas only one consumer is allowed to access each partition. This eliminates the complexity of dealing with fine-grain synchronization, at the expense of costly over-provisioning (i.e., allocating a large number of partitions that are not needed in the normal case in order to cover the worst case, when the stream is used by a high number of consumers). Furthermore, each stream record is associated at append time with an offset that enables efficient random access. However, in a typical streaming scenario, random access is not needed, as the records are processed in sequential order. Therefore, associating an offset with each single record introduces significant performance and space overhead. These design choices limit the ability of the ingestion phase to deliver high throughput and low latency in a scalable fashion. This paper introduces KerA, a novel ingestion system for scalable stream processing that addresses the aforementioned limitations of the state of the art. Specifically, it introduces a dynamic partitioning scheme that elastically adapts to the number of producers and consumers by grouping records into fixed-sized segments at fine granularity. Furthermore, it relies on a lightweight metadata management scheme that assigns minimal information to each segment rather than to each record, which greatly reduces the performance and space overhead of offset management, therefore optimizing sequential access to the records. We summarize our contributions as follows: (1) we identify and study the key aspects that influence the performance and scalability of data processing in the ingestion phase (Section II); (2) we introduce a set of design principles that optimize stream partitioning and record access (Section III); (3) we introduce the KerA prototype, which illustrates how to implement the design principles in a real-life solution (Section IV); (4) we demonstrate the benefits of KerA experimentally, using state-of-the-art ingestion systems as a baseline (Section V).
II. BACKGROUND: STREAM INGESTION
A stream is a very large, unbounded collection of records that can be produced and consumed in parallel by multiple producers and consumers. The records are typically buffered on multiple broker nodes, which are responsible for controlling the flow between the producers and consumers so as to enable high throughput, low latency, scalability and reliability (i.e., ensuring records do not get lost due to failures). To achieve scalability, stream records are logically divided into many partitions, each managed by one broker.
A. Static partitioning
State-of-the-art stream ingestion systems (e.g., [START_REF]Apache Kafka[END_REF], [START_REF]Apache Pulsar[END_REF], [START_REF]Apache DistributedLog[END_REF]) employ a static partitioning scheme where the stream is split among a fixed number of partitions, each of which is an unbounded, ordered, immutable sequence of records that are continuously appended. Each broker is responsible for one or multiple partitions. Producers accumulate records in fixed-sized batches, each of which is appended to one partition. To reduce communication overhead, the producers group together, in a single request, multiple batches that correspond to the partitions of a single broker. Each consumer is assigned to one or more partitions. Each partition is assigned to a single consumer.
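To make the static scheme concrete, the following minimal sketch shows how a producer side might hash records to a fixed set of partitions, accumulate per-partition batches, and group the batches destined for one broker into a single request. It is illustrative Java only: the class, method and parameter names are invented for this example and do not correspond to the actual API of Kafka or any other broker.

```java
import java.util.*;

// Illustrative sketch of static partitioning on the producer side.
// All names are hypothetical; real brokers (e.g., Kafka) expose different APIs.
public class StaticPartitioningSketch {
    static final int NUM_PARTITIONS = 6;   // fixed at stream creation time
    static final int NUM_BROKERS = 3;      // each broker owns a subset of the partitions

    // Hash-based partitioning strategy: keyed records always land in the same partition.
    static int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), NUM_PARTITIONS);
    }

    // Static mapping of partitions to brokers (round-robin).
    static int brokerFor(int partition) {
        return partition % NUM_BROKERS;
    }

    public static void main(String[] args) {
        // One batch per partition; batches for partitions of the same broker go into one request.
        Map<Integer, List<String>> batches = new HashMap<>();
        for (int i = 0; i < 20; i++) {
            String key = "sensor-" + (i % 7);
            String record = key + ":reading=" + i;
            batches.computeIfAbsent(partitionFor(key), p -> new ArrayList<>()).add(record);
        }
        // Group per-partition batches into per-broker requests to reduce communication overhead.
        Map<Integer, Map<Integer, List<String>>> requests = new HashMap<>();
        batches.forEach((partition, batch) ->
            requests.computeIfAbsent(brokerFor(partition), b -> new HashMap<>()).put(partition, batch));
        requests.forEach((broker, perPartition) ->
            System.out.println("request to broker " + broker + " -> " + perPartition));
    }
}
```

The key point is that the number of partitions and the partition-to-broker mapping are fixed up front, which is precisely what forces the over-provisioning discussed next.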
This static scheme eliminates the need for complex synchronization mechanisms, but has an important drawback: the application needs a priori knowledge of the optimal number of partitions. In real-life situations it is difficult to know this number a priori, both because it depends on a large number of factors (number of brokers, number of consumers and producers, network size, estimated ingestion and processing throughput targets, etc.) and because the producers and consumers can exhibit dynamic behavior that generates large variance in the optimal number of partitions needed at different moments during runtime. Therefore, users tend to over-provision the number of partitions to cover the worst-case scenario where a large number of producers and consumers need to access the records simultaneously. However, if the worst-case scenario is not the norm but an exception, this can lead to significant unnecessary overhead. Furthermore, a fixed number of partitions can also become a source of imbalance: since each partition is assigned to a single consumer, one partition may accumulate or release records faster than the other partitions if it is assigned to a consumer that is slower or faster than the other consumers. For instance, in Kafka, a stream is created with a fixed number of partitions that are managed by Kafka's brokers, as depicted in Figure 2. Each partition is represented by an index file for offset positioning and a set of segment files, initially one, for holding stream records. Kafka leverages the operating system cache to serve a partition's data to its clients. Due to this design it is not advised to collocate streaming applications on the same Kafka nodes, which prevents leveraging data locality optimizations [START_REF] Németh | DAL: A Locality-Optimizing Distributed Shared Memory System[END_REF].
B. Offset-based record access
The brokers assign to each record of a partition a monotonically increasing identifier called the partition offset, allowing applications to get random access within partitions by specifying the offset. The rationale for providing random access (despite the fact that streaming applications normally access the records in sequential order) is that it enables failure recovery. Specifically, a consumer that failed can go back to a previous checkpoint and replay the records starting from the last offset at which its state was checkpointed. Furthermore, using offsets when accessing records enables the broker to remain stateless with respect to the consumers. However, support for efficient random access is not free: assigning an offset to each record at such fine granularity degrades the access performance and occupies more memory. Furthermore, since the records are requested in batches, each batch will be larger due to the offsets, which generates additional network overhead.
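As a rough illustration of the per-record offset bookkeeping described above, the following sketch (illustrative Java with hypothetical names, not Kafka's internal code) appends a batch to a partition while assigning a monotonically increasing offset to every record and maintaining a small offset index. This is what makes random access by offset possible, at the cost of extra work and memory on the common sequential path.

```java
import java.util.*;

// Illustrative sketch of per-record offset assignment in a statically partitioned broker.
// Hypothetical names; real systems persist segments on disk and index them differently.
public class OffsetIndexSketch {
    static class Partition {
        long nextOffset = 0;                             // monotonically increasing partition offset
        final List<String> segment = new ArrayList<>();  // simplified: a single in-memory segment
        final TreeMap<Long, Integer> index = new TreeMap<>(); // offset -> position in segment

        // Appending a batch touches every record to assign its offset and index entry.
        long appendBatch(List<String> batch) {
            for (String record : batch) {
                index.put(nextOffset, segment.size());
                segment.add(record);
                nextOffset++;
            }
            return nextOffset - 1; // offset of the last appended record
        }

        // Random access: locate a record by its offset.
        String read(long offset) {
            Integer pos = index.get(offset);
            return pos == null ? null : segment.get(pos);
        }
    }

    public static void main(String[] args) {
        Partition p = new Partition();
        long last = p.appendBatch(Arrays.asList("r0", "r1", "r2", "r3"));
        System.out.println("last offset = " + last + ", record at offset 2 = " + p.read(2));
    }
}
```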
III. DESIGN PRINCIPLES FOR STREAM INGESTION
In order to address the issues detailed in the previous section, we introduce a set of design principles for efficient stream ingestion and scalable processing.
a) Dynamic partitioning using semantic grouping and sub-partitions: In a streaming application, users need to be able to control partitioning at the highest level in order to define how records can be grouped together in a meaningful way. Therefore, it is not possible to eliminate partitioning altogether (e.g., by assigning individual records directly to consumers). However, we argue that users should not be concerned with performance issues when designing the partitioning strategy, but rather with the semantics of the grouping. Since state-of-the-art approaches assign a single producer and consumer to each partition, users need to be aware of both semantics and performance issues when using static partitioning. Therefore, we propose a dynamic partitioning scheme where users fix the high-level partitioning criteria from the semantic perspective, while the ingestion system is responsible for making each partition elastic by allowing multiple producers and consumers to access it simultaneously. To this end, we propose to split each partition into sub-partitions, each of which is independently managed and attached to a potentially different producer and consumer.
b) Lightweight offset indexing optimized for sequential record access: Since random access to the records is not the norm but an exception, we argue that ingestion systems should primarily optimize sequential access to records at the expense of random access. To this end, we propose a lightweight offset indexing scheme that assigns offsets at coarse granularity, at sub-partition level, rather than at fine granularity, at record level. Additionally, this offset keeps track (on the client side) of the last accessed record's physical position within the sub-partition, which enables the consumer to ask for the next records. Moreover, random access can be easily achieved when needed by finding the sub-partition that covers the offset of the record and then seeking into the sub-partition forward or backward as needed.
IV. KERA: OVERVIEW AND ARCHITECTURE
In this section we introduce KerA, a prototype stream ingestion system that illustrates the design principles introduced in the previous section.
A. Partitioning model
KerA implements dynamic partitioning based on the concept of streamlet, which corresponds to the semantic high-level partition that groups records together. Each stream is therefore composed of a fixed number of streamlets. In turn, each streamlet is split into groups, which correspond to the sub-partitions assigned to a single producer and consumer. A streamlet can have an arbitrary number of groups, each of which can grow up to a maximum predefined size. To facilitate the management of groups and offsets in an efficient fashion, each group is further split into fixed-sized segments. The maximum size of a group is a multiple P ≥ 1 of the segment size. To control the level of parallelism allowed on each broker, only Q ≥ 1 groups can be active at a given moment. Elasticity is achieved by assigning an initial number of brokers N ≥ 1 to hold the M streamlets, with M ≥ N. As more producers and consumers access the streamlets, more brokers can be added, up to M. In order to ensure ordering semantics, each streamlet dynamically creates groups and segments that have unique, monotonically increasing identifiers.
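The sketch below (illustrative Java; the type and field names are ours, not KerA's actual code) captures the stream / streamlet / group / segment hierarchy and the two parameters introduced above: a group holds up to P segments, and at most Q groups of a streamlet are active at any moment.

```java
import java.util.*;

// Illustrative data model for KerA-style dynamic partitioning (hypothetical names).
public class StreamletModelSketch {
    static final int P = 4;  // segments per group (group size = P * segment size)
    static final int Q = 2;  // active groups per streamlet

    static class Segment {
        final long id;
        final List<byte[]> records = new ArrayList<>(); // fixed-size buffer in the real system
        Segment(long id) { this.id = id; }
    }

    static class Group {              // sub-partition: consumed by exactly one consumer
        final long id;
        final List<Segment> segments = new ArrayList<>();
        boolean closed = false;       // closed once P segments are filled (or on timeout)
        Group(long id) { this.id = id; }
    }

    static class Streamlet {          // semantic high-level partition, fixed per stream
        final int id;
        long nextGroupId = 0;
        final List<Group> groups = new ArrayList<>();   // monotonically increasing identifiers
        final Group[] activeGroups = new Group[Q];      // at most Q groups open for appends
        Streamlet(int id) { this.id = id; }
        Group activate(int slot) {
            Group g = new Group(nextGroupId++);
            groups.add(g);
            activeGroups[slot] = g;
            return g;
        }
    }

    public static void main(String[] args) {
        Streamlet s = new Streamlet(0);
        Group g = s.activate(0);
        g.segments.add(new Segment(0));
        System.out.println("streamlet " + s.id + " group " + g.id + " segments " + g.segments.size());
    }
}
```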
Brokers expose this group and segment metadata through RPCs to consumers, which build an application offset defined as [streamId, streamletId, groupId, segmentId, position], based on which they issue RPCs to pull data. The position is the physical offset at which a record can be found in a segment. The consumer initializes it to 0 (the broker then iterates from the first record available in that segment), and the broker responds with the position of the last record returned for each request, so the consumer can update its offset for the next request. Using this dynamic approach (as opposed to the static approach based on explicit offsets per partition), clients have to query brokers to discover groups; in this way we implement lightweight offset indexing optimized for sequential record access. Stream records are appended in order to the segments of a group, without associating an offset to each record, which reduces the storage and processing overhead. Each consumer exclusively processes one group of segments. Once the segments of a group are filled (the number of segments per group is configurable), a new group is created and the old group is closed (i.e., it no longer accepts appends). A group can also be closed after a timeout if it was not appended to within this time.
B. Favoring parallelism: consumer and producer protocols
Producers only need to know about streamlets when interacting with KerA. The input batch is always ingested into the active group computed deterministically on brokers based on the producer identifier and the parameter Q of the given streamlet (each producer request has a header with the producer identifier, and each batch is tagged with the streamlet id). Producers writing to the same streamlet synchronize using a lock on the streamlet in order to obtain the active group corresponding to one of the Q entries, selected based on their producer identifier. The lock is then released and a second-level lock is used to synchronize producers accessing the same active group. Thus, two producers appending to the same streamlet, but to different groups, may proceed in parallel for data ingestion. In contrast, in Kafka producers writing to the same partition block each other, with no opportunity for parallelism. Consumers first issue RPCs to brokers in order to discover streamlets' new groups and their segments. Only after the application offset is defined can consumers issue RPCs to pull data from a group's segments. Initially each consumer is associated (non-exclusively) with one or many streamlets from which to pull data. Consumers process the groups of a streamlet in the order of their identifiers, pulling data from segments also in the order of their respective identifiers. Brokers maintain for each streamlet the last group given to consumers, identified by their consumer group id (i.e., each consumer request header contains a unique application id). A group is configured with a fixed number of segments to allow fine-grained consumption with many consumers per streamlet, in order to better load balance groups to consumers. As such, each consumer has a fair access chance since the group is limited in size by the segment size and the number of segments. This approach also favors parallelism. Indeed, in KerA a consumer pulls data from one group of a streamlet exclusively, which means that multiple consumers can read in parallel from different groups of the same streamlet. In Kafka, a consumer pulls data from one partition exclusively.
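A possible rendering of these protocols is sketched below (illustrative Java, not KerA's implementation; the names and the exact selection rule, here producer id modulo Q, are assumptions made for the example). A streamlet-level lock is held only long enough to pick the active group for a producer, a group-level lock serializes appends to that group, and the consumer-side application offset is a plain tuple updated locally after every pull.

```java
import java.util.*;
import java.util.concurrent.locks.*;

// Illustrative sketch of KerA-style producer/consumer protocols (hypothetical code).
public class ProtocolSketch {
    static final int Q = 2; // active groups per streamlet

    static class Group {
        final long id;
        final Lock appendLock = new ReentrantLock(); // second-level lock: serializes appends to one group
        final List<byte[]> records = new ArrayList<>();
        Group(long id) { this.id = id; }
    }

    static class Streamlet {
        final Lock selectLock = new ReentrantLock(); // first-level lock: only to pick the active group
        final Group[] active = new Group[Q];
        long nextGroupId = 0;
        Streamlet() { for (int i = 0; i < Q; i++) active[i] = new Group(nextGroupId++); }

        // Broker-side append path for one producer batch.
        void append(int producerId, List<byte[]> batch) {
            Group g;
            selectLock.lock();
            try {
                g = active[Math.floorMod(producerId, Q)]; // assumed selection rule (producer id mod Q)
            } finally {
                selectLock.unlock();
            }
            g.appendLock.lock(); // producers mapped to different groups append in parallel
            try {
                g.records.addAll(batch);
            } finally {
                g.appendLock.unlock();
            }
        }
    }

    // Consumer-side application offset: updated locally after every pull.
    static class AppOffset {
        int streamId, streamletId;
        long groupId, segmentId;
        long position; // physical position of the next record to read; starts at 0
    }

    public static void main(String[] args) {
        Streamlet s = new Streamlet();
        s.append(0, Arrays.asList("a".getBytes(), "b".getBytes()));
        s.append(1, Arrays.asList("c".getBytes()));
        System.out.println("group0=" + s.active[0].records.size() + " group1=" + s.active[1].records.size());
    }
}
```

The design choice illustrated here is that contention is split in two: producers hashed to different groups never compete on the append path, whereas in a statically partitioned system all producers of a partition serialize on it.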
C. Architecture and implementation
KerA's architecture is similar to Kafka's (Figure 3): a single layer of brokers (nodes) serves producers and consumers. However, in KerA brokers are used to discover stream partitions. KerA builds atop RAMCloud [START_REF] Ousterhout | The RAMCloud Storage System[END_REF] to leverage its network abstraction, which enables the use of other network transports (e.g., UDP, DPDK, Infiniband), whereas Kafka only supports TCP. Moreover, this allows KerA to benefit from a set of design choices like polling and request dispatching [START_REF] Kulkarni | Beyond Simple Request Processing with RAMCloud[END_REF] that help boost performance (kernel bypass and zero-copy networking are possible with DPDK and Infiniband). Each broker has an ingestion component offering pub/sub interfaces to stream clients and an optional backup component that can store stream replicas. This allows for the separation of nodes serving clients from nodes serving as backups. Another important difference compared to Kafka is that brokers directly manage stream data instead of leveraging the kernel virtual cache. KerA's segments are buffers of data controlled by the stream storage. Since each segment contains the [stream, streamlet, group] metadata, a streamlet's groups can be durably stored independently on multiple disks, while in Kafka a partition's segments are stored on a single disk. To support durability and replication, and to implement fast crash recovery techniques, it is possible to rely on RAMCloud [START_REF] Ongaro | Fast Crash Recovery in RAMCloud[END_REF], by leveraging the aggregated disk bandwidth in order to recover the data of a lost node in seconds. KerA's fine-grained partitioning model favors this recovery technique. However, it cannot be used as such: producers should continuously append records and not suffer from broker crashes, while consumers should not have to wait for all data to be recovered (thus incurring high latencies). Instead, recovery can be achieved by leveraging consumers' application offsets. We plan to enable such support as future work.
V. EXPERIMENTAL EVALUATION
We evaluate KerA against Kafka using a set of synthetic benchmarks to assess how the partitioning and (application-defined) offset-based access models impact performance.
A. Setup and parameter configuration
We ran all our experiments on the Grid'5000 Grisou cluster [START_REF]Grid5000[END_REF]. Each node has 16 cores and 128 GB of memory. In each experiment the source thread of each producer creates 50 million non-keyed records of 100 bytes and partitions them round-robin into batches of configurable size. The source waits no more than 1 ms (the parameter named linger.ms in Kafka) for a batch to be filled; after this timeout the batch is sent to the broker. Another producer thread groups batches into requests and sends them to the node responsible for the request's partitions (multiple synchronous TCP requests). Similarly, each consumer pulls batches of records with one thread and simply iterates over the records on another thread.
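For concreteness, the following sketch shows the batching logic such a benchmark producer could use (illustrative Java, not the actual benchmark code; the record size, batch size and 1 ms linger value follow the text, while everything else, including names and the scaled-down record count, is assumed): one thread generates records round-robin over partitions and flushes a per-partition batch when it is full or when the linger timeout expires.

```java
import java.util.*;

// Illustrative sketch of the benchmark producer's batching logic (hypothetical code).
public class BenchmarkProducerSketch {
    static final int RECORD_SIZE = 100;          // bytes, as in the experiments
    static final int BATCH_SIZE = 16 * 1024;     // bytes, configurable per experiment
    static final long LINGER_NANOS = 1_000_000L; // 1 ms linger timeout
    static final int NUM_PARTITIONS = 16;

    public static void main(String[] args) {
        List<List<byte[]>> batches = new ArrayList<>();
        for (int p = 0; p < NUM_PARTITIONS; p++) batches.add(new ArrayList<>());
        long[] batchStart = new long[NUM_PARTITIONS];
        long flushed = 0;

        long totalRecords = 1_000_000; // scaled down from the 50 million used in the paper
        for (long i = 0; i < totalRecords; i++) {
            int partition = (int) (i % NUM_PARTITIONS); // round-robin partitioning of non-keyed records
            List<byte[]> batch = batches.get(partition);
            if (batch.isEmpty()) batchStart[partition] = System.nanoTime();
            batch.add(new byte[RECORD_SIZE]);
            boolean full = batch.size() * RECORD_SIZE >= BATCH_SIZE;
            boolean lingerExpired = System.nanoTime() - batchStart[partition] >= LINGER_NANOS;
            if (full || lingerExpired) {
                // In the real benchmark the batch is handed to a sender thread that groups
                // batches per broker into requests; here we just count the flush.
                flushed++;
                batch.clear();
            }
        }
        System.out.println("flushed batches: " + flushed);
    }
}
```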
In the client's main thread we measure ingestion and processing throughput and log it every second. Producers and consumers run on different nodes. We plot the average ingestion throughput per client (producers are represented by KeraProd and KafkaProd, consumers by KeraCons and KafkaCons, respectively), with 50th and 95th percentiles computed over all clients' measurements taken while all producers and consumers run concurrently (excluding the first and last ten seconds of measurements of each client). Each broker is configured with 16 network threads, which corresponds to the number of cores of a node, and holds one copy of the streamlets' groups (we plan to study the impact of pull-based versus push-based replication in future work). In each experiment we run an equal number of producers and consumers. The number of partitions/streamlets is configured to be a multiple of the number of clients, at least one for each client. Unless specified otherwise, we configure KerA with 1 active group and 16 segments per group. A request is characterized by its size (i.e., request.size, in bytes) and contains a set of batches, one for each partition, each batch having a batch.size in bytes. We use Kafka 0.10.2.1 since it has a data model similar to KerA's (the newest release introduces batch entries for exactly-once processing, a feature that could be efficiently enabled in KerA as well [START_REF] Lee | Implementing Linearizability at Large Scale and Low Latency[END_REF]). A Kafka segment is 512 MB, while in KerA it is 8 MB. This means that rolling to a new segment happens more often and may impact performance (since KerA's clients need to discover new segments before pulling data from them).
B. Results
While Kafka provides static offset-based access by maintaining and indexing record offsets, KerA proposes dynamic access through application-defined offsets that leverage streamlet-group-segment metadata (thus avoiding the overhead of offset indexing on brokers). In order to understand the application offset overhead in Kafka and KerA, we evaluate different scenarios, as follows.
Impact of the batch/request size. By increasing the batch size we observe smaller gains in Kafka than in KerA (Figure 4). KerA provides up to 5x higher throughput when increasing the batch size from 1 KB to 4 KB, after which throughput is limited by that of the producer's source. For each producer request, before appending a batch to a partition, Kafka iterates at runtime over the batch's records in order to update their offsets, while KerA simply appends the batch to the group's segment. To build the application offset, KerA's consumers query brokers (issuing RPCs that compete with writes and reads) in order to discover new groups and their segments. This could be optimized by implementing a smarter read request that discovers new groups or segments automatically, reducing the number of RPCs.
Adding clients (vertical scalability). Having more concurrent clients (producers and consumers) means possibly reduced throughput due to more competition on partitions and fewer worker threads available to process the requests. As presented in Figure 5, when running up to 64 clients on 4 brokers (full parallelism), KerA copes better with a higher number of clients thanks to its lighter application offset indexing.
Adding nodes (horizontal scalability). Since clients can leverage multiple TCP connections, distributing partitions over more nodes helps increase throughput.
As presented in Figure 6, even when Kafka uses 4 times more nodes, it only delivers half of the performance of KerA. The current KerA implementation prepares a set of requests from the available batches (those that are filled or whose timeout has expired) and then submits them to brokers, polling them for answers. Only after all requests have been executed is a new set of requests built. This implementation can be further optimized and the network client can be asynchronously decoupled, like in Kafka, in order to allow new requests to be submitted while older ones are being processed.
Increasing the number of partitions/streamlets. Finally, we seek to assess the impact of increasing the number of partitions on the ingestion throughput. When the number of partitions is increased, we also reduce the batch.size while keeping the request.size fixed, in order to maintain the target maximum latency an application needs. We configure KerA similarly to Kafka: the number of active groups is set to 1, so the number of streamlets yields a number of active groups equal to the number of partitions in Kafka (one active group per streamlet to pull data from in each consumer request). We observe in Figure 7 that when increasing the number of partitions, the average throughput per client decreases. We suspect Kafka's drop in performance (20x less than KerA for 1024 partitions) is due to its offset-based implementation, which has to manage one index file for each partition. With KerA one can leverage the streamlet-group abstractions in order to provide applications with an unlimited number of partitions (fixed-size groups of segments). To show this benefit, we run an additional experiment with KerA configured with 64 streamlets and 16 active groups. The achieved throughput is almost 850K records per second per client while providing consumers with 1024 active groups (fixed-size partitions), compared to less than 50K records per second with Kafka providing the same number of partitions. The streamlet configuration allows the user to reason about the maximum number of nodes on which to partition a stream, each streamlet providing an unbounded number of fixed-size groups (partitions) to process. KerA provides higher parallelism to producers, resulting in higher ingestion/processing client throughput than Kafka.
VI. RELATED WORK
Apache Kafka and other similar ingestion systems (e.g., Amazon Kinesis [START_REF]Amazon Kinesis[END_REF], MapR Streams [START_REF]Mapr Streams[END_REF], Azure Event Hubs [START_REF]Azure Event Hubs[END_REF]) provide publish/subscribe functionality for data streams by statically partitioning a stream with a fixed number of partitions. To accommodate future higher workloads and better consumer scalability, streams are over-partitioned with a higher number of partitions. In contrast, KerA enables resource elasticity by means of streamlets, which enable storing an unbounded number of fixed-size partitions. Furthermore, to avoid unnecessary offset indexing, KerA's clients dynamically build an application offset based on streamlet-group metadata exposed through RPCs by brokers. DistributedLog [START_REF] Guo | Distributedlog: A High Performance Replicated Log Service[END_REF], [START_REF]Apache DistributedLog[END_REF] is a strictly ordered, geo-replicated log service, designed with a two-layer architecture that allows reads and writes to be scaled independently. DistributedLog is used for building different messaging systems, including ones with support for transactions.
A topic is partitioned into a fixed number of partitions, and each partition is backed by a log. Log segments are spread over multiple nodes (based on Bookkeeper [START_REF] Junqueira | Durability with bookkeeper[END_REF]). The reader starts reading records at a certain position (offset) until it reaches the tail of the log. At this point, the reader waits to be notified about new log segments or records. While KerA favors parallelism for writers appending to a streamlet (a collection of groups), in DistributedLog there is only one active writer for a log at a given time. Apache Pulsar [START_REF]Apache Pulsar[END_REF] is a pub-sub messaging system developed on top of Bookkeeper, with a two-layer architecture composed of a stateless serving layer and a stateful persistence layer. Compared to DistributedLog, reads and writes cannot scale independently (the first layer is shared by both readers and writers) and Pulsar clients do not interact with Bookkeeper directly. Pulsar unifies the queue and topic models, providing exclusive, shared and failover subscription models to its clients [START_REF]Messaging, storage, or both?[END_REF]. Pulsar keeps track of the consumer cursor position and is able to remove records once they are acknowledged by consumers. Similarly to Pulsar/DistributedLog, KerA could leverage a second layer of brokers to cache streamlets' groups when needed, to provide large fan-out bandwidth to multiple consumers of the same stream. Pravega [START_REF]Pravega[END_REF] is another open-source stream storage system built on top of Bookkeeper. Pravega partitions a stream into a fixed number of partitions called segments, with a single layer of brokers providing access to data. It provides support for auto-scaling the number of segments (partitions) in a stream: based on monitoring the input load (size or number of events), it can merge two segments or create new ones. Producers can only partition a stream by a record's key. None of the state-of-the-art ingestion systems are designed to leverage data locality optimizations as envisioned with KerA in a unified storage and ingestion architecture [START_REF] Marcu | Towards a Unified Storage and Ingestion Architecture for Stream Processing[END_REF]. Moreover, thanks to its network-agnostic implementation [START_REF] Ousterhout | The RAMCloud Storage System[END_REF], KerA can benefit from emerging fast networks and RDMA, providing more efficient reads and writes than TCP/IP.
Fig. 1. Stream processing pipeline: records are collected at event time and made available to consumers earliest at ingestion time, after the events are acknowledged by producers; processing engines continuously pull these records and buffer them at buffer time, and then deliver them to the processing operators, so results are available at processing time.
Fig. 2. Kafka's architecture (illustrated with 3 partitions, 3 replicas and 5 brokers). Producers and consumers query Zookeeper for partition metadata (i.e., on which broker a stream partition leader is stored). Producers append to the partition's leader (e.g., broker 1 is assigned the leader of partition 1), while exclusively one consumer pulls records from it starting at a given offset, initially 0. Records are appended to the last segment of a partition, with an offset being associated to each record. Each partition has 2 other copies (i.e., the partition's followers) assigned to other brokers, which are responsible for pulling data from the partition's leader in order to remain in sync.
Fig. 3.
KerA's architecture (illustrated with 3 streamlets and 5 brokers). Zookeeper is responsible for providing clients with the metadata of the association of streamlets with brokers. Streamlets' groups and their segments are dynamically discovered by consumers querying brokers for the next available groups of a streamlet and for new segments of a group. Replication in KerA can leverage its fine-grained partitioning model (streamlet-groups) by replicating each group on distinct brokers or by fully replicating a streamlet's groups on another broker, as in Kafka.
Fig. 4. Increasing the batch size (request size): average client throughput versus producer batch.size (KB). Parameters: 4 brokers; 16 producers and 16 consumers; the number of partitions/streamlets is 16; request.size equals batch.size multiplied by 4 (number of partitions per node). On the X axis we have the producer batch.size in KB; for consumers we configure a value 16x higher.
Fig. 5. Adding clients. Parameters: 4 brokers; 32 partitions/streamlets, 1 active group per streamlet; batch.size = 16KB; request.size = 128KB.
Fig. 6. Adding nodes (brokers). Parameters: 32 producers and 32 consumers; 256 partitions, 32 streamlets with 8 active groups per streamlet; batch.size = 16KB; request.size = batch.size multiplied by the number of partitions/active groups per node.
Fig. 7. Increasing the number of partitions/streamlets.
VII. CONCLUSIONS
This paper introduced KerA, a novel data ingestion system for Big Data stream processing specifically designed to deliver high throughput and low latency and to elastically scale to a large number of producers and consumers. The core ideas proposed by KerA revolve around: (1) dynamic partitioning based on semantic grouping and sub-partitioning, which enables more flexible and elastic management of partitions; (2) lightweight offset indexing optimized for sequential record access using streamlet metadata exposed by the broker. We illustrate how KerA implements these core ideas through a research prototype. Based on extensive experimental evaluations, we show that KerA outperforms Kafka by up to 4x for ingestion throughput and up to 5x for the overall stream processing throughput. Furthermore, we have shown that KerA is capable of delivering data fast enough to saturate a Big Data stream processing engine acting as the consumer. Encouraged by these initial results, we plan to integrate KerA with streaming engines and to explore several topics in future work: data locality optimizations through shared buffers, durability, as well as state management features for streaming applications.
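As a closing illustration of the lightweight offset indexing summarized above, the sketch below shows one possible consumer-side representation of an application-defined offset built from streamlet/group/segment metadata. The class and function names are invented for the example and should not be read as KerA's actual client API; the rollover logic is deliberately simplified (it assumes a batch never spans more than one segment boundary).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AppOffset:
    """Consumer-side position: no broker-maintained per-record index is needed."""
    streamlet: int    # which streamlet of the stream
    group: int        # fixed-size group of segments inside the streamlet
    segment: int      # segment inside the group
    position: int     # byte position inside the segment

def advance(offset: AppOffset, batch_bytes: int, segment_size: int,
            segments_per_group: int) -> AppOffset:
    """Advance the application offset after consuming a batch.

    Rolling over a segment/group boundary only needs the static layout
    parameters (segment size, segments per group), which the consumer learns
    once from broker metadata instead of per-record offsets.
    """
    position = offset.position + batch_bytes
    segment, group = offset.segment, offset.group
    if position >= segment_size:           # move to the next segment
        position -= segment_size
        segment += 1
        if segment >= segments_per_group:  # move to the next group
            segment = 0
            group += 1                     # next group discovered via an RPC
    return AppOffset(offset.streamlet, group, segment, position)

# Example: 8 MB segments, 16 segments per group (values used in the evaluation)
o = AppOffset(streamlet=0, group=0, segment=0, position=0)
o = advance(o, batch_bytes=16 * 1024, segment_size=8 * 1024 * 1024,
            segments_per_group=16)
```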
01678912
en
[ "spi.meca.mefl", "spi.meca.geme" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01678912/file/soumission%20pdf.pdf
O. Lafforgue, I. Seyssiecq, S. Poncet, J. Favier
Rheological properties of synthetic mucus for airway clearance
Keywords: synthetic bronchial mucus; viscoelasticity; viscoplasticity; shear-thinning; thixotropy
In this work, a complete rheological characterization of bronchial mucus simulants based on the composition proposed by Zahm et al. [1] is presented. Dynamic Small Amplitude Oscillatory Shear (SAOS) experiments, Steady State (SS) flow measurements and Three Interval Thixotropy Tests (3ITT) are carried out to investigate the global rheological complexities of the simulants (viscoelasticity, viscoplasticity, shear-thinning and thixotropy) as a function of the scleroglucan concentration (0.5 to 2wt%) and at temperatures of 20 and 37 °C. SAOS measurements show that the limit of the linear viscoelastic range as well as the elasticity both increase with increasing scleroglucan concentration. Depending on the solicitation frequency, the 0.5wt% gel response is either liquid-like or solid-like, whereas more concentrated gels show a solid-like response over the whole frequency range. The temperature dependence of the gels' response is negligible in the 20-37°C range. The Herschel-Bulkley (HB) model is chosen to fit the SS flow curves of the simulants. The evolution of the HB parameters versus polymer concentration shows that both shear-thinning and viscoplasticity increase with increasing concentration. 3ITTs allow the calculation of thixotropic recovery times after shearing at 100 s⁻¹ or 1.6 s⁻¹. Empirical correlations are proposed to quantify the effect of polymer concentration on the rheological parameters of the mucus simulants.
INTRODUCTION
A large number of fluids in the human body, such as blood or mucus, produced by different organs, are known to exhibit complex non-Newtonian rheological properties under physiological conditions. When transported in the airways as a result of cough or cilia beating, bronchial mucus is characterized by a non-constant, shear-rate- and time-dependent viscosity, in both normal and pathological conditions. Bronchial mucus is mainly composed of water (90-95%), mucins (2-5%), lipids (1-2%), salts (1%), 0.02% of DNA and other molecules such as cell debris [START_REF] Vasquez | Complex fluids and soft structures in the human body[END_REF]. Mucins are high-molecular-weight glycoproteins ensuring a structural protection function. The entangled and cross-linked network of their branched chains forms a 3D matrix spanning the mucus gel layer [START_REF] Thornton | From mucins to mucus: toward a more coherent understanding of this essential barrier[END_REF]. As a consequence of this complex internal structure, bronchial mucus is a non-Newtonian fluid displaying all the possible rheological complexities, such as viscoplasticity, shear-thinning, viscoelasticity and thixotropy. All these properties directly affect the way mucus flows and, as a consequence, the vital clearance function of the mucus layer coating the airways. The mechanism of mucus clearance can be described by the following two steps:
Step 1: inhaled particles or pathogens are trapped inside the mucus gel, where enzymes and antibodies can biochemically disrupt them.
Step 2: mucus is mainly transported by the mucociliary mechanism or by cough towards the pharynx, where it is either expectorated or digested.
It is however known that, under certain disease conditions such as Cystic Fibrosis (CF), the clearance function is affected by modifications of the mucus composition and, consequently, of its viscosity.
Due to a lack of hydration (in the case of CF), mucus can indeed become very thick and difficulties may arise to properly evacuate this fluid from the airways where it can accumulate and become more easily infected. A good understanding of mucus rheology is thus of prime importance in order to develop new care solutions for patients suffering from CF or other chronic respiratory diseases. Among possible care solutions, clearance helping devices, based on different technologies, have been developed during the last decades. These small devices can be used by patients at home, on a daily basis to increase the volume of mucus expectoration and limit the need for respiratory physiotherapy. As an example, a newly developed apparatus known as the Simeox®, imposes an oscillatory air depression to the air flow during the exhalation phase of the patient. Based on the thixotropic and shear-thinning nature of mucus, such a solicitation induces a decrease of its viscosity and stimulates its expectoration. More insight into the rheology of respiratory mucus is needed to further improve the efficiency of such clearance helping devices. Although there is a large number of studies devoted to the rheological characterization of certain types of mucus, the results are still difficult to interpret, due to the use of different rheological techniques, but also due to the time evolution (aging) of such biological materials [START_REF] Lai | Micro-and macrorheology of mucus[END_REF]. Furthermore, variations in the method used to collect samples (contamination issues), together with the natural variability (depending on the patient, the pathology, the occurence of an infection...) of this complex biological fluid lead to important discrepancies in the existing literature, concerning the results on mucus rheological characterization [START_REF] Celli | Helicobacter pylori moves through mucus by reducing mucin viscoelasticity[END_REF]. Numerous previous works devoted to the study of mucus rheology (either synthetic or native mucus from different organs), have only described part of its rheological properties. For instance, many works have used dynamic oscillatory shear measurements to characterize mucus rheology [6 ; 7 ; 8 ; 9; 10 ; 11 ; 12]. Under small deformations (SAOS) these measurements mostly reflect the properties of mucus under its native, unperturbed state and are useful to describe the linear viscoelastic response of mucus. On the contrary, in other works, the authors have made the choice to characterize mucus rheology using only continuous shear experiments [1 ; 13 ; 14; 15]. In these cases, the measured properties mostly reflect the flow behavior of mucus and can be used to investigate its viscosity under physiological shearing rates prevailing in human lungs during normal functioning or during temporary events such as cough. Even in studies where both dynamic oscillatory and shear flow measurements were carried out, the thixotropic nature of mucus was not accounted for [5; 16 ; 17 ; 18 ; 19; 20 ; 21], or at least not on a quantitative point of view [22 ; 23 ; 24]. As a consequence, a complete and intrinsically consistent characterization campaign is still missing in the open literature. As the use of real mucus implies strong issues related to available quantities, and rises questions about the impact of the collection method on the fluid composition, the choice made in this work is to use mucus simulants. 
In this context, this work proposes a rheological characterization of mucus simulants at different active polymer concentrations (0.5 to 2%), at a temperature of 20 or 37°C in order to cover the physiological range of air temperature along the airways, using a broad range of available rheological tests (SAOS, controlled shear stress SS flow tests and 3ITT). In an attempt to quantify the measured properties, empirical equations are used to represent the evolution of the different rheological parameters as a function of the active polymer concentration.
MATERIALS AND METHODS
Preparation of mucus simulants
The composition and preparation of the polymeric synthetic solutions used to mimic human bronchial mucus are described in Zahm et al. [START_REF] Zahm | Role of simulated repetitive coughing in mucus clearance[END_REF]. To account for the natural variability of real mucus from one patient to another but also depending on health conditions, gels with different scleroglucan (Actigum™) concentrations ranging from 0.5 to 2wt% were prepared. Mucus simulants are aqueous solutions mainly composed of two types of polymers, Viscogum™ FA (Cargill™) and Actigum™ CS 6 (Cargill™). The polymers used in this work were kindly provided by the Laserson company (Etampes, France). Viscogum™ FA is a galactomannan from locust bean and Actigum™ CS 6 is a scleroglucan (branched homopolysaccharide); the latter consists of a glucose chain branched every three units by an additional glucose, forming a three-dimensional (triple helix) structure. Sodium chloride (99.8+% NaCl) and di-sodium tetraborate 10aq (99.5+% Na2B4O7·10H2O) were purchased from Chem-Lab NV (Zedelgem, Belgium). Distilled water used in all preparations was produced using a 2012 distillator (GFL, France). Mucus simulant solutions were prepared in glass bottles filled with 200 mg of distilled water. Then, 0.9wt% of NaCl, 0.5wt% of Viscogum™ FA and a chosen fraction (ranging from 0.5wt% to 2wt%) of Actigum™ CS 6 were successively added to the solution under magnetic stirring (Ikamag® RET) at room temperature. The mixture was kept under agitation for 48h at room temperature. After this time period, a mass corresponding to 4 mL of di-sodium tetraborate at 0.02 M was added. This addition induces the cross-linking of the polymeric chains, building a 3D gel matrix that mimics the mucin network responsible for the internal structure of real mucus. The agitation is kept for a few more hours before storing the final mixture at 4°C. Before performing the measurements, the solution is fractionated into several 30 mL plastic vials and then allowed to recover at room temperature. Such mucus simulants were found to mimic accurately the main properties of bronchial mucus in the case of different pathologies.
Rheological measurements
Rheometers and measuring systems
Rheological measurements were performed using two controlled-stress rheometers, the AR 550 and the DHR-2 (TA Instruments), equipped with a measuring system consisting of a 2° stainless steel cone (40 mm or 50 mm in diameter). The temperature was controlled by a Peltier plate. A wet steel lid or a thin silicone oil layer, ensuring a water-saturated atmosphere around the sample, was used to prevent dehydration.
Sample loading
A small amount of gel was loaded onto the Peltier plate by gently pouring it from the vial in order to minimize shear history effects. The geometry was then lowered down to the corresponding gap plus a few micrometers and the excess of fluid was removed at the edges; the exact gap value was then set. The desired temperature (20°C or 37°C) was also set before the tests began.
Measurement protocols
Viscoelastic properties of the simulants were investigated through a series of dynamic shear experiments (Small Amplitude Oscillatory Shear: SAOS). The results were interpreted based on the evolution of the elastic and viscous moduli (G', G") and the loss angle (δ) as a function of the sinusoidal input (a minimal numerical illustration of how G', G" and δ are obtained from the sinusoidal signals is sketched below).
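The sketch below illustrates, on synthetic signals, how G', G" and δ can be obtained from sampled stress and strain sinusoids by a linear least-squares harmonic fit. It is a generic numerical illustration (not the rheometer's own processing), and all signal values are invented for the example.

```python
import numpy as np

def saos_moduli(t, strain, stress, omega):
    """Extract G', G'' and the loss angle delta from sampled SAOS signals.

    Each signal is fitted to a*cos(omega*t) + b*sin(omega*t) by linear least
    squares; the amplitude ratio and phase lag then give
        G'  = (tau0 / gamma0) * cos(delta),
        G'' = (tau0 / gamma0) * sin(delta).
    """
    basis = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    (a_g, b_g), *_ = np.linalg.lstsq(basis, strain, rcond=None)
    (a_s, b_s), *_ = np.linalg.lstsq(basis, stress, rcond=None)
    gamma0, phi_strain = np.hypot(a_g, b_g), np.arctan2(b_g, a_g)
    tau0, phi_stress = np.hypot(a_s, b_s), np.arctan2(b_s, a_s)
    delta = phi_strain - phi_stress          # strain lags the stress by delta
    ratio = tau0 / gamma0
    return ratio * np.cos(delta), ratio * np.sin(delta), delta

# Synthetic example at 1/(2*pi) Hz, i.e. omega = 1 rad/s (invented values)
omega = 1.0
t = np.linspace(0.0, 60.0, 6000)
strain = 0.01 * np.sin(omega * t)            # strain amplitude gamma0 = 1 %
stress = 0.05 * np.sin(omega * t + 0.2)      # tau0 = 0.05 Pa, loss angle 0.2 rad
G_prime, G_dprime, delta = saos_moduli(t, strain, stress, omega)
print(G_prime, G_dprime, np.tan(delta))      # tan(delta) = G''/G'
```

In a stress amplitude sweep this extraction is repeated at each imposed stress level, and in a frequency sweep at each frequency.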
The stress dependency of the response of the different gels (0.5wt% to 2wt% in active polymer) was first measured via stress amplitude sweeps at a constant frequency (1/(2π) Hz). This is a classical test carried out in order to determine, for each solution, the limit of the Linear ViscoElastic (LVE) range. The frequency dependency (over a maximum range of 10⁻⁵ to 100 Hz) of the response of the different gels was also measured at constant stress amplitude, within the LVE range according to the stress amplitude sweep results. The temperature dependency of the simulants' rheological properties was finally investigated by comparing stress amplitude sweeps obtained for a given sample at either 20°C (ambient air temperature) or 37°C (physiological temperature in the lower airways). However, to fully characterize the behavior of a mucus layer in response to in vivo solicitations such as cough or air flows artificially induced by clearance helping devices, the rheological measurements have to be performed far beyond the LVE range. Rotational controlled shear stress flow tests were then used to determine the rheological properties of the mucus simulants under shear flow conditions. In order to quantify the viscoplastic and the shear-thinning effects independently of the thixotropic ones, steady state rheograms were recorded. Such a steady state curve is obtained by applying a given shear stress until the corresponding shear rate reaches a constant value. Steady state (SS) flow curves of the different simulants were modeled using a three-parameter Herschel-Bulkley (HB) model. This model accounts for the fluid's viscoplasticity via the yield stress value (τ0HB) and for its shear-thinning behavior via the flow (n) and consistency (K) index values. The thixotropy of the most concentrated mucus simulant was separately quantified using Three Interval Thixotropy Tests (3ITT). A 3ITT consists of a stepwise change of stress or strain rate to successively monitor the initial structure, then its breaking up and finally its recovery. More precisely, the 3ITTs applied here can be described as follows.
First interval: the sample is submitted to very low shear conditions (γ̇ = 0.029 s⁻¹). This interval gives a reference for the fluid structure "at rest", or at least under very low shear.
Second interval: higher shear conditions are imposed by applying a constant shear stress or shear rate to disrupt the internal structure until a steady state is reached (depending on the chosen stress or strain rate value). In our case, a constant shear rate of either 1.6 s⁻¹ or 100 s⁻¹ was applied during step 2, in order to submit the sample to shearing conditions representative of either normal shearing conditions in the airways or of particular events such as cough.
Third interval: the sample is allowed to recover under very low shear conditions again (γ̇ = 0.029 s⁻¹).
RESULTS
SAOS tests
Stress amplitude and frequency sweep tests conducted at a constant temperature of 20°C, on simulant gels at different Actigum™ concentrations, all display the same features. An example is given in figure 1 for a 1.5wt% Actigum™ gel (figure 1 (a) stress amplitude sweep, (b) frequency sweep). A value of the elastic modulus G' above the viscous modulus G" implies that the elastic behavior dominates the viscous one and indicates a solid-like or gel-like behavior, while G" > G' indicates a liquid-like behavior. In figure 1 (a), the stress amplitude value for which the transition from one behavior to the other is observed is denoted by τf (the flow point, corresponding to the crossover of moduli G' = G", tan(δ) = 1) and will be discussed hereafter. As a consequence, in the case of figure 1 (b), for which τ = 1 Pa < τf, the mucus simulant shows a gel-like behavior for all frequencies ranging from 10⁻³ to 10 Hz. In the case of figure 1 (a), under a constant frequency of 1/(2π) Hz, the two domains are successively observed, with a solid-like behavior for τ < τf, then a liquid-like behavior for τ > τf.
Stress amplitude sweep tests
The stress amplitude dependency of the response of the different gels was recorded via stress amplitude sweeps at a constant frequency (1/(2π) Hz) for all the concentrations in active polymer (0.5wt% to 2wt%). The results obtained are displayed in figure 2. For all polymer concentrations, a plateau region for the G' and G" moduli defines the LVE range at the preset frequency.
A yield stress value (τy) limits the LVE range and is determined from the end of the moduli plateau. Since the G' curve often deviates first from the plateau, the G' function is commonly chosen to determine τy. Here, a 10% deviation from the plateau has arbitrarily been set for the τy calculations. The yield stress is the stress limit below which no significant change of the internal structure occurs. For τ < τy the sample displays a reversible viscoelastic behavior. On the contrary, above τy, the measurements no longer reflect the structure at rest due to early signs of stress-induced microstructural evolutions. Based on the value of the yield stress and of the corresponding critical deformation γc, one can also calculate the volumetric energy of cohesion of the 3D network, Ec, in J.m⁻³ (eq. (1)) [START_REF] Coussot | Rheophysical classification of concentrated suspensions and granular pastes[END_REF]. This energy of cohesion can be used in a quantitative manner as a measure of the extent of intermolecular and intramolecular interactions of the polymeric internal structure [START_REF] Niraulab | Evaluation of rheological property of dodecyl maltoside, sucrose dodecanoate, Brij 35p and SDS stabilizided O/W emulsion: effect of head group stracture on rheology property and emulsion stability[END_REF].
Ec = (1/2) τy γc      (1)
In the case of the mucus simulants tested here, it can be seen in figure 2 that above the critical yield stress, G' decreases while G" shows an overshoot before decreasing. This kind of behavior has been observed on many gels by Mezger [START_REF] Mezger | The Rheology Handbook: For Users of Rotational and Oscillatory Rheometers[END_REF] and is classified as a "Type III behavior or weak strain overshoot" by Hyun et al. [START_REF] Kim | Large amplitude oscillatory shear as a way to classify the complex fluids[END_REF]. The flow point (τf) is identified as the stress value for which a moduli crossover (G' = G", tan(δ) = 1) is observed. This flow point corresponds to the stress above which the material becomes more viscous than elastic due to critical microstructural breakdowns. Finally, the stress corresponding to the maximum of G" (peak overshoot) is denoted by τpeak and is, for all concentrations, almost superimposed on the flow point stress (except for the 0.5wt% gel, for which the G" overshoot does not appear). The range between τy and τf is sometimes referred to as the "yield zone" or the "yield/flow transition range" [START_REF] Mezger | The Rheology Handbook: For Users of Rotational and Oscillatory Rheometers[END_REF]. In this range, despite the predominance of elastic behavior (G' > G"), irreversible deformations might locally have already taken place. The curves in figure 4, which address the temperature dependence of the LVE range, were obtained for a given Actigum™ concentration (0.75 wt%) at either 20°C or 37°C; each curve is the average of 3 successive measurements. Measurements were also performed with simulants at different polymer concentrations and all display qualitatively identical results. As observed in figure 4, the LVE range of the mucus simulant gel, as well as the different stresses (τy, τf, τpeak) characterizing the solid-liquid transition zone, show very little dependence on temperature in the 20-37°C range. The curves obtained at either 20 or 37°C are almost superimposed and the observed differences are of the same order of magnitude as the reproducibility.
Concerning the gels studied here, it can be concluded that the effect of temperature in the 20-37°C range on the rheological properties of the mucus simulants is not significant compared to the reproducibility.
Frequency sweep tests
Since the frequency is the inverse of a time, frequency sweep tests are usually performed in order to investigate the behavior of a substance as a function of the characteristic time of the solicitation, under small deformations. The short-term behavior of the sample is then simulated at high frequencies, whereas the long-term behavior is displayed under low frequencies. Since working at very low frequencies implies a long experiment duration and, as a consequence, a higher risk of sample dehydration, all the simulants have been studied in the common range 10⁻² to 10 Hz. Only in the case of the most diluted and the most concentrated gels was the frequency range enlarged to 10⁻⁴-10 Hz and 10⁻⁵-10 Hz, to investigate the possible occurrence of a low-frequency moduli crossover. Over the whole range of frequencies, simulants with Actigum™ concentrations ranging from 0.75wt% to 2wt% show a gel-like behavior characterized by an almost parallel, low increase of G' and G" (in log-log scale), with G' values 2 to 7 times higher than G". Such a behavior is characteristic of a soft gel in which intermolecular forces are mostly responsible for the 3D internal network [START_REF] Mezger | The Rheology Handbook: For Users of Rotational and Oscillatory Rheometers[END_REF]. Therefore elastic behavior dominates the viscous one over the entire frequency range, attesting to the gel's stability. G' and G" only show a slight frequency dependence, displaying an average slope in log-log representation of 0.1 in the case of G' and 0.04 in the case of G" (see the discussion part for interpretation). The less concentrated simulant (0.5 wt%) displays a different behavior: G' and G" show a more important frequency dependence, especially under low frequencies, due to a more flexible internal structure. A moduli crossover (characterized by tan(δ) = 1 in figure 5 (c)) is also observed at low frequencies (< 2·10⁻³ Hz), for which G" > G'. Such a behavior corresponds to a liquid-like behavior and can be observed when a deformation (even very small) is applied at a sufficiently low rate (i.e. during a long time). On the contrary, an identical deformation applied at higher frequencies induces a solid-like response. The typical example usually given in rheology books to illustrate this behavior is the one of a silicone ball. Such a ball left at rest in a beaker for a long time will finally take the form of the beaker (viscous flow under low-frequency or long-time solicitation), while the same ball thrown onto a wall will bounce, displaying an elastic behavior under high-frequency or short-time solicitation. It is worth noting that the inverse of the frequency at the moduli crossover is sometimes used as a time characterizing the elasticity of the gel [START_REF] Coussot | Comprendre la rhéologie de la circulation du sang à la prise du béton[END_REF]. However, in this case, it would be delicate to interpret this value since complex internal structures composed of a large number of components with different lengths, flexibilities or mobilities, such as mucus simulants, are most likely to display viscoelastic properties governed by the superposition of several relaxation modes [START_REF] Vasquez | Complex fluids and soft structures in the human body[END_REF]. The low-frequency crossover of the moduli is not experimentally reached in the case of the more concentrated simulants (0.75wt% to 2wt%) for the range of tested frequencies.
However, in figure 5 (c), tan(δ) increases when the frequency decreases, so that a moduli crossover would likely appear under a sufficiently low frequency, at a frequency value decreasing as the concentration increases, indicating an increased elasticity for the more concentrated mucus simulants. The non-Newtonian behavior observed in figure 6 is the direct consequence, in the case of polymeric substances, of a shear-induced spatial structure transformation [START_REF] Toker | 3ITT in food applications: a novel technique to determine structural regeneration of mayonnaise under different shear conditions[END_REF].
Steady-state (SS) flow test
The SS yield stress τ0HB and the consistency index K both increase with the Actigum™ concentration C (in wt%) following power laws (equations 6 and 7), while the flow index decreases linearly:
n = 1 - 0.346 C,  R² = 0.963      (8)
Equation 6 accounts for the increase in viscoplasticity, whereas equations 7 and 8 account for the increase in shear-thinning behavior, as the active polymer concentration increases.
Three Interval Thixotropy Tests
3ITT tests were performed on the different simulant gels in order to investigate the thixotropic nature of mucus simulants independently of their SS flow behavior. Thixotropy is characteristic of systems with a complex internal structure and is linked to slow time evolutions of rheological properties due to either restructuring at rest or destructuring initiated by deformation [START_REF] Toker | 3ITT in food applications: a novel technique to determine structural regeneration of mayonnaise under different shear conditions[END_REF]. Figure 8 represents a first 3ITT performed on a 2 wt% simulant. The second step imposes a shear flow under an effective shear rate of 1.6 s⁻¹, corresponding to the order of magnitude of the shearing of a mucus layer in the tracheobronchial tree. Indeed, while the maximum physiological shear rate can reach 10²-10⁴ s⁻¹ (for example during coughing), physiological rates in the normal lung are of the order of 0.1 to 1 s⁻¹ [4; 8; 14]. The three successive steps are plotted as apparent viscosity versus time. After shearing at 1.6 s⁻¹ during step 2, one can observe in figure 8 that the thixotropic regeneration curve (step 3) allows the calculation of a thixotropic recovery time characterizing a given state of regeneration, chosen here at 90% and 100% of recovery, based on the initial structure measured during step 1. The thixotropic recovery time is, in this case, 2.7 s for 90% and 75 s for 100% of recovery. Figure 9 gives the results of a second test, also performed on a 2 wt% simulant. This time, the second step corresponds to a shear flow under a larger shear rate (100 s⁻¹), matching the order of magnitude of the shearing of a mucus layer submitted to cough. After shearing at 100 s⁻¹ during step 2, one can observe in figure 9 that the thixotropic regeneration curve (step 3) gives a 90% recovery time of 17 s and a 100% recovery time of 917 s, which are logically higher than the recovery times needed after the shearing at 1.6 s⁻¹ observed in figure 8.
DISCUSSION
During stress amplitude SAOS experiments (figure 2), a G" overshoot has been observed with the mucus simulants.
According to Hyun et al [START_REF] Kim | Large amplitude oscillatory shear as a way to classify the complex fluids[END_REF][START_REF] Coussot | Comprendre la rhéologie de la circulation du sang à la prise du béton[END_REF][START_REF] Toker | 3ITT in food applications: a novel technique to determine structural regeneration of mayonnaise under different shear conditions[END_REF][START_REF] Wilhelm | A review of nonlinear oscillatory shear tests: Analysis and application of large amplitude oscillatory shear (laos)[END_REF] or Mezger [START_REF] Mezger | The Rheology Handbook: For Users of Rotational and Oscillatory Rheometers[END_REF], such an overshoot is generally obtained in the case of cross-linked polymers or gel-like internal structures (existence of a 3D network). The overshoot is directly linked to the progressive collapse of the 3D network. During the initiation of flow, at first the friction increases due to spatial rearrangements of some free elements (for example relative motion of end pieces of chains). These irreversible motions induce an increase in dissipated energy (G" increases). Then the breakdown of the internal superstructure occurs (at the τ peak value) and the dissipated energy 3 for quantification). This is not surprising since more concentrated simulants have a stronger internal network that will need a higher energy input to collapse. On the other hand, the peak height is also increasing with the Actigum™ concentration (figure 2(b)). The overshoot amplitude is directly linked to the amount of friction occuring during the initiation of flow due to spatial rearrangements of free elements. A more concentrated sample is thus expected to dissipate more friction energy as a result of its higher network density (more cross-linking and chains entanglements). A G" overshoot during stress amplitude sweeps was rarely mentioned in the literature on the rheological characterization of mucus. In the work of Bastholm [START_REF] Bastholm | The viscoelastic properties of the cervical mucus plug[END_REF], a G" overshoot seems to occur with some of the tested cervical mucus samples, without any comment concerning this phenomenon. Celli et al. [START_REF] Celli | Helicobacter pylori moves through mucus by reducing mucin viscoelasticity[END_REF], performing a SAOS stress amplitude sweep on a sane gastric mucus sample (compared to a Helicobacter pylori contaminated one), also observed a "weak strain overshoot" on G" and refered to the work of Huyn et al. [START_REF] Kim | Large amplitude oscillatory shear as a way to classify the complex fluids[END_REF] for interpretation. Concerning now the internal network volumetric energy of cohesion E c values, we can refer to works by Mori et al. [START_REF] Mori | Rheological measurements of sewage sludge forvarious solids sonsentrations and geometry[END_REF] or Niraula et al. [START_REF] Niraulab | Evaluation of rheological property of dodecyl maltoside, sucrose dodecanoate, Brij 35p and SDS stabilizided O/W emulsion: effect of head group stracture on rheology property and emulsion stability[END_REF] for comparison purposes. 
Based on SAOS characterizations on respectively biological suspensions or stabilized oil in water (O/W) emulsions both exhibiting a 3D cohesive network, they calculated the volumetric cohesive energy of the material internal network to account for the degree of interaction and thus of flocculation between droplets in the case of stabilized O/W emulsions [START_REF] Niraulab | Evaluation of rheological property of dodecyl maltoside, sucrose dodecanoate, Brij 35p and SDS stabilizided O/W emulsion: effect of head group stracture on rheology property and emulsion stability[END_REF] and between bioflocs in the case of activated sludge suspensions [START_REF] Mori | Rheological measurements of sewage sludge forvarious solids sonsentrations and geometry[END_REF]. The mucus simulants studied here are not flocculated suspensions nor liquid-liquid emulsions but aqueous dispersions of macromolecules. However since a 3D network exists both due to entanglement and cross-linking of the macromolecules the concept of volumetric cohesive energy can be useful to quantitatively account for the degree of interaction of the subsequent network. One can observe that the order of magnitude of E c for the Actigum TM based mucus simulants (0.02 to 0.36 J.m -3 ) is intermediate between E c values calculated for biological activated sludge suspensions [START_REF] Mori | Rheological measurements of sewage sludge forvarious solids sonsentrations and geometry[END_REF] (0.2 to 1.2 J.m -3 ) and E c values of stabilized O/W emulsions (0.24 10 -3 to 2.95 10 -3 J.m -3 ) . It is also worth noting that E c values showed important dependence upon the network macromolecular or suspended bioflocs concentration (power law (eq. 4) in this work, exponential law in [START_REF] Mori | Rheological measurements of sewage sludge forvarious solids sonsentrations and geometry[END_REF]). The influence of the polymer concentration on stress amplitude SAOS measurements displayed in figure 3 for synthetic mucus, has been discussed in some previous works. As an example, Riley et al. [START_REF] Riley | An investigation of mucus/polymer synergism using synthessised and characterised poly(acrylic acid)s[END_REF] showed that an increase in polymer (Carbopol 934P) concentration induced an increase in both G' and G" measured in the LVE range at a constant (5 Hz) frequency. Shah et al. [START_REF] Shah | An in vitro evaluation of the effectiveness of endotracheal suction catheters[END_REF] also observed working with mucus simulants that both G' and G" (either measured at 1 or 100 rad/s) increased with the coagulant concentration (0.5, 1.5 and 3%). Finally, Hamel & Fiegel [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF] measured an identical increase with crosslinked polymer concentration of G' and G" moduli, performing on synthetic mucus strain sweeps at a constant frequency. Nevertheless, in all of these previous studies, the observed variations were not quantified. An interesting point that can however be pointed out is the fact that the achieved order of magnitude of G' and G" (in the LVE range) with the mucus simulants tested in this work correspond to those usually observed with sane or pathologic native respiratory mucus. For instance, CF mucus can reach various viscoelasticity levels as a function of many factors. In the work of Dawson et al. 
[START_REF] Dawson | Enhanced viscoelasticity of human cystic fibrotic sputum correlates with increasing microheterogeneity in particle transport[END_REF] CF mucus displays viscoelastic properties, in terms of moduli plateau values, close to those displayed by the 2wt% Actigum™ gel while in the research of Yuan et al. [START_REF] Yuan | Oxidation increases mucin polymer cross-links to stiffen airway mucus gels[END_REF], the measured behavior is close to the one measured with the 0.75wt% gel. Concerning sane mucus, moduli plateau values close to those recorded with the 0.5wt% solution have been observed [START_REF] Yuan | Oxidation increases mucin polymer cross-links to stiffen airway mucus gels[END_REF]. Concerning the effect of temperature on the LVE range limit, identical results to those shown in figure 4 described for mucus gels with similar composition, in the same temperature range (20, 32 or 36°C) [START_REF] Guarente | Etude sur la caractérisation rhéologique du mucus bronchique[END_REF]. A very limited influence of temperature on simulants rheological properties measured both via stress amplitude sweeps and flow curves was recorded by the authors. Taylor et al. [START_REF] Taylor | Rheological characterisation of mixed gels of mucin and alginate[END_REF], who studied the rheological characterization of mucin -alginate gels, also reported no significant differences between the gels behaviors (frequency sweep tests) over a 10 to 60°C temperature range. The rheological properties of such aqueous polyoside solutions are known to be related to the conformation taken by the macromolecules (Viscogum TM & Actigum TM ). Mucus simulants studied here are composed of a constant amount of Viscogum TM (galactomannan chains that are cross-linked in the presence of sodium tetraborate) and various proportions of Actigum™ (extracellular polyoside) that can adopt either a rigid helicoidal conformation or a random entangled conformation, as a function notably of temperature [START_REF] Coussot | Comprendre la rhéologie de la circulation du sang à la prise du béton[END_REF]. From the measurements displayed in figure 4, it can be concluded that Actigum™ molecules are likely to present the rigid helicoidal conformation in simulants tested here, leading to the observed gel-like behavior (G' > G") and also that this conformation remains when the temperature is raised to values such as 37°C. During frequency SAOS sweeps, we obtained results (figure 5) that are, for 0.75 to 2wt% gels, qualitatively comparable to those observed by Coussot et Grossiord [START_REF] Coussot | Comprendre la rhéologie de la circulation du sang à la prise du béton[END_REF] on a 0.5wt% xanthan gum aqueous solution at 25°C. Xanthan gum is an extracellular polyoside close to Actigum TM in terms of macromolecular composition and structure (glucose chain branched every 2 units by other oside functions forming a 3D rigid double helix structure), exhibiting very similar rheological properties. The gel-like behavior of aqueous xanthan solutions over the whole frequency range is known to be linked to the helicoidal rigid conformation adopted by polyoside molecules under moderate temperatures or high ionic strength, impairing their ability to move under small sollicitations. 
In the case of Actigum TM aqueous solutions, Actigum TM macromolecules are also known to adopt a rigid triple helix conformation (except under high temperatures or low ionic strength) that is responsible for the gel-like response measured over the entire range of tested frequencies in the case of 0.75wt% to 2wt% Actigum TM gels. Kocevar-Nared et al. [START_REF] Kocevar-Nared | Comparative rheological investigation of crude gastric mucin and natural gastric mucus[END_REF] also made observations qualitatively comparable to figure 5 in the case of rehydrated dried crude porcine gastric mucus. Varying the mucin concentration of rehydrated mucus, they observed that more concentrated samples displayed no moduli crossover (in the 0.1 -100 Hz explored range) but a power law slight frequency dependence for both moduli whereas, more diluted sample displayed a stronger frequency dependence with a crossover point moving towards lower frequencies as the mucin concentration is increased and as the structure changes to gel-like. In the case of the 0.5wt% gel studied here, the stronger dependence to the frequency can be linked to the increase in relative importance of Viscogum TM (flexible entangled network) compared to Actigum TM (rigid network) in comparison to gels with higher Actigum TM concentrations. During SS flow measurements, the concentration dependent yield stress and shear-thinning nature of mucus simulants has been observed (figure 6) and quantify (equations 6 to 8). Identical observations have already been reported but only qualitatively in the case of mucus simulants and native mucus based on measurements of their rheological flow behavior. The increase in viscoplasticity (yield stress) with the active polymer concentration has already been observed in the SAOS stress amplitude tests section of the paper. Yield stresses measured in flow mode τ 0HB show higher values than yield stresses measured in SAOS τ y . This is not surprising since τ y corresponds to the end of the LVE range characterized by early motions of small elements of the complex internal polymeric structure (at the microscopic level), while τ 0HB corresponds to the proper beginning of flow measured at a macroscopic scale (existence of a finite shear rate in the measuring gap). It is also worth noting that both τ y and τ 0HB values are dependent on the calculation method (chosen % of deviation from the plateau for τ y and chosen limit value for 0 → • γ in the case of τ 0HB ) . As a consequence, both values should only be considered as order of magnitudes or for comparison purposes of data obtained using the same calculation method. As far as the evolution of yield stresses with the active concentration is concerned, Malkin et al. [START_REF] Malkin | Non-Newtonian viscosity in steady state shear flows[END_REF] reported the case of cysteine / Ag based colloidal gels for which the yield stress deduced from flow measurements decreases as the dilution factor of the initial gel increases, showing the decrease in strength of the rigid structure as the colloidal concentration decreases. If we now focus on the observed shearthinning properties of mucus simulants, Puchelle et al. [START_REF] Puchelle | Elastothixotropic properties of bronchial mucus and polymer analogs. i. experimental results[END_REF] also described the shear-thinning nature of native and lyophilized pathological bronchial mucus but also of simulants composed of polyisobutylene solutions (3 and 6% in decalin). 
Measuring the SS flow curve of these gels, they obtained shear-thinning indexes (Ostwald de Waele or power law) matching the range observed in table 2 with our mucus simulants and varying from 0.44 to 0.64. Banerjee et al. [START_REF] Banerjee | Effect of phospholipid mixtures and surfactant formulations on rheology of polymeric gels, simulating mucus, at shear rates experienced in the tracheobronchial tree[END_REF] also used a power law to describe the apparent viscosity of shear-thinning mucus simulants (tragacanth gum) mixed with different surfactants to evaluate their ability to reduce viscosity. They obtained viscosity reduction ratios varying from 1.5 to 7.2 depending on the type of surfactant under physiological shear rates (0.1 to 1 s -1 in the tracheobronchial tree). The shear-thinning behavior of mammalian lung mucus is also cited by Vasquez et al. [START_REF] Vasquez | Rheological characterization of mammalian lung mucus[END_REF] but not quantified. It is also the case of Boegh et al. [START_REF] Boegh | developement and rheological pofiling of biosimilar mucus[END_REF] who tried to propose a bio-similar mucus composition displaying rheological properties (notably in terms of shear-thinning) close to the composition of a mucus isolated from cultured cells. Finally, Tomaiuolo et al. [24] also described the shear-thinning nature of different native CF mucus samples without proposing any quantification of this property. More generally, the increase in shear-thinning behavior with the polymer concentration, is classical for aqueous polymer solutions. It corresponds to an increase in shear-thinning capacity for solutions with a higher macromolecular content due to a concentration enhanced reduction of the flow resistance following the alignment and orientation of polymeric chains in the flow direction as the shear rate increases [17 ; 23 ; 34]. Shah et al. [START_REF] Shah | An in vitro evaluation of the effectiveness of endotracheal suction catheters[END_REF] compared viscosity curves of respiratory mucus simulants (0.5 to 3% in polyox resin coagulant). However, they only discussed their results in terms of viscosity increase with the coagulant concentration and did not account for the shearthinning nature of the studied fluids, nor for its evolution with the polymer concentration. Kocevar et al. [START_REF] Kocevar-Nared | Comparative rheological investigation of crude gastric mucin and natural gastric mucus[END_REF] worked on dried porcine gastric mucins solutions at concentrations varying from 10 to 60% and observed a non quantified shear-thinning behavior for solutions above 30%. Finally, 3ITTs performed on a 2wt% simulant gel (figures 8 & 9) allow measuring thixotropic recovery times. If the thixotropic nature of native or synthetic mucus has been reported in some of the previous works devoted to mucus rheological characterization [1 ; 22 ; 23 ; 24], it has often not been quantified. Nielsen et al. [START_REF] Nielsen | Elastic contributions dominate the viscoelastic properties of sputum from cystic fibrosis patients[END_REF] but also Tomaiuolo et al. [START_REF] Tomaiuolo | A new method to improve the clinical evaluation of cystic fibrosis patients by mucus viscoelastic properties[END_REF], described qualitatively the thixotropic nature of native samples observing an hysteresis loop between viscosity curves successively measured at increasing or decreasing shear rates. Kocevar et al. 
[START_REF] Kocevar-Nared | Comparative rheological investigation of crude gastric mucin and natural gastric mucus[END_REF] recorded with porcine gastric mucins solutions at concentrations varying from 10 to 60% the apparent viscosity versus time under a constant shear rate of 50 s -1 . They concluded depending on the appearance of a viscosity decrease versus time, that gels above 30% are thixotropic while lower concentrated one showed no thixotropy. Zahm et al. [START_REF] Zahm | Role of simulated repetitive coughing in mucus clearance[END_REF] performed identical measurements on mucus simulants and calculated a thixotropic index based on the stress evolution with time under a 1.6 s -1 shear rate. The thixotropic index was calculated by the ratio of the initial stress value to the plateau stress value and gave values between 1 (no thixotropy) and 1.8 depending on the samples. However, based on our experiments, since two different rheometers have been tested here, it seems that the initial stress overshoot height may be dependent on the response time of the rheometer. Indeed, the more precise DHR-2 (TA Instruments) rheometer gave on the same sample a higher stress overshoot than the AR500 rheometer (TA Instruments). As a consequence, thixotropic indexes should not be compared when measured with different apparatus. To the authors' knowledge, 3ITTs results on mucus or mucus simulants have never been published. For comparison purposes however, we can refer to the recent work of Toker et al. [START_REF] Toker | 3ITT in food applications: a novel technique to determine structural regeneration of mayonnaise under different shear conditions[END_REF] describing the results of 3ITTs performed on a mayonnaise or to the work of Mezger [START_REF] Mezger | applied rheology Anton Paar GmbH[END_REF] in which identical 3ITTs are compared for two different paints. For a 100 s -1 step 2 in both studies, Toker et al. [START_REF] Toker | 3ITT in food applications: a novel technique to determine structural regeneration of mayonnaise under different shear conditions[END_REF] measured a 100% recovery time varying of 56 to 432 s for a mayonnaise depending on the temperature, while Mezger [START_REF] Mezger | applied rheology Anton Paar GmbH[END_REF] measured a 100% recovery time of 60 s or 300 s for the two paint samples (temperature not given). These results indicate that mucus simulants tested here show a slower thixotropic recovery compared to paints or mayonnaises (often used as "model" thixotropic fluids) investigated in these previous works. To conclude with thixotropy, it is obvious that the thixotropic nature of mucus simulants can give rise to much more studies. In particular, it would be interesting to perform an important set of 3ITTs with step 2 covering intermediate shear rate values between 1 s -1 and 100 s -1 . Modelling these experiments, a time dependent rheological model for mucus simulants at each active polymer concentration, could then be deduced. Such a time dependent model could be useful to account for the whole rheological complexities of these gels and could for instance allow to simulate their behavior in a model trachea, when submitted to various air pressure signals, such as signals imposed by clearance helping devices. This will be the purpose of a future work. In conclusion, in this work, a complete and quantified rheological characterization has been performed on mucus simulants at different active polymer concentrations (0.5wt% to 2wt% in Actigum TM ). 
Stress amplitude SAOS sweeps first allowed determining the LVE range of these gels at 20°C. To account for the end of the LVE range and the subsequent transition towards flow, different transition stresses (namely the yield τy, flow τf and peak τpeak stresses) were described. The increase of these transition stresses with the Actigum™ concentration showed a power-law dependence (eq. 2 & 3). Based on the yield stress value and the corresponding critical deformation, a volumetric cohesive energy (J.m⁻³), which also followed an increasing power law of the active concentration (eq. 4), was calculated. The elastic and viscous moduli plateau values indicated, for all gel concentrations, stability (G' > G"), as well as a linear increase with the Actigum™ concentration. The temperature dependence of the LVE range was not significant in the range between 20 and 37°C. Frequency SAOS sweeps displayed, only in the case of the lowest active polymer concentration, a low-frequency moduli crossover, showing the liquid-like behavior of this gel under very slow solicitation and its solid-like behavior under relatively rapid solicitation. More concentrated gels displayed a solid-like behavior over the whole range of tested frequencies. However, based on the tan(δ) evolution, one can anticipate a moduli crossover under lower frequencies as the Actigum™ concentration increases. In flow mode, to separate viscoplasticity and shear-thinning from thixotropy, steady state flow curves were recorded and the HB model was used to quantify the simulants' SS viscoplasticity (via a yield stress measured in flow mode, τ0HB) and shear-thinning property (via the consistency (K) and flow (n) indexes). The evolutions of these parameters with the Actigum™ concentration displayed a power-law increase for τ0HB and K (eq. 6 & 7) and a linear decrease for n (eq. 8), indicating an enhanced viscoplasticity and shear-thinning ability for more concentrated gels. Finally, the thixotropy of a 2wt% gel was tested by performing two 3ITTs, submitting the sample to shear rates representative of normal shearing in the lungs (1.6 s⁻¹) or of special events such as cough (100 s⁻¹). To quantify thixotropy, it was proposed to evaluate a recovery time based on either 90% or 100% of recovery of the structure. Concerning this latter property, further investigations are necessary to fully characterize the thixotropic nature of mucus simulants, in particular to propose a time-dependent rheological model. This is clearly an interesting prospect for future works.
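As a simple numerical illustration of this recovery-time criterion, the sketch below computes the 90% and 100% recovery times from a step-3 apparent-viscosity signal. The signal here is synthetic and only stands in for a measured regeneration curve; the reference value and time constant are invented for the example.

```python
import numpy as np

def recovery_times(t, eta_step3, eta_reference, levels=(0.90, 1.00)):
    """Time needed for the apparent viscosity to recover given fractions
    of the step-1 reference value during step 3 of a 3ITT."""
    out = {}
    for level in levels:
        reached = np.nonzero(eta_step3 >= level * eta_reference)[0]
        out[level] = t[reached[0]] if reached.size else None  # None: not reached
    return out

# Synthetic step-3 regeneration curve (invented values); a slight overshoot is
# included so that the 100% level is reached, as with noisy measured data.
t = np.linspace(0.0, 1000.0, 20001)              # time, s
eta_ref = 50.0                                    # Pa.s, step-1 reference
eta = eta_ref * (1.02 - 0.82 * np.exp(-t / 30.0)) # starts at 20% of the reference
print(recovery_times(t, eta, eta_ref))
```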
In the case of the mucus simulants studied here, the G' and G" moduli plateau values, as well as the yield, flow and peak stresses and the volumetric cohesive energy, all show increasing values with the Actigum™ concentration, as reported in table 1. The evolution with the Actigum™ concentration of the different transition stresses (τy, τf and τpeak), as well as of the volumetric energy of cohesion (Ec) and of the plateau moduli values, are all displayed in figure 3. For quantification purposes, the variations of the yield, flow and peak stresses with the Actigum™ concentration can be described by power-law empirical relationships (equations 2 & 3). As previously mentioned, the flow and peak stresses are almost superimposed, which makes sense since they both refer to the actual beginning of macroscopic flow of the gel (G" > G' and friction starts decreasing). As a consequence, a single empirical correlation is needed to account for the effect of polymer concentration on both flow and peak stresses. Equation 2 gives the quantitative evolution of the yield stress (τy, in Pa) as a function of the Actigum™ concentration (C, in wt%), and equation 3 the corresponding power law for the Actigum™ dependency of both τf and τpeak. These evolutions are qualitatively compared to the evolution of the yield stress calculated from the HB modelling of the SS flow curve (τ0HB) (see the discussion part). Concerning the G' and G" plateau values (in Pa), they increase linearly (figure 3(b)) with the active polymer concentration (in wt%), with an average slope of 10 for G" and 49 for G'. The evolution of the volumetric energy of cohesion (Ec, in J.m⁻³) of the 3D gel internal network versus the Actigum™ concentration (C, in wt%) also follows a power law, as reported in equation 4.
Temperature dependence of the LVE range
The temperature dependence of the LVE range of the mucus simulants was measured by comparing stress amplitude sweeps obtained at either 20°C (ambient temperature) or 37°C (physiological temperature in the lower airways). This temperature range was chosen in accordance with the possible range of variation of the air temperature at different stages of the respiratory airways. Figure 4 presents the variations of the elastic modulus and viscous modulus versus the stress amplitude for a constant frequency (1/(2π) Hz). A practical difficulty under very low frequencies is the conservation of the sample during the long time lag needed to obtain these measuring points. So in these cases, special care has to be taken to control sample dehydration. The frequency dependence of the mucus simulants' response was measured by oscillating the stress at increasing frequencies (maximum range: 10⁻⁵ to 100 Hz) and a constant stress amplitude corresponding to a small deformation, i.e. within the LVE range (τ < τy) as measured at 1/(2π) Hz (see table 1). Figure 5 gathers the evolutions of the elastic modulus (a), viscous modulus (b) and loss angle (c) as a function of the applied frequency for the different Actigum™ concentrations. Steady state (SS) shear flow tests were performed on three successive samples at each Actigum™ concentration. The response to the imposed stress (τ) is recorded in terms of the steady-state strain rate (γ̇). The rheograms obtained displayed a similar shape for all polymer concentrations. Figure 6 shows an example of such a rheogram obtained in the case of a 2 wt% simulant. A yield zone with a solid-like behavior (γ̇ → 0, grey zone in figure 6) is first observed, until the progressive departure from a quasi-vertical slope to the flow zone, showing a shear-thinning behavior (concave rheogram beyond the yield stress value), is reached. The yield zone is the manifestation of the yielding behavior of the gel structure that has already been characterized using the SAOS stress amplitude sweep tests. In SS flow mode, the transition between solid-like and liquid-like behaviors is defined by a stress value quoted τ0HB. This value is also referred to in the literature as a yield stress but gives different values compared to the yield stress measured in SAOS mode (see the discussion part). This is why a different notation (τ0HB) is used to describe this yield stress, deduced from the fitting of the SS flow experimental data by a three-parameter Herschel-Bulkley model (equation 5), allowing to account for both the SS yield stress (viscoplastic behavior) and the shear-thinning behavior:
τ = τ0HB + K γ̇ⁿ      (5)
where τ0HB (Pa) is the SS flow yield stress, and K (Pa.sⁿ) and n (-) are the consistency and flow indexes, respectively, deduced from the HB modelling. It is worth noting that, as SS flow curves (as well as the HB model) do not account for the time-dependent flow behavior (thixotropy), the observed yield stress τ0HB is not an inherent property of the material, since the structural strength of the gel is shear-history and thus time dependent. In the case of the mucus simulants used in this work, the overshoot behavior shows a strong dependency on the Actigum™ concentration. On the one hand, the τpeak value increases as the simulant concentration increases (figure 3(a) and eq. 3).
Figure 1: SAOS sweeps for a 1.5wt% sample. (a) Stress sweep at 1/(2π) Hz; (b) frequency sweep.
Figure 2: Evolution of G' (a), G" (b), tan(δ) (c) vs stress amplitude for different Actigum™ concentrations.
Figure 7: HB model parameters vs. Actigum™ concentration.
Figure 8: 3ITT, step 1 at 0.029 s⁻¹, step 2 at 1.6 s⁻¹, step 3 at 0.029 s⁻¹.
Figure 9: 3ITT, step 1 at 0.029 s⁻¹, step 2 at 100 s⁻¹, step 3 at 0.029 s⁻¹.
Table 1. Evolution of the limit stresses for the LVE range, Ec and moduli plateau values vs. active polymer concentration.
[Actigum™] (wt%)   τy (Pa)   τf (Pa)   τpeak (Pa)   Ec (J.m⁻³)   G' (Pa)   G" (Pa)
0.50               0.32      1.55      -            0.024        2.07      0.88
0.75               0.63      2.88      2.37         0.035        5.52      2.02
1.00               1.96      4.92      5.02         0.073        26.56     6.06
1.50               4.15      9.20      9.32         0.143        61.18     12.34
2.00               6.98      15.00     14.78        0.360        67.64     14.66
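As a quick consistency check, the power-law concentration dependence of τy quoted for equation 2 can be recovered directly from the Table 1 values by a log-log linear fit, and the critical deformation γc implied by equation 1 can be back-computed from Ec and τy. The snippet below is only a numerical illustration of these two relations using the tabulated data.

```python
import numpy as np

# Data from Table 1: concentration C (wt%), yield stress tau_y (Pa), E_c (J/m^3)
C     = np.array([0.50, 0.75, 1.00, 1.50, 2.00])
tau_y = np.array([0.32, 0.63, 1.96, 4.15, 6.98])
E_c   = np.array([0.024, 0.035, 0.073, 0.143, 0.360])

# Power law tau_y = a * C^b  <=>  log(tau_y) = log(a) + b * log(C)  (eq. 2 form)
b, log_a = np.polyfit(np.log(C), np.log(tau_y), 1)
print(f"tau_y ~ {np.exp(log_a):.2f} * C^{b:.2f}")

# Critical deformation implied by eq. (1): E_c = 0.5 * tau_y * gamma_c
gamma_c = 2.0 * E_c / tau_y
for c, g in zip(C, gamma_c):
    print(f"C = {c:.2f} wt%  ->  gamma_c ~ {g:.3f}")
```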
The response to the imposed stress (τ) is recorded in terms of steady state strain rate ( each Actigum™ concentration displayed a similar shape for all polymer concentrations. Figure6shows an example of such a rheogram obtained in the case of a 2 wt% simulant.A yield zone with a solid-like behavior ( 0 → • γ grey zone in figure6) is first observed until the progressive departure from a quasi vertical slope to the flow zone showing a shearthinning behavior (concave rheogram beyond the yield stress value) is reached. The yield zone is the manifestation of the yielding behavior of the gel structure that has already been characterized using SAOS stress amplitude sweeps tests. In SS flow mode, the transition between solid and liquid-like behaviors is defined by a stress value quoted τ 0HB . This value is also referred in the literature as a yield stress but gives different values compared to the yield stress measured in SAOS mode (see discussion part). This is why a different notation (τ 0HB ) is used to described this yield stress deduced from the fitting of SS flow experimental data by a 3 parameters Herschel-Bulkley model (equation 5) allowing to account for both the SS yield stress (viscoplastic behavior) and shear-thinning behavior. τ 0HB (Pa) is the SS flow yield stress, K (Pa.s n ) and n (-) are the consistency and flow indexes respectively deduced from the HB modelling. It is worth noting that, as SS flow curves (as well as the HB model) do not account for the time dependent flow behavior (thixotropy), the observed yield stress τ 0HB is not an inherent property of the material since the structural strength of the gel is shear history and thus time dependent. the fluid actually flows. In the case of mucus simulants used in this work, the overshoot behavior shows a strong dependency on the Actigum™ concentration. On the one hand, the τ peak value increases as the simulant concentration increases (figure3(a) and eq. structure. Concerning this latter property, further investigations are necessary to fully characterize the thixotropic nature of mucus simulants, in particular to propose a time dependent rheological. This is clearly an interesting prospect for future works. Figure 1 : 1 Figure 1: SAOS sweeps for a 1.5wt% sample. (a) Stress sweep ( Hz 2 1 π Figure 2 :Figure 3 :Figure 4 : 1 πFigure 5 :Figure 6 : 234156 Figure 2 : Evolution of G' (a), G" (b), tan(δ) (c) vs stress amplitude for different Actigum TM concentrations Figure 7 :Figure 8 : 78 Figure 7: HB model parameters vs. Actigum TM concentration Figure 9 : 1 FFigure 6 : 916 Figure 9: 3ITT step 1 at 0.029 s -1 , step 2 at 100 s -1 , step 3 at 0.029 s -1 Table 1 . 1 Evolution of limit stresses for LVE range, E c and moduli plateau values vs active polymer concentrations. [Actigum TM ](wt%) τ y (Pa) τ f (Pa) τ peak (Pa) E c (J.m -3 ) G' (Pa) G" (Pa) 0.50 0.32 1.55 - 0.024 2.07 0.88 0.75 0.63 2.88 2.37 0.035 5.52 2.02 1 1.96 4.92 5.02 0.073 26.56 6.06 1.50 4.15 9.20 9.32 0.143 61.18 12.34 2.00 6.98 15.00 14.78 0.360 67.64 14.66 Table 2 . 2 Evolution of the HB model parameters vs. active polymer concentrations. [Actigum TM ](wt%) τ τ τ τ 0HB (Pa) K(Pa.s n ) n 0.50 2.98 0.04 0.78 0.75 6.15 0.1 0.72 1 8.74 0.21 0.66 1.50 13.62 1.67 0.42 2.00 20.43 3.29 0.37 John Wiley & Sons, Inc.Journal of Biomedical Materials Research: Part A Journal of Biomedical Materials Research: Part A ACKNOWLEDGMENTS The authors are indebted to the PhysioAssist Co. and the Association Nationale de la Recherche et de la Technologie for the CIFRE grant 2014-1287 (O. 
Lafforgue PhD thesis) and to the Laserson Co. for kindly providing the Actigum TM and Viscogum TM polymers. S. Poncet also acknowledges the Canada Foundation for Innovation (John R. Evans Leaders Fund n°34582) and the Natural Sciences and Engineering Research Council of Canada through the Discovery Grant (RGPIN-2015-06512) for their financial support.
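As a numerical aside on the steady-state flow analysis above, the three-parameter Herschel-Bulkley model of equation 5 can be fitted with a standard nonlinear least-squares routine. The sketch below is only illustrative: the flow-curve points are synthetic, generated from the 2 wt% parameters of Table 2 with added noise, and the function and variable names are ours rather than the authors'.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_0, K, n):
    """Steady-state Herschel-Bulkley stress: tau = tau_0 + K * gamma_dot**n."""
    return tau_0 + K * gamma_dot**n

# Illustrative flow curve built from the 2 wt% Table 2 parameters
# (tau_0HB = 20.43 Pa, K = 3.29 Pa.s^n, n = 0.37) with 2% multiplicative noise.
rng = np.random.default_rng(0)
gamma_dot = np.logspace(-2, 2, 25)                     # shear rate (1/s)
tau = herschel_bulkley(gamma_dot, 20.43, 3.29, 0.37)
tau *= 1 + 0.02 * rng.standard_normal(tau.size)

popt, pcov = curve_fit(herschel_bulkley, gamma_dot, tau,
                       p0=[10.0, 1.0, 0.5],            # rough initial guess
                       bounds=([0, 0, 0], [np.inf, np.inf, 1.0]))
tau_0, K, n = popt
print(f"tau_0HB = {tau_0:.2f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")
```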
01402012
en
[ "sdv.imm", "sdv.mp.vir" ]
2024/03/05 22:32:18
2016
https://pasteur.hal.science/pasteur-01402012/file/Mounce%20et%20al_Submitted%20article_v10Apr2015.pdf
Bryan C Mounce Teresa Cesaro Gonzalo Moratorio Jan Peter Anna Hooikaas Scott W Yakovleva Everett Clinton Werneke Enzo Z Smith Etienne Poirier Matthieu Simon-Loriere Prot Enzo Z Poirier Everett Clinton Smith Carole Tamietti Sandrine Vitry Etienne Simon-Loriere Romain Volle Cécile Khou Matthieu Prot Marie-Pascale Frenkiel Kenneth A Stapleford Anavaj Sakhuntabai Francis Delpeyroux Nathalie Pardigon Marie Flamand Giovanna Barba-Spaeth Monique Lafon Mark R Denison Marco Vignuzzi email: [email protected] Inhibition of polyamine biosynthesis is a broad-spectrum strategy against RNA viruses à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Emerging viruses present an extraordinary threat to human health, given their sudden and unpredictable appearance and the potential for rapid spread among the human population. Recent emergence of chikungunya virus (CHIKV) in the Americas 1 , Middle East respiratory syndrome coronavirus (MCoV) in the Arabian peninsula [START_REF] Holmes | MERS-CoV enigma deepens as reported cases surge[END_REF] and Ebola virus in Western Africa 3 highlight the struggles to contain outbreaks. A significant hurdle is the availability and implementation of antiviral therapies to treat the infected or protect at-risk populations, such as family members and healthcare workers. While several compounds show promise in vitro and in vivo, these recent outbreaks underscore the need to accelerate drug discovery, as well as to explore therapeutic avenues of broad antiviral activity. In this report, we describe the antiviral effects of difluoromethylornithine (DFMO, eflornithine), a potent suicide inhibitor of ornithine decarboxylase (ODC1) [START_REF] Metcalf | Catalytic irreversible inhibition of mammalian ornithine decarboxylase (E.C.4.1.1.17) by substrate and product analogs[END_REF] , a critical enzyme in polyamine synthesis. We show that DFMO is active against diverse families of RNA viruses. Our data show that polyamines are a general requirement for viral RNA synthesis. DFMO is bioavailable and currently used in treating trypanosomiasis [START_REF] Milord | Efficacy and toxicity of eflornithine for treatment of Trypanosoma brucei gambiense sleeping sickness[END_REF] and hirsutism [START_REF] Wolf | Randomized, double-blind clinical evaluation of the efficacy and safety of topical eflornithine HCl 13.9% cream in the treatment of women with facial hair[END_REF] , and given its tolerance in humans [START_REF] Carbone | Bioavailability study of oral liquid and tablet forms of αdifluoromethylornithine[END_REF] , may be an immediately available and viable option for controlling infection during outbreaks of significant concern. Polyamines are small, positively-charged molecules involved in several cellular processes, including proliferation [START_REF] Gerner | Polyamines and cancer: old molecules, new understanding[END_REF] , apoptosis [START_REF] Schipper | Involvement of polyamines in apoptosis. Facts and controversies: Effectors or protectors? 
Semin[END_REF] , ion channel regulation [START_REF] Williams | Interactions of polyamines with ion channels[END_REF] , DNA conformation [START_REF] Thomas | Polyamine-mediated conformational perturbations in DNA alter the binding of estrogen receptor to poly(dG-m5dC).poly(dG-m5dC) and a plasmid containing the estrogen response element[END_REF] , and transcription [START_REF] Law | Polyamine regulation of ribosome pausing at the upstream open reading frame of S-adenosylmethionine decarboxylase[END_REF][START_REF] Frugier | Synthetic polyamines stimulate in vitro transcription by T7 RNA polymerase[END_REF] . The biosynthesis of polyamines is regulated by several enzymes, but the conversion of ornithine into putrescine by ornithine decarboxylase (ODC1) is the bottleneck step 14 . Although early work shows that polyamines are included in virions [START_REF] Gibson | Compartmentalization of spermine and spermidine in the herpes simplex virion[END_REF][START_REF] Lanzer | Polyamines in vaccinia virions and polypeptides released from viral cores by acid extraction[END_REF][START_REF] Fukuma | Polyamines in bacteriophage R17 and its RNA[END_REF] or facilitate viral replication [START_REF] Hodgson | Ornithine decarboxylase activity in uninfected and vaccinia virus-infected HeLa cells[END_REF][START_REF] Isom | Stimulation of ornithine decarboxylase by human cytomegalovirus[END_REF] , their general role in viral infection is not established for RNA viruses [START_REF] Tuomi | Inhibition of Semliki Forest and herpes simplex virus production in alpha-difluoromethylornithine-treated cells: reversal by polyamines[END_REF][START_REF] Pohjanpelto | Polyamine depletion of cells reduces the infectivity of herpes simplex virus but not the infectivity of Sindbis virus[END_REF] . We utilized CHIKV to study the role of polyamines in alphavirus infection. CHIKV titers were significantly reduced (p=0.010) in BHK-21 cells treated for four days with 500 μM DFMO and completely rescued with the addition of exogenous polyamines (Fig. 1a). The complete phenotypic reversal with exogenous polyamines suggests that DFMO treatment was non-toxic to cells, which we corroborated with viability assays (Supplementary Fig. 1). The biogenic polyamines putrescine (Put) and spermidine (Spd) also fully rescued viral titers when added individually, but the polyamine precursor, ornithine (Orn), did not (Supplementary Fig. 2), indicating that the biogenic polyamines are required for robust viral replication. Additionally, CHIKV titers were not enhanced with polyamine treatment alone, indicating that viral replication is maximized with endogenous levels of polyamines (Supplementary Fig. 3). We observed the same phenotype in Aedes albopictus C6/36 cells (Fig. 1b), primary C57/Bl6 murine embryonic fibroblasts (Fig. 1c), and green monkey kidney Vero-E6 cells (Fig. 1d). For Sindbis virus (SINV), a closely-related alphavirus, we also observed reduced titers (p=0.002) with DFMO treatment and full rescue with exogenous polyamines (Fig. 1e). To confirm that DFMO was reducing polyamine content of treated cells, we performed immunofluorescence with an antibody directed against the biogenic polyamines (spermine, spermidine, and putrescine). BHK-21 cells were treated with DFMO and subsequently infected with CHIKV containing mCherry-tagged nsp3 [START_REF] Kümmerer | Construction of an infectious Chikungunya virus cDNA clone and stable insertion of mCherry reporter genes at two different sites[END_REF] to visualize replication compartments. 
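The p-values quoted in this record come from one-tailed Student's t-tests on n = 3 replicates (see the figure legends below). A minimal sketch of such a comparison on log10-transformed titers is given here; the titer values are hypothetical placeholders, not the measured data.

```python
import numpy as np
from scipy import stats

# Hypothetical plaque-forming-unit titers (PFU/mL), n = 3 per condition
untreated = np.log10([2.1e7, 3.4e7, 2.8e7])
dfmo      = np.log10([4.0e5, 6.5e5, 3.1e5])

# One-tailed Student's t-test: is the DFMO-treated mean lower?
t, p_two_sided = stats.ttest_ind(untreated, dfmo, equal_var=True)
p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
print(f"t = {t:.2f}, one-tailed p = {p_one_sided:.4f}")
```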
Upon DFMO treatment, polyamine signal was significantly reduced (Fig. 1f). With CHIKV infection, no gross difference was noted in polyamine content or localization, and polyamines did not appear to localize precisely with replication compartments. In addition, we directly visualized individual polyamines within untreated or DFMO-treated cells by thin-layer chromatography [START_REF] Madhubala | Polyamine Protocols[END_REF] . DFMO treatment reduced the total amount of polyamines by approximately 73% (Fig 1g). As a further confirmation that polyamines are critical to CHIKV replication, we used another ODC1 inhibitor, POB, and observed similar reduction in viral titers (Fig. 1h). DFMO-mediated knockdown of viral titers of Semliki Forest Virus relied on a four-day pretreatment [START_REF] Tuomi | Synthesis of Semliki-forest virus in polyamine-depleted baby-hamster kidney cells[END_REF] . For CHIKV, BHK-21 cells required a 48-hour pretreatment to significantly impact titers (p=0.038), further reduced with 72-or 96-hour pretreatment (Fig. 1i). This pretreatment was effective at MOI 0.1 and 1, but not 10 for both CHIKV and SINV (Supplementary Fig. 4). Previous reports described a role for polyamines in herpes simplex virus-1 [START_REF] Wallace | The Effect of Polyamines on Herpes Simplex Virus Type 1 DNA Polymerase Purified from Infected Baby Hamster Kidney Cells (BHK-21/C13)[END_REF] , Semliki Forest virus [START_REF] Tuomi | Synthesis of Semliki-forest virus in polyamine-depleted baby-hamster kidney cells[END_REF] , and hepatitis C virus [START_REF] Korovina | Biogenic polyamines spermine and spermidine activate RNA polymerase and inhibit RNA helicase of hepatitis C virus[END_REF] polymerase activity or RNA replication, and human cytomegalovirus viral assembly [START_REF] Gibson | L-alphadifluoromethylornithine inhibits human cytomegalovirus replication[END_REF] . Entry of CHIKV was not impacted, as viral RNA was equivalent (p=0.282) in untreated and DFMO-treated cells at 2 hours post infection (Fig. 2a). Furthermore, we observed significant inhibition of RNA synthesis for both genomic (p=0.001) and subgenomic (p=0.039) messengers at 24 hours (Fig. 2a,b). SINV viral genomes were similarly reduced (Fib. 2c). Although polyamines did not specifically localize with replication compartments as assayed by immunofluorescence (Fig. 1f), we asked whether polyamines interact with the replication complex or with viral RNA. BHK-21 cells were infected for 16 hours, lysed and immunoprecipitated with an anti-polyamine antibody to analyze coimmunoprecipitation of replication components. The viral nsp2 (helicase, methyltransferase), nsp3 and nsp4 (polymerase) were indeed found to coimmunoprecipitate (Fig. 2d). To provide additional evidence of this interaction, we performed RNA immunoprecipitation with the same anti-polyamine antibody. Both genomic and subgenomic RNA co-immunoprecipitated with polyamines (Fig. 2e), further implicating them in viral RNA replication. To test whether polyamines directly stimulate viral transcription, we measured polymerase activity of both CHIKV and SINV using an in vitro replication (IVR) assay with replication complexes purified from infected cells. Membrane fractions from infected cells were incubated in buffer alone, or buffer containing exogenous polyamines, and polymerization was measured by radionucleotide incorporation after a 3-hour incubation. 
Samples incubated with polyamines exhibited a significant increase (p=0.021 and p=0.015 for CHIKV and SINV, respectively) in viral RNA synthesis (Fig. 2f, quantitated in Fig. 2g,h). Similar analyses by qRT-PCR and using the CHIKV replicon system exhibited the same phenotype (Fig. 2i). In these assays, polyamines could be either directly stimulating the replication complex, or stabilizing templates and nascent genomes. To discern between these hypotheses, viral RNA was incubated in IVR reaction mixtures using membranes from uninfected cells, in the presence or absence of polyamines. Rather than stabilizing viral RNA, we observed significant degradation of viral genomes when incubated with polyamines (Supplementary Fig. 5). Taken together, our data support that polyamines are directly stimulating replication complexes activity. To understand the sensitivity of CHIKV to DFMO, we infected BHK-21 cells treated with a range of concentrations, from 1 μM to 10 mM and determined the IC 50 to be 35.55 µM (13.05 to 96.08 μM) (Fig. 3a), which is near the measured plasma concentration in patients treated with oral DFMO [START_REF] Carbone | Bioavailability study of oral liquid and tablet forms of αdifluoromethylornithine[END_REF] . At concentrations above 100 μM, titers were significantly reduced (p<0.002 for each), and no virus was recovered with 10 mM treatment. Similar results were obtained for SINV (Supplementary Fig. 6). Additionally, a 2013 Caribbean strain of CHIKV exhibited significant sensitivity to DFMO that was completely reversed with exogenous polyamine treatment (Fig. 3b). While the required pre-exposure treatment may be a viable option for susceptible populations during an outbreak, such as medical professionals treating infected individuals or cohabitants of individuals with communicable diseases, treating the broader population prior to infection is impractical for alphaviruses. To test whether DFMO could reduce viral titers if given after infection, we infected BHK-21 cells with CHIKV at a MOI of 0.0001 (100 viral particles), to mimic a more natural infectious dose, and treated cells with 1 mM DFMO just after infection. CHIKV titers were significantly reduced (p=0.0004) by 32 hours and remained significantly suppressed (p<0.003) through 72 hours (Fig. 3c). To further investigate potential clinical applications of DFMO, we examined whether it could be combined with another antiviral pharmaceutical, ribavirin [START_REF] Beaucourt | Ribavirin: a drug active against many viruses with multiple effects on virus replication and propagation. Molecular basis of ribavirin resistance[END_REF] . We found that DFMO synergizes with 400 μM ribavirin to significantly reduce CHIKV titers (p=0.038 and 0.0553) in BHK-21 cells beyond either of the antivirals individually (Fig. 3d). Polyamines are ubiquitous molecules and as such, we wished to determine whether their requirement, and the effects of DFMO that we observed for alphaviruses, were more broadly applicable. Thus, we explored several RNA viruses from different taxonomic families: the positive-sense coronavirus Middle Eastern respiratory syndrome virus (MCoV, Fig. 3e); the positive-sense picornaviruses coxsackievirus B3 (CVB3, Fig. 3f), poliovirus (PV, Fig. 3g), and enterovirus-71 (EV-71, Fig. 3h); the positive-sense flaviviruses dengue fever virus-1 (DENV1, Fig. 3i), Zika virus (ZIKV, Fig. 3j), Japanese encephalitis virus (JEV, Fig. 3k), yellow fever virus (YFV, Fig. 3l), and West Nile virus (WNV, Fig. 
3m); the negative-sense rhabdoviruses vesicular stomatitis virus (VSV, Fig. 3n) and rabies virus (RABV, Fig. 3o); and the negative-sense, segmented bunyavirus Rift Valley fever virus (RVFV, Fig. 3p). Despite the diversity in viral family and cell types, DFMO treatment significantly reduced viral titers for all viruses (p<0.05 for all), and in all cases the infection phenotype was rescued by the addition of exogenous polyamines. These results reveal that polyamines may be a general requirement for the replication of all RNA viruses. Our results uncover a universal role of polyamines in RNA virus replication and a broad antiviral activity of DFMO against every RNA virus we tested. Although the relatively long pretreatment time required to knockdown endogenous polyamines precludes the use of DFMO in most situations of acute infection, we propose DFMO as a potential preventative or early-response therapeutic in particularly severe outbreaks. Given that DFMO is already clinically approved for treating African sleeping sickness in humans [START_REF] Milord | Efficacy and toxicity of eflornithine for treatment of Trypanosoma brucei gambiense sleeping sickness[END_REF][START_REF] Pepin | Difluoromethylornithine for arseno-resistant Trypanosoma brucei Gambinese sleeping sickness[END_REF] , the rapid implementation of DFMO during serious outbreaks may warrant immediate attention. Specifically, this relatively well-tolerated drug could provide added protection for at-risk populations, such as healthcare workers or family members of afflicted individuals, to prevent spread of disease. Furthermore, since many RNA virus infections are characterized by several days of incubation before onset of symptoms, treatment of individuals soon after suspected exposure may delay replication or reduce titers enough, in favor of the mounting immune response, to reduce mortality. Derivatives of DFMO and other molecules that manipulate the polyamine biosynthetic pathway have been developed as anticancer agents [START_REF] Casero | Recent advances in the development of polyamine analogues as antitumor agents[END_REF] as well as for the treatment of leishmaniasis [START_REF] Singh | Antileishmanial effect of 3-aminooxy-1-aminopropane is due to polyamine depletion[END_REF] . These drugs should also be examined as potent inhibitors of RNA viruses. Figure 1 . 1 Figure 1. DFMO exhibits antiviral activity against alphaviruses. Chikungunya virus (CHIKV) titers in (a) baby hamster kidney (BHK) fibroblasts, (b) Aedes albopictus (C6/36) cells, (c) C57/Bl6 murine embryonic fibroblasts, or (d) Vero-E6 epithelial cells treated for four days with 500 μM DFMO or in combination with exogenous polyamines and infected for 24 h. (e) Sindbis virus (SINV) titers in BHK fibroblasts treated for four days with 500 μM DFMO. (f) Representative immmunofluorescence against polyamines (green) and mCherry-tagged viral nsp3 in untreated and DFMO-treated (500 μM, four days) mock-or CHIKV-infected BHK cells for 8 h. (g) Representative thin layer chromatography analysis of cellular lysates from BHK cells treated with 500 μM DFMO or in combination with exogenous polyamines. Individual polyamines are labeled for putrescine (put), spermidine (spd), and spermine (spm). (h) CHIKV titers in BHK cells treated with the ODC1 inhibitor, POB (200 μM, four day pretreatment, 24 hpi). (i) CHIKV titers from DFMO-treated BHKs with increasing pretreatment times. 
* P ≤ 0.05, ** P ≤ 0.01, and *** P ≤ 0.001 versus untreated control or as indicated, one-tailed Student's T-test, n=3. Error bars represent mean ± one standard deviation. Immunofluorescence images and chromatograph are representative of three and two independent replicates, respectively. Figure 2 . 2 Figure 2. Polyamines stimulate RNA-dependent RNA polymerase activity. (a) Time-course of CHIKV genomes upon four-day 500 μM DFMO treatment (n=3). (b) CHIKV subgenomes and (c) SINV genomes as quantitated by qRT-PCR, following four-day 500 μM DFMO treatment and 24-h infection (n=4). (d) Western blot of coimmunoprecipitation of nsp2, nsp3, and nsp4 with polyamines (representative of two independent replicates). (e) RNA immunoprecipitation of CHIKV genomes and Figure 3 . 3 Figure 3. DFMO is broadly antiviral. (a) CHIKV titers at 24 hpi following four-day pretreatment with increasing concentrations of DFMO in BHK cells (n=3). (b) Caribbean-isolated CHIKV titers in BHK cells following DFMO pretreatment and 24-h infection (n=3). (c) CHIKV titer time-course following inoculation at MOI 0.0001 (10 PFU) and DFMO treatment 1 hpi at 1 mM (n=3). (d) Effect of DFMO pretreatment, ribavirin treatment, and combined treatment, on CHIKV titers (n=2). (e-p) Virus titers following DFMO treatment and rescue with exogenous polyamines. (e) Middle East respiratory syndrome coronavirus (MCoV) at 24 hpi in Vero81 cells (f) Coxsackvirus B3 (CVB3) titers at 24 hpi in HeLa cells (g) Poliovirus (PV) at 24 hpi in HeLa cells (h) Enterovirus-A71 (EV-A71) after 24-h infection of HeLa cells (i) Dengue virus-1 (DENV1) at 24 hpi in BHK-21 cells (j) Zikavirus (ZIKV) at 24 hpi in BHK-21 cells (k) Japanese encephalitis virus (JEV) at 24 hpi in BHK-21 cells (l) Yellow fever virus (YFV) at 96 hpi in BHK-21 (m) West Nile virus (WNV) at 24 hpi in BHK-21 cells (n) Vesicular stomatitis virus (VSV) at 16 hpi in BHK-21 cells at MOI 0.1. (o) Rabies virus (RABV) somatic inclusions measured in fluorescent intensity per cell at 24 hpi in primary cortical neurons (p) Rift Valley fever virus (RVFV) at 24 hpi in BHK-21 cells. * P ≤ 0.05 versus untreated control. Error bars represent mean ± one standard deviation. * P ≤ 0.05, ** P ≤ 0.01, and *** P ≤ 0.001 versus untreated control or as Figure 3 3 Figure 1 Acknowledgments This work was supported by the European Research Council (ERC Starting Grant no. 242719), the French Government's Investissement d'Avenir program, Laboratoire d'Excellence "Integrative Biology of Emerging Infectious Diseases" (grant n°ANR-10-LABX-62-IBEID). the United States Public Health Service awards R01 AI108197 (M.R.D.) and U19 AI109680 (E.C.S) from the National Institutes of Health. We thank Andres Merits for nsp3-mCherry construct and Gorben Piljman for the CHIKV replicon. We thank Benjamin tenOever, Britt Glaunsinger, Vera Tarakanova, and Antonio Borderia for scientific discussions.
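The IC50 of 35.55 µM quoted above is reported with a confidence interval, but the fitting procedure is not detailed in this excerpt. A common approach is a four-parameter log-logistic (Hill) fit of titer against drug concentration, sketched below; the dose-response points are synthetic (generated from an assumed IC50 of 35 µM plus noise), and the function names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter log-logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50)**slope)

# Synthetic log10 titers vs DFMO concentration (µM), illustration only
rng = np.random.default_rng(1)
conc = np.array([1, 3, 10, 30, 100, 300, 1000, 10000], dtype=float)
titers = hill(conc, 3.8, 7.9, 35.0, 1.0) + 0.1 * rng.standard_normal(conc.size)

popt, _ = curve_fit(hill, conc, titers, p0=[4.0, 8.0, 50.0, 1.0], maxfev=10000)
bottom, top, ic50, slope = popt
print(f"IC50 ~ {ic50:.1f} uM (Hill slope {slope:.2f})")
```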
01706476
en
[ "phys.cond.cm-ds-nn", "phys.meca.mema" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01706476v2/file/180204_manuscript_v5.pdf
Alexandre Nicolas email: [email protected] Lptms Jörg Rottler email: [email protected] Orientation of plastic rearrangements in two-dimensional model glasses under shear The plastic deformation of amorphous solids is mediated by localized shear transformations involving small groups of particles rearranging irreversibly in an elastic background. We introduce and compare three different computational methods to extract the size and orientation of these shear transformations in simulations of a two-dimensional (2D) athermal model glass under simple shear. We find that the shear angles are broadly distributed around the macroscopic shear direction, with a more or less Gaussian distribution with a standard deviation of around 20 • about the direction of maximal local shear. The distributions of sizes and orientations of shear transformations display no substantial sensitivity to the shear rate. These results can notably be used to refine the description of rearrangements in elastoplastic models. I. INTRODUCTION Polydisperse foams, highly concentrated emulsions, molecular glasses, and bulk metallic glasses exhibit microscopically heterogeneous mechanical properties. As a result, these disordered solids do not deform affinely under shear. Instead, their deformation features bursty rearrangements of small groups of particles embedded in an otherwise elastically deforming medium. It is now well accepted that these microscopically localized shear transformations (ST) are the elementary carriers of plastic deformation in sheared amorphous solids [1,2]. By straining its surroundings, each ST gives rise to a characteristic long-range deformation halo around it [3,4], which mediates most collective effects in the material, such as cascades of rearrangements [5,6]. Based on this picture at the particle scale, mesoscale elastoplastic models of amorphous plasticity have been formulated, which divide the material into small regions (blocks) that are loaded elastically until they fail plastically [7]. The failure of a block is described as an ideal ST which partly dissipates the local stress and partly redistributes it to the other blocks. For an ST aligned with the principal direction of the macroscopic shear in d-dimensional space, the Green's function G for the nonlocal redistribution of the shear stress satisfies G(r, θ) C cos[4θ + 2θ pl ]/r d (1) in the plane of the transformation, with a dimensiondependent prefactor C, where (r, θ) are the polar coordinates in the frame centered on the plastic block and θ pl (defined precisely in Eq. ( 3)) refers to the orientation of the individual ST. The far field limit of this expression for G matches Eshelby's solution for a spherical inclusion endowed with a spontaneous strain [8], and was shown to suitably describe the disorder-averaged response of an amorphous solid to an ideal ST in atomistic simulations [9]. Mesoscale models, however, rest on several assumptions concerning the STs, including their idealized "Eshelby" nature, their equal size, and their orientation along the direction of maximal local shear [10,11], or even along the macroscopic shear direction in scalar models [12,13] (in this regard, ref. [14] is an exception). To give them stronger footing, experimental and numerical efforts have been made to characterize plastic rearrangements, as exposed in Sec. II. 
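To visualize the angular structure of Eq. (1), the sketch below evaluates the quadrupolar kernel on a 2D grid (d = 2). The prefactor C and the core regularization radius are left as free parameters, since their values depend on the elastic constants and are not fixed by Eq. (1); the function name is ours.

```python
import numpy as np

def eshelby_kernel_2d(x, y, theta_pl=0.0, C=1.0, r_min=1.0):
    """Far-field shear-stress propagator G ~ C*cos(4*theta + 2*theta_pl)/r^2 (Eq. (1), d = 2).

    theta_pl is the orientation of the shear transformation (radians);
    r_min regularizes the unphysical divergence at the origin.
    """
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    r_eff = np.maximum(r, r_min)
    return C * np.cos(4.0 * theta + 2.0 * theta_pl) / r_eff**2

# Evaluate on a square grid around an ST sitting at the origin
L = 50.0
xs = np.linspace(-L / 2, L / 2, 101)
X, Y = np.meshgrid(xs, xs)
G = eshelby_kernel_2d(X, Y, theta_pl=np.deg2rad(20.0))
# Four alternating lobes, with maxima along theta = -theta_pl/2 + k*pi/2
print(G.shape, G.max(), G.min())
```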
In particular, much attention has been paid to their shape and their size [1,[15][16][17], while the question of their orientation has remained largely unexplored, despite its obvious relevance for the buildup of spatial correlations between individual STs [18,19]. In this contribution, we simulate the shear deformation of a two-dimensional (2D) athermal model glass (described in Sec. III) with molecular dynamics in order to study the statistical properties of actual rearrangements for different shear rates. Strong emphasis is placed on their angles of failure. To this end, we propose (in Sec. III) and compare (in Sec. V) several numerical methods to extract these angles. We find that these angles are broadly distributed around the macroscopic shear direction, with a more or less Gaussian distribution with a standard deviation of around 20 • . Overall, the sizes and orientations of the detected rearrangements are fairly insensitive to the shear rate, but many of them actually differ from ideal STs. Even when the ideal ST description works reasonably well, local methods relying exclusively on the displacements (or forces) of the most active rearranging particles give poor estimates of the ST orientation; the latter is recovered if a broader selection of particles near the ST is considered. II. PREVIOUS ENDEAVORS TO CHARACTERIZE PLASTIC REARRANGEMENTS Leaving aside Schwarz's early attempts to classify rearrangements in a 3D foam at rest [20], Argon and Kuo were the first to report localized rearrangements in a disordered system, more precisely a 2D foam ('bubble raft') that was used as a model system for metallic glasses [1]. Interestingly, they mentioned two types of STs: sharp slips of rows of about 5 bubbles in length and more diffuse cooperative rearrangements of regions of 5 bubbles in diameter. In the 1980's, Princen studied the swap of neighbors between four bubbles (in 2D) to account for some rheological properties of foams and concentrated emulsions [21]; the detailed dynamics of this swap process were investigated much later in clusters of 4 bubbles [22]. In slowly sheared colloidal glasses, STs were directly visualized using confocal microscopy and their core was observed to be around 3 particle diameters in linear size [15]. In metallic glasses, direct visualization of STs cannot be achieved experimentally but estimates for their volumes can be obtained indirectly (e.g., via nanoindentation tests and their sensitivity to the shear rate) and typically correspond to a few dozen atoms (∼ 30 in the Zr-based glass studied with nano-indentation tests in [23]), with a possible dependence on the sample morphology (for instance, for a Ni-Nb metallic glass, the ST size was reported to decrease from 83 atoms to 36 atoms when the material was cast into a µm-thin film [16]). Numerically, the most comprehensive characterization of rearrangements to date was performed by Albaret et al. [17] on a 3D atomistic model for amorphous bulk silicon under quasi-static shear. Rearrangements were detected by artificially reverting the applied strain increments at every step and deducing the irreversible changes that took place; the detected rearrangements were then modeled as a collection of Eshelby inclusions, whose sizes (or volumes V 0 ) and eigenstrains were fitted to best reproduce the displacement field measured during the actual strain increment. 
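A minimal implementation of the non-affine residual d2_min used in panel (a) is sketched below (the Falk-Langer construction): the best uniform displacement gradient G over a neighborhood is obtained by linear least squares, and the residual misfit is d2_min. The example positions and displacements are made up, and periodic-boundary wrapping is omitted for brevity.

```python
import numpy as np

def d2_min(r0, u0, r_neigh, u_neigh):
    """Falk-Langer non-affine residual for one particle (2D).

    r0, u0           : position and displacement of the central particle
    r_neigh, u_neigh : (N, 2) arrays for the neighbors within the cutoff
    Returns (d2, G), with G the best-fit uniform displacement gradient.
    """
    dr = r_neigh - r0                # relative reference positions
    du = u_neigh - u0                # relative displacements
    # Least-squares solution of du ~= dr @ Gt, with Gt = G.T (2x2)
    Gt, *_ = np.linalg.lstsq(dr, du, rcond=None)
    residual = du - dr @ Gt
    return float(np.sum(residual**2)), Gt.T

# Tiny illustrative neighborhood (values are made up)
r0, u0 = np.array([0.0, 0.0]), np.array([0.01, 0.00])
r_n = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.3], [0.5, -0.9]])
u_n = np.array([[0.02, 0.01], [0.01, -0.01], [0.00, 0.01], [0.015, -0.02]])
d2, G = d2_min(r0, u0, r_n, u_n)
print(f"d2_min = {d2:.3e}")
```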
These inclusions were shown to account for all plastic effects visible in the stress-strain curves of these materials and the effective volume γ V 0 (where γ is the maximal shear component of ) was found to be exponentially distributed, with a typical size of 70 Å3 , while both dilational and contractional volumetric strains were observed. The evolution of the effective volume γ V 0 during the transformation was computed in [24] by detecting the saddle point; the value of the effective volume at this saddle point, called activation volume, was found to amount to around 20% of the final γ V 0 . III. NUMERICAL MODEL AND METHODS A. Model and simulation protocol In order to get information on the morphology and orientation of STs, we perfom molecular dynamics simu-lations of an amorphous material (a glass) under simple shear, in 2D and in the athermal limit. The model glass is a binary mixture of A and B particles, with N A = 32500 and N B = 17500, of respective diameters σ AA = 1.0 and σ BB = 0.88, confined in a square box of dimensions 205σ AA × 205σ AA , with periodic boundary conditions. The system, at density 1.2, was prepared by quenching an equilibrated configuration at temperature T = 1 with a fast quenching rate dT dt = 2 • 10 -3 , at constant volume. The particles, of mass m = 1, interact via a pairwise Lennard-Jones potential, V αβ (r) = 4 αβ σ αβ r 12 - σ αβ r 6 , where α, β = A, B, σ AB = 0.8, AA = 1.0, AB = 1.5, and BB = 0.5. The potential is truncated at r = 2.5σ AA and shifted for continuity. Simple shear γ is imposed at rate γ by deforming the (initially square) box into a parallelogram and remapping the particle positions. After an initial transient (20% strain), the system reaches a steady state, which is the focus of the present study. In the athermal limit, the equations of motion read dr i dt = v i ; m dv i dt = - i =j ∂V (r ij ) ∂r ij + f D i . The dissipative force f D i experienced by particle i is computed with a Dissipative Particle Dynamics scheme, viz., f D i = - j =i ζw 2 (r ij ) v ij • r ij r 2 ij r ij (2) where w(r) ≡ 1 -r rc if r < r c ≡ 3σ AA , 0 otherwise. Here, v ij ≡ v iv j denotes the relative velocity of particle i with respect to j, r ij ≡ r ir j , and ζ = 1/τ LJ controls the damping intensity (the effect of the damping was studied in [25]). Equations ( 2) are integrated with the velocity Verlet algorithm with a time step dt = 0.005. In all the following, we use τ LJ ≡ mσ 2 AA / as the unit of time and σ AA as the unit of length. B. Detection of rearrangements As expected, the simulations display fast localized rearrangements. Several measures are available to identify them and are known to yield comparable results [26]. In Fig. 1, we compute three of these diagnostics of nonaffinity on a typical snapshot of a simulation at shear rate γ = 10 -4 . These diagnostics are based on the displacements δu j of particles j during a short time interval [t, t + δt], with δt = 2. Panel (a) shows the amplitude of the minimized mean-square difference between the actual displacements δu j of particles j in a circular region C around a given particle r 0 and any set of affine displacements, i.e., displacements resulting from a uniform displacement gradient G during δt [2]. This measure of the nonaffine residual strain has become a quasi gold standard for identifying plastic rearrangements in amorphous solids. Panel (b) shows a simpler measure, namely the amplitude of the average kinetic energy of a particle averaged over δt. 
The motivation is that in an athermal system, only particles undergoing a rearrangement are expected to have large marginal velocities. Lastly, in panel (c) we consider the magnitudes of the (linearized) forces f d 2 min = min G rj ∈C [δu j -δu 0 -G • (r j -r 0 )] 2 (H) i = -j H ij (t) • δu j , where H ij (t) = ∂ 2 V ∂rirj is the Hessian matrix at time t. These are the forces that effectively drive plastic rearrangements. As discussed by Lemaître [19], they also localize in regions of high non-affine strain. Figure 1 confirms that the three methods studied give very similar results. Accordingly, for convenience, we choose to use a criterion based on kinetic energies to detect rearrangements. More precisely, particles with a kinetic energy larger than an arbitrary threshold e min are considered to be rearranging; the threshold value is lowered to 3 /4 e min for the neighbors of rearranging particles, in order to obtain more compact ST shapes, where two particles are defined as neighbors if they are separated by a distance smaller than 2. Finally, rearranging particles are partitioned into clusters of neighbors, each corresponding to an individual ST (clusters with fewer than 3 particles were discarded). The distributions p(S) of sizes of the resulting clusters for distinct thresholds e min and distinct shear rates γ are represented on Fig. 3; neither the threshold nor the shear rate seem to considerably alter the seemingly slower-than-exponential (but fasterthan-power-law) decay of p(S). In the following, we shall see that all our results are fairly insensitive to these parameters e min and γ. We have also checked (though inexhaustively) that the distributions of orientations of rearrangements detected on the basis of the linearized forces C. Methods to measure ST orientations In order to study ST orientations, a rearrangement is likened to a circular Eshelby inclusion with an eigenstrain , i.e., a region whose stress-free state is not reached for a deformation (r) = 0, but for (r) = (if it were unconstrained). The eigenstrain can be split into a deviatoric part, associated with shape change, and a volumetric part, associated with local dilation, viz., = sin 2θ pl cos 2θ pl cos 2θ pl -sin 2θ pl + v 1 0 0 1 (3) with 0. We define the ST orientation as the angle of failure θ pl ∈] -90 • , 90 • ]; it is thus the angle between the elongational principal direction of the ST and that of the macroscopic shear, as sketched in Fig. 2. Fit to an Eshelby inclusion We are now left with the problem of determining in practice. Drawing inspiration from Albaret et al. [17], we exploit the elastic field induced by an inclusion à la Eshelby. For homogeneous isotropic elastic media, the deformation in within any embedded elliptical inclusion will be constant. It naturally follows that, for a circular inclusion, the principal directions of in and will be identical, owing to symmetry arguments. Outside the circular inclusion (of radius a and centered at r = 0), the induced displacements δu are given by [27] δu 1 (r) = x 1 8(1 -ν) ã2 2(1 -2ν) + ã2 ( 11 -22 ) + 2ã 2 ( 11 + 22 ) + 4 1 -ã2 x2 1 11 + x2 2 22 (4) + x 2 8(1 -ν) ã2 • 2 12 2(1 -2ν) + ã2 + 4 1 -ã2 x2 1 δu 2 (r) = x 2 8(1 -ν) ã2 2(1 -2ν) + ã2 ( 22 -11 ) + 2ã 2 ( 11 + 22 ) + 4 1 -ã2 x2 1 11 + x2 2 22 + x 1 8(1 -ν) ã2 • 2 12 2(1 -2ν) + ã2 + 4 1 -ã2 x2 2 , where r = (x 1 , x 2 ) and tildes denote distances rescaled by the norm of r (viz., x1 = x 1 /r). For each rearranging cluster, the equivalent size a and eigenstrain components , v , and θ pl defined in Eq. 
( 3) are calculated as the parameters minimizing the squared difference between the particle displacements δu i over δt = 2 and the theoretical expectations of Eq. ( 4), for all particles i that are at a distance between 2a and a large distance d max away from the cluster center; the quality of the fit will be measured by the relative squared difference χ 2 . (Note that the results turned out to be insensitive to the value of d max .) However, unlike ref. [17], the displacements δu i are not extracted from the actual dynamical simulation. Instead, in order to avoid the superposition of many STs, we run an auxiliary simulation for each rearranging cluster so as to measure the response induced only by this cluster. Pragmatically, starting from the configuration at t, we move particles j belonging to the cluster by a fraction α 1 of their actual displacements δu j , pin them to their new positions and obtain the response αδu i of the other particles to this local rearrangement by minimization This strategy, which we refer to as MD/Esh, will be our main method to access the ST morphology. One should nevertheless be aware that the results of the auxiliary simulations display a slight sensitivity to the details of the minimization procedure, but the consistency of our results will prove that this sensitivity can be overlooked. Azimuthal modes of the displacements induced by the STs A variant of this method may save us the cost of the fitting step. As mentioned in the introduction, the strain field δ induced by the shear part ( ) of an ST has a fourfold azimuthal symmetry. Therefore, focusing on δ xy for instance, the m = 4 azimuthal mode of δ xy (r) contains all information pertaining to the ST orientation (whereas the m = 2 component results from the dilational part v ). In practice, using the auxiliary simulations described above, we compute the local strain around each particle (i.e. the tensor δ i which minimizes the local non-affine deviations d 2 min introduced in Sec. III B), coarse-grain the xy-shear strain field into boxes of linear size r c = 3 (see Fig. 4), and compute the azimuthal Fourier modes c m of the resulting coarse-grained field δ c xy along a circle of radius r (much larger than the cluster size), viz., c m = 2π 0 e -imθ δ c xy (r, θ)dθ. ( 5 ) Calculating c 4 for the quadrupolar strain field and writing it as c 4 = |c 4 |e iφ4 , we find that the angle of failure is related to φ 4 via θ pl = φ 4 /2. We call this method Esh/azi. Methods exclusively based on the forces or displacements of rearranging particles The two methods described above involve minimization steps and/or additional (auxiliary) simulations and are therefore numerically costly. To bypass this cost, we will try to get information on the ST by using only the observed displacements δu i of the particles i within the rearranging cluster. A first idea is to compute the internal part σ of the local stress tensor: σ = -V -1 i f i ⊗r i , where V is the cluster size, the sum runs over all particles i in the cluster, each subjected to an average force f i and undergoing a displacement δu i between t and t + δt. The analogue for the displacements is the tensor M = -V -1 i δu i ⊗ r i . Positions r i are expressed relative to the cluster centers of gravity, and the mean force (or displacement) among the ST particles is drawn off the f i (or u i ). 
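As an illustration of the azimuthal-mode route described above, the sketch below extracts θ pl = φ 4 /2 from the m = 4 Fourier mode of the xy-strain sampled on a circle, a discrete version of Eq. (5). The synthetic test field is a pure quadrupole, so the recovered angle should match the input; on real data the strain would first be coarse-grained and interpolated onto the circle, and the function name here is ours.

```python
import numpy as np

def theta_pl_from_strain(eps_xy_on_circle):
    """Angle of failure from the m=4 azimuthal mode of eps_xy sampled on a circle.

    eps_xy_on_circle : strain values at angles theta_k = 2*pi*k/N, k = 0..N-1.
    Returns theta_pl in degrees, using theta_pl = arg(c_4)/2 (cf. Eq. (5)).
    """
    eps = np.asarray(eps_xy_on_circle, dtype=float)
    N = eps.size
    theta = 2.0 * np.pi * np.arange(N) / N
    c4 = np.sum(np.exp(-4j * theta) * eps) * (2.0 * np.pi / N)   # discrete Eq. (5)
    return np.degrees(np.angle(c4) / 2.0)

# Synthetic check with a pure quadrupolar field cos(4*theta + 2*theta_pl)
theta_pl_true = 25.0
N = 360
theta = 2.0 * np.pi * np.arange(N) / N
eps = np.cos(4.0 * theta + 2.0 * np.deg2rad(theta_pl_true))
print(theta_pl_from_strain(eps))   # prints ~25.0 (degrees)
```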
A yield angle θ pl can be extracted from these tensors by symmetrising them and writing their deviatoric (traceless) part s dev as s dev = -α sin 2θ pl cos 2θ pl cos 2θ pl -sin 2θ pl , (6) with a coefficient α > 0 (the minus sign comes from the sign convention used to define the Cauchy stress). These methods will be referred to as Loc. We have checked that they yield the same result as the inspection of the azimuthal mode c 4 of the response of an isotropic homogeneous elastic continuum to the set of pointwise forces F i = f i , or F i ∝ δu i for the displacement-based version, as computed by means of the Oseen-Burgers tensor. (We have underlined the word continuum to insist on the difference with the MD/azi method). IV. CHARACTERISTICS OF STS In this Section, we employ the method based on fitting rearranging clusters to Eshelby inclusions in order to unveil key characteristics of the rearrangements. Although STs are often idealized as pure shear transformations, the volumetric deformations are found not to be negligible in practice. In Fig. 5, we report the distributions of the dilational strengths πa 2 v and the shear strengths πa 2 of the STs detected at γ = 10 -5 , where πa 2 is the surface of the inclusion and v and were defined in Eq. ( 3). The corresponding plots at γ = 10 -4 , 10 -3 are very similar. As in ref. [17], we observe an exponential distribution of shear strengths, with a typical value around 0.3 here. One should however note that, since the present simulations are not quasi-static, the detected rearrangements (computed over δt = 2) often do not cover the whole transformation, which lasts for several time units. Moving on to the ST orientations, we plot the distribution p(θ pl ) of angles of failure obtained at the three shear rates in Fig. 6(a). We observe no significant sensitivity to the shear rate. Besides, the central part of p(θ pl ) can be approximated by a normal distribution with standard deviation δθ pl = 23 • , but p(θ pl ) has heavier tails. If we discard the STs for which the elastic response significantly deviates from the Eshelby fit (Fig. 6(b)), the peak of p(θ pl ) sharpens slightly, but this does not strongly affect its shape. It is interesting to compare these results with those predicted by a mainstream tensorial elasto-plastic model in simple shear [START_REF] Nicolas | The Flow of Amorphous Solids: Elastoplastic Models and Mode-Coupling Approach[END_REF]. The latter also showed a Gaussianlike distribution p(θ pl ) which was virtually insensitive to the shear rate, but which was by far narrower than the present ones, with standard deviations of 3 -4 • that could increase up to ≈ 7 • if cooperativity in the flow was enhanced by increasing the duration of plastic events or if elasto-plastic blocks were advected along the flow, instead of being static (see Chap. 9.2, p. 111, of [START_REF] Nicolas | The Flow of Amorphous Solids: Elastoplastic Models and Mode-Coupling Approach[END_REF]). In these models, angular deviations from the macroscopic shear direction θ pl = 0 are exclusively due to cooperative effects, whereby the stress redistributed during an ST (Eq. ( 1)) may load other blocks along a direction θ pl = 0, depending on their relative positions. The much broader distribution p(θ pl ) measured in the present atomistic simulations hints at the impact of the granularity of the local medium, which may favor failure along a direction distinct from that of the local loading. V. 
COMPARISON BETWEEN DISTINCT METHODS TO MEASURE ST ORIENTATIONS Having characterized the strengths and orientations of STs, we now discuss to what extent the ST characteristics can be extracted from methods that do not rely on fits to Eshelby inclusions. A. Azimuthal mode of the induced strain We start by considering the MD/azi method introduced in Sec. III C 2, which extracts the quadrupolar azimuthal mode of the xy-strain (from the auxiliary MD simulations) on a circle of radius r to determine θ pl . The angles of failure θ pl measured at distinct r (r = 17 and r = 23) are typically within ±10 • of one another (data not shown); there are outliers, but these very generally correspond to STs that strongly deviate from the Eshelby fits. Hereafter, we fix the radius at r = 17. Figure 7(a) shows that the individual MD/azi angles of failure agree relatively well with those determined with the MD/Esh method used so far, with absolute differences smaller than 20 • for STs with reasonable Eshelby fits. B. Methods based on local forces or displacements Turning to the results obtained with local methods (Sec. III C 3), we report that we have not found any correlation between the MD/Esh angles of failure and those determined with force-based local methods, whether it be the total force f i or the 'linearized' forces f (H) i (both being averaged over δt). On the other hand, displacementbased local methods broadly agree with MD/Esh, even though this does not immediately transpire from the scatter plot of Fig. 7(b). To prove the overall consistency of the methods despite this large noise, we split the detected STs into 10 • -wide bins depending on their orientation θ pl MD/Esh and, for each bin, plot the average angle θ pl Loc (measured with the displacement-based local method) in Fig. 8. On a technical note, one should mention that, to average over angles θ 1 , ..., θ n , we computed the circular average arg j e iθj . With these averaged data, the two methods are found to be in good accordance [29]. To shed light on the discrepancies in the one-to-one comparison, we extend the local method by including the displacements (measured in the auxiliary simulation) of all particles within a distance R of the center of gravity of the ST, instead of only the rearranging particles, with the expectation that both methods converge when R → ∞. In Fig. 9, we apply this method to STs detected at γ = 10 -5 for which a mismatch between θ pl Loc and θ pl MD/Esh was observed, despite fairly good fits to Eshelby inclusions. The figure suggests a reasonably quick convergence between the two methods, although the radii R at which convergence is reached strongly depend on the ST. This implies that the deficiency of the pristine Loc method stems from its biased selection of too few particles for the computation of the local tensor. . Differences ∆θ pl between the angles of failure found with the local method based on the displacements of all particles within a distance R of the ST center of gravity (in the auxiliary simulation) and the MD/Esh method for four STs that displayed good Eshelby fits (χ 2 < 1) but large discrepancies with the Loc method. For R = 0, the local method makes only use of the rearranging particles as identified by the kinetic energy threshold. VI. CONCLUSION This paper has introduced and compared three approaches to extract the size and orientation of STs in sheared amorphous solids. 
Rearranging particles were grouped into clusters based on a threshold criterion for the kinetic energy, which is reliable for athermal solids, and their displacements over a small time interval were recorded. Once these clusters are extracted, auxiliary simulations are performed in which the particles taking part in a given ST are displaced and the remainder is relaxed via energy minimization. In the first approach, which we consider to be the most general one, the resulting displacment field is then analyzed by fitting to the ideal "Eshelby" solution for the far-field displacements. In the second method, this fitting is avoided by instead computing the azimutal mode of the (coarse grained) strain field resulting from the ST. Angles of failure obtained from these two methods agree well with each other as long as the Eshelby fit itself is reasonable. A third and purely local method that avoids auxiliary simulations altogether consists in computing the deviatoric part of the displacement (inertia) tensor after the rearranging clusters have been identified. These angles of failure agree less well with those from Eshelby fits in a point by point comparison, but can be shown to be overall consistent after the noise is reduced through averaging. The inclusion of a larger number of particles improves the agreement between the methods considerably. In practice, this extended local method is the most efficient one as long as the STs do not overlap. It will be interesting to compare the angles of failure of STs to the local configurations prior to failure, in particular the direction of the maximal shear stress and the directional dependence of the local yield stress, which can be measured by deforming a small region embedded in a purely affinely deforming region [START_REF] Patinet | [END_REF]31]. Moreover, our results suggest that mesoscopic elastoplastic models [7] should be refined to better describe the deviations from the idealized Eshelby picture observed at the particle scale, and the sensitivity of their predictions to such microscopic details should be examined. Figure 1 . 1 Figure 1. Detection of plastic events via (a) the d 2 min criterion, (b) by the average kinetic energy of a particle and (c) the magnitude of the linearized "Hessian" forces (see text). The scale bar is 10 particle diameter. Figure 2 . 2 Figure 2. Representation of the angle of failure θ pl . The orange arrows indicate the elongational and contractional directions of an ideal ST, while the dashed line represents the elongational direction of the macroscopic shear. Figure 3 . 3 Figure 3. Distribution of sizes S of the rearranging clusters detected with the kinetic energy based criterion (a) for two different threshold values emin at γ = 10 -5 and (b) for three different shear rates γ with emin = 0.11. The thin dashed line in the top panel is proportional to exp(-S/S0) with S0 = 7. those shown below. Figure 4 . 4 Figure 4. Elastic reponse computed in the auxiliary MD simulations (see text) to a selection of three STs exhibiting a quadrupolar response. In the left column, particles in the ST are colored in orange, while the colors of the other particles depend on the norms of their displacements δui (warmer colors denote larger displacements). The arrows with wide shafts represent the directions of δui for a random subset of particles, while the (directions of) displacements represented by narrower arrows are the response to the best-fitting Eshelby inclusion. 
The figures shown are zooms on a 50 × 50 portion of the global system (of size 205×205). The right column presents the coarse-grained strain field δ c xy computed from the associated auxiliary simulations, in a 100 × 100 square around the cluster.
Figure 5. Distribution of the dilational strengths πa 2 v (circles) and the shear strengths πa 2 (squares) of the STs detected at γ = 10 -5 (with threshold emin = 0.11). The dashed blue line is proportional to exp(-x/0.3).
Figure 6. Distributions of angles of failure θ pl obtained with the MD/Esh method. (a) Comparison of p(θ pl ) between distinct shear rates γ. The dashed line represents a normal distribution with standard deviation δθ pl = 23 • . (b) Distribution p(θ pl ) at γ = 10 -5 before (filled blue) and after (red) removing the STs which substantially deviate from their Eshelby fits (χ 2 > 0.5).
Figure 7. Scatter plot of angles of failure θ pl measured at γ = 10 -5 with (a) the MD/Esh method vs. the MD/azi method and (b) the MD/Esh method vs. the displacement-based Loc method. Large (orange) crosses refer to STs with good Eshelby fits, while small (blue) crosses indicate poor fits; more precisely, the sizes of the crosses are inversely proportional to the χ 2 -deviation from the fit.
Figure 8. Comparison between the angles of failure θ pl MD/Esh and θ pl Loc measured with the MD/Esh method and the displacement-based local method, respectively. The STs have been binned into 10 • -wide angular windows, according to the value of θ pl MD/Esh .
ACKNOWLEDGEMENTS We thank Jean-Louis Barrat for discussions related to this study. JR is supported by the Discovery Grant Program of the Natural Sciences and Engineering Research Council of Canada. This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.
01596484
en
[ "spi", "spi.meca.mefl" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01596484/file/preprint.pdf
O Lafforgue N Bouguerra S Poncet I Seyssiecq J Favier S Elkoun Thermo-physical properties of synthetic mucus for the study of airway clearance Keywords: synthetic mucus, cystic fibrosis, rheology, thermo-physical properties 311-315 were experiementally determined. This simulant is mainly composed of a galactomannan gum and a scleroglucan. It was shown that thermophysical properties of synthetic mucus are dependant of scleroglucan concentrations. More importantly and for some scleroglucan concentrations, the syntetic mucus, exhibits, somehow, comparable thermophysical properties to real bronchial mucus. An insight on the microstructure of this simulant is proposed and the different properties enounced previously have been measured for various scleroglucan concentrations and over a certain range of operating temperatures. This synthetic mucus is found to mimic well the rheological behavior and the surface tension of real mucus for different pathologies. Density and thermal properties have been measured for the first time. INTRODUCTION Mucus is a complex biological material whose role in human health is to lubricate and protect organs such as lungs, vagina, eyes or gastrointestinal tract among others. It acts as a physical barrier against pathogens and noxious particles [START_REF] Cone | Barrier properties of mucus[END_REF] while preventing tissue dehydration and allowing useful molecules (for example, ions or proteins) to be transferred through. The present article focuses on the thermophysical properties of a bronchial mucus simulant. Many respiratory diseases such as cystic fibrosis (CF) are related to abnormal compositions of lung mucus related to a default of transfer for certain ions through the epithelial surface. This results in a hyper viscosity of the mucus, impairing the process of mucociliary clearance, and exacerbating inflammations and chronic infections. As reported by Flume et al., [START_REF] Flume | Cystic fibrosis pulmonary guidelines: Airway clearance therapies[END_REF] there is to date, no clear consensus about the most efficient therapy to support the airway clearance and treat adequately patients suffering from CF or chronic obstructive pulmonary disease (COPD). Presently, most physicians used a combination of drugs, [START_REF] Majima | Mucoactive medications and airway disease[END_REF] respiratory physiotherapy (with a therapist and/or an airway clearance device) and regular physical activities. In addition, some devices were designed to increase mucus secretion volume to help the airway clearance and increase the patient autonomy. Either based on vibrations, percussions or positive expiratory pressures, the functioning of these devices is based on the shear-thinning and thixotropic properties of bronchial mucus. A given shear stress is applied on the chest or during the exhalation phase of the respiratory cycle, increasing mucus fluidity, thus helping patients to expectorate it more easily. One could cite as examples the FrequencerV R , [START_REF] Cantin | Mechanical airway clearance using the Frequencer electro-acoustical transducer in cystic fibrosis[END_REF] which transmits acoustic and mechanical sinusoidal vibrations at different locations inside the chest, or more recently the SimeoxV R , which imposes an oscillatory air depression during the exhalation phase. 
[START_REF] Benkoussas | Etude exp erimentale de l'influence de la d epression m ecanique sur le transport et le comportement rh eologique du mucus bronchique synth etique dans une trach ee artificielle[END_REF] Mucus exhibits a complex structure with different characteristic scales. 7 It is made of 90-95% water, 2-5% mucins, 1-2% lipids, 1% salts, and 0.02% of DNA and other molecules such as cells debris. [START_REF] Vasquez | Chapter 2: Complex fluids and soft structures in the human body[END_REF] It forms a physical gel structured by a three dimensional mucin matrix. Mucins are high molecular weight glycoproteins (length 0.5-10 lm) insuring a structural protection function. Within the airways, they can be found as two types [START_REF] Thornton | From mucins to mucus: Toward a more coherent understanding of this essential barrier[END_REF] : monomeric and oligomeric mucins, the latter are able to form a gel and are mainly responsible for the complex rheological properties of the mucus gel. In human native mucus, the mucins of the gel-forming type (mainly MUC2, MUC5AC, MUC5B and MUC6) build with other proteins, a three dimensional gel network. These mucins consist in a peptide chain with Oglycosylated regions (branched sugar chain, endowing the gel-forming ability) and cysteine-rich naked regions (non glycosylated or N-glycosylated). The mucins are oligomerized by strong disulfide inter-molecular bonds; there are also disulfide intra-molecular bonds which stabilize the naked domains. These latter branch free domains are hydrophobic and entail the cross-link of two naked mucin regions trying to avoid water. In addition to these weak hydrophobic bonds and the strong disulfide bonds, several interactions play a role in the cross-linked network: entanglements due to the macromolecules large sizes, hydrogen bonds between sugar branches which are weak but numerous and Van der Waals forces between oligosaccharide moieties. [START_REF] King | Mucus and its role in airway clearance and cytoprotection[END_REF] Boat et al. [START_REF] Boat | Biochemistry of mucus[END_REF] have summarized the biochemistry of mucus, while Verdugo [START_REF] Verdugo | Supramolecular dynamics of mucus[END_REF] provided a deep insight into the dynamics of the different mucus components. At a macroscopic scale, mucus is also a non-Newtonian fluid, exhibiting properties of viscoelasticity, shear-thinning and thixotropy, while, at a microscale, it behaves as a low viscosity fluid. The reader can refer to Rubin [START_REF] Rubin | Mucus structure and properties in cystic fibrosis[END_REF] or Lai et al. 7 and the references herein for a state-of-the-art up to 2009 on the micro-and macro-rheology of different types of mucus, including humans and animal ones. One focuses here only on human bronchial mucus from patients suffering from CF or COPD, for which rheology has been studied starting from the 1960s [START_REF] Denton | Rheology of human lung mucus[END_REF][START_REF] Davis | The rheological properties of sputum[END_REF] and correlated with the mucociliary clearance some years later. [START_REF] Litt | Mucus rheology. Relevance to mucociliary clearance[END_REF] As an example, Puchelle et al. [START_REF] Puchelle | Biochemical and rheological data in sputum. Relationship between the biochemical constituents and the rheological properties of sputum[END_REF] measured the biochemical and rheological properties of bronchial mucus from 21 chronic bronchitic patients without any medical treatment. 
The rheological properties of sputum are found to be affected by interactions between its different constituents, namely proteins (like the secretory immunoglobulins, serum albumin, transferrin), the mucins and the nucleic acids. For this pathology, other authors 18 also assessed the important role of spinability, together with the ones of viscosity and elasticity, on the mucociliary transport rate. The thixotropic property of bronchial mucus was investigated some years later by the same authors using bronchial mucus and simulants and was interpreted as a modification in the three-dimensional structure at a microscale. [START_REF] Puchelle | Elastothixotropic properties of bronchial mucus and polymer analogs. I. Experimental results[END_REF] Puchelle et al. [START_REF] Puchelle | Rheological properties controlling mucociliary frequency and respiratory mucus transport[END_REF] correlated the mucus viscosity and the mucociliary transport. Beyond an optimal value of dynamic viscosity close to 12 Pa.s, the mucociliary frequency and transport rate decrease. The spinability and adhesiveness of mucus are also supposed to modify/regulate the mucociliary clearance. Rubin et al. [START_REF] Rubin | Collection and analysis of respiratory mucus from subjects without lung disease[END_REF] characterized the bronchial mucus of healthy patients under two shear rates (1 and 100 rad/s) and compared their mechanical impedance and loss tangent with values obtained on other types of mucus sampled from patients suffering from CF, and dogs. Concerning human mucus, they did not notice any difference between men and women nor depending on the age. There was also no significant differences between healthy or CF human mucus nor dogs mucus. Nielsen et al. [START_REF] Nielsen | Elastic contributions dominate the viscoelastic properties of sputum from cystic fibrosis patients[END_REF] performed oscillatory, creep and steady shear rheological measurements over a wide range of characteristic times (from 10 23 to 10 6 s) using 23 mucus samples from CF patients. They confirmed that such bronchial mucus exhibits viscoelastic properties with a significant elastic recovery. At low shear rates, a nearly constant steady viscosity is observed for long shearing times. For given shear rates, the measured viscosities are significantly different from the ones previously reported in the literature, highlighting the high variability of sputum characteristics. The main issues in analyzing bronchial mucus properties can be listed as follows: 1. The difficulty to collect it: on one hand, the collecting method often entails changes in the properties of concern (e.g., saliva contamination, hypersecretion of mucins in response to the collecting tool). 7 On the other hand, the collectable amount is limited and requires to establish clinical studies. 2. Its huge variability in terms of mucin size, type and concentration. They depend indeed on many factors: patient, pathology (disease conditions, 9 severity 23 ) and daily practices (food, smoker/non smoker, [START_REF] Kollerstrom | A difference in the composition of bronchial mucus between smokers and non-smokers[END_REF] practice of physical activities). . . Hence, many studies are performed on mucus simulants that are engineered in accordance with the properties to be probed (bulk rheology, surface tension, diffusivity. . .). 
These mucus models can be produced by cell cultures, which provides a great relevance compared to native mucus but are still limited in quantity, or by the formulation of synthetic gels made of polymeric materials which can be produced in large amounts but hardly gather all mucus properties. [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF] Most of the studies to date on synthetic mucus used either guar gum [START_REF] King | On the transport of mucus and its rheologic simulants in ciliated systems[END_REF] or locust bean gum. [START_REF] Hasan | Effect of artificial mucus properties on the characteristics of airbone bioaerosol droplets generated during simulated coughing[END_REF][START_REF] King | Clearance of mucus by simulated cough[END_REF][START_REF] Hassan | Clearance of viscoelastic mucus simulant with airflow in a rectangular channel, an experimental study[END_REF] Madsen et al. [START_REF] Madsen | A rheological evaluation of various mucus gels for use in in-vitro mucoadhesion testing[END_REF] compared four commercial simulants of mucus, Sigma or Orthana mucin types. Banerjee et al. [START_REF] Banerjee | Effect of phospholipid mixtures and surfactant formulations on rheology of polymeric gels, simulating mucus, at shear rates experienced in the tracheobronchial tree[END_REF] investigated the influence of three therapeutic surfactants on the viscosity of mucus simulant mainly composed of gum tragacanth. Such polymeric gel was preferred for the presence of fucose, used to simulate fucomucins. The mucus exhibits a shearthinning behavior with a flow index inferior to unity. Shah et al. [START_REF] Shah | An in vitro evaluation of the effectiveness of endotracheal suction catheters[END_REF] developed a simulant based on a polyethylene oxide with addition of resin to test the efficiency of endotracheal suction catheters. Hamed et al. [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF] developed a mucus simulant with a complex composition based on pig gastrin mucins to mimic native tracheal mucus in terms of rheology and surface tension. The main objective of the present article is to fully characterize the thermophysical properties of a synthetic mucus for different mucin concentrations and operating temperatures. The empirical correlations and different measured properties could then be used for a better understanding of the respiratory diseases impacting the composition of mucus. It could be used to develop better adapted treatments or improve existing numerical models dedicated to the simulation of airway clearance in human lungs, and go a step forward toward the development of numerical lungs as a predictive tool for medical diagnosis. To the best of the author's knowledge, this experimental study is the first to propose the characterization of the thermal properties and an insight into the structure of a synthetic mucus. The article is organized as follows: The materials and the preparation of the mucus simulant are described in Section "Measurement Preparation". The techniques used to measure the dynamic viscosity, the density, the surface tension, the heat capacity, the thermal diffusivity and the thermal conductivity are presented in Section "Measurement Techniques". The results are discussed in Section "Results and Discussion" for different mucins concentrations over a given range of temperature before some conclusions in Section "Conclusion". 
MATERIALS AND PREPARATION Materials Viscogum TM FA (Cargill TM ), a galactomannan gum derived from locust beans and Actigum TM CS 6 (Cargill TM ), a scleroglucan obtained by aerobic fermentation of a Sclerotium fungus were kindly provided by the company Laserson (Etampes, France). Viscogum TM is composed of galactose and mannose (average of 1 galactose unit for 4 mannose residues). A chain of mannose is branched with galactose sugars irregularly distributed in smooth and substituted zones. Actigum TM consists in a glucose chain branched every three units by an additional glucose forming a three dimensional (triple helix) structure. Figure 1 shows the molecular structure of these two components. Sodium chloride a.r. (99.81% NaCl) and di-sodium tetraborate 10aq a.r. buffer substance (99.51% Na 2 B 4 O 7 .10H 2 O) were purchased from Chem-Lab NV (Zedelgem, Belgium). Distilled water used in all preparations were obtained from a settling tank (GFL 2012). Preparation of synthetic mucus Mucus simulants were prepared within glass bottles filled with 200 mg of distilled water. Each component was then very slowly poured into the solution stirred using a magnetic stirrer (IkamagV R RET) at room temperature (around 218C). The following protocol taken from Ref. 33 and adapted from Ref. 1 consists in the addition, in the following order, of 0.9 wt % of NaCl, 0.5 wt % of Viscogum TM FA (galactomannan) and a chosen fraction of Actigum TM CS 6 (scleroglucan). To approach the diversity of real mucus, seven different gels with scleroglucan concentration from 0.5 to 2 wt % (by increments of 0.25%) were prepared. The mixture was kept under agitation for 48 h at room temperature. After this period, a mass corresponding to 0.2 mL per 10 mL of di-sodium tetraborate at 0.02 M (buffer component) was added to cross-link the galactomannan chains. The high molecular weight branched macromolecules of sugar, once cross-linked, build a gel matrix that mimicks the mucin network patterning the native mucus. The concentration of tetraborate (around 4 3 10 24 mol/L) remains in the range [2.5 3 10 24 -7 3 10 24 ] mol/L found in the literature. [START_REF] Zahm | Role of simulated repetitive coughing in mucus clearance[END_REF][START_REF] Hasan | Effect of artificial mucus properties on the characteristics of airbone bioaerosol droplets generated during simulated coughing[END_REF][START_REF] Zahm | Tests effectu es avec l'appareil SIMEOX[END_REF] It is notewothy that higher concentrations, as proposed by Ref. 27 (4 3 10 23 mol/L) or more, lead to a gel with a jelly aspect. The agitation is kept for a few hours before storing the final mixture at 48C. Before performing the measurements, the gel is allowed to recover at room temperature and is fractionated into several 30 mL plastic vials. Figure 2 presents the microstructure of the simulant using both an optical microscope and a scanning electron microscope for a solution of 2 wt % in Actigum TM . For the optical microscopy, droplets of mucus simulant were placed between two glass plates and loaded in an optical microscope (Leica DMRX, Germany) equipped with a 103 lens and different objectives (Fluotar, Germany) from 53 to 633. For scanning electron microscopy, small amounts of simulant were sampled and dehydrated by series of alcohol washings before being critical-point-dried. The dried samples were then metalized with a gold/palladium alloy before being observed with a scanning electron microscope (Hitachi S-4700) operating at 2.0 kV. 
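Referring back to the preparation protocol above, the quoted cross-linker molarity is easy to verify: adding 0.2 mL of 0.02 M di-sodium tetraborate per 10 mL of gel should indeed give a concentration of about 4 × 10⁻⁴ mol/L. The short Python sketch below assumes additive volumes, which is a reasonable approximation at this dilution.

```python
# Hypothetical dilution check for the tetraborate cross-linker
# (assumes additive volumes; values taken from the preparation protocol).

v_borate_ml = 0.2     # volume of tetraborate stock added per 10 mL of gel [mL]
c_borate_M = 0.02     # molarity of the stock tetraborate solution [mol/L]
v_gel_ml = 10.0       # gel volume the stock is added to [mL]

moles = c_borate_M * v_borate_ml / 1000.0                   # mol of tetraborate added
c_final_M = moles / ((v_gel_ml + v_borate_ml) / 1000.0)     # final concentration [mol/L]

print(f"final tetraborate concentration ~ {c_final_M:.2e} mol/L")
# ~3.9e-4 mol/L, close to the 4e-4 mol/L quoted in the text and within the
# [2.5e-4, 7e-4] mol/L range reported in the literature.
```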
The microstructure is very similar to the one of bronchial mucus with a complex tangled network of worm-like filaments. [START_REF] Suk | Rapid transport of muco-inert nanoparticles in cystic fibrosis sputum treated with n-acetyl cysteine[END_REF][START_REF] Manzenreiter | Ultrastructural characterization of cystic fibrosis sputum using atomic force and scanning electron microscopy[END_REF][START_REF] Schuster | Nanoparticle diffusion in respiratory mucus from humans without lung disease[END_REF] From Figure 2, the filaments appear as linear chains at a larger scale (top) but at a smaller scale (bottom), they tangle to form a low viscosity network as reported by Voynow and Rubin [START_REF] Voynow | Mucins, Mucus, and Sputum[END_REF] for bronchial mucus. At the macroscale, the length of the filaments remains here of the order of 200 lm, while their thickness is rather constant and equal to 2.4 lm. At the microscale, the medium pore size is hardly accurately measurable with accuracy but it remains in accordance with the literature on human bronchial mucus: between ten and hundreds of nm. For example, Kesimer et al. [START_REF] Kesimer | Molecular organization of the mucins and glycocalyx underlying mucus transport over mucosal surfaces of the airways[END_REF] reported for real bronchial mucus that the typical length of mucins ranges between 0.2 and 1.5 lm depending on the type of mucins, while their typical width varies from 7 to 12 nm. Nevertheless, in some respiratory diseases, such as severe asthma, there is an intermolecular cross-linking between mucin chains making the structure more complex and the length of some mucin chains much longer. [START_REF] Voynow | Mucins, Mucus, and Sputum[END_REF] A secondary polymer structure consisting of DNA and filamentous actin copolymers may coexist with the mucin chains for CF patients. This structure consists of thicker and longer polymer bundles dissociated from the mucin network and induces a decrease in elasticity compared to pure mucins. As shown in, [START_REF] Voynow | Mucins, Mucus, and Sputum[END_REF] the characteristic length of such a structure is close to the one observed in Figure 2 (top). More surprisingly, such longer bundles of fiber-like mucus strands have been observed for the stomach mucus from rabbits. [START_REF] Bansil | The influence of mucus microstructure and rheology in Helicobacter pylori infection[END_REF] It will be shown in Section " Results and Discussion", that such simulant mimic the main properties of bronchial mucus for CF patients. Increasing the concentration in Actigum TM (not shown here) leads to higher density regions without modifying the microstructure. One can attribute this to an unavoidable shrinkage happening during the drying process, as also reported in the literature. [START_REF] Kesimer | Molecular organization of the mucins and glycocalyx underlying mucus transport over mucosal surfaces of the airways[END_REF] MEASUREMENT TECHNIQUES Rheological measurements The experiments were performed on a stress-controlled rheometer AR 550 (TA Instruments) equipped with a 50 mm/ 28 steel cone. Temperature was regulated by a Peltier plate. Dehydration was prevented by using a wet steel cover insuring a water saturated atmosphere around the sample. All the tests were realized at 208C and 378C. No significant differences were noted depending on the temperature. Thus, series of isothermal experiments were performed at 208C to limit dehydration issues. 
The viscoelastic properties of the simulant unbroken structure were investigated through a series of dynamical shear experiments (Small Amplitude Oscillatory Shear). The results were interpreted from the evolution of the elastic and viscous moduli and the loss angle (G′, G″, δ) in response to the sinusoidal load input. The stress dependency of the structure was observed via stress amplitude sweeps at a constant frequency (1 rad·s⁻¹). More detailed rheological measurements will be presented in a further article. Densimetry measurements Density measurements have been performed using the DMA 5000 M manufactured by Anton Paar GmbH. A 1 mL sample is placed in a U-shaped tube made of borosilicate glass, which is electronically oscillated at its characteristic frequency. This frequency varies with the density of the sample, which is deduced from the accurate measurement of the new characteristic frequency (U-shaped tube + sample) through a simple mathematical formula. The oscillations are measured by optical sensors. Temperature is controlled by a built-in Peltier thermostat. Two platinum Pt100 probes are used to measure the temperature of the sample, with an accuracy of 0.01°C. For high-viscosity samples, corrections are applied to avoid errors due to viscosity. This leads to highly repeatable and accurate density measurements: the standard deviations on the repeatability and accuracy of the density measurements are 10⁻⁶ and 5 × 10⁻⁶ g/cm³, respectively. The experimental protocol has been validated using distilled water, with a standard deviation of 5 × 10⁻⁶ g/cm³ compared to the expected values between 19°C and 41°C. Surface tension measurements The surface tension of the mucus simulant was measured by means of the du Noüy ring method using a semi-automatic tensiometer (Surface Tensiomat 21, Fisher Scientific). A platinum-iridium du Noüy ring (circumference 5.93 cm, radius ratio between the ring and the wire R/r = 53.2) suspended from a counter-balanced lever arm was immersed in the mucus simulant samples before being driven upward by a torsion wire (radius 0.007″) until the fluid film carried by the ring broke down. The apparent surface tension is determined from the force necessary to induce the breakage. Each measurement was repeated at least three times at room temperature (23°C). This method was found to be more accurate than the Wilhelmy plate balance method described by Hamed and Fiegel. [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF] To validate the experimental protocol, preliminary tests were performed using distilled water, acetone and toluene as calibration samples. For all these samples, the relative standard deviations and the repeatability remain better than 0.5%. Thermal diffusivity and conductivity measurements Both thermal conductivity and diffusivity were measured using the THW-L1 Liquid Thermal Conductivity System from Thermtest Thermophysical Instruments. It measures simultaneously the thermal conductivity, k, and the thermal diffusivity, α, based on the Transient Hot Wire (THW) method. Coupled with a system controlling the temperature (heat exchanger + thermostat bath circulator), this device allows a complete characterization of the thermal conductivity and diffusivity within the range −40°C to 200°C. The main advantage of this method for its application to complex fluids is its capacity to experimentally eliminate the error due to natural convection. 
The principle of the hot-wire method is based on an ideal and constant heat generation source, an infinitely long and thin continuous line, dissipating the heat into an infinite test medium. A constant current is supplied to a platinum wire (diameter 0.1 mm, length 35 mm) to generate the temperature rise. The wire serves as both the heat source and the temperature sensor. Heating the wire by Joule effect causes a variation of its resistance, so its temperature is measured as a function of time using a Wheatstone bridge and a data acquisition system. A PT100 platinum resistance thermometer enables the temperature of the sample to be measured independently. The THW sensor including the sample cell is made of stainless steel. The required sample volume is 50 mL. The thermal conductivity value is determined from the heating power and the slope of the temperature change on a logarithmic time scale. The thermal diffusivity is then evaluated through the thermal conductivity, temperature and heating power at a given time. To validate the experimental protocol, preliminary tests were performed using distilled water as the working fluid between 20°C and 50°C. For both thermal conductivity and diffusivity, the repeatability remains better than 1%. The relative standard deviations for k range between 0.1% at 20°C and 0.17% at 40.5°C, while for α it remains at 0.1% over this range of temperatures.
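The data reduction behind the transient hot-wire measurement described above can be illustrated with a short sketch. In the idealized line-source model, the wire temperature rise grows linearly with ln(t), so the thermal conductivity follows from the heating power per unit length q and the slope: k = q/(4π · dΔT/d ln t). The Python sketch below uses synthetic data and is only an illustration of this reduction; real instruments typically apply further corrections for finite wire length and cell effects.

```python
import numpy as np

# Idealized transient hot-wire reduction (line-source model):
# dT(t) ~ (q / (4*pi*k)) * ln(t) + const, hence k = q / (4*pi*slope).
q = 1.0                                  # heating power per unit wire length [W/m]
k_true = 0.41                            # conductivity used to build synthetic data [W/(m.K)]
t = np.linspace(1.0, 10.0, 200)          # time window used for the fit [s]
dT = q / (4.0 * np.pi * k_true) * np.log(t) + 0.002 * np.random.randn(t.size)

slope, intercept = np.polyfit(np.log(t), dT, 1)   # linear fit in ln(t)
k_est = q / (4.0 * np.pi * slope)
print(f"estimated thermal conductivity: {k_est:.3f} W/(m.K)")
```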
RESULTS AND DISCUSSION Rheological properties Bronchial mucus is a complex biological material with viscoelastic properties and a non-linear and time-dependent rheological behavior. 7 These rheological properties have been widely observed qualitatively on real native, pathological or simulant mucus. [START_REF] Rubin | Mucus structure and properties in cystic fibrosis[END_REF][START_REF] Denton | Rheology of human lung mucus[END_REF][START_REF] Davis | The rheological properties of sputum[END_REF][START_REF] Litt | Mucus rheology. Relevance to mucociliary clearance[END_REF][START_REF] Puchelle | Biochemical and rheological data in sputum. Relationship between the biochemical constituents and the rheological properties of sputum[END_REF][START_REF] Puchelle | Spinability of bronchial mucus. Relationship with viscoelasticity and mucous transport properties[END_REF][START_REF] Puchelle | Elastothixotropic properties of bronchial mucus and polymer analogs. I. Experimental results[END_REF][START_REF] Puchelle | Rheological properties controlling mucociliary frequency and respiratory mucus transport[END_REF][START_REF] Rubin | Collection and analysis of respiratory mucus from subjects without lung disease[END_REF][START_REF] Nielsen | Elastic contributions dominate the viscoelastic properties of sputum from cystic fibrosis patients[END_REF][START_REF] Puchelle | Rheological and transport properties of airway secretions in cystic fibrosisrelationships with the degree of infection and severity of the disease[END_REF] In the present section, the objective is to demonstrate that the present mucus formulation exhibits similar properties. Figure 3 displays a typical rheogram obtained from continuous shear stress ramps (shear stress ramps of ±12.5 Pa·s⁻¹) for a 2 wt% concentration in Actigum™ at 20°C. A continuous shear stress up and down ramp test has been performed using different geometries (Couette, plate and cone), leading to similar results. A progressive departure from an elastic behavior to a shear-thinning behavior is observed beyond a stress threshold. This observation is the manifestation of the progressive yielding of the gel structure. The experimental data (up curve or down curve) are well fitted by a Herschel–Bulkley model with a flow index lower than 1. Nevertheless, one has to keep in mind that the result of this experiment is subject to the time-dependent behavior of the material. The yield stress observed here is not an inherent property of the material, since the structural strength of the gel is time-dependent. As a consequence, in such a non-steady flow measurement, the up-curve apparent yield stress is above the down-curve apparent yield stress. The characterization of such a non-steady-state flow curve constitutes a first step toward a more complete characterization accounting for the steady-state flow curve and the time-dependent behavior as well. The thixotropic property of the mucus simulant is highlighted, in a purely qualitative manner, by the hysteresis loop in Figure 3 (yellow area).
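The Herschel–Bulkley fit mentioned above, τ = τy + K·γ̇ⁿ with a flow index n < 1, can be reproduced on any measured flow curve with a standard least-squares fit. The sketch below uses synthetic data; the parameter values are illustrative and are not those of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, K, n):
    """Shear stress tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * gamma_dot**n

# Synthetic flow-curve data (illustrative values only, not the paper's data).
gamma_dot = np.logspace(-2, 2, 40)                       # shear rate [1/s]
tau_data = herschel_bulkley(gamma_dot, 5.0, 2.0, 0.5)    # stress [Pa]
tau_data *= 1.0 + 0.02 * np.random.randn(gamma_dot.size) # small multiplicative noise

popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau_data, p0=[1.0, 1.0, 0.8])
tau_y, K, n = popt
print(f"tau_y = {tau_y:.2f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")  # expect n < 1
```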
Small Amplitude Oscillatory Shear (SAOS) sweep tests for a 2 wt% concentration in Actigum™, obtained at 20°C and a constant frequency of 1 rad·s⁻¹, have also been performed. Figure 4 presents the variations of the elastic G′ and viscous G″ moduli and the loss factor (tan δ) as a function of the oscillating stress amplitude. A yield point, denoted τy, that delimits the linear viscoelastic domain can be deduced from a 5% departure from the plateau behavior of G′. This is the stress limit above which the measurements no longer reflect the native structure, since a structural breakdown starts to occur. Beyond the yield stress, the moduli are no longer constant due to the breakdown of the gel network. Just above this critical point, the stress is large enough to initiate the structure breakdown but too small to allow flow. A flow point (τf) is then identified as the shear stress corresponding to the moduli crossover, that is, the point above which the material becomes more viscous than elastic due to a critical structure breakdown. Above the yield stress, the viscous modulus shows an overshoot before decreasing. According to Mezger 40 this is a common phenomenon in gels. The increase of deformation energy inducing the overshoot could be related to a progressive collapse of the network due to structure components with some freedom of motion. The resulting friction would thus be responsible for the high dissipated energy (increase in G″) until the final breakdown occurs when G″ displays a peak (τpeak). This kind of behavior has been classified as a Type III ("weak strain overshoot") behavior by Hyun et al. [START_REF] Hyun | Large amplitude oscillatory shear as a way to classify the complex fluids[END_REF] A review by Hyun et al. [START_REF] Hyun | A review of nonlinear oscillatory shear tests: Analysis and application of large amplitude oscillatory shear (laos)[END_REF] emphasizes the very diverse possible structural causes of this behavior depending on the class of soft material and stresses the need to perform LAOS (Large Amplitude Oscillatory Shear) sweep tests to probe the meaning of this behavior. Identical stress sweep tests have been performed on mucus simulants for the whole range of concentrations in Actigum™. Similar curves are obtained for all concentrations (data not shown here) and the characteristic values are summarized in Table I. Viscous and elastic moduli and yield stresses all increase with the polymer concentration. The overshoot is also observed at higher stress amplitudes for more concentrated samples. One could interpret this by the fact that more concentrated samples have a stronger network, which requires more energy to collapse. When the collapse begins, friction occurs and the energy dissipation expresses itself through the G″ overshoot. The peak height is also amplified by the Actigum™ concentration. When the structure starts collapsing, friction occurs between structural components. A more concentrated sample is expected to dissipate more friction energy as a result of its higher network density, entailing more chain promiscuity and entanglements. The plateau values of G′ and G″ shown in Figure 4 or in Table I agree particularly well with the experimental values for human cystic fibrosis sputum displayed in Figure 5a of the review of Lai et al. 7 Surface tension The surface tension of synthetic mucus with concentration in Actigum™ varying between 0.5 and 2 wt% has been measured by a du Noüy ring tensiometer at 23°C. The results are compared to former results obtained on real mucus [START_REF] Albers | Ring distraction technique for measuring surface tension of sputum: Relationship to sputum clearability[END_REF][START_REF] Bush | Mucus properties in children with primary ciliary dyskinesia: Comparison with cystic fibrosis[END_REF][START_REF] Daviskas | Effect of mannitol and repetitive coughing on the sputum properties in bronchiectasis[END_REF][START_REF] Bennett | Effect of a single 1200 mg dose of MucinexV R on mucociliary and cough clearance during an acute respiratory tract infection[END_REF] or mucus simulants [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF][START_REF] Schenck | Tensiometric and phase domain behavior of lung surfactant on mucus-like viscoelastic hydrogels[END_REF] in Table II. The surface tension is particularly interesting since it is closely related to the wettability and adhesiveness of bronchial mucus on the epithelium surface. It directly influences the efficiency of ciliary and cough clearance on the mucus transport. At low concentrations in Actigum™, the present values are close to that of pure water, whose surface tension slightly decreases from 72.8 mN·m⁻¹ at 20°C to 70 mN·m⁻¹ at 37°C. As already reported by Hamed and Fiegel, [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF] surface tension increases with cross-linking, due here to an increase of the concentration in Actigum™. Albers et al. [START_REF] Albers | Ring distraction technique for measuring surface tension of sputum: Relationship to sputum clearability[END_REF] also observed higher surface tensions for more solid-like sputum surfaces. The values reported here are significantly different from those obtained for native mucus, which contains surface-active molecules. This kind of surfactant considerably lowers the surface tension of mucus, down to approximately 30-34 mN·m⁻¹. [START_REF] Hof | In vivo determination of surface tension in the horse trachea and in vitro model studies[END_REF] For high concentrations, the results have to be considered with special care, as the du Noüy ring method requires corrections to account for the complex shape of the meniscus during the ring detachment. As discussed in, [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF] such a correction exists only for Newtonian fluids and cannot be applied here, leading to less reliable values of surface tension. 
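For completeness, the conversion from the raw du Noüy pull-off force to the apparent surface tension only involves the wetted length of the ring: the liquid film pulls on both sides of the wire, so γ ≈ F/(2C), with C the ring circumference. The sketch below uses an assumed maximum force, chosen only for illustration; as noted above, Harkins–Jordan-type correction factors are defined for Newtonian fluids and are therefore not applied here.

```python
# Apparent surface tension from a du Nouy ring pull-off force.
# The film wets both sides of the ring, so gamma ~ F / (2 * circumference).
# f_max below is an assumed, purely illustrative value.

circumference = 5.93e-2      # ring circumference quoted in the text [m]
f_max = 8.5e-3               # assumed maximum pull force at film rupture [N]

gamma_apparent = f_max / (2.0 * circumference)
print(f"apparent surface tension ~ {gamma_apparent * 1e3:.1f} mN/m")
# No Harkins-Jordan-type correction is applied, since such corrections are
# only defined for Newtonian fluids (cf. the discussion above).
```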
Other properties To the best of the authors' knowledge, the density, heat capacity and thermal diffusivity (or conductivity) of mucus simulants or sputum have not been considered in the literature. As will be shown below, these properties take different values compared to those of pure water. They have to be measured carefully for the different concentrations φ in Actigum™ at body temperature. From a thermal point of view, the respiratory system also plays the role of humidifier and heater for the inspired gas. Thomachot et al. [START_REF] Thomachot | Measurement of tracheal temperature is not a reliable index of total respiratory heat loss in mechanically ventilated patients[END_REF] measured the minimal and maximal temperatures in the trachea of 10 healthy patients. For an ambient air temperature around 23°C and mean body temperatures equal to 37.8°C, temperatures in the upper part of the trachea vary between 30 and 33°C. Considering the whole upper airways (nose, mouth, larynx...), they can vary between the ambient temperature and 33°C. It therefore also appears important to measure the thermal properties of mucus simulants over a temperature range between 20°C and 40°C (patients with fever). This will provide a very useful database for future advanced numerical models dedicated to the transport of bronchial mucus in the respiratory system by mucociliary clearance or by clearance-assisting devices. Figure 5 presents the evolution of the density ρ of mucus simulants as a function of temperature. As for pure water, ρ decreases quadratically with temperature. For example, for φ = 2 wt%, one gets: ρ(T, φ = 2%) = −0.0049 T² − 0.0277 T + 1015.8, (1) with T in °C and ρ in kg/m³. Eq. (1) is to be compared to the following expression for pure water: ρ(T, water) = −0.0041 T² − 0.0574 T + 1001.1. As expected, ρ is also an increasing function of the concentration φ in Actigum™. For the lower concentration, φ = 0.5 wt%, ρ is increased by around 0.9% compared to pure water at the same temperature. All results are fitted by the following quadratic relationship giving ρ as a function of the concentration φ in Actigum™ for a given prescribed temperature T (°C): ρ(T, φ) = a φ² + b φ + c(φ).
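The two quadratic density correlations given above are straightforward to evaluate; the short sketch below compares the 2 wt% simulant with pure water over the measurement range (T in °C, ρ in kg/m³).

```python
import numpy as np

def rho_simulant_2pct(T):
    """Density correlation for the 2 wt% Actigum simulant (T in degC, rho in kg/m3)."""
    return -0.0049 * T**2 - 0.0277 * T + 1015.8

def rho_water(T):
    """Corresponding quadratic fit for pure water over the same range."""
    return -0.0041 * T**2 - 0.0574 * T + 1001.1

for T in np.arange(20.0, 45.0, 5.0):
    rel = (rho_simulant_2pct(T) - rho_water(T)) / rho_water(T) * 100.0
    print(f"T = {T:4.1f} degC  rho_gel = {rho_simulant_2pct(T):7.1f}  "
          f"rho_water = {rho_water(T):7.1f}  (+{rel:.2f} %)")
```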
Figure 6 presents the variations of thermal diffusivity and conductivity versus temperature. Both quantities increase linearly with temperature, as in the case of pure water. The influence of the concentration φ in Actigum™ is almost negligible. The thermal conductivity (resp. diffusivity) of pure water is on average 0.4% higher (resp. 0.3% lower) than that for φ = 2 wt%. The heat capacity at constant pressure has been deduced from the previous measurements through the relation C_p = k/(α ρ). Figure 7 displays the distribution of the heat capacity versus temperature for different values of φ. Whatever the concentration in Actigum™, the heat capacity decreases quadratically with temperature. As expected, taking into account the fact that the heat capacity of Actigum™ is lower than that of pure water, the heat capacity also decreases with φ. At a given imposed temperature, C_p decreases by 1.6% between pure water and the φ = 2 wt% simulant. Note that the effect of aging on the heat capacity of the mucus simulant has also been considered. There is no noticeable difference between measurements performed with "fresh" simulant or with simulant prepared 4 days before and stored in a fridge at 4°C. CONCLUSION Bronchial mucus plays a crucial role in the protection of the respiratory system and can be seen as a key element for a better understanding and treatment of chronic respiratory diseases such as severe asthma or cystic fibrosis. The airway clearance function is a complex phenomenon directly linked to the thermophysical properties of the mucus layer coating the airways. The difficulty of collecting and exhaustively testing samples from a same mucus collection first motivated the choice to work here on mucus simulants. The composition and experimental protocol to prepare this synthetic mucus were proposed by [START_REF] Zahm | Tests effectués avec l'appareil SIMEOX[END_REF]. This gel is mainly composed of water, NaCl, Viscogum™ FA (galactomannan) and a variable fraction of Actigum™ CS 6 (scleroglucan) to mimic a large range of mucus consistencies. The rheological properties at rest were investigated using small amplitude oscillatory shear tests. They revealed that the mucus simulant behaves as a gel within a defined linear viscoelastic region and as a viscoelastic liquid above the yield stress zone. To characterize the behavior of mucus in response to in vivo shearing, continuous ramp flow tests have also been performed. The flow curves revealed the yield stress and shear-thinning behavior of the mucus simulant as well as a hysteresis loop between the up and down curves that qualitatively accounts for the time-dependent (thixotropic) behavior. The surface tensions of the mucus simulants, also measured here, were in good agreement with previously published data. The present synthetic mucus, described as the most appropriate one in the literature, is here shown to mimic precisely the surface tension of real mucus for different pathologies ranging from cystic fibrosis to respiratory tract infection. Future measurements of the contact angle should make it possible to determine experimentally the adhesiveness of such mucus simulants. Thermal properties, namely thermal diffusivity, thermal conductivity and heat capacity, are quite comparable to those of pure water in the range of tested temperatures (20-40°C). Thermal diffusivity and thermal conductivity increase linearly by 6% and 5% respectively, while the heat capacity decreases by only 0.3% between 20°C and 40°C.
Figure 1. Structural chemical formula of (a) Viscogum™ FA and (b) Actigum™ CS.
Figure 2. Structure of synthetic mucus (2 wt% of Actigum™) using an optical microscope (top, 10× and 63×) and a scanning electron microscope (bottom).
Figure 3. Example of rheogram for a 2 wt% concentration in Actigum™ obtained at 20°C. Continuous shear stress ramp (up and down) highlighting a hysteresis loop.
Figure 4. Small Amplitude Oscillatory Shear (SAOS) sweep test for a 1.5 wt% concentration in Actigum™ obtained at 20°C and a constant frequency of 1 rad·s⁻¹. Elastic G′ and viscous G″ moduli (Pa) and the loss factor (tan δ) as a function of the stress amplitude (Pa).
Figure 5. Influence of temperature and concentration in Actigum™ on the density ρ of synthetic mucus.
Figure 6. Influence of temperature and concentration in Actigum™ on the (a) thermal diffusivity α and (b) thermal conductivity k of synthetic mucus.
Table I. Main rheological characteristics as a function of wt% in Actigum™, measured at 20°C and a constant frequency of 1 rad·s⁻¹. LVE refers to the linear viscoelasticity region.
wt% in Actigum™ | G′_LVE (Pa) | G′_LVE−5% (Pa) | G″_LVE (Pa) | G″_peak (Pa) | τ_y(5%) (Pa) | τ_f (Pa) | τ_peak (Pa)
0.5 | 5.76 | 5.48 | 1.77 | 2.07 | 0.24 | 2.30 | 1.93
0.75 | 14.36 | 13.64 | 3.68 | 5.86 | 2.43 | 5.20 | 4.84
1 | 32.47 | 30.85 | 6.76 | 11.49 | 1.53 | 5.51 | 6.10
1.25 | 39.57 | 37.59 | 12.90 | 16.01 | 2.43 | 8.90 | 9.66
1.5 | 100.86 | 95.82 | 15.70 | 35.03 | 6.10 | 15.30 | 15.32
1.75 | 108.35 | 102.93 | 18.86 | 43.34 | 7.68 | 18.72 | 12.17
2 | 133.08 | 126.43 | 20.22 | 52.38 | 1.93 | 18.10 | 19.29
Table II. Surface tension (10⁻³ N·m⁻¹) as a function of wt% in Actigum™, measured at 23°C, compared with literature values for real mucus and other simulants.
wt% in Actigum™ | 0.5 | 0.75 | 1 | 1.25 | 1.5 | 1.75 | 2
Present measurements | 71.6 | 72 | 75.4 | 78.4 | 86.1 | 86.6 | 89.9
Cystic fibrosis [START_REF] Albers | Ring distraction technique for measuring surface tension of sputum: Relationship to sputum clearability[END_REF][START_REF] Bush | Mucus properties in children with primary ciliary dyskinesia: Comparison with cystic fibrosis[END_REF] | 81.1-92.4, 79.1
Chronic bronchitis [START_REF] Albers | Ring distraction technique for measuring surface tension of sputum: Relationship to sputum clearability[END_REF] | 72.1-84.8
Primary ciliary dyskinesia [START_REF] Bush | Mucus properties in children with primary ciliary dyskinesia: Comparison with cystic fibrosis[END_REF] | 82.9
Bronchiectasis [START_REF] Daviskas | Effect of mannitol and repetitive coughing on the sputum properties in bronchiectasis[END_REF] | 86
Respiratory tract infection [START_REF] Bennett | Effect of a single 1200 mg dose of MucinexV R on mucociliary and cough clearance during an acute respiratory tract infection[END_REF] | 86.47
Synthetic mucus [START_REF] Hamed | Synthetic tracheal mucus with native rheological and surface tension properties[END_REF][START_REF] Schenck | Tensiometric and phase domain behavior of lung surfactant on mucus-like viscoelastic hydrogels[END_REF] | 53.3-95, 72
ACKNOWLEDGMENTS O. Lafforgue would like to thank the company Physio-Assist and the Association Nationale de la Recherche et de la Technologie for the CIFRE grant 2014-1287. S. Poncet gratefully acknowledges the support of the Natural Sciences and Engineering Research Council of Canada through the Discovery Grant (RGPIN-2015-06512) and the Canadian Foundation for Innovation (Grant 34582) for the provision of most measurement facilities through the John R. Evans Leaders fund. Contract grant sponsor: Physio-Assist and the Association Nationale de la Recherche et de la Technologie; contract grant number: 2014-1287. Contract grant sponsor: Natural Sciences and Engineering Research Council of Canada; contract grant number: RGPIN-2015-06512. Contract grant sponsor: Canadian Foundation for Innovation; contract grant number: Grant 34582.
01592866
en
[ "spi", "spi.meca.mefl" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01592866/file/meccanica1-2.pdf
Julien Favier email: [email protected] Cuicui Li Laura Kamps Alistair Revell email: [email protected] Joseph O'connor Christoph Brücker email: [email protected] The PELskin project -part I -Fluid-structure interaction for a row of flexible flaps: a reference study in oscillating channel flow Previous studies of flexible flaps attached to the aft part of a cylinder have demonstrated a favourable effect on the drag and lift force fluctuation. This observation is thought to be linked to the excitation of travelling waves along the flaps and as a consequence of that, periodic shedding of the von Kármán vortices is altered in phase. A more general case of such interaction is studied herein for a limited row of flaps in an oscillating flow; representative of the cylinder case since the transversal flow in the wake-region shows oscillating character. This reference case is chosen to qualify recently developed numerical methods for the simulation of fluid-structure interaction in the context of the EU funded 'PELskin' project. The simulation of the two-way coupled dynamics of the flexible elements is achieved via a structure model for the flap motion, which was implemented and coupled to two different fluid solvers via the immersed boundary method. The results show the waving behaviour observed at the tips of the flexible elements in interaction with the fluid flow and the formation of vortices in the gaps between the flaps. In addition, formation of vortices upstream of the leading and downstream of the trailing flap is seen, which interact with the formation of the shear-layer on top of the row. This leads to a phase shift in the wave-type motion along the row that resembles the observation in the cylinder case. Introduction The wave behaviour of arrays of flexible structures (hairs, flaps, filaments) induced by a cross flow is an active area of research interest for a range of disciplines, and has been described in many studies [Finnigan and Mulhearn, 1978b;[START_REF] Nepf | Flow and transport in regions with aquatic vegetation[END_REF][START_REF] Nezu | The effect of coherent waving motion on turbulence structure in flexible vegetated open channel flows[END_REF][START_REF] Py | Measurement of wind-induced motion of crop canopiesfrom digital video images[END_REF][START_REF] Py | A frequency lock-in mechanism in the interaction between wind and crop canopies[END_REF]. This waving motion is most commonly referred to as Honami in the case of terrestrial canopies and Monami for aquatic canopies. Of particular interest to flow control, a wave-type motion along rows of flexible structures has been observed in the wake of bluff bodies, where such flexible structures are attached to the aft part. The hairs interact with the unsteady wake flow and show the emergence of travelling wave-like motion patterns [START_REF] Favier | Passive separation control using a self-adaptive hairy coating[END_REF]. Experimental studies of flow past cylinders with attached hairs proved the potential for these structures to modify the shedding cycle [START_REF] Kunze | Control of vortex shedding on a circular cylinder using self-adaptive hairy-flaps[END_REF]. The study showed a characteristic jump in the shedding frequency at a critical Reynolds number of Re c ≈ 14,000 when comparing to the classical behaviour of a plain cylinder wake flow. 
The analysis of the motions of the hairy-flaps showed that for Re = Re c the amplitude of the flap motion is considerably increased and a characteristic travelling wave-like motion pattern could be observed along the row of flaps. As a consequence, the presence of the hairy flaps alter the phase within the vortex shedding cycle such that the transverse dislocation -i.e. the transverse distance from the centerline -of the shed vortices is reduced [START_REF] Kunze | Control of vortex shedding on a circular cylinder using self-adaptive hairy-flaps[END_REF]. Accordingly, the vortices are not arranged in a classical zig-zag pattern of the Kármán vortex-street, but rather they are shed in a row along the centerline (y = 0). These observations provided the motivations for the recent EU funded 'PELskin' project2 , wherein a small consortium of partners3 focussed on investigating the potential amelioration of aerodynamic performance via a Porous and ELastic (PEL) coating. The objective being to elucidate the potential for passive structures to reconfigure/adapt to the separated flow, thereby directly changing the near-wall flow and the subsequent vortex shedding, which can lead to reduced form drag by decreasing the intensity and the size of the recirculation region. A further investigation of the physical mechanisms involved in the fluid structure interaction within the rows of flaps requires a more general setup, so as to enable a detailed analysis of the flap behaviour under clearly defined conditions. This facilitates the parametric study of the interaction as a function of the eigenfrequency, spacing and stiffness of the flaps. Such a case is proposed herein in form of an oscillating channel flow, where a limited row of flexible flaps is implemented. The selected configuration is simple enough to capture the essential characteristics of the coupled problem, and may also be considered to be quasi twodimensional. Experiments were carried out in a flow channel of square cross-section where fluid is driven by an oscillating piston along a row of 10 flexible flaps at a peak Reynolds-number of approximately 120. The numerical framework is based on the Immersed Boundary method coupled to a flow solver, to treat the moving boundaries on a fixed Cartesian grid. Two fundamentally different fluid solvers were used to compare their quality in comparison to the experimental data and judge the proper choice for further investigations of such coupled problems. The first is a finite difference code based on Navier-Stokes equations and the second one is a code employing the lattice Boltzmann method. The dynamics of the flexible elements is modelled using the Euler-Bernoulli equations, as it is done in [START_REF] Huang | Simulation of flexible filaments in a uniform flow by the immersed boundary method[END_REF] and [START_REF] Favier | Numerical study of flapping filaments in a uniform fluid flow[END_REF]. Conclusions to this work will be drawn in section 6. The oscillating channel flow is generated in a long tube of squared cross-section with diameter L = 6cm (cross-section 6 cm x 6 cm, or 3H x 3H, in terms of the flap length H) which is filled with liquid and is connected at the upstream end to a piston drive unit and at the downstream end to a basin. As working fluid we use a mixture of water and glycerin to adjust the viscosity of the flow. This allows us to vary the characteristic numbers of the flow such as the Reynolds-and Wormersley-number across a wider range. 
The piston is able to run at maximum flow amplitudes of 16 cm of bulk fluid at oscillation frequencies of 1 Hz. All parts of the tube are made of transparent perspex to ensure optical access to the flow. The following results were obtained with a glycerine-water mixture of volume ratio 80/20, resulting in a kinematic viscosity of ν = 100 × 10⁻⁶ m² s⁻¹ at room temperature and a density of ρf = 1.2 g cm⁻³. Experimental set-up and methods In the centre of the tube is an insert, which contains a row of 10 flexible flaps (d = 1 mm thick, length H = 2 cm, span B = 5 cm) that protrude into the flow. The interspacing between the flaps is set to 1 cm as the reference case. The flaps are made of silicone rubber (Elastosil RT 601, Wacker Chemie, Germany, Young's modulus E = 1.2 MPa, density ρs = 1.2 g/cm³) so that they are easily deflected by the flow. The flexural rigidity of the flaps k is calculated as k = E × I = 5 × 10⁻⁶ N m², where I is the second moment of area along the thin axis of the flap, I = Bd³/12. The density of the flaps ρs is equal to the density of the fluid ρf, so that gravity does not contribute to the motion of the flaps in the experiments. For characterization of the flap response, a step test was carried out in the liquid environment. The flap was deflected to a certain extent and then released while recording the tip motion with a high-speed camera. The tip motion is shown in Fig 2a). For comparison, the response curve in air is added, too. The latter shows the natural frequency of the flap at fn = 15 Hz, while for the damped case in liquid the damped frequency is fD = 3 Hz, see Fig 2b). Therefore the damping coefficient D of the flap in the liquid is calculated from the relation fD = fn √(1 − D²) and yields D = 0.98.
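As a quick consistency check on the flap properties quoted above, the flexural rigidity k = E·I with I = B d³/12 and the damping coefficient obtained from fD = fn √(1 − D²) can be recomputed in a few lines; the sketch below uses only the values given in the text.

```python
import math

# Flap properties quoted in the experimental section.
E = 1.2e6        # Young's modulus of the silicone rubber [Pa]
B = 0.05         # span [m]
d = 1.0e-3       # thickness [m]
f_n = 15.0       # natural frequency in air [Hz]
f_D = 3.0        # damped frequency in the liquid [Hz]

I = B * d**3 / 12.0                   # second moment of area about the thin axis [m^4]
k = E * I                             # flexural rigidity [N m^2]
D = math.sqrt(1.0 - (f_D / f_n)**2)   # damping coefficient from f_D = f_n*sqrt(1 - D^2)

print(f"I = {I:.2e} m^4, k = {k:.1e} N m^2, D = {D:.2f}")
# Gives k ~ 5e-6 N m^2 and D ~ 0.98, matching the values quoted in the text.
```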
The piston is controlled via a linear traverse (Moog Series) and performs a harmonic motion. To ensure an undisturbed flow within the centre of the flow channel from both sides, we placed a honeycomb at the entrance and exit of the tube as well as a smoothed transition insert from circular to squared cross-section. In the absence of flaps in the first instance, velocity profiles in the centre of the measurement chamber were measured with Particle Image Velocimetry. A high-speed camera (Phantom V12.1-8 G-M, Vision Research) recorded the flow evolution from the side while the centre plane was illuminated with a vertical light-sheet from below with a continuous laser (Ray Power 2000, Dantec). In addition to the PIV measurements, we used a specially designed Schlieren setup with two larger lenses (f = 400 mm) and illumination with an LED from the back in the form of a point source for recordings of flap motion and shear-layer evolution. A special preparation of the flaps was required to achieve a good Schlieren image by means of differences in the refractive index of the working liquid. This was intentionally generated by coating the flaps in the empty channel, prior to the experiments, with a thin water lining (refractive index of water nw = 1.33). Then the channel was slowly filled with the working liquid, which has a higher refractive index than water (nl = 1.45). When starting the oscillating flow, the water layer along the flaps is shed from the flaps along the shear-layers within the cavity between the flaps and in the shear layer formed along the top of the flap row. This allows us to visualize the shear-layer in a very illustrative way (see later discussion and Fig 7). Numerical method The Immersed Boundary Method (IBM) is used to simulate the moving geometries of the flaps immersed in the unsteady fluid flow. Following this approach, the fluid equations are solved on a fixed Cartesian grid, which does not conform to the body geometry, and the solid wall boundary conditions are satisfied on the body surface by using appropriate volume forces [START_REF] Peskin | Flow patterns around heart valves: A numerical method[END_REF][START_REF] Peskin | The immersed boundary method[END_REF][START_REF] Pinelli | Immersed-boundary methods for general finite-difference and finite-volume Navier-Stokes solvers[END_REF]. In the context of the EU PELskin project, the fluid is solved using two different approaches, partly as a function of the project planning and partly for the purpose of demonstrating the flexibility of the method. Flow solver 1: Lattice Boltzmann In the first instance, the lattice Boltzmann method is used to simulate the fluid flow, which is based on microscopic models and mesoscopic kinetic equations, in contrast to Navier-Stokes, which is written in terms of macro-scale variables. The Boltzmann equation for the distribution function f = f(x, e, t) is given as follows: ∂f/∂t + e · ∇x f + F · ∇e f = Ω12, (1) where x are the spatial coordinates, e is the particle velocity and F accounts for any external force; in the present work this force is the body force fib applied to the fluid. Clearly this last term is very important as it will be used to convey the information between the fluid and the structure. The collision operator Ω12 is simplified using the Bhatnagar, Gross, and Krook (BGK) approach [START_REF] Bhatnagar | A model for collision processes in gases. i: small amplitude processes in charged and neutral one-component system[END_REF], where it is assumed that local particle distributions relax to an equilibrium state f(eq) in a single relaxation time τ: Ω12 = (1/τ) (f(eq) − f). (2) This equation is discretised and solved on the lattice, a Cartesian and uniform mesh in our case. At each point on the lattice, each particle is assigned one of a finite number of discrete velocity values. In our case we use the D2Q9 model, which refers to two dimensions and nine discrete velocities, referred to by subscript i. The equilibrium function f(eq)(x, t) can be obtained by Taylor series expansion of the Maxwell-Boltzmann equilibrium distribution [START_REF] Qian | Lattice bgk models for navier-stokes equation[END_REF]. Concerning the discrete force distribution needed to take into account the body force fib, here we use the formulation proposed by [START_REF] Guo | forcing term lbm[END_REF], as follows: Fi = (1 − 1/(2τ)) ωi [ (ei − u)/cs² + (ei · u) ei/cs⁴ ] · fib, (3) where c is the lattice speed, cs = 1/√3 is the speed of sound and ωi are the weight coefficients, which take standard values. For further details the reader is referred to [START_REF] Favier | A lattice boltzmann-immersed boundary method to simulate the fluid interaction with moving and slender flexible objects[END_REF]. Flow solver 2: Navier Stokes In this work we also use an incompressible Navier-Stokes solver with a staggered grid discretization [START_REF] Harlow | Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface[END_REF]. In this case, both convective and diffusive fluxes are approximated by second-order central differences. 
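To make the lattice Boltzmann ingredients above slightly more concrete, the sketch below evaluates the D2Q9 equilibrium distribution and the Guo forcing term of Eq. (3) at a single node. This is not the solver used in the paper, only an illustration of how the immersed-boundary body force fib enters the discrete populations; the lattice velocities and weights are the standard D2Q9 set.

```python
import numpy as np

# Standard D2Q9 lattice: 9 discrete velocities e_i and weights w_i.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0                      # speed of sound squared (lattice units)

def feq(rho, u):
    """Discrete Maxwell-Boltzmann equilibrium at one node (u is a 2-vector)."""
    eu = e @ u
    usq = u @ u
    return w * rho * (1.0 + eu/cs2 + 0.5*(eu/cs2)**2 - 0.5*usq/cs2)

def guo_forcing(u, f_ib, tau):
    """Guo-type forcing term F_i (Eq. 3) for the immersed-boundary force f_ib."""
    pref = (1.0 - 0.5/tau) * w
    bracket = (e - u)/cs2 + (e @ u)[:, None] * e / cs2**2
    return pref * (bracket @ f_ib)

# Example: populations and forcing contribution at a single node.
u, tau = np.array([0.05, 0.0]), 0.8
f_ib = np.array([1e-4, 0.0])         # body force spread from the flap markers
print(feq(1.0, u))
print(guo_forcing(u, f_ib, tau))
```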
The fractional time-step method is used for the time advancement [START_REF] Chorin | Numerical solution of Navier-Stokes equations[END_REF][START_REF] Kim | Application of a fractional-step method to incompressible navier-stokes equations[END_REF], in the form of a second-order semi-implicit pressure correction procedure [START_REF] Van Kan | A second-order accurate pressure correction scheme for viscous incompressible flow[END_REF]. The alternating direction implicit (ADI) method is used for the temporal discretization of the diffusive terms, allowing the three-dimensional problem to be transformed into three one-dimensional ones by an operator-splitting technique, while retaining the formal order of the scheme. The code parallelization relies upon the Message-Passing Interface (MPI) library and the domain-decomposition technique. The numerical strategy used to impose the desired zero-velocity boundary condition at the solid surface (which is a solid and rigid wing) is the following. The predicted velocity u* is first obtained explicitly, without the presence of the embedded boundary: u* = uⁿ + ∆t [ −Nl(uⁿ, uⁿ⁻¹) − Gφⁿ⁻¹ + (1/Re) L(uⁿ) ], (4) where uⁿ is the divergence-free velocity field at time-step n, ∆t is the time step, Nl is the discrete nonlinear operator, G and D are, respectively, the discrete gradient and divergence operators, L is the discrete Laplacian, and φ is a projection variable (related to the pressure field). The operators include coefficients that are specific to the time scheme used in this study, a three-step low-storage Runge-Kutta scheme. Immersed boundary method to couple flow solver to structure model The presence of the solid geometry is imposed by using the IBM, via a process of interpolation and spreading [START_REF] Uhlmann | An immersed boundary method with direct forcing for the simulation of particulate flows[END_REF]: u* is interpolated onto the embedded geometry of the obstacle, Γ, which is discretized through a number of Lagrangian marker points with coordinates Xk: U*(Xk, tⁿ) = I(u*). (5) At this stage, knowing the velocity U*(Xk, tⁿ) at the locations of the Lagrangian markers, a distribution of singular forces that restores the desired velocity Ud(Xk, tⁿ) on Γ is determined as: F*(Xk, tⁿ) = [Ud(Xk, tⁿ) − U*(Xk, tⁿ)] / ∆t. (6) The singular surface force field given over Γ is then transformed by a spreading operator S into a volume force field defined on the Cartesian mesh points xi,j,k surrounding Γ: f*(xi,j, tⁿ) = S[F*(Xk, tⁿ)]. (7) At this stage, in the case of the lattice Boltzmann method, the force f*(xi,j, tⁿ) is used directly as fib in Eq. (3) and the algorithm is completed. For the Navier-Stokes solver, some final steps are required, as follows. First, the predicted velocity is re-calculated, using an implicit scheme for the viscous operator and adding the forces that account for the presence of the solid body: (u* − uⁿ)/∆t = −Nl(uⁿ, uⁿ⁻¹) − Gφⁿ⁻¹ + (1/Re) L(u*, uⁿ) + f*. (8) Finally, the algorithm completes the time step with the usual solution of the pressure Poisson equation and the consequent projection step: Lφ = (1/∆t) D u*, (9) uⁿ⁺¹ = u* − ∆t Gφⁿ. (10) The key elements of the present IBM are the transformations between the Eulerian and the Lagrangian meshes, which are carried out through the interpolation and spreading operators, I and S. 
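A minimal illustration of the interpolation, direct-forcing and spreading steps of Eqs. (5)-(7) is given below. Note that the paper builds reciprocal interpolation/spreading operators following Pinelli et al.; the sketch instead uses a simple regularized (hat) kernel, which is a common but different choice, so it should be read as a stand-in rather than the actual operators.

```python
import numpy as np

def delta_hat(r):
    """2-point hat kernel (stand-in for the reciprocal operators of the paper)."""
    r = np.abs(r)
    return np.where(r < 1.0, 1.0 - r, 0.0)

def interpolate(u, Xk, h):
    """Interpolate Eulerian field u (ny, nx, 2) onto Lagrangian markers Xk (n, 2)."""
    ny, nx = u.shape[:2]
    W = delta_hat((Xk[:, None, 0] - np.arange(nx) * h) / h)   # x-weights (n, nx)
    V = delta_hat((Xk[:, None, 1] - np.arange(ny) * h) / h)   # y-weights (n, ny)
    return np.einsum('kj,ki,jid->kd', V, W, u)

def spread(Fk, Xk, shape, h, ds):
    """Spread Lagrangian forces Fk (n, 2) back onto the Eulerian grid."""
    ny, nx = shape
    W = delta_hat((Xk[:, None, 0] - np.arange(nx) * h) / h)
    V = delta_hat((Xk[:, None, 1] - np.arange(ny) * h) / h)
    return np.einsum('kj,ki,kd->jid', V, W, Fk) * ds / h**2

# Toy example of the direct-forcing step (Eq. 6) on a uniform predicted flow.
h, dt, ds = 1.0, 0.1, 1.0
u_star = np.zeros((16, 16, 2)); u_star[..., 0] = 1.0
Xk = np.stack([np.full(5, 7.3), np.linspace(4.0, 8.0, 5)], axis=1)  # marker positions
U_star = interpolate(u_star, Xk, h)
U_d = np.zeros_like(U_star)                      # desired velocity (no-slip markers)
F_star = (U_d - U_star) / dt                     # Eq. (6)
f_star = spread(F_star, Xk, u_star.shape[:2], h, ds)   # Eq. (7)
print(f_star[4:9, 6:9, 0])
```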
These two operators are built using the method presented in Favier et al [2013]; [START_REF] Pinelli | Immersed-boundary methods for general finite-difference and finite-volume Navier-Stokes solvers[END_REF], which ensures that the interpolation and spreading are reciprocal operations, implying that the integral of the force is the same whether computed in the Lagrangian or in the Eulerian frame. Important properties of the algorithm are the preservation of the global accuracy of the underlying differencing scheme and the sharpness with which the interface is resolved. For further details the reader is referred to [START_REF] Pinelli | Immersed-boundary methods for general finite-difference and finite-volume Navier-Stokes solvers[END_REF] and [START_REF] Favier | A lattice boltzmann-immersed boundary method to simulate the fluid interaction with moving and slender flexible objects[END_REF].

Model of flexible flap

Coming back to Eq. (6), defined on each Lagrangian marker, the term U_d^{n+1}(X_k) denotes the velocity value we wish to obtain at the location X_k at the completion of the time step. These values are determined for each flap by integrating in time the corresponding Euler-Bernoulli equation in non-dimensional form:

dU_d^{n+1}/dt = ∂/∂s ( T ∂X_k/∂s ) - K_B ∂⁴X_k/∂s⁴ + Ri g/|g| - F_ib , (11)

∂X_k/∂s • ∂X_k/∂s = 1 . (12)

The last condition ensures that the flap length remains constant, and is satisfied using the tension values T, which effectively act as Lagrange multipliers. The boundary conditions for the system (11)-(12) are X = X_0, ∂²X_k/∂s² = 0 at the fixed end, and T = 0, ∂²X_k/∂s² = 0 at the free end. The resulting set of equations is discretised using a staggered arrangement and solved with a Newton method, by a direct evaluation of the exact Jacobian matrix, which incorporates the given boundary values. More details can be found in [START_REF] Favier | A lattice boltzmann-immersed boundary method to simulate the fluid interaction with moving and slender flexible objects[END_REF]. (A short illustrative sketch of the evaluation of the terms of Eq. (11) is given further below.)

Validation of fluid structure interaction

The validation is first performed for the model of the flexible flap alone (pure solid), and subsequently the fluid solver is validated alone (pure fluid), by comparing with the experiments. The flow unsteadiness allows one to identify and characterize the time-dependent dynamics of the oscillating flexible flaps.

Flap model without fluid

To check the consistency of the structure model, the motion of a hanging flap without ambient fluid and under a gravitational force is considered, as shown in Figure 3a. The non-dimensional flexural rigidity is set to K_B = 0, so that a flexible flap (a pendulum) with an initial angle θ_0 = 2° is examined. The time evolution of the coordinate of the free extremity in the x-direction (∆x) is monitored, with the gravity set to a value equivalent to Ri = 10. Figure 3b shows that the time evolution of the free-extremity position of the flap obtained with the present model is in good agreement with the analytical solution that can be obtained under the small-angle assumption [START_REF] Favier | A lattice boltzmann-immersed boundary method to simulate the fluid interaction with moving and slender flexible objects[END_REF].

Fluid simulation without flap

A fluid simulation without flaps is then conducted in a computational domain of 22H × 3H (H is the height of the flexible flap) in the streamwise (x) and vertical (y) directions respectively, corresponding to the 2D case of the centerplane of the experimental flow channel.
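Before specifying the boundary conditions and forcing of this fluid-only case, the sketch announced above for the structural model is given here. It is a minimal, explicit finite-difference evaluation of the tension and bending terms of Eq. (11) on the markers of a single flap, assuming a uniform marker spacing ds and a tension array defined on the N-1 segments; the actual solver uses a staggered arrangement and an implicit Newton iteration, so this is for illustration only and the variable names are hypothetical.

# Illustrative sketch: tension and bending terms of Eq. (11) on one flap.
import numpy as np

def structural_rhs(X, T, ds, KB):
    """d/ds(T dX/ds) - KB d4X/ds4 evaluated at the interior markers."""
    # dX/ds at segment midpoints (where the tension T is defined)
    dXds = (X[1:] - X[:-1]) / ds                          # shape (N-1, 2)
    tension_term = (T[1:, None]*dXds[1:] - T[:-1, None]*dXds[:-1]) / ds
    # fourth derivative with a centred 5-point stencil (interior points only)
    d4X = (X[4:] - 4*X[3:-1] + 6*X[2:-2] - 4*X[1:-3] + X[:-4]) / ds**4
    rhs = np.zeros_like(X)
    rhs[1:-1] += tension_term
    rhs[2:-2] += -KB * d4X
    return rhs

def stretching_error(X, ds):
    """Residual of the inextensibility constraint (Eq. 12); should stay ~0."""
    return np.abs(np.linalg.norm(X[1:] - X[:-1], axis=1)/ds - 1.0)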
Periodic boundary conditions are imposed in the x-direction, and no-slip conditions are applied on the upper and lower walls. The flow is driven by sinusoidally varying the pressure gradient at a given flow frequency f = 1.0 Hz, as follows:

∂p/∂x = A sin(2πf t) . (13)

The present simulation follows a Womersley velocity profile, in the same way as the analytical expression derived by [START_REF] Chandrasekaran | Dynamic calibration technique for thermal shear-stress sensors with mean flow[END_REF] for a square channel flow in the centerplane. Figure 4 indeed shows good agreement between simulation, experiment and the analytical solution of [START_REF] Chandrasekaran | Dynamic calibration technique for thermal shear-stress sensors with mean flow[END_REF] at the inlet through one flow oscillation cycle. The Reynolds number of the present simulation is Re = U_max H/ν = 120, based on the characteristic streamwise velocity U_max and the flexible flap height H. The Womersley number defined with the channel diameter L is α = L √(2πf/ν) = 15.

Fluid structure interaction

A two-way fluid-structure interaction configuration is considered with the same dimensions and the same boundary conditions as in section 4.2. The flexible flaps are mounted on the bottom wall of the channel. Figure 1 shows the experimental setup, where the same ratio of 3.0 between channel height and flap length, the same flow velocity profile and the same flow frequency as in the simulation case are adopted.

In the first instance, a refinement study was undertaken, as shown in Figure 5. The metric L here refers to the number of Lagrangian markers along each flap, and even at low resolution the accuracy is good. A value of 35 Lagrangian points per flap is taken for the subsequent computations. Also shown in Figure 5 is the L2 norm of the convergence, with respect to the prediction from the finest level of refinement (L = 40). It is clear that the numerical method is second-order accurate for these computations.

Figure 5: Refinement study of the coupled FSI solver (using LBM), showing the tip displacement (x coordinate) of the flap in the centre of the array for increasing flap resolution. Also shown are a zoomed view (top right) and the L2 convergence (red line: order 2, blue: order 3).

Figure 6 provides a comparison of the tip positions of the flaps in the x-direction obtained from both flow solvers; the experimental results are also plotted for comparison. The initial observation is that both flow solvers return almost identical results for this case, providing grounds for cross-validation of the two implementations. Minor differences are likely due to differences in numerical settings, as well as to the significantly different nature of the two methodologies; for example, the LBM is effectively a compressible solver, while the current N-S method is incompressible. The second observation concerns the comparison with the experimental results, and again the agreement is strong. The main amplitude is well captured, although a small 'kick' in the profile of the first flap (F1) is missed by both solvers. This could be due to differences in the approximation of the structural parameters of the model, which slightly differ between experiments and numerics. The 2D approximation made in the numerical solvers may also play a role in this phenomenon. However, towards the more centrally located flaps the agreement improves, with the finer detail of the tip motion at the oscillating extremities agreeing notably well with the experimental data.
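As a complement to this refinement study, the observed order of accuracy quoted above can be estimated directly from the tip-displacement signals at three successive resolutions. The following is a minimal sketch assuming the signals (called x10, x20, x40 for L = 10, 20, 40; hypothetical names) have been sampled on a common time base.

# Illustrative sketch: observed order of accuracy from a Richardson-type
# estimate using the finest resolution as the reference solution.
import numpy as np

def l2_error(x, x_ref):
    """Discrete L2 norm of the difference with the finest-resolution signal."""
    return np.sqrt(np.mean((x - x_ref)**2))

def observed_order(x_coarse, x_medium, x_fine, refinement_ratio=2.0):
    """Observed convergence order p from errors at the two coarser resolutions."""
    e_coarse = l2_error(x_coarse, x_fine)
    e_medium = l2_error(x_medium, x_fine)
    return np.log(e_coarse / e_medium) / np.log(refinement_ratio)

# Example usage (hypothetical arrays):
# p = observed_order(x10, x20, x40)   # ~2 for a second-order accurate scheme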
Further validation can be obtained from the analysis of the instantaneous flow velocity vectors (u, v), provided in Figure 7 (a-d) for the numerical results and in Figure 7 (e-h) for the corresponding experimental results. Again, the indication is of an accurate prediction of the tip locations, and where streaklines are observable in the experimental results, the numerically predicted contours of velocity are also in good agreement. The roll-up of shear layers can be seen, due to the relative motion of the forward mean flow and the backward motion of the flap tips, and vice versa. This will be investigated in more detail in the following section.

Numerical results

We start by investigating and elucidating the principal flow mechanisms identified in this case, and focus on two key aspects: the identified phase lag of the flaps, and the cyclic generation of coherent structures.

Phase lag

Forced by the driving motion of the fluid, the flaps individually move at the same frequency as the flow. However, there is a clear phase lag between adjacent flaps, as seen in the normalized flap tip positions of Figure 8. The displacement of each flap tip levels off differently in time, and the flaps reach different maximum and minimum values of ∆x/H. ∆t_1 and ∆t_2 are defined as the time differences between two successive time instants at which the flap tip reaches the position of its fixed extremity in the x-direction (∆x = 0). Due to the phase lag of the flap response, which differs depending on the flap location in the x-direction, the values of ∆t_1 and ∆t_2 are different for each flap.

This phase lag between adjacent elements has already been observed in several research works on the waving motions of flexible plants, and plays an important role in the emergence of the coherent waving motion of plants. It is known as Honami in the case of resonant waving of wheat stalks, for instance [Finnigan and Mulhearn, 1978a], or Monami in the case of waving aquatic plants [START_REF] Nezu | The effect of coherent waving motion on turbulence structure in flexible vegetated open channel flows[END_REF]. Despite the numerous studies associated with Honami/Monami, little qualitative information [Finnigan and Mulhearn, 1978a] and no quantitative data are available regarding the phase lag of these structures. On the other hand, a similar wave-type motion pattern was observed in the case of flexible flaps attached to the aft part of a cylinder, and it was found that this motion pattern plays an important role in the modification of the wake [START_REF] Kunze | Control of vortex shedding on a circular cylinder using self-adaptive hairy-flaps[END_REF]. More recent work focussing on an infinite array of flaps demonstrated that a Reynolds-number dependence of the phase lag was associated with the size of the recirculating flow between successive flaps, but that study was limited to infinite periodic arrays [START_REF] O'connor | Application of a lattice boltzmann-immersed boundary method for fluid-filament dynamics and flow sensing[END_REF]. Therefore the present results in the oscillating channel flow can make a significant contribution to the understanding of this phenomenon.

Detection of coherent eddies

To investigate the flap dynamics and the evolution of the phase lag between adjacent flaps, snapshots of the instantaneous velocity field (u, v) through one flow cycle (T = 1.0 s) are provided in Figure 9.
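For reference, the quantities used in such an analysis, the out-of-plane vorticity and a simple vortex indicator, can be computed from one of these (u, v) snapshots as follows. This is a minimal sketch assuming a uniform grid of spacing h with x along the first array axis; the field names are hypothetical and this is not claimed to be the post-processing used in the present study.

# Illustrative sketch: vorticity and Q-criterion from a 2D velocity snapshot.
import numpy as np

def vorticity(u, v, h):
    """omega_z = dv/dx - du/dy with second-order central differences."""
    dvdx = np.gradient(v, h, axis=0)   # axis 0 assumed to be x
    dudy = np.gradient(u, h, axis=1)   # axis 1 assumed to be y
    return dvdx - dudy

def q_criterion(u, v, h):
    """Q = 0.5*(||Omega||^2 - ||S||^2); Q > 0 marks rotation-dominated regions."""
    dudx = np.gradient(u, h, axis=0); dudy = np.gradient(u, h, axis=1)
    dvdx = np.gradient(v, h, axis=0); dvdy = np.gradient(v, h, axis=1)
    S2 = dudx**2 + dvdy**2 + 0.5*(dudy + dvdx)**2   # ||S||^2 in 2D
    W2 = 0.5*(dvdx - dudy)**2                        # ||Omega||^2 in 2D
    return 0.5*(W2 - S2)

# A coherent eddy such as the one forming near the downstream end of the flap
# row would then appear as a connected region where q_criterion(u, v, h) > 0.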
The results reveal that flap 1 begins to deflect from its vertical position at t = 0.26 s in Figure 9 (a), and finally recovers this initial position at t = 1.26 s in Figure 9 (j). Just after the start of the cycle, at t = 0.5 s in Figure 9 (b), the bulk flow velocity becomes positive (left to right) and a vortex is formed at the right side of the flap layer, as shown at x ≈ 14.25 in Figure 9 (c). Although initially small, this vortex quickly grows, as in Figures 9 (d-e). Consequently, a region of negative streamwise velocity is formed near the lower wall downstream of the flaps, which very quickly induces a large deflection of the flexible flaps near the right side of the array. The largest deflection is experienced by flap 10, while the deflections are reduced progressively towards the channel centre, i.e. for flaps 9 to 6. During the same period, the impinging cross flow induces a large deflection of flap 1, which is initially notably greater than that of flaps 2-5. As the flow evolves, this deflection is transmitted through flaps 2 and 3, as shown in Figure 9 (c-d). This wave-like motion results in a smoothly varying phase lag, as also indicated in Figure 8 (a) for the approximate range 0.45 < t < 0.65. From Figure 9 (f) onwards, the bulk flow velocity becomes negative (right to left), and the reverse mechanism is observed.

Under the driving motion of the oscillating flow, the flexible flap motion is thus significantly influenced by the presence of the vortex, which periodically appears near both sides of the coating. Its presence is confirmed via comparison with experimental observation, as shown in Figure 10. Also, it appears that the temporal and spatial responses of the flexible flaps are closely related to their distances from the channel centre position in the streamwise direction. As shown in Figure 11, this relationship is quantified by the phase difference ∆t (∆t = ∆t_1 - ∆t_2) of each flap tip position, normalized by the oscillating flow cycle T, which is proportional to the distance of the flap from the channel centre in the x-direction.

Figure 12 shows several snapshots of the instantaneous velocity field (u, v) and the corresponding instantaneous vorticity. The coherent vortex observed in Figure 12 (a-d) is clearly associated with the large-vorticity regions in Figure 12 (e-h). From Figure 9, it can be seen that the boundaries of the highlighted zones of uniform momentum pass through the cores of the coherent vortices, which suggests an important link between coherent vortices and uniform-momentum zones, as also observed in the analysis of the experimental results of [START_REF] Adrian | Vortex organization in the outer region of the turbulent boundary layer[END_REF] and [START_REF] Nezu | The effect of coherent waving motion on turbulence structure in flexible vegetated open channel flows[END_REF].

Conclusions

The physical mechanisms involved in the two-way interaction between an incompressible oscillating channel flow and a coating made of flexible flaps have been investigated in the present work. A Navier-Stokes solver and a lattice Boltzmann solver have been used, and both methodologies are found to be in good agreement with the results of the experiment at the same conditions, for similar CPU cost. Thus, the incompressible or compressible nature of the solver does not play a significant role in this configuration involving flexible structures immersed in an unsteady flow. It is shown that a cyclically generated coherent vortex, occurring alternately near the entrance and the exit of the flap row, is the primary cause of the smoothly varying phase difference between adjacent flaps.
This coherent vortex generation cycle is expected to hold in general for the case of a finite array size, since it depends on entrance and exit effects, i.e. flow impingement on the upstream end of the array and recirculation at the downstream end. Where flap rows are infinite in length, such entrance and exit effects are expected to vanish, and the interaction would be driven solely by incoherence in the dynamic response of the flexible structures, either through variations in stiffness or near the resonant excitation, where the phase relationship is lost. The observed effect is comparable to the situation of flaps attached to the aft part of a cylinder in cross-flow. There, the flaps interact with the roll-up of the shear layer, which leads to a phase shift in the formation of the von Karman vortices. This roll-up starts along the lateral side-walls of the cylinder, and vorticity is then swept along the row of flaps in the transversal direction towards the inner part of the row, similar to the case discussed herein. Therefore the observed travelling wave-type motion of the flaps in the cylinder wake [START_REF] Favier | Passive separation control using a self-adaptive hairy coating[END_REF]; [START_REF] Kunze | Control of vortex shedding on a circular cylinder using self-adaptive hairy-flaps[END_REF] is a result of the phase shift between neighbouring flaps as documented herein.

Figure 1: Schematic view of the experimental working section.

Figure 2: Step response of the flexible flap. (a): motion in air (dashed line) and in the liquid (solid line). (b): frequency response curve.

Figure 3: Motion of the hanging flexible flap under gravity (without fluid), without bending term and with an initial angle of θ_0 = 2°. (a): Initial position of the flexible flap. (b): Time evolution of the tip position ∆x, with respect to the position in x of its equilibrium position. Present solution: -, analytical solution: x.

Figure 4: Womersley velocity profiles at the inlet position at different instants through one flow oscillation cycle (IBM: present work using the Immersed Boundary Method).

Figure 6: Tip positions of the flaps in the x-direction within three flow cycles; numerical results from the LBM and from the N-S solver (lines), experiment (• •). The letters Fi indicate flexible flap number i.

Figure 7: Evolution of the flow over a half-period of the oscillation cycle. (a-d): Contours of instantaneous flow velocity vectors (u, v) obtained by numerical simulation; (e-h): Experimental snapshots of Schlieren images obtained at the same instants as in the numerical simulation.

Figure 8: Superimposed normalized tip deflections ∆x of the flexible flaps for (a) F1-F5; (b) F6-F10.

Figure 9: Instantaneous flow velocity vectors (u, v) represented by arrows through one flow oscillation cycle. The channel dimensions are normalized by the flap height H. The colormaps correspond to contours of the streamwise velocity u.

Figure 10: (left) Instantaneous flow velocity vectors (u, v) represented by arrows and contours of the streamwise velocity u.
(right) Path lines of tracer particles in the corner of the flaps, from the experiment.

Figure 11: Phase difference ∆t (∆t = ∆t_1 - ∆t_2) of each flap tip position, normalized by the oscillating flow cycle T, versus the distance to the channel centre in the x-direction; ∆t/T is proportional to this distance. The lack of symmetry reflects the initialisation of the flow, wherein the flaps are initially arranged vertically and undergo an initial deflection to the right, via a positive bulk flow velocity.

Figure 12: (a-d): Instantaneous flow velocity vectors (u, v) represented by arrows and color contours of the streamwise velocity u; (e-h): Color contours of instantaneous vorticity. The boundaries between uniform-momentum zones are shown by red lines.

http://www.transport-research.info/project/pel-skin-novel-kind-surface-coatings-aeronautics

Aix Marseille Université, City University London, Wolfdynamics SRL, Technische Universität Bergakademie Freiberg, The University of Manchester

Acknowledgement

The financial support of the European Commission through the PELskin FP7 European project (AAT.2012.6.3-1 - Breakthrough and emerging technologies) is greatly acknowledged. Funding of the position of Professor Christoph Brücker as the BAE SYSTEMS Sir Richard Olver Chair in Aeronautical Engineering is gratefully acknowledged herein. AR acknowledges support from the UK Engineering and Physical Sciences Research Council under the project UK Consortium on Mesoscale Engineering Sciences (UKCOMES) (Grant No. EP/L00030X/1).
01773871
en
[ "spi.gproc" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01773871/file/Glycine_ESS16.pdf
Hector Uriel Rodriguez Vera Fabien Baillon Philippe Accart Fabienne Espitalier Olivier Louisnard Crystallization of α-glycine by anti come Introduction Ultrasound and Crystallization by cooling: Reduction of the induction time and a reduction of the size of the formed crystals with equivalent supersaturation. Ultrasound and Crystallization by anti-solvent effect: Do ultrasound have an influence on the crystallization of glycine by antisolvent effect in batch system (induction time, crystal size distribution, polymorphism)? Materials and methods Glycine Conclusions and perspectives Trends with US vs. without US: • Smaller crystals, monomodal size distributions • Decrease of induction time rates (g.min -1 ) of ethanol • Supersaturation ratio S = C C eq (T,R)
01767425
en
[ "phys.astr.co" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01767425/file/boissier1801.00985.pdf
S Boissier email: [email protected] O Cucciati A Boselli S Mei L Ferrarese The GALEX Ultraviolet Virgo Cluster Survey (GUViCS). VII.: BCG UV upturn and the FUV-NUV color up to redshift 0.35 ⋆ Keywords: ultraviolet:galaxies, galaxies: ellipticals and lenticulars, cD, galaxies:stellar content Context. At low redshift, early-type galaxies often exhibit a rising flux with decreasing wavelength in the 1000-2500 Å range, called "UV upturn". The origin of this phenomenon is debated, and its evolution with redshift is poorly constrained. The observed GALEX FUV-NUV color can be used to probe the UV upturn approximately to redshift 0.5. Aims. We provide constraints on the existence of the UV upturn up to redshift ∼ 0.4 in the brightest cluster galaxies (BCG) located behind the Virgo cluster, using data from the GUViCS survey. Methods. We estimate the GALEX far-UV (FUV) and near-UV (NUV) observed magnitudes for BCGs from the maxBCG catalog in the GUViCS fields. We increase the number of nonlocal galaxies identified as BCGs with GALEX photometry from a few tens of galaxies to 166 (64 when restricting this sample to relatively small error bars). We also estimate a central color within a 20 arcsec aperture. By using the r-band luminosity from the maxBCG catalog, we can separate blue FUV-NUV due to recent star formation and candidate upturn cases. We use Lick indices to verify their similarity to redshift 0 upturn cases. Results. We clearly detect a population of blue FUV-NUV BCGs in the redshift range 0.10-0.35, vastly improving the existing constraints at these epochs by increasing the number of galaxies studied, and by exploring a redshift range with no previous data (beyond 0.2), spanning one more Gyr in the past. These galaxies bring new constraints that can help distinguish between assumptions concerning the stellar populations causing the UV upturn phenomenon. The existence of a large number of UV upturns around redshift 0.25 favors the existence of a binary channel among the sources proposed in the literature. Introduction Code (1969) presented for the first time evidence of an excess of far-ultraviolet (FUV) light in the bulge of M31. The International Ultraviolet Explorer (IUE) observations of Ellipticals allowed astronomers to characterize this as "UV upturn", i.e., a rising flux with decreasing wavelengths from about 2500 Å to 1000 Å (e.g., Bertola et al. 1982). The UV upturn was found in the nearby universe in quiescent gas depleted ellipticals and has been associated to old stars (O'Connell 1999;Ferguson 1999). This feature has also been found in other old stellar systems such as M32 (Brown 2004) or open clusters (Buson et al. 2006;Buzzoni et al. 2012). Empirical work to find the actual source of the upturn included the analysis of color-magnitude diagrams (Brown et al. 1998), the detection of individual horizontal branch stars (Brown et al. 2000), or surface brightness fluctuations (Buzzoni & González-Lópezlira 2008). ⋆ Tables 2 to 5 are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/ Since the UV upturn is found in early-type galaxies characterized by old stellar populations, an effect of age can be expected. Observed correlations also suggest a role of metallicity (Faber 1983;Burstein et al. 1988). However, these results have been extensively discussed (see the conflicting results in Deharveng et al. 2002;Rich et al. 2005;Boselli et al. 2005;Donas et al. 2007). 
The recent work on absorption indices revealing old and young populations by Le Cras et al. (2016) has showed that there is still a strong interest to understand the nature of UV upturn sources and their contribution to stellar populations as a whole. From the point of view of the evolution of galaxies and the role of the environment, it is important to understand the UV emission associated with old stellar populations in early-type galaxies and to determine whether it is related to the environment (see Boselli et al. 2014). Hills (1971) suggested that the UV emission in M31 could be related to the presence of very hot stars. [START_REF] Renzini | Spectral Evolution of Galaxies[END_REF] discussed the possible candidates in the context of stellar population evolution. This included young stars, hot horizontal A&A proofs: manuscript no. sambcg branch stars, post-AGB stars, and binaries. Several theoretical works studied the UV emission of various types of stars during their advanced evolution phases (e.g., Greggio & Renzini 1990;Dorman et al. 1993Dorman et al. , 1995;;D'Cruz et al. 1996). These works showed that their UV output is very sensitive to small differences in the assumptions made. Greggio & Renzini (1990) suggested that stellar evolution theory alone could not provide the explanation of the UV upturn. It is still generally believed that the UV upturn is related to extreme horizontal branch stars (Brown 2004, and references therein). These stars could be low mass helium burning stars having lost their hydrogen-rich envelope (e.g., Han et al. 2007, and references within). However, the precise stellar evolution producing hot low mass stars is still debated. Recent models include single-star evolution with both metal-poor and metal-rich populations (e.g., Park & Lee 1997;Yi et al. 1998) or models including the effect of binarity, with stars losing their hydrogen envelopes during binary interactions (Han et al. 2007). The various models proposed for the UV upturn sources predict drastic differences concerning the evolution of galaxy colors with redshift. [START_REF] Renzini | Spectral Evolution of Galaxies[END_REF] and Greggio & Renzini (1990) already suggested that observations at a look-back time of a few Gyr should allow us to distinguish between possible sources. In the case of single-star origin, the UV upturn is expected to occur late because it is produced by evolved stars. If it is related to binaries, its apparition can be more progressive. Higher redshift observations are needed to distinguish these different evolutionary scenarios. From the observational point of view, Rich et al. (2005) found little evolution up to redshift 0.2 in a sample of 172 red quiescent galaxies (not restricted to the most massive or to central clusters) obtained by cross-matching SDSS and GALEX results. Lee et al. (2005) and Ree et al. (2007) compared the observed FUV-V color of nearby ellipticals to the brightest ellipticals in 12 remote clusters up to redshift 0.2, and also compared this color to a few previous works (see Brown 2004) up to redshift 0.6. These results suggest that the FUV-V restframe color is bluer by about 0.8 mag around redshift 0.2 with respect to redshift 0, with a large dispersion at all redshifts. The color of the few galaxies at redshifts higher than 0.2 is close to the average color at lower redshifts. Donahue et al. 
(2010) presented the evolution of the FUV-R color with redshift in 11 brightest cluster galaxies (BCGs) in the redshift range 0.05-0.2, detected in FUV, with little evolution. Direct studies of the evolution of the UV upturn with redshift are still limited to small samples and include only a few tens of objects when limited to BCGs. Table 1 list the samples of early-type galaxies with FUV and near-ultraviolet (NUV) magnitudes from GALEX. Several colors or other quantities have been used to detect and study the UV upturn. One of them is simply the FUV-NUV GALEX color that is easily accessible from GALEX data. It has been used by, e.g., Boselli et al. (2005); Donas et al. (2007); Loubser & Sánchez-Blázquez (2011). This color probes the slope of the UV spectrum, and can be used to this end up to moderate redshift. In Fig. 1 we show the spectrum of NGC1399, a Fornax elliptical with a strong UV upturn at redshift 0. We also show the spectrum shifted for a few redshifts, and the evolution of the corresponding FUV-NUV color as a function of redshift. The FUV (around 1500 Å) and NUV (around 2300 Å) filters from GALEX are also indicated, showing that the FUV-NUV color probes the UV slope. The FUV-NUV color can thus be an indicator of the presence of an upturn, and of the value of the slope of the spectrum. As a visual reference, we also show the evolution for a flat UV spectrum. Any upturn will be by defini-Fig. 1. Top: Arbitrarily scaled spectrum of NGC1399 (solid line). The spectrum was taken from the database of UV-Optical Spectra of Nearby Quiescent and Active Galaxies (http://www.stsci.edu/science/sed/sed.htm), the original UV data are from Burstein et al. (1988). The spectrum was smoothed for visualization, and extrapolated at low wavelength (between 912 and 1170 Å) to match a typical upturn galaxy spectra (Yi et al. 1998). This spectrum is also shown after redshifting to z=0.3, 0.6, 0.9. A "flat" version is also shown (i.e., removing the upturn) for reference purposes (see text in the introduction). The GALEX FUV and NUV passbands are respectively indicated as a blue and red shaded area. Middle: Evolution with redshift of the FUV-NUV color for the upturn and flat spectra as the solid and dashed curve, respectively. The dotted horizontal lines corresponds to FUV-NUV=0.9, the limiting color for an upturn as defined by Yi et al. (2011). Bottom: Evolution with redshift of the FUV-V color of the upturn and flat spectra as the solid and dashed curve, respectively. tion bluer than this reference (at redshift 0, it corresponds to a color close to the 0.9 limit, proposed by Yi et al. 2011, to characterize the presence of an upturn). On the contrary, old stellar populations without upturn are redder than this reference. The upturn FUV-NUV color is distinguishable from a flat spectrum up to approximately redshift 0.5. Beyond this redshift, no more flux is found in the observed FUV band, and the color can no longer be used to detect a UV upturn. The figure illustrates that the presence of a UV upturn can clearly be detected as a blue Notes. (a) Number of FUV detections. (b) Sample with low uncertainty on the central color (sum of uncertainties on each side lower than 1.2 mag) and excluding galaxies flagged as contaminated. (c) Red global NUV-r color and blue central FUV-NUV color (see Sect. 3.1). observed FUV-NUV color at the redshifts considered in this paper (below redshift 0.4). In the nearby universe, the UV upturn is often studied on the basis of a color rather similar to FUV-V. 
Figure 1 shows differences between the flat and upturn cases that are similar to those for the FUV-NUV color. For this color, however, we find a greater evolution of the observed FUV-V color (in AB magnitudes) with redshift. This makes it impossible to adopt a single color threshold for the detection of an upturn over the same redshift range. The two bottom panels of this figure can still help the reader compare our study to this color choice. We note that this is very close to the classical (1550-V) Burstein from Burstein et al. (1988) with FUV-V (in the AB system) ∼ (1550-V) Burstein + 2.78 (Buzzoni et al. 2012). The Virgo area was extensively studied in the FUV and NUV bands of GALEX in the context of the GUViCS project (Boselli et al. 2011). The photometry collected by GUViCS provides a deeper coverage than in most large areas over the sky. The UV properties of early-type galaxies inside the Virgo cluster were studied in Boselli et al. (2005). In the present paper, we take advantage of these data to extract FUV and NUV photometry for massive galaxies in the background of the cluster, up to a redshift of about 0.35. We select a sample of BCG galaxies from the maxBCG catalog (Koester et al. 2007), extract FUV and NUV data for 177 of these galaxies from GUViCS, and perform a visual inspection to ensure the quality and noncontamination of these fluxes. Considering the small statistics of existing BCG samples with FUV data (e.g., only 36 galaxies in Loubser et al. 2011 at redshift 0;12 in Rhee et al. 2007 up to redshift 0.2), even after removing the galaxies with possible contamination or large error bars, our sample brings new constraints for future models of the UV upturn population. We provide all our data in the form of easyto-use tables for this purpose. In Sect. 2, we present our sample and methods. The selection of galaxies showing upturn signs is discussed in Sect. 3. In Sect. 4, we show the dependences of the FUV-NUV color on luminosity and redshift in our sample, and discuss their implications. A summary is given in Sect. 5. Throughout the paper, we use a flat cosmology (H 0 =70, Ω m =0.3) to convert between look-back time τ and redshift (z). In our redshift range, the relation is linear with τ ∼ 11.5 × z. Samples and data BCG sample In order to obtain a sample of galaxies that are as evolved as possible in the background of the Virgo cluster area, we ex-tracted a sample of BCGs using the maxBCG catalog that was computed from the Sloan Digital Sky Survey photometric data (Koester et al. 2007). We selected all galaxies with right ascension (RA) in the 180-195 degrees range and with declination (DEC) in the 0-20 degrees range, and with available GALEX images in the FUV and the NUV bands, from the GUViCS survey of this area (Boselli et al. 2011). This sample consists of 177 galaxies listed in Table 2. Koester et al. (2007) provided a number of properties of each BCG galaxy (position, redshift, luminosity) and of its cluster (e.g., number of members, luminosity of the members). We also performed a query to the DR13 SDSS release (Albareti et al. 2016) to obtain the latest spectroscopic information. We checked that the spectroscopic redshifts are in agreement with the Koester et al. (2007) values for the 71 galaxies for which it was provided, and we increased the number of spectroscopic redshifts in this way up to 150 objects, i.e., the vast majority of our sample. 
We also obtained the SDSS spectroscopic class ("GALAXY" for the 150 objects) and subclass based on line properties that result, in our sample, in one active galactic nucleus (AGN), three "BROAD-LINE" objects, one "STAR-FORMING" object. Finally, SDSS also provides measurements of the Mg2 and Hβ Lick indices often used in the literature to study the origin of the upturn in galaxies (e.g., Faber 1983;Burstein et al. 1988;Boselli et al. 2005;Buzzoni et al. 2012). Table 2 compiles RA, DEC, photometric redshift, r-and iband luminosity from the maxBCG catalog, while Table 3 provides the spectroscopic information obtained by querying the DR13 database: spectroscopic redshift, subclass, and Lick Mg2 and Hβ indices when available. UV images The early-type distant galaxies are often very faint in the ultraviolet bands, and a blind search can easily be affected by nearby objects (especially considering the ∼5 arcsec resolution of GALEX) or low signal-to-noise ratio. Due to the nature of the GUViCS survey and the GALEX circular field of view, the survey is not homogeneous, and many galaxies were observed on several occurrences. We constructed stamps around the position of the BCGs by coadding any available UV images around our sources. This was done using the Montage sofware [START_REF] Jacob | Montage: An Astronomical Image Mosaicking Toolkit[END_REF] following the procedure described in Boissier et al. (2015), which allows the deepest possible UV exposure for each target. The UV original pixel is 1.5 arcsec wide, but since we reconstructed images from a variety of sources with arbitrary position shifts, we used Montage to project the UV im-A&A proofs: manuscript no. sambcg ages on a finer pixel grid. For practical reasons, we adopted the same pixel scale (0.187 arcsec per pixel) as the optical data that we obtained from the NGVS survey (Ferrarese et al. 2012), as discussed in Sect. 2.4. UV photometry Since we are targeting relatively small and faint galaxies whose shape in the UV is not known a priori, we computed photometry systematically in a number of circular rings around the BCG galaxies, with apertures of radii 20,30,50,70,90,110,130 pixels (with a size of 0.187 arcsecs), chosen to cover the range of sizes found in our sample. We note that the first aperture is only slightly larger than the GALEX PSF. We use it nevertheless because early-type galaxies are often very compact in the UV, and this allows us to have an estimate of the color even when a nearby galaxy could contaminate a larger aperture. The magnitude obtained is by definition partial. Aperture corrections for a point source at this size are 0.23 mag in both filters1 , thus the color is unchanged. This central aperture has the advantage that it often presents a higher S/N, and may be more sensible to UV upturn if the stellar populations giving rise to it are concentrated. NGC1399, which we use to illustrate a typical upturn as seen in the nearby universe, clearly shows a FUV-NUV color gradient in the inner 30 arcsec, with the maximum upturn at the center as can be seen in Fig. 3 of Gil de Paz et al. (2007). The UV spectrum shown in Fig. 1 was obtained in the IUE aperture. For our distant objects, we certainly probe a larger physical size, thus this NGC1399 reference is likely to be the bluest color we can expect for a galaxy with a central upturn. The sky value was measured in many independent regions around the galaxy and is void of obvious sources. 
The photometry and associated uncertainties were then computed as in Gil de Paz et al. (2007). The photometry was corrected for the Galactic extinction, using the Schlegel et al. (1998) values for the visual extinction and R FUV =8.24, R NUV =8.20 (Wyder et al. 2007). Optical images For all these galaxies, we fetched optical images from the NGVS survey (Ferrarese et al. 2012). Because the UV area we started from is larger than the NGVS area, we obtained these observations for about half of our sample (84 out of 177 galaxies). For the others, we fetched SDSS images. The optical images are not crucial for the analysis in this work, but were used for visual inspection allowing for instance to flag for possible contamination by nearby galaxies, or signs of star formation in the form of spiral arms (Sect. 2.5) or other morphological peculiarities. In order to have homogeneous data; however, we use the SDSS photometry from the Koester et al. (2007) catalog for all our galaxies in the i and r band (in order to test the relations found in early-type galaxies and to test for the presence of a young stellar population). We do not perform photometry measurements in the optical images in this work. Visual inspection A visual inspection of our images was performed. This step was important for this work for several reasons: • We identified four BCGs with strong signs of star formation (spiral arms, prominent and spread out UV emission) that would obviously pollute any signal from the UV upturn; • We identified objects for which the UV photometry was polluted by a nearby companion. In optical images, it is easy to spot small companions that are unimportant in optical bands, but that can be the dominant source in the UV images if they are star-forming. Considering the GALEX PSF, in these cases the flux in the BCG region could be due to these companions and not to the BCG. Comparing the images, we can easily say when the UV emission is centered on a companion rather than on the BCG; • We chose the best circular aperture for this work. Having measurements in a collection of apertures, we could see in the image the surface covered by each aperture. We then selected the best one according to the following rules: 1) if possible the aperture including all the emission observed in the UV image (when not possible, our magnitude was flagged as partial) and 2) in any case, an aperture not polluted by a nearby companion (when not the case, we used another flag to indicate contamination). The inspection was performed independently by two people (O.C. and S.B.). A small discrepancy occurred for 25 % of the galaxies (in most of the cases a different optimal aperture was chosen with a different flag). The discrepancies were resolved through discussion (usually adopting the smaller aperture to avoid possible pollution by a companion). Table 4 provides our results concerning the photometry, including the exposure time, the chosen aperture, the FUV and NUV magnitudes and their 1 σ uncertainties. We have 11 galaxies without FUV measurements (measured flux below the sky level). In this case the table indicates a -99.9 magnitude and the -1 σ column is replaced by the limiting magnitude as deduced from the sky noise measurement. Table 5 provides the flags. For both FUV and NUV, a flag can be "ok" (the aperture encompasses all the observed emission in the image and there is no contamination), "part." (we had to use a smaller aperture than the full observed emission to avoid contamination), or "contam." 
(the flux is likely to be contaminated by a nearby source, usually a star-forming galaxy). When possible, we preferred to have a part. flag, with a meaningful color in a small aperture, but in some cases, it was impossible to avoid a contamination. We also added some notes that indicates clear signs of star formation and spiral arms ("spiral structure"), presence of arcs ("arc"), or presence of other signs that might be related to a merger or interactions ("shells/tails/asymetric/mergers"). These flags allow the identification of objects that may be affected for example by a recent merger or by star formation. It may be useful to distinguish them since, e.g., Using GALEX imaging, Rampazzo et al. (2011) found signs of star formation in their sample of 40 nearby early-type galaxies in low density environments that can produce rings or arm-like structures. At high redshift (0.2 to 0.9), Donahue et al. (2015) found in their CLASH BCG sample that BCGs with star formation activity indeed show perturbed morphology, while quiet BCGs have a smooth aspect. We found similar percentages of the various flags on the subsamples with deep NGVS or SDSS optical images. Our flags are thus not affected by the source of the optical image that was examined together with our UV images. The only exception is the arc flag. We visually recognized four arcs, all of them in the NGVS images. Deep high quality exposures are necessary to recognize this feature. In our figures, we identify the galaxies 4 and text for details). In blue: obvious spiral structure (crosses); arcs (circles); strong asymmetries, shells, tails, or signs of merger (squares). The red circles indicate galaxies with the SDSS spectral subclass BROADLINE. The dashed lines indicates 0 (no color gradient) and the dotted line is a LOWESS fit to the dark blue points. flagged for these different categories so that it is possible to see if the one where they are found present differences in a systematic manner with respect to the global sample or not. Aperture issues We also provide in Table 4 the central color, i.e., the color of the innermost aperture. It provides an idea of the central upturn in the case of an extended object, where the population responsible for the upturn might be more concentrated (see Sect. 2.3). When it was not possible to measure a FUV-NUV color in this aperture, the table indicates -99.9 color. We found that the central color correlates quite well with the total color. Figure 2 shows the difference between the central and global color as a function of redshift. Over the range of redshift considered, for an intrinsic constant size, the observed size may change by a factor of about 2 with redshift. The difference in the selected optimal size compensates for this effect. For galaxies with ok flags, we selected mostly the 30-or 50-pixel apertures. We chose 50 pixels for most of the nearby galaxies (z <0.15), and 30 pixels for all the distant ones (z >0.25). Figure 2 shows that we do not introduce a color trend by adopting the central color with respect to the global one. The central color has in general smaller error bars than the global color. The uncertainty is on average reduced by a factor of 3 when using the 20 arcsec aperture with respect to the total aperture. For this reason, in the following we perform an analysis of the UV upturn in this smallest aperture. This has two main advantages. First, by definition, our color does not correspond to the total galaxy, but when we can have a total galaxy, they correlate. 
We are thus not limited to galaxies without contamination in the outer part (i.e., we can use galaxies having only a partial magnitude). Second, the error bar is smaller, which allows us to better distinguish trends. For the same adopted limit on the error bar size, we obtain larger statistics. Of course, this is not adequate for all analysis, thus Table 5 also provides the total magnitude measured as described above. This allows us to obtain what we call the "best sample", i.e., 64 BCGs with a central FUV-NUV color with global uncertainty (sum of the error bars on each side) lower than 1.2 magnitudes, and not contaminated. Sample of local galaxies in GUViCS A previous work on UV upturn in elliptical galaxies inside the Virgo cluster, based on GALEX data, was performed by Boselli et al. (2005). We refer the reader to this work for a detailed analysis of local galaxies. We consider here a similar comparison sample of local galaxies in the Virgo cluster, using the most recent set of data (Boselli et al. 2014). For all galaxies, the UV data have been taken from the GUViCS catalog of UV sources published in Voyer et al. (2014). The optical data in the SDSS photometric bands (Abazajian et al. 2009) have been taken, in order of preference, from the SDSS imaging of the Herschel Reference Survey (Boselli et al. 2010), published in Cortese et al. (2012), or from Consolandi et al. (2016). Given the extended nature of all these nearby sources, all magnitudes have been taken from imaging photometry of extended sources and thus are total magnitudes. Among this sample, we identify the seven galaxies being central to subgroups in Virgo (Boselli et al. 2014) that are probably more similar to our BCGs than the other early-type galaxies in the local sample. Confirmed upturn sample Using optical photometry to exclude recent star formation Brightest cluster galaxies are not systematically quiescent systems: 10 % to 30 % of BCGs in optically selected samples show star formation or AGN activity (Donahue et al. 2015, and references therein). A blue FUV-NUV is thus not necessarily the result of an evolved upturn population and can be the result of star formation activity. BCGs with star formation tend to have morphological signs such as filaments, elongated clumps, or knots (Donahue et al. 2015). From our visual inspection, we have identified a few obvious cases that can be excluded when focusing on the upturn phenomenon. Fortunately, Yi et al. (2011) and Han et al. (2007) showed that combining the FUV-NUV and NUV-r (or FUV-r) colors allows the separation of upturns, compared to galaxies that are simply blue as a result of young populations. Donahue et al. (2010) also showed that UV-optical colors are sensitive to even modest amounts of recent star formation. We thus present in Fig. 3 a FUV-NUV versus NUV-r color diagram for our sample. The NUV-r color is sensitive to young populations even if it does not probe exactly the same restframe waveband at all redshifts: blue colors indicate the presence of young stellar populations. We adopt the qualitative limits of Yi et al. (2011) to confirm the detection of rising UV flux when FUV-NUV is lower than 0.9; and the detection of a young population component when NUV-r is bluer than 5.4 mag. The galaxies that were flagged for spiral structures all fall on the young population side of the NUV-r=5.4 limit, three of them being even among the bluest objects in our galaxies. 
The figure clearly shows that we have a number of BCGs in the best sample at all redshifts falling in the upturn/old stellar population part of this diagram. We consider that these 27 BCGs are likely to present a UV upturn. Another possible contribution to blue FUV-NUV color would be the presence of an AGN. O'Connell (1999) however discusses the contribution of known bright nuclei in elliptical galaxies (M87, NGC4278). Only 10 % of the FUV luminosity is related to the nuclei. While it could affect the FUV-NUV color, this is marginal with respect to our observational uncertainties. Moreover, the subclass from SDSS indicates one AGN and a few BROADLINE objects in our sample. Only three of them pass our UV photometry criteria (uncertainties, noncontamination). These few objects do not distinguish themselves from the rest of our sample, as can be seen in the figures where we marked them. In summary, our results should not be affected by AGNs. Spectroscopic confirmation We show in Fig. 4 the SDSS Mg2 and Hβ Lick indices of the "confirmed upturn" sample (and of the full sample for comparison). These features have been used in the context of UV upturn studies since Faber (1983) and Burstein et al. (1988). SDSS provides values for many of our galaxies and the majority of our upturn sample (24 out of 27). In local galaxies, a trend between the Mg2 index and the strength of the upturn was found, with stronger upturns in more metallic galaxies. With our sample of BCG galaxies, we probe only the most massive of the galaxies with respect to the local sample. We thus do not find a trend with the Mg2 index as in the redshift 0 sample from Boselli et al. (2005). Our confirmed upturn sample behaves as the local "centrals". As can be expected from color evolution of Fig. 1, the BCGs are slightly bluer at higher redshift, but present similar Mg2 values to the more massive of the local early-type galaxies showing upturns. In the nearby universe, the Hβ index has been used to distinguish galaxies with real upturn and those presenting residual star formation. The bottom panel of Fig. 4 shows its value for our full sample and confirmed upturns. We find a very mild trend of bluer UV colors for smaller Hβ indices. This pattern is typical of upturns (Buzzoni et al. 2012). Blue UV colors related to star formation are instead found with higher values of the Hβ index (above 2 Å). Most of our BCGs are found with values between 1 and 2 Å, typical of the passive galaxies with UV upturns. From this section, we conclude that even if our selection of confirmed upturn were based on photometry alone, the spectroscopic information would confirm the status of these galaxies as quiescent with a real UV upturn, similar to that observed in the Local Universe and not polluted by residual star formation. The central galaxies of the various subgroups in Virgo are indicated by larger symbols than the rest of the sample. These centrals are repeated in the other panels (squares). In the middle panel, we show our full and best sample (colored circles). We use the central FUV-NUV color (smaller error bar, larger statistics). Galaxies with contamination or very large error bars (1 σ interval larger than 1.2 mag) are indicated by a pale green circle. For the other galaxies, the color corresponds to the redshift, as indicated in the color bar. Peculiarities found in our visual inspection (blue) or in the SDSS spectral subclass (red) are marked as in Fig. 2. The dotted line is a LOWESS fit to our sample. 
In the right panel, the colored circles are used only for the galaxies with confirmed upturn. Results FUV-NUV color vs. luminosity We compare in Fig. 5 the FUV-NUV color and i-band luminosity of the local sample, our BCG sample, and the upturn sample. While some works have found an evolution with luminosity (Boselli et al. 2005), the color in our BCG sample varies little with luminosity. However, the sample covers a small range of i-band luminosity, since BCG have by definition higher masses (well traced by the i-band luminosity) than the general population of galaxies. The dispersion is reduced, however, when we restrict the sample to galaxies with confirmed upturn with a very mild trend. The central galaxies in subgroups of Virgo are found in the same range of luminosities as our BCGs. In this range, the upturn sample is slightly bluer on average than the local sample, in agreement with the K-correction that can be deduced from Fig. 1. Loubser & Sánchez-Blázquez (2011) studied a sample of 36 nearby BCGs. Their FUV-NUV color (0.79 ± 0.055) is bluer than normal ellipticals of the same mass, but they do not find a strong dependence on mass or other parameters within BCGs. This is similar to what we find at higher redshift on average. FUV-NUV color vs. redshift As seen in the Introduction, the evolution with redshift of the UV upturn can bring direct constraints on the nature of stars producing it. Rich et al. (2005) did not find any evolution of the UV upturn up to redshift 0.2. Brown (2004) suggested that the UV upturn fades progressively with redshift up to redshift 0.6. Ree et al. (2007) compared the observed FUV-V color of nearby ellipticals to the ellipticals in 12 remote clusters up to redshift 0.2, and also compared this color to six objects in two clusters at redshifts around 0.3 and 0.5, suggesting a weak evolution of this color (they did not show the evolution of the FUV-NUV color, but provided the corresponding data in their table). Le Cras et al. (2016) has suggested that the UV upturn appears at redshift 1 in massive galaxies and becomes more frequent at lower redshift. Their work is based on the fitting of line indices in synthesis population models where they can include a UV upturn component. They found that the rate of galaxies better fitted with models including an upturn is of 40 % at redshift 0.6 and 25 % at redshift 1. A weak evolution could be consistent with the binary model of Han et al. (2007), but the constraints are still scarce, and our sample can bring new information. Figure 6 shows our measurement of the observed FUV-NUV color as a function of redshift for the BCG sample. Here, we use the best sample, i.e., all the galaxies with good constraints on the FUV-NUV color, which can be directly compared to published works with similar data. In the next section, we focus instead on the confirmed upturn that can be defined using optical photometry. Our galaxies are compared to the other samples of BCGs with published FUV-NUV colors: the 36 local BCGs of Loubser & Sánchez-Blázquez (2011) for which we indicate the average value and observed range of color, and at intermediate redshifts the BCG samples of Ree et al. (2007) and Donahue et al. (2010). Our sample adds new points at redshift lower than 0.2, and brings unique measurements in the redshifts 0.2 to 0.3, namely a significant increment in look-back time. 
When a FUV-NUV color could be measured with a global range of uncertainty lower than 1.2 magnitude (this is an arbitrary value that we chose in order to balance precision and statistics), we do obtain relatively blue colors. Most of these galaxies are bluer than the flat spectrum that we used as an artificial reference, as expected in the case of an upturn. A LOcally WEighted Scatterplot Smoothing (LOWESS) fit (implemented in Python by Cleveland 1979) performed on our BCGs combined with the two intermediate redshift samples stresses the trend of obtaining bluer colors at higher redshifts. However, this blue color is not necessarily a sign of a UV upturn, as discussed above. We thus study the FUV-NUV color as a function of redshift for confirmed upturn in the next section. Detected UV upturn up to look-back times of 3.5 Gyr In Fig. 7, we finally show the FUV-NUV color as a function of redshift for our BCGs, this time indicating only the BCGs for which a UV upturn is considered very likely based on the colorcolor diagram and the spectroscopic confirmation discussed in section 3. We include in the figure the evolution predicted by different models. At all our redshifts, we selected very massive passive galaxies. It is likely that their stellar mass or i luminosity does not evolve much over the considered period since their star formation must have occurred at much earlier times in such galaxies (Thomas et al. 2005). The FUV-NUV evolution with redshift may thus be close to the actual color evolution of passively aging very massive galaxies. However, we cannot be sure that we select the precursors of redshift 0 upturn galaxies when we select upturns at higher redshift. Thus, the evolution with redshift that we present is not necessarily the redshift occurring in any individual galaxy. Even so, our results bring a new constraint for the stellar evolution models producing a UV upturn since models should at least predict the possibility of an upturn at the redshift when it is observed, which is not necessarily the case for all models. Fig. 7. FUV-NUV color as a function of redshift as in Fig. 6, but keeping only our BCG galaxies with old populations and UV upturn, as defined by the bottom right part of Fig. 3 (confirmed upturn sample). In the top panel, we show the rest-frame color. We corrected the observed points assuming the NGC1399 spectrum since we selected upturn galaxies. In the bottom panel we show the observed color. In each panel our points are compared to model predictions. The shaded area indicates the evolution of the rest-frame color for a SSP model of Han et al. (2007) including binaries (assuming a redshift 5 formation). The dot-dashed lines indicate the observed colors in typical models of Yi et al. (1998) based on single stars, for infall histories from Tantalo et al. (1996) for two galaxy masses (10 12 and 5 10 11 M ⊙ ), and two mass-loss efficiency parameter (0.7 for the two lower and 1 for the two upper curves). The dotted curve is a LOWESS fit to our data. The rest-frame FUV-NUV colors for the SSP models of Han et al. (2007) are shown in the top panel. We computed restframe colors for our galaxies assuming the NGC1399 spectrum. In future model computations, it should be straightforward to obtain the observed color to compare it directly to the values provided in our tables to avoid this step. 
The weak evolution they propose is quite consistent with the lack of evolution found in our sample, and the colors of these galaxies are globally consistent with their predictions; we recall that they do not represent the full population of BCGs, but those presenting a UV upturn, which is a significant fraction (27 out of the 64 galaxies in the best sample). We also show in the figure four typical models among those presented in Yi et al. (1998), for two infall accretion histories and for two values of their mass-loss efficiency parameters. We refer to their paper for a more detailed description of their models. In this case, we reproduce their prediction for the observed color m(1500)-m(2500), which does not use the GALEX bands but probes the UV slope in a similar wavelength range. These models tend to be characterized by larger and more rapid variations than the Han et al. (2007) model. Such large variations are not favored by our data (at least for a fraction of the BCG population). We note that these models were constructed essentially on the basis of redshift 0 constraints, with large uncertainties on the population responsible for the UV upturn. The inclusion of binary evolution by Han et al. (2007) seems to help reproduce the observed upturn at look-back times of around 3 Gyr that we find in many of our galaxies. Ciocca et al. (2017) recently found blue color gradients in ellipticals in one cluster at redshift 1.4, with bluer UV-U colors in their centers, which may indicate the existence of a UV upturn population at even higher redshift. Environment influence on the UV upturn? We verified whether there is a correlation between the FUV-NUV color and other parameters in the maxBCG catalog. We could not find any significant trend, especially with those parameters related to the environment (e.g., number of cluster members). This is consistent with the findings of Yi et al. (2011) and Loubser & Sánchez-Blázquez (2011), suggesting that the UV upturn is intrinsic to galaxies and not directly related to their environment. Conclusions This work extends the size of BCG samples with UV color constraints by a factor of several. For the first time, we bring constraints on this subject in the redshift range 0.2-0.35, almost doubling the look-back time with respect to previous studies. We took advantage of the GUViCS survey to study the FUV-NUV color (probing the UV slope) of 177 massive galaxies of the maxBCG catalog. Even though it is poorly constrained for many of them (owing to their intrinsic faintness and the low exposure time in part of the GUViCS survey), we measured the FUV magnitude for 166 objects in this sample. Removing from this sample the galaxies with relatively large uncertainties and those with contamination, we obtained an interesting constraint (sufficient to distinguish different models) on the (central) FUV-NUV color of 64 BCGs at redshift 0.05 to 0.35. Our most important result is that 27 out of the 64 BCGs with good photometry at these redshifts present the characteristics of a UV upturn. They are selected on the basis of their blue FUV-NUV color and of red optical colors suggesting an old underlying stellar population. The quiescent nature of these galaxies is confirmed by spectroscopic information. They bring important constraints for models of the stellar populations responsible for the UV upturn phenomenon in very massive early-type galaxies. 
The comparison of our new data set with models from the literature favors a mild evolution with redshift like that obtained by models taking into account the effect of binaries on stellar evolution (Han et al. 2007). In conclusion, our data favors the existence of a binary channel to produce very hot stars that can produce a UV upturn, even up to redshift 0.35. This empirical work cannot give a definitive answer, however. Our tabulated data should offer a new constraint for future models of stellar evolution. From the empirical point of view, follow-up work could be done to increase the statistics on the basis of extensive UV and optical data sets. Especially, a search for all massive galaxies, ellipticals, and BCGs in the NGVS optical catalog, and a similar systematic measurement of the UV color, but also of the UVoptical colors would be useful. In the long term, future large UV facilities (e.g., LUVOIR) could allow us to directly probe the UV spectrum of massive galaxies, providing more direct constraints on this still enigmatic phenomenon. Table 2. BCG galaxies in our sample.They are selected from the reference catalog of Koester et al. (2007), from which we list here RA, DEC, photometric redshift, r and i band luminosities. Fig. 2 . 2 Fig. 2. Difference between the central and global FUV-NUV color. Galaxies with contamination or very large error bars (1 σ interval on the central or global color larger than 1.2 mag) are indicated by a pale green circle. The dark blue symbols with error bars are all our other points, including indications from the visual inspection results (see Table4and text for details). In blue: obvious spiral structure (crosses); arcs (circles); strong asymmetries, shells, tails, or signs of merger (squares). The red circles indicate galaxies with the SDSS spectral subclass BROADLINE. The dashed lines indicates 0 (no color gradient) and the dotted line is a LOWESS fit to the dark blue points. Fig. 3 . 3 Fig. 3. Central FUV-NUV vs. global NUV-r color-color diagram in the best sample (colored by redshift). For comparison, pale green dots show the location of the other BCGs. The vertical line indicates NUV-r=5.4,above which there should be no contamination from young stars, and the horizontal line indicates FUV-NUV=0.9, below which the UV slope is consistent with a UV upturn(Yi et al. 2011). Peculiarities found in our visual inspection (blue) or in the SDSS spectral subclass (red) are marked as in Fig.2. Fig. 4 . 4 Fig. 4. FUV-NUV color as a function of the Mg2 and Hβ Lick indices.Galaxies not pertaining to our upturn sample are shown as pale green dots. For the confirmed upturn galaxies, the symbols are colored according to their redshift. Peculiarities found in our visual inspection (blue) or in the SDSS spectral subclass (red) are marked as in Fig.2. In the top panel, the squares show the relation found in the local sample (larger squares for centrals). Fig. 5 . 5 Fig.5. FUV-NUV color as a function of the i-band luminosity. The left panel shows the relation for the redshift 0 early-type galaxies (Sect. 2.7). The central galaxies of the various subgroups in Virgo are indicated by larger symbols than the rest of the sample. These centrals are repeated in the other panels (squares). In the middle panel, we show our full and best sample (colored circles). We use the central FUV-NUV color (smaller error bar, larger statistics). Galaxies with contamination or very large error bars (1 σ interval larger than 1.2 mag) are indicated by a pale green circle. 
For the other galaxies, the color corresponds to the redshift, as indicated in the color bar. Peculiarities found in our visual inspection (blue) or in the SDSS spectral subclass (red) are marked as in Fig.2. The dotted line is a LOWESS fit to our sample. In the right panel, the colored circles are used only for the galaxies with confirmed upturn. Fig. 6 . 6 Fig.6. Observed FUV-NUV color as a function of redshift. The gray squares are the redshift 0 centrals of Virgo subgroups (Sect. 2.7). The diamond is the average value ofLoubser & Sánchez-Blázquez (2011), the error bar corresponding to the range of values they found for local BCGs. Intermediate redshift BCGs ofRee et al. (2007) andDonahue et al. (2010) are shown as orange triangles and magenta pentagons, respectively. The circles correspond to our sample for which we use the central FUV-NUV color (smaller error bar, larger statistics). Galaxies with contamination or very large error bars (1 σ interval larger than 1.2 mag) are indicated by pale green circles, others by dark blue circles with corresponding error bars. Peculiarities are marked as in Fig.2(noted in our visual inspection in blue; noted in the SDSS spectral subclass in red). The solid (dashed) curve show the FUV-NUV color for the upturn spectrum of NGC1399 (for a flat spectrum) as a function of redshift. A LOWESS fit to our sample combined with the points ofDonahue et al. (2010) andRee et al. (2007) is indicated as the dotted line. Table 1 . 1 Boissier et al.: The GALEX Ultraviolet Virgo Cluster Survey (GUViCS). VII.: UV upturn in early-type galaxy FUV-NUV samples Reference Rich et al. (2005) Boselli et al. (2005) Boselli et al. (2014) Ree et al. (2007) Donahue et al. (2010) Loubser et al. (2011) This work (all, FUV measurements) 0.05-0.35 redshift range Galaxies 0-0.2 early types 0 Virgo early types 264 Statistics 172 0 Virgo centrals 7 0-0.2 BCGs 12 0.06-0.18 BCGs 10 a 0 BCGs 36 BCGs 166 This work (best sample b ) This work (confirmed upturn c ) 0.05-0.35 0.05-0.35 BCGs BCGs 64 27 S. Table 2 . 2 ID RA (deg) DEC (deg) phot. redshift L r (10 10 L ⊙ ) L i (10 10 L ⊙ ) continued. ID RA (deg) DEC (deg) phot. 
redshift L r (10 10 L ⊙ ) L i (10 10 L ⊙ ) BCG24 188.57278 BCG129 186.45179 BCG271 191.04632 14.482497 9.766238 7.971470 BCG304 182.92986 8.018963 BCG485 188.94962 15.555812 BCG671 183.13768 6.076853 BCG692 187.03522 8.945062 BCG933 181.14637 11.095937 BCG1076 188.90784 13.177456 BCG1123 190.82471 19.013533 BCG1140 184.92495 0.840782 BCG1152 182.95640 8.439066 BCG1187 183.29939 14.229333 BCG1251 183.28358 12.019687 BCG1355 185.43028 0.329512 BCG1383 185.24322 11.528934 BCG1408 187.79463 11.786205 BCG1520 187.29306 14.728700 BCG1529 191.95320 10.648070 BCG1574 185.60712 6.502490 BCG1635 187.69004 9.170359 BCG1684 186.82361 2.601625 BCG1805 188.03874 13.302351 BCG1934 183.24882 7.898145 BCG1961 184.82473 11.279270 BCG2196 192.43198 1.746452 BCG2199 193.94356 1.980950 BCG2255 190.47262 10.902658 BCG2296 186.72384 12.365892 BCG2322 182.88571 10.805757 BCG2395 180.60957 10.562301 BCG2410 192.83481 10.812324 BCG2566 188.37254 8.835154 BCG2801 183.64345 0.791082 BCG2833 187.20573 8.314187 BCG2907 183.38725 7.421361 BCG3098 186.99976 13.942666 BCG3137 181.02868 1.779234 BCG3194 193.99197 9.706866 BCG3299 187.46419 14.412767 BCG3332 192.11267 12.143343 BCG3571 180.92705 1.031848 BCG3728 189.20382 8.853715 BCG3786 188.77550 19.269088 BCG3809 183.87004 1.840244 BCG3858 180.26887 15.209924 BCG4032 194.52230 5.328463 BCG4048 181.62656 1.820709 BCG4106 181.03875 13.664575 BCG4120 189.24367 14.695208 BCG4189 185.44575 7.806539 BCG4259 185.50020 15.793529 BCG4262 180.89012 11.101584 BCG4264 181.29986 10.435014 BCG4391 185.94397 16.155459 BCG4674 183.69782 0.515216 BCG4702 188.48709 8.626259 BCG4810 180.86246 11.157377 BCG4909 192.35491 6.410631 BCG4919 189.03059 9.055363 0.243050 0.248450 0.275450 0.264650 0.267350 0.132350 0.267350 0.237650 0.261950 0.116150 0.299750 0.135050 0.299750 0.153950 0.156650 0.261950 0.172850 0.275450 0.243050 0.102650 0.291650 0.229550 0.278150 0.132350 0.253850 0.207950 0.288950 0.113450 0.164750 0.229550 0.232250 0.121550 0.248450 0.240350 0.218750 0.148550 0.253850 0.243050 0.240350 0.288950 0.270050 0.259250 0.170150 0.237650 0.256550 0.108050 0.286250 0.288950 0.207950 0.264650 0.135050 0.216050 0.256550 0.253850 0.248450 0.202550 0.251150 0.221450 0.221450 0.167450 17.90260 9.48259 11.28250 18.48990 7.44125 8.40151 16.49510 11.01730 9.87069 9.03394 14.13630 7.61520 10.18440 10.31280 8.38778 11.40520 11.27210 9.69912 10.15970 7.37734 6.05027 7.50154 8.07326 7.24300 9.85226 6.77548 7.68495 5.97685 8.30567 8.58161 8.10823 6.90917 8.20927 7.34158 7.47353 8.42181 5.43397 6.02762 5.82393 6.20903 6.84018 6.88905 6.51664 6.90189 6.56095 4.72527 4.46252 5.12966 6.80957 7.04354 5.56824 6.42034 4.67782 6.57431 6.41880 7.41103 5.73425 7.27635 6.31448 6.52211 21.94700 11.81850 14.54510 24.32040 9.18799 10.62250 19.86410 13.09370 12.29600 10.88040 18.22730 9.68125 12.66650 13.23270 10.61470 13.99760 13.43390 12.29420 12.65270 9.44810 7.71484 9.51897 10.13060 9.11284 11.54580 8.75636 9.16246 7.83891 10.67860 10.37810 9.51045 8.73994 10.14310 8.89927 9.13566 10.39250 6.64345 7.36433 6.94385 7.80151 8.07318 8.58642 8.38270 8.42570 7.96016 5.94890 5.58348 6.41988 8.51718 8.58972 6.99704 7.65803 5.85087 8.10770 7.84344 9.13500 6.95224 8.66375 7.76555 7.90281 Table 2 . 2 continued. ID RA (deg) DEC (deg) phot. 
redshift L r (10 10 L ⊙ ) L i (10 10 L ⊙ ) BCG9130 183.79112 BCG9161 191.99719 BCG9375 188.01972 BCG9402 190.47755 16.107645 1.164337 8.624872 1.465264 BCG9419 187.17228 13.676487 BCG9455 194.07920 18.619761 BCG9478 188.50546 11.973708 BCG9512 189.25606 7.631371 BCG9514 186.97122 16.030243 BCG9686 186.11951 2.465236 BCG9693 186.04074 8.730912 BCG9737 184.80382 14.395416 BCG9799 193.64689 10.002305 BCG10141 185.55197 14.590736 BCG10151 193.53080 3.804609 BCG10217 194.28821 14.722016 BCG10326 191.03480 8.069578 BCG10339 183.42129 1.628454 BCG10372 187.08285 14.608728 BCG10430 180.91580 1.204584 BCG10452 192.03184 11.352027 BCG10815 192.97780 11.842069 BCG11017 185.16711 1.404918 BCG11239 188.99457 13.207563 BCG11241 189.62273 5.904392 BCG11369 189.41545 7.794008 BCG11442 186.81470 9.169160 BCG11482 180.85448 10.811054 BCG11484 185.86695 15.655224 BCG11518 189.07718 15.494217 BCG11534 183.49782 1.602452 BCG11547 187.59676 14.259085 BCG11551 192.94604 3.879207 BCG11571 187.65542 9.283149 BCG11683 187.13218 2.635468 BCG11703 187.70206 2.218458 BCG11762 182.74433 1.192858 BCG11787 187.53217 2.836198 BCG11965 187.10510 13.898171 BCG12016 186.49359 16.200333 BCG12019 186.24668 11.836638 BCG12122 188.76065 15.269065 BCG12155 180.76919 10.410552 BCG12463 185.14656 6.376248 BCG12522 185.62052 14.471922 BCG12632 182.74434 1.612938 BCG12703 184.97424 7.512982 BCG12990 192.18827 1.647833 BCG13101 189.35622 10.383818 BCG13175 193.80410 10.232985 BCG13480 187.03828 8.954130 BCG13497 188.38020 0.190788 BCG13609 189.60870 15.768887 BCG13639 192.86621 2.883622 BCG13672 180.13584 0.336242 BCG13790 185.21390 11.476087 0.272750 0.256550 0.240350 0.288950 0.256550 0.183650 0.178250 0.299750 0.283550 0.207950 0.167450 0.259250 0.191750 0.224150 0.224150 0.221450 0.164750 0.234950 0.286250 0.102650 0.256550 0.175550 0.172850 0.170150 0.240350 0.178250 0.275450 0.210650 0.286250 0.256550 0.207950 0.256550 0.205250 0.240350 0.213350 0.248450 0.275450 0.137750 0.229550 0.221450 0.102650 0.245750 0.232250 0.118850 0.234950 0.297050 0.280850 0.183650 0.264650 0.164750 0.229550 0.183650 0.226850 0.218750 0.253850 0.229550 5.33444 3.76567 3.78301 4.12880 4.65006 4.18043 4.55322 3.55800 3.33753 4.63002 3.74240 4.47137 3.62893 3.88453 3.64730 3.98609 3.84886 4.38894 2.72270 2.99825 3.62367 4.07066 3.62870 3.04897 3.41960 3.78488 3.32945 3.73503 2.89435 3.53120 3.29541 3.39592 3.80870 3.26598 3.38850 4.11068 3.50309 3.34340 2.88396 2.73968 2.16607 2.48031 2.92582 2.53692 2.33325 3.12973 2.18398 2.89727 2.53288 2.35196 1.90207 1.92044 1.95320 1.92269 1.52157 1.16904 6.41501 4.60629 4.68663 4.94586 5.57512 5.33622 5.92121 4.34681 4.15719 5.94480 4.74735 5.52323 4.58484 4.72299 4.39979 4.88795 4.77938 5.31973 3.44370 3.73585 4.52326 5.10382 4.53496 3.80921 4.25677 4.57201 4.23054 4.63222 3.63280 4.22659 4.02701 4.05598 4.80651 4.09370 4.19167 5.11039 4.27821 4.21592 3.54409 3.33520 2.74446 2.99305 3.67983 3.09705 2.75455 3.70466 2.71403 3.58857 3.13100 2.94989 2.18253 2.40199 2.37338 2.40750 1.85562 1.43750 Table 3 . 3 Galaxies in our sample with spectroscopic information provided by SDSS: redshift, sub-class (AGN, STARFORMING, or BROADLINE), Mg2 and Hβ Lick indices and their uncertainty. ID spec. 
redshift sub-class Mg2 ± Hβ ± BCG24 BCG129 BCG271 BCG304 BCG485 BCG671 BCG692 BCG933 BCG1076 BCG1123 BCG1140 BCG1152 BCG1187 BCG1251 BCG1355 BCG1383 BCG1408 BCG1520 BCG1529 BCG1574 BCG1635 BCG1684 BCG1805 BCG1934 BCG1961 BCG2196 BCG2255 BCG2296 BCG2322 BCG2395 BCG2410 BCG2566 BCG2801 BCG2833 BCG2907 BCG3137 BCG3194 BCG3299 BCG3332 BCG3571 BCG3728 BCG3786 BCG3809 BCG3858 BCG4032 BCG4048 BCG4106 BCG4120 BCG4189 BCG4259 BCG4262 BCG4264 BCG4391 BCG4674 BCG4702 BCG4810 BCG4909 BCG4919 BCG4920 BCG4944 0.230558 0.252744 0.272047 0.248118 0.284612 0.137062 0.231012 0.225683 0.251505 0.113173 0.324538 0.144250 0.316567 0.146893 0.158895 0.254507 0.171116 0.285424 0.245753 0.075471 0.321950 0.218690 0.283662 0.137488 0.225744 0.201966 0.116928 0.163604 0.225985 0.229071 0.125230 0.251963 0.251029 0.214718 0.137207 0.240821 0.227791 0.290972 0.263462 0.255169 0.165497 0.225821 0.241753 0.109187 0.324959 0.295877 0.205980 0.230859 0.137094 0.209841 0.288793 0.277109 0.233642 0.183824 0.250176 0.202245 0.225047 0.165049 0.235314 0.242670 0.313916 0.024135 1.624143 0.842013 0.254789 0.018935 0.947647 0.611584 0.274995 0.017862 2.101470 0.584695 0.293060 0.016535 1.441411 0.571690 0.272066 0.026296 1.362648 0.921333 0.288143 0.015167 2.183753 0.400882 0.256044 0.019307 0.602522 0.693763 0.276631 0.017347 2.785665 0.501516 0.271697 0.009468 1.914185 0.278490 0.221946 0.021216 1.258562 0.626185 0.297741 0.011613 1.711954 0.344826 0.285260 0.016848 1.749445 0.504575 0.292536 0.014081 2.338836 0.493430 0.280844 0.009324 1.602184 0.295529 0.273814 0.016362 2.003437 0.512988 0.301538 0.017347 1.501519 0.558158 0.249108 0.021581 2.431830 0.765589 0.262535 0.016176 1.070567 0.548029 BROADLINE 0.278830 0.010293 1.364787 0.294204 0.273304 0.023427 -0.048969 0.706194 0.317027 0.012773 0.914725 0.590046 0.252599 0.018863 0.840112 0.627715 0.311870 0.010023 1.219293 0.280608 0.264401 0.017479 1.400491 0.636460 0.290880 0.020839 1.298720 0.715578 BROADLINE 0.277711 0.016368 1.565580 0.522218 0.285680 0.016465 1.087216 0.615393 0.227800 0.016382 0.740022 0.596499 0.290743 0.010008 1.168643 0.283420 0.275614 0.020150 1.001811 0.694451 0.261442 0.015082 0.970654 0.514771 0.262154 0.022697 1.579777 1.061491 0.239093 0.015403 1.684279 0.348206 0.279896 0.018839 1.752254 0.681716 0.280011 0.017548 1.264112 0.645362 0.287693 0.022433 1.620180 0.707405 0.241738 0.024016 2.952008 0.707217 0.288372 0.016482 1.510066 0.529413 0.270516 0.014891 1.181187 0.518753 0.235692 0.019025 1.411908 0.689607 0.311572 0.010663 1.377183 0.292759 0.203827 0.030595 1.650039 0.828820 0.261821 0.020166 2.665730 0.712405 0.294526 0.016561 1.060984 0.675914 0.254494 0.021146 1.600741 0.790321 0.274778 0.014361 1.429085 0.397715 0.271855 0.017336 2.728652 0.734529 0.309794 0.029587 0.596318 1.027418 0.268839 0.022987 1.420293 0.846605 0.260215 0.027254 1.726228 1.030619 0.265723 0.019309 0.534590 0.642284 0.286761 0.016784 2.222454 0.548012 0.290797 0.026173 2.369166 0.947302 0.264383 0.017178 2.231198 0.752067 0.295891 0.020710 1.875535 0.641725 0.260306 0.022286 1.717322 0.848310 0.255257 0.014690 2.228167 0.508443 Table 3 . 3 continued. ID spec. 
redshift sub-class Mg2 ± Hβ ± BCG5015 BCG5041 BCG5189 BCG5232 BCG5349 BCG5498 BCG5642 BCG5758 BCG5853 BCG5856 BCG5866 BCG5908 BCG6008 BCG6133 BCG6141 BCG6336 BCG6351 BCG6414 BCG6449 BCG6452 BCG6511 BCG6591 BCG6606 BCG6654 BCG6824 BCG6963 BCG7149 BCG7211 BCG7356 BCG7528 BCG7607 BCG7665 BCG7815 BCG7849 BCG7925 BCG7990 BCG7992 BCG8059 BCG8085 BCG8201 BCG8239 BCG8300 BCG8338 BCG8395 BCG8511 BCG8535 BCG8603 BCG8617 BCG8643 BCG8658 BCG8692 BCG8767 BCG8918 BCG8934 BCG9065 BCG9088 BCG9130 BCG9161 BCG9419 BCG9455 BCG9478 0.285776 0.227974 0.132145 0.264033 0.223418 0.242627 0.235521 0.255011 0.296180 0.254792 0.193292 0.231782 0.284115 0.296098 0.190765 0.268462 0.323224 0.231156 STARFORMING 0.242674 0.017245 0.142032 0.657410 0.271097 0.023992 2.314066 0.813229 0.266843 0.016595 1.377062 0.551542 BROADLINE 0.286807 0.010781 1.367809 0.313986 0.252630 0.021272 1.902738 0.689642 0.214403 0.029562 2.294126 1.067352 0.269739 0.021343 1.229566 0.783294 0.260319 0.020424 3.442233 0.696234 0.276979 0.026336 1.460787 0.903020 0.287312 0.030376 1.137156 1.062628 0.280112 0.021290 2.649512 0.642271 0.274434 0.016210 1.366956 0.556039 0.283237 0.020836 0.349850 0.789100 0.327230 0.022497 1.443567 0.754465 0.261117 0.023851 0.710391 0.838279 0.279060 0.024338 2.091054 0.758165 0.241370 0.021738 1.564736 0.736708 0.258681 0.254429 2.003296 0.699393 0.164279 0.279094 0.013334 1.413066 0.426180 0.225377 0.224416 0.019607 2.907984 0.636684 0.135480 0.297471 0.012751 1.132342 0.343998 0.246205 0.199980 0.023050 1.667743 0.839960 0.267358 0.211951 0.024373 0.817268 0.777089 0.163213 0.258033 0.147835 0.271783 0.012884 2.711296 0.432613 0.237128 0.247838 0.022450 0.491749 0.860989 0.238646 0.254240 0.019507 2.450907 0.670883 0.185485 0.314032 0.018754 0.523212 0.619710 0.188359 0.250326 0.019063 1.907109 0.610453 0.156296 0.290713 0.016277 1.703787 0.488739 0.297713 0.288204 0.319072 0.032785 1.828393 1.146880 0.110779 0.260042 0.010400 1.710873 0.299358 0.297880 0.138473 0.238749 0.025068 0.793702 0.696294 0.157149 0.264521 0.020285 1.181411 0.645111 0.246665 0.226726 0.231797 0.020208 2.718560 0.744617 0.268376 0.297682 0.182181 0.291955 0.014758 0.979427 0.477231 0.282993 0.136322 0.249261 0.017110 1.604781 0.410299 0.134509 0.246746 0.018423 1.434693 0.441388 0.164692 0.273411 0.016561 1.824647 0.522287 0.247998 0.260107 0.258923 0.284193 0.266323 0.157578 0.278197 0.018817 2.075243 0.562048 0.293149 0.143465 AGN 0.241417 0.272666 0.255641 0.168573 0.227542 0.021991 2.536526 0.711398 0.151029 0.235051 0.016479 2.477687 0.594636 Table 3 . 3 continued. ID spec. 
redshift sub-class Mg2 ± Hβ ± BCG9512 BCG9514 BCG9686 BCG9693 BCG9799 BCG10141 BCG10151 BCG10217 BCG10326 BCG10339 BCG10372 BCG10430 BCG10452 BCG10815 BCG11017 BCG11239 BCG11369 BCG11442 BCG11482 BCG11547 BCG11551 BCG11571 BCG11683 BCG11703 BCG11787 BCG12019 BCG12463 BCG12990 BCG13175 0.310271 0.292013 0.221717 0.176730 0.199863 0.225204 0.208103 0.224653 0.161365 0.221089 0.284212 0.077954 0.254391 0.180769 0.159263 0.178699 0.176449 0.288675 0.230001 0.255540 0.219413 0.260091 0.217616 0.247557 0.134161 0.100942 0.121134 0.199584 0.153552 0.284064 0.021264 2.206975 0.748415 0.268354 0.018521 1.542066 0.608530 0.257792 0.017475 0.461349 0.614137 0.253260 0.022747 1.615399 0.890202 0.246809 0.032352 2.047414 1.186051 0.192641 0.018991 3.032758 0.647151 0.275198 0.016163 1.633791 0.488293 0.282606 0.023190 0.000000 -1.000000 0.254954 0.014284 1.656597 0.381681 0.246739 0.019481 0.005139 0.654559 0.303790 0.018146 1.295591 0.555865 0.224435 0.026502 1.138037 0.841920 0.236906 0.016402 2.130898 0.570206 0.195462 0.025990 2.273386 0.970192 0.246214 0.029532 1.130792 1.433648 0.261489 0.019548 1.456438 0.520110 0.260534 0.015678 1.443693 0.437083 0.242750 0.029549 1.733223 1.099187 0.212377 0.019158 0.353287 0.683562 Table 4 . 4 UV photometry measurements (see Sect. 2.3). Table 4. continued. ID BCG11442 BCG11482 BCG11484 BCG11518 BCG11534 14669 13644 FUV NUV Aperture exp.(s) exp.(s) (pixels) 5227 6713 30 -16.88 0.18 FUV +1σ (AB) 1295 1194 30 -17.49 0.12 6583 8691 30 -17.63 0.06 1746 1609 30 -16.73 0.32 30 -15.70 0.18 BCG11547 1651 1540 30 -99.90 0.00 -15.77 -16.75 0.36 -1σ NUV +1σ (AB) 0.21 -16.42 0.36 0.14 -18.54 0.08 0.06 -17.60 0.14 0.47 -18.20 0.12 0.21 -16.55 0.09 BCG11551 1523 1584 70 -19.51 0.25 0.32 -19.76 0.20 BCG11571 3196 6403 30 -16.04 0.39 0.63 -16.83 0.16 BCG11683 29672 45423 50 -15.95 0.31 0.43 -16.80 0.17 BCG11703 32676 48586 30 -16.31 0.10 0.12 -17.03 0.05 BCG11762 14920 15286 30 -16.76 0.15 0.18 -17.47 0.08 BCG11787 29675 45494 50 -15.58 0.18 0.21 -16.13 0.08 BCG11965 3350 4658 30 -15.64 0.63 1.64 -15.81 0.36 BCG12016 4617 4269 30 -15.92 0.26 0.34 -17.54 0.09 BCG12019 7934 9315 50 -15.62 0.24 0.31 -17.00 0.08 BCG12122 1769 3130 30 -15.86 0.52 1.05 -15.56 0.52 BCG12155 1406 1296 30 -16.91 0.32 0.46 -17.12 0.21 BCG12463 29943 32240 50 -15.28 0.17 0.20 -16.23 0.07 BCG12522 1657 4605 30 -99.90 0.00 -15.41 -16.40 0.32 BCG12632 14920 15286 30 -15.90 0.32 0.45 -16.68 0.20 BCG12703 3232 7435 30 -16.57 0.24 0.31 -16.67 0.23 BCG12990 1390 1264 50 -18.56 0.13 0.15 -18.92 0.09 BCG13101 1810 1721 30 -99.90 0.00 -15.21 -16.56 0.45 BCG13175 166 1756 30 -16.00 0.46 0.82 -16.85 0.13 BCG13480 4801 6313 30 -15.54 0.43 0.72 -18.34 0.04 BCG13497 4641 10557 30 -13.89 0.97 113.79 -15.45 0.29 BCG13609 1746 3064 30 -15.77 0.51 1.00 -16.13 0.44 BCG13639 1340 1209 30 -14.08 1.72 113.98 -15.40 0.92 115.30 -1σ FUV-NUV center 0.54 -0.24 0.08 1.13 0.16 -0.31 0.13 1.15 0.10 0.82 0.53 -99.90 0.25 -0.12 0.19 1.00 0.20 0.91 0.05 0.55 0.09 0.79 0.08 0.45 0.53 -0.02 0.10 1.48 0.09 1.39 1.02 -0.21 0.26 0.24 0.08 0.85 0.46 1.39 0.25 0.34 0.29 0.15 0.09 0.34 0.78 -0.16 115.23 +1σ 0.37 0.14 0.14 0.29 0.20 0.00 0.18 0.46 0.22 0.09 0.21 0.11 0.85 0.22 0.14 0.70 0.53 0.10 1.44 114.35 -1σ 0.33 0.15 0.13 0.33 0.21 0.00 0.18 0.60 0.24 0.10 0.22 0.11 1.07 0.25 0.15 0.72 0.60 0.10 0.40 0.40 0.43 0.44 0.08 0.08 1.60 0.14 1.45 0.65 1.26 0.05 3.34 0.53 0.98 0.40 1.81 1.20 113.21 0.75 -0.51 0.91 0.72 2.48 2.69 113.39 BCG13672 1858 1711 30 -17.42 0.16 0.19 -16.49 0.53 1.08 -1.37 1.08 0.62 BCG13790 5591 6627 30 -14.81 0.64 1.79 
-15.23 0.61 1.49 -0.05 2.03 1.45 Table 5 . 5 Flags and notes for galaxies in our sample (see Sect. 2.5). number, page 18 of 21 S.Boissier et al.: The GALEX Ultraviolet Virgo Cluster Survey (GUViCS). VII.: ID Flag FUV Flag NUV Note BCG24 ok BCG129 ok BCG271 part. BCG304 ok BCG485 ok BCG671 ok BCG692 ok BCG933 ok BCG1076 ok BCG1123 ok BCG1140 ok BCG1152 ok BCG1187 contam. BCG1251 ok BCG1355 ok BCG1383 ok BCG1408 contam. BCG1520 ok BCG1529 contam. BCG1574 ok BCG1635 ok BCG1684 ok BCG1805 ok BCG1934 ok BCG1961 contam. BCG2196 contam. ok ok part. ok ok ok ok ok ok ok ok ok contam. ok ok ok contam. ok contam. ok ok ok ok ok contam. contam. Arcs Arcs shells/tails/asymetric/mergers shells/tails/asymetric/mergers Article Table 5 . 5 continued.Boissier et al.: The GALEX Ultraviolet Virgo Cluster Survey (GUViCS). VII.: ID Flag FUV Flag NUV Note BCG6824 ok BCG6963 contam. BCG7149 contam. BCG7211 part. BCG7356 contam. BCG7528 ok BCG7607 ok BCG7665 ok BCG7815 contam. BCG7849 ok BCG7925 ok BCG7970 ok BCG7990 ok BCG7992 ok BCG8059 ok BCG8085 contam. BCG8201 contam. BCG8239 ok BCG8300 ok BCG8338 contam. BCG8395 ok BCG8511 ok BCG8535 ok BCG8603 contam. BCG8617 ok BCG8643 contam. BCG8658 ok BCG8692 contam. BCG8767 contam. BCG8918 part. BCG8934 ok BCG9065 contam. BCG9088 contam. BCG9106 contam. BCG9130 ok BCG9161 ok BCG9375 ok BCG9402 ok BCG9419 ok BCG9455 ok BCG9478 contam. BCG9512 contam. BCG9514 contam. BCG9686 ok BCG9693 contam. BCG9737 ok BCG9799 contam. BCG10141 ok BCG10151 ok BCG10217 ok BCG10326 ok BCG10339 contam. BCG10372 ok BCG10430 ok BCG10452 ok BCG10815 ok BCG11017 ok BCG11239 ok BCG11241 ok BCG11369 contam. BCG11442 ok ok contam. ok part. contam. ok ok ok contam. ok ok ok ok ok ok contam. contam. ok ok contam. ok ok ok contam. ok contam. ok contam. contam. part. ok contam. contam. contam. ok ok ok ok ok ok contam. contam. contam. ok contam. ok contam. ok ok ok ok contam. ok ok ok ok ok ok ok contam. ok Spiral structure Spiral structure shells/tails/asymetric/mergers shells/tails/asymetric/mergers Arcs Spiral structure shells/tails/asymetric/mergers shells/tails/asymetric/mergers shells/tails/asymetric/mergers Article number, page 20 of 21 http://www.galex.caltech.edu/researcher/techdoc-ch5.html Article number, page 21 of 21 Acknowledgements. This research is based on observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. We wish to thank the GALEX Time Allocation Committee for the generous allocation of time devoted to GUViCS. This research made use of Montage, funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. Montage is maintained by the NASA/IPAC Infrared Science Archive.
01705587
en
[ "spi.meca.mefl" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01705587/file/CFM.pdf
Sylvain Chateau email: [email protected]&[email protected] Sébastien Poncet Julien Favier email: [email protected]&[email protected] Umberto D'ortona Metachronal Metachronal wave formation in 3D cilia arrays immersed in a two-phase flow Keywords: Lattice-Boltzmann method, Immersed Boundary, Metachronal waves, Mucociliary clearance des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Ciliary propulsion is a universal phenomenon developed by nature as a way to propel fluids. It can be found in almost every living organisms, going from the prokaryotic bacteria to mammals. In the particular case of the mucociliary clearance, cilia are found in tufts and serve to propel the mucus, a complex fluid whose purpose is to protect the bronchial epithelium against foreign particles. This epithelium is covered by a fluid layer called the Airways Surface Liquid (ASL), usually considered to be the superposition of two layers : the periciliary liquid (PCL) and the mucus just above it. The PCL is generally considered to be a Newtonian fluid similar to water. In normal epithelium, its depth is around 6 µm, which allows the tips of the cilia (length of 7 µm) to emerge into the mucus layer. The mucus is a highly non-Newtonian fluid which possesses characteristics such as visco-elasticity and thixotropy [START_REF] Lai | Micro-and macrorheology of mucus[END_REF]. Its height varies between 5 to 100 µm [START_REF] Widdicombe | Regulation of human airway surface liquid[END_REF]. The main purpose of the mucus is to act as a barrier against the foreign particles that may enter the human body (dust, pollutants, bacteria) by catching them. The cilia protrude on the epithelial surface and serve to transport the mucus up to the stomach, where it can be digested. Their motion can be decomposed into two steps : the stroke and the recovery phases, which take around 1/3 and 2/3 of the beating period respectively. Their beating frequency varies between 10 and 20 Hz. Note that in creeping flows, only the spatial asymmetry is essential for the cilia to generate propulsion [START_REF] Khaderi | Breaking of symmetry in microfluidic propulsion driven by artificial cilia[END_REF]. Defects in the mucociliary process usually lead to stagnant mucus, which induces breathing difficulties. Infections that may cause death also develop. It is then of prime importance to understand the hidden mechanisms behind the mucociliary clearance which allow thousands of cilia to act together for transporting mucus. Indeed, early experimental observations [START_REF] Sleigh | The biology of Cilia and Flagella[END_REF] have shown that cilia do not usually beat randomly, but instead adapt their beatings accordingly to their neighbors, giving birth to metachronal waves (MCW). If the phase lag ∆Φ between two cilia is positive (0 < ∆Φ < π), then the waves are called antipleptic MCW and move in the same direction as the fluid. On the contrary, if the phase lag ∆Φ between two cilia is negative (-π < ∆Φ < 0), then the waves are called symplectic MCW and move in the opposite direction. For the particular case where ∆Φ = 0, the cilia beat synchronously, and for ∆Φ = π a standing wave appears. In this work, the formation of MCW is studied in a two-layer environment. 
To the best of our knowledge, the present study is the first one where both antipleptic and symplectic MCW are seen to emerge using a simple feedback law, while usually only one type of wave is observed, as in [START_REF] Gueron | Energetic considerations of ciliary beating and the advantage of metachronal coordination[END_REF]. Moreover, while many studies considered single-layer fluid [START_REF] Elgeti | Emergence of metachronal waves in cilia arrays[END_REF], only few had considered two-layer environments when studying MCW [START_REF] Mitran | Metachronal wave formation in a model of pulmonary cilia[END_REF]. A parametric study where the metachrony is imposed is also performed. A particular value of phase lag ∆Φ = π/4, corresponding to an antipleptic MCW, is shown to be the more suitable for transporting and mixing the fluids. Finally, the numerical method possesses the following advantages : (i) viscosity ratios up to O(10 2 ) can be achieved [START_REF] Porter | Multicomponent interparticlepotential lattice Boltzmann model for fluids with large viscosity ratios[END_REF], and (ii) the mucus-PCL interface emerges intrinsically from the model. This solver is the only one that combines all these capabilities. Numerical method Geometrical modeling The computational domain is a box composed of N x × N y × N z points regularly spaced. The cilia are hair-like structures modeled by a set of 20 Lagrangian points, such that their base point is located at z = 0 along the wall. The spacing between two neighbouring cilia is a in the x and y-directions, and their length L is set to 15 lattice unit (lu). Their motion is imposed to be in the (x,z) plane, and the MCW propagate in the x-direction only. Thus, cilia located at the same value of x beat in phase with each other. The ratio h/H between the PCL thickness and the height of the domain is set to 0.27 for all simulations. The equations of motion for the cilia are inspired from [START_REF] Chatelin | Méthodes numériques pour l'écoulement de Stokes 3D : fluides à viscosité variable en géométrie complexe mobile ; application aux fluides biologiques[END_REF] and reproduce the beating pattern by resolving a 1D transport equation along a parametric curve. With such a beating pattern, the essential ingredients of the beating are captured, such as the angular amplitude between the beginning and the end of a stroke phase (θ = 2π 3 ), which agrees well with experimental data [START_REF] Sleigh | The Propulsion of Mucus by Cilia[END_REF]. Figure 1(a) gives a schematic view of the geometry, and figure 1(b) an insight of the stroke and recovery phases as modeled in the present work. In the simulations, the PCL thickness h varies between 0.6L to 0.9L. Thus, the cilia tips enter the mucus phase when the cilia are in the stroke phase, as observed in real epithelium configurations. Both the PCL and the mucus are considered Newtonian fluids. The kinematic viscosity of the mucus is ν m = 10 -3 m 2 /s, and the viscosity ratio r ν = ν m /ν P CL between the mucus and PCL is set to 20. A recent study [START_REF] Chatelin | A parametric study of mucociliary transport by numerical simulations of 3D non-homogeneous mucus[END_REF] has indeed shown that mucus transport was maximized for such viscosity ratios when considering a stiff transition between the mucus and PCL. The beating period of the cilia is N it × dt (with dt = 1 using the classical LBM normalization), N it being the number of iterations for a cilium to perform a complete beating cycle. 
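To make the cilium discretization concrete, the sketch below builds the Ns = 20 Lagrangian points of a single cilium and sweeps the 2π/3 angular amplitude quoted above. It deliberately replaces the actual beating pattern (obtained by solving a 1D transport equation along a parametric curve, Chatelin) with a simple rigid-rod motion, so it only illustrates the bookkeeping of points and phases, not the authors' kinematics; all numerical values other than Ns, L, and the amplitude are placeholders.

```python
# Minimal sketch (not the authors' kinematics): a cilium discretized into Ns
# Lagrangian points, base at z = 0, orientation sweeping a 2*pi/3 amplitude over
# one beating period. The real beating pattern is not reproduced here.
import numpy as np

Ns = 20        # Lagrangian points per cilium (as in the text)
L = 15.0       # cilium length in lattice units (as in the text)
N_it = 1000    # iterations per beating cycle (placeholder value)

def cilium_points(step):
    """Positions of the Ns points at a given step of the beating cycle."""
    phase = 2.0 * np.pi * step / N_it
    # Tilt oscillating by +/- pi/3 around the vertical, i.e. a 2*pi/3 total sweep.
    theta = (np.pi / 3.0) * np.sin(phase)
    s = np.linspace(0.0, L, Ns)               # arclength along the (rigid) cilium
    x = s * np.sin(theta)
    z = s * np.cos(theta)
    return np.column_stack([x, np.zeros(Ns), z])   # beating confined to the (x,z) plane

snapshot = cilium_points(step=250)   # a quarter of the beating period
print(snapshot[-1])                  # position of the cilium tip
```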
Algorithm The numerical model is described in [START_REF] Li | An improved explicit immersed boundary method to couple with Lattice Boltzmann model for single-and multi-component fluid flows[END_REF], and validated on several configurations involving flexible and moving boundaries in multiphase flows, with a 2 nd order accuracy. The fluid part is first solved on a Cartesian grid with LBM using the Bhatnagar-Gross-Krook (BGK) model and a D3Q19 scheme. The collision and streaming steps proper to the LBM method are first performed. The model of [START_REF] Porter | Multicomponent interparticlepotential lattice Boltzmann model for fluids with large viscosity ratios[END_REF] is used to simulate the two-phase flow as it allows to minimize the magnitude of spurious currents near the fluid-fluid interface. More importantly, it also enables to consider higher density or viscosity ratios. Then, values for the fluid velocity are interpolated at the Lagrangian points. It allows to compute an IB force to be spread onto the neighbouring Eulerian fluid nodes in order to ensure the no-slip condition along the cilia. The macroscopic fluid velocity is then updated. Note that the geometric shape of the beating is fixed in all simulations. To save computing time, this shape is only computed once, decomposed into a finite number of steps (snapshots) during a beating period, and stored in memory. If necessary, an interpolation can be done in order to have the velocity values along the cilia in between two steps. Since the model of [START_REF] Porter | Multicomponent interparticlepotential lattice Boltzmann model for fluids with large viscosity ratios[END_REF] uses a Shan-Chen (SC) repulsive force [START_REF] Shan | Simulation of nonideal gases and liquid-gas phase transitions by the lattice Boltzmann equation[END_REF], surface tension effects emerge intrinsically at the PCL-mucus interface. m for the force imposed on the mucus phase and F i P CL for the force imposed on the PCL phase-, are projected on the velocity vector V s i corresponding to the s th Lagrangian point. The lever arm is L p = X p ⊗ V s i / V s i . Periodic boundary conditions are used in the x and y directions, while no-slip and free-slip boundary conditions are used at the bottom and top walls respectively. The size of the computational domain ranges from 50 lu to 400 lu depending on the configuration considered, except for the size along the z direction which is always set to 50 lu. Taking advantage of the local character of the LBM algorithm, the code is parallelized using MPI libraries (Message Passing Interface), by splitting the full computational domain into 9 sub-domains of size (N x /3, N y /3, N z ). More details on the numerical model can be found in [START_REF] Li | An improved explicit immersed boundary method to couple with Lattice Boltzmann model for single-and multi-component fluid flows[END_REF]. Feedback law The basic idea is to modulate the beating motion of the cilia as a function of the fluid motion. To do so, it is assumed that all cilia follow the same beating pattern, meanwhile a feedback of the fluids, which consists in accelerating or slowing down the motion of the cilia, is introduced. Each cilium is discretized with N s = 20 Lagrangian points. Let s be the subscript corresponding to the s th Lagrangian point, starting from the base tip at s = 1, and V s i the velocity on the s th Lagrangian point of the i th cilium. 
For each cilium, we define the average velocity over all Lagrangian points V i , which is linked to the number of steps (snapshots) this cilium will skip during one iteration of the fluid solver. The fluid feedback onto the cilia thus consists in modifying the norm of the velocity vector V i , while its direction remains unchanged. The feedback is computed in three steps. First, the IB forces corresponding to mucus and PCL are projected onto the corresponding velocity vectors for each Lagrangian point. Then, an estimate of the feedback is computed based on the torques of the forces for each Lagrangian point. Finally, the beating pattern of the cilia is adjusted at the beginning of the next time step. By doing so, only the norm of the velocity vector, but not its direction, is modified. A coupling parameter α is also introduced to control both the intensity of the fluid feedback and the direction of the wave propagation : V i = V i ± α dV i . Figure 2 gives a schematic view of the forces and variables considered. More details regarding the feedback law can be found in [START_REF] Chateau | Transport efficiency of metachronal waves in 3D cilia arrays immersed in a two-phase flow[END_REF]. Larger3.tif F 3 -Antipleptic MCW emerging from an initially random state of the cilia. 1024 cilia arranged in a 32 × 32 square are considered on a computational domain of size (N x = 161, N y = 161, N z = 32) with a cilia spacing a/L = 0.23. The mucus phase is in red and the PCL phase in blue. The color bar indicates the phase of a particular cilium within one beating period, which is represented by a circle at its base. Top : 3D view of the system. Bottom : 2D view of the same system in a (x,y) plane to highlight the 3D modulation in the z-direction. Results Emergence of MCW Using the feedback law previously introduced, both antipleptic and symplectic MCW are seen to emerge. The parameter α plays a role in the emergence : it controls both the direction of the wave propagation and the time for synchronization to occur (higher absolute values of α reduce the time for MCW to emerge). Figure 3 shows an antipleptic MCW emerging from cilia beating initially randomly, using α = -3.5. In this figure the presence of two wavelengths can be noticed. The phase lag between neighbouring cilia is ∆Φ = π/8. Moreover, assuming a least effort behaviour of the cilia (meaning α < 0), antipleptic MCW are seen to emerge for small cilia spacings while symplectic MCW occur for higher cilia spacings (see figure 4). A tendency can also be observed (black line) and needs to be studied in more details. This result is important as it shows that natural cilia, who are usually highly packed, should adopt a least effort behaviour to organize in antipleptic MCW. However, with the present model, a least effort behaviour (α < 0) induces a stroke phase longer than the recovery phase. On the contrary, assuming that the cilia beat faster when encountering a resistance (meaning α > 0), the stroke phase becomes faster than the recovery phase as observed in real cilia configurations. While hydrodynamics interactions are sufficient to explain the synchronization of neighbouring cilia, they are not sufficient to explain the temporal asymmetry present in the beating pattern. Hence, the conclusion is that other biological parameters must also play a role. Note that the white space between the green and blue points (just above the black line) in figure 4 corresponds to simulations where no MCW emerged. 
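The snippet below gives one possible reading of the coupling rule V_i = V_i ± α dV_i described above. The exact torque-based expression for dV_i is not spelled out in this text, so the projection and lever-arm terms used here are only an illustration of the described ingredients (IB forces projected on the local velocity, lever arm with respect to the base), not the authors' implementation; the demonstration data are random placeholders.

```python
# Schematic sketch of the feedback rule V_i <- V_i + alpha * dV_i described above.
# The estimate of dV_i below is an illustrative reading of the text, not the
# implementation from the paper.
import numpy as np

alpha = -3.5   # coupling parameter (value quoted for the case of Fig. 3)

def beating_speed_update(V_avg, points, velocities, forces_mucus, forces_pcl):
    """Return an updated average beating speed for one cilium (illustrative only)."""
    dV = 0.0
    for Xp, Vs, Fm, Fp in zip(points, velocities, forces_mucus, forces_pcl):
        Vnorm = np.linalg.norm(Vs)
        if Vnorm == 0.0:
            continue
        # Projection of the interpolated IB forces on the local velocity direction.
        f_along = np.dot(Fm + Fp, Vs) / Vnorm
        # Lever arm |X_p x V_s| / |V_s| with respect to the cilium base (cf. Fig. 2).
        lever = np.linalg.norm(np.cross(Xp, Vs)) / Vnorm
        dV += lever * f_along
    return V_avg + alpha * dV   # the sign of alpha selects the propagation direction

# Tiny demonstration with random placeholder data for one cilium of 20 points.
rng = np.random.default_rng(0)
pts, vel = rng.normal(size=(20, 3)), rng.normal(size=(20, 3))
fm, fp = rng.normal(size=(20, 3)), rng.normal(size=(20, 3))
print(beating_speed_update(1.0, pts, vel, fm, fp))
```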
Transport efficiency of MCW In order to quantify the efficiency of the MCW to transport fluids, a parametric study where the metachrony is imposed, has been performed. In the following, the stroke and recovery phases are fixed and take the same amount of time in order to study the influence of the spatial asymmetry only. Indeed, it is the only mechanism that is important for inducing motion in creeping flows [START_REF] Khaderi | Breaking of symmetry in microfluidic propulsion driven by artificial cilia[END_REF]. In order to save CPU time, the results presented here have been obtained using a Reynolds number of 20 larger than the one usually seen for such configurations (Re ≈ 10 -5 ). Thus, inertial effects are introduced in the model. However, it has been verified that such effects are small and do not modify the behaviour of MCW. The only notable quantitative differences are found for fully synchronized cilia : for such configurations, the inertial effects cancel the reversal of the flow that should occur during the recovery motion. More details regarding the effect of inertia can be found in [START_REF] Chateau | Transport efficiency of metachronal waves in 3D cilia arrays immersed in a two-phase flow[END_REF]. The total volume of fluid displaced during a beating cycle for the different phase lags is compared in figure 5. For a small cilia spacing (a/L = 1.67), the efficiency of the antipleptic metachrony is obvious. It agrees well with the results of [START_REF] Khaderi | Microfluidic propulsion by the metachronal beating of magnetic artificial cilia : a numerical analysis[END_REF] who observed a larger net flow produced by antipleptic metachrony for this value of cilia spacing. Symplectic waves appear to be less or at best equally efficient than antipleptic motion, except for ∆Φ = -7π/8 for a/L = 1.67 where there is a peak in the total displaced volume of flow. There are two neighbouring maxima at ∆Φ = π/4 and ∆Φ = π/2 for a/L = 1.67 and a/L = 2 respectively, indicating that specific phase lags are more able to generate a strong flow. A displacement ratio that will quantify the capacity of a given system to transport particles, with respect to a given amount of power, is now introduced. In that context, η 1 is defined by the mean fluid displacement over the x-direction during one beating cycle, divided by the mean power P * that a cilium had to spend during this beating cycle. Since the main purpose of mucociliary clearance is to transport mucus, and since experimental data [START_REF] Winters | Roles of hydration, sodium, and chloride in regulation of canine mucociliary transport system[END_REF] report that the total thickness in the vertical direction of the mucus layer is in the range [1.4L; 10L], values for the displacement were taken on an arbitrary plane z/L = 3.2 near the extremity of the domain. To obtain a value for the displacement, the instantaneous average fluid velocity over the x-direction is computed, and the resulting value is then multiplied by the period of a full beating cycle, giving the mean displacement < d x > over one period on the (x,y,3.2L) plane. By dividing this mean displacement with appropriate quantities, a dimensionless expression of the displacement ratio is obtained : η 1 = (< d x > N cil )/(λP * ) , where N cil is the number of cilia and λ the metachronal wavelength. For the synchronized case, i.e. ∆Φ = 0, λ is infinite and thus the size of the domain over the x direction was used and divided by the number of cilia. 
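A small worked example of the displacement ratio defined above is given below; all numerical values are placeholders in lattice units and are not taken from the simulations.

```python
# Worked sketch of eta_1 = (<d_x> * N_cil) / (lambda * P*), with placeholder numbers.
N_cil = 64          # number of cilia (placeholder)
lam = 60.0          # metachronal wavelength in lattice units (placeholder)
P_star = 2.0e-3     # mean power spent by one cilium over a cycle (placeholder)

u_mean_x = 1.5e-4   # instantaneous average fluid velocity on the z = 3.2L plane (placeholder)
period = 1000.0     # beating period in solver iterations (placeholder)
d_x = u_mean_x * period          # mean displacement over one beating cycle

eta_1 = (d_x * N_cil) / (lam * P_star)
print(f"eta_1 = {eta_1:.3f}")

# Synchronized case (delta_phi = 0): lambda is infinite, so the text replaces it by
# the domain length along x divided by the number of cilia in a row.
```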
On Figure 6, one can see the displacement ratio η 1 for different cilia spacings a/L and phase lags ∆Φ. A clear peak exists for the smaller cilia spacing (a/L = 1.67) for a phase lag ∆Φ = π/4, showing that such antipleptic MCW are more able to transport fluids. On the other hand, symplectic MCW are often less, or at best equally efficient, than antipleptic MCW. The mixing capacity of the systems has also been evaluated. To do so, the average stretching rate during a beating period has been computed. Figure 7 shows the corresponding results. One can see that the antipleptic MCW corresponding to ∆Φ = π/4 and a/L = 1.67 present the best capacity for stretching Conclusion Considering a simple feedback law, both antipleptic and symplectic MCW were observed to emerge spontaneously in a two-layer environment. Considering a least effort behaviour for cilia, antipleptic MCW were obtained for the smallest cilia spacing studied, while symplectic MCW were observed for larger ones. The resulting beating pattern consisted in a slow stroke phase and a fast recovery phase, which is the opposite of what is observed in nature. Hence, other biological parameters must play a role in the cilia beating pattern, and hydrodynamic interactions are not sufficient to fully explain their motion. A parametric study has also been performed and shows that the antipleptic MCW with ∆Φ = π/4 are the most efficient in transporting and mixing fluids. a) Schematic view of the computational domain. The present case corresponds to an antipleptic MCW. The domain is filled with PCL (in blue) and mucus (in red). (b) Beating pattern of a cilium with the parametric equation used. Steps 1 to 6 correspond to the recovery phase, and steps 7 to 9 to the stroke phase. F 2 - 2 Schematic view of a cilium with the corresponding forces exerted on the fluids. The interpolated IB forces applied by the i th cilium onto the fluids -respectively F i F 4 - 4 Emergence of MCW as a function of the cilia spacing a/L and number N of cilia in a row. Green : Antipleptic MCW ; Blue : Symplectic MCW. The markers respectively represent : (+) : 1 wavelength ; (•) : 2 wavelengths ; and ( ) : 3 wavelengths. The black line illustrates the presence of a "trend". F 5 - 5 Total dimensionless displaced flow volume generated by an array of cilia over a beating cycle for different phase lags and cilia spacings. + : a/L = 1.67 ; : a/L = 2 ; * : a/L = 2.53 ; : a/L = 3.33. F 6 - 6 Displacement ratio η 1 as a function of the phase lag ∆Φ for different cilia spacings a/L. 4 F 7 - 47 Average stretching rate computed between 0 < z < 1.4L as a function of the phase lag ∆Φ for different cilia spacings a/L. the fluids. A more complete study is ongoing. Acknowledgment The authors would like to thank the Natural Sciences and Engineering Research Council of Canada for its financial support through a Discovery Grant (RGPIN-2015-06512). This work was granted access to the HPC resources of Compute Canada and Aix-Marseille University (project Equip@Meso ANR-10-EQPX-29-01).
01697476
en
[ "sdv.bbm.bm" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01697476/file/01697476%20nihms842761.pdf
Andrea N Kravats Shannon M Doyle email: [email protected]. Joel R Hoskins Olivier Genest Erin Doody Sue Wickner email: [email protected] Interaction of E. coli Hsp90 with DnaK involves the DnaJ binding region of DnaK Keywords: Hsp40, CbpA, HtpG, molecular chaperone, protein remodeling Hsp90 is a widely conserved and ubiquitous molecular chaperone that participates in ATPdependent protein remodeling in both eukaryotes and prokaryotes. It functions in conjunction with Hsp70 and the Hsp70 cochaperones, an Hsp40 (J-protein) and a nucleotide exchange factor. In E. coli the functional collaboration between Hsp90 Ec and Hsp70, DnaK, requires that the two chaperones directly interact. We used molecular docking to model the interaction of Hsp90 Ec and DnaK. The top-ranked docked model predicted that a region in the nucleotide-binding domain of DnaK interacted with a region in the middle domain of Hsp90 Ec . We then made substitution mutants in DnaK residues suggested by the model to interact with Hsp90 Ec . Eleven of the twelve mutants tested were defective or partially defective in their ability to interact with Hsp90 Ec in vivo in a bacterial two-hybrid assay and in vitro in a Bio-Layer Interferometry assay. These DnaK mutants were also defective in their ability to function collaboratively in protein remodeling with Hsp90 Ec , but retained the ability to act with DnaK cochaperones. Taken together these results suggest that a specific region in the nucleotide-binding domain of DnaK is involved in the interaction with Hsp90 Ec and this interaction is functionally important. Moreover, the region of DnaK that we found to be necessary for Hsp90 Ec binding includes residues that are also involved in J-protein binding, suggesting a functional interplay between DnaK, DnaK cochaperones and Hsp90 Ec . Introduction The 90-kDa heat shock protein (Hsp90) is a highly ubiquitous and evolutionarily conserved molecular chaperone [START_REF] Johnson | Evolution and function of diverse Hsp90 homologs and cochaperone proteins[END_REF][START_REF] Mayer | Gymnastics of molecular chaperones[END_REF][START_REF] Röhl | The chaperone Hsp90: Changing partners for demanding clients[END_REF][START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF][START_REF] Li | The Hsp90 chaperone machinery: Conformational dynamics and regulation by co-chaperones[END_REF]. It is essential in eukaryotes, where it is involved in the folding, stability and activation of more than 200 client proteins including many transcription factors, steroid hormone receptors and protein kinases [START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF][START_REF] Zuehlke | Hsp90 and co-chaperones twist the functions of diverse client proteins[END_REF][START_REF] Karagöz | Hsp90 interaction with clients[END_REF]. In addition, Hsp90 stabilizes and activates oncoproteins and therefore is a potential target for drug discovery for the treatment of cancer [START_REF] Trepel | Targeting the dynamic HSP90 complex in cancer[END_REF]. Escherichia coli Hsp90, referred to as Hsp90 Ec and encoded by htpG, is an abundant protein and is further induced upon heat shock and other stress conditions [START_REF] Bardwell | Ancient heat shock gene is dispensable[END_REF]. 
Strains lacking Hsp90 Ec exhibit modest phenotypes, including slow growth at elevated temperature [START_REF] Bardwell | Ancient heat shock gene is dispensable[END_REF], accumulation of aggregated proteins at high temperature [START_REF] Thomas | ClpB and HtpG facilitate de novo protein folding in stressed Escherichia coli cells[END_REF], loss of adaptive immunity conferred by the CRISPR system [START_REF] Yosef | High-temperature protein G is essential for activity of the Escherichia coli clustered regularly interspaced short palindromic repeats (CRISPR)/Cas system[END_REF], decreased ability to form biofilms at elevated temperature [START_REF] Grudniak | Interactions of Escherichia coli molecular chaperone HtpG with DnaA replication initiator DNA[END_REF] and decreased ability to swarm [START_REF] Press | Genome-scale Coevolutionary Inference Identifies Functions and Clients of Bacterial Hsp90[END_REF]. Overexpression of Hsp90 Ec causes defects in cell division that results in filamentous cells as well as SDS sensitivity [START_REF] Genest | Uncovering a Region of Heat Shock Protein 90 Important for Client Binding in E. coli and Chaperone Function in Yeast[END_REF]. Hsp90 is a homodimer with each monomer consisting of three domains: an N-terminal domain that possesses an ATP binding site [START_REF] Prodromou | The 'active life' of Hsp90 complexes[END_REF][START_REF] Prodromou | Tuning" the ATPase Activity of Hsp90[END_REF]; a middle domain containing residues that participate in binding client proteins [START_REF] Karagöz | Hsp90 interaction with clients[END_REF][START_REF] Genest | Uncovering a Region of Heat Shock Protein 90 Important for Client Binding in E. coli and Chaperone Function in Yeast[END_REF][START_REF] Prodromou | The 'active life' of Hsp90 complexes[END_REF][START_REF] Southworth | Species-Dependent Ensembles of Conserved Conformational States Define the Hsp90 Chaperone ATPase Cycle[END_REF]; and a C-terminal domain that is essential for dimerization and is also involved in client binding [START_REF] Röhl | The chaperone Hsp90: Changing partners for demanding clients[END_REF][START_REF] Genest | Uncovering a Region of Heat Shock Protein 90 Important for Client Binding in E. coli and Chaperone Function in Yeast[END_REF]. ATP binding and hydrolysis by Hsp90 triggers large conformational changes that are necessary for the cycle of client binding, remodeling and release [START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF][START_REF] Prodromou | The 'active life' of Hsp90 complexes[END_REF][START_REF] Prodromou | Tuning" the ATPase Activity of Hsp90[END_REF][START_REF] Graf | Spatially and kinetically resolved changes in the conformational dynamics of the Hsp90 chaperone machine[END_REF][START_REF] Krukenberg | Conformational dynamics of the molecular chaperone Hsp90[END_REF][START_REF] Ratzke | Dynamics of heat shock protein 90 Cterminal dimerization is an important part of its conformational cycle[END_REF][START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF]. The Hsp90 dimer exists in a predominantly open V-shaped conformation in the absence of nucleotide with the protomers interacting via the C-terminal domain [START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF]. 
ATP binding triggers closing of the ATP lid on the ATP-binding site, followed by dimerization of the two N-terminal domains [START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF][START_REF] Graf | Spatially and kinetically resolved changes in the conformational dynamics of the Hsp90 chaperone machine[END_REF][START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF][START_REF] Ali | Crystal structure of an Hsp90-nucleotide-p23/Sba1 closed chaperone complex[END_REF]. ATP hydrolysis and ADP release leads to the dissociation of the N-domains [START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF][START_REF] Graf | Spatially and kinetically resolved changes in the conformational dynamics of the Hsp90 chaperone machine[END_REF][START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF] and Hsp90 returns to the open conformation [START_REF] Graf | Spatially and kinetically resolved changes in the conformational dynamics of the Hsp90 chaperone machine[END_REF][START_REF] Krukenberg | Conformational dynamics of the molecular chaperone Hsp90[END_REF][START_REF] Ratzke | Dynamics of heat shock protein 90 Cterminal dimerization is an important part of its conformational cycle[END_REF][START_REF] Ali | Crystal structure of an Hsp90-nucleotide-p23/Sba1 closed chaperone complex[END_REF]. Cochaperones and client protein binding bias the Hsp90 chaperone cycle and stabilize or destabilize various conformations of Hsp90 [START_REF] Johnson | Evolution and function of diverse Hsp90 homologs and cochaperone proteins[END_REF][START_REF] Röhl | The chaperone Hsp90: Changing partners for demanding clients[END_REF][START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF]. Hsp90 functions with the Hsp70 chaperone system in protein activation and remodeling [START_REF] Johnson | Evolution and function of diverse Hsp90 homologs and cochaperone proteins[END_REF][START_REF] Li | The Hsp90 chaperone machinery: Conformational dynamics and regulation by co-chaperones[END_REF][START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF]. Eukaryotic Hsp70 and its prokaryotic homolog, DnaK, are highly conserved proteins [START_REF] Balchin | In vivo aspects of protein folding and quality control[END_REF][START_REF] Mayer | Hsp70 chaperone dynamics and molecular mechanism[END_REF][START_REF] Clerico | How Hsp70 molecular machines interact with their substrates to mediate diverse physiological functions[END_REF][START_REF] Thomas | Dynamical Structures of Hsp70 and Hsp70-Hsp40 Complexes[END_REF]. Hsp70/DnaK is comprised of an N-terminal nucleotide binding domain (NBD) and a C-terminal substrate-binding domain (SBD) that are connected by a flexible linker [START_REF] Mayer | Hsp70 chaperone dynamics and molecular mechanism[END_REF]. It collaborates with two cochaperones, an Hsp40 (J-domain protein) and a nucleotide exchange factor (NEF). 
The Hsp40 protein stimulates ATP hydrolysis by Hsp70/DnaK and presents substrate to Hsp70/DnaK while the NEF stimulates nucleotide exchange by Hsp70/DnaK [START_REF] Balchin | In vivo aspects of protein folding and quality control[END_REF][START_REF] Thomas | Dynamical Structures of Hsp70 and Hsp70-Hsp40 Complexes[END_REF][START_REF] Zuiderweg | Allostery in the Hsp70 Chaperone Proteins[END_REF]. In addition to collaborating with the Hsp70 chaperone system, eukaryotic Hsp90 also functions with numerous cochaperones, including Hop/Sti1, Aha1/Hch1, p23/Sba1, Cdc37 and Sgt1 [START_REF] Johnson | Evolution and function of diverse Hsp90 homologs and cochaperone proteins[END_REF][START_REF] Prodromou | The 'active life' of Hsp90 complexes[END_REF][START_REF] Prodromou | Tuning" the ATPase Activity of Hsp90[END_REF]. Cochaperones regulate Hsp90 in various ways, such as imparting client protein specificity, modulating ATPase activity or stabilizing specific Hsp90 conformations [START_REF] Johnson | Evolution and function of diverse Hsp90 homologs and cochaperone proteins[END_REF][START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF]. Moreover, Hop/Sti1 stabilizes the interaction between eukaryotic Hsp90 and Hsp70 by simultaneously interacting with tetratricopeptide repeat domains at the extreme Cterminus of each chaperone. In contrast to eukaryotes, bacterial Hsp90 functions with the DnaK chaperone system independently of other Hsp90 cochaperones. Protein reactivation by bacterial Hsp90 in vitro is simpler in its requirements than its eukaryotic homolog, requiring only DnaK and a J-domain protein [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF][START_REF] Nakamoto | Physical interaction between bacterial heat shock protein (Hsp) 90 and Hsp70 chaperones mediates their cooperative action to refold denatured proteins[END_REF]. GrpE, the bacterial NEF stimulates reactivation, but is not essential [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF]. ATP hydrolysis and client binding by both chaperones is essential for reactivation [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. Moreover, bacterial Hsp90 physically interacts with DnaK in vivo and in vitro [START_REF] Nakamoto | Physical interaction between bacterial heat shock protein (Hsp) 90 and Hsp70 chaperones mediates their cooperative action to refold denatured proteins[END_REF][START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] through a region identified on the M-domain of Hsp90 Ec [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. In the work presented here, we examined the collaboration between Hsp90 Ec and DnaK. We used molecular modeling to identify a region on DnaK that interacts with the middle domain of Hsp90 Ec . We then constructed DnaK mutants in some of the residues suggested by the docked model and tested the mutants for defects in interaction and functional collaboration with Hsp90 Ec . We found that the region of DnaK involved in the physical and functional interaction with Hsp90 Ec comprises residues in the DnaK NBD. 
This region of DnaK overlaps the region where DnaJ binds, suggesting a mechanism where Hsp90 Ec directly interacts with a client bound DnaK-DnaJ complex to displace DnaJ and promote the transfer of substrate to Hsp90 Ec . Results Identification of residues in DnaK potentially involved in protein interactions with Hsp90 Ec We previously showed that E. coli DnaK collaborates with Hsp90 Ec in protein reactivation and that the two chaperones physically interact through a region in the middle domain of Hsp90 Ec [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF][START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. To elucidate the region of DnaK essential for binding Hsp90 Ec , we used molecular docking to predict potential interactions between the two chaperones. Four combinations of Hsp90 Ec and DnaK molecules available in the protein data bank were tested [START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF][START_REF] Bertelsen | Solution conformation of wild-type E. \coli Hsp70 (DnaK) chaperone complexed with ADP and substrate[END_REF][START_REF] Kityk | Structure and Dynamics of the ATP-Bound Open Conformation of Hsp70 Chaperones[END_REF]: 1) ADP-bound DnaK with apo Hsp90 Ec , 2) ADP-bound DnaK with ADPbound Hsp90 Ec , 3) ATP-bound DnaK with apo Hsp90 Ec and 4) ATP-bound DnaK with ADP-bound Hsp90 Ec (see Materials and Methods). Without imposing restraints, the proteins were docked using ZDOCK [START_REF] Chen | ZDOCK: An initial-stage protein-docking algorithm[END_REF][START_REF] Pierce | ZDOCK server: Interactive docking prediction of protein-protein complexes and symmetric multimers[END_REF] and the top model for each combination was selected based on the lowest energy using ZRANK [START_REF] Pierce | ZRANK: reranking protien docking predictions with an optimized energy function[END_REF]. Three of the four models (2, 3 and 4, above) could be eliminated since they predicted interacting regions that were inconsistent with earlier work [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] or with results shown in Supplemental Information (Supplemental Results and Supplemental Fig. S1-S5 and Supplemental Tables S2-S4). The model of the complex of ADP-bound DnaK and apo Hsp90 Ec predicted that DnaK interacted exclusively with residues on the Hsp90 Ec middle domain, many of which had been shown experimentally to be involved in binding DnaK [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] (Fig. 1a and Table 1). To gain additional evidence for this model, we constructed two Hsp90 Ec mutants with substitutions in residues predicted by the model to be important in interaction with DnaK (Supplemental Fig. S6a). Hsp90 Ec(Q302A, R303A) was defective in functional and physical interaction with DnaK in vitro; Hsp90 Ec(D329A, L330A, P331A) was slightly defective (Supplemental Fig. S6b andS6c). 
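A minimal sketch of the model-selection logic described above: keep the lowest-energy (best ZRANK score) pose for each docking combination, then check whether its predicted Hsp90 Ec interface overlaps residues previously implicated in DnaK binding. The score values, pose identifiers and residue sets below are illustrative placeholders, not actual ZDOCK/ZRANK output.

```python
# Hypothetical ZRANK scores (lower = better) for the top ZDOCK poses of each combination.
zrank_scores = {
    "ADP-DnaK / apo-Hsp90Ec": {"pose_003": -61.2, "pose_017": -58.9},
    "ADP-DnaK / ADP-Hsp90Ec": {"pose_008": -55.4, "pose_021": -54.1},
    "ATP-DnaK / apo-Hsp90Ec": {"pose_002": -57.8, "pose_030": -53.6},
    "ATP-DnaK / ADP-Hsp90Ec": {"pose_011": -52.3, "pose_014": -50.9},
}

# Hypothetical predicted Hsp90Ec interface residues for each best pose.
predicted_interface = {
    "pose_003": {"Q302", "R303", "D329", "L330"},
    "pose_008": {"K51", "E76"},
    "pose_002": {"N589", "D601"},
    "pose_011": {"E34", "R46"},
}

# Hsp90Ec middle-domain residues experimentally implicated in DnaK binding (placeholder set).
known_dnak_binding = {"Q302", "R303", "D329", "L330", "P331"}

for combo, scores in zrank_scores.items():
    best_pose = min(scores, key=scores.get)  # lowest ZRANK energy for this combination
    consistent = bool(predicted_interface.get(best_pose, set()) & known_dnak_binding)
    print(f"{combo}: {best_pose} ({scores[best_pose]}), consistent with prior data: {consistent}")
```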
To further test our model, we used the PRISM webserver [START_REF] Baspinar | PRISM: A web server and repository for prediction of protein-protein interactions and modeling their 3D complexes[END_REF][START_REF] Tuncbag | Predicting protein-protein interactions on a proteome scale by matching evolutionary and structural similarities at interfaces using PRISM[END_REF] to predict the interface between DnaK in the ADP-bound conformation and apo Hsp90 Ec by structural matching. The model we obtained was very similar to our top docking result obtained with ZDOCK and ZRANK (Supplemental Fig. S7a and S7b and Supplemental Table S1). The model of ADP-bound DnaK and apo Hsp90 Ec predicted that residues in the DnaK nucleotide-binding domain (NBD) interacted with Hsp90 Ec (Fig. 1a and1b, Table 1 and Supplemental Table S1). Interestingly, of the 35 DnaK residues predicted from this docked model to interact with Hsp90 Ec , 20 were previously identified as important for interacting with DnaJ [START_REF] Ahmad | Heat shock protein 70 kDa chaperone/DnaJ cochaperone complex employs an unusual dynamic interface[END_REF][START_REF] Suh | Interaction of the Hsp70 molecular chaperone , DnaK , with its cochaperone Dna[END_REF][START_REF] Gässler | Mutations in the DnaK chaperone affecting interaction with the DnaJ cochaperone[END_REF], suggesting that Hsp90 Ec and DnaJ share a common binding interface on DnaK (Fig. 1c, Table 1). Additionally, 26 of the 35 predicted residues are also involved in the association between the DnaK NBD and substrate-binding domain (SBD) that occurs when DnaK is in the ATP conformation [START_REF] Kityk | Structure and Dynamics of the ATP-Bound Open Conformation of Hsp70 Chaperones[END_REF][START_REF] Qi | Allosteric opening of the polypeptide-binding site when an Hsp70 binds ATP[END_REF] (Fig. 1d). The NBD and SBD of DnaK are not in contact with one another when DnaK is in the ADP-bound conformation [START_REF] Bertelsen | Solution conformation of wild-type E. \coli Hsp70 (DnaK) chaperone complexed with ADP and substrate[END_REF], indicating that Hsp90 Ec likely interacts with the ADP-bound conformation of DnaK. Importantly, the docked model of ADP-bound DnaK and apo Hsp90 Ec , would allow a client protein bound to the SBD of DnaK to readily interact with the client-binding site of Hsp90 Ec , making this model mechanistically plausible (Fig. 1a). Mutations in the potential Hsp90 Ec binding region of DnaK cause defective interaction with Hsp90 Ec in vivo We wanted to explore the validity of the docked model of the apo Hsp90 Ec structure with DnaK in the ADP-bound form. To do this we constructed site-directed amino acid substitutions in surface exposed residues of DnaK in and near the residues suggested by the model to interact with Hsp90 Ec and screened the mutants in a bacterial two-hybrid assay (Fig. 2a). 
Some substitutions were previously described mutants, including DnaK Y145A,N147A,D148A [START_REF] Gässler | Mutations in the DnaK chaperone affecting interaction with the DnaJ cochaperone[END_REF], DnaK R167H [START_REF] Suh | Interaction of the Hsp70 molecular chaperone , DnaK , with its cochaperone Dna[END_REF], DnaK N170A,T173A [START_REF] Suh | Interaction of the Hsp70 molecular chaperone , DnaK , with its cochaperone Dna[END_REF] and DnaK E217A,V218A [START_REF] Gässler | Mutations in the DnaK chaperone affecting interaction with the DnaJ cochaperone[END_REF], while other substitutions were in residues that had previously been substituted with alternate amino acids, including DnaK D208R , DnaK E209C and DnaK V210R [START_REF] Ahmad | Heat shock protein 70 kDa chaperone/DnaJ cochaperone complex employs an unusual dynamic interface[END_REF]. The remaining mutants were alanine substitutions or charge changes, including DnaK R84E , DnaK D211R , DnaK E213Q,K214A , DnaK D224K,T225A and DnaK S234A,I237A,N238A . We had previously observed that DnaK wild-type and Hsp90 Ec interact in the bacterial two-hybrid assay [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF][START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. For this assay, one domain of the Bordetella pertussis adenylate cyclase protein, T18, was fused to Hsp90 Ec and the other domain, T25, was fused to DnaK wild-type or to a DnaK mutant. If the two fusion proteins interact when they are coexpressed in an E. coli cya-strain, cyclic AMP is synthesized and the cAMP reporter gene, β-galactosidase, is expressed [START_REF] Battesti | The bacterial two-hybrid system based on adenylate cyclase reconstitution in Escherichia coli[END_REF]. As we saw previously, the coexpression of T25-DnaK wild-type and T18-Hsp90 Ec wild-type resulted in colonies that appeared red on indicator plates and expressed ~12-fold higher levels of β-galactosidase compared to coexpression of T25-DnaK wild-type and T18-vector [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] (Fig. 2b and 2c). When plasmids expressing T25-DnaK mutant proteins were coexpressed with T18-Hsp90 Ec in an E. coli cya-strain, we observed that 11 of the 12 colonies appeared white or pale pink on indicator plates and had lower levels of β-galactosidase activity than cells coexpressing T25-DnaK wild-type and T18-Hsp90 Ec , suggesting that the DnaK mutant fusion proteins were defective in interaction with Hsp90 Ec (Fig. 2b and 2c). These mutants included DnaK Y145A,N147A,D148A , DnaK R167H , DnaK N170A,T173A , DnaK D208R , DnaK V210R , DnaK E217A,V218A , DnaK S234A,I237A,N238A , DnaK E209C , DnaK D224K,T225A , DnaK D211R and DnaK R84E . One mutant, DnaK E213Q,K214A , produced red colonies on indicator plates and had β-galactosidase activity similar to T25-DnaK wild-type (Fig. 2b and 2c) and was therefore not studied further. Control experiments showed that all of the mutant fusion proteins were expressed at levels similar to the T25-DnaK wild-type protein (Supplemental Fig. S8). Thus, all but one of the DnaK substitution mutants constructed in residues that were predicted by the model to interact with Hsp90 Ec or were in nearby surface-exposed residues were defective in interaction in vivo. These results suggest the DnaK mutants may be defective in direct interaction with Hsp90 Ec . 
However, in vivo the interaction may be affected by other cellular components [START_REF] Battesti | The bacterial two-hybrid system based on adenylate cyclase reconstitution in Escherichia coli[END_REF]. DnaK mutant proteins defective in Hsp90 Ec interaction in vivo are defective in direct complex formation with Hsp90 Ec in vitro We next tested if the DnaK mutants that were defective in the two-hybrid screen were defective in direct protein-protein interaction in vitro. We first cloned, purified and characterized the mutant proteins (Supplemental Results and Supplemental Fig. S9 andS10). DnaK and Hsp90 Ec proteins have been shown to form a binary complex in vitro in both the presence and absence of hydrolyzable ATP, although the interaction is very weak [START_REF] Nakamoto | Physical interaction between bacterial heat shock protein (Hsp) 90 and Hsp70 chaperones mediates their cooperative action to refold denatured proteins[END_REF][START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. We tested our DnaK mutant proteins for direct interaction with Hsp90 Ec by Bio-Layer Interferometry (BLI). Hsp90 Ec(E584C) was specifically labeled with biotin and immobilized on a streptavidin-coated biosensor (see Material and Methods). Binding was monitored using buffer conditions that allowed an optimal signal to noise ratio (see Material and Methods). We first monitored the binding of DnaK wild-type and Hsp90 Ec (Fig. 3a and3b). Since the binding curves at high DnaK concentrations were complex, we used a steady-state analysis and calculated the K d to be ~13 µM under the conditions used in this assay (Fig. 3a and3b). The observed K d is consistent with the weak interaction seen previously between Hsp90 Ec and DnaK wild-type [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF][START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] as well as between Synechococcus elongates Hsp90 and DnaK [START_REF] Nakamoto | Physical interaction between bacterial heat shock protein (Hsp) 90 and Hsp70 chaperones mediates their cooperative action to refold denatured proteins[END_REF] and mitochondrial Hsp90 (TRAP) and Hsp70 (Mortalin) [START_REF] Sung | 2.4 Å resolution crystal structure of human TRAP1NM, the Hsp90 paralog in the mitochondrial matrix[END_REF]. We then used BLI to monitor the direct interaction between the DnaK mutant proteins and Hsp90 Ec , at a concentration of DnaK (50 µM) that was saturating for the interaction of DnaK wild-type with Hsp90 Ec (Fig. 3c and3d). The results indicated that all of the DnaK mutants were defective to varying degrees in interaction with Hsp90 Ec . The most defective were DnaK E217A,V218A , DnaK V210R and DnaK R167H (Fig. 3c and3d). Others, including DnaK Y145A,N147A,D148A , DnaK R84E , DnaK D208R , DnaK D224K,T225A , DnaK N170A,T173A , DnaK D211R and DnaK E209C were partially defective in Hsp90 Ec binding (Fig. 3c and3d). One mutant, DnaK Y145A,N147A,D148A , was further evaluated and found to have ~3-fold lower affinity for Hsp90 Ec than wild-type DnaK; the K d was ~44 µM (Fig. 3e). DnaK S234A,I237A,N238A was the least defective mutant protein tested (Fig. 3c and3d). These results show that the DnaK residues identified by molecular docking are important for the direct interaction with Hsp90 Ec . 
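The steady-state analysis used to extract K d and Bmax from these BLI data — the plateau response at each DnaK concentration fit to a one-site specific binding model (see Materials and Methods) — can be sketched as follows. The response values in the arrays are illustrative placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def assoc(t, plateau, k_obs):
    # Single-exponential fit of an association trace; the plateau is the steady-state response.
    return plateau * (1.0 - np.exp(-k_obs * t))

def one_site(conc, b_max, k_d):
    # One-site specific binding: response = Bmax * [DnaK] / (Kd + [DnaK])
    return b_max * conc / (k_d + conc)

# Illustrative plateau responses (nm) at each DnaK concentration (µM) -- not the measured data.
dnak_um = np.array([5.0, 10.0, 25.0, 50.0, 75.0, 100.0])
plateaus = np.array([0.033, 0.052, 0.079, 0.095, 0.103, 0.106])

(b_max, k_d), _ = curve_fit(one_site, dnak_um, plateaus, p0=[0.12, 15.0])
print(f"Bmax ≈ {b_max:.2f} nm, Kd ≈ {k_d:.0f} µM")
```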
Based on our previous observation that a model client protein, ribosomal protein L2, stabilizes the DnaK-Hsp90 Ec interaction [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF], we next tested if L2 similarly stabilizes the physical interaction between the DnaK mutant proteins and Hsp90 Ec . We used an in vitro protein-protein interaction assay (pull down assay) in which DnaK wild-type or mutant protein was incubated with biotin labeled Hsp90 Ec in the presence of L2. Biotinylated Hsp90 Ec and associated proteins were then captured on neutravidin agarose beads and analyzed by SDS-PAGE. As shown previously [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF], in the absence of L2 the interaction between Hsp90 Ec and DnaK wild-type was not detectable by Coomassie staining of the gels (Fig. 4a, compare lanes 1 and 2), although the weak interaction could be detected by Western blot analysis [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. Similarly, interaction between Hsp90 Ec and the DnaK mutant proteins in the absence of L2 was not observed by Coomassie staining (Fig. 4a, lanes 4-14). In the presence of L2, the three DnaK mutants that were most defective in binary interaction with Hsp90 Ec (Fig. 3c and3d), including DnaK E217A,V218A , DnaK R167H and DnaK V210R , were most defective in interaction with Hsp90 Ec in the pull-down assay (Fig. 4b and4c). Other mutants were partially defective, including DnaK R84E , DnaK N170A,T173A , DnaK D211R , DnaK Y145A,N147A,D148A , DnaK D208R and DnaK E209C (Fig. 4b and4c). DnaK D224K,T225A , which was partially defective in binary interaction with Hsp90 Ec , bound Hsp90 Ec in the presence of L2 to a similar extent as DnaK wild-type (Fig. 4b and4c). This observation suggests that L2 rescues the defective Hsp90 Ec -DnaK D224K,T225A interaction. The mutant that was least defective in the BLI assay, DnaK S234A,I237A,N238A , bound Hsp90 Ec in the presence of L2 similarly to DnaK wild-type (Fig. 4b and4c). In control experiments, all of the DnaK mutants were able to bind L2 like DnaK wild-type, showing that the differences in complex formation in the presence of L2 were not due to defects in client binding by the DnaK mutant proteins (Supplemental Fig. S10c). Together these results indicate that many of the DnaK mutant proteins are defective in direct physical interactions with Hsp90 Ec and show decreased ability to interact with Hsp90 Ec in the presence of L2. DnaK mutants are defective in functioning synergistically with Hsp90 Ec We next wanted to determine whether DnaK mutants that are defective in their ability to physically interact with Hsp90 Ec are also defective in collaborating synergistically with Hsp90 Ec in functional assays. It has previously been shown that ATP hydrolysis is stimulated ~2-fold above additive by the combined action of Hsp90 Ec and DnaK in the presence of ribosomal protein L2, demonstrating a functional collaboration between the two chaperones [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] (Fig. 5). This synergy requires client binding by DnaK and Hsp90 Ec as well as ATP hydrolysis by both chaperones [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. 
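The "fold above additive" synergy metric used here (defined in the legend to Fig. 5) is simply the jointly measured rate divided by the sum of the individually measured rates. A one-line sketch with illustrative rate values:

```python
def fold_above_additive(rate_dnak_hsp90_l2, rate_hsp90_l2, rate_dnak_l2):
    """Rate of ATP hydrolysis by Hsp90Ec + DnaK with L2, divided by the sum of the rates
    for Hsp90Ec with L2 and for DnaK with L2 measured separately (Fig. 5 legend)."""
    return rate_dnak_hsp90_l2 / (rate_hsp90_l2 + rate_dnak_l2)

# Illustrative rates only: a value near 1 is purely additive; ~1.9 corresponds to the
# synergy reported for wild-type DnaK.
print(fold_above_additive(3.8, 1.0, 1.0))  # -> 1.9
```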
Although all of our DnaK mutant proteins bound L2 similarly to wild-type (Supplemental Fig. S10c) and hydrolyzed ATP (Supplemental Fig. S10d), we observed that several of the DnaK mutants showed no synergy in ATP hydrolysis in combination with Hsp90 Ec and L2, including DnaK E217A,V218A DnaK V210R and DnaK D208R (Fig. 5). Others stimulated ATP hydrolysis 1.1 to 1.3-fold above additive compared to the 1.9-fold stimulation seen with wild-type DnaK, including DnaK R84E , DnaK N170A,T173A , DnaK R167H , DnaK Y145A,N147A,D148A , DnaK D211R and DnaK E209C (Fig. 5). DnaK S234A,I237A,N238A was slightly defective in stimulating ATP hydrolysis, consistent with its slight defect in the BLI assay. In combination with Hsp90 Ec and L2, DnaK D224K,T225A stimulated ATP hydrolysis 2.7-fold above additive (Fig. 5). The stabilization of the Hsp90 Ec -DnaK D224K,T225A interaction by L2 (Fig. 4b and4c) may explain the increased synergy in ATP hydrolysis. Taken together, the results indicate that the DnaK mutants that are defective or partially defective in physical interaction with Hsp90 Ec are defective or partially defective in functional interaction as well, with the exception of DnaK D224K,T225A . We next tested if the DnaK mutants were also defective in functional collaboration with Hsp90 Ec in client protein remodeling. We assessed remodeling ability by monitoring reactivation of heat-inactivated luciferase by the combination of the DnaK chaperone system and Hsp90 Ec . In this assay, the chaperone concentrations were optimized in order to observe the largest dependence on Hsp90 Ec . Under these conditions, DnaK and two cochaperones, CbpA (a DnaJ homolog in E. coli) and GrpE (a nucleotide exchange factor), catalyze luciferase reactivation at a slow rate and Hsp90 Ec stimulates the rate ~6-fold [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] (Fig. 6a). The percentage of luciferase reactivated is low; however, only ~20% of the total heatinactivated luciferase is soluble and available for reactivation [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF]. When we tested our DnaK mutant proteins, all were defective or partially defective in functioning synergistically with Hsp90 Ec (Fig. 6). DnaK R167H , DnaK E217A,V218A and DnaK V210R , reactivated luciferase at rates similar to DnaK wild-type in the absence of Hsp90 Ec , but were not stimulated or only slightly stimulated by Hsp90 Ec (Fig. 6b-6d). These three DnaK mutants were also the most defective in binary complex formation with Hsp90 Ec (Fig. 3) and in complex formation with Hsp90 Ec and L2 (Fig. 4). The remainder of the mutants with the exception of DnaK D208R were also defective or partially defective in synergistic reactivation of luciferase with Hsp90 Ec . However, in the absence of Hsp90 Ec several of these mutants were similar to DnaK wild-type in the ability to reactivate luciferase with CbpA and GrpE, including DnaK D211R , DnaK E209C and DnaK D224K,T225A (Fig. 6e-6g), while others, including DnaK N170A,T173A , DnaK S234A,I237A,N238A , DnaK Y145A,N147A,D148A and DnaK R84E , were more active than DnaK wild-type (Fig. 6h-6k). The reason for the increased activity of these mutants with CbpA and GrpE is not understood. One mutant, DnaK D208R , was defective in the absence and presence of Hsp90 Ec (Fig. 
6l), consistent with previous results showing that this mutant is defective in J-domain stimulated ATPase activity [START_REF] Ahmad | Heat shock protein 70 kDa chaperone/DnaJ cochaperone complex employs an unusual dynamic interface[END_REF]. Altogether, the results indicate that many of the DnaK mutants are defective or partially defective in their ability to function collaboratively with Hsp90 Ec in protein reactivation. They suggest that the functional defects of the DnaK mutants are a consequence of defects in the physical interaction between the mutants and Hsp90 Ec . In summary, we have identified a region of DnaK that is required for the interaction of DnaK with Hsp90 Ec . The DnaK mutants define a site located in the nucleotide-binding domain of DnaK that is essential for Hsp90 Ec interaction. This region overlaps the DnaK site known to bind DnaJ, suggesting an interplay between DnaK, DnaJ and Hsp90 Ec during protein remodeling. Discussion In this work, we identified a region in the nucleotide-binding domain of DnaK that is important for the physical and functional interaction with a region in the middle domain of Hsp90 Ec . Direct interactions between eukaryotic Hsp90 and Hsp70 have been observed previously. A recent study showed a direct interaction between the mitochondrial Hsp90 and Hsp70 homologs, TRAP1 and Mortalin [START_REF] Sung | 2.4 Å resolution crystal structure of human TRAP1NM, the Hsp90 paralog in the mitochondrial matrix[END_REF]. Like bacteria, mitochondria lack Hop/Sti1, a cochaperone known to interact simultaneously with both Hsp90 and Hsp70 and facilitate the Hsp90-Hsp70 interaction [START_REF] Johnson | Evolution and function of diverse Hsp90 homologs and cochaperone proteins[END_REF][START_REF] Li | Structure, function and regulation of the hsp90 machinery[END_REF]. Therefore, the mechanism of action of TRAP1 may be more similar to that of bacterial Hsp90 than eukaryotic cytoplasmic Hsp90. However, in studies using crude lysates from rabbit reticulocytes or using purified cytoplasmic Hsp90 and Hsp70 from Neurospora crassa [START_REF] Murphy | Stoichiometry, Abundance, and Functional Significance of the hsp90/hsp70-based Multiprotein Chaperone Machinery in Reticulocyte Lysate[END_REF][START_REF] Freitag | Heat shock protein 80 of Neurospora crassa, a cytosolic molecular chaperone of the eukaryotic stress 90 family, interacts directly with heat shock protein 70[END_REF][START_REF] Britton | The oligomeric state, complex formation, and chaperoning activity of hsp70 and hsp80 of Neurospora crassa[END_REF], direct interactions were also observed. In both systems, Hop/Sti1 increased the amount of Hsp90 and Hsp70 in the complex [START_REF] Murphy | Stoichiometry, Abundance, and Functional Significance of the hsp90/hsp70-based Multiprotein Chaperone Machinery in Reticulocyte Lysate[END_REF][START_REF] Freitag | Heat shock protein 80 of Neurospora crassa, a cytosolic molecular chaperone of the eukaryotic stress 90 family, interacts directly with heat shock protein 70[END_REF][START_REF] Britton | The oligomeric state, complex formation, and chaperoning activity of hsp70 and hsp80 of Neurospora crassa[END_REF]. 
Thus, the functional significance of the observed direct interaction between cytoplasmic chaperones is not clear, and current models suggest that eukaryotic Hsp90 and Hsp70 need not contact one another directly, but rather Hop/Sti1 forms a bridge between the two chaperones [START_REF] Röhl | The chaperone Hsp90: Changing partners for demanding clients[END_REF][START_REF] Taipale | HSP90 at the hub of protein homeostasis: emerging mechanistic insights[END_REF][START_REF] Prodromou | The 'active life' of Hsp90 complexes[END_REF][START_REF] Prodromou | Tuning" the ATPase Activity of Hsp90[END_REF][START_REF] Li | Structure, function and regulation of the hsp90 machinery[END_REF]. Our finding that many residues in the DnaK NBD that are involved in Hsp90 Ec interaction are also involved in binding DnaJ/CbpA [START_REF] Ahmad | Heat shock protein 70 kDa chaperone/DnaJ cochaperone complex employs an unusual dynamic interface[END_REF][START_REF] Suh | Interaction of the Hsp70 molecular chaperone , DnaK , with its cochaperone Dna[END_REF][START_REF] Gässler | Mutations in the DnaK chaperone affecting interaction with the DnaJ cochaperone[END_REF][START_REF] Mayer | Investigation of the interaction between DnaK and DnaJ by surface plasmon resonance spectroscopy[END_REF], suggests the possibility of a functional interplay of DnaK with Hsp90 Ec and DnaJ/CbpA. One hypothesis that invokes a DnaK binding site shared by DnaJ/CbpA and Hsp90 Ec is that Hsp90 Ec participates in displacing DnaJ/CbpA from DnaK once DnaJ/CbpA has promoted ATP hydrolysis by DnaK. This is consistent with our previous studies that showed substoichiometric concentrations of CbpA (~1:100, CbpA:Hsp90 Ec ) facilitated formation of the DnaK-Hsp90 Ec -L2 ternary complex while CbpA was not detected in the complex [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. Previous studies have also suggested a similar role for Hsp40, where Hsp40 stimulated formation of Hsp90-Hop-Hsp70 complexes but was not seen in the final ternary complex [START_REF] Alvira | Structural characterization of the substrate transfer mechanism in Hsp70/Hsp90 folding machinery mediated by Hop[END_REF][START_REF] Hernandez | The assembly and intermolecular properties of the hsp70-Hop-hsp90 molecular chaperone complex[END_REF][START_REF] Kirschke | Glucocorticoid receptor function regulated by coordinated action of the Hsp90 and Hsp70 chaperone cycles[END_REF]. Together, these results suggest that the overlapping binding site on DnaK should lead to competition between DnaJ/CbpA and Hsp90 Ec . However, in the assays used above, we have not seen competition, suggesting the reaction pathway may be more complex. In E. coli, the concentrations of molecular chaperones and cochaperones varies under different conditions (i.e. cellular stress or starvation) and growth phase. At 30 °C in log phase growth, DnaK is highly abundant with a concentration ~5-fold higher than Hsp90 Ec or GrpE, ~25-fold higher than DnaJ and >25-fold higher than CbpA [START_REF] Mogk | Identification of thermolabile Escherichia coli proteins: prevention and reversion of aggregation by DnaK and ClpB[END_REF][START_REF] Tomoyasu | Levels of DnaK and DnaJ provide tight control of heat shock gene expression and protein repair in Escherichia coli[END_REF]. 
However, CbpA expression increases as cells enter stationary phase and DnaK may only be ~2-fold higher in concentration than CbpA in late stationary phase [START_REF] Azam | Growth phase-dependent variation in protein composition of the Escherichia coli nucleoid[END_REF]. This suggests that the interplay between DnaK and DnaJ/CbpA or Hsp90 Ec likely varies depending on the growth phase or stress condition. Our current working model for the mechanism of protein remodeling by Hsp90 Ec and DnaK is speculative, but consistent with the current knowledge of the bacterial chaperones (Fig. 7). First, the client protein is bound by DnaK, in a reaction requiring ATP hydrolysis and facilitated by DnaJ/CbpA and GrpE (Fig. 7, step 1). This is in keeping with our previous work showing the DnaK chaperone system functions prior to Hsp90 Ec in protein remodeling [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF]. Then, client-bound DnaK recruits Hsp90 Ec , likely through a direct interaction between the DnaK NBD and the middle domain of Hsp90 Ec . Based on our molecular docking results, a client protein bound to the SBD of DnaK would be poised for interaction with the client binding site of Hsp90 Ec (Fig. 7, step 2). The binding of Hsp90 Ec to DnaK may displace DnaJ/CbpA, since our results suggest DnaJ/CbpA and Hsp90 Ec bind overlapping regions on DnaK. Next, ATP binding and hydrolysis by Hsp90 Ec leads to conformational changes in Hsp90 Ec that promote client transfer from DnaK, stabilize client binding by Hsp90 Ec and may release DnaK (Fig. 7, step 3). Then, nucleotide release from Hsp90 Ec likely causes the release of an active client (Fig. 7, step 4). However, in cases where the client does not attain its active conformation it may enter another cycle of remodeling. Taken together, these studies are providing insight into the interactions and collaboration between Hsp90 and Hsp70. Materials and Methods Plasmids and Strains Substitution mutations of Hsp90 Ec and DnaK were made with the QuikChange Lightning mutagenesis kit (Agilent) using pET-HtpG [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF], pRE-DnaK [START_REF] Skowyra | The interplay of the GrpE heat shock protein and Mg2+ in RepA monomerization by DnaJ and DnaK[END_REF], pET-DnaK [START_REF] Miot | Species-specific collaboration of heat shock proteins (Hsp) 70 and 100 in thermotolerance and protein disaggregation[END_REF] or pT25-DnaK [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF]. All mutations were verified by DNA sequencing. Proteins Hsp90 Ec wild-type and mutants [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF], DnaK wild-type and mutants [START_REF] Skowyra | The interplay of the GrpE heat shock protein and Mg2+ in RepA monomerization by DnaJ and DnaK[END_REF], CbpA [START_REF] Ueguchi | An analogue of the DnaJ molecular chaperone in Escherichia coli[END_REF], GrpE [START_REF] Skowyra | The interplay of the GrpE heat shock protein and Mg2+ in RepA monomerization by DnaJ and DnaK[END_REF] and His-tagged L2 [START_REF] Motojima-Miyazaki | Ribosomal protein L2 associates with E. coli HtpG and activates its ATPase activity[END_REF] were isolated as described. 
All proteins were >95% pure as determined by SDS-PAGE. All DnaK substitution mutants exhibited similar physical properties as the wild-type, including partial proteolysis patterns with and without ATP (Supplemental Fig. S9a) and CD spectra (Supplemental Fig. S9b). All exhibited some DnaK functional activities, including: reactivation of GFP with DnaJ and GrpE (Supplemental Fig. S10a) and further stimulation of GFP reactivation by ClpB (Supplemental Fig. S10b), client binding (L2) (Supplemental Fig. S10c) and basal ATPase activity (Supplemental Fig. S10d). The Hsp90 Ec E584C-biotin mutant was similar to Hsp90 Ec wild-type in luciferase reactivation activity (Supplemental Fig. S11a). Luciferase and luciferin were from Promega. Concentrations given are for Hsp90 Ec , CbpA and GrpE dimers and DnaK, L2 and luciferase monomers. Hsp90 Ec E584C was labeled using a 20-fold excess of Maleimide-PEG 11 -Biotin (Thermo, Life Technologies) as recommended by the manufacturer. CbpA was labeled using a 1.5-fold excess of NHS-PEG4-Biotin (Thermo, Life Technologies). Excess biotin reagent was removed by extensive dialysis. Luciferase reactivation Luciferase reactivation was performed as previously described [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF][START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. 40 nM heatdenatured luciferase was incubated at 24 °C in reaction mixtures (75 µl) containing 25 mM Hepes, pH 7.5, 50 mM KCl, 0.1 mM EDTA, 2 mM DTT, 10 mM MgCl 2 , 50 µg/ml bovine serum albumin (BSA), 3 mM ATP, an ATP regenerating system (25 mM creatine phosphate, 6 µg creatine kinase), 0.95 µM DnaK wild-type or mutant, 0.15 µM CbpA, 0.05 µM GrpE and 0.5 µM Hsp90 Ec . Aliquots were removed at the indicated times and light output was measured using a Tecan Infinite M200Pro in luminescence mode with an integration time of 1000 ms. Reactivation was determined compared to a non-denatured luciferase control. To test for potential defects in binding between the DnaK mutants and CbpA that could lead to defects in luciferase reactivation, we performed pull-down assays with our DnaK mutants using biotinylated-CbpA (Supplemental Fig. S11b). ATPase activity Steady state ATP hydrolysis was measured at 37 °C in 25 mM Hepes, pH 7.5, 50 mM KCl, 5 mM DTT, 5 mM MgCl 2 and 2 mM ATP using a pyruvate kinase/lactate dehydrogenase enzyme-coupled assay as described [START_REF] Graf | Spatially and kinetically resolved changes in the conformational dynamics of the Hsp90 chaperone machine[END_REF] and using 1 µM Hsp90 Ec , 1 µM L2 and 1 µM DnaK wild-type or mutant. Bio-Layer Interferometry (BLI) Assay BLI was used to monitor the interaction between Hsp90 Ec and DnaK using a FortéBio (Menlo Park, CA) Octet RED96 instrument and streptavidin (SA) biosensors at 30 °C. Each assay step was 200 µL containing BLI buffer (20 mM Tris, pH 7.5, 25 mM NaCl, 0.01% Triton X-100 (vol/vol), 0.02% Tween-20 (vol/vol), 1 mg/ml BSA, 1 mM ATP, and 10 mM MgCl 2 ). Hsp90 Ec E584C-biotin was loaded on the biosensors to a BLI response signal of 1 nm. The biosensors were blocked in BLI buffer containing 10 µg/ml biocytin (Sigma) for 1 min, and a baseline was established in BLI buffer alone. Association of DnaK wild-type or mutant (5 to 100 µM) to the Hsp90-biotin bound sensor tip in BLI buffer was monitored over time. Lastly, dissociation was monitored in BLI buffer alone. 
For each experiment, nonspecific binding was monitored using a reference biosensor subjected to each of the above steps in the absence of biotinylated Hsp90 Ec and non-specific binding signal was subtracted from the corresponding experiment. For steady state analysis of kinetic association data, the association curve at each DnaK concentration was fit using a single exponential equation without constraints in Prism (GraphPad Software, La Jolla California USA, www.graphpad.com) and the plateau value determined from the fit was plotted versus the concentration of DnaK. The resulting binding curve was analyzed using a one-site specific binding model in Prism to determine the K d and B max values. Protein-protein interaction assay Interaction of Hsp90 Ec with DnaK in the presence of L2 was measured using a pull down assay. Hsp90 Ec E584C-biotin (1.5 µM) was incubated for 5 minutes at 23 °C in reaction mixtures (50 µL) containing GPD buffer (20 mM Tris-HCl, pH 7.5, 75 mM KCl, 10% glycerol (vol/vol), 0.01% Triton X-100 (vol/vol), 2 mM DTT) with 2 µM DnaK wild-type or mutant, and 2.3 µM L2. Neutravidin agarose (40 µL 1:1 slurry) (Thermo, Pierce) was then added and incubated 5 min at 23 °C with mixing. The reactions were diluted with 0.4 mL GPD buffer, centrifuged 1 min at 1000 × g and the recovered agarose beads were washed twice with 0.4 mL GPD buffer. Bound proteins were eluted with buffer containing 2 M NaCl and analyzed by Coomassie blue staining following SDS-PAGE. Where indicated, protein band intensities from replicate gels were quantified using ImageJ (http://imagej.nih.gov/ij). The results were normalized to Hsp90 Ec E584C-biotin and the ratio of DnaK mutant relative to DnaK wild-type was calculated and plotted. Bacterial two-hybrid assay Bacterial two-hybrid assays were performed as previously described [START_REF] Genest | Heat shock protein 90 from Escherichia coli collaborates with the DnaK chaperone system in client protein remodeling[END_REF][START_REF] Battesti | The bacterial two-hybrid system based on adenylate cyclase reconstitution in Escherichia coli[END_REF]. Modeling DnaK-Hsp90 Ec interaction Starting with the ADP-bound conformation of DnaK, PDB ID 2KHO [START_REF] Bertelsen | Solution conformation of wild-type E. \coli Hsp70 (DnaK) chaperone complexed with ADP and substrate[END_REF], or separately, the ATP-bound form of DnaK, PDB ID 4BQ9 [START_REF] Kityk | Structure and Dynamics of the ATP-Bound Open Conformation of Hsp70 Chaperones[END_REF], the CHARMM molecular modeling program [START_REF] Brooks | CHARMM: A program for macromolecular energy, minimization, and dynamics calculations[END_REF] was used to build in missing atoms as described previously [START_REF] Doyle | Interplay between E. coli DnaK, ClpB and GrpE during protein disaggregation[END_REF]. Similarly, the dimeric structure of ADP-bound Hsp90 Ec was constructed from biological assembly 1 of PDB ID 2IOP [START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF] using CHARMM to build in missing atoms. The dimeric structure of apo Hsp90 Ec had several missing regions at positions 1-15, 98-114, 493-501 and 544-565 [START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF]. 
Therefore, we constructed a full-length monomer using the I-TASSER structure prediction software [START_REF] Roy | I-TASSER: a unified platform for automated protein structure and function prediction[END_REF][START_REF] Yang | The I-TASSER Suite: protein structure and function prediction[END_REF][START_REF] Zhang | I-TASSER server for protein 3D structure prediction[END_REF] by threading the sequence of E. coli Hsp90 along the apo conformation of Hsp90 Ec (PDB ID 2IOQ, chain A). Two copies of the resulting full-length monomer were then oriented so as to minimize the RMSD with each monomer of the dimer, creating a dimer of apo Hsp90 Ec . Hsp90 Ec -DnaK complexes were generated using ZDOCK [START_REF] Chen | ZDOCK: An initial-stage protein-docking algorithm[END_REF][START_REF] Pierce | ZDOCK server: Interactive docking prediction of protein-protein complexes and symmetric multimers[END_REF] (version 2.3), which employs rigid body docking and utilizes a scoring function based on pairwise shape complementarity, desolvation, and electrostatic energies. Docking complexes were created without restraining the interaction between DnaK and Hsp90 Ec . The top 2000 docked complexes from ZDOCK were then reranked using ZRANK [START_REF] Pierce | ZRANK: reranking protien docking predictions with an optimized energy function[END_REF], a detailed scoring function that takes into account desolvation energy and both attractive and repulsive van der Waals and electrostatic energies calculated using the CHARMM19 polar hydrogen force-field [START_REF] Neria | Simulation of activation free energies in molecular systems[END_REF]. The docked complex with the best score was selected for each combination: 1) ADP-bound DnaK with apo Hsp90 Ec , 2) ADP-bound DnaK with ADP-bound Hsp90 Ec , 3) ATP-bound DnaK with apo Hsp90 Ec and 4) ATP-bound DnaK with ADP-bound Hsp90 Ec . Contacts were computed by calculating distances between the centers of geometry of residue pairs, measured from alpha carbons, and counting residue pairs within an 8 Å distance. This larger distance cutoff value was used to define a general region of DnaK that may participate in binding Hsp90 Ec , which aided in the selection of residues for DnaK substitution mutants. 
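As a concrete illustration of this contact criterion, the sketch below extracts residue pairs from a docked complex whose centers of geometry lie within 8 Å of each other. The file name and chain identifiers are hypothetical placeholders, and the choice of residue centers of geometry follows the wording above rather than any published script from this study.

```python
import numpy as np
from Bio.PDB import PDBParser

CUTOFF = 8.0  # Å, the cutoff used in this study

def residue_centers(chain):
    """Center of geometry of each residue, keyed by residue number."""
    centers = {}
    for res in chain:
        coords = np.array([atom.coord for atom in res])
        if len(coords) > 0:
            centers[res.id[1]] = coords.mean(axis=0)
    return centers

def contacts(pdb_file, dnak_chain, hsp90_chain, cutoff=CUTOFF):
    """Residue pairs (DnaK, Hsp90Ec) whose centers of geometry are within `cutoff` Å."""
    model = PDBParser(QUIET=True).get_structure("docked", pdb_file)[0]
    dnak = residue_centers(model[dnak_chain])
    hsp90 = residue_centers(model[hsp90_chain])
    pairs = [(i, j) for i, ci in dnak.items() for j, cj in hsp90.items()
             if np.linalg.norm(ci - cj) <= cutoff]
    dnak_interface = sorted({i for i, _ in pairs})
    return pairs, dnak_interface

# Hypothetical usage -- file and chain names are placeholders, not from the paper:
# pairs, dnak_interface = contacts("dnak_hsp90_topmodel.pdb", "A", "B")
```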
Research Highlights
1. Molecular docking predicts Hsp90 Ec interacts with the ADP conformation of DnaK
2. The predicted Hsp90 Ec interacting region is in the DnaK nucleotide-binding domain
3. Many predicted Hsp90 Ec interacting residues on DnaK are involved in DnaJ binding
4. Substitutions in predicted DnaK residues cause defects in interaction with Hsp90 Ec
5. The DnaK mutants are defective in collaboration with Hsp90 Ec in protein remodeling

Fig. 1. Regions of interaction on DnaK and Hsp90 Ec . (a) Docked model of the apo structure of Hsp90 Ec [START_REF] Shiau | Structural Analysis of E. coli hsp90 Reveals Dramatic Nucleotide-Dependent Conformational Rearrangements[END_REF] and ADP-bound DnaK [START_REF] Bertelsen | Solution conformation of wild-type E. \coli Hsp70 (DnaK) chaperone complexed with ADP and substrate[END_REF] as determined using ZDOCK and ZRANK and described in Materials and Methods. Hsp90 Ec is shown as a surface rendering with one protomer in dark gray and one protomer in light cyan. The DnaK interacting region of Hsp90 Ec [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF] is shown in red while the client binding region is in blue [START_REF] Genest | Uncovering a Region of Heat Shock Protein 90 Important for Client Binding in E. coli and Chaperone Function in Yeast[END_REF]. DnaK in the ADP-bound conformation is shown as a ribbon model with the NBD in light orange and the SBD in light gray. (b) DnaK in the ADP-bound conformation [31] showing residues (purple) on DnaK within 8 Å of Hsp90 Ec as predicted by the docked model in (a). In (b-d) DnaK is shown as a surface rendering with the NBD in light orange and the SBD in light gray. (c) DnaK in the ADP-bound conformation [31] showing residues (green) experimentally identified as interacting with DnaJ [START_REF] Ahmad | Heat shock protein 70 kDa chaperone/DnaJ cochaperone complex employs an unusual dynamic interface[END_REF][START_REF] Suh | Interaction of the Hsp70 molecular chaperone , DnaK , with its cochaperone Dna[END_REF][START_REF] Gässler | Mutations in the DnaK chaperone affecting interaction with the DnaJ cochaperone[END_REF]. (d) DnaK in the ATP-bound conformation [32] showing residues (purple) on DnaK within 8 Å of Hsp90 Ec as predicted by the docked model in (a). In the ATP-bound conformation, only some of the DnaK residues within 8 Å of Hsp90 Ec in the model are surface exposed. Images in (a-d) were made using PyMOL (Schrodinger, LLC; www.pymol.org).

Fig. 2. Identification of DnaK amino acid residues involved in Hsp90 Ec interaction in vivo. (a) Model of E. coli DnaK in the ADP-bound conformation [31] with the mutated residues used in this study shown as CPK models. The NBD is colored light orange and the SBD is gray. DnaK was rendered using PyMOL. (b, c) Interaction between DnaK wild-type or mutant and Hsp90 Ec in a bacterial two-hybrid system in vivo, as described in Materials and Methods. Interaction was measured by monitoring β-galactosidase activity on MacConkey indicator plates (b) and in liquid assays (c). In (b), a representative plate from three independent experiments is shown with colonies labeled 1 through 16. For colonies 2 through 13, all DnaK substitution mutants, named by substituted residues, have been constructed in T25-DnaK and are present in reactions with the T18-Hsp90 Ec construct. In (c), β-galactosidase activity is shown as mean ± SEM (n = 3) and is also presented in Supplemental Table S5.

Fig. 3. DnaK NBD substitution mutants are defective in direct interaction with Hsp90 Ec in vitro. (a) Bio-Layer Interferometry (BLI) was used to monitor the kinetics of association and dissociation between biotinylated Hsp90 Ec and DnaK wild-type as described in Materials and Methods. Representative curves are shown for numerous concentrations of DnaK as indicated. The single-exponential fit of the association step was used to obtain the response value (pm) for the plateau plotted in (b). (b) Steady-state analysis of the DnaK wild-type-Hsp90 Ec interaction. The response value (pm) for the plateau obtained in (a) is plotted vs. each DnaK concentration and fit as described in Materials and Methods. The K d and Bmax for the Hsp90 Ec interaction with DnaK wild-type are 13.4 ± 3.3 µM and 0.12 ± 0.01 nm, respectively. (c) Curves showing association and dissociation of 50 µM DnaK wild-type (black) or mutant (colored) and biotinylated Hsp90 Ec using BLI. (d) Average plateau response value for the interaction between DnaK wild-type or mutant and biotinylated Hsp90 Ec . Data are plotted as mean ± SEM (n=2 or more) and are also presented in Supplemental Table S5. (e) Steady-state analysis of the DnaK Y145A,N147A,D148A -Hsp90 Ec interaction as described in (b). The K d and Bmax for the Hsp90 Ec interaction with DnaK Y145A,N147A,D148A are 44.4 ± 7.5 µM and 0.10 ± 0.01 nm, respectively.

Fig. 4. DnaK mutants exhibit defective interaction with Hsp90 Ec in the presence of L2. Hsp90 Ec -biotin was incubated with DnaK wild-type or mutant, without or with L2. Hsp90 Ec -biotin associated proteins were monitored using a pull-down assay and analyzed by Coomassie blue staining following SDS-PAGE as described in Materials and Methods. (a) In control experiments, Hsp90 Ec -biotin was incubated with DnaK wild-type with L2 (lane 1) or without L2 (lane 2) or with DnaK mutant proteins without L2 (lanes 4-14). DnaK wild-type is seen in association with Hsp90 Ec -biotin in the presence of L2 (lane 1), but not in the absence of L2 by this analysis (lane 2); however, the weak association between Hsp90 Ec and DnaK is seen by Western blot analysis [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. NSB indicates non-specific binding of DnaK wild-type in the absence of Hsp90 Ec -biotin. (b) Interaction between Hsp90 Ec -biotin and DnaK wild-type or mutant in the presence of L2 was determined as in (a). Pure proteins are shown in the first lane as markers. NSB indicates non-specific binding of DnaK wild-type and L2 in the absence of Hsp90 Ec -biotin. The gels shown are representative of at least three independent experiments. (c) Quantification of DnaK wild-type or mutant associated with Hsp90 Ec -biotin in the presence of L2. The results were normalized to Hsp90 Ec -biotin and the ratio of DnaK mutant to wild-type was plotted. Data from three or more replicates are presented as mean ± SEM and are also presented in Supplemental Table S5. The gray dashed line indicates DnaK wild-type binding and is meant to aid the eye.

Fig. 5. DnaK mutants defective in Hsp90 Ec interaction are defective in synergistic stimulation of ATP hydrolysis with Hsp90 Ec and a client protein, L2. ATP hydrolysis by DnaK wild-type or mutant in the presence of L2 was determined in the absence and presence of Hsp90 Ec as described in Materials and Methods. The fold above additive is calculated by dividing the rate of ATP hydrolysis by Hsp90 Ec and DnaK in the presence of L2 by the sum of the rate for Hsp90 Ec in the presence of L2 and the rate for DnaK in the presence of L2. Data from three or more replicates are presented as mean ± SEM and are also presented in Supplemental Table S5. The dashed line indicates the rate of ATP hydrolysis by Hsp90 Ec with L2 and is meant to aid the eye.

Fig. 6. DnaK NBD mutants defective in Hsp90 Ec interaction are defective in functional collaboration with Hsp90 Ec in luciferase reactivation. Heat-inactivated luciferase was reactivated as described in Materials and Methods using DnaK wild-type or mutant, CbpA, GrpE and Hsp90 Ec as indicated. (a-l) Luciferase reactivation by DnaK wild-type (black/gray) or mutant (color), as indicated in panels a-l, in combination with CbpA and GrpE (open symbols and dashed lines) or CbpA, GrpE and Hsp90 Ec (filled symbols and solid lines) as a function of time. Dotted lines indicate luciferase alone control. In a-l, data from three or more replicates are presented as mean ± SEM.

Fig. 7. Working model for the mechanism of action of the DnaK system in collaboration with Hsp90 Ec . First, the client protein is bound by DnaK for initial ATP-dependent remodeling, a process that requires DnaJ/CbpA and GrpE (step 1). Next, DnaK, through a direct interaction of the DnaK NBD with the Hsp90 Ec middle domain, recruits Hsp90 Ec to the client, which further stabilizes the interaction between DnaK and Hsp90 Ec (step 2). DnaJ/CbpA may be released at this time. Binding and hydrolysis of ATP by Hsp90 Ec triggers conformational changes in the chaperone that lead to client transfer and stabilization of client binding to Hsp90 Ec (step 3). DnaK and GrpE may be released at this step. Hsp90 Ec promotes further client remodeling and releases the active native client (step 4). Client proteins that do not attain the active conformation may reenter the chaperone cycle.

Acknowledgments
We thank Grzegorz Piszczek (NHLBI Biophysical Core Facility) and Jonathan McMurry for helpful comments and suggestions with BLI experiments and analysis. We also thank George Stan for valuable discussions regarding molecular docking. This research was supported by the Intramural Research Program of the NIH, NCI, Center for Cancer Research.

Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Table 1. DnaK residues within 8 Å of Hsp90 Ec residues in the model of the ADP-bound DnaK-apo Hsp90 Ec complex. Red indicates DnaK residues shown in this study to be important for interaction with Hsp90 Ec . Blue indicates Hsp90 Ec residues previously shown to be important for the interaction with DnaK [START_REF] Genest | Hsp70 and Hsp90 of E. coli Directly Interact for Collaboration in Protein Remodeling[END_REF]. Green indicates Hsp90 Ec residues identified in this study to be important for interaction with DnaK. Black indicates residues not tested in this study, except for DnaK residue 214, which when mutated was similar to DnaK wild-type (Fig. 2). Bold, italics indicates DnaK residues interacting with DnaJ as identified from previous studies [START_REF] Ahmad | Heat shock protein 70 kDa chaperone/DnaJ cochaperone complex employs an unusual dynamic interface[END_REF][START_REF] Suh | Interaction of the Hsp70 molecular chaperone , DnaK , with its cochaperone Dna[END_REF][START_REF] Gässler | Mutations in the DnaK chaperone affecting interaction with the DnaJ cochaperone[END_REF].
01773931
en
[ "info.info-cr" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01773931/file/2018-JCN-PHOABE-Sana-Nesrine.pdf
Sana Belguith Nesrine Kaaniche Maryline Laurent Abderrazak Jemai Rabah Attia PHOABE: SECURELY OUTSOURCING MULTI-AUTHORITY ATTRIBUTE BASED ENCRYPTION WITH POLICY HIDDEN FOR CLOUD ASSISTED IOT Keywords: Attribute based encryption, Hidden policy, Decryption outsourcing, Cloud computing, Privacy, Data security Attribute based encryption (ABE) is an encrypted access control mechanism that ensures efficient data sharing among dynamic groups of users. Nevertheless, this encryption technique presents two main drawbacks, namely high decryption cost and publicly shared access policies, thus leading to possible users' privacy leakage. In this paper, we introduce PHOABE, a Policy-Hidden Outsourced ABE scheme. Our construction presents several advantages. First, it is a multi-attribute authority ABE scheme. Second, the expensive computations for the ABE decryption process are partially delegated to a Semi Trusted Cloud Server. Third, users' privacy is protected thanks to a hidden access policy. Fourth, PHOABE is proven to be selectively secure, verifiable and policy privacy preserving under the random oracle model. Fifth, estimation of the processing overhead proves its feasibility in constrained IoT environments. INTRODUCTION The Internet of Things (IoT) refers to connecting different kinds of devices (things), mainly sensors, RFID tags, PDAs or smartphones, to build a network. The deployment of these IoT devices is gaining expanding interest in academic research and industry as well as in daily life [START_REF] Gubbi | Internet of things (iot): A vision, architectural elements, and future directions[END_REF], for applications such as smart grid [START_REF] Yun | Research on the architecture and key technology of internet of things (iot) applied on smart grid[END_REF], e-health [START_REF] Abuarqoub | Behaviour profiling in healthcare applications using the internet of things technology[END_REF], smart city, etc. Currently, applications based on IoT can be found everywhere. According to Yao et al. [START_REF] Yao | A lightweight attribute-based encryption scheme for the internet of things[END_REF], IoT is classified into Unit IoT and Ubiquitous IoT categories according to the number of involved applications or domains [START_REF] Ning | Cyberentity security in the internet of things[END_REF]. The Unit IoT category is involved in a single application, where only one authority is required. However, in the Ubiquitous IoT category, IoT is used in cross-domain applications, where local, national and industrial IoTs interact, thus requiring multiple authorities across domain applications. Both Unit IoT and Ubiquitous IoT are becoming popular, and there is a strong need for both of them to handle data processing and sharing among different IoT devices. The significant growth of involved IoT devices imposes high requirements for data security and privacy preservation. Hence, security problems have become a hurdle in fulfilling the vision for IoT [START_REF] Carlin | Intrusion detection and countermeasure of virtual cloud systems-state of the art and current challenges[END_REF][START_REF] Ghafir | Social engineering attack strategies and defence approaches[END_REF]. In IoT applications, data are constantly transmitted, stored and dynamically shared through heterogeneous and distributed networks [START_REF] Hammoudeh | A wireless sensor network border monitoring system: Deployment issues and routing protocols[END_REF]. 
Consequently, encryption and access control mechanisms are important in order to prevent unauthorized entities from accessing data [START_REF] Roman | Securing the internet of things[END_REF][START_REF] Carlin | Defence for distributed denial of service attacks in cloud computing[END_REF][START_REF] Kaaniche | Cloudasec: A novel publickey based framework to handle data sharing security in clouds[END_REF][START_REF] Belguith | Enhancing data security in cloud computing using a lightweight cryptographic algorithm[END_REF][START_REF] Kaaniche | Id based cryptography for cloud data storage[END_REF]. Attribute based encryption schemes is a promising cryptographic primitive that ensures efficient encrypted access control to outsourced data. Indeed, recently, several attribute based encryption mechanisms have been proposed in literature [START_REF] De | Selective and private access to outsourced data centers[END_REF][START_REF] Horváth | Attribute-based encryption optimized for cloud computing[END_REF][START_REF] Joseph K Liu | Fine-grained two-factor access control for web-based cloud computing services[END_REF][START_REF] Kaaniche | Attributebased signatures for supporting anonymous certification[END_REF][START_REF] Li | P-cp-abe: Parallelizing ciphertextpolicy attribute-based encryption for clouds[END_REF][START_REF] Li | Tmacs: A robust and verifiable threshold multiauthority access control system in public cloud storage[END_REF][START_REF] Alsboui | A service-centric stack for collaborative data sharing and processing[END_REF]. Most of the proposed attribute based schemes have focused on designing expressive access control policies and providing low communication overheads, through short or constant size ciphertexts [START_REF] Belguith | Pabac: a privacy preserving attribute based framework for fine grained access control in clouds[END_REF][START_REF] Guo | Cp-abe with constant-size keys for lightweight devices[END_REF][START_REF] Attrapadung | Attribute-based encryption schemes with constant-size ciphertexts[END_REF]. Though these solutions present low storage and communication costs, they are still not suitable to be used on resource-constrained devices such as mobile devices and sensors. For instance, the construction of ABE schemes is based on the use of bilinear maps which present expensive computation costs. Moreover, the number of these expensive bilinear operations increases along with the number of attributes involved in the access structure, mainly in the decryption procedure [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF]. Hence, the most relevant challenge is to reduce the decryption processing cost of the introduced ABE mechanism while providing fine-grained access control for users [START_REF] Zuo | Cca-secure abe with outsourced decryption for fog computing[END_REF]. Green et al. [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF] proposed, in 2011, the first attributebased encryption scheme with outsourced decryption. This scheme consists in securely offloading the decryption process of ABE to an external cloud based provider. This solution ensures that most of the decryption cost can be released from the IoT devices to the cloud. In most attribute based encryption schemes, the access structure is shared publicly with the related ciphertext. Hence, any user who get the ciphertext can see its content. This exposure of data's access structure will disclose sensitive information about the decryption or encryption party. 
Meanwhile, in order to avoid disclosing these sensitive information, the access structure should be hidden [START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF][START_REF] Viet | Hidden ciphertext policy attribute-based encryption under standard assumptions[END_REF][START_REF] Xu | A cp-abe scheme with hidden policy and its application in cloud computing[END_REF]. In addition, in single-authority ABE schemes, a central attribute authority is responsible for managing and issuing all users' attributes and related secret keys. Although this setting facilitates the key management, it can be a bottleneck since central attribute authority is able to achieve a key escrow attack, due to its knowledge of the users' private keys. To solve this problem, many multi-attribute authority ABE schemes have been proposed. These solutions rely on multiple parties to distribute attributes and private keys to users. Such approach offers the scalability for the system even if the number of users becomes important [START_REF] Belguith | Pabac: a privacy preserving attribute based framework for fine grained access control in clouds[END_REF][START_REF] Lewko | Decentralizing attribute-based encryption[END_REF][START_REF] Chase | Improving privacy and security in multi-authority attribute-based encryption[END_REF]. In this paper, we introduce a novel Policy-Hidden Outsourced Attribute Based Encryption (PHOABE) scheme. Our proposed mechanism is multifold. First, we extend the original multi-authority CP-ABE scheme proposed by Lewko et al. [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF] to support the outsourced decryption in order to better fit processing and communication requirements of resource-constrained devices. For instance, our scheme consists in delegating the expensive computations during the decryption phase, to a Semi Trusted Cloud Server, referred to as STCS. Second, we apply policy-hidden techniques to ensure users' privacy and access policy confidentiality preservation. Third, we introduce a secure mechanism consisting in verifying that the partially decrypted ciphertext was correctly generated by the remote cloud server referred to as the verifiability concept. Fourth, we show that our proposed mechanism is selectively secure, verifiable and policy privacy preserving under the random oracle model. Paper Organisation -The remainder of this paper is organized as follows. First, Section 2 highlights security considerations and design goals. Then, Section 3 reviews related work and introduces attribute based mechanisms. In Section 4, we describe the system model and the threat model of the system. Afterwards, we detail the framework design and we introduce the construction of our proposed scheme in Section 5. In Section 6, we perform security analysis of PHOABE based on security games. Finally, a theoretical performance analysis is provided in Section 7, before concluding in Section 8. PROBLEM STATEMENT As e-health systems are witnessing increased popularity, several health organisations are using these systems in order to centralize and share medical data in an efficient way. Let us consider the following example, where a medical organisation relies on cloud based services to collect and share Electronic Health Records (EHRs) among the medical staff. Note that the medical staff can belong to different organisations such as hospitals, research laboratories, pharmacies, health ministry as well as doctors. 
The use of a cloud architecture enables the hospital employees to access the data using their smart devices (such as PDAs, smartphones • • • ), considered as resource-constrained devices. Health Insurance Portability and Accountability Act (HIPAA) [START_REF]Health Insurance Portability and Accountability Act (HIPAA)[END_REF] states that access policies must finely precise different access privileges of authorized users to the shared outsourced data. In fact, a health-care information system based on cloud services is required to protect medical records from unauthorized access. Hence, the system must restrict access of protected data to eligible doctors. For instance, hospital employees, mainly doctors, have to share patients' health information, in order to collaborate with the involved hospital employees to properly prescript treatments. Thus, they usually form dynamic sharing groups with different granted privileges. As data are always shared through the heterogeneous and distributed networks, the proposed security mechanisms should provide lightweight processing at the client side, while supporting flexible sharing of encrypted outsourced data among dynamic group of users. To support all these features with efficiency, we propose to design a multi-attribute authority ABE scheme with outsourced decryption to be run at the client side. Thus, the proposed scheme PHOABE must fulfill the following properties: • low computation overhead -PHOABE must introduce cryptographic algorithms with low processing complexity especially at the client side in order to ensure access by different resource-constrained devices. • data confidentiality -PHOABE has to protect the secrecy of outsourced and encrypted data contents against both curious cloud service providers and malicious users. • flexible access controlour proposal should ensure fine grained access control to allow authorized users to access data. • privacy -PHOABE must protect group members' access patterns privacy, while requesting access to outsourced data. ABE-RELATED WORK Attribute Based Encryption (ABE) was first designed by Sahai and Waters to ensure encrypted access control [START_REF] Sahai | Fuzzy identity-based encryption[END_REF]. In ABE schemes, the ciphertext is encrypted for many users instead of encrypting to a single user as in traditional public key cryptography. In attribute based encryption schemes, user's private keys and ciphertext are associated with an access policy or a set of attributes [START_REF] Bethencourt | Ciphertext-policy attribute-based encryption[END_REF]. Thus, a data user is able to decrypt the ciphertext if his private key matches the ciphertext. ABE schemes are classified into two categories, namely: Key-Policy Attribute Based Encryption (KP-ABE) and Ciphertext-Policy Attribute Based Encryption (CP-ABE) [START_REF] Goyal | Attribute-based encryption for fine-grained access control of encrypted data[END_REF]. In KP-ABE, the ciphertext are labeled with a set of attributes while the users' private keys are associated with an access policy which can be any monotonic tree. The user is able to decrypt the ciphertext if its access policy is satisfied by the attributes embedded in the ciphertext. Although the KP-ABE scheme offers fine-grained access control feature, it has one main disadvantage. Indeed, the data owners cannot decide on who has access to their encrypted data, except by their choice of descriptive attributes for the data, as the access policy is embedded in the user's private keys. 
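In both KP-ABE and CP-ABE, decryption therefore hinges on whether a set of attributes satisfies a monotone boolean policy. The short sketch below makes this matching rule concrete; the policy encoding, the attribute names and the helper function are illustrative only and are not tied to any particular ABE construction.

# A toy monotone policy evaluator -- illustrative only, not tied to any
# concrete ABE construction. A policy is either an attribute name (a leaf)
# or a tuple ("AND"/"OR", sub-policy, sub-policy, ...).
def satisfies(policy, attributes):
    if isinstance(policy, str):          # leaf node: a single attribute
        return policy in attributes
    gate, *children = policy
    results = [satisfies(child, attributes) for child in children]
    return all(results) if gate == "AND" else any(results)

# Hypothetical e-health policy: (Doctor AND Hospital) OR (Patient AND Hospital)
policy = ("OR", ("AND", "Doctor", "Hospital"), ("AND", "Patient", "Hospital"))
print(satisfies(policy, {"Doctor", "Hospital"}))   # True  -> decryption possible
print(satisfies(policy, {"Doctor", "Pharmacy"}))   # False -> decryption fails

In KP-ABE this check is evaluated against attributes attached to the ciphertext, whereas in CP-ABE it is evaluated against attributes held by the user.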
Because the access policy is embedded in the users' private keys, the data owners have to trust the key issuer. Ciphertext-policy ABE schemes remove this inconvenience by embedding the access policy directly in the ciphertext. As such, the data owners can decide who may access their encrypted data [START_REF] Belguith | Pabac: a privacy preserving attribute based framework for fine grained access control in clouds[END_REF][START_REF] Lewko | Decentralizing attribute-based encryption[END_REF][START_REF] Goyal | Attribute-based encryption for fine-grained access control of encrypted data[END_REF]. CP-ABE schemes allow the data owner to specify the users authorized to access the data, by attaching the access policy to the ciphertext [START_REF] Belguith | Pabac: a privacy preserving attribute based framework for fine grained access control in clouds[END_REF], [START_REF] Waters | Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization[END_REF], [START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF], [START_REF] Cheung | Provably secure ciphertext policy abe[END_REF]. In order to issue private keys related to the user's set of attributes, ABE schemes rely on trusted authorities. ABE schemes can be categorized into two types, namely single-authority ABE schemes and multi-authority ABE schemes. In a single-authority ABE scheme, the attributes and their related private keys are issued by a central attribute authority. Although this centralized approach makes the key management easier, it does not ensure scalability, especially when a huge number of users is involved. To address this problem, multi-attribute authority ABE schemes [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF][START_REF] Chase | Improving privacy and security in multi-authority attribute-based encryption[END_REF][START_REF] Božović | Multi-authority attribute-based encryption with honest-but-curious central authority[END_REF] have been proposed. In 2011, Lewko and Waters [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF] proposed a multi-attribute authority scheme in which attributes and their related secret keys are issued by different attribute authorities. For instance, each attribute authority is responsible for generating the private key associated with a user's attribute. Consequently, this scheme does not rely on a central trusted authority to manage attributes' secret keys. In addition, Lewko and Waters use a unique Global Identifier (GID) for each user to prevent collusion attacks. Hence, a user must send his unique GID to each attribute authority to receive his attribute's secret key. Although CP-ABE ensures fine-grained and flexible access control, it incurs expensive decryption costs, which are not convenient for resource-constrained devices. These expensive computation costs are mainly related to the execution of several pairing functions. In addition, the decryption process requires a number of pairing operations which increases with the number of attributes involved in the access policy [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF]. Additionally, the use of CP-ABE schemes consists in sharing the access structure associated with the ciphertext with the involved authorities, which can disclose the attributes in the access policy. In the following, we present a review of the proposed outsourcing attribute based encryption mechanisms in Section 3.1.
Then, we introduce policy hidden attribute based encryption schemes in Section 3.2. Outsourcing Attribute Based Encryption As detailed in the aforementioned section, ABE schemes present expensive decryption costs which increase along with the number of attributes of involved attributes in the access policy. In fact, thanks to their bilinearity properties, ABE schemes usually rely on expensive-computing pairing-based operations. Obviously, this limit is mainly prominent for resource-constrained devices. To reduce expensive costs, several research works rely on the use of constant attribute based encryption schemes generating ciphertext size and relying on a constant number of bilinear operations [START_REF] Attrapadung | Attribute-based encryption schemes with constant-size ciphertexts[END_REF][START_REF] Cheung | Provably secure ciphertext policy abe[END_REF][START_REF] Odelu | Expressive cp-abe scheme for mobile devices in iot satisfying constant-size keys and ciphertexts[END_REF][START_REF] Belguith | Constant-size threshold attribute based signcryption for cloud applications[END_REF][START_REF] Odelu | Design of a new cp-abe with constant-size secret keys for lightweight devices using elliptic curve cryptography[END_REF][START_REF] Odelu | Pairing-based cp-abe with constant-size ciphertexts and secret keys for cloud environment[END_REF]. However, these schemes consist in using threshold or conjunctive access policies which do not provide the desired expressiveness. To mitigate this drawback, in 2011, Green et al. [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF] proposed a new approach consisting in outsourcing the expensive operations during the decryption phase to a third party. This approach consists in generating a transformation key derived from the user's secret key. Then, the user shares this generated transformation key with a semi-trusted cloud server (STCS). The ciphertext is then submitted to the STCS which uses the transformation key to generate a partially decrypted ciphertext of the same message and sends it to the user. Afterwards, the user can recover the original message using the short ciphertext and his secret key with only one exponentiation operation. Note that the semi-trusted cloud server cannot gain any information about the encrypted message while partially decrypting the ciphertext. In addition, this process helps the user to save the local computation costs. This new concept proposed by Green et al. [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF] is similar to the concept of proxy re-encryption [START_REF] Yang | Cloud based data sharing with fine-grained proxy reencryption[END_REF][START_REF] Xu | Multi-authority proxy re-encryption based on cpabe for cloud storage systems[END_REF][START_REF] Canetti | Chosenciphertext secure proxy re-encryption[END_REF] where an untrusted proxy is given a re-encryption key that allows it to transform an encryption under Alice's key of m into an encryption under Bob's key of the same m, without allowing the proxy to learn anything about m. While using attribute based encryption schemes with out-sourced decryption, the user relies on the use of a semi-trusted third party to partially decrypt the ciphertext. Thus, to ensure data security, the user should be able to verify that the received partially decrypted ciphertext is not altered. 
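Before turning to that verification issue, the exponent-blinding idea underlying this delegation can be pictured with a toy sketch, under simplifying assumptions: an ordinary prime-order subgroup of Z_p^* replaces the pairing group, and a single secret exponent stands in for the ABE key. The parameters and variable names below are illustrative only and do not correspond to any concrete scheme.

# Minimal sketch of outsourced decryption by exponent blinding, in a toy
# prime-order subgroup of Z_p^* (real ABE schemes work in pairing groups).
# The parameters p, q, g, sk and the exponent 5 below are illustrative only.
import secrets

p, q, g = 23, 11, 4                    # g generates the subgroup of order q

sk = 7                                 # user's secret exponent (stands in for the ABE key)
z = 1 + secrets.randbelow(q - 1)       # blinding factor kept by the user (tsk)
tk = pow(g, sk * pow(z, -1, q), p)     # transformation key g^{sk/z} given to the STCS

partial = pow(tk, 5, p)                # the server does the heavy exponentiation
recovered = pow(partial, z, p)         # the user finishes with a single exponentiation

assert recovered == pow(g, sk * 5, p)  # same result, yet sk never left the user
print(recovered)

The server only ever sees the blinded key, so it learns nothing about the secret exponent; nothing, however, forces it to perform the computation honestly, which is why the verification issue raised above matters.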
For instance, a lazy STCS can return a ciphertext which has been previously computed for the user, or a malicious STCS can generate a forged transformation of the ciphertext [START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF]. Some research works introduced verifiable computation techniques which can be used to construct attribute based encryption schemes with verifiable outsourced decryption [START_REF] Chung | Improved delegation of computation using fully homomorphic encryption[END_REF][START_REF] Chevallier-Mames | Secure delegation of elliptic-curve pairing[END_REF][START_REF] Gennaro | Non-interactive verifiable computing: Outsourcing computation to untrusted workers[END_REF]. The solutions proposed in [START_REF] Chung | Improved delegation of computation using fully homomorphic encryption[END_REF][START_REF] Gennaro | Non-interactive verifiable computing: Outsourcing computation to untrusted workers[END_REF] apply fully homomorphic encryption schemes [START_REF] Gentry | Fully homomorphic encryption using ideal lattices[END_REF], which require huge computation costs. In 2013, Lai et al. [START_REF] Lai | Attribute-based encryption with verifiable outsourced decryption[END_REF] proposed a security model to verify the correctness of a ciphertext generated by an outsourced attribute based encryption scheme. Although the authors introduced a verifiable outsourced attribute based encryption scheme, the latter presents an expensive computation cost at the encrypting entity side which is not suitable for resource-constrained devices. Indeed, in the proposed scheme, the ciphertext is composed of the encrypted message and an encrypted random message. Thus, to ensure the verification of the correctness of the partially decrypted ciphertext, the encrypting entity adds a redundant encrypted message to the ciphertext. Li et al. proposed, in 2014, an attribute based encryption scheme with outsourced decryption ensuring the ciphertext verifiability [START_REF] Li | Securely outsourcing attribute-based encryption with checkability[END_REF], based on their first construction presented in [START_REF] Li | Fine-grained access control system based on outsourced attribute-based encryption[END_REF]. Their proposal requires the deployment of two cloud service providers to perform the outsourced key-issuing and decryption algorithms. However, the proposed scheme relies on a single-authority ABE construction. Thus, all the attributes involved in the system are issued and managed by a central attribute authority. Afterwards, Wang et al. [START_REF] Wang | Server aided ciphertext-policy attribute-based encryption[END_REF] introduced, in 2015, a server-aided CP-ABE system. In fact, users can rely on a proxy server to pre-compute a partially encrypted ciphertext, which improves the efficiency of the encryption process. Then, the encrypting entity uses the partially encrypted ciphertext to generate the encryption of the message and share it. In the decryption phase, users rely on a computing server to perform the expensive operations introduced by the decryption process. Although this proposal presents a significant reduction of computation costs, it relies on a single-domain architecture which is inconvenient for distributed IoT systems. Qin et al. [START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF] extended an attribute based encryption scheme with the verifiable outsourcing feature.
Although this proposal presents low computation costs, it relies on a single-authority ABE scheme. Hence, this can be a bottleneck, as this authority may mount a key escrow attack due to its knowledge of all users' private keys. In 2015, Lin et al. [START_REF] Lin | Revisiting attribute-based encryption with verifiable outsourced decryption[END_REF] proposed a verifiable outsourced attribute based encryption scheme. In this proposal, the verification process relies on the use of an attribute based key encapsulation mechanism, a symmetric-key encryption scheme and a commitment scheme. Their construction is based on the ABE scheme proposed by Waters in 2011 [START_REF] Waters | Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization[END_REF], which is a single attribute authority ABE scheme. Zuo et al. [START_REF] Zuo | Cca-secure abe with outsourced decryption for fog computing[END_REF] presented a CCA-secure ABE scheme with outsourced decryption for fog computing applications. This scheme does not introduce a mechanism to verify the correctness of the partially decrypted ciphertext generated by the outsourced decryption algorithm. Recently, in 2017, Li et al. [START_REF] Li | Verifiable outsourced decryption of attribute-based encryption with constant ciphertext length[END_REF] proposed an attribute based encryption scheme with the verifiable outsourced decryption feature. This single-authority construction provides constant-size ciphertexts while relying on the use of monotone access structures. Above all, the aforementioned schemes rely on single-authority attribute based encryption schemes. However, IoT is used in cross-domain applications, where local, national and industrial IoTs interact. Consequently, multi-attribute authority ABE schemes are more suitable in the IoT context. This motivates us to introduce a verifiable outsourced attribute based encryption scheme with a multi-attribute authority construction. Policy Hidden Attribute Based Encryption To support flexible data access control, CP-ABE schemes have been widely applied in distributed architectures. However, access policies are usually publicly shared with the different involved entities, which may disclose sensitive information about both decrypting and encrypting entities. To protect the sensitive information included in access policies, several research works [START_REF] Cheung | Provably secure ciphertext policy abe[END_REF][START_REF] Lai | Expressive cp-abe with partially hidden access structures[END_REF] introduce CP-ABE schemes with partially hidden policies. In fact, an access policy involves a set of attributes, each expressed as a pair: the generic attribute name and the attribute value. Usually, the attribute value contains the more sensitive information. For instance, the attribute values "pediatrician" and "XF12599" are more sensitive than the attribute names "Doctor" and "Patient", respectively. Therefore, CP-ABE schemes with partially hidden policies consist in hiding the attribute values to protect the sensitive information. That is, instead of a full access structure, a partially hidden access structure (e.g., "(Patient: * AND Hospital: *) OR (Doctor: * AND Hospital: *)"), which consists of only attribute names without attribute values, is attached to the ciphertext. Although ABE schemes with partially hidden access structures ensure the secrecy of attribute values, they still suffer from a set of security issues, mainly off-line dictionary attacks. In 2008, Nishide et al.
[START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF] proposed an attribute based encryption scheme with partially hidden access control policy. This construction relies on the single authority ABE scheme proposed by Cheung et al. [START_REF] Cheung | Provably secure ciphertext policy abe[END_REF]. As such, the [START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF] proposal uses a central authority to issue attributes and secret keys to different users. In 2012, Lai et al. [START_REF] Lai | Expressive cp-abe with partially hidden access structures[END_REF] proposed an attribute based encryption scheme with partially hidden access policy. This proposal is based on the Waters' attribute based encryption scheme [START_REF] Waters | Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization[END_REF]. Hence, it relies on the use of a single authority architecture which is not convenient for distributed IoT architectures. To address the security and privacy issues raised by CP-ABE with partially hidden policy schemes, CP-ABE with hidden access policy schemes are introduced [START_REF] Viet | Hidden ciphertext policy attribute-based encryption under standard assumptions[END_REF][START_REF] Xu | A cp-abe scheme with hidden policy and its application in cloud computing[END_REF][START_REF] Zhou | Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption[END_REF]. Although these schemes ensure the privacy of access policies, they still suffer from high processing overhead. However, the trade-off between efficiency and perfect privacy is the main design challenge of several security mechanisms. Xu et al. [START_REF] Xu | A cp-abe scheme with hidden policy and its application in cloud computing[END_REF] extended the attribute based encryption scheme proposed by Bethencourt et al. [START_REF] Bethencourt | Ciphertext-policy attribute-based encryption[END_REF] with the hidden access policy feature, for cloud applications. However, this ABE scheme relies on the use of a central attribute authority to manage all the attributes and private keys in the system. Hence, this can be a bottleneck as a central attribute authority is able to achieve a key escrow attack. In 2015, Zhou et al. [START_REF] Zhou | Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption[END_REF] proposed a privacy preserving attribute based broadcast encryption scheme. This proposal consists in encrypting a message using an expressive hidden access policy. Then, the encrypted message can be broadcasted with or without explicitly specifying the receivers. However, the [START_REF] Zhou | Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption[END_REF] construction introduces a high computation cost, such that the deciphering entity needs to perform several pairing operations to decrypt the ciphertext. Phuong et al. [START_REF] Viet | Hidden ciphertext policy attribute-based encryption under standard assumptions[END_REF] proposed an attribute based encryption scheme with hidden access policy. This construction uses an access policy with only AND gates. Thus, this proposal presents less expressiveness compared to other schemes. Above all, the mentioned proposals rely on single-authority attribute based encryption schemes. Recently, Zhong et al. 
[START_REF] Zhong | Multi-authority attribute-based encryption access control scheme with policy hidden for cloud storage[END_REF] proposed the first policy hidden attribute based encryption scheme using a multi-attribute authority architecture. However, because of the required pairing operations, this scheme introduces an expensive computation cost at the client side to execute the decryption process. In most of the existing policy-hidden CP-ABE schemes, the decryption computation costs grow proportionally with the complexity of the access structures. This motivates us to introduce a policy hidden multi-attribute authority CP-ABE scheme with decryption outsourcing. To evaluate the objectives given in Section 2, we introduce, in Table 1, a comparison of our scheme PHOABE with the CP-ABE constructions that are most closely related to our context. On the one hand, several research works have introduced ABE schemes with outsourced decryption [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF][START_REF] Zuo | Cca-secure abe with outsourced decryption for fog computing[END_REF][START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF][START_REF] Lai | Attribute-based encryption with verifiable outsourced decryption[END_REF][START_REF] Li | Securely outsourcing attribute-based encryption with checkability[END_REF][START_REF] Lin | Revisiting attribute-based encryption with verifiable outsourced decryption[END_REF][START_REF] Li | Verifiable outsourced decryption of attribute-based encryption with constant ciphertext length[END_REF]; PHOABE is, however, the first multi-authority attribute based scheme with outsourced decryption ensuring the ciphertext verifiability. On the other hand, ABE with hidden policy has been addressed [26-28, 57, 58] in several research works, but PHOABE is the only scheme suitable for resource-constrained devices. Mathematical Background In this section, we first introduce the access structure in Section 3.3.1. Then, in Section 3.3.2, we present the bilinear maps. Finally, we introduce some security assumptions. Access Policies Access policies can be represented by one of the following formats: boolean functions of attributes or a Linear Secret Sharing Scheme (LSSS) matrix [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF]. Definition 1. Access Structure Let {P_1, • • • , P_n} be a set of parties. A collection A ⊆ 2^{{P_1,••• ,P_n}} is monotone if ∀B, C: if B ∈ A and B ⊆ C, then C ∈ A. An access structure is a collection A of non-empty subsets of {P_1, • • • , P_n}, i.e., A ⊆ 2^{{P_1,••• ,P_n}} \ {∅}. We note that any access structure can be converted into a boolean function. Boolean functions can be represented by an access tree, where the leaves represent the attributes while the intermediate and root nodes are the logical operators AND (∧) and OR (∨). Definition 2. Linear Secret Sharing Schemes (LSSS) A Linear Secret Sharing Scheme LSSS [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF] over a set of parties P is defined as follows: 1. the shares of each party form a vector over Z_p. 2. Let us consider an (n × l) matrix A called the share-generating matrix for the LSSS. The row i ∈ [1, • • • , n] of A is labeled by a function ρ(i) : {1, • • • , n} → P. Let s ∈ Z_p be a secret value to be shared, and consider a column vector v = [s, r_2, • • • , r_l], where r_2, • • • , r_l ∈ Z_p are random values. Consequently, A ·
v = λ is the vector of the n shares of the secret s according to the LSSS. Bilinear Maps Let us consider two multiplicative cyclic groups G_1 and G_T of prime order P and let g be a generator of G_1. A map ê : G_1 × G_1 → G_T is a bilinear map if it fulfills the bilinearity, non-degeneracy and computability properties, defined as follows: 1. bilinearity: ∀u, v ∈ G_1 and a, b ∈ Z_p, we have ê(u^a, v^b) = ê(u, v)^{ab}. 2. non-degeneracy: ê(g, g) ≠ 1. 3. computability: ê is efficiently computable, i.e., there is an efficient algorithm computing ê(g_1, g_2) for any g_1, g_2 ∈ G_1. We say that G_1 is a bilinear group if the group operation in G_1 and the bilinear map ê are both efficiently computable. MODEL DESCRIPTION Our proposed framework considers a cloud storage system involving multiple attribute authorities, as detailed in Figure 1. Hence, PHOABE involves six different entities: the Cloud Service Provider (CSP), the Central Trusted Authority (CTA), the Semi-Trusted Cloud Server (STCS), the Attribute Authorities (AA), the data owner (O) and the data users (U). In this section, we introduce our system model in Section 4.1. Then, we detail our security model in Section 4.2. System Model Our PHOABE scheme relies on seven randomized algorithms defined as follows: setup(λ) → PP -the setup algorithm is performed by the central trusted authority (CTA) to output the global public parameters PP. This randomized algorithm takes as input a chosen security parameter λ. setup_auth(PP) → (sk_{AA_j}, pk_{AA_j}) -an attribute authority AA_j (j ∈ [1, N]) executes this randomized algorithm, where N is the number of attribute authorities in the system. The setup_auth algorithm takes as input PP and generates the pair of private and public keys (sk_{AA_j}, pk_{AA_j}). encrypt(PP, {pk_{AA_j}}, M, (A, ρ)) → CT -the encryption algorithm is executed by the data owner (O) to generate the ciphertext CT. It takes as input PP, the set of involved attribute authorities' public keys {pk_{AA_j}}, the data file M and the access policy Ψ = (A, ρ). keygen(PP, sk_{AA_j}, pk_{AA_j}, GID, S_{j,GID}) → sk_{j,GID} -this algorithm is performed by an attribute authority AA_j in order to generate the user's secret key related to a set of attributes S_{j,GID} = {a_{1_j}, • • • , a_{n_j}}, where n_j is the number of attributes of S_{j,GID}. It takes as input the global parameters PP, the pair of private and public attribute authority keys (sk_{AA_j}, pk_{AA_j}) and the user's identity GID. It outputs the secret key sk_{j,GID} related to the set of attributes S_{j,GID}. transform(PP, {sk_{j,GID}}_{j∈N}, (A, ρ), CT) → {tk_{j,GID}}_{j∈N} -this algorithm is performed by the user (U) having a set of attributes S_GID and the related secret keys {sk_{j,GID}}_{j∈N} received from the different involved attribute authorities {AA_j}_{j∈N}. It takes as input the global public parameters PP, the user's secret keys {sk_{j,GID}}_{j∈N}, the access policy Ψ = (A, ρ) and the ciphertext CT. The transform algorithm generates the set of transformation keys {tk_{j,GID}}_{j∈N} = ({tpk_{j,GID}}_{j∈N}, tsk_GID) related to the user's secret keys, where {tpk_{j,GID}}_{j∈N} and tsk_GID are the public and private transformation keys, respectively. decrypt_out(PP, {tpk_{j,GID}}_{j∈N}, (A, ρ), CT) → M' -the semi-trusted cloud server (STCS) executes the decrypt_out algorithm to compute the partially decrypted ciphertext M'. This algorithm takes as input the public parameters PP, the transformation keys {tpk_{j,GID}}_{j∈N}, an access policy (A, ρ) and the ciphertext CT. decrypt(M', tsk_GID) → M -the user U executes the decryption algorithm to retrieve the message M.
This algorithm takes as input the transformation secret key tsk GID and the partially decrypted ciphertext M and outputs the message M. Security Model We consider two realistic threat models for proving security and privacy properties of our PHOABE construction. We first consider a honest but curious cloud provider. That is, the cloud is honest as it provides proper inputs or outputs, at each step of the protocol, properly performing any calculations expected from it, but it is curious in the sense that it attempts to gain extra information from the protocol. As such, we consider the honest but curious threat model against the access policy privacy requirement, as presented in Section 4.2.3. Then, we study the case of malicious users and servers, trying to override their rights. That is, they may attempt to deviate from the protocol or to provide invalid inputs. As such, we consider the malicious user security model against the access policy privacy requirement and the confidentiality property, detailed in Section 4.2.1. Also, we consider the malicious STCS security model against the verifiability requirement, as introduced in Section 4.2.2. For instance, a lazy STCS can return a ciphertext which has been previously computed for the user or a malicious STCS can generate a forged transformation of the ciphertext. Confidentiality To design the most suitable security model considering the confidentiality requirement, we adopt a relaxation introduced by Canetti et al. [START_REF] Canetti | Relaxing chosen-ciphertext security[END_REF] called Replayable CPA (RCPA) security. Indeed, under the RCPA security model, the provided ciphertext can be modified without changing the message in a meaningful way. In our security model, we assume that the adversary is allowed to query for any secret keys that cannot be used for decrypting the challenge ciphertext. In addition, we consider the assumption introduced in the Lewko et al. proposal [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF] that states that the adversary can only corrupt authorities statically. Let consider S AA the set of all attribute authorities and S AA a set of corrupted attribute authorities. Our PHOABE scheme is RCCA-Secure if there is no probabilistic polynomial time (PPT) adversary that can win the Exp con f security game defined below with non-negligible advantage. The Exp con f security game is formally defined, between an adversary A and a challenger C as follows: Initialisation -in this phase, the adversary A chooses a challenge access structure Ψ * = (A * , ρ * ) and sends it to the challenger C . Setup -during this phase, the challenger C first runs the setup algorithm to generate the public parameters. Then, the adversary A selects a set of corrupted attribute authorities S AA ⊂ S AA and runs the setup auth algorithm to obtain their public and private keys. Subsequently, C queries the honest attribute authorities' public and private keys by running the setup auth algorithm. Afterwards, the challenger C publishes the public keys of the honest attribute authorities. Queries phase 1 -in this phase, the challenger first initializes an empty table T and an empty set D. Then, for each session k, the adversary issues the following queries: • Private Key query: the adversary queries the secret keys {sk j,GID } S AA related to a set of attributes {S GID } k belonging to a set of non-corrupted attribute authorities a i ∈ S AA \ S AA . 
Then, the challenger sets D = D ∪ {S GID } k returns the corresponding secret keys to the adversary. Note that the set of attributes {S GID } k does not satisfy the access policy Ψ * = (A * , ρ * ) i.e; Ψ * ({S GID } k ) = 1. • Transformation Key query: the adversary queries the secret keys {sk j,GID } S AA related to a set of attributes {S GID } k belonging to a set of non-corrupted attribute authorities a i ∈ S AA \ S AA . Afterwards, the challenger searches the entry (S GID , {sk j,GID } S AA , {tk j,GID } S AA ) in table T . If such entry exists, it returns the set of the transformation keys {tk j,GID } S AA . Otherwise, it generates h used to run the tranform . Then, the challenger runs keygen(PP, sk AA j , pk AA j , GID, S GID ) and the transform(PP, {sk j,GID } j∈N , (A, ρ),CT ) algorithms and stores in the table T the entry (S GID , {sk j,GID } S AA , {tk j,GID } S AA ). Then, it returns to the adversary the set of the transformation keys {tk j,GID } S AA . Challenge -during the challenge phase, the adversary chooses two equal length plaintexts M 0 and M 1 and sends them to the challenger. The challenger C chooses a random bit b such that b ∈ {0, 1} and encrypts CT b under the access structure (A * , ρ * ). The generated ciphertext CT b is then returned to the adversary. Queries phase 2 -in this phase, the adversary A who has already received M b , can query a polynomially bounded number of queries as in Queries Phase 1, except that the adversary A can not query secret keys related to a set of attributes which satisfy the access policy Ψ * = (A * , ρ * ). Guess -the adversary tries to guess which message M b where b ∈ {0, 1} corresponds to the challenge ciphertext CT b . The advantage of the adversary to win the game is defined as: Adv A [Exp Con f (1 ξ )] = |Pr[b = b ] - 1 2 | Definition 3. Our PHOABE scheme is RCPA-Secure ABE (i.e; secure against replayable chosen plaintext attacks) against static corruption of the attribute authorities if the advantage Adv A [Exp Con f (1 ξ )] is negligible for all PPT adversaries. Verifiability Our PHOABE scheme is said to be verifiable if there is no probabilistic polynomial time (PPT) adversary that can win the Exp veri f security game defined below with non-negligible advantage. The Exp veri f security game is formally defined, between an adversary A and a challenger C as follows: Initialisation -in this phase, the adversary A chooses a challenge access structure Ψ * = (A * , ρ * ) and sends it to the challenger C . Setup -during this phase, the challenger C runs the setup algorithm to generate the public parameters. Then, the adversary A selects a set of corrupted attribute authorities S AA ⊂ S AA and runs the setup auth algorithm to obtain their public and private keys. Subsequently, C queries the honest attribute authorities' public and private keys by running the setup auth algorithm. Afterwards, the challenger C publishes the public keys of the honest attribute authorities. Queries phase 1 -in this phase, the challenger first initializes an empty table T and an empty set D. Then, for each session k, the adversary issues the following queries: • Private Key query: the adversary queries the secret keys {sk j,GID } S AA related to a set of attributes {S GID } k belonging to a set of non-corrupted attribute authorities S AA \ S AA . Then, the challenger sets D = D ∪ {S GID } k returns the corresponding secret keys to the adversary. 
Note that the set of attributes {S GID } k does not satisfy the access policy Ψ * = (A * , ρ * ) i.e; Ψ * ({S GID } k ) = 1. • Transformation Key query: the adversary queries the secret keys {sk j,GID } S AA related to a set of attributes {S GID } k belonging to a set of non-corrupted attribute authorities a i ∈ S AA \ S AA . Then, the challenger searches the entry (S GID , {sk j,GID } S AA , {tk j,GID } S AA ) in table T . If such entry exists, it returns the set of the transformation keys {tk j,GID } S AA . Otherwise, it generates h used to run the transform algorithm. Then, the challenger runs keygen(PP, sk AA j , pk AA j , GID, S GID ) and the transform(PP, {sk j,GID } j∈N , (A, ρ),CT ) algorithms and stores in the table T the entry (S GID , {sk j,GID } S AA , {tk j,GID } S AA ). Then, it returns to the adversary the set of the transformation keys {tk j,GID } S AA . Challenge -during the challenge phase, the adversary chooses a challenge message M * and sends it to the challenger. The challenger C encrypts M * and generates the verification key V * under the access structure (A * , ρ * ). Then, the generated ciphertext CT * is returned to the adversary. Queries phase 2 -in this phase, the adversary A can query a polynomially bounded number of queries as in Queries Phase 1, except that the adversary A can not query secret keys related to a set of attributes which satisfy the access policy Ψ * = (A * , ρ * ). Forge -the adversary generates an attribute set {S * GID } and a partially decrypted ciphertext M * by running the algorithm decrypt out (PP, {t pk * j,GID } j∈N , (A * , ρ * ),CT * ) . We suppose that the tuple (S * GID , {sk * j,GID } S AA , {tk * j,GID } S AA ) is included in the table T . Otherwise, the challenger generates the tuple as a response for transformation key query. The adversary A wins the game if decrypt(M * , {tk * j,GID } S AA ) / ∈ {M * , ⊥} and the challenger can verify the generated partially decrypted ciphertext using the verification key V * . Hence, the adversary's advantage is defined as follows: Adv A [Exp veri f (1 ξ )] = |Pr[Exp veri f (1 ξ )] = 1| Definition 4. Our PHOABE scheme is verifiable, if the ad- vantage Adv A [Exp Veri f (1 ξ )] is negligible for all PPT adversaries. Access Policy Privacy Preservation The notion of access policy privacy consists in hiding the access structure used to encrypt a message from both the server and the decrypting entity. Indeed, the server or any decrypting entity should not be able to gain any knowledge of the policy except that the user knows whether his attributes satisfy the access policy. Our PHOABE scheme ensures the access policy privacy preservation requirement if there is no probabilistic polynomial time (PPT) adversary that can win the Exp Priv security game defined below with non-negligible advantage. The Exp Priv security game is formally defined, between an adversary A and a challenger C as follows: Setup -in this phase, the challenger C first runs the setup algorithm to generate the public parameters. Then, the adversary A selects a set of corrupted attribute authorities S AA ⊂ S AA and runs the setup auth algorithm to obtain their public and private keys. Subsequently, C queries the honest attribute authorities' public and private keys by running the setup auth algorithm. Afterwards, the challenger C publishes the public keys of the honest attribute authorities. Queries phase 1 -in this phase, the challenger first initializes an empty table T and an empty set D. 
Then, for each session k, the adversary issues the following queries: • Private Key query: the adversary queries the secret keys related to a set of attributes {S_GID}_k belonging to a set of non-corrupted attribute authorities S_AA \ S_AA. Then, the challenger sets D = D ∪ {S_GID}_k and returns the corresponding secret keys to the adversary. Note that the set of attributes {S_GID}_k does not satisfy the access policy Ψ* = (A*, ρ*), i.e., Ψ*({S_GID}_k) ≠ 1. • Transformation Key query: the challenger searches for the entry (S_GID, {sk_{j,GID}}_{S_AA}, {tk_{j,GID}}_{S_AA}) in the table T. If such an entry exists, it returns the set of transformation keys {tk_{j,GID}}_{S_AA}. Otherwise, it generates the value h used to run the transform algorithm. Then, the challenger runs the keygen(PP, sk_{AA_j}, pk_{AA_j}, GID, S_GID) and transform(PP, {sk_{j,GID}}_{j∈N}, (A, ρ), CT) algorithms and stores in the table T the entry (S_GID, {sk_{j,GID}}_{S_AA}, {tk_{j,GID}}_{S_AA}). Then, it returns to the adversary the set of transformation keys {tk_{j,GID}}_{S_AA}. Challenge -during the challenge phase, the adversary A sends two challenge messages M*_1 and M*_2 and two valid access policies Ψ_1 and Ψ_2 to the challenger, under the following restriction: the adversary's attributes satisfy either none of the policies Ψ_1 and Ψ_2 or both of them throughout the game. Indeed, to ensure that the adversary gains no knowledge of the policy beyond whether his attributes satisfy it, the adversary must satisfy either both policies or none of them. Then, the challenger flips a fair coin b ∈ {1, 2} and encrypts the message M*_b under the access policy Ψ_b by running the encryption algorithm encrypt(PP, {pk_{AA_j}}, M*_b, Ψ_b). The resulting challenge ciphertext CT* is then returned to the adversary. Queries phase 2 -in this phase, the adversary A can issue a polynomially bounded number of queries as in Queries phase 1, under the same restriction that his attributes satisfy either none of the policies Ψ_1 and Ψ_2 or both of them, and A may not ask the challenger C to decrypt the challenge ciphertext CT*. Guess -the adversary tries to guess which message M*_b, where b ∈ {1, 2}, corresponds to the challenge ciphertext CT*. The advantage of the adversary in winning the game is defined as: Adv_A[Exp_Priv(1^ξ)] = |Pr[b = b'] − 1/2|. Definition 5. Our PHOABE scheme is policy privacy preserving if the advantage Adv_A[Exp_Priv(1^ξ)] is negligible for all PPT adversaries. SECURELY OUTSOURCING POLICY HIDDEN ATTRIBUTE BASED ENCRYPTION Overview In this paper, we develop an outsourcing policy hidden multi-authority attribute based encryption scheme as a novel security mechanism for encrypted access control to outsourced data in cloud storage environments. Our proposal is based on the multi-authority attribute based encryption scheme proposed by Lewko et al. [START_REF] Lewko | Decentralizing attribute-based encryption[END_REF] in 2011, which has been extended to provide security and functional features such as low computation cost, privacy preservation, fine grained access control and data confidentiality. In particular, our scheme introduces a novel outsourcing attribute based encryption mechanism which reduces the decryption cost by securely delegating the most expensive computations of the CP-ABE decryption phase to a semi-trusted party, while ensuring privacy preservation of the access policy. Figure 2 illustrates the general overview of the PHOABE algorithms. The different notations used in this paper are listed in Table 2.
S_{j,GID}: A set of attributes belonging to a user GID, received from AA_j
sk_{j,GID}: the secret key related to the set of attributes S_{j,GID}, received from AA_j
{tk_{j,GID}}_{j∈N}: Transformation keys
{tpk_{j,GID}}_{j∈N}: Transformation public key
tsk_GID: Transformation secret key
M: Data file
CT: The encrypted data file
M': Partially decrypted data file
Complexity Assumptions In our outsourcing policy hidden multi-authority attribute based encryption scheme, we rely on the following complexity assumptions: Definition 6. Computational Diffie-Hellman problem (CDH) Given a generator g of a multiplicative cyclic group G of order N and two group elements g^a ∈ G and g^b ∈ G, where a, b ∈ Z_N are two secrets, the problem of computing g^{ab} from g^a and g^b is called the Computational Diffie-Hellman problem. Definition 7. Decisional Bilinear Diffie-Hellman Assumption (DBDH) Given a generator g of a multiplicative cyclic group G of order N and three group elements g^a, g^b, g^c ∈ G, where a, b, c ∈ Z_N are three secrets, the problem of distinguishing between tuples of the form (g^a, g^b, g^c, ê(g, g)^{abc}) and (g^a, g^b, g^c, ê(g, g)^z), for some random integer z, is called the Decisional Bilinear Diffie-Hellman Assumption (DBDH). Concrete Construction Our PHOABE construction is based on seven algorithms defined as follows: • setup -the CTA defines two multiplicative groups G_1 and G_T of prime order P, a generator g of G_1, a symmetric bilinear map ê : G_1 × G_1 → G_T, a hash function H mapping strings into G_1, and three further hash functions H_0, H_1 and H_2 used below to derive R_0, the symmetric key K_sym and the verification key V, respectively. It outputs the global public parameters PP = {G_1, G_T, P, H, H_0, H_1, H_2, ê, g}. • setup_auth -recall that each attribute authority AA_j, where j ∈ {1, • • • , N} and N is the number of attribute authorities, manages a set of attributes S_{AA_j}. Each attribute authority chooses two random numbers α_i, t_i ∈ Z*_N for each attribute i ∈ S_{AA_j} and a number y_j ∈ Z*_N. Then, it generates the pair of private and public keys (sk_{AA_j}, pk_{AA_j}) defined as follows:
sk_{AA_j} = ({α_i, t_i}_{i∈S_{AA_j}}, y_j)
pk_{AA_j} = ({ê(g, g)^{α_i}, g^{t_i}}_{i∈S_{AA_j}}, g^{y_j})
• encrypt -this randomized algorithm is based on the following three steps: (i) First, the data owner O selects a random value a ∈ Z*_N and computes q_i = ê((g^{y_j})^a, H(x_i)), where x_i (for i ∈ [1, F]) denotes an attribute of the access policy Ψ, F is the number of attributes in Ψ, and g^{y_j} is the public key component of the authority managing x_i. In order to ensure the access policy privacy preservation feature, the data owner replaces each attribute x_i specified in the access structure Ψ by the computed value q_i. Then, the access policy Ψ is converted into an LSSS access matrix (A_{n×l}, ρ). (ii) Afterwards, the data owner O picks a random value s ∈ Z_N. In addition, O selects p_i ∈ Z*_N for each row A_i of A, and chooses a random message R ∈ G_T. Then, the encrypt algorithm computes λ_i and w_i such that λ_i = A_i · v, where v = [s, v_2, • • • , v_l] ∈ Z_N^l is a random vector, and w_i = A_i · τ, where τ = [0, τ_2, • • • , τ_l] ∈ Z_N^l is a random vector. The encrypt algorithm outputs the ciphertext as a tuple CT_ABE = (h, C_0, C_{1,i}, C_{2,i}, C_{3,i})_{i∈[1,n]}, where i denotes the matrix row corresponding to an attribute, defined as follows:
h = g^a
C_0 = R · ê(g, g)^s
C_{1,i} = g^{λ_{ρ(i)}} · g^{α_{ρ(i)} p_i}
C_{2,i} = g^{p_i}
C_{3,i} = g^{t_{ρ(i)} p_i} · g^{w_i}
(iii) Finally, the algorithm sets R_0 = H_0(R) and computes a symmetric key K_sym = H_1(R). Subsequently, it computes the encryption of the message M using a symmetric encryption algorithm Encrypt_sym, such that CT_sym = Encrypt_sym(K_sym, M), and the verification key V = H_2(R_0||CT_sym). The encrypt algorithm outputs the ciphertext CT = {CT_ABE, CT_sym} as well as the verification key V.
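As a toy illustration of the share vectors λ_i and w_i from step (ii) and of their later recombination, the sketch below shares a secret s (and, in parallel, the value 0) over Z_p for the policy (A AND B) OR C. The share-generating matrix, the prime and the attribute names are illustrative assumptions and are not part of the PHOABE construction itself.

# Toy LSSS sharing and reconstruction over Z_p for the policy (A AND B) OR C.
# The share-generating matrix, the prime p and the attribute names are
# illustrative assumptions; a real scheme derives them from the access policy.
import secrets

p = 2**61 - 1                                      # stands in for the group order N
rows = {"A": (1, 1), "B": (0, -1), "C": (1, 0)}    # one matrix row per attribute

def share(first_entry):
    """Return shares A_i . v for v = (first_entry, r) with a fresh random r."""
    v = (first_entry, secrets.randbelow(p))
    return {att: (row[0] * v[0] + row[1] * v[1]) % p for att, row in rows.items()}

s   = secrets.randbelow(p)
lam = share(s)          # lambda_i = A_i . v   with v   = (s, r)
w   = share(0)          # w_i      = A_i . tau with tau = (0, r')

# A user holding attributes {A, B} uses constants c_A = c_B = 1, because
# row_A + row_B = (1, 0); the shares then recombine to s and to 0.
c = {"A": 1, "B": 1}
assert sum(c[x] * lam[x] for x in c) % p == s      # sum_i c_i * lambda_i = s
assert sum(c[x] * w[x] for x in c) % p == 0        # sum_i c_i * w_i      = 0
print("secret reconstructed:", sum(c[x] * lam[x] for x in c) % p == s)

A user holding only C would instead use c_C = 1, while a user holding only A or only B finds no such constants, which mirrors the access-policy check of Section 3.3.1.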
• keygen -for any user U having a set of attributes S_{j,GID} = {a_{1_j}, • • • , a_{n_j}} related to an attribute authority AA_j, the latter computes the secret key sk_{j,GID} as follows:
sk_{j,GID} = ({K_{1,i}, K_{2,i}}_{i∈S_{j,GID}}) = {g^{α_i} H(GID)^{t_i}, H(i)^{y_j}}_{i∈S_{j,GID}}
• transform -this randomized algorithm, executed by the decrypting entity, relies on the two following steps: (i) First, in order to reconstruct the access policy, the user computes the following equation:
q_i = ê(h, H(i)^{y_j}) = ê(g^a, H(i)^{y_j}), ∀i ∈ S_{j,GID}
Then, using q_i to replace the attribute i, an attribute set S_GID is constructed. The user can identify the set of attributes L = {i ∈ [n] : ρ(i) ∈ S_GID} required for the decryption. (ii) Second, the user chooses a random value z ∈ Z*_N and sets the transformation keys {tk_{j,GID}}_{j∈N} = ({tpk_{j,GID}}_{j∈N}, tsk_GID), where {tpk_{j,GID}}_{j∈N} and tsk_GID are computed as follows:
{tpk_{j,GID}}_{j∈N} = ({K_{1,i}^{1/z}}_{i∈L}, g^{1/z}, H(GID)^{1/z})
tsk_GID = z
Finally, the user outsources the ciphertext CT and the set of public transformation keys {tpk_{j,GID}}_{j∈N} to the STCS. • decrypt_out -to partially decrypt the ciphertext CT, the STCS proceeds as follows. First, for each matrix row corresponding to an attribute i, the STCS computes:
[ê(g^{1/z}, C_{1,i}) · ê(H(GID)^{1/z}, C_{3,i})] / ê(g^{α_i/z} H(GID)^{t_i/z}, C_{2,i}) = (ê(g, g)^{λ_i} · ê(H(GID), g)^{w_i})^{1/z}
Afterwards, the STCS chooses a set of constants {c_i}_{i∈[1,n]} ∈ Z_N such that ∑_i c_i A_i = [1, 0, • • • , 0]. Then, it raises each of the above values to the power c_i and computes their product:
∏_{i=1}^{n} (ê(g, g)^{λ_i} · ê(H(GID), g)^{w_i})^{c_i/z} = (ê(g, g)^{∑_{i=1}^{n} λ_i c_i} · ê(H(GID), g)^{∑_{i=1}^{n} w_i c_i})^{1/z}
We note that λ_i = A_i · v and w_i = A_i · τ, so that ∑_{i=1}^{n} λ_i c_i = s and ∑_{i=1}^{n} w_i c_i = 0. In the sequel, the STCS gets the following result:
M' = ê(g, g)^{s/z}    (1)
Finally, the STCS returns M' to the user. • decrypt -the decrypt algorithm includes the following two steps: (i) First, based on the partially decrypted ciphertext M', the user executes Equation 2, performing only one exponentiation and no pairing computations, to recover R:
R = C_0 / (M')^{tsk} = C_0 / (ê(g, g)^{s/z})^{z} = C_0 / ê(g, g)^{s}    (2)
(ii) Then, the user computes R_0 = H_0(R). If H_2(R_0||CT_sym) ≠ V, the algorithm returns ⊥ and halts immediately. Otherwise, it computes K_sym = H_1(R) and returns the message M = Decrypt_sym(K_sym, CT_sym). The proof of correctness of the decryption algorithm is detailed in Section 6.1. SECURITY ANALYSIS In this section, we first prove the correctness of our PHOABE construction, with respect to the data decryption algorithms, in Section 6.1. Then, we prove the security of our proposal, with respect to the indistinguishability, verifiability and privacy preserving properties, in Sections 6.2, 6.3 and 6.4, respectively. Correctness The correctness of our PHOABE construction is detailed by the proof of Lemma 1. Lemma 1. Data Decryption Correctness. Proof. (i) After receiving the set of the user's transformation keys related to the involved attributes {tpk_{j,GID}}_{j∈N}, the STCS first computes, for each matrix row i:
[ê(g^{1/z}, g^{λ_{ρ(i)}} g^{α_{ρ(i)} p_i}) · ê(H(GID)^{1/z}, g^{t_{ρ(i)} p_i} g^{w_i})] / ê(g^{α_{ρ(i)}/z} H(GID)^{t_{ρ(i)}/z}, g^{p_i})
= [ê(g, g)^{λ_{ρ(i)}/z} · ê(g, g)^{α_{ρ(i)} p_i/z} · ê(H(GID), g)^{t_{ρ(i)} p_i/z} · ê(H(GID), g)^{w_i/z}] / [ê(g, g)^{α_{ρ(i)} p_i/z} · ê(H(GID), g)^{t_{ρ(i)} p_i/z}]
= ê(g, g)^{λ_{ρ(i)}/z} · ê(H(GID), g)^{w_i/z}
Then, the STCS calculates the constants c_i ∈ Z_N such that ∑_i c_i · A_i = [1, 0, • • • , 0]. Note that λ_{ρ(i)} = A_{ρ(i)} · v, where v = [s, v_2, • • • , v_l], and w_i = A_i · τ, where τ = [0, τ_2, • • • , τ_l]. Hence, we deduce that ∑_{i=1}^{n} λ_{ρ(i)} c_i = s and ∑_i w_i c_i = 0. Consequently, the partially decrypted ciphertext M' = ê(g, g)^{s/z} is derived as follows:
∏_{i=1}^{n} (ê(g, g)^{λ_{ρ(i)}/z} · ê(H(GID), g)^{w_i/z})^{c_i}
= ê(g, g)^{∑_{i=1}^{n} c_i λ_{ρ(i)}/z} · ê(H(GID), g)^{∑_{i=1}^{n} c_i w_i/z}
= ê(g, g)^{s/z} · ê(H(GID), g)^{0}
= ê(g, g)^{s/z}
(ii) In the sequel, the user uses the partially decrypted ciphertext M' and the secret transformation key tsk to compute R as follows:
R = C_0 / (M')^{tsk} = C_0 / (ê(g, g)^{s/z})^{z} = C_0 / ê(g, g)^{s}    (3)
(iii) Then, in order to verify the correctness of the outsourced decryption, the user computes R_0 = H_0(R). If H_2(R_0||CT_sym) ≠ V, the algorithm returns ⊥ and halts immediately. If the decryption is verified, the user computes the symmetric key K_sym = H_1(R) and retrieves the message as M = Decrypt_sym(K_sym, CT_sym). Indistinguishability In the following proof, we prove that our scheme is RCPA-secure against static corruption of the attribute authorities with respect to Theorem 1. Theorem 1. If the Lewko et al. decentralized CP-ABE scheme [29] is CPA-secure, then our PHOABE scheme is selectively RCPA-secure, such that Adv_A[Exp_conf] ≤ Adv_A[Exp_Lewko], according to Definition 3. Proof. We define a PPT adversary A running the Exp_conf security game defined in Section 4.2.1 with an entity B. This entity B is also running the Lewko et al.'s CPA-security game (Lewko-Game) with a challenger C. The objective of the proof is to show that the advantage of the adversary A in winning the Exp_conf game is smaller than the advantage of the entity B in winning the Lewko-Game. Hereafter, the interactions of A, B and C are described, with A running the following steps and algorithms, as specified in the Exp_conf game: Initialisation -in this phase, the adversary A gives the algorithm B a challenge access structure Ψ* = (A*, ρ*). Setup -B runs the setup algorithm to generate the public parameters. For instance, it sets two multiplicative groups G_1 and G_T of prime order P and a bilinear map ê : G_1 × G_1 → G_T. The challenger C chooses two random numbers α_i, t_i ∈ Z*_N for each attribute i ∈ S_{AA_j} and a number y_j ∈ Z*_N. Then, it generates the attribute authorities' public keys {pk_{AA_j}} defined as pk_{AA_j} = ({ê(g, g)^{α_i}, g^{t_i}}_{i∈S_{AA_j}}, g^{y_j}). Finally, C sends to B the attribute authorities' public keys {pk_{AA_j}}, which are then forwarded by B to A. Queries phase 1 -B first initializes an empty table T and an empty set D. Then, for each session k, the adversary issues the following queries: Private Key query: the adversary issues a key query by submitting a set of attributes S_GID and his identity GID. Then, the algorithm B uses the challenger C to generate and return the key components {K_{1,i}}_{i∈S_{j,GID}} = {g^{α_i} H(GID)^{t_i}}_{i∈S_{j,GID}}. Afterwards, the challenger C chooses y_j ∈ Z*_N and sets {sk_{j,GID}}_{j∈N} = ({K_{1,i}, K_{2,i}}_{i∈S_{j,GID}}) = {g^{α_i} H(GID)^{t_i}, H(i)^{y_j}}_{i∈S_{j,GID}}.
Then, B sets D = D ∪ {S_{GID}}_k and returns the secret keys {sk_{j,GID}}_{j ∈ N} to the adversary A.
Transformation Key query: B searches for the entry (S_{GID}, sk_{S_{GID},j}, tk_{GID,j}) in the table T. If such an entry exists, it returns the transformation key tk_{GID,j} to the adversary A. Otherwise, it generates a random value a and sets the value h = g^a to simulate the output of the encrypt algorithm used to run the transform algorithm. h is then sent to the challenger C. The challenger chooses a random exponent z ∈ Z*_N and sets the transformation key as {tpk_{j,GID}}_{j ∈ N} = ({K_{1,i}^{1/z}}_{i ∈ L}, g^{1/z}, H(GID)^{1/z}). Then, B stores the entry (S_{GID}, sk_{GID,j}, tk_{GID,j}) in the table T and returns to the adversary the transformation key {tk_{j,GID}}_{j ∈ N} = ({tpk_{j,GID}}_{j ∈ N}, tsk_{GID}).
As a consequence, Pr[Exp^A_Lewko(1^ξ)] ≥ Pr[Exp^A_conf-real(1^ξ)], the advantage of the adversary A is negligible, and our PHOABE scheme satisfies the confidentiality property.
Verifiability
In this section, we prove that our scheme is verifiable against malicious servers, with respect to Theorem 2.
Theorem 2. If H_2 and H_0 are two collision-resistant hash functions, then our PHOABE scheme is verifiable against malicious servers.
Proof. We define a PPT algorithm B which aims to break the verifiability property of the outsourcing ABE system with the help of the adversary A. B simulates the adversary's view with respect to the Exp_verif security game defined in Section 4.2.1. To do so, B tries to break the collision resistance of the H_2 or H_0 hash functions. Given two challenge hash functions (H*_2, H*_0), B simulates the security game introduced in Definition 4 as follows:
First, B runs the setup algorithms to generate the public parameters, except the hash functions H_2 and H_0. In addition, B generates the attribute authorities' public and secret keys by running the setup_auth algorithm. Afterwards, B answers the adversary's queries in Queries phase 1 and Queries phase 2, providing the secret keys and the transformation keys related to a set of attributes. In the challenge phase, the adversary A sends a challenge message M* to the algorithm B. Then, the latter picks a random message R* ∈ G_T and encrypts R* under the challenge access structure Ψ* = (A*, ρ*) using the Lewko and Waters scheme. Then, B sets R*_0 = H*_0(R*) and computes a symmetric key K*_sym = H*_1(R*). Subsequently, it computes the encryption of the message M* using a symmetric encryption algorithm Encrypt_sym such that CT*_sym = Encrypt_sym(K*_sym, M*). In addition, B sets the verification key V* = H*_2(R*_0 || CT*_sym). Afterwards, the algorithm B sends CT* = (CT*_ABE, CT*_sym) = (h, C_0, C_{1,i}, C_{2,i}, C_{3,i}, CT*_sym) to the adversary as a challenge ciphertext, as well as the verification key V*.
If A breaks the verifiability game, B recovers a message M ∉ {M*, ⊥} relying on the partial decryption algorithm decrypt_out(PP, {tpk_{j,GID}}_{j ∈ N}, (A*, ρ*), CT*). Notice that the decryption algorithm outputs ⊥ if H*_2(R*_0 || CT*_sym) ≠ V*, where R*_0 = H*_0(R*). As a consequence, the following two cases are considered:
• Case 1: Since B knows (R*_0, CT*_sym), if a pair (R_0, CT_sym) ≠ (R*_0, CT*_sym) is returned as a result while the verification equation still holds, then B obtains a collision of the hash function H*_2.
• Case 2: If we get (R_0, CT_sym) = (R*_0, CT*_sym) while R* and R are not equal (R* ≠ R), then B breaks the collision resistance of H*_0, as H*_0(R) = R_0 = R*_0 = H*_0(R*).
Consequently, reasoning by contradiction, since the hash functions H_2 and H_0 are collision resistant, our PHOABE scheme is verifiable.
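To make the user-side workflow concrete, a minimal Python sketch of the final decrypt and verification step is given below. It is only an illustration of the logic above, not an implementation of PHOABE: the target group G_T is stood in for by integers modulo a prime, the hash functions H_0, H_1, H_2 are instantiated with SHA-256, and the symmetric cipher is a keystream placeholder. All names, parameters and encodings in the sketch are assumptions made for readability.

```python
import hashlib

# Toy stand-in for the target group G_T: integers modulo a public prime.
# This modulus is NOT a parameter of PHOABE; it only makes the sketch runnable.
P = 2**127 - 1

def H0(x: bytes) -> bytes:   # stand-in for H_0 : G_T -> {0,1}^n
    return hashlib.sha256(b"H0" + x).digest()

def H1(x: bytes) -> bytes:   # stand-in for H_1 : G_T -> symmetric key space
    return hashlib.sha256(b"H1" + x).digest()

def H2(x: bytes) -> bytes:   # stand-in for H_2, used for the verification tag V
    return hashlib.sha256(b"H2" + x).digest()

def sym_decrypt(key: bytes, ct: bytes) -> bytes:
    # Keystream placeholder for Decrypt_sym; any symmetric cipher could be used.
    stream = (hashlib.sha256(key).digest() * (len(ct) // 32 + 1))[: len(ct)]
    return bytes(c ^ s for c, s in zip(ct, stream))

def user_decrypt(C0: int, M_partial: int, z: int, CT_sym: bytes, V: bytes):
    """Final user-side step: one exponentiation, no pairings.

    M_partial = e(g,g)^(s/z) is the value returned by the STCS,
    C0 = R * e(g,g)^s is the blinded group element of the ciphertext,
    z plays the role of the secret transformation key tsk_GID.
    """
    # R = C0 / (M_partial)^z, i.e. multiplication by the modular inverse.
    R = (C0 * pow(pow(M_partial, z, P), -1, P)) % P
    R_bytes = R.to_bytes(16, "big")
    if H2(H0(R_bytes) + CT_sym) != V:   # verifiability check against the tag V
        return None                     # corresponds to returning ⊥
    return sym_decrypt(H1(R_bytes), CT_sym)
```

The point the sketch makes explicit is the cost profile discussed above: once the STCS has absorbed the pairing computations, the user is left with a single exponentiation and a few hash evaluations, plus one symmetric decryption.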
Policy Privacy Preservation In this section, we prove that our scheme is privacy-preserving against both malicious users and servers with respect to Theorem 3. Theorem 3. Our PHOABE scheme is policy privacy preserving according to Definition 5. Sketch of proof. In PHOABE scheme, the encrypting entity encrypts data under an access policy Ψ. Afterwards, he generates a new value equal to ê((g y j ) a , H (x i )) based on the one-way anonymous key agreement protocol [START_REF] Kate | Pairing-based onion routing[END_REF] where a is a random number. This generated value is used to obfuscate the attribute x i . For instance, in order to ensure the access policy privacy preservation feature, the encrypting entity replaces each attribute x i specified in the access policy Ψ, by the generated value ê((g y j ) a , H (x i )). In order to recover the access policy, the decrypting entity uses his corresponding private key {K 2,i } i∈S j,GID = {H (i) y j )} i∈S j,GID to compute the value q i = ê(h, H (i) y i ) = ê(g a , H (i) y i )∀i ∈ S GID . Hence, only authorized users having the right secret keys can recover the access policy. For instance, thanks to the random value a, unauthorized users cannot guess x i from the obfuscated value ê((g y j ) a , H (x i )) Consequently, the policy privacy preservation requirement is ensured thanks to the security of the one-way anonymous key agreement protocol [START_REF] Kate | Pairing-based onion routing[END_REF]. In addition, users cannot reveal the embedded attribute in the access policy when they collude, because they cannot infer the attribute x i from ê(g a , H (i) y i ). PERFORMANCE ANALYSIS In this section, we present the computation and storage complexities of our PHOABE scheme. For this purpose, we are interested by the computations performed at the data owner side in order to execute the encrypt algorithm. In addition, we consider the computation cost related to the execution of decrypt and decrypt out algorithms performed by the user (U) and the Semi Trusted Cloud Server (STCS), respectively. Moreover, we introduce the size of the user's secret keys as well as the ciphertext's size. For this purpose, we denote by: • E 1 : exponentiation in G 1 • E T : exponentiation in G T • τ P : computation of a pairing function ê • n is number of attributes used in the access policy • |S| is the number of attributes in the set of attributes related to a user Table 3 details the performance comparison with the most closely related ABE schemes. Computation Complexities In [START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF], Nishide et al. proposed a hidden policy ABE scheme requiring one exponentiation in G T and (2n + 1) exponentiations in G 1 in the encryption process. For the decryption phase, the user needs to compute (2n + 1) pairing functions. In 2011, Green et al. [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF] proposed the first ABE scheme with outsourced decryption. This proposal consists in using a semi trusted server to compute (2n + 1) pairing functions and 2n exponentiation in G T in order to generate a partially decrypted ciphertext. Afterwards, the user needs only one exponentiation in G T to retrieve the original message. In the encryption process, the data owner needs (4 + 3n) exponentiations in G 1 and one exponentiation in G T . Lai et al. 
[START_REF] Lai | Attribute-based encryption with verifiable outsourced decryption[END_REF] introduced an outsourced ABE scheme consisting in verifying the correctness of the ciphertext. In this scheme, the encryption algorithm requires 2 exponentiations in G T and the decryption algorithm performs 6n exponentiations in G 1 . Beyond the encryption and decryption costs, this scheme consists in computing 2 hash functions, in both the encryption and decryption processes, in order to use them in the verification process. Similarly, Qin et al. [START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF] and Lin et al. [START_REF] Lin | Revisiting attribute-based encryption with verifiable outsourced decryption[END_REF] proposed two verifiable outsourced ABE schemes. In the [START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF] proposal, the encryption and decryption algorithms overheads are equal to τ p + E 1 (4 + 3n) and 2nE T + τ p (2n + 1), respectively. To verify the ciphertext's correctness, 3 hash functions have to be performed at both the data owner and user sides. In the [START_REF] Lin | Revisiting attribute-based encryption with verifiable outsourced decryption[END_REF] construction, the data owner needs to compute (3 + 2n) exponentiations in G 1 , one exponentiation in G T and 2 hash functions. Moreover, the user computes one exponentiation in G T and 2 hash functions to decrypt and verify the ciphertext. In addition, the semi trusted server computes 2n exponentiations in G T and (2n + 1) pairing functions. In the outsourced attribute based encryption scheme proposed by Li et al. [START_REF] Li | Securely outsourcing attribute-based encryption with checkability[END_REF], the user needs to compute only one exponentiation in G T while outsourcing (2n + 2) pairing functions and (2n + 2) exponentiations in G T . The encryption algorithm performs (3 + 2n) exponentiations in G 1 . In 2015, Zhou et al. [START_REF] Zhou | Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption[END_REF] introduced a policy hiding broadcast ABE scheme where the encryption and decryption overheads are independent from the number of attributes involved in the access structure. Afterwards, Xu et al. [START_REF] Xu | A cp-abe scheme with hidden policy and its application in cloud computing[END_REF] proposed a policy privacy preserving ABE scheme. The processing costs are equal to E 1 (n + 2) and τ p (n + 1) + nE T for the encryption and decryption phases, respectively. In 2016, Zhong et al. [START_REF] Zhong | Multi-authority attribute-based encryption access control scheme with policy hidden for cloud storage[END_REF] proposed the first multi-attribute authority ABE scheme with hidden access policy. This proposal consists in performing 2 pairing functions, 3n exponentiations in G 1 and (1 + 2n) exponentiations in G T in the encryption phase. Moreover, the decryption algorithm computes (1+2n) pairing functions and n exponentiations in G T . Zho et al. [START_REF] Zuo | Cca-secure abe with outsourced decryption for fog computing[END_REF] proposed an outsourced ABE scheme for fog computing applications. This scheme consists in computing (N + 1) exponentiations in G 1 , 4 exponentiations in G T , where N is the cardinal of the attribute universe, and 2 hash functions to decrypt the message. 
The semi-trusted server partially decrypts the message by performing (4 + 2N + n) pairing functions, N exponentiations in G_T and 2 hash functions. Finally, the user has to perform only 4 exponentiations in G_T and 2 hash functions to decrypt data. Recently, Li et al. [START_REF] Li | Verifiable outsourced decryption of attribute-based encryption with constant ciphertext length[END_REF] proposed a verifiable outsourced ABE scheme with constant ciphertext size. In this proposal, to encrypt data, the data owner performs 6 exponentiations in G_1, 2 exponentiations in G_T and one hash function. In addition, the semi-trusted server computes 4 pairing functions, and the user performs one exponentiation in G_T and 2 hash functions.
In our PHOABE scheme, the data owner performs 5 exponentiations in G_1, one exponentiation in G_T and 3 hash functions. In addition, our proposal uses a semi-trusted cloud server to carry out the computationally expensive operations, namely 3n pairing functions and one exponentiation in G_T. As a consequence, the user only computes one exponentiation in G_T and 3 hash functions to execute the decryption and verification process. Overall, the processing costs of our PHOABE scheme remain competitive compared to the other encryption schemes. Indeed, our construction outsources the computation of the expensive operations (Table 5).
CONCLUSION
The number of devices connecting to the Internet of Things (IoT) is growing exponentially, as does the amount of data produced. As such, cloud-assisted IoT services are emerging as a promising solution to deal with the computation and storage of the huge amounts of produced data and to delegate the expensive operations to cloud servers. The widespread use of these services requires customized security and privacy levels to be guaranteed. In this paper, we present PHOABE, a novel privacy-preserving outsourced multi-authority attribute based encryption scheme that overcomes the computational cost of decryption, which scales with the complexity of the access policy and the number of attributes. The proposed technique can be considered as an optimization of the decryption algorithm to mitigate the practical issues in implementing CP-ABE on resource-constrained devices. PHOABE is a multi-attribute-authority mechanism which delegates the expensive ABE decryption computations to a Semi-Trusted Cloud Server and protects users' privacy using a hidden access policy. Finally, PHOABE is proven to be selectively secure, verifiable and policy privacy preserving under the random oracle model. As future work, we plan to further improve PHOABE by exploring a direct revocation solution to achieve more security features; for instance, a compromised user should be revoked without affecting other users.
Fig. 1. The PHOABE main architecture entities and their interaction.
Definition 5. Our PHOABE scheme ensures access policy privacy preservation against an Adaptive Chosen Plaintext Attack if the advantage Adv_A[Exp_Priv(1^ξ)] = |Pr[b = b'] - 1/2| is negligible for all PPT adversaries.
Fig. 2. General overview of PHOABE.
Fig. 3. Exponentiation execution time [START_REF] Ometov | Feasibility characterization of cryptographic primitives for constrained (wearable) iot devices[END_REF].
Table 1. Features and functionality comparison of attribute based encryption schemes (columns: Scheme; Type; Access Policy; Hidden Policy; Outsourced Decryption; Verifiability; Multi-authority; Security Models):
[26]  CP-ABE  LSSS  Partially hidden  Single authority  Selective CPA
[56]  CP-ABE  LSSS  Partially hidden  Single authority  Selective CPA
[24]  CP-ABE  LSSS  Single authority  RCCA
[50]  CP-ABE  LSSS  Single authority  Selective CPA
[51]  CP-ABE  LSSS  Single authority  RCCA
[57]  CP-ABE  LSSS  Fully hidden  Single authority  Selective CPA
[28]  CP-ABE  LSSS  Fully hidden  Single authority  Selective CPA
[45]  CP-ABE  LSSS  Single authority  RCCA
[54]  CP-ABE  LSSS  Single authority  Selective CPA
[58]  CP-ABE  LSSS  Fully hidden  Multi-authority  Selective CPA
[27]  CP-ABE  AND gates  Fully hidden  Single authority  Selective CPA
[25]  CP-ABE  LSSS  Single authority  Selective CCA
[55]  CP-ABE  AND gates  Single authority  RCCA
PHOABE  CP-ABE  LSSS  Fully hidden  Multi-authority  Selective RCPA
Table 2. The different notations used in this paper.
CSP: Cloud Service Provider; CTA: Central Trusted Authority; STCS: Semi Trusted Cloud Server; AA: Attribute Authority; O: Data Owner; U: User; PP: Public Parameters; sk_AA_j: Secret key related to AA_j; pk_AA_j: Public key related to AA_j; A: LSSS access matrix; Ψ: Access policy; D_F: Data file; GID: User Global Identifier; S_j,GID: Set of attributes of user GID related to AA_j.
Table 4. Selected devices [START_REF] Ometov | Feasibility characterization of cryptographic primitives for constrained (wearable) iot devices[END_REF].
Sony SmartWatch 3 SWR50 - Smart Watch - 520 MHz Single-core Cortex-A7
Samsung I9500 Galaxy S4 - Smartphone - 1.6 GHz Dual-Core Cortex-A15
Jiayu S3 Advanced - Smartphone - 1.7 GHz Octa-Core 64-bit Cortex-A53
Intel Edison - IoT Development Board - 500 MHz Dual-Core Intel Atom CPU, 100 MHz MCU
Raspberry Pi 1 Model B - IoT Development Board - 700 MHz Single-Core ARM Cortex-A6
Raspberry Pi 2 Model B - IoT Development Board - 900 MHz Quad-Core ARM Cortex-A7
Table 5. Number of cryptographic operations computed by the STCS and the user in PHOABE.
STCS: 1 exponentiation, 2 multiplications, 3n pairings. User: 1 exponentiation, 1 multiplication, 0 pairings.
ACKNOWLEDGEMENTS
This work is part of the MOBIDOC project carried out under the PASRI program, funded by the European Union and administered by the ANPR.
Storage Complexities The policy hidden attribute based encryption schemes [START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF][START_REF] Xu | A cp-abe scheme with hidden policy and its application in cloud computing[END_REF] have the same size of user's secret keys which is equal to 1 + |S|, where S represents the set of attributes of the user. Differently, the proposals introduced in [START_REF] Zhou | Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption[END_REF][START_REF] Zhong | Multi-authority attribute-based encryption access control scheme with policy hidden for cloud storage[END_REF] have almost the same size of user's secret keys which is equal to |2S|. The attribute based encryption schemes with outsourced decryption, presented in [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF][START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF][START_REF] Lai | Attribute-based encryption with verifiable outsourced decryption[END_REF][START_REF] Lin | Revisiting attribute-based encryption with verifiable outsourced decryption[END_REF] consist in using user's secrets keys and transformation keys. Consequently, the total size of user's keys is equal to double the size of his set of attributes. The ABE schemes [START_REF] Viet | Hidden ciphertext policy attribute-based encryption under standard assumptions[END_REF][START_REF] Li | Verifiable outsourced decryption of attribute-based encryption with constant ciphertext length[END_REF] have constant size of user's secret keys. However, since Li et al. [START_REF] Li | Verifiable outsourced decryption of attribute-based encryption with constant ciphertext length[END_REF] proposal is an ABE scheme with outsourced decryption, the user's secret keys involve the transformation keys whose size is equal to 8. The size of the user's keys in the Zuo's et al. proposal [START_REF] Zuo | Cca-secure abe with outsourced decryption for fog computing[END_REF] are equal to N + |S| + 1 and N + 2 for the user's secret keys and the transformation keys, respectively. Note that N is the cardinal of the attribute universe. In both Li et al. [START_REF] Li | Securely outsourcing attribute-based encryption with checkability[END_REF] and PHOABE proposals, the size of the user's secret keys is equal to 2|S| while the size of the transformation keys is equal to 2|S| + 3. The ABE schemes [START_REF] Viet | Hidden ciphertext policy attribute-based encryption under standard assumptions[END_REF][START_REF] Li | Verifiable outsourced decryption of attribute-based encryption with constant ciphertext length[END_REF][START_REF] Zhou | Efficient privacy-preserving ciphertext-policy attribute based-encryption and broadcast encryption[END_REF] present a ciphertext with a constant size which does not depend on the number of attributes used in the access policy. In the existing ABE schemes such as [START_REF] Green | Outsourcing the decryption of abe ciphertexts[END_REF][START_REF] Nishide | Attribute-based encryption with partially hidden encryptor-specified access structures[END_REF][START_REF] Qin | Attribute-based encryption with efficient verifiable outsourced decryption[END_REF], the authors introduce ABE schemes where the size of the generated ciphertext is approximately equal to 2n. Recall that n is the number of attributes involved in the access policy. 
In [START_REF] Xu | A cp-abe scheme with hidden policy and its application in cloud computing[END_REF] [START_REF] Zhong | Multi-authority attribute-based encryption access control scheme with policy hidden for cloud storage[END_REF] introduces a ciphertext's size equivalent to 3 times the cardinal of the access policy. Similarly, our PHOABE con-struction produces a ciphertext which is 3 times the cardinal of the access structure. Taking into consideration that PHOABE is a multiauthority scheme, we state that our construction presents interesting performances especially related to the key size and the size of the ciphertext. Resource-Constrained Performance Analysis Several research works have been proposed to evaluate the computation overhead of attribute based encryption schemes on resource-constrained devices [START_REF] Zhou | Efficient and secure data storage operations for mobile cloud computing[END_REF][START_REF] Moreno Ambrosin | On the feasibility of attribute-based encryption on smartphone devices[END_REF][START_REF] Moreno Ambrosin | On the feasibility of attribute-based encryption on internet of things devices[END_REF][START_REF] Ometov | Feasibility characterization of cryptographic primitives for constrained (wearable) iot devices[END_REF][START_REF] Wang | Performance evaluation of attribute-based encryption: Toward data privacy in the iot[END_REF]. These papers introduced experimental performance analysis of the different ABE algorithms on mobile devices and sensors. Ometov et al. [START_REF] Ometov | Feasibility characterization of cryptographic primitives for constrained (wearable) iot devices[END_REF] evaluate the impact of elementary cryptographic operations on different resource-constrained devices as detailed in Table 4. As our PHOABE framework relies on the use of bilinear maps as well as mathematical operations in a multiplicative group, we investigate the impacts of these operations (Figures 3, 4, and5) on the performance of different IoT devices, based on the results introduced in [START_REF] Ometov | Feasibility characterization of cryptographic primitives for constrained (wearable) iot devices[END_REF]. Figure 5 shows the computation cost of one pairing operation in different IoT devices (cf., Table 4). The most efficient device here is Intel Edison with JDK 1.8.0 that computes a single pairing operation in around 500 ms. As the pairing operations are extremely expensive, attribute based encryption can not be applied to secure IoT devices. Thus, the only solution to benefit from the security feature of ABE is to apply it using outsourced decryption feature. In ciphertext attribute based encryption, the size of an encrypted file mainly depends on the number of attributes involved in the access policy used in the encryption phase [START_REF] Belguith | Constant-size threshold attribute based signcryption for cloud applications[END_REF]. To estimate the real-world computation costs on a client side, we investigate the computation overhead of PHOABE on a Samsung I9500 Galaxy S4. As shown in Figure 6, the computation costs at the user side increase with the number of attributes used in the encryption. However, while applying outsourced decryption, the user computation costs are independent of the number of attributes as the user only needs to compute one exponentiation and one
01569248
en
[ "phys.meca.mefl" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01569248/file/DLES17_IBMCompressibleFlow.pdf
H. Riahi, E. Constant, J. Favier, P. Meliga, E. Serre, M. Meldi, E. Goncalves
DIRECT NUMERICAL SIMULATION OF COMPRESSIBLE FLOWS AROUND SPHERICAL BODIES USING THE IMMERSED BOUNDARY METHOD
Introduction
The three-dimensional flow around a sphere is one of the most classical subjects of investigation for fundamental analysis of external aerodynamics. In fact this flow configuration, which is described by a very simple geometrical shape, exhibits the potential for complex multi-physics analysis. Some aspects that can be investigated include turbulence, acoustics and heat transfer, and this test case is particularly favorable for the analysis of coupled problems. In addition, a number of different regimes emerge at moderate Reynolds numbers, and they are extremely sensitive to the Mach number Ma investigated. Furthermore, multiple physical systems can be modeled by several spherical bodies in motion involving complex interactions. Owing to this large number of aspects which are relevant for industrial applications, this case represents an important benchmark for the validation of new numerical/modeling strategies. In the present work, this test case is analyzed via the Immersed Boundary Method (IBM). The surface of the sphere is not directly embedded in the physical domain, but is represented by a set of discrete Lagrangian points which are associated with volume forces included in the Navier-Stokes equations. This procedure allows for flow representation on Cartesian grids, instead of the classical solution in a spherical frame of reference. This analysis aims to lay the groundwork for future fluid-structure interaction studies, including moving spherical objects in the physical domain. In this scenario, the use of a spherical frame of reference is clearly problematic. The flow configurations investigated here encompass a large range of Ma numbers, including subsonic, transonic and supersonic flows, for low to moderate Reynolds numbers Re. This large two-dimensional parametric space, [Ma, Re] ∈ [0.3, 2] × [50, 600], allows for a robust validation of the proposed IBM methodology, which must achieve a successful representation for numerous physical configurations exhibiting different features.
Numerical ingredients and IBM development
The starting point of the present work is the set of compressible Navier-Stokes equations:
∂ρ/∂t + div(ρU) = 0    (1)
∂(ρU)/∂t + div(ρU ⊗ U) = -grad p + div τ + F    (2)
∂(ρE)/∂t + div(ρEU) = -div(pU) + div(τU) + div(λ(T) grad T) + F·U    (3)
where ρ is the density, p the pressure, T the temperature, λ the thermal conductivity, U the velocity, τ the viscous stress tensor, E the total energy and F a prescribed volume force. The IBM exploits this last term to account for the presence of the immersed body, which is not represented via a boundary condition. Among the favorable characteristics of this method is that mesh elements are not stretched or distorted close to the body surface. In addition, expensive updates of the mesh are naturally excluded in the analysis of moving bodies. The present method is rooted in previous works proposed by Uhlmann [9] and Pinelli et al.
[START_REF] Pinelli | Immersed-boundary methods for general finite-difference and finite-volume Navier-Stokes solvers[END_REF] which combine strengths of classical continuous forcing methods [START_REF] Peskin | Flow Patterns Around Heart Valves: A Numerical Method[END_REF] and discrete forcing methods [START_REF] Mohd-Yosuf | Combined immersed boundary/B-spline methods for simulation of flow in complex geometries[END_REF]. The novelty of the approach is represented by: 1. the extension to compressible flow configurations; 2. the addition of a component which penalizes deviations from the expected behavior of the pressure gradient. In numerical simulations, the pressure field must comply with a Neumann condition in the wall-normal direction. The forcing is calculated on Lagrangian points representing the discretized shape of the body via interpolation of the physical fields available on the Eulerian Cartesian grid. This step is followed by a consistent spreading of this value back to the Eulerian mesh elements. The resulting forcing term F in Eulerian coordinates, which will be referred to as F_IB, is expressed as:
F_IB = (1/∆t) ρ_interpol (U_target - U_interpol) - (grad p_target - grad p_interpol)    (4)
where the subscript interpol denotes quantities that have been interpolated on the Lagrangian points, while the subscript target denotes the expected behavior of the flow close to the wall. ∆t is the time step of the numerical simulation.
Numerical implementation & validation
The implementation of the IBM model has been performed in the framework of a specific open source library for numerical simulation, namely OpenFOAM. This code has been identified as the best test platform because of its simplicity of implementation as well as the availability of numerous routines already integrated [START_REF] Constant | An Immersed Boundary Method in OpenFOAM: verification and validation[END_REF]. Owing to the large spectrum of Ma numbers investigated, the IBM has been implemented in two different solvers:
• a segregated pressure-based solver with a PIMPLE loop for compressible flow at low Mach number (Ma ≤ 0.3) [START_REF] Gutirrez Marcantoni | High speed flow simulation using OpenFOAM[END_REF];
• a segregated density-based solver with the Kurganov and Tadmor divergence scheme for compressible flow at high Mach number (Ma > 0.3) [START_REF] Kurganov | New high-resolution central schemes for nonlinear conservation laws and convection-diffusion equations[END_REF].
Two 2D test cases have been identified to validate the performance of the compressible IBM solver, namely the flow around a circular cylinder and the flow around a three-cylinder configuration. For both test cases, numerical results indicate that the present version of the IBM successfully captures the physical features over the whole parametric space investigated. In addition, the pressure correction term in equation (4) proves to be essential in obtaining an accurate near-wall estimation of the flow. Results are shown for reference in figure 1a for the unsteady subsonic flow around a circular cylinder, where the Karman vortex street is correctly represented, and in figure 1b for the supersonic flow around a three-cylinder configuration. For this last case, it is observed that the presence of the lateral cylinders decreases the drag coefficient of the central cylinder.
DNS of compressible flows around a sphere
The three-dimensional flow around a sphere has been investigated for different configurations including subsonic, transonic and supersonic flow cases.
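Before turning to the sphere results, the way the forcing of equation (4) can be assembled numerically is illustrated by the short sketch below. It is a schematic, one-dimensional reduction written for clarity only: the choice of regularized delta kernel, the marker spacing ds and the target fields are assumptions of the sketch, not the specific choices made in the present solver.

```python
import numpy as np

def delta_3pt(r, h):
    """Regularized delta function (3-point kernel of Roma et al., as one possible choice)."""
    r = np.abs(np.asarray(r, dtype=float)) / h
    w = np.zeros_like(r)
    m1 = r <= 0.5
    m2 = (r > 0.5) & (r <= 1.5)
    w[m1] = (1.0 + np.sqrt(1.0 - 3.0 * r[m1] ** 2)) / 3.0
    w[m2] = (5.0 - 3.0 * r[m2] - np.sqrt(1.0 - 3.0 * (1.0 - r[m2]) ** 2)) / 6.0
    return w / h

def ib_forcing(x_e, rho, u, gradp, x_l, u_target, gradp_target, dt, h, ds):
    """Evaluate Eq. (4) on the Lagrangian markers x_l and spread it back to the
    Eulerian nodes x_e (one velocity component, 1D stencil, for illustration only)."""
    f_euler = np.zeros_like(u)
    for xl, ut, gpt in zip(x_l, u_target, gradp_target):
        w = delta_3pt(x_e - xl, h) * h                    # interpolation weights
        rho_i, u_i, gp_i = w @ rho, w @ u, w @ gradp      # interpolated fields at the marker
        f_lag = rho_i * (ut - u_i) / dt - (gpt - gp_i)    # Eq. (4) evaluated at the marker
        f_euler += f_lag * delta_3pt(x_e - xl, h) * ds    # spreading step back to the grid
    return f_euler
```

In an actual implementation the same interpolation and spreading operations act on all velocity components and on the pressure gradient over a three-dimensional stencil around each Lagrangian marker.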
Present results are compared with classical DNS by Nagata [START_REF] Nagata | Investigation on subsonic to supersonic flow around a sphere at low Reynolds number of between 50 and 300 by direct numerical simulation[END_REF] for embedded surfaces using a spherical mesh. The near wall Cartesian mesh resolution has been fixed accordingly with Johnson and Patel formula for direct numerical simulation [START_REF] Johnson | Flow past a sphere up to a Reynolds number of 300[END_REF], resulting in cubic elements of resolution 0.0078 D, where D is the sphere diameter. This level of refinement is imposed in a region of size x × y × z = [-1, 1] × [-1, 1] × [-1, 1] in D units. The origin is fixed in the center of the sphere. A progressive coarsening ratio is imposed outside this region, resulting in a total of 2 × 10 7 mesh elements. As discussed in the introduction, this test case exhibits numerous physical configurations which are sensitive to the value of Ma and Re initially imposed. Results for nine configurations are here discussed, as summarized in table 1. Depending on the choice of the parameters Ma, Re a steady axisymmetric configuration or an unsteady flow configuration is observed. A very good agreement with results in the literature [START_REF] Krumins | A review of sphere drag coefficients applicable to atmospheric density sensing[END_REF][START_REF] Nagata | Investigation on subsonic to supersonic flow around a sphere at low Reynolds number of between 50 and 300 by direct numerical simulation[END_REF] is observed for all the configurations investigated. In particular, results for the bulk flow quantities (friction coefficient C D , recirculation bubble Xs, Strouhal number St and shock distance from stagnation point Dshock) are presented in table 2. In the following, a brief discussion is proposed clustering the results with respect to the Mach number. The subsonic flow configuration for Ma = 0.3 clearly exhibits an stationary behavior for Re = 50, while unsteady flows are obtained for Re = 300 and Re = 600 (see figures 2a and 2b). For the unstationary cases, the IBM method allows for a precise estimation of the bulk statistical quantities. The results obtained for the transonic case (Ma = 0.95) are shown in figures 3a, 3b, 3c and 3d. For this case, steady configurations are observed for Re = 50 and Re = 300, while an unsteady flow is obtained for Re = 600. The most interesting aspect for this class of simulations is that an accurate representation of the supersonic zone at the wall is observed, which is usually a challenging point for IBM methods. At last, the supersonic flow configurations for Ma = 2 are considered. In this case compressibility effects are very strong and all the simulations produce steady flows. Again, the analysis of the main bulk flow quantities indicate that all the physical features are accurately captured, when compared with data in the literature [START_REF] Krumins | A review of sphere drag coefficients applicable to atmospheric density sensing[END_REF][START_REF] Nagata | Investigation on subsonic to supersonic flow around a sphere at low Reynolds number of between 50 and 300 by direct numerical simulation[END_REF]. Isocontours are shown in figures 4a and 4b. Conclusion The flow around a sphere has been analyzed via an IBM for adiabatic compressible flows. The analysis has encompassed a wide range of Re, Ma for which various physical features emerge. 
The results of the present analysis indicate that the proposed IBM model successfully captures the physical features for the entire spectrum of configurations investigated. An accurate prediction of the main bulk quantities has been obtained and, in particular, the method has proven robust in capturing shock features and the supersonic zone on the sphere surface. This research work has been developed using computational resources in the framework of the project DARI-GENCI A0012A07590.
Fig. 1a: Q-criterion for a 2D subsonic flow around a circular cylinder, Ma = 0.3. Fig. 1b: supersonic flow around a three-cylinder configuration.
Fig. 2a: Mach isocontours for a 3D steady flow around a sphere, Ma = 0.3, Re = 50.
Fig. 3a: Mach isocontours for a 3D steady flow around a sphere, Ma = 0.95, Re = 50.
Fig. 4a: Mach field of a 3D supersonic flow around a sphere, Ma = 2, Re = 300.
H. Riahi, M. Meldi, E. Goncalves: Institut PPRIME, Department of Fluid Flow, Heat Transfer and Combustion, ENSMA - CNRS - Université de Poitiers, UPR 3346, Poitiers, France; e-mail: {hamza.riahi, marcello.meldi, eric.goncalves}@ensma.fr. E. Constant, J. Favier, P. Meliga, E. Serre: Aix-Marseille Université, CNRS, Ecole Centrale Marseille, Laboratoire M2P2 UMR 7340, 13451 Marseille, France.
Table 1. Flow regimes.
            Ma = 0.3 (subsonic, low Mach)   Ma = 0.95 (transonic)   Ma = 2 (supersonic)
Re = 50     steady axisymmetric             steady axisymmetric     steady axisymmetric
Re = 300    unsteady                        steady axisymmetric     steady axisymmetric
Re = 600    unsteady                        unsteady                steady axisymmetric
Table 2. Results for the 3D compressible flow around a sphere.
                              Ma = 0.3              Ma = 0.95             Ma = 2
                              C_D    Xs    St       C_D    Xs    St       C_D    Xs    Dshock
Re = 50    IBM results        1.6    0.96  -        2.116  1.15  -        2.03   0.5   0.73
           Nagata et al. [6]  1.57   0.95  -        -      -     -        2.25   0.5   0.75
Re = 300   IBM results        0.703  -     0.123    1.03   3.8   -        1.39   1     0.7
           Nagata et al. [6]  0.68   -     0.128    1      4.1   -        1.41   1     0.7
Re = 600   IBM results        0.58   -     0.143    0.91   -     0.138    1.27   1.7   0.68
           Krumins [4]        0.54   -     -        0.9    -     -        1.17   -     -
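For completeness, the bulk quantities gathered in Table 2 can be extracted from the raw solver output with a few lines of post-processing. The following sketch assumes a sampled history of the streamwise force on the sphere and a velocity probe signal in the wake; the variable names and the simple FFT-peak estimate of the shedding frequency are illustrative choices, not the exact procedure used here.

```python
import numpy as np

def drag_coefficient(Fx, rho_inf, U_inf, D):
    """Time-averaged drag coefficient from the streamwise force history Fx(t)."""
    A_ref = np.pi * D**2 / 4.0                    # frontal area of the sphere
    return np.mean(Fx) / (0.5 * rho_inf * U_inf**2 * A_ref)

def strouhal_number(signal, dt, U_inf, D):
    """Dominant shedding frequency of a wake probe signal, expressed as St = f D / U_inf."""
    sig = np.asarray(signal) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    f_peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin
    return f_peak * D / U_inf
```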
01774028
en
[ "phys.cond.cm-ms", "phys.cond.cm-scm" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01774028/file/Nanoletters2017.pdf
Ranime Alameddine, Astrid Wahl, Fuwei Pi, Kaoutar Bouzalmate, Laurent Limozin, Anne Charrier, Kheya Sengupta; email: [email protected]
Printing functional protein nano-dots on soft elastomers: from transfer mechanism to cell mechanosensing
Living cells sense the physical and chemical nature of their micro/nano environment with exquisite sensitivity. In this context, there is a growing need to functionalize soft materials with micro/nano-scale bio-chemical patterns for applications in mechanobiology. This, however, is still an engineering challenge. Here a new method is proposed, where sub-micron protein patterns are first formed on glass and are then printed onto an elastomer. The degree of transfer is shown to be governed mainly by hydrophobic interactions and to be influenced by grafting an appropriate fluorophore onto the core protein of interest. The transfer mechanism is probed by measuring the forces of adhesion/cohesion using atomic force microscopy. The transfer of functional arrays of dots with size down to about 400 nm, on elastomers with stiffness ranging from 3 kPa to 7 MPa, is demonstrated. Pilot studies on adhesion of T lymphocytes on such soft patterned substrates are reported.
In the last decades more and more experiments have confirmed that living cells are sensitive to the mechanics of their immediate environment. They behave differently on stiff and soft substrates in terms of adhesion, migration, actin organization, force generation, differentiation and a host of other properties. 1-5 In separate studies, it was shown that adherent cells respond to the way adhesive ligands are distributed or grouped into micro or nano-patterns. 6-9 Single molecules of adhesive ligands that are separated by distances larger than a cut-off length scale of about 80 nm fail to support adhesion, spreading, proliferation and growth of connective tissue cells like fibroblasts. 7,10 Intriguingly, the behavior of lymphocyte-like cells may depend not on ligand spacing but on average ligand density. 9,11-13 Directing cell behavior and fate, in a cell type and ligand dependent manner and through physical means, is of major interest for tissue-engineering applications. 14 In the general context of cell biology as well as the specific context of lymphocytes, there is therefore a lot of interest in patterning of soft substrates, but even at the micro-scale this is a current engineering challenge. 15,16 At the nanoscale there is, so far, only one available technique that has been applied to cell studies, 17 which is based on patterning soft hydrogels employing block copolymer micelle lithography (BCML) to produce surfaces patterned with gold nanoparticles on a hard template, followed by transfer of the pattern to a polymeric hydrogel. 18,19 This technique is therefore dependent on the chemistry and optics of gold and is limited to the visco-elastic range of the hydrogel. Another technique with the potential to achieve nano-scale features on a hydrogel is deep UV etching followed by transfer, but so far this has been mainly confined to larger, micron-size patterns and is also limited by the visco-elastic range of the hydrogel. 20,21 Moreover, in view of seminal work dissecting possible differences between cells on hydrogels and on elastomers, 14 patterning both types of material is important.
A large number of techniques have been devised to nanopattern hard inorganic substrates like glass or silicon, 22,23 including photo or electron-beam lithography, [START_REF] Bucknall | Nanolithography and patterning techniques in microelectronics[END_REF] dip pen nanolithography, [START_REF] Li | [END_REF]26 and nano-imprint technologies. 10,27 An universal technique to transfer such patterns to soft substrates will address a specic need in the community. Micro and nano contact printing, which uses elastomeric stamps with a pattern of relief structures to transfer molecules to glass or gold surface, is now an ubiquitous tool in biology. 6,2835 These techniques, and their oshoots, depend on the high anity of the ink molecules for the target surface which is typically glass or gold. The reverse transfer of protein patterns from glass to elastomer was demonstrated at micron-scale on Polydimethylsiloxane (PDMS) but was limited by a requirement of treatment of PDMS leading to its hardening. 36,37 Similar transfer to soft native PDMS turned out to be possible, 38,39 but only under very special conditions. Here we present the key idea that by tuning molecular anities of the transferred proteins via grafting of selected chemical moieties, the reverse transfer from glass to an elastomer can be systematically controlled. We demonstrate the transfer at sub-micron scale onto very soft PDMS and present pilot experiments with cells. We show that the transfer can be predicted from measured adhesive/cohesive forces between the protein molecules and the glass/PDMS surface. We argue that the degree of hydrophobicity is the major, and the presence of ionic groups is a minor factor that governs the transfer. The pattern and transfer process is presented schematically in Figure 1. In brief, to pattern glass, cover-slides were thoroughly washed and a colloidal bead mask was formed on them as described before. 38 A uorosilane (Trichloro(1H,1H,2H,2H-peruorooctyl)silane, PFOTCS) was deposited from the vapor phase on the glass through the bead-mask. The beads were removed, revealing a layer of hydrophobic uorosilane patterned with holes. The hydrophobic regions were passivated by absorption of a triblock poloxamer (chosen to be Pluronic F68, see SI). The holes exposing bare glass were subsequently back-lled with a protein of choice. The protein of choice here is variously functionalized bovine serum albumin (BSA) which was usu-ally biotinylated (bBSA) and additionally labeled with a uorophore, or neutravidin (NAV) also conjugated to a uorophore. As examples, we chose two well known uorophores : the hydrophobic dye Texas Red (TR) and the hydrophillic dye Atto-488 (Atto). The binding steps are schematically presented in Figure 1, and were checked by imaging with atomic force microscopy (AFM) (Figure SI.1). At this stage, the cover-slide is chemically patterned with protein nano-dots, and is henceforth called the glass-master. A second glass cover-slide was coated with a layer of elastomer precursor solution by spin-coating and was appropriately cured. The elastomers used here include: a polydimethyl based silicone rubber (corresponding to sylgard-184 used with base to cross-linker ratio of 10:1, henceforth called PD), or a silicone gel containing additional methylphenyl groups and phenyl polymers (corresponding to Q-gel 920 used with ratio of 1:2, henceforth called MP), or, for certain experiments, another polydimethyl based silicone rubber (corresponding to CY 52-276, henceforth called PD2). 
All three types of elastomers were either used as is after curing or were exposed to oxygen plasma (accordingly, the latter are henceforth called pPD, pMP or pPD2). SI Tables 1 and 2 report the Young's modulus and contact angle measurements on the different surfaces, showing that a stiffness range of about 3 kPa to 7 MPa was covered and that the elastomers can be arranged in order of hydrophobicity as PD2 ≈ MP > PD > pMP > pPD2 > pPD. For some experiments, the plasma-exposed PDMS was further functionalized with an organo-aminosilane and glutaraldehyde (gluMP and gluPD2), re-rendering them hydrophobic. The surface of the elastomer layer was brought into physical contact with the glass-master in the presence of a drop of water. The presence of water, in agreement with other reports, 40 facilitates the transfer. A minimal pressure was applied manually to ensure conformal contact. Since both surfaces are flat and do not appreciably deform on contact, pressure is not a crucial parameter, and it was verified that the quality of transfer is not sensitive to the applied pressure. The surfaces were carefully separated the next day to obtain the patterned elastomer. The patterned surface was imaged using epi-fluorescence microscopy (Figure 2) and AFM (Figure SI.2). Visual inspection of images in Figure 2 reveals that bBSA functionalized with Texas Red dye (bBSA-TR) transferred on PD (a,b) but the same core protein functionalized with Atto-488 dye (bBSA-Atto) failed to transfer (c,d). bBSA-Atto could however be transferred to pPD (e,f). We note that, on the one hand, from the molecular structure of Atto and TR we expect TR to be more hydrophobic than Atto, the latter having an isolated primary amine group that is prone to losing an anion. On the other hand, native PD is hydrophobic but plasma-treated pPD is hydrophilic. Thus, bBSA could be transferred to native hydrophobic PDMS if conjugated to a hydrophobic dye, and to hydrophilic PDMS if conjugated to a hydrophilic dye. These observations lead us to test a variety of proteins and elastomer surfaces. The table in Figure 2 summarizes the success, or not, of the attempted transfers. As conjectured, proteins (bBSA, BSA or NAV) labeled with hydrophobic dyes transfer well on hydrophobic PDMS, either native or glutaraldehyde treated (PD, MP, PD2, gluMP, gluPD2), whereas bBSA labeled with hydrophilic dyes transfers well on hydrophilic, plasma-treated elastomers (pPD, pMP). Cross transfer is possible only in some cases. To put these observations on a quantitative basis, we selected the transfer of bBSA-TR or bBSA-Atto onto PD, MP, pPD and pMP. Figure SI.3 shows representative images for these transfers, and Figure 3 summarizes the dot size and contrast (see SI text for details of data analysis). The parameters reported here are averages calculated from at least 3 samples, with at least 6 fields each. The transfer of the pattern to both PD and MP conserves the size, but transfer to pPD or pMP increases the dot size slightly (t-test: p < 0.001). The contrast is systematically diminished on transfer, implying that the amount of protein transferred from within the dots is not identical to the transfer outside the dots. To characterize this, a transfer ratio was defined as I_max^elastomer / I_max^glass.
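As a rough illustration of this quantification, the sketch below estimates the peak and background intensities of one imaged field and forms the contrast and the inside-dot transfer ratio. The percentile-based estimators are assumptions made only for the sketch, not the exact analysis pipeline (which is detailed in the SI text).

```python
import numpy as np

def dot_statistics(img, dot_percentile=99.0, bg_percentile=20.0):
    """Crude estimates of I_max (dot peaks) and I_min (background) for one field."""
    I_max = np.percentile(img, dot_percentile)
    I_min = np.percentile(img, bg_percentile)
    contrast = (I_max - I_min) / (I_max + I_min)   # one common contrast definition
    return I_max, I_min, contrast

def transfer_ratio_inside(img_glass, img_elastomer):
    """Inside-dot transfer ratio I_max(elastomer) / I_max(glass) for matched fields."""
    return dot_statistics(img_elastomer)[0] / dot_statistics(img_glass)[0]
```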
A similar transfer ratio can be separately calculated for intensity outside the dots (I_min^elastomer / I_min^glass) to quantify the amount of transfer in the poloxamer-coated zones. Here, I_max and I_min are the maximum and minimum intensity in the pattern, which essentially correspond to the peak intensity in the dots and the background intensity outside the dots (see Figure ). The transfer ratio inside (Figure 4a) clearly shows that the chemical nature of the grafted fluorophore as well as the elastomer surface influences the success of transfer. For the same fluorophore and elastomer, the transfer ratio outside (Figure 4b) is roughly similar to the transfer inside, and is often non-negligible. It is clear that to obtain good patterns, the transfer of proteins inside the dots should be maximal. Outside, in the ideal case, while making the glass-master, there should be no protein absorbed on the regions passivated with the poloxamer, and even if there is some protein absorbed, it should not be transferred to PDMS. In practice, the amount of protein absorbed and transferred both depend on the quality of the poloxamer layer. In light of the non-negligible transfer in all cases, the strategy here was to minimize protein absorption outside the dots at the glass-master stage. The poloxamer type and concentration used were optimized accordingly (data in SI Figure SI.6). In this work, an inside transfer ratio > 0.2 is considered a successful stamping, irrespective of the outside transfer ratio. To verify the hypothesis that the forces of adhesion, possibly originating from hydrophobic/hydrophilic affinities, govern the success of transfer, we quantified the effective force of adhesion of the protein on glass and elastomers using AFM force curves. 41,42 The protein of interest was covalently bound to a glass bead attached to the AFM cantilever (Figure SI.7). The protein-covered bead was approached and made to touch a test surface which was either bare clean glass, or bare elastomer, or glass covalently functionalized with the same protein. The retraction curves were analyzed in order to extract the force of adhesion. As control measurements, we also obtained force curves from intermediate steps of functionalization to ensure that the functionalization steps were correctly realized. PD turned out to have a very strong non-specific adhesion with the bead, probably due to van der Waals interactions, and these measurements were not amenable to interpretation. However, force curves could be consistently measured and interpreted for pPD (Figure SI.8). For MP, such measurements were difficult even after plasma treatment, probably due to its extreme softness. The forces of adhesion for pPD (called F_pPD), glass (called F_glass), and protein-coated glass (called F_prt) are summarized in Table 1. Table 1 shows that the force required to separate a protein layer from glass is less than that required to pull apart two layers of protein (F_glass < F_prt). Therefore the protein multi-layers expected to be present in the dots on the glass-master should be transferred to the elastomer by peeling from the glass; in practice, they may instead fracture due to the presence of defects. Comparing the adhesion of bBSA-TR and bBSA-Atto, we see that the latter has a stronger interaction with pPD. This is consistent with the higher transfer ratio (about 0.6 for bBSA-Atto and about 0.4 for bBSA-TR) reported for bBSA-Atto in Figure 4a. Let us now re-examine the transfer table (Figure 2).
As discussed above, the hydrophobic bBSA-TR transfers well on both PD and MP, with the transfer being better on the latter (TrRatio about 0.3 and 0.6 respectively). Consistent with this, hydrophilic FITC transfered to pMP. Crosstransfers (hydrophobic on hydrophilic or vice versa), may however either fail as expected or be possible due to additional considerations. As expected, the hydrophillic bBSA-Atto fails to transfer to hydrophobic PD. In fact though the transfer is feeble and cannot be detected with the stan-dard camera settings, it can be detected with higher camera amplication (Figure SI.9). Furthermore, PD is known to be slightly negatively charged in aqueous solution at neutral pH, 43 resulting in a additional electrostatic repulsion towards the negatively charged bBSA-Atto. Consistent with this, hydrophilic FITC conjugated bBSA failed to transfer to untreated PD2. However, bBSA-Atto does transfer to some extent (TrRatio about 0.4) to MP, which is even more hydrophobic as judged by contact angle measurements. We rationalize this observation by noting that bBSA-Atto has many phenyl groups that may chemically interact with the phenyl groups on MP through π-π interactions. The transfer of bBSA (not conjugated to any uorophore) can not be directly tested since in absence of an attached uorophore, bBSA can not be imaged in uorescence microscopy. It needs to be revealed by functionalization with uorescent neutravidin (NAV) after transfer and therefore transfer ratios can not be reported. To do this, after transfer of the bBSA, the bare elastomer around the dots was passivated with a polaxamer and then the NAV was allowed to bind from solution phase (see Figure SI.10 for example). bBSA by itself fails to transfer to PD2 and has very unreliable transfer to MP (data not shown). We conclude that on the glass master, bBSA mainly exposes hydrophilic groups, thus preventing its transfer to the hydrophobic PDMS surface. To conrm the general hypothesis that the inclusion of a hydrophobic moiety renders a protein more amenable to transfer on hydrophobic untreated elastomers, we checked that neutravidin conjugated to Texas red dye (NAV-TR) transferred well on all the elastomers studied here. However, probably due to drying, the transferred neutravidin was not functional and failed to bind to a biotinylated protein. In a related set of experiments we functionalized the plasma treated elastomer surface (pPD2 and pMP) with APTES ((3-Aminopropyl)triethoxysilane) and glutaraldehyde, which is known to render the surface hydrophobic. 44 The hydrophobic NAV-TR, transfered well on this type of surface, but hydrophilic bBSA and BSA-FITC showed a very low transfer, again showing that the transfer is governed by physico-chemical anity. We also compared the impact of hardness on transfer. To minimize artifacts arising from elastomer surface chemistry, hard and soft versions of the same elastomer were employed (Figure SI.11). It is seen that the width does not significantly change in either case and that the transfer is successful in both cases as judged from the calculated transfer ratios. We next conrmed that the bBSA-TR transferred to an elastomer could be functionalized with NAV. First, the area around the dots, still exposing the PDMS substrate, was passivated using poloxamers.Using NAV labeled with another uorophore (dylight-650), the specic binding of NAV to bBSA-TR dots can be conrmed (Figure SI.10). 
The NAV can be further functionalized with another biotinylated protein, here an antibody against the CD3 domain in the TCR-complex in T lymphocytes (α-CD3, multibiotinylated UCHT1). A soft elastomer, chosen to be MP with Young's modulus 20 kPa, was patterned with α-CD3 dots and passivated with a poloxamer. T cells were allowed to interact with this substrate for 30 minutes and were then xed and labeled with an antibody against TCR or with uorescent phalloidin to label actin (SI for details). Cells are seen to adhere, as judged from reection interference contrast microscopy (Figure 5 a), with an irregular contour but with a rather homogeneous adhesion within the contact zone. The actin is in the form of a ring (Figure 5 b), as has been reported in the case of homogeneous α-CD3 absorbed to glass. 45 TCR clusters are detectable and partially co-localize with the underlying anti-CD3 dots (Figure 5 c andd). The overall behavior on patterned glass is fairly similar (Figure SI.12) . The invariance in actin and TCR organization was unexpected since for other cell types, as reported for adhesion to homogeneously distributed ligands, the cell adhesion is diminished on soft substrates and the cytoskeletal organization is strongly impacted. 4,5 A comparison with homogeneously functionalized elastomer reveals that detectable micro-clusters do not form in this case. This is consistent with the dierence observed between patterned and homogeneously functionalized glass substrates. 9 The absorption of proteins to synthetic surfaces is important for a number of applications. For example, a newly implanted prosthesis is rst coated by proteins from the body uids before cells can interact with it. In a laboratory setting, many biology experiments depend on successful protein absorption as the rst step towards functionalization. Yet, the non-specic absorption of proteins to a surface from a solution phase is dicult to model, partly because of the complex and diverse molecular structure of the proteins. In the context of micro-contact printing, several attempts have been made to quantitatively model the transfer and yet they serve at best as indicative and most laboratories still rely on trial and error to test if their protein of interest can be stamped. In fact, it turns out that most proteins of interest readily transfer from traditional PDMS elastomer stamps to glass. Here we have shown that the reverse transfer of proteins from glass to PDMS is also possible, but depends crucially on the physico-chemical interactions between the protein and the PDMS surface. One point of great practical signicance is that proteins that can not be reverse transferred in native state, can be made to do so after by grafting an appropriate molecular moiety, here chosen to be dierent uorophores. We have shown that proteins grafted with a hydrophobic moiety always transfer well on PDMS with hydrophobic surfacewhich is the case for most available PDMS in their native state. The reverse transfer of proteins that are intrinsically hydrophillic or those that are grafted with a hydrophillic group, onto hydrophobic PDMS is not reliable. However, these typically transfer well on PDMS rendered hydrophilic by plasma treatment. Interestingly, atomic force microscopy based force measurements conrmed that indeed the degree of reverse transfer depends on the force required to detach a bead, grafted with a layer of protein, from the surface of the PDMS. 
We showed that the protein pattern created by reverse printing from flat stamps remains functional and can be further functionalized with a more complex protein of choice. We demonstrated the adhesion of T cells to RCP-patterns functionalized with an antibody against the TCR complex. Consistent with previous observations on glass, the TCR can gather to form micro-clusters on patterned but not on homogeneously coated PDMS. The knowledge of the mechanisms governing reverse transfer elucidated here should open the way to systematically using such glass-based flat stamps to pattern elastomers with any desired protein molecules.

Figure 1. Schematic representation of the fabrication of protein nano-patterns on glass and soft substrates: (a) Deposition of fluorosilane from a gas phase through a self-assembled colloidal bead mask on a glass substrate. (b) Removal of the mask and grafting of a poloxamer (pluronic) to passivate the fluorosilane covered area. (c) Functionalization of the bare patches with the desired protein. (d) Thin layer of elastomer supported on a glass coverslide. (e) Transfer of the protein from glass to elastomer by reverse contact printing in presence of water. (f) Protein pattern on the elastomer.

Figure 2. TOP: Epi-fluorescence images of protein dots on glass master before transfer and on elastomer after transfer, with the same camera and display settings for all images. (a,b) bBSA-TR to PD. (c,d) bBSA-Atto to PD. (e,f) bBSA-Atto to pPD. Insets display Fourier transforms of the corresponding images to emphasize the ordering of the lattice. BOTTOM: Table summarizing the transfer of proteins to elastomer surfaces. Rows correspond to one kind of protein, the core being either BSA or NAV, which are then decorated with various fluorophores as indicated. Hydrophilic molecules are depicted in blue and hydrophobic in red. Columns correspond to different elastomers, either native (hydrophobic, in red) or plasma treated (hydrophilic, in blue). Successful transfers are indicated with a checkmark and unsuccessful ones with ×. Note that the intersection between identically colored lines and columns invariably results in a successful transfer (red or blue circled checkmark). The intersection between differently colored lines and columns results in failed transfer in most cases (× symbol).

Fig. SI.3 shows representative im-

Figure 3. Quantification of the bBSA nano-dots from the epi-fluorescence images before and after the transfer from glass to elastomer. Values are medians and error bars are median absolute deviations, both averaged over at least 3 independent samples, each with at least 6 fields each containing hundreds of dots. (a) Dot size (FWHM of the intensity profile). (b) Contrast of the dots with respect to the background. Data are for bBSA-Atto or bBSA-TR dots and PDMS type PD, MP, pPD or pMP, as indicated.

Figure 4.

Figure 5. T cells adhered on soft elastomer (MP) patterned with nano-dots of α-CD3 antibody, observed after 30 minutes of spreading. a) RICM image showing a flat membrane topography in the cell adhesion zone. b) TIRF image showing peripheral actin organization. c) TIRF image of labelled TCR on the cell surface. d) Epi-fluorescence image of the underlying α-CD3 dots. Arrows on c and d point to partial co-localization of the TCR with the α-CD3 dots. Scale bar: 4 µm.

Table 1. Adhesion force measurement.
Each value is extracted from 100 independent force curves taken at 3 different regions for at least 2 different samples.

                 Fprt (nN)    F glass (nN)   F pPDMS (nN)
bBSA-TR          4.8 ± 1.5    2.7 ± 0.8      5.1 ± 0.6
bBSA-Atto 488    10.5 ± 3.5   2.3 ± 0.8      12.8 ± 1.7

Acknowledgement. The authors thank Pierre Dillard for fruitful discussions regarding cell experiments, and Martine Biarnes for help with cell culture. This work was partially funded by the European Research Council via grant no. 307104 FP/2007-2013/ERC-Stg SYNINTER.

Supporting Information Available: The file SI.pdf is available free of charge and contains tables (Young's modulus and contact angles), figures (AFM images, force curves, untreated optical images, complementary data) and details on materials and methods. This material is available free of charge via the Internet at http://pubs.acs.org/.
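As a complement to Table 1, here is a hedged sketch of how the pull-off (adhesion) force and its spread could be extracted from a batch of AFM retract curves, reporting median ± median absolute deviation as in the table. It assumes each curve is supplied as baseline-corrected force (nN) versus displacement; drift correction, per-region grouping, and the calibration actually used by the authors are not reproduced, and the synthetic curves only stand in for real data.

```python
import numpy as np

def pull_off_force(force_retract):
    """Adhesion force = magnitude of the deepest attractive dip in a retract curve (nN)."""
    f = np.asarray(force_retract, dtype=float)
    return max(0.0, -f.min())

def adhesion_statistics(curves):
    """Median and median absolute deviation over a set of retract curves."""
    forces = np.array([pull_off_force(c) for c in curves])
    med = np.median(forces)
    mad = np.median(np.abs(forces - med))
    return med, mad

# Example with synthetic retract curves (force in nN) standing in for measured ones:
rng = np.random.default_rng(1)
curves = []
for _ in range(100):
    f = rng.normal(0.0, 0.05, 2000)        # noisy zero-force baseline
    f[100:140] -= rng.normal(4.8, 1.5)     # adhesion dip before snap-off
    curves.append(f)
med, mad = adhesion_statistics(curves)
print(f"F_adh = {med:.1f} +/- {mad:.1f} nN")
```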
01774033
en
[ "phys.cond.cm-ms", "phys.cond.cm-scm" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01774033/file/mame.201600497_R1-1-AC.pdf
Dr Mohammed H Elmahmoudy Dr Sahika Inal email: [email protected] Dr Anne Charrier Prof Ilke G G Uguz George G Malliaras Dr Sébastien Sanaur Malliaras Tailoring the Electrochemical and Mechanical Properties of PEDOT: PSS Films for Bioelectronics Keywords: organic bioelectronics, poly(3, 4-ethylenedioxythiophene): polystyrene sulfonate (PEDOT: PSS), organic electrochemical transistor, thin films, cross-linker, Young's modulus Introduction Bioelectronics uses electrical signals to interact with biological systems. Sensors that allow for electrical read-out of important disease markers, and implants/stimulators used for the detection and treatment of pathological cellular activity are only a few examples of what this technology can offer [START_REF] Karunakaran | Biosensors and bioelectronics[END_REF][START_REF] Lin | [END_REF] . In the last few decades, due to their intriguing electroactive and mechanical properties, organic electronics or -conjugated materials have been extensively explored regarding their use in bioelectronics applications [3][4][5][6][7][8] . Historically, the interest in organic electronic materials stemmed from their soft and flexible nature which dampens the mechanical properties mismatch with tissue [7] . This less "foreign" surface enhances the signal transfer to/from cells in vitro [9] . It also elicits a small foreign body response when used in vivo, improving the performance as well as the lifetime of the implanted device. The other Revised Manuscript attractive feature of -conjugated materials and more particularly of conducting polymers for bioelectronics is their mixed electronic/ionic conductivity [10][11][12] . Mixed conductivity enables coupling between the electronic charges in the bulk of the organic films with ion fluxes in biological medium. This translates into low electrochemical impedance at the biotic interface and therefore efficient transduction as well as stimulation of biological signals. As a matter of fact, the materials research for bioelectronics strives for soft materials that exhibit low impedance. The prototypical material of organic bioelectronics is the conducting polymer poly (3,4-ethylenedioxythiophene) (PEDOT) doped with polystyrene sulfonate (PSS). PEDOT: PSS is commercially available, water-dispersible conjugated polymer complex that can be cast into films of high hole and cation conductivity, good charge storage capacity, biocompatibility, and chemical stability. In PEDOT: PSS films, PEDOT chains accumulate in a pancake-like morphology surrounded by a PSS network [11] . While hole transport is facilitated within/among the PEDOT aggregates, the PSS phase attracts a considerable amount of water, enabling penetration and transport of ions in the film. Ion transport, and consequently electrochemical activity, benefits from such hydrated pathways. This, however, brings together the challenge of maintaining the integrity of films when exposed to an aqueous environment such as the biological tissue for in vivo applications and the cell culture media for in vitro studies. In order to avoid delamination/disintegration of the films in aqueous environment, PEDOT: PSS dispersions are typically mixed with other chemical compounds. Some studies have reported blending the dispersion with water soluble polymers such as the polyvinyl alcohol (PVA) [13][14][15][16] . The main problem with these blends is the dramatic drop in the conductivity. 
For instance, when mixed with PVA at a weight ratio of 20 and 60 %, the electrical conductivity of films dropped by 1 and 5 orders of magnitude, respectively [13] . The silane based crosslinking agent, 3-glycidoxypropyltrimethoxysilane (GOPS), on the other hand, was reported to make relatively stable PEDOT: PSS films (with conductivities up to 800 S cm -1 ) for a large variety of bioelectronics devices [17][18][19][20][21][22][23][24] , when used at a particular concentration in the dispersion (0.1 wt%). The epoxy group in GOPS (see the inset of Figure 1 for the chemical structure of GOPS) can react with amines, thiols, and acids, as well as interacting with itself and covalently with SiO2 substrates [25] . Although the interaction mechanism between GOPS and PEDOT: PSS is yet unclear, XPS studies suggested that the cross-linker can polymerize in water to form a multilayered structure [26] . However, depending on its concentration in the film, GOPS might change electronic and ionic transports as well as mechanical properties of PEDOT: PSS, presumably due to alterations on the structure and morphology of films. Only a few reports have touched upon the effect of GOPS concentration on the electrical properties of the resulting films [27] . Nevertheless, it is essential to maintain the electrical performance and softness of PEDOT: PSS while improving the thin film stability in aqueous media using a stabilizer such as GOPS. This is particularly crucial for long term use of bioelectronic devices, a direct example being the organic electrochemical transistors (OECTs) which are chronically implanted into cortex to record neural activities. Balancing these needs requires a systematic synthetic work or processing related interventions which aim to improve the electrochemical properties while not impeding mechanical properties. Moreover, materials with high performance electronic properties and physical characteristics matched to those of the tissue have great potential to bring forth applications for soft electronics. Here, post-processing can provide alternative modification routes as crystalline materials exhibiting high charge mobility typically have low mechanical resilience. In this work, we investigate the effect of GOPS content in PEDOT: PSS dispersions on the properties of films spun cast from these formulations. We find out that the concentration of GOPS has a tremendous, yet gradual impact on the electrical, electrochemical, and mechanical properties of the PEDOT: PSS/GOPS films and that there is an optimum concentration which maximizes a particular feature of the film such as its water uptake or elasticity. The benefits of aqueous stability and mechanical strength with GOPS are to be compensated by an increase in the electrochemical impedance. Our findings suggest that a trade-off cross-linker concentration exists, which enables sufficient electrical conductivity with mechanical robustness and stability in aqueous environment. Results and Discussion Effect of the cross-linker on electrical properties To gain insight into the effect of the cross-linker on the electrical properties of dry PEDOT: PSS films, we casted films from dispersions containing a variety of GOPS concentration (0.05, 1, 2.5, 3.5 and 5 wt%) in addition to a constant concentration of the conductivity enhancer, ethylene glycol (EG, 5 vol%) and the surfactant, dodecyl benzene sulfonic acid (DBSA, 0.002 vol%). 
The films were spin-cast at the same speed on four sets of substrates: Au-coated polyimide films (Au thickness: 100 µm, surface area: 96.7 mm² and 24 mm²) and glass substrates of different geometries (25 x 25 mm², 75 x 25 mm²). Figure 1 shows that GOPS content in the film affects the bulk conductivity significantly: the highest conductivity (ca. 460 S.cm -1 ) is observed for the film containing the least amount of GOPS (0.05 wt%), whereas the conductivity dropped by 4 times (120 S.cm -1 ) with a GOPS concentration of 5 wt%. These results are in agreement with those of Zhang et al who reported a gradual decrease in the conductivity of PEDOT: PSS film (cast from a dispersion with 5 vol% glycerol and 0.5 vol% of DBSA) with an increase in GOPS concentration [27] . The Clevios PH1000 PEDOT: PSS dispersion has a polymer content of 1.15 wt%, with a PEDOT to PSS ratio of 1:2.5 (ca. 0.3 wt% PEDOT and ca. 0.8 wt% PSS). It is intriguing that although PEDOT: PSS comprises only 18.5 wt% in the presence of 5 wt% GOPS, the conductivity of the film exceeds 100 S.cm -1 . Moreover, the films processed from dispersions that contained more GOPS are thicker than the ones that had less of the cross-linker (ca. 3x difference between 0.05 wt% and 5 wt% of GOPS, Figure S1). We attribute this to an increase in the viscosity of the dispersions in the presence of GOPS. On the other hand, interactions of the cross-linker with the non-volatile additives in the dispersion might lead to an increased material content since the amount of cross-linked network in the film with respect to the PEDOT: PSS increases with GOPS content [27] . This can as well account for the observed decrease in the conductivity as a larger content of non-evaporating and non-conducting species would be present in the film [27] . However, for our case, i.e., dispersions containing EG, X-ray studies showed no evidence for EG remaining in the films [11] . We therefore attribute the decrease in electrical conductivity to a dilution effect of the conducting phase by the cross-linker. Indeed, hole mobility also decreases ca. 4 times in the range of cross-linker concentrations investigated. The mobilities were estimated by measuring OECTs prepared from PEDOT: PSS formulations of different GOPS concentrations (Figure S2 and experimental section). For instance, while the hole mobility of the 0.05 wt% GOPS-cast film is 6.4 cm 2 V -1 s -1 , this value drops to 1.7 cm 2 V -1 s -1 for the 5 wt% GOPS-cast film. It is likely that GOPS limits the extent of aggregation of PEDOT chains by introducing crosslinks into the system. In an OECT, upon application of a positive gate voltage (VG), cations from the electrolyte are injected into the channel, compensate the sulfonate groups of the PSS and deplete the holes of the PEDOT (See Figure S2 for a schematic of an OECT). This mechanism is measured as a decrease in the drain current (ID). The performance of an OECT is therefore evaluated as its transconductance (gm= 𝜕𝐼 D 𝜕𝑉 G ), i.e., the extent of the modulation of the drain current with a change in gate voltage. As the GOPS content in the channel increases, not only the channel becomes less conductive, we also measure a gradual decrease in the transconductance of OECTs (Figure 2). Cross-linked PEDOT: PSS properties in aqueous environment Since applications in bioelectronics necessitate an aqueous environment, it is critical to characterize the properties of the polymer film in aqueous working conditions. 
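For reference, the transconductance used here as the OECT figure of merit (gm = ∂ID/∂VG, Figure 2) is obtained from the transfer curve of the device. The short sketch below shows one way to estimate it numerically from sampled ID(VG) data; the toy transfer curve and the printed peak value are placeholders and do not correspond to the acquisition format of the LabVIEW/MATLAB software mentioned in the experimental section.

```python
import numpy as np

def transconductance(vg, i_d):
    """gm(VG) = dID/dVG from a sampled transfer curve (A/V if ID is in A)."""
    vg = np.asarray(vg, dtype=float)
    i_d = np.asarray(i_d, dtype=float)
    return np.gradient(i_d, vg)

# Example on a synthetic depletion-mode (PEDOT:PSS-like) transfer curve:
vg = np.linspace(-0.2, 0.8, 101)                    # gate voltage (V)
i_d = -5e-3 / (1.0 + np.exp((vg - 0.3) / 0.1))      # drain current (A), toy shape only
gm = transconductance(vg, i_d)
k = np.argmax(np.abs(gm))
print(f"peak |gm| ~ {abs(gm[k])*1e3:.1f} mS at VG = {vg[k]:.2f} V")
```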
Using the quartz crystal microbalance dissipation (QCM-D), we quantify the swelling capability of the PEDOT: PSS/GOPS films cast on quartz crystals when exposed to DI water or an aqueous solution of NaCl. In these measurements, a decrease in the frequency (f) accompanied with an increase in dissipation (D) with the inflow of water indicates an increase in the mass of the film, i.e. swelling due to uptake of molecules. Here, the inflow of NaCl induces a further decrease in f for all samples, attributed to penetration of solvated ions in addition to the trapped water molecules. In order to quantify the swelling of the films casted from different GOPS concentrations, we treated our data both with Sauerbrey model which directly correlates the change in f to coupled mass (more appropriate for rigid films) and with Kelvin-Voigt model which is considered typically for soft films (see Experimental section). The swelling percentages estimated from these two models are summarized in Table S1. The results suggest a reduction in the swelling capacity of films with an increase in GOPS content (Figure 3a). The ability of PEDOT: PSS film to uptake water drops from 397% to 12% when GOPS concentration increases from 0.05 to 5 wt%. Using AFM to estimate the thickness of PEDOT: PSS films before and after exposure to DI water, Duc et al. reported a swelling ratio of a 40 ± 1% for a PEDOT: PSS film that contained 1 wt% of GOPS [28] . This value is well below our estimations (ca. 266%) for the same formulation. These authors also reported 660 ± 90% swelling for a PEDOT: PSS film that does not contain GOPS. Using the same technique, Stavrinidou et al. reported 155 ± 53 % of swelling for the film prepared in the absence of GOPS and 35 ± 4 % in the presence of 1 wt% of GOPS [19] . These variations in the reported values could be due to the characterization techniques, the film/dispersion preparation, and the measurement conditions. The latter is particularly challenging to control since PEDOT: PSS films change their volume rather rapidly due to the humidity in the environment. Our results are rather meant to demonstrate the relative decrease in swelling with changing the GOPS content in PEDOT: PSS/GOPS films. For our measurements, we dried the films under vacuum over night to ensure that the film has minimal water trapped prior to its interactions with water molecules. Notably, we observed that such a dry film requires ca. two hours under constant water flow to reach a steady-state change in frequency, i.e., fully hydrated state. (Table S1). Interestingly, although the film that contained a higher GOPS content was thicker than the one that had less GOPS, its stabilization time was shorter: 30 min for 5 wt% GOPScast film in comparison to ca. 103 min for 1 wt% GOPS-cast film. Taken together, the trend of stabilization time and GOPS content is consistent throughout the whole formulation series. It is absolutely mandatory to investigate the impedance characteristics of conducting materials dedicated to bioelectronics applications. Figure 3b shows the electrochemical impedance (Z) of PEDOT:PSS/GOPS films measured at 1 kHz as a function of the cross-linker concentration. Here, since the films had different thicknesses due to variations in GOPS content (Figure S1), we normalized Z values measured at 1 kHz for a 100nm-thick film. First, the Bode plots (log Z vs log frequency) were fit to an equivalent circuit model (RC) to extract the resistance (R) and capacitance (C) values. 
Then, the capacitance was estimated for a 100 nm thick film using 𝐶 ′ = 𝐶×100 𝑑 , where C' is the normalized impedance and d (in nm) is the film thickness. Finally, C' was substituted in the impedance formula (|𝑍 ′ | = √𝑅 2 + 1 𝜔 2 𝐶' 2 ) to estimate the normalized impedance (𝑍 ′ ), knowing that the change in R is negligible. Our results show that 1 kHz impedance increases indistinctly as the film contains more GOPS. This is in fact consistent with the trend in swelling. At low GOPS concentrations, the film uptakes more water (Figure 3a), suggesting that the ions of the electrolyte can more readily penetrate and travel inside the polymer film without significant accumulation at the polymer/electrolyte interface, leading to low impedance values. Likewise, less swelling at high GOPS content impedes ionic mobility in the film, resulting in higher impedance. Finally, we studied the effect of GOPS on the mechanical properties of PEDOT: PSS films in solution via Nano-indentation experiment using the tip of an atomic force microscope (Figure 4a). During the course of the experiment, the tip-sample distance is modulated and the subsequent interaction between the tip and the sample is monitored through the vertical displacement of the cantilever probe. Young's modulus can be extracted from such forcedistance curves using the appropriate model. In this experiment, we used a derived model of the Sneddon contact mechanics assuming a conical tip with a non-negligible radius of curvature at its apex in contact with a flat surface [29] . The applied force (F) and the indentation (h) are related with the Young modulus (E), the Poisson ratio ( ~ 0.5), the radius of curvature of the tip apex (R), and the half opening angle of the tip () as in the following: 𝐹 = 2𝐸 𝜋.(1- 2 ) {2𝑅ℎ[1 -𝑡𝑎𝑛(𝜃)] + ℎ 2 𝑡𝑎𝑛(𝜃)} (1) For each sample prepared, a series of 50 measurements was performed at different locations of the film. A typical challenge encountered for thin films is that the deformation of the film under the tip goes through continuous change by the presence of the hard substrate underlying the sample. This results in an overestimation of the Young moduli. In our case, the thickness of the films varies from 60 nm for the 0.05 wt% GOPS-cast film to 180 nm for the 5 wt% GOPS. In order to overcome challenges related to thin films, we performed the measurements at varying indentation depths. We found the Young's modulus to drastically increase with the indentation depth, reflecting the substrate effect and that it can therefore be minimized with low indentation depth experiments (Figure S3). Therefore, we limited the indentations to 10 nm. The roughness of the PEDOT:PSS/GOPS films was also estimated from the AFM images taken in water (Figure S4). Samples containing 0.05 and 3.5 wt.% of GOPS exhibited an average surface roughness of 1.46 and 1.43 nm (root mean square = 1.98 and 1.81 nm) respectively. These roughness values are much smaller than the chosen indentation depth (10nm). Figure 4b shows that the Young modulus of the films increases from 90 MPa for the 0.05 wt% of GOPS to 150 MPa for the 1 wt% of GOPS. Above 2.5 wt%, the cross-linker doesn't seem to influence the elasticity of the films (ca. 350 MPa). Moreover, we observe that the mechanical stiffness of PEDOT: PSS films decreases dramatically (by ca. 25x) in aqueous environment compared to air (Figure S5). The decrease in film stiffness in DI water is directly related to the swelling of the polymer. 
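To make the use of Equation (1) concrete, the sketch below fits the blunted-cone contact model to a force-indentation curve with SciPy, using ν = 0.5 and R = 8 nm as stated in the text. The half-opening angle is a placeholder (it is not given numerically here), contact-point determination and cantilever calibration are assumed to be handled upstream, and the synthetic curve merely stands in for a measured one restricted to ≤ 10 nm indentation.

```python
import numpy as np
from scipy.optimize import curve_fit

NU = 0.5                  # Poisson ratio assumed in the text
R = 8e-9                  # tip apex radius (m), from the experimental section
THETA = np.deg2rad(20.0)  # half-opening angle: placeholder value, not given in the text

def sneddon_blunted_cone(h, E):
    """Force (N) vs indentation h (m) for Young's modulus E (Pa), following Eq. (1)."""
    geom = 2.0 * R * h * (1.0 - np.tan(THETA)) + h**2 * np.tan(THETA)
    return 2.0 * E / (np.pi * (1.0 - NU**2)) * geom

# Synthetic force curve standing in for a measured indentation up to 10 nm:
h = np.linspace(0.0, 10e-9, 50)
rng = np.random.default_rng(2)
f_meas = sneddon_blunted_cone(h, 150e6) + rng.normal(0.0, 2e-11, h.size)

E_fit, cov = curve_fit(sneddon_blunted_cone, h, f_meas, p0=[1e8])
print(f"E ~ {E_fit[0]/1e6:.0f} MPa (+/- {np.sqrt(cov[0, 0])/1e6:.0f} MPa)")
```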
Our results are in agreement with those reported by GPa) [30][31][32] . These results suggest that GOPS is a versatile additive that can be used not only to improve the aqueous stability of films but also to modify their elasticity. As par these results, considering the use of PEDOT: PSS based OECTs for long term implantations in the brain [22] , we intended to evaluate the performance of devices over several days. The OECT that was selected for this test had GOPS content that led to films with optimal water uptake, softness and conductivity: 1 wt%. Figure S6a depicts the change in the transconductance of an OECT comprising 1 wt% GOPS in the channel over 21 days, within sub-chronic period, measured in PBS. In order to obtain stable current values at day 0 (prior to the performance evaluation experiments), the devices were incubated in water overnight followed by multiple current-voltage cycles. This enabled the dissolution/diffusion of low molecular components in the film into the solution. In fact, another study reported that when gold electrodes coated with electropolymerized EDOT were soaked in a buffer for several days, the impedance of some of these electrodes raised steadily associated with decreases in charge storage capacity. This decrease was attributed to partial delamination of the PEDOT coating in PBS [33] . Nevertheless, our devices (width = 10 µm, length = 5 µm, thickness = 275∓25 nm) showed stable transconductance values (gm= 7.3 ∓ 0.2 mS) over the course of this study. In a separate study, we found that the devices that contain even higher (3.5 wt%) GOPS in the channel had similar performance in stability, tested over 5 days in aqueous environment (Figure S6b). Overall, the results obtained from 5 different OECTs suggest that the films cast with 1% wt of GOPS maintain their structural integrity in aqueous media and but also exhibit adequate long-term electrical performance. Conclusions In this work we investigated the electrical, swelling, electrochemical, and mechanical properties of PEDOT: PSS films modified with varying amounts of the silane based crosslinker, GOPS. As the cross-linker content increases from 0.05 to 5 wt%, we observed a drop in the bulk conductivity from ca. 530 to 120 S.cm -1 , a decrease in the swelling from ca. 397% to 12%, and a relative increase in the electrochemical impedance (from ca. 15 to 20 Ohms at 1 kHz for films with thickness of 100 nm with a surface area of 96.7 mm²). The benefits of aqueous stability with GOPS are therefore to be compensated by losses in electronic transport and increase in the electrochemical impedance. Nevertheless, the presence of the cross-linker led to an increase in the mechanical strength of the films when they are hydrated (ca. 90 to 300 MPa in DI water for 0.05 and 5 wt% of GOPS in the dispersion, respectively), as these films uptake significantly less amount of water. We also emphasize the tolerance of PEDOT: PSS films to a large quantity of the cross-linker (only 18.5 wt% of PEDOT: PSS in the solution can lead to conductivity up to 100 S.cm -1 ). GOPS aids obtaining highly conducting films with excellent mechanical integrity in aqueous media. Moreover, devices that contain 1 wt% GOPS, which is a concentration that leads to film with high electrical conductivity with sufficient mechanical stability, exhibit steady performance over 3 weeks. 
These results suggest that variations in the concentration of such a dispersion additive like GOPS can enable facile co-optimization of electrical and mechanical properties of a conducting polymer film. Experimental Section Sample Preparation: PEDOT: PSS (Clevios PH-1000 from Heraeus Holding GmbH.), dodecyl benzene sulfonic acid (DBSA; 0.002 vol%), ethylene glycol (EG; 5 vol%) and GOPS (ranging from 0.05 to 5 wt%) were mixed, sonicated for 30 minutes at room temperature and then filtered using 1.2 µm hydrophilic syringe filters (Minisart, from Sartorius Stedim Biotech). The substrates were cleaned and exposed to plasma oxygen for 2 minutes at 100 Watts for surface activation and further cleaning. All films were spin cast at 2500 rpm for 40 sec. The films were then baked at 140°C for 1 hour. The thickness of PEDOT: PSS films was determined using a Dektak mechanical profilometer. Electrical and Electrochemical Characterization: We measured the sheet resistance (RS in Ω.sq -1 ) of PEDOT:PSS/ GOPS films cast on glass substrates using a four-point probe (Jandel RM3-AR). Given the film thickness (d), we could calculate the resistivity (ρ = RS × d, where ρ is resistivity in Ω.cm) from which the conductivity (1/ρ in S.cm -1 ) was obtained. Electrochemical impedance spectroscopy (EIS) was performed in NaCl solution (0.1M) via an impedance spectrometer (potentiostat/galvanostat, Metrohm Autolab B.V.) with a threeelectrode configuration, where the polymer-coated substrate is the working electrode, a Pt mesh is the counter electrode, and Ag/AgCl is used as a standard reference electrode. EIS was performed over a range of 10 kHz to 1Hz with an AC 10 mV sine wave, and a DC offset of 0 V. In order to extract capacitance (C), and the resistance, R, the spectra of films were fit to an (RC) equivalent circuit using NOVA software. OECT fabrication and characterization: OECTs were fabricated using photolithography, as previously described [34] . Briefly, 150 nm thick gold lines were patterned on a glass slide with impedance matching method reported for OECTs [21] . Swelling Measurements: The swelling of the thin polymer films was investigated by quartz crystal microbalance with dissipation set-up (QCM-D) (Q-Sense, from Biolin Scientific). PEDOT: PSS dispersions at a given GOPS concentration were spun-cast on cleaned goldcoated Q-sensors. They were then kept under vacuum overnight to ensure complete drying of the film. Filtered DI water or aqueous NaCl solution (0.1 M) were flown over the samples at 24 ℃ at a flow rate of 50 -100 µL.min -1 controlled by a peristaltic pump. The adsorbed mass (∆𝑚) can be approximately estimated from ∆𝑓 using the Sauerbrey equation: ∆𝑚 = -𝐶 ∆𝑓 𝑛 𝑛 ( 2 ) where 𝐶 is the mass sensitivity constant (17.7 ng.cm -2 Hz at 𝑓 = 5 MHz) and ∆𝑓 𝑛 is the change in resonance frequency at 𝑛 th overtone [35] . We used the 5 th overtone for our calculations. Given the initial thickness of the films, we could estimate the water uptake. Kelvin-Voigt viscoelastic model was also used (equations 3, 4, and 5) where G* is the complex shear modulus, ρ is the density (kg m -3 ),  is the viscosity (G"/) (kg ms -1 ), µ is the elasticity (G') (Pa), and δ is the thickness (m) [36] . G* = G' + jG'' = µ + j2πf (3) Δf = f1 (n, f, ρf, µf, δf) (4) ΔD = f2 (n, f, ρf, µf, δf) (5) Mechanical Characterization: Young's modulus was obtained from the force-curve measurements that were realized by using an NTEGRA AFM system (from NT-MDT). 
In all experiments AFM tips (NSC35 from Mikromash) were used with typical resonant frequency of 150 kHz, spring constants ranging from 5 to 12 N.m -1 and apex radius of 8 nm as verified by scanning electronic microscopy. For each tip, the spring constant was determined using the thermal noise method after obtaining the deflection sensitivity of the cantilever by pressing the AFM tip against a hard reference silicon surface. The measurements were all performed in water after allowing the samples to hydrate for 2 hours. S1813 photoresist, exposed to UV light using a SUSS MBJ4 contact aligner, and developed using MF-26 developer. Upon the deposition, a standard metal lift-off process in acetone was employed and gold interconnects and pads were insulated from the electrolyte by a 1.5 µm parylene C film deposited using a SCS Labcoater 2. A second sacrificial layer of parylene C was coated, patterned with AZ9260 photoresist, developed, and selectively etched by an CF6/O2 plasma using an Oxford 80 plus to define the transistor channel. Finally, PEDOT:PSS dispersion was cast and the sacrificial layer of parylene C was peeled, and the devices were baked at 110 °C for 1 hour.The PEDOT: PSS channel had a width/length (W/L) of 50 μm/50 μm. The transistors were operated in the common source configuration with a Ag/AgCl pellet electrode (Warner Instruments) immersed in NaCl solution (0.1M). The characterization was performed using a National Instruments PXIe-1062Q system. The gate bias was applied and controlled using a NI PXI-6289 modular instrument, and current recorded with either the NI PXI-4145 SMU or a NI PXI-4071 digital multimeter. The recorded signals were saved and analyzed using customized LabVIEW and MATLAB software. Hole mobilities were extracted using Figure 1 . 1 Figure 1. Electrical conductivity of PEDOT: PSS films cast from dispersions with GOPS Figure 2 .Figure 3 . 23 Figure 2. The transconductance (gm= 𝜕𝐼 D 𝜕𝑉 G ) of OECTs comprising PEDOT:PSS channels of Figure 4 . 4 Figure 4. a) Schematic of the AFM tip indenting PEDOT: PSS thin film b) The change in Figure S2 . S2 Figure S2. Hole mobility of PEDOT: PSS films, extracted from working OECTs, as a Figure S3 .Figure S4 . S3S4 Figure S3. Young's modulus diverging with the indentation depth of a PEDOT: PSS film Figure S5 . S5 Figure S5. Mechanical elasticity of PEDOT:PSS (5 wt% GOPS) in air (black) and in DI Table S1 . S1 Swelling ratios of PEDOT: PSS films in DI water and in NaCl solution (0.1 M) with varying GOPS content (using Sauerbrey and Kelvin-Voigt models). Standard deviation is calculated according to the swelling capacity estimated via different models. GOPS Swelling using Sauerbrey model (%) Swelling using Voigt model (%) Standard deviation Time to stabilize (wt %) (min) DI NaCl DI NaCl DI NaCl 0.05 283 397 364 480 58 57 66 1.0 266 386 387 497 78 86 103 2.5 113 136 224 248 79 78 61 3.5 10 17 128 130 80 83 38 5.0 2 12 127 138 89 88 31 Acknowledgements The work was financially supported by the Agence Nationale de la Recherche (grant number: ANR-14-CE08-0006). The authors would like to also thank David C. Martin for the fruitful discussions. Received: ((will be filled in by the editorial staff)) Revised: ((will be filled in by the editorial staff)) Published online: ((will be filled in by the editorial staff)) Copyright WILEY-VCH Verlag GmbH & Co. KGaA, 69469 Weinheim, Germany, 2013. Supporting Information Supporting Information is available from the Wiley Online Library or from the author. Supporting Information
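As a rough illustration of how the swelling percentages of Table S1 follow from the raw QCM-D data, the fragment below applies the Sauerbrey relation of Equation (2) (C = 17.7 ng cm⁻² Hz⁻¹ at 5 MHz, 5th overtone) to a frequency shift and converts the coupled mass into a water uptake relative to the dry film. The film and water densities are assumed values, the example frequency shift is invented, and the Kelvin-Voigt analysis used for the softer films is not reproduced.

```python
C_SAUERBREY = 17.7   # ng cm^-2 Hz^-1 at f0 = 5 MHz (from the experimental section)
OVERTONE = 5         # overtone used for the analysis

def coupled_mass(delta_f_n):
    """Areal mass uptake (ng/cm^2) from the frequency shift at the nth overtone (Hz)."""
    return -C_SAUERBREY * delta_f_n / OVERTONE

def swelling_percent(delta_f_n, dry_thickness_nm, rho_film=1.0, rho_water=1.0):
    """Water uptake as % of the dry-film mass; densities (g/cm^3) are assumed values."""
    dm = coupled_mass(delta_f_n)                      # ng/cm^2 of coupled water
    added_thickness_nm = dm / (rho_water * 100.0)     # 1 ng/cm^2 at unit density = 0.01 nm
    dry_mass = dry_thickness_nm * rho_film * 100.0    # ng/cm^2 equivalent of the dry film
    return 100.0 * dm / dry_mass, added_thickness_nm

# Example: a -450 Hz shift (5th overtone) measured on a 100 nm dry film
pct, dt = swelling_percent(delta_f_n=-450.0, dry_thickness_nm=100.0)
print(f"mass uptake ~ {pct:.0f} % of dry film mass (~ {dt:.0f} nm of water)")
```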
01554382
en
[ "phys.astr" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01554382/file/vergani_preprint.pdf
S D Vergani J Palmerio R Salvaterra J Japelj F Mannucci D A Perley P D' T Avanzo M Krühler S Puech Boissier P D'avanzo T Krühler M Puech S Boissier S Campana S Covino L K Hunt P Petitjean G Tagliaferri The chemical enrichment Keywords: Gamma-ray burst, general -Galaxies, abundances -Galaxies, star formation published or not. The documents may come Introduction It has been established that long gamma-ray bursts (LGRBs) are linked to the explosions of massive stars, both from the studies of their host galaxy formation sites (Fruchter et al. 2006;Svensson et al. 2010) as well as from detections of accompanying supernova emission (GRB-SN; see Cano et al. 2016 for a review). It is still not clear which conditions give rise to LGRBs or what is the relation between the progenitors of LGRBs and those of other explosions resulting from deaths of massive stars (e.g., Metzger et al. 2015). The progenitors of nearby core-collapse supernovae can be directly identified as resolved stars in archived high-resolution images of their birth places (Smartt 2015). However, LGRBs have a lower occurrence rate (e.g., Berger et al. 2003;Guetta & Della Valle 2007) and are usually observable at cosmological distances, for which their birth places cannot be resolved. Our understanding of LGRB progenitors therefore depends on linking the predictions of different stellar evolution models with the observed properties of LGRB multiwavelength emission (e.g., Schulze et al. 2011;Cano et al. 2016) and their host galaxy environment (see Perley et al. 2016a for a review). In this work, we focus on the latter. While metallicity is not the only factor that might affect the efficiency of the LGRB production (e.g., van den Heuvel & Portegies Zwart 2013; Kelly et al. 2014;Perley et al. 2016b), it has been one of the most studied in the past as the metal content of the progenitor star is considered to play a major role in the formation of a LGRB explosion. Single-star evolution models predict that the metallicity of LGRB progenitors should be very low (e.g., Hirschi et al. 2005;Yoon & Langer 2005;Woosley & Heger 2006): in this way the progenitor star can expel the outer envelope (hydrogen and helium are not observed spectroscopically) without removing too much angular momentum from the rapidly rotating core. Higher metallicity values are allowed in the case of the models presented by Georgy et al. (2012), also depending on the different prescriptions between the coupling of surface and core angular momentum in the star. Alternatively, the LGRB progenitors could be close interacting binaries, in which case the metallicity is a less constraining factor (e.g., Fryer et al. 2007;van den Heuvel & Yoon 2007). Strong observational constraints are clearly needed to understand which of the evolutionary channels could produce a LGRB. Different observational works on LGRB host galaxies in the literature have indeed revealed that their metallicities are mostly subsolar [START_REF] Modjaz | [END_REF]Levesque et al. 2010a;Graham & Fruchter 2013;Vergani et al. 2015;Krühler et al. 2015;Perley et al. 2016b;Japelj et al. 2016). The evidence is corroborated by numerical simulations (e.g., Nuza et al. 2007;Campisi et al. 2011;Trenti et al. 2015). In particular, Campisi et al. Article number, page 1 of 4 arXiv:1701.02312v1 [astro-ph.HE] 9 Jan 2017 (2011) studied LGRB host galaxies in the context of the mass metallicity (e.g. Tremonti et al. 2004) and fundamental metallicity (Mannucci et al. 2010(Mannucci et al. 
, 2011) ) relations of field star-forming galaxies by combining a high-resolution N-body simulation with a semi-analytic model of galaxy formation. Campisi et al. (2011) find that a very low metallicity cut is not necessary to reproduce the observed relations. However, previous observational works present one or more of the following issues: (i) they are based on incomplete biased samples (e.g., Levesque et al. 2010a); (ii) they are based on stellar masses directly determined from observations, but on metallicities inferred from the mass-metallicity relation (e.g., Perley et al. 2016b); (iii) they use metallicities directly determined from the observations, but do not consider the stellar masses (e.g.: Krühler et al. 2015); and (iv) they are based on samples limited to small redshift ranges (e.g., 0 < z < 1) as in Japelj et al. (2016). In this paper we study the metallicity of the host galaxies of the complete Swift/BAT6 sample (Salvaterra et al. 2012) of LGRBs at z < 2, visible from the southern hemisphere. Combining the observed properties with simulations, we study their behavior in the stellar mass -metallicity relation (MZ) and fundamental metallicity relation (FMR). After the description of the sample and new data (Section 2), we present the results in Section 3 and discuss them in Section 4. All errors are reported at 1σ confidence unless stated otherwise. We use a standard cosmology (Planck Collaboration et al. 2014): Ω m = 0.315, Ω Λ = 0.685, and H 0 = 67.3 km s -1 Mpc -1 . The stellar masses and star formation rates (SFR) are determined using the Chabrier initial mass function (Chabrier 2003). The sample Our sample is composed of the 27 host galaxies of the Swift/BAT6 complete sample of LGRBs at z < 2 with declination Dec < 30 • . As the spatial distribution of GRB is isotropic, this restriction does not introduce any bias in our results. The choice to select only the LGRBs that are well observable from the southern hemisphere was due to the availability of the Xshooter spectrograph (Vernet et al. 2011) at the ESO VLT (Very Large Telescope) facilities, which, thanks to its wide wavelength coverage, makes possible the detection of the emission lines necessary to determine the SFR and metallicity of the host galaxies at z < 2. In particular, metallicity is available for 81% of the sample (an estimate of the metallicity was not possible for five host galaxies only). As the original Swift/BAT6 sample is selected essentially only on the basis of the LGRB prompt γ-ray flux, and no other selection criterion is applied when gathering the galaxy sample (except the southern hemisphere visibility), our sample does not suffer of any flux bias. Indeed, no correlation has been found between the prompt γ-ray emission and host galaxy properties (see e.g.: Levesque et al. 2010b;Japelj et al. 2016). Furthermore, dark bursts are correctly represented in the sample (see Melandri et al. 2012). The restriction to the southern hemisphere at z < 2 maintains this condition, with 26% of LGRB of the sample being dark. For the part of the sample at z < 1, Vergani et al. (2015) and Japelj et al. (2016) report the tables with the objects in the sample and their properties (including stellar masses, SFR and metallicity). The restriction to the Dec < 30 • excludes GRB 080430 and GRB 080319B from the sample used in this work. The properties (redshift, stellar mass, SFR, and metallicity) of the 1 < z < 2 part of the sample are reported in Table 1. The stellar masses were taken from Perley et al. 
(2016b), with the Notes. There are 4 LGRBs in the 1 < z < 2 sample for which we could not determine the metallicity of their host galaxies: GRB 050318, GRB 050802, GRB 060908, and GRB 091208B. Indeed, there are no useful spectra to this purpose for the host galaxies of GRB 091208B and GRB 050318. For the host galaxies of GRB 050802 and GRB 060908 we obtained X-shooter spectroscopy (Prog. ID: 097.D-0672; PI: S.D. Vergani), but the spectra do not show sufficient emission lines to allow the metallicity determination. * : from new/unpublished X-shooter observations presented in this paper (see Table 3). exception of the host galaxies of GRB 071117 and GRB 080602, which are not part of the Perley et al. (2016b) sample, and for which we determined the stellar masses using Spitzer observations and the same prescription as Perley et al. (2016b). The host of GRB 071117 lies very close (∼ 2 �� ) to a red galaxy, and, therefore, the spatial resolution of the Spitzer observations allowed us to obtain only an upper limit on its infrared flux. We therefore also performed a spectral energy distribution fitting using the host galaxy photometry (see Table 2) following the same prescriptions as Vergani et al. (2015), and found log(M � /M � )∼ 9.9. Notes. The g, r, i, z magnitudes have been determined from GROND (Greiner et al. 2008) observations, whereas for the K value we used VLT/HAWKI observations (Prog. ID: 095.D-0560; P.I.: S.D. Vergani). The SFR values were taken from Krühler et al. (2015) with the exception of the host galaxies of GRB 061007, GRB 061121 and GRB 071117, not included in that work. We obtained the VLT/X-shooter spectroscopy of these three host galaxies (ESO programs 095.D-0560 and 085.A-0795, PI: S.D. Vergani and H. Flores, respectively). We processed the spectra using version 2.6.0 of the X-shooter data reduction pipeline [START_REF] Modigliani | Observatory Operations: Strategies, Processes, and Systems III[END_REF], following the procedures described in Japelj et al. (2015). The measured emission line fluxes are reported in Table 3. We determine the SFR from the Hα fluxes (corrected by the extinction determined through the Balmer ratio), with the same prescriptions as Krühler et al. (2015). Following the same prescription as in Japelj et al. (2016), we determined the metallicity of the objects in the sample with the Maiolino et al. (2008) method on the strong emission line fluxes reported in the literature (Piranomonte et al. 2015;Krühler et al. 2015) or on those measured by us; in the relevant cases, the results are consistent within errors to those already reported in the literature. Mannucci et al. (2010Mannucci et al. ( , 2011)). The dark blue curve and area correspond to FMR relation and of its quartiles obtained using the simulation of Campisi et al. (2011). The cyan curve and area correspond to the best-fit model results. In Fig. 1 we plot the host galaxies of our sample in the MZ and FMR spaces. The dearth of high metallicity galaxies is evident as well as the fact that there are more massive galaxies at the higher redshifts (1 < z < 2) than at z < 1. At low stellar masses (log(M * /M � ) < 9.5) there is some agreement with the MZ relation and FMR found for general starforming galaxy populations (see also Japelj et al. 2016), whereas massive LGRB host galaxies are clearly shifted toward lower metallicities than predicted by the general relations. 
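The SFRs listed in Table 1 are derived from extinction-corrected Hα fluxes. The sketch below shows the generic form of such a calculation: a Balmer-decrement dust correction followed by an Hα-luminosity-to-SFR conversion, using the cosmology quoted in the text. The attenuation coefficients (k_Hα ≈ 2.53, k_Hβ ≈ 3.61 for a Calzetti-like curve), the intrinsic ratio Hα/Hβ = 2.86, the Kennicutt-type coefficient and the approximate factor 1.7 rescaling to a Chabrier IMF are standard assumptions and are not necessarily the exact prescriptions of Krühler et al. (2015) followed in the paper; the example fluxes are placeholders, not values from Table 3.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology as adopted in the paper (Planck values quoted in the text)
cosmo = FlatLambdaCDM(H0=67.3, Om0=0.315)

K_HA, K_HB = 2.53, 3.61      # Calzetti-like attenuation at Halpha / Hbeta (assumed curve)
BALMER_INTRINSIC = 2.86      # case-B intrinsic Halpha/Hbeta

def sfr_from_halpha(f_ha, f_hb, z):
    """SFR (Msun/yr) from observed Halpha and Hbeta fluxes (erg s^-1 cm^-2); illustrative only."""
    ebv = max(0.0, 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / BALMER_INTRINSIC))
    f_ha_corr = f_ha * 10 ** (0.4 * K_HA * ebv)          # dust-corrected Halpha flux
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    l_ha = 4.0 * np.pi * d_l**2 * f_ha_corr               # Halpha luminosity (erg/s)
    return 7.9e-42 * l_ha / 1.7                            # Kennicutt-type, ~Chabrier IMF

# Example with placeholder fluxes (not values from Table 3):
print(f"SFR ~ {sfr_from_halpha(f_ha=20e-17, f_hb=5e-17, z=1.26):.1f} Msun/yr")
```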
While the MZ relation evolves in redshift, the FMR has the advantage that it is redshift independent in the redshift range considered here, hence strengthening the statistics of our results. For the general population of star-forming galaxies with log(M � ) -0.32log(SFR)� 9.2, the FMR is valid up to z ∼ 2.2, has been defined over SFR and stellar mass ranges encompassing those of the host galaxies in our sample, and has a smaller scatter (0.06 dex) than the MZ relation Mannucci et al. (2010Mannucci et al. ( , 2011)). To verify that our results are independent of the method used to determine the metallicity, we used the Kobulnicky & Kewley (2004) R23 method to determine the metallicities of the 21 host galaxies for which the relevant lines to use this metallicity indicator are available. The resulting MZ plot confirms the avoidance of super-solar metallicity and the shift of high stellar mass host galaxies toward lower metallicity than those found for general star-forming galaxy populations at similar stellar masses and redshifts. We stress that the five galaxies in the sample for which we could not determine the metallicity (GRB 050318, GRB 050525, GRB 050802, GRB 060908, and GRB 091208B) are all faint galaxies, not hosting dark GRBs, and with stellar masses log(M * /M � )< 9.2 (three of these galaxies have log(M * /M � )< 8.7 ; see Vergani et al. 2015;Perley et al. 2016b). A super-solar metallicity for a large portion of these host galaxies is therefore extremely unlikely. For two of these galaxies (GRB 050525 and GRB 050802) SFR limits are available (Japelj et al. 2016;Palmerio et al. in preparation). Under the conservative hypothesis that they follow the FMR relation, we can derive limits on their metallicities from their SFR and stellar masses of 12 + log(O/H) < 8.1, 8.4, respectively. We further investigate the implications of our observational results by comparing them with the expectations of a dedicated numerical simulation of the LGRB host galaxy population presented in Campisi et al. (2009Campisi et al. ( , 2011)), coupling high resolution numerical simulation of dark matter with the semi-analytical models of galaxy formation described in De Lucia & Blaizot (2007). Previous work (De Lucia et al. 2004) has shown that the simulated galaxy population provides a good match with the observed local galaxies properties and relations among stellar mass, gas mass and metallicity. Moreover, Campisi et al. (2011) shows that the simulations nicely reproduce the observed FMR of SDSS galaxies and its spread. Following Campisi et al. (2011) we compute the expected number of LGRBs hosted in each simulated galaxy, assumed to be proportional to the number of shortliving massive stars (i.e., star particles less than 5 × 10 7 yr in age), applying different metallicity thresholds (Z th ) for the GRB progenitor, with probability equal to one below Z th and zero otherwise. We construct the FMR of simulated hosts in the redshift range z = 0.3 -2 and we determined the best-fit value of Z th by minimizing the χ 2 against the BAT6 host data in the same redshift interval. The best-fit model (see Fig. 1) is obtained for Z th = 0.73 +0.08 -0.07 Z � (1σ errors). This is consistent with indirect results inferred from the distribution of the LGRB host stellar masses at z < 1 (Vergani et al. 2015) or of the infrared luminosities over a wider redshift range (Perley et al. 2016b). 
Discussion and conclusions In this paper we considered the properties of the host galaxies of the complete Swift/BAT6 sample of LGRBs (Salvaterra et al. 2012) that are visible from the southern hemisphere and at z < 2. We studied them with respect to the MZ and FMR relation of field star-forming galaxies. This is the first study considering at the same time the SFR, metallicity (both directly determined from the host galaxy spectroscopy), and stellar masses for a complete sample of LGRBs and on a large redshift range. Furthermore, we use LGRB host galaxy simulations to interpret our results. Thanks to the sample extension to z ≈ 2, we could double the sample size compared to Japelj et al. (2016) and show for the first time that LGRB host galaxies do not follow the FMR. We find that LGRBs up to z ≈ 2 tend to explode in a population of galaxies with subsolar metallicity (Z ∼ 0.5-0.8 Z � ). Our results are well reproduced by LGRB host galaxy simulations with a metallicity threshold for the LGRB production of Z th ∼ 0.7 Z � . A&A proofs: manuscript no. VerganiHGz2LE1c Table 3. Emission line fluxes (corrected for MW absorption) of the host galaxies of GRB 061007, GRB061121, and GRB071117 in units 10 -17 erg s -1 cm -2 . Upper limits are given at the 3σ confidence level. Notes. (a). Line strongly affected by a sky line. To determine the host galaxy properties we fixed its value to [O ii]λ3729/1.5 (low electron density case; Osterbrock 1989). (b) Lines falling on too noisy regions to determine a significant upper limit. (c). The line is contaminated by a sky line. The flux has been determined by a Gaussian fit, using the part of the line not contaminated by the sky. Although strong metallicity gradients (> 0.1 -0.2 dex) are unlikely (on the basis of low-redshift, spatially resolved LGRB host galaxies observations; Christensen et al. 2008;Levesque et al. 2011;Kruhler et al. in preparation), we cannot exclude that they are at play in the couple of galaxies showing evidences of super-solar metallicities (as, e.g., in the case of GRB 060306; see also Niino et al. 2015). The existence of some super-solar hosts may as well indicate, however, that the formation of LGRBs is also possible above the general threshold, although at much lower rate. Applying smoother cutoffs to the metallicity, instead of the step function used here, shifts Z th toward lower values depending on the functional shape used. The present statistics does not allow us to discriminate between different cutoff shapes, therefore we do not go into further detail. We point out however that none of them succeed in reproducing the super-solar metallicity value. It should also be stressed that the GRB 060306 metallicity is very uncertain with pretty large error bars. The relatively high metallicity threshold found in this work is much higher than required from standard collapsar models (but see Georgy et al. 2012). Binary stars are a possible solution as progenitors, although detailed models studying the role of metallicity on the fates of binary stars are missing. However, it is important to note that the metallicities determined using strong emission lines are not absolute values (see Kewley & Ellison 2008). In our case, they are relative to the Kewley & Dopita (2002) photoionization models on which the Maiolino et al. (2008) method is based. 
On the one hand, some works seem to indicate that those models may overestimate oxygen abundances by ∼ 0.2-0.5 dex compared to the metallicity derived using the so-called direct T e method (see e.g., Kennicutt et al. 2003;Yin et al. 2007). On the other hand, other works (see e.g., López-Sánchez et al. 2012;Nicholls et al. 2012) found that the oxygen abundances determined using temperatures derived from collisional-excited lines could be underestimated by ∼ 0.2-0.3 dex. In principle, the simulations should be independent of these models and therefore the curves derived in this work from simulations should not be affected by this issue. The Z th ∼ 0.7 Z � threshold should not be considered, therefore, as an absolute value. Nonetheless, to be in agreement with the metallicities (Z≤ 0.2 Z � ) needed in most LGRB single massive star progenitor models, all the metallicities presented here should be systematically overestimated, most of them by at least ∼ 0.5 dex. Fig. 1. Top panel: MZ plot. The dots correspond to the host galaxies of the Swift/BAT6 sample of LGRBs at z < 2, color coded depending on their redshift as shown in the right bar. The lines correspond to the relations found for field galaxies at the redshift indicated next to each line. Bottom panel: The FMR plane. The dots correspond to the host galaxies of the Swift/BAT6 sample of LGRBs at z < 2, color coded depending on their redshift as shown in the right bar. The gray line corresponds to the FMR found byMannucci et al. (2010Mannucci et al. ( , 2011)). The dark blue curve and area correspond to FMR relation and of its quartiles obtained using the simulation ofCampisi et al. (2011). The cyan curve and area correspond to the best-fit model results. Table 1 . 1 Swift/BAT6 sample of LGRB host galaxies at 1 < z < 2 with metallicity determination, visible from the southern hemisphere. Host galaxy GRB080413B GRB090926B GRB061007 * GRB061121 * GRB071117 * GRB100615A GRB070306 GRB060306 GRB080605 GRB080602 GRB060814 redshift 1.1012 1.2427 1.2623 1.3160 1.3293 1.3979 1.4965 1.5597 1.6408 1.8204 1.9223 Log(M � /M � ) 9.3 10.28 9.22 10.31 < 10.12 9.27 10.53 10.5 10.53 9.99 10.82 SFR [M � yr -1 ] 2.1 +3.1 -1.2 26 +19 -11 5.8 +4.8 -4.8 44.2 +19 -10 > 2.8 8.6 +13.9 -4.4 101 +24 -18 17.6 +83.6 -11 47.0 +17 -12 125.0 +145 -65 54.0 +89 -19 Metallicity 12 + log(O/H) 8.4 +0.2 -0.2 8.44 +0.18 -0.20 8.16 +0.18 -0.13 8.5 +0.09 -0.06 8.4 +0.15 -0.09 8.14 +0.26 -0.22 8.45 +0.08 -0.08 9.12 +0.18 -0.42 8.46 +0.08 -0.08 8.56 +0.2 -0.3 8.38 +0.14 -0.28 Table 2 . 2 Observed AB magnitudes (corrected by the Milky Way extinction) of GRB 071117 host galaxy. Host galaxy GRB 071117 g 24.4 ± 0.1 r 24.7 ± 0.2 i 24.8 ± 0.3 z > 24.4 K 22.9 ± 0.2 Acknowledgements. This work is based in part on observations made with the Spitzer Space Telescope (programs 90062 and 11116), which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. SDV thanks M. Rodrigues, H. Flores and F. Hammer for useful discussions. JJ acknowledges financial contribution from the grant PRIN MIUR 2012 201278X4FL 002. TK acknowledges support from a Sofja Kovalevskaja Award to Patricia Schady. We thanks G. Cupani for sharing his expertise on Xshooter data reduction.
01774078
en
[ "phys.cond.cm-ms", "phys.cond.cm-scm" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01774078/file/Manuscript-R1.pdf
Ahmad Kenaan Racha El Zein Dr Volkan Kilinc Prof Sébastien Lamant Jean-Manuel Raimundo email: [email protected] Anne M Charrier email: [email protected] Jean-Manuel Raimundo Dr A Kenaan Dr R Elzein Dr J.-M Raimundo Ultra-thin supported lipid monolayer with unprecedented mechanical and dielectric properties Keywords: Supported lipid monolayer, ultra-thin dielectric, mechanical properties, dielectric properties, lipid engineering ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction With the development of flexible and printed electronics, organic dielectrics have been investigated extensively in the past decade. [1,2] They are chiefly used in organic (OFETs) and thin film (TFT) transistors as gate dielectric materials with performances combining low leakage current, high breakdown strength, large capacitance, good mechanical flexibility and stability which in some cases near or surpass those of amorphous (a-Si) or poly crystalline (c-Si) silicon. [3,4] Moreover for low power operations in TFT it is meaningful to maintain the dielectric thickness as thin as possible. [5] To this aim, nano-dielectrics such as nanoparticles or selfassembled monolayers or multilayers have been recently developed and implemented in OFETs. [6][7][8][9] Another class of material, the supported lipid layers, with thicknesses of a few nanometers also constitutes good candidates. In living cells lipid membrane forms indeed a natural insulator [10] which plays an efficient role as barrier to both ionic and electronic transport associated with an electrical resistance of the order of several giga-Ohms in magnitude. [11,12] These unique properties make them very attractive to be used as ultra-thin dielectric layer in electronic devices and have recently raised interest, marked by the increasing number of studies reporting the formation of lipid layers on various substrates such as silicate, [13] H-terminated silicon surface, [14,15] gold, [16] alkylated surfaces [17] and more recently graphene, [18][19][20] grapheneoxide [20] or polymeric substrates such as PEDOT:PSS. [21] So far in most studies lipid layers have been used as biocompatible interfaces or as ionic barriers in field effect transistors to study ion channels in lipid membranes. [22][23][24][25] However, despite excellent insulating properties, lipid bilayers and even more lipid monolayers have been poorly exploited in devices due to their inherent instability under application of an electric field, leading to damages caused mainly by an electroporation process occurring at low electric field (1 MV/cm). [26][27][28][29][30] Furthermore a lack of mechanical stability is often observed. Layers are stabilized on substrates by van der Waals interactions whereas the layer integrity is maintained by hydrophobic forces between the lipids aliphatic chains. Drying off or exposing the sample to solvents usually induces lipid layer destruction, therefore limiting its use to applications in solution as well as its storage. Several strategies have been carried out in order to overcome these issues including the direct substrate surface bonding [31,32] or the internal crosslinking within the plane of the lipid layer. 
[20,33] Interestingly, it was previously shown that the mechanical force required to disrupt a selfassembled monolayer made from 1,2-bis-(10,12-tricosadiynoyl)-sn-glycero-3-phosphocholine (DC8,9PC or DC-PC) (SLM, Supported Lipid Monolayer) can be aptly improved, by a factor of ten, after a radical reticulation within the plane of the monolayer. [19] This property has been successfully exploited in the fabrication of a field effect transistor sensors based on modified SLMs, as ultra-thin gate dielectric layers, directly immobilized at the surface of a H-terminated silicon-based channel. [34,35] Additional experiments performed on SLM's arrays fabricated using harsh lift-off processes, and used as capacitive sensing platform clearly confirmed these improvements. [36] In these studies, SLMs were subjected to electric fields up to 6 MV/cm, i.e. to values much higher than those at which electroporation occurs. Surprisingly these results seem to indicate that the mechanical stability was accompanied with a substantial increase of the dielectric stability suggesting that the cross-linking approach may constitute a beneficial strategy to achieve powerful ultrathin dielectric and must be pursued. In the present work we aim at studying the relationship between mechanical and dielectric stabilities of SLMs and at investigating new routes to further strengthen them. Dense SLMs on Si-H surfaces are readily obtained according to a synthetic procedure developed in our group from the vesicle fusion method. [18,19] Subsequently an innovative two-step process is used to mechanically stabilize the dielectric monolayer. This process encompasses two consecutive crosslinking reactions respectively at the inner and outer side of the monolayer. At each step, mechanical and electrical stabilities were carefully investigated. Thereby, mechanical stability is ascertained by quantifying the force necessary to rupture the monolayer from indentation measurements using an atomic force microscope (AFM) and the dielectric properties is determined by measuring both the leakage current, the dielectric strength and the lifetime at a constant electric field. Results and discussion Making of the lipid monolayers Several different SLMs were obtained from pristine DC-PC, a commercially available lipid, and from modified ones (Table 1). DC-PC lipids have been chosen due to their intrinsic structural properties that can be useful for subsequent changes both at the inner and outer parts of the supported monolayers. The phosphocholine head group of the DC-PC derivative can be easily cleaved with a phospholipase [34] affording a free hydroxyl group (DC-Glycerol) that can be further converted to other functions or appropriately modified. For instance, in the course of this study DC-MTS and DC-OTS layers were achieved by reacting the free OH group with methyltrichlorosilane (MTS) or Trichloro(octadecyl)silane (OTS) respectively. The genesis and properties of these novel dielectric layers will be discussed below. In addition, DC-PC lipids possess in their aliphatic chain acetylenic groups that can be used to reticulate the lipids in the plane of the layer in order to ensure a greater molecular cohesion and therefore stability. This crosslinking, named herein reticulation 1, consists of a radical reticulation reaction initiated by AAPH (2,2'-Azobis(2-methylpropionamidine) as a heat-sensitive water-soluble free radical initiator. 
At this stage the supported lipid monolayer exhibits sufficient stability to be rinsed with solvent or dried and can be further easily manipulated. [36] Using this procedure, DC-Glycerol-R monolayers were obtained and have been used as the starting point to generate the DC-MTS and DC-OTS layers as depicted in Figure 1. The supported DC-Glycerol monolayer was first crosslinked using the reticulation 1 process to give DC-Glycerol-R and then secondly the free OH groups were reacted with the alkyltrichlorosilane derivatives leading to a second reticulation (reticulation 2) at the outer side of the SLM resulting in end-capping the dielectric layer. As reference, a non-reticulated supported DC-Glycerol monolayer was also made and tested. Homogeneity of the different lipid layers is shown on the AFM images (Fig. 1(A'-C')) with a surface roughness below 0.7 nm in all cases. Their thickness was determined by making a hole across the layer with an AFM tip and reported in Table 1. All results are in perfect agreement with what is expected according to the length of the deposited lipid. Effect reticulation on the lipid monolayer mechanical properties The effect of reticulation on the mechanical stability of the layers was investigated using DCglycerol, DC-Glycerol-R, and DC-MTS layers. These layers have roughly the same thickness and differ mainly by the number of reticulations. As a probe of stability we measured the normal force that needs to be applied to the layer by an AFM tip to make the tip rupture the layer. For each layer, a minimum of 240 independent indentation measurements were performed on at least three samples. All experiments were done in water in order to minimize capillary forces. During indentation measurement, the layer may resist to the tip penetration until the force is large enough to break-through the layer. [19,37] This rupture is evidenced on the force curves as a jump of the tip to the substrate surface at a given loading force (see Figure 2). Figure 2B summarizes the results and reports the average rupture forces that were extracted from the force measurements for each layer (the corresponding data points are shown in Figure S1). In the case of DC-Glycerol, a very small mechanical resistance is observed and the tip can easily penetrate throughout the layer. From 400 measurements, only 113 times rupture forces were measurable, the rest of the curves showing no resistance of the layer to the tip (the rupture force was smaller than the resolution of our experimental setup and hence could not be measured). Nevertheless, based on the measurable data, an average breakthrough force was found at a value of 0.25±0.17 nN, which therefore is over-estimated. After reticulation 1, i.e. internal reticulation, the rupture force measured on DC-Glycerol-R is enhanced by a factor of 6 (1.59±0.65 nN), demonstrating a significant effect of the reticulation process on the monolayer resistance to normal forces. In the case of DC-MTS layer, the rupture force is further raised by a factor of 2.3 to a value of 3.74±1.06 nN with respect to DC-Glycerol-R. Furthermore, the thicknesses of the layers before and after each reticulation process remain unchanged, proving that the enhancement of the mechanical stability comes undoubtedly from the reticulation and not, for instance, from an increase of the layer thickness [38] (Here the layer thickness is determined for each point as the tip jump distances obtained experimentally from the force measurements, see Figure S1) . 
It is generally accepted that the rupture of a layer is an activated process. An energy barrier has to be overcome which corresponds to the activation energy for the formation of a hole in the layer which is large enough to initiate tip penetration. [38] For DC-PC layers, it was shown in a previous work that this activation energy which increases with the indentation rate, by a factor of 3 after reticulation 1 (DC-PC-R) is related to the lipids diffusion coefficient in the layer. [18] Moreover, El Zein et al. estimated the reduction of diffusion coefficient from ~10 -12 cm²/s to ~10 -14 cm²/s after reticulation 1 which was associated to the formation of nano-domains or chains of reticulated lipids. [START_REF] Zein | Doctorate thesis: Solid supported lipid monolayer: From biophysical properties to sensor application[END_REF] With the additional reticulation 2, the size of the nano-domains/chains is expected to increase hence reducing the diffusion coefficient of the lipids even more. This could explain the increase of the force needed to induce the layer rupture with the consecutive reticulations. Dielectric properties of reticulated layers In a different set of experiments we investigated the effect of reticulation on the lipid monolayers dielectric properties by measuring both, monolayers leakage current and lifetime when exposed to a constant electric field. Densities of leakage current The measurements were performed in air and we focused on the reticulated layers, i.e., DC-Glycerol-R, DC-MTS and DC-OTS. Examples of JV curves (Current density (J) vs Voltage (V)) are shown in Figure S2 for the three types of layers for V (Starting from 0V, the measurement was stopped after dielectric breakdown occurred, i.e. V max varied between 2 and 15 V depending on the measurement). In contrast with DC-MTS and DC-OTS which J-V curves are very reproducible, for DC-Glycerol-R a high variability is observed from curve to curve and the signal is quiet noisy, hence revealing lesser stability. From the median curves (Figure 3A), at low voltage, all three layers show similar J with values lower than 10 -7 A/cm². As the voltage is increased, J also increases linearly with V in different proportions with J DC-Glycerol-R > J DC-MTC > J DC-OTS reaching values of ~10 -5 A/cm², ~5.10 -7 A/cm² and ~5.10 -8 A/cm² respectively at 2 V. This effect gets even more pronounced at higher voltage; at 4V, DC-Glycerol-R reaches a saturation current (limited by the apparatus) which corresponds to the dielectric breakdown of the layer. In contrast J remains low for both DC-MTS and DC-OTS with values of J of ~5.10 -6 A/cm² and ~1.10 -7 A/cm² respectively. An important difference in median absolute deviation can be also noted with high values for DC-Glycerol-R and much lower values for DC-MTS and DC-OTS measurements, therefore reflecting the variability in DC-Glycerol-R measurements and the stability of DC-MTS and DC-OTS. This is emphasized in the average current densities (Figure S2D) with higher differences even at low voltage. DC-Glycerol-R and DC-MTS have similar length and only differentiate by the reticulation 2 at the head-group. The decrease in J of several orders of magnitude hence shows the effect of additional reticulation 2 on improving the insulating properties of the lipid monolayer. This improvement is even more enhanced with DC-OTS with an additional gain of 1 order of magnitude. 
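To put these current densities on an absolute scale, they can be converted into measured currents using the typical mercury-drop contact area quoted in the Experimental section (about 2.5 × 10⁻⁶ cm²); the figures below are an order-of-magnitude illustration only, not values reported in the study:

\[
I = J \times A:\qquad 5\times10^{-8}\,\mathrm{A\,cm^{-2}} \times 2.5\times10^{-6}\,\mathrm{cm^{2}} \approx 1.3\times10^{-13}\,\mathrm{A}\;(\approx 0.1\,\mathrm{pA}),
\]
\[
\phantom{I = J \times A:\qquad} 10^{-5}\,\mathrm{A\,cm^{-2}} \times 2.5\times10^{-6}\,\mathrm{cm^{2}} \approx 2.5\times10^{-11}\,\mathrm{A}\;(\approx 25\,\mathrm{pA}).
\]

Leakage currents in the 0.1–25 pA range are at the limit of what standard source-measure setups resolve, which is consistent with the use of an HP 4140 pico-ammeter for these measurements (see the Experimental section).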
The difference between DC-MTS and DC-OTS arises from the length of the aliphatic chains in the MTS and OTS groups from 1 to 18 carbons respectively. -(C-C) n -chains are known to be good insulators [START_REF] Venkataraman | [END_REF]41] and are therefore responsible for the decrease of the leakage current. The results reported herein are rather unique; comparing with dielectrics of similar thickness as the densities of leakage current are at least 1 order of magnitude lower than those reported for oxide layers produced by rapid thermal processing (RTP) (10 -6 A/cm² at 1V for a 2.2 nm thick oxide layer) [42] and 3 orders of magnitude lowers than for high- dielectrics (10 -4 A/cm² at 0.5V for a 4 nm thick HfO 2 layer on graphene). [43] The difference between DC-Glycerol and DC-MTS can be explained by considering the lipid molecules as flexible chains and the inter-molecular bonds due to reticulation as consolidating structures. In DC-Glycerol-R, the upper part (above the reticulation bonds) of the lipid layer can splay under the electric field, [31] hence creating opened and disorganized regions with reduced insulating performances. It was indeed shown that for self-assembled monolayers of long alkyl chains the leakage current density varies drastically with the chain length (for small chain length it is mainly due to electron tunneling through the layer [44] ) as well as with the compacity and ordering of the layer. [45] In DC-MTS, the upper part is stabilized by the reticulation 2 and prevents splaying, thus conserving an effective aliphatic chain of 20 carbons (~2 nm). For The high variability observed in DC-Glycerol-R J-V curves could therefore be explained by a disorganization of the upper part of the lipids under the electric field, hence confirming the drastic effect of the reticulation as a stabilizing process. Lifetime of lipid monolayers under a constant electric field: Determination of dielectric strength The second parameter we tested deals with the lifetime of the lipid layer under an electric field (E). This parameter is crucial for devices applications to estimate their life expectancy. To this aim constant voltage is applied to the layer and current is monitored until dielectric breakdown occurs. The latest shows as a sudden rise of the current leakage by several orders of magnitude (See Figure S3). To decipher the role of the reticulation on the dielectric strength and lifetime, measurements were performed on DC-Glycerol, DC-Glycerol-R, and DC-MTS layers that differ mostly by the number of reticulations. The results are reported in Figure 3B. In the following the direct breakdown electric field (E DB ) is defined as the dielectric strength at which the monolayer breaks within less than one minute at a given electric field value. For DC-glycerol, E DB is very low, it occurs at typically 1-2 MV/cm, suggesting that such monolayer is hardly usable as a dielectric in a device. Such value coincides indeed with those that have been reported in the literature for the electroporation of lipid layers. [29,46,47] For DC-glycerol-R and DC-MTS monolayers the lifetime was measured for E in the range [10-23 MV/cm] and [27-55 MV/cm] respectively. For both these layers the lifetime decreases exponentially with increasing E as indicated by the corresponding curve fits (yellow and blue lines respectively). Such an exponential decay of the lifetime with the electric field was already reported for thin polymer films. 
[48] Remarkably, both curves are strongly shifted towards high E and exhibit much higher values of E DB reaching ~20 MV/cm and ~35 MV/cm for DC-Glycerol-R and DC-MTS respectively. To evaluate the effect of the aliphatic chain length on dielectric breakdown, similar measurements were performed with DC-OTS layers. The lifetime displayed versus the electric field (Figure S4 (a)) shows a similar curve as described before for DC-Glycerol-R and DC-MTS with an E DB at ~20 MV/cm, i.e. much smaller than for DC-MTS. However, by plotting the lifetime versus the applied voltage both DC-MTS and DC-OTS are perfectly superimposed (Figure S4 (b)). Interestingly, these results show that the length of the aliphatic chain at the headgroup in DC-OTS although playing a major role in reducing the leakage current, does not impact on the breakdown voltage. The latter is hence determined by the number of reticulations. Remarkably, for all cases, the values of E DB are much higher than those reported for silicon oxide, [49] with a layer of same thickness, which fall in the range of 10-15 MV/cm or high- dielectric layers which vary approximately as the reverse of the dielectric constant (For example E DB of 5.7 MV/cm and 11.5 MV/cm were reported for a 59 nm HfO 2 layer and a 63 nm layer of Si 3 N 4 respectively) . [50,51] In addition, the electrical energy per unit area for dielectric breakdown can be calculated for each surface using E DB according to 𝑇 ̅ = 𝑇 + 1 2 ⁄ 𝜀𝜀 0 𝐸 𝐷𝐵 2 ℎ , [28,30] with T the average layer tension considered equal to zero,  0 the vacuum permittivity, and h the thickness of the lipid layer. The lipid layer dielectric constant of all lipid layers was assumed similar in the calculation, i.e. =2.4. [52,53] For the DC-Glycerol layer, a value of 𝑇 ̅ of 0.9 dyne.cm - 1 has been obtained coherently with previously reported data on supported lipid bilayer. [28,30] However for DC-Glycerol-R, DC-MTS and DC-OTS the values of 𝑇 ̅ increased dramatically to 92 dyne.cm -1 and 286 dyne.cm -1 and 183 dyne.cm -1 respectively and outstrip by far the previous reported results (i.e. in the range 0.4-30 dyne.cm -1 ). [28,30] All these results demonstrate the impact of the double reticulation on the dielectric performances of the lipid layer. Relationship between mechanical rupture force and dielectric strength Comparing the direct breakdown electric fields and mechanical rupture forces at every stage of the reticulation process, we notice an interestingly similar increase by factors of ~10 and ~2 after the reticulation 1 and the reticulation 2 respectively. This observation is quiet striking and may suggest that the rupture mechanisms of the lipid layer may be similar for both mechanical and electrical processes. Both mechanisms have been studied theoretically in water for non-reticulated supported lipid layers [28,54] or cell membranes. [26,27,31] In the framework of nano-indentation experiments, Butt and al [51] developed a two-state film rupture reaction theory based on the assumption that an AFM tip can rupture the lipid layer after the formation of a sufficiently large hole under the tip. It correlates the force required to create a hole to the line tension energetic cost and considers that every molecule has binding sites with energetically favorable positions. When an AFM tip is pressed on the layer, inducing a mechanical pressure, it becomes energetically favorable for the molecules to jump to an adjacent free binding site and form a hole under the tip. 
[38,[55][56][57] The mechanism of electroporation, which is widely discussed in cell biology is not fully understood yet. Several dynamical simulations have been developed to study the phenomenon, and they agree on the fact that electroporation is induced initially by the insertion of water molecules in the lipid layer whose dipoles reorganize with respect to the electric field. [26,27,54,58] The main mechanism for pore formation is a large difference in the dielectric constants of water (~80) and lipid layer (~2.5). The lipid layer can be considered as a planar capacitor, which tends to increase its capacitance in order to minimize electric energy. The electric field pushes water molecules into the membrane to substitute the lipids with low dielectric constant, by water with high dielectric constant, thus creating an effective lateral pressure applied to the pore edge. [31] The result of this mechanism leads to a compression of the lipid layer, i.e. to an electrostatic pressure, which leads to the opening of a hole, similarly to the case of mechano-poration. The two mechanisms of mechanical rupture and dielectric breakdown may hence be both initiated by a normal pressure which depends on the energy required to displace the lipids. Although these models were developed in an aqueous medium for non-reticulated lipids, we believe they can still be applied to describe our data in air (our electrical measurements) considering that DC-Glycerol-R head-group is hydrophilic and with a maximum grafting of MTS or OTS every two lipids (i.e. every 4 aliphatic chains), the MTS/OTS layers are poorly dense. Consequently, a molecular layer of water may be present at the air/lipid interface playing the role of water molecules reservoir. Clearly here instead of displacing individual lipid, pore formation in reticulated layers requires the displacement of the nano-domains/chains of reticulated lipids. In this case, we can anticipate that the displacement energy cost must be higher and so the normal pressure required to inducing this displacement. To investigate this assumption the corresponding mechanical and electrostatic pressures were both calculated for DC-Glycerol, DC-Glycerol-R and DC-MTS layers. The mechanical pressure at rupture, 𝑃 𝑚 = 𝐹 𝑚 𝑆, ⁄ was calculated using the rupture force obtained from the AFM indentation measurements, with S the tip-surface contact area. The breakdown electrostatic pressure (P e ) is defined by the force per unit area created when an electric field is applied across the lipid monolayer. It is given by 𝑃 𝑒 = 𝐹 𝑒 𝑆 ⁄ = 1 2 ⁄ 𝜀𝜀 0 𝐸 2 with 𝜀 = 2.4 the dielectric constant of the lipids, [52,53] 𝜀 0 the vacuum permittivity and E the direct breakdown electric field. The results of such calculations (Figure 4) show that for each type of surface, the mechanical and electrostatic pressures have similar values comforting the assumption that the mechanisms responsible both for mechanical rupture and dielectric breakdown might be the same and both initiated by a normal pressure which depends on the energy required to displace lipids, i.e. on the size of the reticulated lipid domains. Conclusions In conclusion, we have developed a method based on a two-stage reticulation process which allows the formation of lipid monolayers supported on silicon with high mechanical stability. 
We have shown that this double reticulation leads to improving drastically the dielectric properties of the lipid monolayer with dielectric performances which exceed by far the properties of inorganic dielectrics for equivalent thickness. These results suggest that such lipid monolayers are good candidates to be considered in the development of devices as ultrathin dielectric. Also, using the mechanical rupture forces and the dielectric breakdown electric field, we have demonstrated strong correlation between the electrical and mechanical properties. Surprisingly, we show that the mechanical and electrostatic pressures required to rupture/breakdown the layer, i.e., to making pores, have similar values, therefore suggesting similar pore formation processes whatever the stress is exerted mechanically or electrically. These results question the mechanism involved in pores formation in such reticulated lipid monolayers and a deep understanding of the processes would require the development of theoretical models. Experimental section Lipids cleavage: DC-Glycerol lipids are obtained after cleaving the phosphocholine head-groups of DC-PC lipids (23:2 diyne PC, Avanti Polar Lipids, Alabama) using a phospholipase C (Sigma Aldrich). 0.1% lipids in water are mixed with 10 g phospholipase C at 50°C during 16 hours. After cooling down the solution to room temperature, Electrical measurements: Measurements were performed in air using a homemade setup. Electrical contacts across the lipid layer were taken on the silicon substrate on one side and using a mercury drop on the other side. The contact area between the mercury drop and the sample was determined for each contact by making a picture. Contact areas were typically 2.5x10 -6 cm². HP 4140 pico-ammeter was used and I(V) measurements were realized at a rate ranging from 10 to 100 mV/s. Table 1: Different types of lipid monolayers that were fabricated on top of H-terminated silicon. They differentiate by their head-groups and the number of reticulations. Their aliphatic chains (AC) represented by a black box are all identical and contain acetylenic groups. Each layer thickness was determined by making a hole in the layer using an AFM tip. DC-PC Figure 1 1 Figure 1 (A-C) Scheme representing the two-stage reticulation process of a lipid monolayer with (A'-C') the corresponding AFM images. First, acetylenic groups of DC-glycerol lipid aliphatic chains are reticulated (Reticulation 1) using free radical AAPH for activation. Second, trichlorosilane derivative such as MTS or OTS is use to functionalize hydroxyl functions at lipid head-groups leading to second reticulation. A' was measured in solution using a liquid cell, B' and C' in air. The surface roughness is 0.69, 0.48 and 0.61 nm for DC-glycerol, Dcglycerol-R and DC-OTS respectively. Scale bar is 1 µm. Figure 2 a 2 Figure 2 a) Examples of force versus indentation curves obtained on DC-Glycerol (Green), DC-Glycerol-R (Yellow) and DC-MTS (Blue). For clarity, the curves have been shifted along the x-axis. The layer rupture shows as a jump in the distance as indicated by black arrows. b) Average forces extracted from indentation measurements by AFM to rupture monolayers of non-reticulated (Green) and reticulated (Reticulation 1) DC-Glycerol layers (Yellow) and DC-MTS layers (Reticulations 1 & 2, Blue). Numbers are: N° of useful data/N° of measurements (For DC-Glycerol the rupture force was often smaller than the resolution of our experimental setup and hence non-measurable). 
Figure 3 a 3 Figure 3 a) Leakage current density (Log scale) measured across DC-Glycerol-R, DC-MTS and DC-OTS lipid monolayers supported on silicon. Each curve represents the median curve and the error bars (colored areas), the median absolute deviation of at least 20 measurements. b) Lifetime of DC-Glycerol, DC-Glycerol-R and DC-MTS layers in a constant electric field. Each point corresponds to an average of at least 5 measurements. Figure 4 4 Figure 4 Mechanical and electrostatic pressures calculated for the rupture force and direct breakdown electric field respectively for the DC-Glycerol, DC-Glycerol-R and DC-MTS layers. Each result is the average of a minimum of 5 independent measurements. ( )11 ( )11 ( )11 Acknowledgements This work was financially supported by the Agence Nationale de la Recherche, under project N°ANR-16-JTIC-0003-01 and by the Societé d'Aide au Transfert Technologique du Sud-Est (SATT-Sud Est) under contract N°147680. the organic phase was extracted two times with chloroform, dried over Mg 2 SO 4 , filtrated and the solvent was removed under reduced pressure. The product was finally dried under vacuum for three hours. Supported lipid monolayer formation and reticulations processes: Lipid monolayer supported on H-terminated silicon surface was made using a protocol developed in our laboratory and based on the vesicle fusion method. Small unilamellar lipid vesicles were first fabricated: 100 l of 0.1% lipid stock solution in chloroform are first heated at 50°C until the chloroform is evaporated, then re-diluted in 100 l of deionized water. The lipid solution is then sonicated 30 min and extruded across 100 nm pores polycarbonate membranes. Just before lipid deposition, Hterminated silicon surfaces were obtained after etching native silicon oxide layer by dipping the silicon sample 2 minutes in 2% HF. The lipid solution is then poured onto the silicon surface at room temperature and then cooled down to 10°C. The temperature is then slowly increased to 32°C at 1°C/min. At this stage DC-Glycerol monolayer is formed. DC-Glycerol-R is obtained from DC-Glycerol by making a reticulation (Reticulation 1) of the aliphatic chains using the acetylenic groups of the lipid chains (Figure 1). It is induced by addition of 1% free radical AAPH (2,2′-Azobis(2-methylpropionamidine) dihydrochloride, Sigma Aldrich) in water and increasing the temperature to 42°C. After 45 minutes the sample is cooled down and rinsed with deionized water. DC-MTS and DC-OTS are obtained from DC-Glycerol-R by exposing the sample to 1 mmol solution of MTS (Methyltrichlorosilane, Sigma Aldrich) or OTS (Octadecyltrichlorosilane, Sigma Aldrich) in 1,4 dioxane (Sigma Aldrich) for 15 minutes. The sample is then rinsed with methanol to remove residual dioxane from the surface. This silanization leads to a second reticulation (Reticulation 2) of the lipid head-groups. Surface imaging and indentation measurements by AFM: All measurements were realized using NTEGRA atomic force microscope from NT-MDT Spectrum Instruments (Moscow, Russia). Sample surface imaging was performed in tapping mode in air using NSC 35 cantilevers (Nanoandmore, USA) with a of diameter 8 nm and spring constant of ~15 N/m. Indentations measurements were performed in water using NSC 38 cantilevers with a diameter of 8 nm as verified by scanning electron microscopy and a spring constant of ~0.09 N/m. 
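As a rough consistency check on the force scale (using only Hooke's law and the spring constant quoted above; the remark on the detection limit is an assumption about typical AFM deflection noise in liquid, not a value measured in this work), the rupture forces reported earlier correspond to cantilever deflections of

\[
z = F/k \;\approx\; 2.8\,\mathrm{nm},\; 18\,\mathrm{nm}\;\text{and}\;42\,\mathrm{nm}
\quad\text{for}\; F = 0.25,\;1.59\;\text{and}\;3.74\,\mathrm{nN},\;\; k \approx 0.09\,\mathrm{N\,m^{-1}}.
\]

A deflection of only a few nanometres lies close to the noise floor of an AFM operated in liquid, which is consistent with the earlier observation that the DC-Glycerol rupture events were frequently below the resolution of the setup.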
For each experiment the cantilever force constant was evaluated using the thermal noise analysis method [59] after calibrating the tip deflection against a hard silicon surface. For each type of lipid layer the measurements were reproduced on three different samples. The loading and unloading rates were fixed at a rate of 18 m/s. Supporting Information Supporting Information is available from the Wiley Online Library or from the author. Conflict of interest The authors declare no conflict of interest. Table of contents entry The development of ultra-thin engineered lipid monolayers with exceptional mechanical and dielectric properties to be used as ultrathin dielectric is reported. The layers are obtained by a simple two-stage reticulation process. A relationship between mechanical and dielectric performances is demonstrated.
hal-01774223, 2018, https://inria.hal.science/hal-01774223/file/alleviating_Overfitting.pdf
Zhongxing Yu email: [email protected]@[email protected] Matias Martinez email: [email protected] Benjamin Danglot Thomas Durieux Martin Monperrus email: [email protected] Zhongxing Yu Benjamin Danglot Thomas Durieux Alleviating Patch Overfitting with Automatic Test Generation: A Study of Feasibility and Effectiveness for the Nopol Repair System Keywords: Program repair, Synthesis-based repair, Patch overfitting, Automatic test case generation Among the many different kinds of program repair techniques, one widely studied family of techniques is called test suite based repair. However, test suites are in essence input-output specifications and are thus typically inadequate for completely specifying the expected behavior of the program under repair. Consequently, the patches generated by test suite based repair techniques can just overfit to the used test suite, and fail to generalize to other tests. We deeply analyze the overfitting problem in program repair and give a classification of this problem. This classification will help the community to better understand and design techniques to defeat the overfitting problem. We further propose and evaluate an approach called UnsatGuided, which aims to alleviate the overfitting problem for synthesis-based repair techniques with automatic test case generation. The approach uses additional automatically generated tests to strengthen the repair constraint used by synthesis-based repair techniques. We analyze the effectiveness of UnsatGuided: 1) analytically with respect to alleviating two different kinds of overfitting issues; 2) empirically based on an experiment over the 224 bugs of the Defects4J repository. The main result is that automatic test generation is effective in alleviating one kind of overfitting issue-regression introduction, but due to oracle problem, has minimal positive impact on alleviating the other kind of overfitting issue-incomplete fixing. Introduction Automated program repair holds out the promise of saving debugging costs and patching buggy programs more quickly than humans. Given this great potential, there has been a surge of research on automated program repair in recent years and several different techniques have been proposed [START_REF] Goues | Genprog: A generic method for automatic software repair[END_REF]; [START_REF] Nguyen | Semfix: Program repair via semantic analysis[END_REF]; [START_REF] Xuan | Nopol: Automatic repair of conditional statement bugs in java programs[END_REF]; [START_REF] Pei | Automated fixing of programs with contracts[END_REF]; [START_REF] Long | Automatic inference of code transforms for patch generation[END_REF]). These techniques differ in various ways, such as the kinds of used oracles and the fault classes they target1 [START_REF] Monperrus | Automatic Software Repair: a Bibliography[END_REF]). Among the many different techniques proposed, one widely studied family of techniques is called test suite based repair. Test suite based repair starts with some passing tests as the specification of the expected behavior of the program and at least one failing test as a specification of the bug to be repaired, and aims at generating patches that make all the tests pass. Depending the patch generation strategy, test suite based repair can further be informally divided into two general categories: generate-and-validate techniques and synthesis-based techniques. 
Generate-and-validate techniques use certain methods such as genetic programming to first generate a set of candidate patches, and then validate the generated patches against the test suite. Representative examples in this category include GenProg [START_REF] Goues | Genprog: A generic method for automatic software repair[END_REF]), PAR [START_REF] Kim | Automatic patch generation learned from human-written patches[END_REF]) and SPR [START_REF] Long | Staged program repair with condition synthesis[END_REF]). Synthesis-based techniques first use test execution information to build a repair constraint, and then use a constraint solver to synthesize a patch. Typical examples in this category include SemFix [START_REF] Nguyen | Semfix: Program repair via semantic analysis[END_REF]), Nopol [START_REF] Xuan | Nopol: Automatic repair of conditional statement bugs in java programs[END_REF]), and Angelix [START_REF] Mechtaev | Angelix: Scalable multiline program patch synthesis via symbolic analysis[END_REF]). Empirical studies have shown the promise of test suite based repair techniques in tackling real-life bugs in real-life systems. For instance, GenProg [START_REF] Goues | Genprog: A generic method for automatic software repair[END_REF]) and Angelix [START_REF] Mechtaev | Angelix: Scalable multiline program patch synthesis via symbolic analysis[END_REF]) can generate repairs for large-scale real-world C programs, while ASTOR (Martinez and Monperrus (2016)) and Nopol [START_REF] Xuan | Nopol: Automatic repair of conditional statement bugs in java programs[END_REF]) have given encouraging results (Martinez et al (2016)) on a set of real-life Java programs from the Defects4j benchmark (Just et al (2014a)). However, test suites are in essence input-output specifications and are therefore typically inadequate for completely specifying the expected behavior. Consequently, the patches generated by test suite based program repair techniques pass the test suite, yet may be incorrect. The patches that are overly specific to the used test suite and fail to generalize to other tests are called overfitting patches [START_REF] Smith | Is the cure worse than the disease? overfitting in automated program repair[END_REF]). Overfitting indeed threats the validity of test suite based repair techniques and some recent studies have shown that a significant portion of the patches generated by test suite based repair techniques are overfitting patches [START_REF] Smith | Is the cure worse than the disease? overfitting in automated program repair[END_REF]; [START_REF] Qi | An analysis of patch plausibility and correctness for generate-and-validate patch generation systems[END_REF]; Martinez et al (2016); Le et al (2017b)). In this paper, we deeply analyze the overfitting problem in program repair and identify two kinds of overfitting issues: incomplete fixing and regression introduction. Our empirical evaluation shows that both kinds of overfitting issues are common. Based on the overfitting issues that an overfitting patch has, we further define three kinds of overfitting patches. This characterization of overfitting will help the community to better understand the overfitting problem in program repair, and will hopefully guide the development of techniques for alleviating overfitting. We further propose an approach called UnsatGuided, which aims to alleviate the overfitting problem for synthesis-based techniques. 
Given the recent significant progress in the area of automatic test generation, UnsatGuided makes use of automatic test case generation technique to obtain additional tests and then integrate the automatically generated tests into the synthesis process. The intuition behind UnsatGuided is that additional automatically generated tests can supplement the manually written tests to strengthen the repair constraint, and synthesis-based techniques can thus use the strengthened repair constraint to synthesize patches that suffer less from overfitting. To generate tests that can detect problems besides crashes and uncaught exceptions, state-of-art automatic test generation techniques generate tests that include assertions encoding the behavior observed during test execution on the current program. By using such automatic test generation techniques on the program to be repaired, some of the generated tests can possibly assert buggy behaviors and these tests with wrong oracles can mislead the synthesis process. UnsatGuided tries to identify and discard tests with likely wrong oracles through the idea that if the additional repair constraint from a generated test has a contradiction with the repair constraint established using the manually written test suite, then the generated test is likely to be a test with wrong oracle. We analyze the effectiveness of UnsatGuided with respect to alleviating different kinds of overfitting issues. We then set up an empirical evaluation of Unsat-Guided, which uses Nopol [START_REF] Xuan | Nopol: Automatic repair of conditional statement bugs in java programs[END_REF]) as the synthesis-based technique and EvoSuite [START_REF] Fraser | Evosuite: automatic test suite generation for object-oriented software[END_REF]) as the automatic test case generation technique. The evaluation uses 224 bugs of the Defects4J repository (Just et al (2014a)) as benchmark. The results confirm our analysis and show that Unsat-Guided 1) is effective in alleviating overfitting issue of regression introduction for 16/19 bugs; 2) does not break already correct patches; 3) can help a synthesisbased repair technique to generate additional correct patches. To sum up, the contributions of this paper are: -An analysis of the overfitting problem in automated program repair and a classification of overfitting. -An approach, called UnsatGuided, to alleviate the overfitting problem for synthesis-based repair techniques. -An analysis of the effectiveness of UnsatGuided in alleviating different kinds of overfitting issues, and the identification of deep limitations of using automatic test case generation to alleviate overfitting. -An empirical evaluation of the prevalence of different kinds of overfitting issues on 224 bugs of the Defects4J repository, as well as an extensive evaluation of the effectiveness of UnsatGuided in alleviating the overfitting problem. The remainder of this paper is structured as follows. We first present related work in Section 2. Section 3 first provides our analysis of the overfitting problem and the classification of overfitting issues and overfitting patches, then gives the algorithm of the proposed approach UnsatGuided, and finally analyzes the effectiveness of UnsatGuided. Section 4 presents an empirical evaluation of the prevalence of different kinds of overfitting issues and the effectiveness of Unsat-Guided, followed by Section 5 which concludes this paper. 
This paper is a major revision of an Arxiv preprint [START_REF] Yu | Test Case Generation for Program Repair: A Study of Feasibility and Effectiveness[END_REF]). Related Work Program Repair Due to the high cost of fixing bugs manually, there has been a surge of research on automated program repair in recent years. Automated program repair aims to correct software defects without the intervention of human developers, and many different kinds of techniques have been proposed recently. For a complete picture of the field, readers can refer to the survey paper [START_REF] Monperrus | Automatic Software Repair: a Bibliography[END_REF]). Generally speaking, automated program repair involves two steps. To begin with, it analyzes the buggy program and uses techniques such as genetic programming [START_REF] Goues | Genprog: A generic method for automatic software repair[END_REF]), program synthesis [START_REF] Nguyen | Semfix: Program repair via semantic analysis[END_REF]) and machine learning [START_REF] Long | Automatic patch generation by learning correct code[END_REF]) to produce one or more candidate patches. Afterwards, it validates the produced candidate patches with an oracle that encodes the expected behavior of the buggy program. Typically used oracles include test suites [START_REF] Goues | Genprog: A generic method for automatic software repair[END_REF]; [START_REF] Nguyen | Semfix: Program repair via semantic analysis[END_REF]), pre-and post-conditions [START_REF] Wei | Automated fixing of programs with contracts[END_REF]), and runtime assertions [START_REF] Perkins | Automatically patching errors in deployed software[END_REF]). The proposed automatic program repair techniques can target different kinds of faults. While some automatic program techniques target the general types of faults and do not require the fault types to be known in advance, a number of other techniques can only be applied to specific types of faults, such as null pointer exception [START_REF] Durieux | Dynamic patch generation for null pointer exceptions using metaprogramming[END_REF]), integer overflow [START_REF] Brumley | Rich: Automatically protecting against integer-based vulnerabilities[END_REF]), buffer overflow [START_REF] Shaw | Automatically fixing c buffer overflows using program transformations[END_REF]), memory leak [START_REF] Gao | Safe memoryleak fixing for c programs[END_REF]), and error handling bugs [START_REF] Tian | Automatically diagnosing and repairing error handling bugs in c[END_REF]). Test Suite Based Program Repair Among the various kinds of program repair techniques proposed, a most widely studied and arguably the standard family of techniques is called test suite based repair. The inputs to test suite based repair techniques are the buggy program and a test suite, which contains some passing tests as the specification of the expected behavior of the program and at least one failing test as a specification of the bug to be repaired. The output is one or more candidate patches that make all the test cases pass. 
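To fix ideas, the following purely illustrative instance (the program, tests, and patch location are invented for this presentation and are not taken from any benchmark) shows what these inputs look like in practice: a buggy Java method together with a JUnit suite in which the passing tests pin down behavior to be preserved and the failing test specifies the bug.

// File: Abs.java -- the buggy program under repair
public class Abs {
    public static int abs(int x) {
        if (x < -10) {        // buggy guard; the intended condition is (x < 0)
            return -x;
        }
        return x;
    }
}

// File: AbsTest.java -- the manually written test suite given to the repair tool
import org.junit.Assert;
import org.junit.Test;

public class AbsTest {
    @Test
    public void positiveInput() {       // passing test: behavior to preserve
        Assert.assertEquals(3, Abs.abs(3));
    }
    @Test
    public void largeNegativeInput() {  // passing test: behavior to preserve
        Assert.assertEquals(20, Abs.abs(-20));
    }
    @Test
    public void smallNegativeInput() {  // failing test: abs(-2) currently returns -2
        Assert.assertEquals(2, Abs.abs(-2));
    }
}

Here the buggy input domain is {-10, ..., -1}; a repair tool is expected to produce a patch, for example replacing the guard by (x < 0), that makes all three tests pass.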
Typically, test suite based repair techniques first use some fault localization techniques [START_REF] Jones | Empirical evaluation of the tarantula automatic fault-localization technique[END_REF]; [START_REF] Le | Statistical debugging: A hypothesis testing-based approach[END_REF]; [START_REF] Yu | Does the failing test execute a single or multiple faults?: An approach to classifying failing tests[END_REF][START_REF] Yu | Mutation-oriented test data augmentation for gui software fault localization[END_REF][START_REF] Yu | Gui software fault localization using n-gram analysis[END_REF]; [START_REF] Zhang | Locating faults through automated predicate switching[END_REF]) to identify the most suspicious program statements. Then, test suite based repair techniques use some patch generation strategies to patch the identified suspicious statements. Based on the used patch generation strategy, test suite based repair techniques can further be divided into generate-and-validate techniques and synthesis-based techniques. Generate-and-validate repair techniques first search within a search space to generate a set of patches, and then validate them against the test suite. Gen-Prog [START_REF] Goues | Genprog: A generic method for automatic software repair[END_REF]), one of the earliest generate-and-validate techniques, uses genetic programming to search the repair space and generates patches that consist of code snippets copied from elsewhere in the same program. PAR [START_REF] Kim | Automatic patch generation learned from human-written patches[END_REF]) shares the same search strategy with GenProg but uses 10 specialized patch templates derived from human-written patches to construct the search space. RSRepair [START_REF] Qi | The strength of random search on automated program repair[END_REF]) has the same search space as GenProg but uses random search instead, and the empirical evaluation shows that random search can be as effective as genetic programming. AE [START_REF] Weimer | Leveraging program equivalence for adaptive program repair: Models and first results[END_REF]) employs a novel deterministic search strategy and uses program equivalence relation to reduce the patch search space. SPR [START_REF] Long | Staged program repair with condition synthesis[END_REF]) uses a set of predefined transformation schemas to construct the search space, and patches are generated by instantiating the schemas with condition synthesis techniques. Prophet [START_REF] Long | Automatic patch generation by learning correct code[END_REF]) applies probabilistic models of correct code learned from successful human patches to prioritize candidate patches so that the correct patches could have higher rankings. Given that most of the proposed repair systems target only C code, jGenProg, as implemented in ASTOR (Martinez and Monperrus (2016)), is an implementation of GenProg for Java code. Synthesis-based techniques first use the input test suite to extract a repair constraint, and then leverage program synthesis to solve the constraint and get a patch. The patches generated by synthesis-based techniques are generally by design correct with respect to the input test suite. SemFix [START_REF] Nguyen | Semfix: Program repair via semantic analysis[END_REF]), the pioneer work in this category of repair techniques, performs controlled symbolic execution on the input tests to get symbolic constraints, and uses code synthesis to identify a code change that makes all tests pass. 
The target repair locations of SemFix are assignments and boolean conditions. To make the generated patches more readable and comprehensible for human beings, DirectFix [START_REF] Mechtaev | Directfix: Looking for simple program repairs[END_REF]) encodes the repair problem into a partial Maximum Satisfiability problem (MaxSAT) and uses a suitably modified Satisfiability Modulo Theory (SMT) solver to get the solution, which is finally converted into the concise patch. Angelix [START_REF] Mechtaev | Angelix: Scalable multiline program patch synthesis via symbolic analysis[END_REF]) uses a lightweight repair constraint representation called "angelic forest" to increase the scalability of DirectFix. Nopol [START_REF] Xuan | Nopol: Automatic repair of conditional statement bugs in java programs[END_REF]) uses multiple instrumented test suite executions to synthesize a repair constraint, which is then transformed into a SMT problem and a feasible solution to the problem is finally returned as a patch. Nopol addresses the repair of buggy if conditions and missing preconditions. S3 (Le et al (2017a)) aims to synthesize more generalizable patches by using three components: a domain-specific language (DSL) to customize and constrain search space, an enumeration-based search strategy to search the space, and finally a ranking function to rank patches. While test suite based repair techniques are promising, an inherent limitation of them is that the correctness specifications used by them are the test suites, which are generally available but rarely exhaustive in practice. As a result, the generated patches may just overfit to the available tests, meaning that they will break untested but desired functionality. Several recent studies have shown that overfitting is a serious issue associated with test suite based repair techniques. Qi et al. [START_REF] Qi | An analysis of patch plausibility and correctness for generate-and-validate patch generation systems[END_REF]) find that the vast majority of patches produced by GenProg, RSRepair, and AE avoid bugs simply by functionality deletion. A subsequent study by [START_REF] Smith | Is the cure worse than the disease? overfitting in automated program repair[END_REF]) further confirms that the patches generated by GenProg and RSRepair fail to generalize. The empirical study conducted by Martinez et al. (Martinez et al (2016)) reveals that among the 47 bugs fixed by jGenProg, jKali, and Nopol, only 9 bugs are correctly fixed. More recently, the study by Le et al. (Le et al (2017b)) again confirms the severity of the overfitting issue for synthesis-based repair techniques. Moreover, the study also investigates how test suite size and provenance, number of failing tests, and semantics-specific tool settings can affect overfitting issues for synthesis-based repair techniques. Given the seriousness and importance of the overfitting problem, Yi et al. [START_REF] Yi | A correlation study between automated program repair and test-suite metrics[END_REF]) explore the correlation between test suite metrics and the quality of patches generated by automated program repair tetchiness, and they find that with the increase of traditional test suite metrics, the quality of the generated patches also tend to improve. To gain a better understanding of the overfitting problem in program repair, we conduct a deep analysis of it and give the classification of overfitting issues and overfitting patches. 
We wish the classifications can facilitate future work on alleviating the overfitting problem in program repair. In addition, given the recent progress in the area of automatic test generation, we investigate the feasibility of augmenting the initial test suite with additional automatically generated tests to alleviate the overfitting problem. More specifically, we propose an approach called UnsatGuided, which aims to alleviate the overfitting problem for synthesis-based repair techniques. The effectiveness of UnsatGuided for alleviating different kinds of overfitting issues is analyzed and empirically verified, and we also point out the deep limitations of using automatic test generation to alleviate overfitting. In the literature, there are several works that try to use test case generation to alleviate the overfitting problem in program repair. [START_REF] Xin | Identifying test-suite-overfitted patches through test case generation[END_REF]) propose an approach to identify overfitting patches through test case generation, which generates new test inputs that focus on the semantic differences brought by the patches and relies on human beings to add oracles for the inputs. Yang et al. [START_REF] Yang | Better test cases for better automated program repair[END_REF]) aim to filter overfitting patches for generate-and-validate repair techniques through a framework named Opad, which uses fuzz testing to generate tests and relies on two inherent oracles, crash and memory-safety, to enhance validity checking of generated patches. By heuristically comparing the similarity of different execution traces, Liu et al. [START_REF] Liu | Identifying patch correctness in test-based automatic program repair[END_REF]) also aim to identify overfitting patches generated by test suite based repair techniques. UnsatGuided is different from these works. On the one hand, these three works all try to use generated tests to identify overfitting patches generated by test suite based repair techniques and the generated tests are not used by the run of the repair algorithm itself. However, our aim is to improve the patch generated using manually written test suite and the generated tests are used by the repair algorithm to supplement the manually written test suite so that a better repair specification can be obtained. On the other hand, our work does not assume the specificity of the used oracle while the work by Xin and Reiss [START_REF] Xin | Identifying test-suite-overfitted patches through test case generation[END_REF]) uses the human oracle and the work by Yang et al. [START_REF] Yang | Better test cases for better automated program repair[END_REF]) uses the crash and memory-safety oracles. Automatic Test Case Generation Despite tests are often created manually in practice, much research effort has been put on automated test generation techniques. In particular, a number of automatic test generation tools for mainstream programming languages have been developed over the past few years. These tools typically rely on techniques such as random test generation, search-based test generation and dynamic symbolic execution. For Java, Randoop [START_REF] Pacheco | Randoop: feedback-directed random testing for java[END_REF]) is the well-known random unit test generation tool. Randoop uses feedback-directed random testing to generate unit tests, and it works by iteratively extending method call sequences with randomly selected method calls and randomly selected arguments from previously constructed sequences. 
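For readers unfamiliar with the output of such generators, the sketch below shows the kind of unit test a feedback-directed tool like Randoop typically emits; the class under test and the test name are arbitrary, and the assertions simply record the return values that were observed when the call sequence was executed during generation.

import java.util.ArrayList;
import org.junit.Assert;
import org.junit.Test;

public class RegressionTest0 {
    @Test
    public void test042() {
        // Randomly assembled call sequence, built by extending shorter sequences.
        ArrayList<String> list = new ArrayList<String>();
        boolean added = list.add("hi!");
        int size = list.size();
        // The oracles capture the behavior of the current program, so these
        // assertions act as regression oracles rather than correctness oracles.
        Assert.assertTrue(added);
        Assert.assertEquals(1, size);
    }
}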
As Randoop test generation process uses a bottom-up ap-proach, it cannot generate tests for a specific class. Other random unit test generation tools for Java include JCrasher [START_REF] Csallner | Jcrasher: an automatic robustness tester for java[END_REF]), CarFast [START_REF] Park | Carfast: Achieving higher statement coverage faster[END_REF]), T3 [START_REF] Prasetya | T3, a Combinator-Based Random Testing Tool for Java: Benchmarking[END_REF]), TestFul [START_REF] Baresi | Testful: an evolutionary test approach for java[END_REF]) and eToc [START_REF] Tonella | Evolutionary testing of classes[END_REF]). There are also techniques that use various kinds of symbolic execution, such as symbolic PathFinder [START_REF] Păsăreanu | Symbolic pathfinder: Symbolic execution of java bytecode[END_REF]) and DSC [START_REF] Islam | Dsc+mock: A test case + mock class generator in support of coding against interfaces[END_REF]). EvoSuite [START_REF] Fraser | Evosuite: automatic test suite generation for object-oriented software[END_REF]) is the state-of-art search-based unit test generation tool for Java and can target a specific class. It uses an evolutionary approach to derive test suites that maximize code coverage, and generates assertions that encode the current behavior of the program. In the C realm, DART [START_REF] Godefroid | Dart: directed automated random testing[END_REF]), CUTE [START_REF] Sen | Cute: a concolic unit testing engine for c[END_REF]), and KLEE [START_REF] Cadar | Klee: Unassisted and automatic generation of high-coverage tests for complex systems programs[END_REF]) are three representatives of automatic test case generation tools for C. Symbolic execution is used in conjunction with concrete execution by these tools to maximize code coverage. In addition, Pex [START_REF] Tillmann | Pex: White box test generation for .net[END_REF]) is a popular unit test generation tool for C# code based on dynamic symbolic execution. Analysis and Alleviation of the Overfitting Problem In this section, we first introduce a novel classification of overfitting issues and overfitting patches. Then, we propose an approach called UnsatGuided for alleviating the overfitting problem for synthesis-based repair techniques. We finally analyze the effectiveness of UnsatGuided with respect to different overfitting kinds and point out the profound limitation of using automatic test generation to alleviate overfitting. Core Definitions Let us reason about the input space I of a program P . We consider modern objectoriented programs, where an input point is composed of one or more objects, interacting through a sequence of methods calls. In a typical repair scenario, the program is almost correct and thus a bug only affects the program behavior of a portion of the input domain, which we call the "buggy input domain" I bug . We call the rest of the input domain, for which the program behaviors are considered correct as I correct . By definition, a patch generated by an automatic program repair technique has an impact on program behaviors, i.e., it changes the behaviors of a portion of the input domain. We use I patch to denote this input domain which is impacted by a patch. For input points within I bug whose behaviors have been changed by a patch, the patch can either correctly or incorrectly change the original buggy behaviors. 
We use I patch= to denote the input points within I bug whose behaviors have been incorrectly changed by a patch, i.e., the newly behaviors of these input points brought by the patch are still incorrect. Meanwhile, we use I patch to denote the input points within I bug whose behaviors have been correctly changed by a patch. If the patch involves changes to behaviors of input points within I correct , then the original correct behaviors of these input points will undesirably become incorrect and we use I patch to denote these input points within I correct broken by the patch. Obviously, the union of I patch= , I patch and I patch makes up I patch . For simplicity, hereafter when we say some input points within I bug are repaired by a patch, we mean the original buggy behaviors of these input points have been correctly changed by the patch. Similarly, when we say some input points within I correct are broken by a patch, we mean the original correct behaviors of these input points have been incorrectly changed by the patch. Note as a patch generated by test suite based program repair techniques, the patch will at least repair the input points corresponding to the original failing tests. In other words, the intersection of I patch and I bug will always not be empty (I patch ∩ I correct = ∅). Classification of Overfitting For a given bug, a perfect patch repairs all input points within I bug and does not break any input points within I correct . However, due to the incompleteness of the test suite used to drive the repair process, the generated patch may not be ideal and just overfit to the used tests. Depending on how a generated patch performs with respect to the input domain I bug and I correct , we define two kinds of overfitting issues, which are consistent with the problems for human patches introduced by Gu et al [START_REF] Gu | Has the bug really been fixed?[END_REF]). Incomplete fixing: Some but not all input points within I bug are repaired by the generated patch. In other words, I patch is a proper subset of I bug (I patch ⊂ I bug ). Regression introduction: Some input points within I correct are broken by the generated patch. In other words, I patch is not an empty set (I patch = ∅). Based on these two different kinds of overfitting issues, we further define three different kinds of overfitting patches. A-Overfitting patch: The overfitting patch only has the overfitting issue of incomplete fixing (I patch ⊂ I bug ∧ I patch = ∅). This kind of overfitting patch can be considered as a "partial patch". It encompasses the worst case where there is one single failing test and the overfitting patch fixes the bug only for the input point specified in this specific failing test. B-Overfitting patch: The overfitting patch only has the overfitting issue of regression introduction (I patch = I bug ∧ I patch = ∅). Note that this kind of overfitting patch correctly repairs all input points within the buggy input domain I bug but at the same time breaks some already correct behaviors of the buggy program under repair. AB-Overfitting patch: The overfitting patch has both overfitting issues of incomplete fixing and regression introduction at the same time (I patch ⊂ I bug ∧ I patch = ∅). This kind of overfitting patch correctly repairs some but not all input points within the buggy input domain I bug and also introduces some regressions. Figure 1 gives an illustration of these three different kinds of overfitting patches. 
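To make the A-overfitting case concrete, consider again the hypothetical absolute-value method used as an illustration earlier in the related-work discussion of test suite based repair (a guard written as x < -10 instead of x < 0, with the single failing test assertEquals(2, abs(-2))); the example is invented and not taken from the paper's benchmark. The patch below makes every manual test pass, yet it repairs only one point of the buggy input domain {-10, ..., -1} and changes nothing outside it, so it is an A-overfitting patch in the sense defined above.

// A hypothetical A-overfitting patch for the illustrative abs example
public class AbsPatched {
    public static int abs(int x) {
        if (x < -10 || x == -2) {   // special-cases the failing test's input
            return -x;
        }
        return x;                    // abs(-5) still returns -5: incomplete fixing
    }
}
// No input outside the buggy domain changes behavior, so no regression is introduced.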
This characterization of overfitting in program repair is independent of the technique presented in this paper and can be used by the community to better design techniques to defeat the overfitting problem. A-Overfitting B-Overfitting I patch Failing manual test cases Passing manual test cases I bug I bug I patch AB-Overfitting I bug I patch Input points not covered by manual test cases Fig. 1 A-Overfitting patch is a partial patch on a portion of the buggy input domain. B-Overfitting patch breaks correct behaviors outside the buggy input domain. AB-Overfitting patch partially fixes the buggy input domain and also breaks some correct behaviours. UnsatGuided: Alleviating the Overfitting Problem for Synthesis-based Repair Techniques In this section, we propose an approach called UnsatGuided, which aims to alleviate the overfitting problem for synthesis-based repair techniques. The approach aims to strengthen the correctness specification so that the resulting generated patches are more likely to generalize over the whole input domain. It achieves the aim by using additional tests generated by an automatic test case generation technique. We first give some background knowledge about automatic test case generation techniques and then give the details of the proposed approach. The Bug-exposing Test Problem In the context of regression testing, automatic test case generation techniques typically use the current behavior of the program itself as the oracle [START_REF] Pacheco | Randoop: feedback-directed random testing for java[END_REF]; Xie (2006))2 . We consider those typical regression test generation techniques in this paper and denote an arbitrary technique as T reg . For a certain buggy version, T reg may generate both input points within the buggy input domain I bug and the correct input domain I correct . For instance, suppose we have a calculator which incorrectly implements the add function for achieving the addition of two integers. The code is buggy on the input domain (10, _) (where _ means any integer except 0) and is implemented as follows: add(x,y) { if (x == 10) return x-y; else return x+y; } First, assume that T reg generates a test in the correct input domain I correct , say for input point (5, 5). The resulting test, which uses the existing behavior as oracle, will be assertEquals (10, add(5,5)). Then consider what happens when the generated test lies in I bug , say for input point (10,8). In this case, T reg would generate the test assertEquals (2, add(10,8)). If the input point of a generated test lies in I bug , the synthesized assertion will assert the presence of the actual buggy behavior of the program under test, i.e., the generated assertion encodes the buggy behavior. In such a case, if the input point of a generated test lies in I bug , it is called a "bug-exposing test" in this paper. Otherwise, the test is called a "normal test" if its input point lies in I correct . In the context of test suite based program repair, the existence of bug-exposing tests is a big problem. Basically, if a repair technique finds a patch that satisfies bug-exposing tests, then the buggy behavior is kept. In other words, it means that some of the generated tests can possibly enforce bad behaviors related with the bug to be repaired. 
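As a concrete illustration of the two cases, the sketch below shows tests that a regression-oracle generator could plausibly emit for the buggy calculator above; the class, file, and test names are invented, and only the asserted values follow from the buggy add shown earlier.

// File: Calculator.java -- wraps the buggy add(x, y) shown above
public class Calculator {
    public static int add(int x, int y) {
        if (x == 10) return x - y;   // bug affecting the input domain (10, _)
        return x + y;
    }
}

// File: CalculatorRegressionTest.java -- plausible generated tests
import org.junit.Assert;
import org.junit.Test;

public class CalculatorRegressionTest {
    @Test
    public void testAddNormal() {
        // (5, 5) lies in I_correct: the recorded oracle coincides with the
        // intended behavior, so this is a normal test.
        Assert.assertEquals(10, Calculator.add(5, 5));
    }
    @Test
    public void testAddBugExposing() {
        // (10, 8) lies in I_bug: the recorded oracle (10 - 8 = 2) encodes the
        // buggy behavior, so this is a bug-exposing test.
        Assert.assertEquals(2, Calculator.add(10, 8));
    }
}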
UnsatGuided: Incremental Test Suite Augmentation for Alleviating the Overfitting Problem for Synthesis-based Repair Techniques

The overfitting problem for synthesis-based repair techniques such as SemFix and Nopol arises because the repair constraint established using an incomplete test suite is not strong enough to fully express the intended semantics of a program. Our idea is to strengthen the initial repair constraint by augmenting the initial test suite with additional automatically generated tests. We wish that a stronger repair constraint would guide synthesis-based repair techniques towards better patches, i.e., patches that are correct or at least suffer less from overfitting. The core problem to handle is the possible existence of bug-exposing tests among the tests generated by an automatic test case generation technique. We cannot directly supply all of the generated tests to a synthesis-based repair technique because bug-exposing tests can mislead the synthesis process and force incorrect behaviors to be synthesized. To handle this core conceptual problem, we now present an approach called UnsatGuided, which gradually makes use of the new information provided by each automatically generated test to build a possibly stronger final repair constraint. The key underlying idea is that if the additional repair constraint enforced by an automatically generated test has logical contradictions with the repair constraint established so far, then the generated test is likely to be a bug-exposing test and is discarded.

Example

To help understanding, we use the following toy program to illustrate the idea. The inputs are any integers, and there is an error in the condition which results in the buggy input domain I bug = {5, 6, 7}:

    int f(int x) {
      if (x > 0 && x < 5)   // buggy condition: the correct condition is x > 0 && x < 8
        return x + 1;
      else
        return x - 1;
    }

Suppose we use component based repair synthesis (Jha et al (2010)) to synthesize the correct condition, and to make the explanation easy, we further assume the available components include only the variable x, the relational operators < (less than) and > (greater than), the logical operator && (logical and), and finally any integer constants. For the three buggy inputs, the regression test generation technique T reg considered in this paper can generate the bug-exposing tests assertEquals(4, f(5)), assertEquals(5, f(6)), and assertEquals(6, f(7)). Each test is of the form assertEquals(O, f(I)), which specifies that the expected return value of the program is O when the input is I. For other input points, manually written tests and tests generated by T reg are the same. Each test assertEquals(O, f(I)) imposes a repair constraint of the form x = I → f(I) = O. The repair constraint imposed by a set of tests {t i | assertEquals(O i, f(I i)), 1 ≤ i ≤ N} is the conjunction ⋀ i=1..N (x = I i → f(I i) = O i). The repair constraint and the available components are then typically encoded into an SMT problem, and a satisfying SMT model is translated back into a synthesized expression which provably satisfies the repair constraint imposed by the tests. To achieve the encoding, techniques such as concrete execution (Xuan et al (2017)) and symbolic execution (Nguyen et al (2013)) can be used. For this example, suppose the manually written tests assertEquals(-1, f(0)), assertEquals(2, f(1)), assertEquals(8, f(7)), and assertEquals(9, f(10)) are provided initially.
Using the repair constraint (x = 0 → f(0) = -1) ∧ (x = 1 → f(1) = 2) ∧ (x = 7 → f(7) = 8) ∧ (x = 10 → f(10) = 9) enforced by these tests, the synthesis process can possibly synthesize the condition if (x>0 && x<10), which is not completely correct as the repair constraint enforced by the 4 manual tests is not strong enough. If a bug-exposing test such as assertEquals(4, f(5)) is generated by T reg and the repair constraint (x = 5 → f(5) = 4) imposed by it is added, the synthesis process cannot synthesize a condition, as there is a contradiction between the repair constraint imposed by this test and that imposed by the 4 manual tests. The contradiction happens because, according to the repair constraint imposed by the manual tests and the available components used for synthesis, the calculation for any integer input between 1 and 7 should follow the same branch as the integer inputs 1 and 7; consequently, the return value should be 6 (not 4) when the integer input is 5. The core idea of UnsatGuided is to detect those contradictions and discard bug-exposing tests such as assertEquals(4, f(5)). However, if a normal test such as assertEquals(7, f(8)) is generated by T reg and the repair constraint (x = 8 → f(8) = 7) imposed by it is added, there is no contradiction and a stronger repair constraint can be obtained, which will enable the synthesis process to synthesize the correct condition if (x>0 && x<8) in this specific example. The core idea of UnsatGuided is to keep those valuable new tests for synthesizing and validating patches.

Algorithm

Algorithm 1 describes the approach in detail. The algorithm takes as input a buggy program P to be repaired, a manually written test suite TS which contains some passing tests and at least one failing test, a synthesis-based repair technique T synthesis, a time budget TB allocated for the execution of T synthesis, and finally an automatic test case generation tool T auto which uses a certain kind of automatic test case generation technique T reg. The output of the algorithm is a patch pt to the buggy program P.

    Algorithm 1: Algorithm for the Proposed Approach UnsatGuided
    Input: A buggy program P and its manually written test suite TS
    Input: A synthesis-based repair technique T synthesis and the time budget TB
    Input: An automatic test case generation tool T auto
    Output: A patch pt to the buggy program P
     1: pt initial ← T synthesis(P, TS, TB)
     2: if pt initial = null then
     3:   pt ← null
     4: else
     5:   AGTS ← ∅
     6:   pt ← pt initial
     7:   TS aug ← TS
     8:   t initial ← getPatchGenTime(T synthesis(P, TS, TB))
     9:   {file i} (i = 1, 2, ..., n) ← getInvolvedFiles(pt initial)
    10:   for i = 1 to n do
    11:     AGTS ← AGTS ∪ T auto(P, file i)
    12:   end for
    13:   for j = 1 to |AGTS| do
    14:     t j ← AGTS(j)
    15:     TS aug ← TS aug ∪ {t j}
    16:     pt intern ← T synthesis(P, TS aug, t initial × 2)
    17:     if pt intern ≠ null then
    18:       pt ← pt intern
    19:     else
    20:       TS aug ← TS aug − {t j}
    21:     end if
    22:   end for
    23: end if
    24: return pt

The algorithm directly returns an empty patch if T synthesis generates no patches within the time budget (lines 2-3).
In case T synthesis generates an initial patch pt initial within the time budget, the algorithm first conducts a set of initialization steps as follows: it sets the automatically generated test suite AGTS to be an empty set (line 5), sets the returned patch pt to be the initial patch pt initial (line 6), sets the augmented test suite TS aug to be the manually written test suite TS (line 7), and gets the time used by T synthesis to generate the initial patch pt initial and sets t initial to this value (line 8). Algorithm 1 then identifies the set of files {file i} (i = 1, 2, ..., n) involved in the initial patch pt initial (line 9) and, for each identified file, it uses the automatic test case generation tool T auto to generate a set of tests that target behaviors related with the file and adds the generated tests to the automatically generated test suite AGTS (lines 10-12). Next, the algorithm uses the test suite AGTS to refine the initial patch pt initial. For each test t j in the test suite AGTS (line 14), the algorithm first adds it to the augmented test suite TS aug (line 15) and runs technique T synthesis with test suite TS aug and the new time budget t initial × 2 against program P (line 16). The new time budget is used to quickly identify tests that can potentially contribute to strengthening the repair constraint, and thus to improve the scalability of the approach. Then, if the generated patch pt intern is not an empty patch, the algorithm updates the returned patch pt with pt intern (lines 17-18). In other words, the algorithm deems test t j a good test that can help improve the repair constraint. Otherwise, test t j is removed from the augmented test suite TS aug (lines 19-20), as t j is either a bug-exposing test or a test that slows down the repair process too much. After the above process has been completed for each test in the test suite AGTS, the algorithm finally returns patch pt as the desirable patch (line 24).

Remark: Note that for the synthesis-based repair technique T synthesis used as input, UnsatGuided does not make any changes to the patch synthesis process of T synthesis itself. In particular, most current synthesis-based repair techniques use component-based synthesis to synthesize the patch, including Nopol (Xuan et al (2017)), SemFix (Nguyen et al (2013)), and Angelix (Mechtaev et al (2016)). For component-based synthesis, one important problem is selecting and using the components from which patches are built. UnsatGuided keeps the original component selection and use strategy implemented by each synthesis-based repair technique. In addition, the order of trying each test in the test suite AGTS matters. Once a test is deemed helpful, it is added to the augmented test suite TS aug permanently and may impact the result of subsequent runs of other tests. The algorithm currently uses the size of the identified files involved in the initial patch to determine the test generation order. The larger the size of an identified file, the earlier the test generation tool T auto will generate tests for it. We generate tests for big files first because big files, in general, encode more logic than small files, and thus tests generated for them are more important.
Then, the algorithm uses the creation time of generated test files and the order of tests in a generated test file to prioritize tests. The earlier a test file is created, the earlier its test(s) will be tried by the algorithm. And if a test file contains multiple tests, the earlier a test appears in the file, the earlier the algorithm will try it. Future work will prioritize generated tests according to their potential to improve the repair constraint. Analysis of UnsatGuided UnsatGuided uses additional automatically generated tests to alleviate the overfitting problem for synthesis-based repair techniques. The performance of Unsat-Guided is mainly affected by two aspects. On the one hand, it is affected by how the synthesis-based repair techniques perform with respect to the original manually written test suite, i.e., it depends on the overfitting type of the original patch. On the other hand, it is affected by whether or not the automatic test case generation technique generates bug-exposing tests. Let us dwell on this. For ease of presentation, the initial repair constraint enforced by the manually written test suite is referred to as RC initial , and the repair constraints enforced by the normal and bug-exposing tests generated by an automatic test case generation technique are referred to as RC normal and RC buggy respectively. Note due to the nature of test generation technique T reg , RC buggy is wrong. Also, we use P original to denote the original patch generated using the manually written test suite by a synthesis-based repair technique. Finally, we also use the example program in Section 3.3.2 to illustrate the key points of our analysis. (1) P original is correct. In this case, RC initial is in general strong enough to drive the synthesis-based repair techniques to synthesize a correct patch. If the automatic test generation technique T reg generates bug-exposing tests, RC buggy will have contradictions with RC initial (note RC buggy is wrong) and UnsatGuided will recognize and discard these bug-exposing tests. Meanwhile, RC normal is likely to be already covered by RC initial and is not likely to make P original become incorrect by definition. It can happen that the synthesis process coincidentally synthesizes a correct patch even though RC initial is weak, but this case is relatively rare. Thus, UnsatGuided generally will not change an already correct patch into an incorrect one. For the example program in Section 3.3.2, suppose the manually written tests assertEquals(-1, f(0)), assertEquals(2, f(1)), assert Equals(8, f(7)), and assertEquals(7, f(8)) are provided. In this case, the synthesis process can already use the repair constraint imposed by these 4 tests to synthesize the correct condition if (x>0 && x<8). Even if a bug-exposing test such as assertEquals(4, f(5)) is generated, the repair constraint imposed by it will have a contradiction with the initial repair constraint (because it is impossible to synthesize a condition that satisfies the repair constraint imposed by all the 5 tests). Consequently, UnsatGuided will discard this bug-exposing test. (2) P original is A-overfitting. In this case, RC initial is not strong enough to drive the synthesis-based repair techniques to synthesize a correct patch. More specifically, RC initial is in general strong enough to fully reflect the desired behaviors for correct input domain I correct but does not fully reflect the desired behaviors for all input points within buggy input domain I bug . 
If the automatic test generation tool generates bug-exposing tests, the additional repair constraint enforced by a certain bug-exposing test does not necessarily have contradictions with RC initial . If this happens, UnsatGuided is not able to identify and discard this kind of bug-exposing tests, and the synthesis process will be driven towards keeping the buggy behaviors corresponding to the bug-exposing tests. However, note this does not mean that the overfitting issue of incomplete fixing is worsened. If the behavior enforced by the kept bug-exposing test is already covered by the original patch, then it is likely that the synthesis process is not driven towards finding a new alternative solution and the overfitting issue of incomplete fixing remains the same. If the behavior enforced by the bug-exposing test is not covered by the original patch, then the synthesis process is likely to return a new solution. While the new solution indeed covers the new behavior enforced by the kept bug-exposing test, it can possibly generalize more over the whole I bug compared to the original patch. Thus, the overfitting issue of incomplete fixing can both be worsened and improved if a new solution is returned. Meanwhile, the normal tests generated by T reg by definition are not likely to be able to give additional repair constraints for input points within I bug . Overall, for an A-overfitting patch, Un-satGuided is likely to have minimal positive impact and can coincidentally have a negative impact. To illustrate, assume the provided manually written tests are assertEquals (-1, f(0)), assertEquals(2,f(1)), assertEquals(7,f(6)), and assert Equals(7, f(8)) for the example program in Section 3.3.2. Using the repair constraint enforced by these tests, the synthesis process can possibly synthesize the condition if (x>0 && x<7), which is A-overfitting. Suppose bug-exposing test assertE quals(4, f(5)) is generated, it will be discarded as the repair constraint imposed by it will make the synthesis process unable to synthesize a patch. However, if bug-exposing test assertEquals(6, f(7)) is generated, it will be kept as there is no contradiction between the repair constraint enforced by it and that enforced by the manual tests and the synthesis process can successfully return a patch. In this specific case, even though the bug-exposing test is kept, the synthesized patch is not likely to change as the behavior enforced by the bug-exposing test is already covered by the original patch. In other words, the overfitting issue of incomplete fixing remains the same as the original patch. (3) P original is B-overfitting. In this case, RC initial is also not strong enough to drive the synthesis-based repair techniques to synthesize a correct patch. In particular, RC initial is in general strong enough to fully reflect the desired behaviors for buggy input domain I bug but does not fully reflect the desired behaviors for all input points within correct input domain I correct . In case the automatic test generation tool generates bug-exposing tests, RC buggy is likely to have contradictions with RC initial (note RC initial is in general strong enough for input points within I bug ). Thus, UnsatGuided will identify and discard these bug-exposing tests. Meanwhile, RC normal can supplement RC initial to better or even fully reflect the desired behaviors for input points within I correct . 
Therefore, UnsatGuided can effectively help a B-overfitting patch reduce the overfitting issue of regression introduction, and can possibly turn a B-overfitting patch into a truly correct one. For the example program in Section 3.3.2, assume the manually written tests assertEquals(-1, f(0)), assertEquals(2, f(1)), assertEquals(8, f(7)), and assertEquals(9, f(10)) are provided. Using the repair constraint enforced by these tests, the synthesis process can possibly synthesize the condition if (x>0 && x<10), which is B-overfitting. If the bug-exposing test assertEquals(5, f(6)) is generated, UnsatGuided will discard it as the repair constraint imposed by it will make the synthesis process unable to synthesize a patch. If a normal test such as assertEquals(8, f(9)) is generated by T reg, it provides an additional repair constraint for input points within I correct and can possibly help the synthesis process to synthesize the condition if (x>0 && x<9), which has less overfitting issue of regression introduction compared to the original patch. In particular, if the normal test assertEquals(7, f(8)) is generated by T reg, this test will help the synthesis process to synthesize the exactly correct condition if (x>0 && x<8).

(4) P original is AB-overfitting. This case is a combination of case (2) and case (3). UnsatGuided can effectively help an AB-overfitting patch reduce the overfitting issue of regression introduction, but has minimal positive impact on reducing the overfitting issue of incomplete fixing. Note that since bug-exposing tests by definition are not likely to give additional repair constraints for input points within the correct input domain I correct, the strengthened repair constraints for input points within I correct are not likely to be impacted even if some bug-exposing tests are generated and not removed by UnsatGuided. In other words, UnsatGuided will still be effective in alleviating the overfitting issue of regression introduction. Assume we have the manually written tests assertEquals(-2, f(-1)), assertEquals(2, f(1)), assertEquals(7, f(6)), and assertEquals(7, f(8)) for the example program in Section 3.3.2. Using the repair constraint enforced by these tests, the synthesis process can possibly synthesize the condition if (x>-1 && x<7), which is AB-overfitting. If the bug-exposing test assertEquals(6, f(7)) and the normal test assertEquals(-1, f(0)) are generated, both of them will be kept and the synthesis process can possibly synthesize the condition if (x>0 && x<7), which has the same overfitting issue of incomplete fixing but less overfitting issue of regression introduction compared to the original patch.

In summary, UnsatGuided is not likely to break an already correct patch generated by a synthesis-based repair technique. For an overfitting patch, UnsatGuided can effectively reduce the overfitting issue of regression introduction, but has minimal positive impact on reducing the overfitting issue of incomplete fixing. With regard to turning an overfitting patch into a completely correct patch, UnsatGuided is likely to be effective only when the original patch generated using the manually written test suite is B-overfitting.
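The toy example used throughout this analysis can be made concrete with a small brute-force sketch of the satisfiability check that UnsatGuided relies on. This is only our illustration: it assumes, as in Section 3.3.2, that candidate conditions have the shape x > a && x < b with integer constants a and b searched in a small range, whereas a real synthesis-based tool such as Nopol delegates this check to an SMT solver.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Brute-force illustration of the repair-constraint check for the toy program
    // of Section 3.3.2; real synthesis-based repair tools encode this as an SMT problem.
    public class ToyConditionSynthesis {

        // Candidate patched program: if (x > a && x < b) return x + 1; else return x - 1;
        static int f(int x, int a, int b) {
            return (x > a && x < b) ? x + 1 : x - 1;
        }

        // Returns a condition "x > a && x < b" satisfying all (input -> expected output)
        // pairs, or null if the repair constraint is unsatisfiable in the searched range.
        static String synthesize(Map<Integer, Integer> tests) {
            for (int a = -20; a <= 20; a++) {
                for (int b = -20; b <= 20; b++) {
                    boolean ok = true;
                    for (Map.Entry<Integer, Integer> t : tests.entrySet()) {
                        if (f(t.getKey(), a, b) != t.getValue()) { ok = false; break; }
                    }
                    if (ok) return "x > " + a + " && x < " + b;
                }
            }
            return null; // contradiction: no condition of this shape satisfies the tests
        }

        public static void main(String[] args) {
            Map<Integer, Integer> manual = new LinkedHashMap<>();
            manual.put(0, -1); manual.put(1, 2); manual.put(7, 8); manual.put(10, 9);
            // Any a in 0..6 and b in 8..10 satisfies the manual tests, so the constraint
            // underdetermines the condition; x > 0 && x < 10 is one admissible answer.
            System.out.println(synthesize(manual));

            Map<Integer, Integer> withBugExposing = new LinkedHashMap<>(manual);
            withBugExposing.put(5, 4); // wrong oracle captured from the buggy version
            System.out.println(synthesize(withBugExposing)); // null: UnsatGuided discards the test

            Map<Integer, Integer> withNormal = new LinkedHashMap<>(manual);
            withNormal.put(8, 7); // correct additional oracle
            System.out.println(synthesize(withNormal)); // only x > a && x < 8 remains admissible
        }
    }

Adding the bug-exposing pair (5, 4) makes the constraint unsatisfiable, which is exactly the contradiction UnsatGuided detects, while the normal pair (8, 7) strengthens the constraint towards the correct condition.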
Discussion

We now discuss the general usefulness of automatic test generation in alleviating overfitting for synthesis-based repair techniques. The overall conclusion is that the core limitation identified above, namely being effective mainly at reducing the overfitting issue of regression introduction, is not specific to the proposed technique UnsatGuided: it is a fundamental limitation shared by any technique that makes use of automatically generated tests to strengthen the repair constraint. The fundamental limitation arises because of the oracle problem in automatic test generation. Due to the oracle problem, some of the automatically generated tests can encode wrong behaviors, which are called bug-exposing tests in this paper. Once the initial patch generated using the manually written test suite has the overfitting issue of incomplete fixing, the normal tests generated by an automatic test generation tool are not likely to be able to strengthen the repair constraints for input points within I bug. While the bug-exposing tests generated by an automatic test generation tool can enforce additional repair constraints for input points within I bug, the additional repair constraints enforced by bug-exposing tests are wrong. Different techniques can differ in how they classify automatically generated tests into normal tests and bug-exposing tests and how they further use these two kinds of tests, but they all face this fundamental problem. Consequently, for synthesis-based repair techniques, automatic test generation will not be very effective for alleviating the overfitting issue of incomplete fixing. However, for the overfitting issue of regression introduction, the normal tests generated by an automatic test case generation tool can effectively supplement the manually written test suite to better build the repair constraints for input points within I correct. By using the strengthened repair constraint, synthesis-based repair techniques can synthesize a patch that has less or even no overfitting issue of regression introduction. According to this analysis, the usefulness of automatic test case generation in alleviating overfitting for synthesis-based repair techniques is mainly confined to reducing the overfitting issue of regression introduction.

Experimental Evaluation

In this section, we present an empirical evaluation of the effectiveness of UnsatGuided in alleviating overfitting problems for synthesis-based repair techniques. In particular, we aim to empirically answer the following research questions:

- RQ1: How frequently do overfitting issues of incomplete fixing and regression introduction occur in practice for synthesis-based repair techniques?
- RQ2: How does UnsatGuided perform with respect to alleviating overfitting issues of incomplete fixing and regression introduction?
- RQ3: What is the impact of UnsatGuided on the correctness of the patches?
- RQ4: How does UnsatGuided respond to bug-exposing tests?
- RQ5: What is the time overhead of UnsatGuided?

Subjects of Investigation

Subject Programs

We selected Defects4J (Just et al (2014a)), a well-known database of real faults from real-world Java programs, as the experimental benchmark. Defects4J has different versions, and the latest version of the benchmark contains 395 faults from 6 open source projects. Each fault in Defects4J is accompanied by a manually written test suite which contains at least one test that exposes the fault. In addition, Defects4J also provides commands to easily access faulty and fixed program versions for each fault, making it relatively easy to analyze them.
Among the 6 projects, Mockito was configured and added to the Defects4J framework only recently (after we started the study presented in this paper). Thus we do not include the 38 faults for Mockito in our study. Besides, we also discard the 133 faults for the Closure compiler, as its tests are organized using scripts rather than standard JUnit tests, which prevents these tests from running within our repair infrastructure. Consequently, we use the 224 faults of the remaining 4 projects in our experimental evaluation. Table 1 gives basic information about these 4 subjects.

Synthesis-based Repair Techniques

For our approach UnsatGuided to be implemented, we need a stable synthesis-based repair technique. In this study, Nopol (Xuan et al (2017)) is used as the representative of synthesis-based repair techniques. We select it for two reasons. First, Nopol is the only publicly available synthesis-based repair technique that targets modern Java code. Second, it has been shown that Nopol is an effective automated repair system that can tackle real-life faults in real-world programs (Martinez et al (2016)).

Automatic Test Case Generation Tool

The automatic test case generation tool used in this study is EvoSuite (Fraser and Arcuri (2011)). EvoSuite aims to generate tests with maximal code coverage by applying a genetic algorithm. Starting with a set of random tests, it uses a coverage-based fitness function to iteratively apply typical search operators such as selection, mutation, and crossover to evolve them. Upon finishing the search, it minimizes the test suite with the highest code coverage with respect to the coverage criterion and adds regression test assertions. To our knowledge, EvoSuite is the state-of-the-art open source Java unit test generation tool. Compared with another popular test generation tool, Randoop (Pacheco and Ernst (2007)), some recent studies (Almasi et al (2017); Shamshiri et al (2015)) have shown that EvoSuite is better than Randoop in terms of a) compilable tests generated, b) minimized flakiness, c) false positives, d) coverage, and e) most importantly, the number of bugs detected. While the tests generated by EvoSuite can possibly have problems with creating complex objects, exposing complex conditions, accessing private methods or fields, creating complex interactions, and generating appropriate assertions, they can in general be considered effective in finding bugs in open-source and industrial systems (Shamshiri et al (2015)). Besides, as shown in Algorithm 1, the approach UnsatGuided requires that the automatic test case generation tool is able to target a specific file of the program under repair. EvoSuite is indeed capable of generating tests for a specific class.
To generate more tests and make the test generation process itself as deterministic as possible, i.e., so that the generated tests would be the same if somebody else repeats our experiment, we made some changes to the timeout value, the search budget value, and the sandboxing and mocking settings in the default EvoSuite options. The complete EvoSuite setting is available on GitHub.

Experimental Setup

For each of the 224 studied faults in the Defects4J dataset, we run the proposed approach UnsatGuided against it. Whenever the test generation process is invoked, we run EvoSuite 30 times with different seeds to account for the randomness of EvoSuite, following the guideline given in (Arcuri and Briand (2011)). The 30 seeds are 30 integer numbers randomly selected between 1 and 200. In addition, EvoSuite can generate tests that do not compile or tests that are unstable (i.e., tests which could fail or pass for the same configuration) due to the use of non-deterministic APIs such as date and time of day. Similar to the work in (Just et al (2014b); Shamshiri et al (2015)), we use the following process to remove the uncompilable and unstable tests if they exist: (i) Remove all uncompilable tests; (ii) Remove all tests that fail during re-execution on the program to be repaired; (iii) Iteratively remove all unstable tests: we execute each compilable test suite on the program to be repaired five times consecutively. If any of these executions reveals unstable tests, we remove these tests and re-compile and re-execute the test suite. This process is repeated until all remaining tests in the test suite pass five times consecutively.

Our experiment is extremely time-consuming. To make the time cost manageable, the timeout value for UnsatGuided, i.e., the input time budget in Algorithm 1 for Nopol, is set to 40 minutes in our experimental evaluation. Besides this change to the global timeout value, we use the default configuration parameters of Nopol during its run. The experiment was run on a cluster consisting of 200 virtual nodes, each running Ubuntu 16.04 on a single Intel 2.68 GHz Xeon core with 1 GB of RAM. As UnsatGuided invokes the synthesis-based repair technique for each generated test, the whole repair process may still cost a lot of time. When this happens, we reduce the number of considered seeds. This is the case for 2 faults (Chart_26 and Math_24), for which combining Nopol with UnsatGuided generally costs more than 13 hours for each EvoSuite seed. Consequently, we use 10 seeds for these two bugs only, for the sake of time. Following open-science ethics, all the code and data is made publicly available on the GitHub site mentioned in Section 4.1.3.

Evaluation Protocol

We evaluate the effectiveness of UnsatGuided from two angles: its impact on the overfitting issues of the original patch generated by Nopol, and its impact on the correctness of that patch.

Assess Impact on Overfitting Issue

We have several major phases to evaluate the impact of UnsatGuided on the overfitting issues of the original Nopol patch.

(1) Test Case Selection and Classification. To determine whether a patch has the overfitting issue of incomplete fixing or regression introduction, we need to see whether the corresponding patched program will fail tests from the buggy input domain I bug or the correct input domain I correct of the program to be repaired.
As it is impractical to enumerate all tests from these two input domains, in this paper we view all tests generated for all seeds during our run of UnsatGuided (see Section 4.2) for a buggy program version as a representative subset of tests from these two input domains for this buggy program version. We believe this is reasonable for two reasons. On the one hand, we use a large number of seeds (30 in most cases) for each buggy program version, so we will in general have a large number of tests for each buggy program version. On the other hand, these tests all focus on testing the behaviors related with the highly suspicious files involved in the patches. We then need to classify the generated tests as being in the buggy input domain or in the correct input domain. Recall that during our run of UnsatGuided, EvoSuite uses the version-to-be-repaired as the oracle to generate tests. After the run of UnsatGuided for each seed, we thus have an EvoSuite test set which contains both 1) normal tests, whose inputs are from I correct and whose assertions are right, and 2) bug-exposing tests, whose inputs are from I bug and whose assertions are wrong. To distinguish these two kinds of tests, we use the correct version of the version-to-be-repaired. Note that the assumption of the existence of a correct version is used here just for evaluation purposes; we do not have this assumption for the run of UnsatGuided. More specifically, given a buggy program P buggy, the correct version P correct of P buggy, and an EvoSuite test suite TS Evo_i generated during the run of UnsatGuided for seed seed i, we run TS Evo_i against P correct to identify bug-exposing tests. As TS Evo_i is generated from P buggy, tests can possibly assert wrong behaviors. Thus, a test that fails on P correct is a bug-exposing test and is added to the test set TS bugexpo. Otherwise, it is a normal test and is added to the test set TS normal. For a certain buggy program version, this process is executed for each EvoSuite test suite TS Evo_j generated for each seed seed j of the seed set {seed j | 1 ≤ j ≤ N, N = 30 or 10}. Consequently, for a specific buggy program version, TS bugexpo contains all bug-exposing tests and TS normal contains all normal tests among all tests generated for all seeds during the run of UnsatGuided for this buggy program version.

(2) Analyze the Overfitting Issue of the Synthesized Patches. For a buggy program P buggy, the correct version P correct of P buggy, and the patch pc to P buggy, we then use the test sets TS bugexpo and TS normal identified in the previous step to analyze the overfitting issue of pc. To determine whether patch pc has the overfitting issue of regression introduction, we execute the program obtained by patching buggy program P buggy with pc against TS normal. If at least one test in TS normal fails, then patch pc has the overfitting issue of regression introduction. Determining whether patch pc has the overfitting issue of incomplete fixing is harder. The basic idea is to execute the program obtained by patching buggy program P buggy with pc against TS bugexpo; patch pc has the overfitting issue of incomplete fixing if at least one test in TS bugexpo fails. However, recall that the tests in TS bugexpo are generated based on the buggy version P buggy, i.e., their oracles are incorrect. Consequently, we first need to obtain the correct oracles for all tests in TS bugexpo.
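The classification step just described, together with the oracle update detailed right after this sketch, can be summarized as follows. The Program and TestCase interfaces are hypothetical placeholders for the real test execution harness (they are not part of EvoSuite, Nopol, or our scripts); the sketch only illustrates the evaluation protocol.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the evaluation protocol only; the nested interfaces stand in for
    // whatever harness actually executes the generated tests.
    public class GeneratedTestTriage {

        interface Program { }

        interface TestCase {
            boolean passesOn(Program p);          // execute the test against a program version
            Object observedValueOn(Program p);    // value actually computed by the program under test
            TestCase withExpectedValue(Object v); // copy of the test with a new assertion oracle
        }

        static class Classified {
            final List<TestCase> normal = new ArrayList<>();       // TS normal
            final List<TestCase> bugExposing = new ArrayList<>();  // TS bugexpo
        }

        // Step (1): a generated test whose assertion fails on the correct version
        // P correct was built from buggy behavior, hence it is bug-exposing.
        static Classified classify(List<TestCase> generated, Program pCorrect) {
            Classified c = new Classified();
            for (TestCase t : generated) {
                if (t.passesOn(pCorrect)) c.normal.add(t);
                else c.bugExposing.add(t);
            }
            return c;
        }

        // Oracle update used for the incomplete-fixing check: replace the expected
        // value captured on P buggy with the value observed on P correct, turning
        // TS bugexpo into TS bugexpo' with correct oracles.
        static List<TestCase> repairOracles(List<TestCase> bugExposing, Program pCorrect) {
            List<TestCase> repaired = new ArrayList<>();
            for (TestCase t : bugExposing) {
                repaired.add(t.withExpectedValue(t.observedValueOn(pCorrect)));
            }
            return repaired;
        }
    }

The repairOracles method anticipates the oracle-update procedure described in the next paragraph.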
We again use the correct version P correct to achieve this goal, and the process is as follows. First, for each failing assertion contained in a test from TS bugexpo, we capture the value it receives when the test is executed on the correct version P correct. For instance, given a failing assertion assertEquals(10, calculateValue(y)), 10 is the value that the assertion expects and the value from calculateValue(y) is the received value. For this specific example, we need to capture the value of calculateValue(y) on P correct (note that the value P buggy returns for calculateValue(y) is 10). Then, we replace the expected value in the failing assertion with the received value established on P correct. For the previous example, if calculateValue(y) returns the value 5 on P correct, the repaired assertion is assertEquals(5, calculateValue(y)). The above process turns TS bugexpo into TS bugexpo', in which all bug-exposing tests have correct oracles. After obtaining TS bugexpo', we run TS bugexpo' against the program obtained by patching buggy program P buggy with pc. If we observe any failing tests, then patch pc has the overfitting issue of incomplete fixing.

(3) Measure Impact. To evaluate the impact of UnsatGuided on the overfitting issues for a certain buggy program version, we compare the overfitting issues of the original Nopol patch pc original generated using the manually written test suite with those of the new patch pc new generated after running UnsatGuided. More specifically, the process is as follows. First, we use phases (1) and (2) to see whether the original patch pc original has the overfitting issue of incomplete fixing or regression introduction. When we observe failing tests from TS normal or TS bugexpo', we record the exact number of failing tests. The recorded number represents the severity of the overfitting issue. Second, for a patch pc new_i generated by running UnsatGuided using a certain seed seed i, we also use phases (1) and (2) to see whether the new patch pc new_i has the overfitting issue of incomplete fixing or regression introduction, and record the number of failing tests if we observe failing tests from TS normal or TS bugexpo'. Note that besides the test suite (corresponding to seed i) used by UnsatGuided to generate pc new_i, we also use all the other test suites generated for the other seeds to evaluate the overfitting issues of pc new_i. Finally, the result obtained for pc new_i is compared with that for pc original to determine the impact of UnsatGuided. We repeat this process for each patch generated using each seed for a certain program version (i.e., the patch set {pc new_i | 1 ≤ i ≤ N, N = 30 or 10}), and use the average result to assess the overall impact of UnsatGuided.

Assess Impact on Correctness

We compare the correctness of the patch generated after the run of UnsatGuided with that of the patch generated using Nopol alone to see the impact of UnsatGuided on patch correctness. To determine the correctness of a patch, the process is as follows. First, we look at whether the generated tests reveal that there exist overfitting issues for a certain generated patch, according to the procedure in Section 4.3.1. Second, we manually analyze the generated patch and compare it with the corresponding human patch. A generated patch is deemed correct only if it is exactly the same as or semantically equivalent to the human patch. The equivalence is established based on the authors' understanding of the patch.
To reduce the possible bias introduced by this process as much as possible, two of the authors analyze the correctness of the patches separately, and the results reported in this paper are based on the agreement between them. Note that the corresponding developer patches for several buggy versions trigger exceptions and emit text error messages if certain conditions are true; we count a generated patch as correct if it triggers the same type of exceptions as the human patch under the exception conditions, and we do not take the error message into account. Note also that, due to the use of different Nopol versions, the Nopol patches generated in this paper for some buggy versions are different from those generated in (Martinez et al (2016)). We thus replicate the manual analysis of the original Nopol patches. As we use a large number of seeds (30 in most cases) for running UnsatGuided, it can happen that we have a large number of generated patches that are different from the original Nopol patch for a certain buggy version. Given the inherent difficulty of the manual analysis, it is unrealistic to analyze all of the newly generated patches. To make the manual analysis feasible, for each buggy version, we randomly select one patch that is different from the original Nopol patch across all of the different kinds of patches generated for all seeds. It can happen that for a certain buggy version, the newly generated patches after the run of UnsatGuided for all seeds are the same as the original Nopol patch. In this case, it is obvious that UnsatGuided has no impact on patch correctness.

Result Presentation

Table 2 displays the experimental results on combining Nopol with UnsatGuided (hereafter referred to as Nopol+UnsatGuided). This table only shows the Defects4J bugs that can be originally repaired by Nopol, and their identifiers are listed in column Bug ID. The results obtained by running EvoSuite are shown in the two columns under the column Tests, among which the #EvoTests column shows the total number of tests generated by EvoSuite for all seeds and the #Bug-expo column shows the number of bug-exposing tests among all of the generated tests. The results obtained by running just Nopol are shown in the columns under the column Nopol. The Time column shows the time used by Nopol to generate the initial patch. The incomplete fix (#failing) column shows the overfitting issue of incomplete fixing for the original Nopol patch. Each cell in this column is of the form X (Y), where X can be "Yes" or "No" and Y is a digit number. "Yes" and "No" mean that the original Nopol patch has or does not have the overfitting issue of incomplete fixing, respectively. The digit number in parentheses shows the number of bug-exposing tests on which the original Nopol patch fails. Similarly, the regression (#failing) column shows the overfitting issue of regression introduction for the original Nopol patch, and each cell in this column has the same form as in the column incomplete fix (#failing). "Yes" and "No" in this column mean that the original Nopol patch has or does not have the overfitting issue of regression introduction, respectively. The digit number in parentheses shows the number of normal tests on which the original Nopol patch fails. Finally, the column correctness shows whether the original Nopol patch is correct, with "Yes" representing correct and "No" representing incorrect. The results obtained by running Nopol+UnsatGuided are shown in the remaining columns under the column Nopol+UnsatGuided. The #Removed column shows the total number of generated tests removed during the run of Nopol+UnsatGuided for all seeds. The number of bug-exposing tests among the removed tests is shown in the column #Removed Bug-expo.
The Avg#Time column shows the average time used by Nopol+UnsatGuided to generate the patch for each seed. The Change ratio (#unique) column is of the form X /Y (Z ). Here Y is the number of different seeds used, X refers to the number of generated patches by Nopol+UnsatGuided that are different from the original Nopol patch, and Z is the number of distinct patches among all of the patches generated for all seeds. The following two columns fix completeness change (Avg#Removedinc) and regression change (Avg#Removedreg) show the effectiveness of UnsatGuided in alleviating overfitting issue of incomplete fixing and regression introduction respectively. Each cell in these two columns is of the form X (Y), where X can be "improve", "worse", and "same" and Y is a digit number. Compared with the original Nopol patch, the "improve", "worse", and "same" in column fix completeness change (Avg#Removedinc) mean that the new patch generated by running Nopol+UnsatGuided has less, more, and the same overfitting issue of incomplete fixing respectively. The digit number gives a more detailed information. In particular, it gives the average number of removed failing bug-exposing tests for the new patch generated by running Nopol+UnsatGuided compared with the original Nopol patch. In other words, the digital value is obtained by subtracting the average number of failing bug-exposing tests for the new patch generated by running Nopol+UnsatGuided from the number of failing bug-exposing tests for the original Nopol patch. A positive value is good, which shows that the new patch has less overfitting issue of incomplete fixing in a way. For example, a value of 1 says that the new patch does not exhibit overfitting issue of incomplete fixing anymore for a test case within I bug . Similarly, compared with the original Nopol patch, the "improve", "worse", and "same" in column regression change (Avg#Removedreg) mean that the new patch generated by running Nopol+UnsatGuided has less, more, and the same overfitting issue of regression introduction respectively. Compared with the original Nopol patch, the digit number in column regression change (Avg#Removedreg) gives the average number of removed failing normal tests for the new patch generated by running Nopol+UnsatGuided, and it equates to the value obtained by subtracting the average number of failing normal tests for the new patch generated by running Nopol+UnsatGuided from the number of failing normal tests for the original Nopol patch. Again, a positive value is good, which shows that the new patch has less overfitting issue of regression introduction in a way. For example, a value of 2 says that the new patch does not exhibit overfitting issue of regression introduction anymore for two test cases within I correct . Note for the patch generated using Nopol+UnsatGuided for a certain seed, the tests considered are all tests generated using all seeds for the corresponding program version. We average the results for all seeds of a certain program version and the resultant numbers are shown as digit numbers in the columns fix completeness change (Avg#Removedinc) and regression change (Avg#Removedreg). Overall, a positive digit number in these two columns shows an improvement: it means that overfitting issue of incomplete fixing or regression introduction has been alleviated after running UnsatGuided. 
In addition, we use "perfect" to refer to the situation where for each seed of a certain program version, running Nopol +UnsatGuided with the seed will get a patch that will completely remove the overfitting issue of the original Nopol patch. The "perfect" results are illustrated with ( ). Finally, the column correctness under the column Nopol +UnsatGuided shows whether the selected patch generated by running Nopol +UnsatGuided is correct, again with "Yes" representing correct and "No" representing incorrect. RQ1: Prevalence of the Two Kinds of Overfitting Issues We first want to measure the prevalence of overfitting issues of incomplete fixing and regression introduction among the patches generated by synthesis-based repair techniques. We can see from the incomplete fix (#failing) and regression (#failing) columns under the column Nopol that for the 42 buggy versions that Nopol can generate an initial patch, overfitting can be observed for 26 buggy versions (when there exists "Yes" in either of these two columns). Among the other 16 buggy versions for which we do not observe any kinds of overfitting issues, the manual analysis shows that the Nopol patches for two buggy versions (Lang_44 and Lang_55) are correct. However, the manual analysis shows that the Nopol patches for the remaining 14 buggy versions are incorrect, yet we do not observe any number of failing bug-exposing or normal tests for the programs patched with the patches generated by Nopol. This shows the limitation of automatic test case generation in covering the buggy input domain I bug for real programs, which confirms a previous study [START_REF] Shamshiri | Do automatically generated unit tests find real faults? an empirical study of effectiveness and challenges (t)[END_REF]). Among the 26 buggy versions for which we observe overfitting issues, the original Nopol patches for 13 buggy versions have the overfitting issue of incomplete fixing, the original Nopol patches for 19 buggy versions have the overfitting issue of regression introduction, and the original Nopol patches for 6 buggy versions have both the overfitting issues of incomplete fixing and regression introduction. Thus, both the overfitting issues of incomplete fixing and regression introduction are common for the Nopol patches. It can also be seen from Table 2 that the severity of overfitting differs from one patch to another as measured by the number of failing tests. Among the 13 patches that have overfitting issue of incomplete fixing, the number of failing bugexposing tests is less than 3 for 3 patches (which implies the overfitting issue is relatively light), yet this number is larger than 20 for 3 patches (which implies the overfitting issue is relatively serious). Similarly, for the 19 patches that have overfitting issue of regression introduction, the number of failing normal tests is less than 3 for 1 patch (which implies the overfitting issue is relatively light), yet this number is larger than 20 for 6 patches (which implies the overfitting issue is relatively serious). Answer for RQ1: Both overfitting issues of incomplete fixing (13 patches) and regression introduction (19 patches) are common for the patches generated by Nopol. RQ2: Effectiveness of UnsatGuided in Alleviating Overfitting Issues We then want to assess the effectiveness of UnsatGuided. 
It can be seen from the column Change ratio (#unique) of Table 2 that for the 42 buggy versions that can be initially repaired by Nopol, the patches generated for 34 buggy versions have been changed at least for one seed after running Nopol+UnsatGuided. If we consider all executions (one per seed per buggy version), we obtain a total of 1220 patches with Nopol+UnsatGuided. Among the 1220 patches, 702 patches are different from the original patches generated by running Nopol only. Thus, Unsat-Guided can significantly impact the output of the Nopol repair process. We will further investigate the quality difference between the new Nopol+UnsatGuided patches and the original Nopol patches. The results for alleviating the two kinds of overfitting issues by running Nopol+ UnsatGuided are displayed in the columns fix completeness change (Avg #Removedinc) and regression change (Avg#Removedreg) of Table 2. With regard to alleviating the overfitting issue of incomplete fixing, we can see from the column fix completeness change (Avg#Removedinc) that UnsatGuided has an effect on 4 buggy program versions (Math_50, Math_80, Math_87 and Time_4). For all those 4 buggy versions, the original Nopol patch already has the overfitting issue of incomplete fixing. With UnsatGuided, the overfitting issue of incomplete fixing has been alleviated in 2 cases (Math_50, Time_4) and worsened for 2 other cases (Math_80, Math_87). This means UnsatGuided is likely to have a minimal positive impact on alleviating overfitting issue of incomplete fixing and can possibly have a negative impact on it, confirming our analysis in Section 3. We will further discuss this point in RQ4 (Section 4.8). In terms of alleviating overfitting issue of regression introduction, we can see from the column regression change (Avg#Removedreg) that UnsatGuided has an effect on 18 buggy program versions. Among the 18 original Nopol patches for these 18 buggy program versions, UnsatGuided has alleviated the overfitting issue of regression introduction for 16 patches. In addition, for 6 buggy program versions, the overfitting issue of regression introduction of the original Nopol patch has been completely removed. These 6 cases are indicated with ( ) in Table 2. Meanwhile, UnsatGuided worsens the overfitting issue of regression introduction for two other original Nopol patches (Math_33 and Time_7). It can possibly happen as even though the repair constraint for input points within I correct has been somewhat strengthened (but not completely correct), yet the solution of the constraint happens to be more convoluted. Overall, with 16 positive versus 2 negative cases, UnsatGuided can be considered as effective in alleviating overfitting issue of regression introduction. Answer for RQ2: UnsatGuided can effectively alleviate the overfitting issue of regression introduction (16/19 cases), but has minimal positive impact on reducing the overfitting issue of incomplete fixing. This results confirm our deductive analysis of the effectiveness of UnsatGuided in alleviating the two kinds of overfitting issues (Section 3). RQ3: Impact of UnsatGuided on Patch Correctness We will further assess the impact of UnsatGuided on the correctness of the patches. More specifically, we will assess 1) whether running Nopol+UnsatGuided destroys the already correct patches generated by Nopol (i.e., make them become incorrect) and 2) whether running Nopol+UnsatGuided can change an overfitting patch generated by Nopol into a completely correct one. 
Can already correct patches be broken? The previous paper (Martinez et al (2016)) claims that running Nopol can generate correct patches for 5 buggy program versions Chart_5, Lang_44, Lang_55, Lang_58, and Math_50. However, for three of them (Chart_5, Lang_58, and Math_50), we can see from Table 2 that some EvoSuite tests fail on the original Nopol patches. Due to the use of different Nopol versions, the Nopol patch generated in this paper for Math_50 is different from that in (Martinez et al (2016)). We run the EvoSuite tests against the Nopol patch in (Martinez et al (2016)) and we also observe failing tests. To ensure the validity of the bug detection results, two authors of this paper have manually checked the correctness of the patches generated for these three buggy versions in the paper (Martinez et al (2016)). The overall results suggest that the original Nopol patches for these three program versions are not truly correct, which shows the inherent difficulty of manual analysis. For the other 2 buggy program versions (Lang_44 and Lang_55), there is no indication of overfitting and we consider the original Nopol patches as well as the new patches generated by running Nopol+UnsatGuided as correct. We now demonstrate why they can be considered as correct. For Lang_44, the bug arises for a method which parses a string to a number (String to int, long, float or double) (see Figure 2). If the string (val ) only contains the char L which specifies the type long, the method returns an Index-OutOfBoundsException (due to the expression numeric.substring(1) in the if condition) instead of the expected NumberFormatException, the other situations have already been correctly handled. The human patch adds a check at the beginning of the method to avoid this specific situation. The original Nopol patch sim- when the state of the timer is running according to the logic of the utility class. Consequently, both of the two added preconditions are semantically equivalent to the precondition added by human beings. In summary, the correct patches generated by Nopol are still correct for all seeds after running Nopol+UnsatGuided. Can an overfitting patch be changed into a correct one? It has already been shown that running Nopol+UnsatGuided can significantly change the original Nopol patch and can effectively alleviate the overfitting issue of regression introduction in the original Nopol patch. We want to further explore whether an overfitting patch can be changed into a correct one after running Nopol+UnsatGuided. Comparing the two correctness columns under the column Nopol and column Nopol +UnsatGuided, we can see that there exists one buggy version (Math_85) for which the original Nopol patch is incorrect but the sampled patch generated by running Nopol+UnsatGuided is correct. For Math_85, the bug arises as the value of a condition is not handled appropriately (see Figure 4). The human patch changes the binary relational operator from ">=" to ">", i.e., replacing if (fa * fb >= 0.0) with if (fa * fb > 0.0). The original Nopol patch adds a precondition if (fa * fb < 0.0) before the if condition in the code, which in turn will result in a selfcontradictory condition and is thus incorrect. The sampled Nopol+UnsatGuided patch is adding a precondition if (fa * fb != 0.0) before the if condition, which equates to the human patch semantically and is thus correct. 
After further checking the results for this buggy version across all 30 seeds, we find that the generated Nopol+UnsatGuided patch is the same as this patch for 21 seeds. This example shows that UnsatGuided can possibly change an original overfitting Nopol patch into a correct one. Answer for RQ3: UnsatGuided does not break any already correct Nopol patch. Furthermore, UnsatGuided can change an overfitting Nopol patch into a correct one. This is in line with our analysis of the impact of UnsatGuided on patch correctness. RQ4: Handling of Bug-exposing Tests As we have seen in Section 3.4, the major challenge of using automatic test generation in the context of repair is the handling of bug-exposing tests. However, bug-exposing tests are not always generated. Now we concentrate on the 17 buggy program versions which contain at least one bug-exposing test, i.e., rows in Table 2 with the value of #Bug-expo larger than 0. For 4 bugs (Chart_5, Lang_44, Lang_51, Lang_63), UnsatGuided works perfectly because it removes all bug-exposing tests. Let us now explain what happens in those cases. The column incomplete fix (#failing) shows that for these 4 buggy versions, the original Nopol patch does not fail on any of the bug-exposing tests, which implies that the initial repair constraint established using the manually written test suite is strong and is likely to have reflected the desired behaviors for input points within I bug well. In this case, the additional repair constraints enforced by the bug-exposing tests have contradictions with the initial repair constraint and UnsatGuided indeed removes them, as it is designed for. If we do not take care of this situation and directly use all of the automatically generated tests without any removal technique, we are likely to lose the correct repair constraint and the acceptable patch with no overfitting issue of incomplete fixing. For the other 13 buggy program versions, the bug-exposing tests are either not removed at all (11 cases) or partially removed (2 cases, Math_50 and Math_80). The column incomplete fix (#failing) shows that for these 13 buggy versions, the original Nopol patch already fails on some of the bug-exposing tests, which implies that the initial repair constraint established using the manually written test suite does not fully reflect the desired behaviors for input points within I bug . Consequently, no contradiction happens during the synthesis process and these bug-exposing tests are not recognized and kept. Now, recall that we have explained in Section 3.4 that the presence of remaining bug-exposing tests does not necessarily mean worsened overfitting issue of incomplete fixing. Interestingly, this can be shown in our evaluation: for 9 bugs, the overfitting issue of incomplete fixing remains the same; for 2 bugs (Math_50 and Time_4), the overfitting issue of incomplete fixing is reduced (the digit value in column fix completeness change (Avg#Removedinc) is larger than 0); and for 2 other bugs (Math_80 and Math_87), the overfitting issue of incomplete fixing is worsened (the digit value in column fix completeness change (Avg#Removedinc) is smaller than 0). To sum up, the unremoved bug-exposing tests do not worsen overfitting issue of incomplete fixing for the original Nopol patch in the majority of cases (11/13 cases). Finally, let us check whether the presence of kept bug-exposing tests will have an impact on the capability of UnsatGuided in alleviating overfitting issue of regression introduction. 
For the 13 buggy program versions with at least one remaining bug-exposing test, we see that UnsatGuided is still able to alleviate the overfitting issue of regression introduction. This is the case for 5 buggy versions: Math_50, Math_81, Math_87, Math_105, and Time_4. This result confirms our qualitative analysis, i.e., the unremoved bug-exposing tests do not impact the effectiveness of UnsatGuided in alleviating the overfitting issue of regression introduction.
Answer for RQ4: When bug-exposing tests are generated, UnsatGuided does not suffer from a drop in effectiveness: the overfitting issue of incomplete fixing is not worsened in the majority of cases, and the capability of alleviating the overfitting issue of regression introduction is kept.
RQ5: Time Overhead
The time cost of an automatic program repair technique should be manageable if it is to be used in industry. We now discuss the time overhead incurred by UnsatGuided. To see the time overhead incurred, we compare the Time column under the column Nopol with the Avg#Time column under the column Nopol+UnsatGuided. First, we see that the approach UnsatGuided incurs some time overhead. Compared with the original repair time used by Nopol to find a patch, the average time used by running Nopol+UnsatGuided to get the patch is much longer. Second, the time overhead incurred is acceptable in many cases. Among the 42 buggy versions that can initially be repaired by Nopol, the average repair time used by running Nopol+UnsatGuided to get the patch is less than or equal to 1 hour for 28 buggy versions, which is arguably acceptable. Finally, we observe that the time overhead incurred can sometimes be extremely large. For 3 buggy versions (Chart_26, Math_24, and Math_33), running Nopol+UnsatGuided costs more than 10 hours on average to get the patch. In particular, the average time used by running Nopol+UnsatGuided to get the patch for Math_24 is 24.1 hours. The synthesis process of Nopol is slow for those cases, and the synthesis process is invoked for each generated test as required by UnsatGuided, so the large time cost is unsurprising. To reduce the time overhead, future work will explore advanced patch analysis to quickly discard useless tests and identify generated tests that have the potential to improve the patch.
Answer for RQ5: UnsatGuided incurs a time overhead, even though the overhead is arguably acceptable in many cases. To reduce the time overhead, more advanced techniques can be employed to analyze the automatically generated tests and discard useless ones.
Threats to Validity
We use 224 faults of 4 Java programs from Defects4J in this study, and one threat to external validity is whether our results will hold for other benchmarks. However, Defects4J is the most recent and comprehensive dataset of Java bugs currently available, and it was developed with the aim of providing real bugs to enable reproducible studies in software testing research. Besides, Defects4J has been extensively used as an evaluation subject by recent research work in software testing (B. [START_REF] Le | A learning-to-rank based fault localization approach using likely invariants[END_REF]; [START_REF] Pearson | Evaluating and improving fault localization[END_REF]; [START_REF] Laghari | Fine-tuning spectrum based fault localisation with frequent method item sets[END_REF]), and in particular by work in automated program repair (Martinez et al (2016); [START_REF] Xiong | Precise condition synthesis for program repair[END_REF]).
Another threat to external validity is that we evaluate the approach UnsatGuided by viewing Nopol as the representative of synthesis-based repair techniques, and doubts may arise as to whether the results will generalize to other synthesis-based repair techniques. Nopol, however, is the only open-source synthesis-based repair technique that targets modern Java code and can effectively repair real-life faults in real-world programs. A final threat to external validity is that only one automatic test case generation tool, i.e., EvoSuite, is used in the study. But EvoSuite is the state-of-the-art open-source Java unit test generation tool and can target a specific Java class as required by the proposed approach. Moreover, we run EvoSuite 30 times with different random seeds to account for the randomness of EvoSuite. Overall, the evaluation results are in line with our analysis of the effectiveness of UnsatGuided in alleviating different kinds of overfitting issues, and we believe the results can be generalized. A potential threat to internal validity is that we manually check the generated patches to investigate the impact of UnsatGuided on patch correctness. We used the human patch as the correctness baseline, and the human patch is also used to help us understand the root cause of the bug. This process may introduce errors. To reduce this threat as much as possible, the results reported in this paper are checked and confirmed by two authors of the paper. In addition, the whole artifact related to this paper is made available online to let readers gain a deeper understanding of our study and analysis.
Conclusion
Much progress has been made in the area of test suite based program repair over recent years. However, test suite based repair techniques suffer from the overfitting problem. In this paper, we deeply analyze the overfitting problem in program repair and identify two kinds of overfitting issues: incomplete fixing and regression introduction. We further define three kinds of overfitting patches based on the overfitting issues that a patch has. These characterizations of overfitting will help the community to better understand and design techniques to defeat the overfitting problem in program repair. We also propose an approach called UnsatGuided, which aims to alleviate the overfitting problem for synthesis-based repair techniques. The approach uses additional automatically generated tests to strengthen the repair constraint used by synthesis-based repair techniques. We analyze the effectiveness of UnsatGuided with respect to alleviating different kinds of overfitting issues. The general usefulness of automatic test case generation in alleviating the overfitting problem is also discussed. An evaluation on the 224 bugs of the Defects4J repository has confirmed our analysis and shows that UnsatGuided is effective in alleviating the overfitting issue of regression introduction.
The results of running EvoSuite are shown in the two columns under the column Tests: the #EvoTests column shows the total number of tests generated by EvoSuite over all seeds, and the #Bug-expo column shows the number of bug-exposing tests among all of the generated tests.
Fig. 4  Code snippet of buggy program version Math_85.
Table 1  Descriptive Statistics of the 224 Considered Faults in Defects4J

    Subjects        #Bugs   Source KLoC   Test KLoC   #Tests   Dev years
    JFreechart         26            96          50    2,205          10
    Commons Math      106            85          19    3,602          14
    Joda-Time          27            28          53    4,130          14
    Commons Lang       65            22           6    2,245          15

Table 2  Experimental results with Nopol+UnsatGuided on the Defects4J repository; only bugs with test-suite adequate patches by plain Nopol are shown.

Fig. 3  Code snippet of buggy program version Lang_55.

    public void stop() {
        if (this.runningState != STATE_RUNNING && this.runningState != STATE_SUSPENDED) {
            throw new IllegalStateException("Stopwatch is not running. ");
        }
        // MANUAL PATCH:
        // if (this.runningState == STATE_RUNNING)
        // NOPOL PATCH:
        // if (this.runningState != STATE_SUSPENDED)
        // NOPOL+UnsatGuided PATCH:
        // if (this.stopTime <= this.startTime)
        stopTime = System.currentTimeMillis();
        this.runningState = STATE_STOPPED;
    }

In this paper, we use "fault" and "bug" interchangeably.
We do not use the techniques that generate assertions from runs of different program versions ([START_REF] Taneja | Diffgen: Automated regression unit-test generation[END_REF]; [START_REF] Evans | Differential testing: A new approach to change detection[END_REF]).
https://github.com/Spirals-Team/test4repair-experiments
Renaud Bourlès, Anastasia Cozarenco (email: [email protected]), Dominique Henriet, Xavier Joutard, Maria Laura Alzua, Pavlo Blavatskyy, Mohamed Belhaj, Sebastian Bervoets, Yann Bramoullé, Habiba Djebbari, Cecilia Garcia Penalosa, Marek Hudon, Robert Lensink, Thierry Magnac, David Martimort, Bernard Sinclair-Desgagné, Ariane Szafarz
Business Training and Loan Repayment: Theory and Evidence from Microcredit in France 1
Keywords: microcredit, business training, reverse asymmetric information
JEL Codes: C34, C41, D82, G21

Although most Microfinance Institutions (MFIs) invest in non-financial services such as business training, empirical evidence on the impact of training on microborrowers' performance is at best mixed. We address this issue by accounting for business training allocation and its possible effects on borrowers' behavior. We first show empirically (using data from a French MFI) that the relationship between business training allocation and borrowers' risk is complex and nonlinear. By taking this into account, we establish a positive effect of business training on the survival time of loans. These results are robust to controlling for the MFI's selection process. We moreover propose a theoretical explanation for the non-linear relationship between borrowers' risk and training allocation based on reverse asymmetric information, showing that it can lead to increased MFI outreach.

Introduction
Microfinance clients are individuals rejected by conventional banks due to their lack of collateral, credit history or experience of starting a business. Non-financial services play an important role in the microfinance sector: combined with financial services, they contribute to the alleviation of human capital constraints [START_REF] Schreiner | Replicating microfinance in the United States: Opportunities and challenges[END_REF] through a maximalist (versus minimalist) approach. Non-financial services take various forms, such as financial literacy [START_REF] Sayinzoga | Financial literacy and financial behaviour: Experimental evidence from rural Rwanda[END_REF], information about health and human rights, money management, and business training. European Microfinance Institutions (MFIs) have been involved in business training since their emergence [START_REF] Lammermann | Microfinance and business development services in Europe. A guide on good practices[END_REF]. Business training consists of entrepreneurial training or business development services that generally accompany business microloans, such as guiding the definition and development of the business project, providing information and help with obtaining financing, offering courses in accounting, management, marketing and law, etc. [START_REF] Armendariz | Microfinance for self-employment activities in the European urban areas: Contrasting Crédal in Belgium and Adie in France[END_REF] refers to "guided" microcredit to describe the main product provided by European MFIs. In France, the National Council for Statistical Information includes business support services in the definition of microcredit [START_REF] Valentin | Le microcredit[END_REF]. According to [START_REF] Botti | Microfinance in Europe: A survey of EMN-MFC members[END_REF], 58% of surveyed European MFIs provide non-financial services to their clients. Yet although business training is a recognized component of microfinance, existing evidence on its impact is mixed at best. There are at least two potential reasons for this.
First, some studies examine samples that may not be representative of the general population [START_REF] Mckenzie | What are we learning from business training and entrepreneurship evaluations around the developing world?[END_REF]. Second, even though recent studies have overcome the selection bias by using randomized controlled trials, they do not account for behavioral reactions that assignment to business training may trigger among participants. The failure to consider borrowers' behavioral reactions, as well as the rationale behind assignment to training, is thus likely to bias results on the impact of business training. Here, using data from a French MFI, we investigate the effect of business training on loan repayment, controlling for the process of assignment to training in bivariate probit and mixed (duration) models. First, we show that business training allocation is complex and that the relationship between borrowers' risk and assignment to business training is non-linear. More specifically, we find that the probability of being assigned to business training first increases with borrowers' risk, and then, beyond a certain threshold, decreases. Second, controlling for this non-linear effect, we find a positive impact of business training on loan survival time. This result is robust to correcting for potential selection bias [START_REF] Heckman | Sample selection bias as a specification error[END_REF] during the MFI's credit approval stage. To rationalize the non-linear effect of borrowers' risk on training allocation, we build a theoretical model based on the mechanisms of reverse asymmetric information, i.e. on the assumption that the MFI has better information on risk than the borrowers themselves. This assumption is plausible in contexts where MFIs are financing first-time micro-entrepreneurs who need financial backing to start a business, and who usually lack the necessary experience. In this case, the contract offered by the MFI (assignment to training or not) 3 reveals to the borrowers information about themselves, thereby impacting their actions. This "looking-glass self" mechanism was introduced by [START_REF] Cooley | Human Nature and the Social Order[END_REF]; to the best of our knowledge, our study is the first to introduce the concept in microfinance. Using a simple discrete model, we show that reverse asymmetric information can indeed generate non-linearity between borrowers' risk and assignment to business training. Moreover, we argue that in such a theoretical setting, reverse asymmetric information is likely to increase outreach to riskier borrowers, which is the ultimate goal of MFIs striving to mitigate financial exclusion. The remainder of the paper is structured as follows. In section 2 we review the extant literature. We present the institution providing data and the dataset in section 3. The econometric strategy is described in section 4 and the empirical results are outlined in section 5. We check the robustness of our results in section 6. A theoretical model rationalizing the intuition behind our empirical results is presented in section 7. Section 8 concludes. Literature review Our study contributes to three strands of the literature: (i) the impact of training programs in microfinance; (ii) the empirics of bivariate and trivariate probit and duration models (which can also be interpreted as scoring models in banking); and (iii) the theoretical effect of reverse asymmetric information. 
The extant literature is agnostic about the efficiency of business training in microfinance. For instance, [START_REF] Evans | The importance of business development services for microfinance clients in industrialized countries[END_REF] underlines some positive outcomes for business training under the Women's Initiative for Self Employment, whereas [START_REF] Edgcomb | What makes for effective micro-enterprise training[END_REF] reports mixed results on correlations between completed training and successful entrepreneurship outcomes in the United States. However, these studies ignore the non-random allocation of business training and the selection bias this may induce. More recent studies adopt an experimental approach using random business training allocation. For developing countries, for instance, [START_REF] Karlan | Teaching entrepreneurship: Impact of business training on microfinance clients and institutions[END_REF] find a significant impact of training on client retention and business knowledge improvement, but little evidence of impact on profit or revenue increase, in FINCA-Peru. 4 Berge et al. (2014) argue that business training combined with financial services improves business outcomes for male microentrepreneurs in Tanzania (the effects for women being non-significant). [START_REF] Bulte | Do gender and business trainings affect business outcomes? Experimental evidence from Vietnam[END_REF] find considerable impacts on knowledge, business practices and outcomes for female clients of an MFI in Vietnam; however these impacts take time to materialize. For developed countries, the evidence is scarce, with two welcome exceptions. [START_REF] Fairlie | Behind the GATE experiment: Evidence on effects of and rationales for subsidized entrepreneurship training[END_REF] find no long-lasting effects of business training in the United States for individuals who are potentially subject either to credit or human capital constraints or to discrimination in the labor market. Similarly, the randomized controlled trial conducted by [START_REF] Crépon | Les effets du dispositif d'accompagnement á la création d'entreprise CréaJeunes: Résultats d'une expérience contrôlée[END_REF] with ADIE (the largest French MFI) did not identify significant positive impacts in terms of business outcomes for participants. Beyond business outcomes, few studies have focused on the relationship between business training and credit repayment. 5 One exception is [START_REF] Karlan | Teaching entrepreneurship: Impact of business training on microfinance clients and institutions[END_REF], who find that access to training increases the probability of perfect repayment to the MFI; however, this result is only marginally significant. Similarly, [START_REF] Giné | Money or ideas? A field experiment on constraints to entrepreneurship in rural Pakistan[END_REF] find that training has no significant impact on repayment rates for microfinance clients in rural Pakistan. Unfortunately, we do not have data on business outcomes for borrowers in our study. However, we have access to detailed individual data on credit repayment history within the MFI. Therefore our main focus is the relationship between business training and credit repayment. Taking into account the behavioral aspects of assignment to business training (Benabou and Tirole, 2003a), we find a non-significant impact of business training on the probability of default, but a significant positive impact on loan survival time. 
Our empirical strategy is based on bivariate models where we jointly estimate two equations. The first equation models the business training allocation process, whereas the second equation models borrowers' risk. We first measure borrowers' risk using the probability of default in a bivariate probit model. A comparable bivariate probit model was developed by [START_REF] Boyes | An econometric analysis of the bank credit scoring problem[END_REF], where the two probit equations concern the loan granting process and borrowers' default respectively. However, the empirical literature argues that despite defaults, some loans may still be profitable if the default occurs sufficiently late. The bank might then be more concerned about the timing of a default than the default itself. [START_REF] Roszbach | Bank lending policy, credit scoring, and the survival of loans[END_REF] addresses this issue by providing a bivariate survival time model. In line with this study, we use loan survival time as an alternative measure of risk in a bivariate mixed (duration) model. [START_REF] Boyes | An econometric analysis of the bank credit scoring problem[END_REF] and [START_REF] Roszbach | Bank lending policy, credit scoring, and the survival of loans[END_REF] are examples of credit scoring models underlining the importance of controlling for banks' selection process during the approval stage [START_REF] Heckman | Sample selection bias as a specification error[END_REF]. To address selection bias, our paper pioneers the development of trivariate probit and mixed models to test for robustness of results, by adding a selection equation to our bivariate models. In our econometric models, we take advantage of the risk equation to estimate borrowers' intrinsic risk. First, we study the relationship between business training allocation and borrowers' intrinsic risk, which appears to be non-linear. Second, we take into account this complex relationship to study the impact of business training on loan repayment. The original feature of our paper lies in the development of formal empirical models addressing the endogeneity of business training allocation and its consequences. Our theoretical modeling explaining the non-linear relationship between risk and training assignment is based on reverse asymmetric information (where the principal is better informed than the agent) and the looking-glass self effect. The latter occurs when people in the agent's social environment attempt to manipulate his or her self-perception. This phenomenon has been widely studied in the sociological literature. The term "looking-glass self" was coined by [START_REF] Cooley | Human Nature and the Social Order[END_REF], who argued that people obtain a sense of who they are by observing how others perceive or treat them. In economics, this concept was first introduced by Benabou and Tirole (2003a) and Benabou and Tirole (2003b). Benabou and Tirole (2003b) state that for the looking-glass self effect to impact the agent's behavior, the principal must have private information relevant to the agent's behavior and the agent must be aware of the principal's superior information and objectives.
5 There are two main reasons for this. First, some of the studies focus on the impact of business training for beneficiaries who are not necessarily microcredit recipients. Second, repayment rates are very high in microfinance in developing countries [START_REF] Armendariz | The Economics of Microfinance[END_REF], so there is little heterogeneity in terms of credit default.
Benabou and Tirole (2003a) study various situations where the principal might be better informed than the agent (for example at school, in the labor market, and in the family) and also consider the case of an informed principal choosing a level of help to provide to the agent. 6The notion of informed principal was introduced by [START_REF] Myerson | Mechanism design by an informed principal[END_REF] and [START_REF] Maskin | The principal-agent relationship with an informed principal: The case of private values[END_REF]. However, it is only relevant in specific contexts. [START_REF] Ishida | Optimal promotion policies with the looking-glass effect[END_REF] uses a model with an informed principal to show that promotions in the labor market can be used strategically in the presence of the looking-glass self effect. [START_REF] Villeneuve | The consequences for a monopolistic insurance firm of evaluating risk better than customers: The adverse selection hypothesis reversed[END_REF] studies pooling and separating equilibria in a context where an insurer evaluates risk better than its customers. [START_REF] Swank | Motivating through delegating tasks or giving attention[END_REF] show how delegation and increased attention from an informed employer can improve the motivation of an uninformed employee. Crucially, these authors point out that their model only fits situations where agents are at the beginning of their career or are performing tasks for the first time, whereas the principal has previous experience with similar tasks or agents. This setting is remarkably similar to the microcredit market, where micro-entrepreneurs are borrowing from an experienced MFI to start a business for the first time. One contribution of this paper is to introduce the notion of informed principal and the looking-glass self effect to the credit market. Context and Data Institutional context of the MFI Créa-Sol, the MFI providing data for our study, was created in 2006 in the South of France as a nonprofit NGO, at the initiative of a commercial bank under its corporate social responsibility scheme. This MFI targets individuals who have difficulty accessing financial services from mainstream banks, mainly residing in the Provence-Alpes-Côte-d'Azur region. In line with its social mission statement, most of the MFI's clients are (long-term) unemployed, have low education and income levels and are starting a business for the first time in their lives. Most of them are seeking to become self-employed to escape unemployment and/or poverty. The MFI does not require any collateral or guarantees from its clients, which means that the total pool of applicants of this MFI is considered "too risky" by most commercial banks. The MFI provides both personal and business microcredit. We focus on business microcredit exclusively. In addition to microcredit services, Créa-Sol is highly active in business training provision. Nonfinancial services are an important feature of MFIs in France, which play a counseling and support role [START_REF] Brana | Microcredit: An answer to the gender problem in funding?[END_REF] and use soft information in their screening processes [START_REF] Cozarenco | Gender biases in bank lending: Lessons from microcredit in France[END_REF]. Providing business training is costly for MFIs, so they cut costs by forming partnerships with NGOs. 
We have information on all the applicants who were granted a microcredit by our MFI between May 2008 and May 2011, as well as on any accompanying business training provision. To our knowledge, the MFI's borrowers did not receive any training other than that mentioned in the data set. The MFI's clients include almost equal numbers of individuals receiving and not receiving training (55% and 45% respectively). We have no evidence that the MFI chooses primarily to train riskier clients. Unfortunately, we do not have data on business outcomes (e.g. profits, sales, etc.), only business forecasts from the application stage via a business plan presented by the applicant. Hence, we cannot investigate the link between business training provision and business outcomes. However, our data set contains detailed information on borrowers' behavior regarding ex-post repayment to the MFI. We use the number and dates of unpaid installments to explore the impact of business training on loan repayment. Each individual can apply only once for a microcredit. Our MFI aims to have all its clients bankable after their first microcredit. 8 The relationship between the MFI and the borrower proceeds as follows. After receiving a credit application, the MFI decides whether to accept or reject the loan. The decision process involves several stages. First, the loan officer presents the project during a credit committee meeting. Second, the credit committee takes the decision to grant the loan or not. Third, the MFI decides whether or not to provide training to the selected applicants. 9 Training is mandatory for the selected borrowers, who cannot refuse to participate. We then observe each client's microcredit repayment behavior, i.e. the number and dates of unpaid installments.
Data
Using Créa-Sol's data, we model two different processes:
1. Business training allocation
2. Borrower's risk based on his/her credit history
Table 1 gives the descriptive statistics for our data along with the t-tests to compare different group means. Information on 365 business microcredit borrowers was collected between May 2008 and May 2011. 10 The vast majority of these loans were for a business start-up or buy-out, rather than for business development. The average loan approved was EUR 8,900, the average interest rate was 4.2% 11 and the mean maturity was 52 months. Column (1) in Table 1 lists 22% of the borrowers as defaulting. We define as "defaulting" borrowers with 3 or more delayed payments in their credit history within the MFI. 12 The delayed payments need not be consecutive or remain unpaid, although most delayed payments in the database were consecutive. This definition mirrors the MFI's actual policy: it generally writes off all loans involving three or more consecutive delayed payments. However, our definition is more conservative and results in a larger percentage of defaulting loans than was actually registered by the MFI. In columns (2) and (3) of Table 1 we split the total sample into business training beneficiaries and non-beneficiaries respectively. 19% of the clients receiving business training are defaulting clients, against 25% for clients not receiving business training. However, this difference is not significant according to the t-test. Importantly, these preliminary descriptive statistics suggest that individuals assigned to training are not riskier ex-post than individuals without training.
This evidence reflects two possible scenarios: either training is targeted toward (ex-ante) high-risk individuals and is highly efficient (as ex-post borrowers with training are not riskier than borrowers without training), or training is not targeted exclusively toward high-risk borrowers and is not highly efficient. As pointed out above, studies in both developed and developing economies fail to corroborate the first scenario, where business training is highly efficient. This lends credence to the second scenario, where business training is not necessarily allocated to the riskier borrowers. Overall, 55% of the borrowers were assigned to business training. 13 In columns (5) and (6) we split the sample into defaulting and performing loans (loans that have strictly less than three delayed payments in their credit history). Almost half the defaulting loans versus 57% of performing loans were assigned to a training program, but this difference is not significant either. The individual characteristics of business training beneficiaries and of non-beneficiaries do not appear to differ much. Nevertheless, a few differences deserve mention. The proportion of long-term unemployed individuals is greater among borrowers assigned to business training. Moreover, business training beneficiaries have higher household incomes and their businesses have higher asset levels. Furthermore, they are more likely to have made other applications and to have been granted honor loans, 14 which is consistent with a microcredit setup where NGOs providing training programs in partnership with MFIs also provide honor loans. The variable Other applications often includes ongoing applications for an honor loan. Hence, there is a direct link between the two variables and the likelihood of being assigned to business training. These additional financing sources appear to be important factors in the MFI's decision to assign a borrower to a training program. Interestingly, borrowers sent by a mainstream bank are less likely to be assigned to a training program. Borrowers sent by a mainstream bank either have a co-financing loan from the bank (these are potentially less risky clients) or have been rejected by the bank (these are potentially riskier clients). These descriptive statistics suggest that the relationship between borrowers' risk and training allocation is complex and potentially non-linear. Therefore it is important to account for this effect when assessing the effect of business training on loan repayment. For the reasons outlined above, we use the variables Other applications, Honor loan, and Sent by a mainstream bank as instruments in the business training allocation process to identify our effects. As Table 1 illustrates, there are significant differences between defaulting and performing clients. Defaulting clients are more likely to be male, single, and long-term unemployed, with lower education, income levels, personal investment 15 and assets.
13 Assignment to a training program can be interpreted as treatment and borrowers can be divided into a treated and a control group respectively. From this perspective, our paper fits into the literature studying treatment effects. Nevertheless, treatment is obviously not randomly assigned in our case.
14 An honor loan is an interest-free loan subsidized by the French government for individuals willing to start a business in order to become self-employed. The government delegates the disbursement of these loans to NGOs, which may also provide training programs.
All these variables are taken into account to design our risk measure, or a borrower's score. Actually, we do not have information on the scoring model used by the MFI. Therefore we use ex-post information on credit history to estimate the borrower's ex-ante risk, assuming that the MFI's scoring strategy is based on its previous experience. This risk measure will allow us to model business training assignment and, consequently, to establish a positive effect of business training on credit repayment. Our econometric strategy is outlined in the next section.
4 Econometric model
The purpose of our paper is to study the effect of business training on microcredit repayment. To address this issue, we need to control for assignment to business training. We proceed as follows. First, we construct a measure of borrowers' (intrinsic) risk, or score, using the loan repayment equation. Second, we introduce this measure of borrowers' risk (first linearly and then quadratically) into the business training allocation equation. By simultaneously estimating the two equations, we show that this relationship is non-linear and, at the same time, we establish a positive effect of business training on loan survival time. To proxy borrowers' risk, we first use a probit equation that estimates the probability of a borrower defaulting in a bivariate probit model. Alternatively, we use the inverse of loan survival time in a bivariate mixed model. Among the control variables, we include individual, household and business characteristics presented in Table 1. In addition, we control for business cycles, 16 which obviously impact the riskiness of a project: an unfavorable economic environment during the start-up phase can jeopardize a business's chances of surviving. We therefore include quarterly rates of increase in business failures (as a measure of economic health) and in new business start-ups (as a measure of competition) at the time the loan is granted (and one and two quarters later) for each microenterprise in our sample, according to its sector of activity. Data for business cycles exclusively cover the French Southeastern PACA Region where our MFI operates. To test H1, we first introduce the variable Risk linearly and then quadratically (to capture the simplest form of non-linearity) in the business training allocation equation. H2 is tested by assessing the sign and the significance level of the Business training loading in the risk equation, as specified in the remainder of this section.
15 Low personal investment is a dummy taking value 1 if the applicant's personal financial contribution to the project is lower than 5% of the project size. We use this cut-off because it is the lowest available in our data after "No personal investment", and very few applicants provided no personal investment.
Bivariate probit model
To test the relationship between the probability of receiving business training and borrowers' risk, we add intrinsic risk to the business training equation. Actually, we jointly model two processes, namely the business training decision and the probability of defaulting, related by a common unobserved individual heterogeneity factor. This unobserved individual heterogeneity allows us to take into account the unobserved "soft" information about borrowers (motivation, skills, personality, etc.) collected by the MFI during face-to-face meetings. These factors drive the borrowers' behavior (for instance, through effort devoted to the business).
In addition, joint modeling controls for the endogeneity of business training in the default equation. In Model I, the two equations write

y^*_{1i} = \beta_1 x_i + \lambda_1 Risk + \epsilon_{1i}, \qquad y_{1i} = \begin{cases} 1 & \text{if } y^*_{1i} > 0 \text{ (business training)} \\ 0 & \text{otherwise} \end{cases} \tag{1}

y^*_{2i} = \beta_2 w_i + \eta B_i + \alpha_1 y_{1i} + \epsilon_{2i}, \qquad y_{2i} = \begin{cases} 1 & \text{if } y^*_{2i} > 0 \text{ (defaulting)} \\ 0 & \text{otherwise.} \end{cases} \tag{2}

In Model II, equation (1) writes

y^*_{1i} = \beta_1 x_i + \lambda_1 Risk + \lambda_2 Risk^2 + \epsilon_{1i}, \qquad y_{1i} = \begin{cases} 1 & \text{if } y^*_{1i} > 0 \text{ (business training)} \\ 0 & \text{otherwise} \end{cases} \tag{3}

and equation (2) remains unchanged. x_i is a vector of variables specific to the business training decision including Honor loan, Other applications and Sent by a mainstream bank. As described in the Data section, these three variables are directly linked to the business training process and are used as instruments in this equation to ensure model identification. w_i is a vector of various controls composed of individual, household and business characteristics. B_i is a vector of variables measuring the business cycle of the sector of activity of enterprise i. They ensure full identification of our model since they cannot impact training, as they occur after assignment to training. The correlation between the business training and defaulting processes is modeled by imposing the following structure on the error terms:

\epsilon_{1i} = \rho_1 v_i + \epsilon^0_{1i}, \qquad \epsilon_{2i} = \rho_{2i} v_i + \epsilon^0_{2i},

where the components \epsilon^0_{1i}, \epsilon^0_{2i} are independent idiosyncratic parts of the error terms, each assumed to follow a normal distribution N(0, 1). The common latent factor v_i is the individual unobserved heterogeneity factor. We assume that v_i ~ N(0, 1) and that this factor is independent of the idiosyncratic terms. Attached to v_i, the scedastic function \rho_{2i} \equiv \rho_2 \exp(\alpha_2 y_{1i} + \delta\, Education_i) represents uncertainty driven by borrowers' behavioral effects: here, business training indirectly impacts the probability of defaulting through \alpha_2. We moreover assume that the behavioral effect depends on the borrower's education level (or skills), through the coefficient \delta, which also represents the indirect effect of education on the probability of defaulting. The parameters \rho_1 and \rho_2 are free factor loadings which should be estimated. For identification reasons, we impose the constraint \rho_2 = 1. Hence, a borrower's intrinsic risk 17 is proxied by Risk = \Phi(w_i \beta_2 + v_i), where \Phi(\cdot) is the normal cumulative distribution function. We estimate the parameters using the maximum likelihood method (see Appendix 9.1 for details of the likelihood function). If H1 is verified, we expect λ1 (resp. λ1 and λ2) to be significantly different from zero in Model I (resp. Model II). If H2 is verified, we expect α1 to be significantly negative in equation (2). The results of the estimation of Models I and II are presented in Table 3 in the next section.
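To fix ideas, the following Python sketch (our own illustration; the authors' actual likelihood is detailed in their Appendix 9.1) shows how the shared factor v_i can be integrated out numerically for one borrower using Gauss-Hermite quadrature. Parameter names are hypothetical, Model I corresponds to setting lam2 = 0, and the full sample likelihood is the product of such individual terms.

    import numpy as np
    from scipy.stats import norm

    def individual_likelihood(y1, y2, x, w, B, educ, theta, nodes=31):
        # Unpack hypothetical parameter containers (names are ours, not the authors').
        b1, lam1, lam2 = theta["b1"], theta["lam1"], theta["lam2"]
        b2, eta = theta["b2"], theta["eta"]
        a1, a2, delta, rho1 = theta["a1"], theta["a2"], theta["delta"], theta["rho1"]

        gh_x, gh_w = np.polynomial.hermite.hermgauss(nodes)
        v = np.sqrt(2.0) * gh_x                 # quadrature nodes for v_i ~ N(0, 1)
        weights = gh_w / np.sqrt(np.pi)

        risk = norm.cdf(w @ b2 + v)             # Risk_i = Phi(w_i beta_2 + v_i)
        idx1 = x @ b1 + lam1 * risk + lam2 * risk**2 + rho1 * v   # training index (lam2 = 0 in Model I)
        rho2i = np.exp(a2 * y1 + delta * educ)                    # scedastic loading, rho_2 normalized to 1
        idx2 = w @ b2 + B @ eta + a1 * y1 + rho2i * v             # default index

        p1 = norm.cdf(idx1) if y1 == 1 else norm.cdf(-idx1)
        p2 = norm.cdf(idx2) if y2 == 1 else norm.cdf(-idx2)
        return float(np.sum(weights * p1 * p2))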
Bivariate mixed model
A characteristic of our data is that borrowers receive microcredits at different times and some microcredits are still active at the time of observation. Obviously, long-standing clients are more likely to default compared to newly-granted loans. Moreover, the time elapsed before delayed payments occur is observed. This longitudinal aspect of the data allows us to take into account a strong heterogeneity within defaulting loans. This richer information should provide a clearer picture of the true default process and a better assessment of a borrower's intrinsic risk. Importantly, as highlighted by [START_REF] Roszbach | Bank lending policy, credit scoring, and the survival of loans[END_REF], the impact of a default on an MFI's returns (or a bank's returns in his case) depends to a large extent on when (in the history of the loan) this default occurs. However, we cannot claim that this longitudinal approach will allow us to better replicate the MFI's assessment of borrowers' intrinsic risk. Put another way, we do not know whether the MFI is able to use this more sophisticated measure of risk based on longitudinal assessment, or whether it ignores this information and bases its decision solely on a simpler probit scoring model. The bivariate mixed model extends the previous model by adding information on the survival time of a loan, T_i. We choose an alternative measure of risk consisting in the inverse of the expected survival time. In this model, the risk equation covers the time that elapses before a default occurs, rather than just the default. We define t_i as follows. For defaulting loans, t_i is the number of days between the date the loan is granted and the date default occurs. For non-defaulting loans, t_i is the number of days between the date the loan is granted and the date of data extraction. Either the survival time is perfectly observed when a default occurs (y_{2i} = 1, i.e. T_i = t_i), or it is censored because the loan is still performing (y_{2i} = 0, i.e. T_i > t_i). The bivariate mixed model allows us to estimate survival time for each loan, assuming that survival time follows the Weibull distribution, the duration distribution most commonly used in applied econometrics [START_REF] Lancaster | The Econometric Analysis of Transition Data[END_REF]:

T_i \mid v_i, w_i, B_i, y_{1i} \sim \mathrm{Weibull}(\mu_i, \sigma), \tag{4}

where \mu_i \equiv \exp(\beta_2 w_i + \eta B_i + \alpha_1 y_{1i} + \rho_{2i} v_i) and \rho_{2i} \equiv \exp(\alpha_2 y_{1i} + \delta\, Education_i). The expected survival time is given by

E(T_i \mid w_i, B_i, y_{1i}, v_i) = \mu_i^{-1}\, \Gamma\!\left(1 + \frac{1}{\sigma}\right), \tag{5}

where \Gamma(\cdot) is the complete Gamma function (for more details see Lancaster, 1992, Appendix 1) and \sigma is the Weibull scale parameter. Consequently, a borrower's risk is necessarily inversely related to expected survival time. We consider an alternative measure of risk given by the inverse of E(T_i | w_i, v_i). We therefore replace Risk = \Phi(w_i \beta_2 + v_i) in Models I and II by Risk = [E(T_i | w_i, v_i)]^{-1} in the business training decision process. We present the results of the estimation for the bivariate mixed model in Table 4 in the next section.
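As a minimal illustration of this alternative risk measure (again our own sketch, not the authors' code), the expected survival time in equation (5) and its inverse can be computed as follows. In the full model the linear index also includes \eta B_i, \alpha_1 y_{1i} and the scedastic loading on v_i as in equation (4); the risk measure fed into the training equation conditions only on w_i and v_i.

    import numpy as np
    from math import gamma

    def expected_survival(index, sigma):
        # E(T) = mu^{-1} * Gamma(1 + 1/sigma), with mu = exp(index), as in equation (5)
        mu = np.exp(index)
        return gamma(1.0 + 1.0 / sigma) / mu

    def duration_risk(w, v, b2, sigma):
        # Duration-based risk entering the training equation: Risk_i = 1 / E(T_i | w_i, v_i)
        return 1.0 / expected_survival(w @ b2 + v, sigma)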
5 Econometric results
Bivariate probit model
The estimations of the bivariate probit models are presented in Table 3. According to column (1), the linear relationship between Risk and business training is not significantly different from zero. Furthermore, according to column (2), business training does not significantly impact repayment, since α1 is not significant either. Therefore neither H1 nor H2 is verified in Model I. In contrast, the estimation of Model II, where we account for a non-linear relationship between Risk and business training, yields considerably different results. Both λ1 and λ2 are significant at the 1% level with opposite signs in column (3). Therefore H1 is verified in Model II: the intrinsic risk of a borrower non-linearly impacts his/her likelihood of receiving business training. More specifically, the probability of receiving business training first increases with borrowers' risk and then, beyond a certain threshold, decreases. We can compute, using the estimates in column (3), the threshold beyond which the probability of receiving business training begins to decrease with risk. To do so we use the derivative

\frac{\partial \Pr(y_{1i} = 1 \mid x_i, Risk, v_i)}{\partial Risk} = (\lambda_1 + 2 \lambda_2 Risk)\, \phi(\cdot),

where \phi(\cdot) is a normal density, which is always positive. Hence the sign of the derivative is given by λ1 + 2λ2 Risk, which changes sign at the threshold Risk* = -λ1/(2λ2) ≈ 0.36: the derivative is positive for Risk smaller than 0.36 and negative otherwise. We estimated Risk = \Phi(w_i \beta_2 + v_i) for each borrower in our dataset. 81% of borrowers have an estimated risk lower than 0.36 and 19% have an estimated risk higher than this threshold. Similar to Model I, business training does not significantly impact loan repayment in Model II, since α1 is not significantly different from zero in column (4). Therefore, H2 is not verified in Model II either. We conclude that business training does not impact the likelihood of defaulting. The effects of other control variables in Model II are similar to those in Model I. In the business training equation we observe a highly significant positive relationship between business training and Other applications and Honor loan. Being sent by a mainstream bank, however, is negatively associated with the likelihood of receiving business training. Individuals sent by a mainstream bank have either been rejected by the mainstream bank (and are probably the riskiest) or have been granted a co-financing credit by the mainstream bank (and are probably the least risky). In both these situations, we expect that such individuals will be the least likely to be assigned to a training program, due to a potential behavioral effect on their self-confidence (for the riskiest individuals) or due to their expected good performance ruling out any need for business training (for the least risky individuals). ρ1 is not significant, suggesting that adding borrower's risk to the business training equation is sufficient to control for the interdependence of the two processes and potential endogeneity. Turning to the default equation, male clients are significantly more likely to default than female clients. A similar result is reported by D'Espallier et al. (2011) for MFIs in developing countries. Higher education (measured by the number of diplomas) significantly decreases a client's riskiness. Household income and expenses are respectively strong negative and positive determinants of the likelihood of defaulting. Borrowers with low personal investment and low assets are significantly riskier. Finally, the gross margin-to-sales ratio is associated with lower credit risk. Concerning heteroscedasticity, the indirect effect of business training is not significant in either model. In contrast, the indirect effect of education is significant at 1% in Model II, suggesting that a higher level of education significantly increases the variance of the unobserved individual heterogeneity term, v_i. In other words, there is more uncertainty about the risk of default for more educated borrowers.
Bivariate mixed model
The estimations of the bivariate mixed models are presented in Table 4. The equivalent specifications for Models I and II are given in columns (1)-(2) and columns (3)-(4) respectively. According to column (1), the linear relationship between Risk (now measured by the inverse of the expected survival time of the loan) 19 and business training is significant, but only at the 10% level.
Furthermore, according to column (2), business training does not significantly impact loan survival time, since α1 is not significant. Therefore H1 is verified in Model I: the intrinsic risk of a borrower increases his/her likelihood of receiving business training, although this relationship is only significant at the 10% level. Akin to the bivariate probit model, H2 is not verified in Model I. Importantly, Model II, where we account for a non-linear relationship between Risk and business training, yields considerably different results. Both λ1 and λ2 are significant, at the 1% and 5% levels respectively, with opposite signs. Crucially, the coefficient of business training (α1) becomes significant in column (4), meaning that business training increases loan survival time (i.e. reduces borrower's risk), thereby increasing the expected return on the loan for the MFI. This result suggests that business training actually does increase a business's chances of success. H2 is thus verified in the bivariate mixed Model II: business training positively impacts loan repayment when we control for the process of assignment to business training and its possible behavioral effects. The results of Model I and Model II highlight the need to control for the non-linear relationship between borrowers' risk and business training in order to capture the effect of business training. Our findings show that this relationship is complex, potentially due to behavioral reactions generated by it among training beneficiaries. We further analyze this possibility in our theoretical model (see Section 7). Our results also confirm the importance of the informational content of longitudinal data in the evaluation of business training impact. 20 Ignoring behavioral effects and the longitudinal aspect of defaulting loans appears to bias results on training efficiency, which may at least partly explain the mixed results in terms of business training efficiency reported in the existing literature. The coefficients of other controls are in line with the bivariate probit model, although a few differences are worth mentioning. According to column (4), being single increases the survival time of the loan. In contrast, businesses run by clients who are long-term unemployed or in the food and accommodation sector are significantly riskier. The Weibull parameter is significant and positive, suggesting that risk increases with time. To check the robustness of our results, we propose two alternative models accounting for the MFI's selection process during the loan approval stage, using the information on rejected applicants available in our dataset. First, in the next section we correct for selection bias [START_REF] Heckman | Sample selection bias as a specification error[END_REF] through trivariate models where we add to our baseline models an equation accounting for the MFI's binary decision to grant or reject a loan. Second, Appendix 9.3 contains the results of a nested logit model where we allow loan approval and training allocation decisions to take place concomitantly. 21
6 Robustness checks: Correcting for selection bias
Bivariate models are estimated only for granted loans, as an individual can only be assigned to a training program if he/she has actually been granted a microcredit. In this section we add to the previous bivariate models a third process, namely the loan approval decision, which allows us to correct for selection bias.
Adding the approval process will also reveal whether the MFI is choosing its clients optimally in terms of their expected performance. This additional equation, for the binary decision loan approval or not, y_{0i}, writes as follows:

y^*_{0i} = \beta_0 w_i + \eta_0 B_{0i} + \epsilon_{0i}, \qquad y_{0i} = \begin{cases} 1 & \text{if } y^*_{0i} > 0 \text{ (approval)} \\ 0 & \text{otherwise.} \end{cases} \tag{6}

20 The non-significance of business training in the bivariate probit model might be due to reduced variability in the risk variable, which is a dummy.
21 We thank an anonymous referee for pointing out a possible scenario where loan approval and training allocation decisions are not strictly sequential.

We use the same explanatory variables w_i as in the risk equation, as suggested by [START_REF] Roszbach | Bank lending policy, credit scoring, and the survival of loans[END_REF]. We moreover introduce into the approval equation business cycle variables (B_{0i}) that may impact the MFI's decision to grant the loan or not. B_{0i} corresponds to the rate of increase in business failures and new business start-ups in the sector of enterprise i at the time of loan approval, and one quarter and two quarters before loan approval. The business cycles operating before loan approval will enable the identification of the trivariate model. 22 In this model, we allow for correlation between the two decisions (approval and business training) and the risk equation by imposing a similar structure on error terms, having an equivalent error composition and the same distributional assumptions:

\epsilon_{0i} = \rho_0 v_i + \epsilon^0_{0i}.

The results for the trivariate probit and mixed models are presented in Tables 5 and 6 respectively. In column (1) we report the results for the selection equation, in column (2) we show the results for the business training allocation equation and in column (3) we present the results for the risk equation. We only present the specifications accounting for the non-linear relationship between business training and borrowers' risk (i.e. Model II). Controlling for selection bias does not alter our main results. According to column (2) (Tables 5 and 6), λ1 and λ2 are both significant with opposite signs, suggesting that the probability of business training assignment first increases with risk and then, beyond a certain threshold, decreases. Therefore, H1 is supported by the trivariate probit and mixed models, suggesting a robust non-linear relationship between business training allocation and borrowers' risk. Additionally, according to column (3) (Tables 5 and 6), α1 is only significant in Table 6, suggesting that business training decreases borrowers' risk only in the trivariate mixed model. Therefore, similar to our baseline results, H2 is only supported by the trivariate mixed model. We conclude that business training is indeed efficient in increasing loan survival time when the non-linear relationship between business training allocation and borrowers' risk is accounted for. As expected, the coefficients in the approval equation are generally of the opposite sign to those in the risk equation. However, only two variables significantly impact approval according to column (1) in Table 5. Long-term unemployed applicants and businesses in the food and accommodation sector are less likely to be accepted by the MFI. Three more variables are significant in column (1), Table 6. Larger household income increases the probability of loan approval, whereas higher household expenses and low personal investment decrease it.
However, other variables which significantly impact borrowers' risk are not significant in the approval equation, suggesting that the MFI is not perfectly optimizing its approval process with respect to clients' creditworthiness ([START_REF] Roszbach | Bank lending policy, credit scoring, and the survival of loans[END_REF] reaches similar conclusions using data on consumer loans from a Swedish bank). Similar to our baseline models, in the trivariate mixed model the indirect effect of business training is not significant and the indirect effect of education is significantly positive, suggesting that default risk uncertainty increases for better educated individuals. Finally, ρ0 is significant in the trivariate models, suggesting that the selection bias is indeed present and has to be taken into account. In this section, loan approval was treated as strictly sequential (and anterior) to business training allocation. In Appendix 9.3, we allow these two processes to be concomitant (the MFI chooses among rejecting a loan, accepting it without training, and accepting it with training) through a nested logit model. The results, presented in Table 7, are similar to those reported for the trivariate probit model. We find the same non-linear relationship between intrinsic risk and the likelihood of being assigned to business training, and a non-significant (but this time negative) relationship between business training and default. The non-significance of this last relationship may again be due to the low variability of our default variable (which is a dummy).
7 Business training allocation and reverse asymmetric information
In this section, we present a theoretical model aimed at rationalizing the non-linear effect of the borrower's intrinsic risk on his/her probability of being assigned to business training, highlighted in our empirical work. This model is based on the psychological or behavioral effect that business training can have on borrowers unaware of their own risk (or type). This mechanism, termed the "looking-glass self" effect by [START_REF] Cooley | Human Nature and the Social Order[END_REF], is likely to occur when the principal (here the MFI) has better information than the agent (here the borrower) on the agent's characteristics (the quality of his/her project); see for example Benabou and Tirole (2003b). The terms "reverse asymmetric information" or "informed principal models" then apply. This situation can be expected in the microcredit market, where MFIs generally finance first-time micro-entrepreneurs who need financial backing to start a business, and who usually lack the necessary experience. Thus, microfinance institutions (for example through scoring models and/or their past experience) may well be better informed than micro-entrepreneurs about the potential of the project. In this case, the contract offered by the MFI can provide borrowers with information about themselves, thereby impacting their beliefs and shaping their behavior. We model an MFI operating like most of the microcredit market, including our data-source MFI: no collateral is required and the same interest rate is applied to all borrowers. Assignment to business training is thus the only source of heterogeneity in the contracts. We show in this section that reverse asymmetric information can generate a non-linear relationship between borrowers' risk and assignment to business training, consistent with our empirical analysis. Consider an agent, a borrower, who has a project for which he/she needs financing.
We assume that borrowers have no collateral and no personal investment. They need to borrow from the bank the total funds for the project, which we normalize to 1. We consider, as in the empirical analysis, that the funding process is in two steps: (i) first, the MFI chooses to reject or approve a loan, and (ii) second, it makes the training allocation decision for approved projects. If undertaken, the project generates a return, ρ, in the case of success and 0 in the case of failure. The principal, the MFI, demands a return of R = 1 + r in the case of success with R < ρ , where r is the fixed interest rate. The MFI receives 0 in the case of failure. The probability of success (denoted p(θ, h, e)) depends on borrower type θ, borrower effort e and level of business training from the MFI h. We assume the probability of success to be increasing in these three terms. The parameter θ represents the intrinsic probability of success (or type) of the borrower, (i.e. p(θ; 0; 0) = θ), depending on borrower's and project's characteristics and excluding the effects of business training and effort (it therefore echoes the variable Risk in our empirical work). We assume that the efficiency of effort and business training are both decreasing with type. Assumption 1. The probability of success p(θ, h, e) is such that ∂ 2 p ∂θ∂e ≤ ∂ 2 p ∂θ∂h ≤ 0 This assumption is pretty standard and the second inequality corresponds to Assumption 3 in Benabou and Tirole (2003a). It moreover means that borrower's type has a stronger impact on the efficiency of effort than on the efficiency of training. Furthermore, effort is costly for the borrower and business training is costly for the MFI. The respective costs are denoted by ψ(θ, e) and ϕ(h). We hence assume that the (psychological) cost of effort is type-dependent and that it is decreasing with type. Assumption 2. The cost of effort is such that ψ (θ, 0) = 0 and is decreasing with type : ∂ψ ∂θ ≤ 0 Regarding the MFI objective, we assume that once a borrower is accepted, the MFI either maximizes profit or minimizes loss on this borrower. The end of the section discusses the overall objective of the MFI. We follow the standard approach in banking modeling by assuming that the MFI is risk neutral so that, once the project is accepted, the objective function of the MFI is given by : p(θ; h; e)R -ϕ(h) To simplify the model, we additionally assume that the borrower is risk neutral, so that the utility of the borrower is given by: p(θ; h; e)(ρ -R) -ψ(θ; e) We analyze two information structures. In the first, the information is perfect and symmetric: both the borrower and the MFI observe borrower's type. The MFI chooses h and the borrower chooses e simultaneously. We focus on cases where, under perfect information, the MFI provides a level of business training decreasing with type. 23 This means that, in the absence of an asymmetric information effect, the allocation of business training is like bad news for borrowers (since it reflects a low probability of success). In the second configuration, we assume reverse asymmetric information, that is, a situation where borrowers do not know their type, while the MFI does. As mentioned above, this informational setting is particularly relevant for the microcredit market, where inexperienced borrowers meet experienced MFIs. In this case, the level of business training chosen by the MFI (h) also conveys information about the borrowers' type and might influence their behavior. 
In other words, by observing h, borrowers form a belief about their type that leads them to some level of effort. When choosing h, the MFI internalizes this mechanism, which shapes its profit through borrowers' effort. We show that, unlike under symmetric information, there can be a nonmonotonic relationship between business training and borrower type in some Perfect Bayesian Equilibria. In other words, reverse asymmetric information could explain the pattern of business training allocation found in the empirical analysis. To build our theoretical argument, we present a simple discrete version of the model, with two levels of effort and business training (e ∈ {0, 1}, h ∈ {0, 1}) and three types of borrowers, namely weak-, mediumand strong-type borrowers: θ ∈ {W, M, S}. 24 We assume ϕ (0) = 0, ϕ (1) = φ and denote the efficiencies of training and effort ∆ h p (θ, e) ≡ p (θ, 1, e) -p (θ, 0, e) and ∆ e p (θ, h) = p (θ, h, 1) -p (θ, h, 0). From assumption 1, ∆ e p (θ, h) and ∆ h p (θ, e) are decreasing with θ. As explained above, our aim is to show that reverse asymmetric information can lead to the non-monotonic relationship found in our empirical analysis. We therefore focus on simple situations in which the relationship between training and type is monotonic under symmetric information. This would be the case in our simple discrete model under the following plausible assumptions regarding the MFI and borrowers' behavior when information is perfect. Assumption 3. • The MFI is not interested in training the strong-type borrowers: ∀e, ∆ h p (W, e) ≥ ∆ h p (M, e) ≥ φ R ≥ ∆ h p (S, e) • The cost of effort is such that, when informed about their type, only strong-type agents optimally exert effort: ∀h, ψ(M,1) ρ-R ≥ ∆ e p (M, h) ≥ ∆ e p (S, h) ≥ ψ(S,1) ρ-R and ψ(W,1) ρ-R ≥ ∆ e p (W, h) As mentioned previously, the funding process takes place in two stages: a selection stage where the MFI rejects or approves a project, followed by a business training allocation stage where the MFI decides whether of not to train approved borrowers. Backward induction leads us to first focus the second stage (i.e. training allocation). Business training allocation First, given the above assumptions, the following remark holds: Remark 1. Under perfect symmetric information, Assumption 3 leads to a situation where the MFI provides business training to the two weakest types, W and M (if approved), and does not provide business training to the strongest type, S. Borrowers of type S provide effort but the weakest types M and W do not. Thus, under perfect information, weak-type borrowers are pooled with medium-type borrowers. We now assume reverse asymmetric information. In this case, the appropriate equilibrium concept is "Perfect Bayesian Equilibrium" (PBE). We need to consider several cases where projects were approved during the selection stage. First, let us consider first that projects of all three types are approved. Under reverse asymmetric information, borrowers are not aware of their type. Only the MFI observes it. The MFI's action (assignment to business training or not) may therefore convey information to the borrower, who will form beliefs about his/her type from observing the MFI's decision on business training. We show that there exists a Perfect Bayesian Equilibrium in which assignment to business training is a non-monotonic function of borrower type, that is, in which the MFI only trains M-type borrowers. 
In this case, borrowers observing that they are not assigned to training infer that they are either weak (W) or strong (S) type. Let us denote by α the probability that a borrower aware of being S- or W-type is actually S-type ((1 - α) is then the probability that he/she is actually W-type). In other words, α represents the borrower's belief that he/she is strong-type when he/she observes that the MFI chooses not to train him/her. Correlatively, in the considered equilibrium, a borrower observing that the MFI has decided to train him/her is convinced that his/her type is M. This leads to the following proposition:

Proposition 1. Under reverse asymmetric information, if all projects are approved, there exists a PBE (denoted by E*) where the MFI provides business training only to M-type borrowers, S- and W-type borrowers exert effort and M-type borrowers do not, if:

α ∆ e p (S, 0) + (1 - α) ∆ e p (W, 0) ≥ α ψ (S, 1)/(ρ - R) + (1 - α) ψ (W, 1)/(ρ - R)    (7)

and

∆ h p (W, 0) - ∆ e p (W, 0) ≤ φ/R ≤ ∆ h p (M, 0) - ∆ e p (M, 0)    (8)

As ∆ h p (θ, 0) and ∆ e p (θ, 0) are decreasing with θ, condition (8) implies that ∆ e p (W, 0) - ∆ e p (M, 0) has to be large; that is, that effort has to be more efficient for W-type borrowers than for M-type ones. Under condition (8) the perfect information outcome is not a PBE. Indeed, such an equilibrium would imply that absence of training convinces the borrower that he/she is an S-type, and therefore leads him/her to exert effort. But as ∆ h p (W, 0) - ∆ e p (W, 0) ≤ φ/R, that would induce the principal not to train W-type borrowers. It can even be shown that under conditions (7) and (8), E* is the only semi-separating PBE (in pure strategies) when projects of all types are approved. 25 Indeed, under (8) we have p(M, 1, 1)R - φ ≥ p(M, 1, 0)R - φ ≥ p(M, 0, 1)R ≥ p(M, 0, 0)R, so that the MFI always trains M-type borrowers in any PBE. In a semi-separating equilibrium, M-types can hence be pooled either with W-types or with S-types. Pooling them with W-types corresponds to the perfect information outcome, which is not a PBE. Pooling M-types with S-types (and hence training both) induces W-types (not trained and aware of their type) to exert no effort. This is not a PBE, since the MFI obtains a higher return by training them (since p(W, 1, e)R - φ ≥ p(W, 0, 0)R). We thus assume in the following that when all the projects are approved, the MFI chooses E*, where W-types are pooled with S-types.

25 A pooling equilibrium in which all borrowers are assigned to business training and all provide effort can also exist. As is always the case with pooling PBE, it however requires stringent conditions on beliefs outside the equilibrium (that is, when borrowers are not trained). We therefore rule out this equilibrium and focus in the following on E*.

We now assume that W-type projects are rejected. Under condition (8), when only M and S-types are approved, the only equilibrium is the perfect information one (since φ/R ≤ ∆ h p (M, 0) - ∆ e p (M, 0), no pooling equilibrium can exist).

Proposition 2. If only M and S-type projects are approved, then in the second stage the MFI provides training only to M-type borrowers, S-types exert effort and M-types do not.

Finally, by assumption 3, if only S-type projects are approved, then the MFI does not provide training and all agents exert effort.

Selection stage

In the selection stage, the MFI bases its decision on whether or not to approve a project on the anticipated business training allocation. Thus, reverse asymmetric information can lead to an increase in approvals of W-type borrowers.
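Conditions (7) and (8), together with Assumption 3, are easy to check numerically once the model primitives are specified. Before turning to the selection-stage proposition below, the following minimal sketch does so for one set of made-up parameter values; all the numbers are purely illustrative and are not calibrated to the data.

```python
# Illustrative check of Assumption 3 and conditions (7)-(8) for the discrete
# model (types W, M, S; h, e in {0, 1}). All numbers are made up for the example.
ret, R, phi, alpha = 1.50, 1.04, 0.052, 0.5          # project return rho, repayment R, training cost, belief

dh   = {"W": 0.20,  "M": 0.12, "S": 0.040}           # training efficiency  Delta_h p(theta, 0)
de   = {"W": 0.25,  "M": 0.06, "S": 0.035}           # effort efficiency    Delta_e p(theta, 0)
psi1 = {"W": 0.118, "M": 0.06, "S": 0.002}           # effort cost          psi(theta, 1)

cost = {t: psi1[t] / (ret - R) for t in psi1}        # normalized effort cost psi(theta,1)/(rho-R)

# Assumption 3: the MFI would train W and M but not S under perfect information,
# and only S-types exert effort when they know their type.
a3 = (dh["W"] >= dh["M"] >= phi / R >= dh["S"]
      and cost["M"] >= de["M"] >= de["S"] >= cost["S"]
      and cost["W"] >= de["W"])

# Condition (7): untrained borrowers (believing they are S-type with prob. alpha) exert effort.
c7 = alpha * de["S"] + (1 - alpha) * de["W"] >= alpha * cost["S"] + (1 - alpha) * cost["W"]

# Condition (8): training only M-type borrowers is optimal for the MFI given those beliefs.
c8 = dh["W"] - de["W"] <= phi / R <= dh["M"] - de["M"]

print("Assumption 3 holds:", a3)
print("Condition (7) holds:", c7)
print("Condition (8) holds:", c8)
print("Semi-separating PBE E* exists:", a3 and c7 and c8)
```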
As stated in the following proposition, this holds both for MFIs seeking to maximize their profits during the selection stage (we term these MFIs "for-profit"), and for MFIs whose objective is to increase their outreach while remaining sustainable (we term these MFIs "non-profit"). Proposition 3. Under conditions (7) and ( 8), the MFI earns a greater expected profit under reverse asymmetric information. Reverse asymmetric information then increases the outreach for a non-profit MFI if p(W, 0, 1)R < 1 + φ (9) and for a for-profit MFI if p (W, 0, 1) R ≥ 1 (10) on top of (9). As shown above, the only difference between symmetric and reverse asymmetric information concerns Wtype borrowers, if approved. Under symmetric information they do not exert effort and receive business training; whereas, in E * they do not receive business training but exert effort. Thus, under condition (8), the MFI makes greater profit on W-type projects (and thus greater total profit) under reverse asymmetric information. MFIs using cross-subsidization (i.e. non-profit MFIs)26 can then finance more W-type projects under reverse asymmetric information, in cases where they have negative expected profit for W-type projects Furthermore, loan repayment potentially depends on business training, both directly and indirectly through borrowers' behavioral reactions to business training assignment. We attempt to isolate these two effects. To identify the direct effect of business training, we introduce into the risk equation an additional covariate, the Business training dummy, taking value one if a borrower receives business training and zero otherwise. To isolate behavioral effects, we also introduce into the risk equation a form of heteroscedasticity linked to individual unobserved heterogeneity and depending on, among others, business training. This approach allows us to estimate the variable Risk depending solely on individual, household, and business characteristics to proxy borrowers' intrinsic risk (i.e. the risk net of potential direct and indirect effects of business training and net of business cycle influence). We test the following hypotheses: H1: The intrinsic risk of a borrower impacts his/her probability of receiving business training. H2: Business training positively impacts loan repayment when we control for the pro-cess of assignment to business training and its possible behavioral effects. Furthermore, the heteroscedasticity of the model captures the idea that observing the same level of business training can trigger different behavioral reactions in two different borrowers. In other words, assignment to business training could introduce noise into a borrower's behavior, thereby engendering noise in his/her probability of defaulting, which could imply higher and non-constant variance. This can naturally be represented by a scedastic function attached to the unobservable individual heterogeneity. By introducing heteroscedasticity into the default equation, we isolate behavioral effects on the probability of defaulting. Thus, controlling for endogeneity and introducing heteroscedasticity help disentangle three different components in the risk equation: the direct effect of business training, the indirect effect of business training through borrower's behavior and the intrinsic risk of the borrower. We first study a linear relationship between business training and Risk in Model I. 
Then, we consider the simplest form of non-linearity by introducing Risk and Risk² into the business training equation in Model II. The bivariate probit model consists of two simultaneous equations: the first for the binary decision to provide business training or not, y_1i; and the second for the binary outcome defaulting or not, y_2i (Model I).18 In Model II, λ1 and λ2 are both significant with opposite signs in column (3). Therefore, H1 is again verified in Model II: the intrinsic risk of a borrower non-linearly impacts his/her likelihood of receiving business training. Similar to the bivariate probit model, the probability of receiving business training first increases with borrower's risk and then, beyond a certain threshold, decreases.

18 Columns (1) and (3) contain the estimates for equations (1) (Model I) and (3) (Model II), where we test for a linear and a quadratic relationship between Risk and business training allocation respectively. Columns (2) and (4) contain the estimates for equation (2) using Model I and Model II respectively.

Table 1: Descriptive Statistics
Variables Total Training No Training t-test a Defaulting Performing t-test
(1) (2) (3) (4) (5) (6) (7)
Defaulting (dummy) 0.22 0.19 0.25 -0.05
Business training (dummy) 0.55 0.49 0.57 -0.08
Individual Characteristics
Male (dummy) 0.61 0.62 0.60 0.02 0.75 0.58 0.17***
Education (no. of diplomas) 1.89 1.89 1.89 0.00 1.46 2.01 -0.55***
Single (dummy) 0.53 0.50 0.57 -0.07 0.63 0.51 0.13**
Unemployed more than 12 months (dummy) 0.33 0.37 0.28 0.08* 0.42 0.31 0.11*
Household Characteristics
Household income (kEUR) 1.49 1.61 1.33 0.29** 1.11 1.59 -0.49***
Household expenses (kEUR) 0.45 0.47 0.42 0.06 0.47 0.44 0.03
Business Characteristics
Low personal investment (dummy) 0.26 0.25 0.27 -0.02 0.38 0.23 0.15***
Assets (kEUR) 18.86 21.34 15.73 5.62** 12.19 20.74 -8.55***
Food and accommodation sector (dummy) 0.10 0.08 0.13 -0.04 0.09 0.11 -0.02
Gross margin(EUR)/Sales(EUR) 0.74 0.74 0.74 0.00 0.71 0.75 -0.03
Instruments for business training process
Other applications (dummy) 0.62 0.82 0.38 0.44***
Honor loan (dummy) 0.47 0.63 0.28 0.36***
Sent by a mainstream bank (dummy) 0.18 0.12 0.26 -0.14***
No. of observations 365 202 163 79 286
***p<0.01, **p<0.05, *p<0.1
a The t-test is a two-sample two-sided test for equal means.
Table 2 presents descriptive statistics on the survival time of each microcredit.
Table 2 : 2 Descriptive statistics for survival time (in days) Percentiles Table 3 : 3 Determinants of Business Training and Default Processes Model Bivariate probit (Model I) Bivariate probit (Model II) (1) (2) (3) (4) Dependent variable: Business training Defaulting Business training Defaulting Explanatory variables: Risk ( λ1 ) 0.36 (0.51) 5.97*** (2.28) Risk 2 ( λ2 ) -8.21*** (3.03) Other applications 1.07*** (0.19) 1.29*** (0.26) Honor loan 0.5*** (0.16) 0.64*** (0.20) Sent by a mainstream bank -0.55*** (0.19) -0.65*** (0.22) ρ1 0.03 (0.45) 0.24 (0.27) Business training (direct effect) ( α1 ) -0.11 (0.43) 0.15 (0.31) Male 0.7*** (0.26) 0.81*** (0.26) Education (direct effect) -0.25** (0.11) -0.28*** (0.09) Single 0.1 (0.25) 0.31 (0.24) Unemployed at least 12 months 0.36 (0.24) 0.16 (0.21) Household income (kEUR) -0.41** (0.16) -0.52*** (0.18) Household expenses (kEUR) 0.92*** (0.3) 1.20*** (0.36) Low personal investment 0.5** (0.23) 0.47** (0.23) Assets (kEUR) -0.02** (0.01) -0.02*** (0.01) Food and accommodation sector 0.06 (0.42) 0.32 (0.41) Gross margin(EUR)/Sales(EUR) -1.21** (0.56) -0.93* (0.51) Business training (indirect effect) -12.25 (892.58) -2.20 (1.80) Education (indirect effect) 0.07 (0.15) 0.28*** (0.02) Intercept -0.71*** (0.22) 0.09 (0.69) -1.09*** (0.32) -0.41 (0.57) Business cycles Yes Yes -2 Log Likelihood 644 639 Observations 340 340 Standard errors in parentheses. ***p<0.01, **p<0.05, *p<0.1 Table 4 : 4 Determinants of Business Training and Inverse of Survival Time Model Bivariate mixed (Model I) Bivariate mixed (Model II) (1) (2) (3) (4) Dependent variable: Business training Inverse of Business training Inverse of Survival Time Survival Time Explanatory variables: Risk ( λ1 ) 1.61* (0.97) 1.22*** (0.43) Risk 2 ( λ2 ) -0.22** (0.10) Other applications 1.13*** (0.18) 1.18*** (0.19) Honor loan 0.52*** (0.17) 0.56*** (0.18) Sent by a mainstream bank -0.57*** (0.2) -0.56*** (0.21) ρ1 -0.22 (0.32) 0.29* (0.16) Business training (direct effect) ( α1 ) -0.19 (0.4) -1.29*** (0.14) Male 0.64** (0.25) 0.63*** (0.11) Education (direct effect) -0.25** (0.11) -0.52*** (0.06) Single -0.07 (0.23) -0.39*** (0.08) Unemployed at least 12 months 0.73*** (0.23) 0.74*** (0.09) Household income (kEUR) -0.16 (0.13) -0.6*** (0.06) Household expenses (kEUR) 0.62** (0.26) 0.99*** (0.1) Low personal investment 0.64*** (0.21) 0.71*** (0.09) Assets (kEUR) -0.02** (0.01) -0.01** (0.004) Food and accommodation sector -0.03 (0.38) 0.27* (0.15) Gross margin(EUR)/Sales(EUR) -1.22** (0.51) -1.43*** (0.19) Business training (indirect effect) -0.38 (0.37) -0.08 (0.07) Education (indirect effect) 0.01 (0.09) 0.15*** (0.03) Weibull parameter (σ) 1.63*** (0.33) 3.91*** (0.47) Intercept -0.87*** (0.2) -6.53*** (0.54) -0.94*** (0.18) -5.13*** (0.24) Business cycles Yes Yes -2 Log Likelihood 1616 1603 Observations 340 340 Standard errors in parentheses. 
***p<0.01, **p<0.05, *p<0.1 Table 5 : 5 Determinants of Approval, Business Training and Default Processes Model Trivariate probit (1) (2) (3) Dependent variable: Approval Business training Defaulting Explanatory variables: Risk ( λ1 ) 2.15* (1.13) Risk 2 ( λ2 ) -2.83*** (1.03) Other applications 1.09*** (0.17) Honor loan 0.53*** (0.17) Sent by a mainstream bank -0.57*** (0.19) ρ1 -0.13 (0.31) Business training (direct effect) ( α1 ) 0.09 (0.49) Male -0.29 (0.28) 1.03*** (0.39) Education (direct effect) -0.03 (0.09) -0.19 (0.14) Single 0.02 (0.26) 0.18 (0.28) Unemployed at least 12 months -0.72** (0.35) 0.59** (0.26) Household income (kEUR) 0.22 (0.14) -0.51** (0.2) Household expenses (kEUR) -0.43 (0.29) 1.16** (0.45) Low personal investment -0.46 (0.3) 0.65** (0.31) Assets (kEUR) 0.01 (0.01) -0.03*** (0.01) Food and accommodation sector -0.96** (0.4) 0.78 (0.67) Gross margin(EUR)/Sales(EUR) -0.78 (0.59) -0.99 (0.63) ρ0 -2.14** (1.03) Business training (indirect effect) 0.35 (0.33) Education (indirect effect) 0.08 (0.27) Intercept 1.26** (0.64) -0.89*** (0.28) -0.89*** (0.28) Business cycles Yes Yes -2 Log Likelihood 1537 Observations 662 Standard errors in parentheses. ***p<0.01, **p<0.05, *p<0.1 Table 6 : 6 Determinants of Approval, Business Training and Inverse of Survival Time Model In the microfinance context where loans are not collateralized and interest rates are fixed, MFIs generally tailor loan size to the applicant's expected creditworthiness[START_REF] Agier | Microfinance and gender: Is there a glass ceiling on loan size?[END_REF]. However, the MFI in our study is constrained by a French regulatory loan ceiling[START_REF] Cozarenco | Microcredit in industrialized countries: Unexpected consequences of regulatory loan ceilings[END_REF], reducing the opportunity for loan size tailoring. Therefore, in our case, assignment to training is the main source of contract heterogeneity. See McKenzie and Woodruff (2013) for an extended review of existing studies of impacts of business training in developing countries. Other situations where help, in general, can be detrimental to the agent are presented by[START_REF] Gilbert | Overhelping[END_REF]. Using different experiments, the authors show that help can be used to undermine the beliefs of the observers, who might attribute a successful performance to help rather than to the performer's abilities. According to Botti et al. (2016), 87% of Western European MFIs providing non-financial services externalized them to third-parties. As a consequence, the MFI does not provide dynamic incentives through progressive lending. We cannot completely rule out the possibility that loan granting and training assignment processes are not strictly sequential or intertwined. We account for this eventuality in a robustness check using a nested logit model. We do not study consumer loans, in contrast to[START_REF] Roszbach | Bank lending policy, credit scoring, and the survival of loans[END_REF]. The interest rate was fixed at 4% per year at the beginning of the period and reached 4.5% at the end of the period of analysis. The interest rate is fixed and hence does not depend on borrower characteristics. This percentage might appear particularly high in the microfinance context. Indeed, D'Espallier et al. (2011) report 6% of the total loan portfolio as more than 30 days overdue, and only 1% of loans as written-off in a study of 350 MFIs in 70 countries. 
However, for MFIs in Western Europe the percentage of the total loan portfolio overdue more than 30 days is 13.4% and the write-off ratio is 5.6% according to[START_REF] Botti | Microfinance in Europe: A survey of EMN-MFC members[END_REF], comparable with the figures reported in our study. By intrinsic risk we mean the probability of defaulting "cleaned" of the direct effect of business training, indirect behavioral effects and business cycle effects. In this paper we are interested in the signs of the loadings and not the sizes of marginal effects. Hence all results presented are estimated coefficients rather than marginal effects. We multiply the Risk variable by 100 to scale down the estimated coefficients and render them comparable to other loadings. In the risk equation, business cycles are introduced at the beginning of the loan (and one and two quarters later), whereas in the approval equation, business cycles are introduced at approval, which does not necessarily coincide with the beginning of the loan. There is generally no overlap between the business cycle variables in the approval and risk equations. Our aim is here to show that under plausible assumptions, reverse asymmetric information can create a non-linear relationship between business training allocation and borrowers' type. A more general continuous model can be found in the working paper version of the paper[START_REF] Bourlès | Business training allocation and credit scoring: Theory and evidence from microcredit in France[END_REF] Cross-subsidization corresponds to situations where the MFI uses the profits it makes on some borrowers to sustain lending to other borrowers on which it earns negative (expected) profits[START_REF] Armendariz | On mission drift in microfinance institutions[END_REF]. under symmetric information (i.e. under equation (9)). This also applies to for-profit MFIs, provided they also make positive expected profits for W-type borrowers under reverse asymmetric information (condition (10)). Conclusion This paper analyzes the effect of business training on microcredit repayment. The originality of our approach with respect to the existing literature is that we take into account the process of allocation to business training and its possible behavioral consequences for microborrowers. We first reveal empirically, using bivariate probit and mixed models, that the business training allocation process is complex and non-linear in its relationship to borrowers' risk. More particularly, we show that the probability of being assigned to business training first increases with borrowers' intrinsic risk and then, beyond a certain threshold, decreases. This relationship is found to be robust to different measures of risk (probability of defaulting or the inverse of loan survival time). Controlling for the business training allocation process and this non-linear relationship, we show that business training is efficient since it increases the survival time of loans (although direct effect on probability of default is not significant). We moreover show that these two results (the non-linearity and the beneficial effect of training on loan survival time) are robust to correction for the MFI's selection bias, using data on rejected applicants. Finally, we propose a novel theoretical explanation for the non-linear effect of intrinsic risk on business training allocation, through a reverse asymmetric information model. 
We show that an MFI can use its superior information on borrowers' risk to increase effort by microentrepreneurs. This enables the MFI to extend its outreach to riskier borrowers, which is the main objective of MFIs striving to alleviate financial exclusion. One of the weaknesses of our dataset is our lack of access to the ex-ante evaluation of borrowers' risk by the MFI. We therefore have to estimate it using a probit or a survival time model. Access to such information would ease the identification and the interpretation of our results. In future work it would be worthwhile to test our last theoretical result revealing a beneficial effect of reverse asymmetric information on outreach to riskier borrowers. One way to test this effect would be to consider borrowers heterogeneous with respect to expertise in their project area and their ability to succeed. Unfortunately, the current dataset does not contain enough observations of borrowers experienced in business creation and management. More generally, our work opens the way to further exploring how reverse asymmetric information could shape microcredit markets. Beyond training allocation, one fruitful avenue for future research would consist in analyzing other ways that MFIs could use their superior information strategically to mitigate the moral hazard problems plaguing microcredit markets in the absence of collateral. It might, for example, be interesting to analyze how superior information can modify the dynamics of microcredit contracts in settings where borrowers can apply several times for a microcredit within the same MFI (progressive lending) or where borrowers can apply for a microcredit from several MFIs (competitive markets).

Appendix

9.1 Bivariate probit and bivariate mixed models: The likelihood functions

In the first model, defined by the simultaneous probit equations (2) and (3), the individual contribution to the likelihood function given the common factor v_i can be written as follows: In the bivariate mixed model, the loan survival time is used and it follows the Weibull distribution given by (4). Hence, the individual contribution to the likelihood function conditional on v_i can be written as follows: Hence, in the two models, we need to integrate L_i with respect to the density function of v_i. By using the adaptive Gaussian quadrature integral approximation, we maximize the log of the likelihood function, which is only defined for individuals with granted loans:

9.2 Trivariate probit and trivariate mixed models: The likelihood functions

We extend the previous models by adding a third process, the loan approval decision defined by the probit equation (6). For each model, the individual contribution to the likelihood function given the common factor v_i can be written, respectively, as follows: and Hence, in the two models, we need to integrate L_i with respect to the density function of v_i. By using the adaptive Gaussian quadrature integral approximation, we maximize the log of the likelihood function, which is now defined for the entire sample including the rejected applicants:

9.3 Nested logit

In this appendix we allow for the loan approval process and the business training allocation to occur concomitantly using a two-level nested logit model. In this setting, the MFI chooses for each applicant among three alternative decisions: rejecting the loan (y = 0), accepting it without business training (y = 10) or accepting it with business training (y = 11).
This set of alternative decisions can then be partitioned into subsets (or nests), forming a hierarchical structure of decisions. The MFI's decision can indeed be modeled at two levels: first reject or accept the loan (first level), and, conditionally on approval, provide or not business training (second level). This nested structure allows accounting for the potential similarities between the last two alternatives. The probability of each outcome can be written using standard logit models (for the details and the foundations of the nested logit models, see [START_REF] Train | Discrete Choice Methods with Simulation[END_REF]). The probability of these three choices can then be written as follows (we use the same set of covariates as in the previous models and denote by Λ(•) the logistic cumulative distribution function, Λ(u) = (1 + e^(-u))^(-1)): P(y_i = 10) = P(y_0i = 1, y_1i = 0) = P(y_0i = 1)P(y_1i = 0|y_0i = 1), where the inclusive value connects the two decision levels. The scale parameter φ can be interpreted as a measure of dissimilarity between the last two alternatives. Borrowers' intrinsic risk is proxied by Risk = Φ(w_i β_2 + ν_i). Hence, the individual contribution to the likelihood function conditional on ν_i can be written in the same manner as in appendix 9.2. The results are presented in Table 7.
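To fix ideas, a minimal sketch of the two-level nested logit choice probabilities described above is given below. The utilities are arbitrary placeholders; in the estimated model they would be linear indices in the covariates, the business cycle variables and Risk.

```python
import numpy as np

def nested_logit_probs(v_reject, v_accept_no_train, v_accept_train, phi):
    """Two-level nested logit: {reject} vs the nest {accept w/o training, accept with training}.

    v_* are the deterministic utilities of the three alternatives and phi is the
    dissimilarity (scale) parameter of the acceptance nest.
    """
    # Inclusive value of the acceptance nest, connecting the two decision levels.
    iv = np.log(np.exp(v_accept_no_train / phi) + np.exp(v_accept_train / phi))
    # First level: reject vs accept.
    p_accept = np.exp(phi * iv) / (np.exp(v_reject) + np.exp(phi * iv))
    p_reject = 1.0 - p_accept
    # Second level: training decision, conditional on acceptance.
    p_train = np.exp(v_accept_train / phi) / (
        np.exp(v_accept_no_train / phi) + np.exp(v_accept_train / phi))
    return p_reject, p_accept * (1 - p_train), p_accept * p_train

# Made-up utilities for one applicant; phi -> 1 collapses to a standard multinomial logit.
print(nested_logit_probs(v_reject=0.0, v_accept_no_train=0.3, v_accept_train=0.5, phi=0.7))
```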
Nicolas Brisebarre George Constantinides email: [email protected] Miloš Ercegovac Silviu-Ioan Filip Matei Istoan email: [email protected] Jean-Michel Muller A High Throughput Polynomial and Rational Function Approximations Evaluator We present an automatic method for the evaluation of functions via polynomial or rational approximations and its hardware implementation, on FPGAs. These approximations are evaluated using Ercegovac's iterative E-method adapted for FPGA implementation. The polynomial and rational function coefficients are optimized such that they satisfy the constraints of the E-method. We present several examples of practical interest; in each case a resource-efficient approximation is proposed and comparisons are made with alternative approaches. Introduction We aim at designing a system able to approximate (in software) and then evaluate (in hardware) any regularenough function. More precisely, we try to minimize the sup norm of the difference between the function and the approximation in a given interval. For particular functions, ad hoc solutions such as CORDIC [START_REF] Volder | The CORDIC computing technique[END_REF] or some specific tabulate-and-compute algorithms [START_REF] Wong | Fast hardware-based algorithms for elementary function computations using rectangular multipliers[END_REF] can be used. For low precision cases, table-based methods [START_REF] Sarma | Faithful bipartite ROM reciprocal tables[END_REF][START_REF] Schulte | Approximating elementary functions with symmetric bipartite tables[END_REF][START_REF] De Dinechin | Multipartite table methods[END_REF] methods are of interest. However, in the general case, piecewise approximations by polynomial or rational functions are the only reasonable solution. From a theoretical point of view, rational functions are very attractive, mainly because they can reproduce function behaviors (such as asymptotes, finite limits at ±∞) that polynomials do not satisfy. However, for software implementation, polynomials are frequently preferred to rational functions, because the latency of division is larger than the latency of multiplication. We aim at checking if rational approximations are of interest in hardware implementations. To help in the comparison of polynomial and rational approximations in hardware we use an algorithm, due to Ercegovac [START_REF] Ercegovac | A general method for evaluation of functions and computation in a digital computer[END_REF][START_REF]A general hardware-oriented method for evaluation of functions and computations in a digital computer[END_REF], called the E-method, that makes it possible to evaluate a degree-𝑛 polynomial, or a rational function of degree-𝑛 numerator and denominator at a similar cost without requiring division. The E-method solves diagonally-dominant linear systems using a left-to-right digit-by-digit approach and has a simple and regular hardware implementation. It maps the problem of evaluating a polynomial or rational function into a linear system. The linear system corresponding to a given function does not necessarily satisfy the conditions of diagonal dominance. For polynomials, changes of variables allow one to satisfy the conditions. This is not the case for rational functions. There is however a family of rational functions, called E-fractions, that can be evaluated with the E-method in time proportional to the desired precision. One of our aims is, given a function, to decide whether it is better to approximate it by a polynomial or by an E-fraction. 
Furthermore, we want to design approximations whose coefficients satisfy some constraints (such as being exactly representable in a given format). We introduce algorithmic improvements with respect to [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF] for computing E-fractions. We present a circuit generator for the E-method and compare its implementation on an FPGA with FloPoCo polynomial designs [START_REF] De Dinechin | On fixed-point hardware polynomials[END_REF] for several examples of practical interest. Since FloPoCo designs are pipelined (unrolled), we focus on an unrolled design of the E-method.

An Overview of the E-method

The E-method evaluates a polynomial P_μ(x) or a rational function R_{μ,ν}(x) by mapping it into a linear system. The system is solved using a left-to-right digit-by-digit approach, in a radix r representation system, on a regular hardware. For a result of m digits, in the range (-1, 1), the computation takes m iterations. The first component of the solution vector corresponds to the value of P_μ(x) or R_{μ,ν}(x). Let

R_{μ,ν}(x) = P_μ(x)/Q_ν(x) = (p_μ x^μ + p_{μ-1} x^{μ-1} + ⋯ + p_0) / (q_ν x^ν + q_{ν-1} x^{ν-1} + ⋯ + q_1 x + 1)

where the p_i's and q_i's are real numbers. Let n = max{μ, ν}, p_j = 0 for μ + 1 ≤ j ≤ n, and q_j = 0 for ν + 1 ≤ j ≤ n. According to the E-method, R_{μ,ν}(x) is mapped to a linear system L : A × y = b:

⎡ 1        -x                          ⎤ ⎡ y_0     ⎤   ⎡ p_0     ⎤
⎢ q_1       1    -x                    ⎥ ⎢ y_1     ⎥   ⎢ p_1     ⎥
⎢ q_2       0     1    -x              ⎥ ⎢ y_2     ⎥   ⎢ p_2     ⎥
⎢  ⋮                 ⋱     ⋱           ⎥ ⎢  ⋮      ⎥ = ⎢  ⋮      ⎥    (1)
⎢ q_{n-1}   0     ⋯     0    1    -x   ⎥ ⎢ y_{n-1} ⎥   ⎢ p_{n-1} ⎥
⎣ q_n       0     ⋯          0     1   ⎦ ⎣ y_n     ⎦   ⎣ p_n     ⎦

so that y_0 = R_{μ,ν}(x). Likewise, y_0 = P_μ(x) when all q_i = 0. The components of the solution vector y = [y_0, y_1, …, y_n]^t are computed, digit-by-digit, the most-significant digit first, by means of the following vector iteration:

w^(j) = r × [ w^(j-1) - A d^(j-1) ],    (2)

for j = 1, …, m, where m is the desired precision of the result. The term w^(j) is the vector residual in iteration j, with w^(0) = [p_0, p_1, …, p_n]^t. The solution y is produced as a sequence of digit vectors: d^(j-1) = [d_0^(j-1), …, d_n^(j-1)]^t is the digit vector obtained in iteration j-1 and used in iteration j. After m iterations, y_k = Σ_{j=1}^{m} d_k^(j) r^{-j}. The digits of the solution components y_0, y_1, …, y_n are computed using very simple scalar recurrences. Note that all multiplications in these recurrences use m × 1 multipliers (a full-width operand by a single radix-r digit) and that the division required by the rational function is not explicitly performed. For 0 < i < n,

w_i^(j) = r × [ w_i^(j-1) - q_i d_0^(j-1) - d_i^(j-1) + d_{i+1}^(j-1) x ],    (3)

w_0^(j) = r × [ w_0^(j-1) - d_0^(j-1) + d_1^(j-1) x ]    (4)

and

w_n^(j) = r × [ w_n^(j-1) - d_n^(j-1) - q_n d_0^(j-1) ].    (5)

Initially, d^(0) = 0. The radix-r digits d_i^(j) are in the redundant signed digit-set D_ρ = {-ρ, …, -1, 0, 1, …, ρ} with r/2 ≤ ρ ≤ r - 1. If ρ = r/2, D_ρ is called minimally redundant, and if ρ = r - 1, it is maximally redundant. The choice of redundancy is determined by design considerations. The radix of computation is r = 2^k, so that internally radix-2 arithmetic is used. The residuals, in general, are in a redundant form to reduce the iteration time. Since the target is an FPGA technology which provides fast carry chains, we have non-redundant residuals.
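A quick way to see recurrences (2)-(5) at work is the small software model below. It is only a sketch: digit selection is simplified to a clamped round-to-nearest of the exact residual (the hardware selection function and the convergence bounds are described next), and the coefficients and evaluation point are illustrative values chosen to respect those bounds for r = 2.

```python
from math import floor

def e_method(p, q, x, r=2, m=32):
    """Minimal software model of the E-method recurrences (3)-(5).

    p : [p_0, ..., p_n]  numerator coefficients (the b vector)
    q : [q_1, ..., q_n]  denominator coefficients (q_0 = 1 is implicit)
    Returns an approximation of (sum p_k x^k) / (1 + sum q_k x^k), digit by digit,
    without performing any division.
    """
    n = len(p) - 1
    rho = r - 1                                   # maximally redundant digit set
    w = list(p)                                   # w^(0) = [p_0, ..., p_n]
    d = [0] * (n + 1)                             # d^(0) = 0
    y0 = 0.0
    for j in range(1, m + 1):
        w_new = [0.0] * (n + 1)
        for i in range(n + 1):
            t = w[i] - d[i]
            if i > 0:
                t -= q[i - 1] * d[0]              # q_i * d_0^(j-1) term
            if i < n:
                t += d[i + 1] * x                 # d_{i+1}^(j-1) * x term
            w_new[i] = r * t                      # equations (3), (4), (5)
        w = w_new
        d = [max(-rho, min(rho, floor(wi + 0.5))) for wi in w]   # simplified digit selection
        y0 += d[0] * r ** (-j)                    # accumulate the digits of y_0
    return y0

# Coefficients chosen to respect the E-method bounds for r = 2 (Delta = 1/2):
# |p_i| <= 0.75 and |x| + |q_i| <= 0.125.
p, q, x = [0.3, 0.2], [0.0625], 0.05
print(e_method(p, q, x, r=2, m=40))
print((p[0] + p[1] * x) / (1 + q[0] * x))         # reference value
```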
The digits d_i^(j) are obtained by a selection function that reduces the residual to a single signed digit, following [START_REF]A general hardware-oriented method for evaluation of functions and computations in a digital computer[END_REF][START_REF] Ercegovac | Digital Arithmetic[END_REF]:

d_i^(j) = S(w_i^(j)) = { sign(w_i^(j)) × ⌊ |w_i^(j)| + 1/2 ⌋,   if |w_i^(j)| ≤ ρ,
                       { sign(w_i^(j)) × ⌊ |w_i^(j)| ⌋,          otherwise.

The selection is performed using a low-precision estimate ŵ_i^(j) of w_i^(j), obtained by truncating w_i^(j) to one fractional bit. Since the matrices considered here have 1s on the diagonal, a necessary condition for convergence is Σ_{j≠i} |a_{i,j}| < 1. Specifically,

∀i, |p_i| ≤ ξ,    ∀i, |x| + |q_i| ≤ α,    |w_i^(j) - ŵ_i^(j)| ≤ ∆/2,    (6)

where the bounds ξ, α, and ∆ satisfy [START_REF]A general hardware-oriented method for evaluation of functions and computations in a digital computer[END_REF]:

ξ = (1 + ∆)/2,    0 < ∆ < 1,    α ≤ (1 - ∆)/(2r)    (7)

for the maximally redundant digit sets used here. While the constraints (7) may seem restrictive, for polynomials, scaling techniques make it possible to satisfy them. However, this is not the case for all rational functions. To remove this limitation the authors of [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF] have suggested the derivation of rational functions, called simple E-fractions, which are products of a power of 2 by a fraction that satisfies [START_REF]A general hardware-oriented method for evaluation of functions and computations in a digital computer[END_REF]. In this work we make further improvements to the rational functions based on E-fractions.

Outline of the paper

In Section 2, we discuss the effective generation of simple E-fractions, whose coefficients are exactly representable in a given format. Section 3 presents a hardware implementation of the E-method that targets FPGAs. In Section 4 we present and discuss some examples in various situations. We also present a comparison with FloPoCo implementations.

Effective computation of simple E-fractions

We show how to compute a simple E-fraction with fixed-point or floating-point coefficients. A first step (see Section 2.1) yields a simple E-fraction approximation with real coefficients to a function f. In [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF], linear programming (LP) is used. Here, we use faster tools from approximation theory. This allows us to quickly check how far the approximation error of this E-fraction is from the optimal error of the minimax approximation (obtained using the Remez algorithm [START_REF] Cheney | Introduction to Approximation Theory[END_REF][START_REF] Powell | Approximation theory and methods[END_REF]), and how far it is from the error that an E-polynomial, with the same implementation cost, can yield. If this comparison suggests that it is more advantageous to work with an E-fraction, we use the Euclidean lattice basis reduction approach from [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF] for computing E-fractions with machine-number coefficients. We introduce in Section 2.2.2 a trick that improves its output.

Real approximation step

Let f be a continuous function defined on [a, b]. Let μ, ν ∈ N be given and let R_{μ,ν}(x) = {P/Q : P = Σ_{k=0}^{μ} p_k x^k, Q = Σ_{k=0}^{ν} q_k x^k, p_0, …, p_μ, q_0, …, q_ν ∈ R}.
The aim is to compute a good rational fraction approximant 𝑅 ∈ R 𝜇,𝜈 (𝑥), with respect to the supremum norm defined by ‖𝑔‖ = sup 𝑥∈[𝑎, 𝑏] |𝑔(𝑥)|, to 𝑓 such that the real coefficients of 𝑅 (or 𝑅 divided by some fixed power of 2) satisfy the constraints imposed by the E-method. As done in [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF], we can first apply the rational version of the Remez exchange algorithm [11, p. 173] to get 𝑅 ⋆ , the best possible rational approximant to 𝑓 among the elements of R 𝜇,𝜈 (𝑥). This algorithm can fail if 𝑅 ⋆ is degenerate or the choice of starting nodes is not good enough. To bypass these issues, we develop the following process. It can be viewed as a Remez-like method of the first type, following ideas described in [11, p. 96-97] and [START_REF] Reemtsen | Modifications of the first Remez algorithm[END_REF]. It directly computes best real coefficient E-fractions with magnitude constraints on the denominator coefficients. If we remove these constraints, it will compute the minimax rational approximation, even when the Remez exchange algorithm fails. We first show how to solve the problem over 𝑋, a finite discretization of [𝑎, 𝑏]. We apply a modified version (with denominator coefficient magnitude constraints) of the differential correction (DC) algorithm introduced in [START_REF] Cheney | Two new algorithms for rational approximation[END_REF]. It is given by Algorithm 1. System ( 8) is an LP problem and can be solved in practice very efficiently using a simplex-based LP solver. Convergence of this EDiffCorr procedure can be shown using an identical argument to the convergence proofs of the original DC algorithm [START_REF] Dua | Further remarks on the differential correction algorithm[END_REF][START_REF] Barrodale | The differential correction algorithm for rational ℓ∞-approximation[END_REF] (see Appendix A). 𝛿 ← max 𝑥∈𝑋 |𝑓 (𝑥) -𝑅(𝑥)| 4: find 𝑅 new = 𝑃 new /𝑄 new = ∑︀ 𝜇 𝑘=0 𝑝 ′ 𝑘 𝑥 𝑘 1 + ∑︀ 𝜈 𝑘=1 𝑞 ′ 𝑘 𝑥 𝑘 such that the expression max 𝑥∈𝑋 {︂ |𝑓 (𝑥)𝑄 new (𝑥) -𝑃 new (𝑥)| -𝛿𝑄 new (𝑥) 𝑄(𝑥) }︂ (8) subject to max 1 𝑘 𝜈 |𝑞 ′ 𝑘 | 𝑑, is minimized 5: 𝛿 new ← max 𝑥∈𝑋 |𝑓 (𝑥) -𝑅 new (𝑥)| 6: 𝑅 ← 𝑅 new 7: until |𝛿 -𝛿 new | < 𝜀 Algorithm 2 E-fraction Remez algorithm Input: 𝑓 ∈ 𝒞([𝑎, 𝑏]), 𝜇, 𝜈 ∈ N, finite set 𝑋 ⊆ [𝑎, 𝑏] with |𝑋| > 𝜇 + 𝜈, threshold 𝜀 > 0, coefficient magnitude bound 𝑑 > 0 Output: approximation 𝑅 ⋆ (𝑥) = ∑︀ 𝜇 𝑘=0 𝑝 ⋆ 𝑘 𝑥 𝑘 1 + ∑︀ 𝜈 𝑘=1 𝑞 ⋆ 𝑘 𝑥 𝑘 of 𝑓 over [𝑎, 𝑏] s.t. max 1 𝑘 𝜈 |𝑞 ⋆ 𝑘 | 𝑑 // Compute best E-fraction approximation over 𝑋 using a // modified version of the differential correction algorithm 1: 𝑅 ⋆ ← EDiffCorr(𝑓, 𝜇, 𝜈, 𝑋, 𝜀, 𝑑) 2: 𝛿 ⋆ ← max 𝑥∈𝑋 |𝑓 (𝑥) -𝑅 ⋆ (𝑥)| 3: ∆ ⋆ ← max 𝑥∈[𝑎,𝑏] |𝑓 (𝑥) -𝑅 ⋆ (𝑥)| 4: while ∆ ⋆ -𝛿 ⋆ > 𝜀 do 5: 𝑥 new ← argmax 𝑥∈[𝑎,𝑏] |𝑓 (𝑥) -𝑅 ⋆ (𝑥)| 6: 𝑋 ← 𝑋 ∪ {𝑥 new } 7: 𝑅 ⋆ ← EDiffCorr(𝑓, 𝜇, 𝜈, 𝑋, 𝜀, 𝑑) 8: 𝛿 ⋆ ← max 𝑥∈𝑋 |𝑓 (𝑥) -𝑅 ⋆ (𝑥)| 9: ∆ ⋆ ← max 𝑥∈[𝑎,𝑏] |𝑓 (𝑥) -𝑅 ⋆ (𝑥)| 10: end while To address the problem over [𝑎, 𝑏], Algorithm 2 solves a series of best E-fraction approximation problems on a discrete subset 𝑋 of [𝑎, 𝑏], where 𝑋 increases at each iteration by adding a point where the current residual term achieves its global maximum. Our current experiments suggest that the speed of convergence for Algorithm 2 is linear. We can potentially decrease the number of iterations by adding to 𝑋 more local extrema of the residual term at each iteration. 
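The optimization in step 4 of Algorithm 1 is a linear program in the unknown coefficients, since the previous denominator Q(x) is a known positive quantity at each grid point. The sketch below shows that single differential-correction step with SciPy's LP solver; the function, degrees, grid, starting point and magnitude bound d are placeholder choices, and the surrounding iteration (and the exchange loop of Algorithm 2) is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def e_diffcorr_step(f, X, mu, nu, Q_prev, delta, d):
    """One differential-correction update (a sketch of step 4 of Algorithm 1).

    Unknowns: p'_0..p'_mu, q'_1..q'_nu and an epigraph variable t, minimizing
    max_x [ |f(x) Q_new(x) - P_new(x)| - delta Q_new(x) ] / Q_prev(x)
    subject to |q'_k| <= d (the denominator magnitude bound of the E-method).
    """
    fx, Qx = f(X), Q_prev(X)
    nvar = (mu + 1) + nu + 1                      # p coefficients, q coefficients, t
    A_ub, b_ub = [], []
    for xi, fi, qi in zip(X, fx, Qx):
        xp = xi ** np.arange(mu + 1)              # monomial basis for P_new
        xq = xi ** np.arange(1, nu + 1)           # monomial basis for Q_new - 1
        # (f - delta) * Q_new - P_new <= t * Q_prev
        A_ub.append(np.concatenate([-xp, (fi - delta) * xq, [-qi]]))
        b_ub.append(-(fi - delta))
        # P_new - (f + delta) * Q_new <= t * Q_prev
        A_ub.append(np.concatenate([xp, -(fi + delta) * xq, [-qi]]))
        b_ub.append(fi + delta)
    bounds = [(None, None)] * (mu + 1) + [(-d, d)] * nu + [(None, None)]
    res = linprog(c=[0.0] * (nvar - 1) + [1.0], A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), bounds=bounds, method="highs")
    return res.x[:mu + 1], res.x[mu + 1:mu + 1 + nu]

# Placeholder example: one step towards a type-(2, 2) E-fraction for exp on [0, 1/8],
# starting from the trivial iterate R = 0 (so delta = max |f|), with q-bound d = 0.05.
X = np.linspace(0.0, 0.125, 200)
p_new, q_new = e_diffcorr_step(np.exp, X, mu=2, nu=2,
                               Q_prev=lambda x: np.ones_like(x),
                               delta=np.max(np.abs(np.exp(X))), d=0.05)
print(p_new, q_new)
```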
Other than its speed compared to the LP approach from [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF], Algorithm 2 will generally converge to the best E-fraction approximation with real coefficients over [𝑎, 𝑏], and not on a discretization of [𝑎, 𝑏]. Once 𝑅 ⋆ is computed, we determine the least integer 𝑠 such that the coefficients of the numerator of 𝑅 ⋆ divided by 2 𝑠 fulfill the first condition of [START_REF] Ercegovac | A general method for evaluation of functions and computation in a digital computer[END_REF]. It gives us a decomposition 𝑅 ⋆ (𝑥) = 2 𝑠 𝑅 𝑠 (𝑥). 𝑅 𝑠 is thus a rescaled version of 𝑅. We take 𝑓 𝑠 = 2 -𝑠 𝑓 to be the corresponding rescaling of 𝑓 . The magnitude bound 𝑑 is usually equal to 𝛼 -max(|𝑎|, |𝑏|), allowing the denominator coefficients to be valid with respect to the second constraint of [START_REF] Ercegovac | A general method for evaluation of functions and computation in a digital computer[END_REF]. Both Algorithm 1 and 2 can be modified to compute weighted error approximations, that is, work with a norm of the form ‖𝑔‖ = max 𝑥∈[𝑎,𝑏] |𝑤(𝑥)𝑔(𝑥)|, where 𝑤 is a continuous and positive weight function over [𝑎, 𝑏]. This is useful, for instance, when targeting relative error approximations. The changes are minimal and consist only of introducing the weight factor in the error computations in lines 3, 5 of Algorithm 1, lines 2, 3, 5, 8, 9 of Algorithm 2 and changing [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF] with max 𝑥∈𝑋 {︂ 𝑤(𝑥) |𝑓 (𝑥)𝑄 new (𝑥) -𝑃 new (𝑥)| -𝛿𝑄 new (𝑥) 𝑄(𝑥) }︂ . The weighted version of the DC algorithm is discussed, for instance, in [START_REF] Dudgeon | Recursive filter design using differential correction[END_REF]. Lattice basis reduction step Our goal is to compute a simple E-fraction ̂︀ 𝑅(𝑥) = ∑︀ 𝜇 𝑗=0 ̂︀ 𝑝 𝑗 𝑥 𝑗 1 + ∑︀ 𝜈 𝑗=1 ̂︀ 𝑞 𝑗 𝑥 𝑗 , where ̂︀ 𝑝 𝑗 and ̂︀ 𝑞 𝑗 are fixed-point or floating-point numbers [START_REF] Ercegovac | Digital Arithmetic[END_REF][START_REF] Muller | Handbook of Floating-Point Arithmetic[END_REF], that is as close as possible to 𝑓 𝑠 , the function we want to evaluate. These unknown coefficients are of the form 𝑀 2 𝑒 , 𝑀 ∈ Z: • for fixed-point numbers, 𝑒 is implicit (decided at design time); • for floating-point numbers, 𝑒 is explicit (i.e., stored). A floating-point number is of precision 𝑡 if 2 𝑡-1 𝑀 < 2 𝑡 -1. A different format can be used for each coefficient of the desired fraction. If we assume a target format is given for each coefficient, then a straightforward approach is to round each coefficient of 𝑅 𝑠 to the desired format. This yields what we call in the sequel a naive rounding approximation. Unfortunately, this can lead to a significant loss of accuracy. We first briefly recall the approach from [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF] that makes it possible to overcome this issue. Then, we present a small trick that improves on the quality of the output of the latter approach. Eventually, we explain how to handle a coefficient saturation issue appearing in some high radix cases. Modeling with a closest vector problem in a lattice Every fixed-point number constraint leads to a corresponding unknown 𝑀 , whereas each precision-𝑡 floating-point number leads to two unknowns 𝑀 and 𝑒. 
A heuristic trick is given in [START_REF] Brisebarre | Efficient polynomial 𝐿 ∞ approximations[END_REF] to find an appropriate value for each 𝑒 in the floating-point case: we assume that the coefficient in question from ̂︀ 𝑅 will have the same order of magnitude as the corresponding one from 𝑅 𝑠 , hence they have the same exponent 𝑒. Once 𝑒 is set, the problem is reduced to a fixed-point one. Then, given 𝑢 0 , . . . , 𝑢 𝜇 , 𝑣 1 , . . . , 𝑣 𝜈 ∈ Z, we have to determine 𝜇 + 𝜈 + 1 unknown integers 𝑎 𝑗 (= ̂︀ 𝑝 𝑗 2 -𝑢𝑗 ) and 𝑏 𝑗 (= ̂︀ 𝑞 𝑗 2 -𝑣𝑗 ) such that the fraction ̂︀ 𝑅(𝑥) = ∑︀ 𝜇 𝑗=0 𝑎 𝑗 2 𝑢𝑗 𝑥 𝑗 1 + ∑︀ 𝜈 𝑗=1 𝑏 𝑗 2 𝑣𝑗 𝑥 𝑗 is a good approximation to 𝑅 𝑠 (and 𝑓 𝑠 ), i.e., ‖ ̂︀ 𝑅 -𝑅 𝑠 ‖ is small. To this end, we discretize the latter condition in 𝜇 + 𝜈 + 1 points 𝑥 0 < • • • < 𝑥 𝜇+𝜈 in the interval [𝑎, 𝑏], which gives rise to the following instance of a closest vector problem, one of the fundamental questions in the algorithmics of Euclidean lattices [START_REF] Nguyen | The LLL Algorithm -Survey and Applications, ser. Information Security and Cryptography[END_REF]: we want to compute 𝑎 0 , . . . , 𝑎 𝜇 , 𝑏 1 , . . . , 𝑏 𝜈 ∈ Z such that the vectors 𝜇 ∑︁ 𝑗=0 𝑎 𝑗 𝛼 𝑗 - 𝑛 ∑︁ 𝑗=1 𝑏 𝑗 𝛽 𝑗 and r (9) are as close as possible, where 𝛼 𝑗 = [2 𝑢𝑗 𝑥 𝑗 0 , . . . , 2 𝑢𝑗 𝑥 𝑗 𝜇+𝜈 ] 𝑡 , 𝛽 𝑗 = [2 𝑣𝑗 𝑥 𝑗 0 𝑅 𝑠 (𝑥 0 ), . . . , 2 𝑣𝑗 𝑥 𝑗 𝜇+𝜈 𝑅 𝑠 (𝑥 𝜇+𝜈 )] 𝑡 and r = [𝑅 𝑠 (𝑥 0 ), . . . , 𝑅 𝑠 (𝑥 𝜇+𝜈 )] 𝑡 . It can be solved in an approximate way very efficiently by applying techniques introduced in [START_REF] Lenstra | Factoring polynomials with rational coefficients[END_REF] and [START_REF] Babai | On Lovász' lattice reduction and the nearest lattice point problem[END_REF]. We refer the reader to [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF][START_REF] Brisebarre | Efficient polynomial 𝐿 ∞ approximations[END_REF] for more details on this and how the discretization 𝑥 0 , . . . , 𝑥 𝜇+𝜈 should be chosen. We thus propose to fix the problematic values of 𝑏 𝑗 to the closest value to the allowable limit that does not break the E-method magnitude constraints. The change is minor in (9); we just move the corresponding vectors in the second sum on the left hand side of (9) to the right hand side with opposite sign. The resulting problem can also be solved using the tools from [START_REF] Brisebarre | An Efficient Method for Evaluating Polynomial and Rational Function Approximations[END_REF][START_REF] Brisebarre | Efficient polynomial 𝐿 ∞ approximations[END_REF]. This usually gives a valid simple E-fraction ̂︀ 𝑅 of very good quality. Higher radix problems Coefficient saturation issues get more pronounced by increasing the radix 𝑟. In such cases, care must also be taken with the approximation domain: the |𝑞 𝑗 | upper magnitude bound 𝛼 -max(|𝑎|, |𝑏|) can become negative, since 𝛼 = (1 -∆)/(2𝑟) → 0 as 𝑟 → ∞. To counter this, we use argument and domain scaling ideas presented in [START_REF] Brisebarre | Functions approximable by E-fractions[END_REF]. This basically consists in approximating 𝑓 (𝑥) = 𝑓 (2 𝑡 𝑦), for 𝑦 ∈ [2 -𝑡 𝑎, 2 -𝑡 𝑏] as a function in 𝑦. If 𝑡 > 0 is large enough, then the new |𝑞 𝑗 | bound 𝛼 -max(|2 -𝑡 𝑎|, |2 -𝑡 𝑏|) will be 0. A hardware implementation targeting FPGAs We now focus on the hardware implementation of the E-method on FPGAs. This section introduces a generator capable of producing circuits that can solve the system A • y = b, through the recurrences of Equations ( 3)-( 5). 
The popularity of FPGAs is due to their ability to be reconfigured, and their relevance in prototyping as well as in scientific and high-performance computing. They are composed of large numbers of small look-up tables (LUTs), with 4-6 inputs and 1-2 outputs, and each can store the result of any logic function of their inputs. Any two LUTs on the device can communicate, as they are connected through a programmable interconnect network. Results of computations can be stored in registers, usually two of them being connected to the output of each LUT. These features make FPGAs a good candidate as a platform for implementing the E-method, as motivated even further below.

A minimal interface

An overview of the hardware back-end is presented in Figure 1. Although not typically accessible to the user, its interface is split between the functional and the performance specification. The former consists of the radix r, the input and output formats, specified as the weights of their most significant (MSB) and least significant (LSB) bits, the parameter ∆ (input by the user) and the coefficients of the polynomials P_μ(x) and Q_ν(x) (coming from the rational approximation). Having msb_in as a parameter is justified by some of the examples of Section 4, where even though the input x belongs to [-1, 1], the maximum value it is allowed to have is smaller, given by the constraints (6) and (7). This allows for optimizing the datapath sizes. The msb_out can be computed during the polynomial/rational approximation phase and passed to the generator. The circuit generator is developed inside the FloPoCo framework [START_REF] De Dinechin | Designing custom arithmetic data paths with FloPoCo[END_REF], which facilitates the support of classical parameters in the performance specification, such as the target frequency of the circuit, or the target FPGA device. It also means that we can leverage the automatic pipelining and test infrastructure present in the framework, alongside the numerous existing arithmetic operators.

Figure 2: Computational Unit CU_i (inputs w_i^(j-1), d_0^(j-1), d_i^(j-1), d_{i+1}^(j-1); a KCM multiplier by q_i, a multiplexer over the precomputed multiples (-ρ)x, …, -x, 0, x, …, ρx, a bitheap adder and a left shift producing w_i^(j)).

Implementation details

An overview of the basic iteration, based on Equation (3), is presented in Figure 2. As this implementation is targeted towards FPGAs, several optimizations can be applied. First, the multiplication d_0^(j-1) · q_i can be computed using the KCM technique for multiplying by a constant [START_REF] Chapman | Fast integer multipliers fit in FPGAs (EDN 1993 design idea winner)[END_REF][START_REF] Wirthlin | Constant coefficient multiplication using look-up tables[END_REF], with the optimizations of [START_REF] Volkova | Towards Hardware IIR Filters Computing Just Right: Direct Form I Case Study[END_REF], that extend the method to real constants. Therefore, instead of using dedicated multiplier blocks (or of generating partial products using LUTs), we can directly tabulate the result of the multiplication d_0^(j-1) · q_i, at the cost of one LUT per output bit of the result. This remains true even for higher radices, as LUTs on modern devices can accommodate functions of 6 Boolean inputs. A second optimization relates to the term d_{i+1}^(j-1) · x. The multiplications by the negative values in the digit set come at the cost of just one bitwise operation and an addition, which are combined in the same LUT by the synthesis tools.
Finally, regarding the implementation of the CUs, the multi-operand addition of the terms of Equation ( 3) is implemented using a bitheap [START_REF] Brunie | Arithmetic core generation using bit heaps[END_REF]. The alignments of the accumulated terms and their varied sizes would make for a wasteful use of adders. Using a bitheap we have a single, global optimization of the accumulation. In addition, managing sign extensions comes at the cost of a single additional term in the accumulation, using a technique from [START_REF] Ercegovac | Digital Arithmetic[END_REF]. The sign extension of a two's complement fixed point number sxx . . . xx is performed as: 00...0sxxxxxxx + 11...110000000 = ss...ssxxxxxxx The sum of the constants is computed in advance and added to the accumulation. The final shift comes at just the cost of some routing, since the shift is by a constant amount. Modern FPGAs contain fast carry chains. Therefore, we represent the components of the residual vector 𝑤 using two's complement, as opposed to a redundant representation. The selection function uses an estimate of 𝑤 𝑖 with one fractional bit allowing a simple tabulation using LUTs. Iteration 0, the initialization, comes at almost no cost, and can be done through fixed routing in the FPGA. This is also true for the second iteration, as simplifying the equations results in 𝑤 can be pre-computed and stored directly. This not only saves one iteration, but also improves the accuracy, as 𝑤 (1) 𝑖 and 𝑑 (1) 𝑖 can be pre-computed using higher-precision calculations. Going one iteration further, we can see that most of the computations required for 𝑤 (2) 𝑖 can also be done in advance, except, of course, those involving 𝑥. Figure 3 shows an unrolled implementation of the E-method, that uses the CUs of Figure 2 as basic building blocks. At the top of the architecture, a scaling can be applied to the input. This step is optional, and the scale factor (optional parameter in the design) can either be set by the user, or computed by the generator so that, given the input format, the parameter ∆ and the coefficients of 𝑃 and 𝑄, the scaled input satisfies the constraints ( 6) and [START_REF]A general hardware-oriented method for evaluation of functions and computations in a digital computer[END_REF]. The multiplications between 𝑥 and the possible values of the digits 𝑑 (𝑗) 𝑖 are done using the classical shift-and-add technique, a choice justified by the small values of the constants and the small number of bits equal to 1 in their representations. At the bottom of Figure 3, the final result 𝑦 0 is obtained in two's complement representation. Again, this step is also optional, as users might be content with having the result in the redundant representation. There is one more optimization that can be done here due to an unrolled implementation. Because only the 𝑑 (𝑗) 0 digits are required to compute 𝑦 0 , after iteration 𝑚 -𝑛 we can compute one less element of w (𝑗) and d (𝑗) at each iteration. This optimization is the most effective when the number of required iterations 𝑚 is comparable to 𝑛, in which case the required hardware is reduced to almost half. Error Analysis To obtain a minimal implementation for the circuit described in Figure 3, we need to size the datapaths in a manner that guarantees that the output 𝑦 0 remains unchanged, with respect to an ideal implementation, free of potential rounding errors. 
Error Analysis To obtain a minimal implementation for the circuit described in Figure 3, we need to size the datapaths in a manner that guarantees that the output 𝑦 0 remains unchanged with respect to an ideal implementation, free of rounding errors. To that end, we give an error analysis which follows [START_REF] Ercegovac | A general method for evaluation of functions and computation in a digital computer[END_REF], Ch. 2.8. For the sake of brevity, we focus on the radix 2 case. In order for the circuit to produce correct results, we must ensure that the rounding errors do not influence the selection function: S(w̃_i^(j)) = S(w_i^(j)) = d_i^(j), where the tilded terms represent approximate values. In [START_REF] Ercegovac | A general method for evaluation of functions and computation in a digital computer[END_REF], the idea is to model the rounding errors due to the limited precision used to store the coefficients 𝑝 𝑗 and 𝑞 𝑗 inside the matrix 𝐴 as an error matrix E_A = (ε_ij)_{n×n}. With the method introduced in this paper, the coefficients are machine-representable numbers, and therefore incur no additional error. What remains are the errors due to the limited precision of the operators involved. The only one that could produce rounding errors is the multiplication d_0^(j-1) • 𝑞 𝑖 . We know that |d_0^(j-1)| ≥ 1 (the case d_0^(j-1) = 0 is clearly not a problem), so the LSB of d_0^(j-1) • 𝑞 𝑖 is at least that of 𝑞 𝑖 , if not larger. If the output precision satisfies lsb_out ≤ lsb_{q_i} (which is usually the case), we perform this operation in full precision, so we do not require any additional guard bits for the internal computations. If this assumption does not hold then, based on [START_REF] Ercegovac | A general method for evaluation of functions and computation in a digital computer[END_REF], we obtain the following expression for the rounding errors introduced when computing w^(j) in Equation (2), denoted ε_w^(j): ε_w^(j) = 2 • (ε_w^(j-1) + ε_const_mult + E_A • d^(j-1)). We can thus obtain an expression for ε_w^(m), the error vector at step 𝑚, where 𝑚 is the bitwidth of 𝑦 0 and ε_const_mult are the errors due to the constant multipliers. Since ε_w^(0) = 0, ε_{w_0}^(m) = 2^m • (ε_const_mult + ‖E_A‖ • Σ_{j=1}^{m} d_0^(j) • 2^{-j}), where ‖E_A‖ is the matrix 2-norm. We use a larger intermediary precision for the computations, with 𝑔 extra guard bits. Therefore, we can design a constant multiplier for which ε_const_mult ≤ ε̄_const_mult = 2^{-m-g}. Also, ‖E_A‖ ≤ max_i Σ_{j=1}^{n} |ε_ij| and Σ_{j=1}^{m} d_0^(j) • 2^{-j} < 1, hence we can deduce that for each w_i^(m) we have ε_{w_i}^(m) ≤ ε̄_{w_i}^(m) = 2^m (2^{-m-g} + 𝑛 • 2^{-m-g}). In order for the method to produce correct results, we need to ensure that ε̄_w^(m) ≤ ∆/2, therefore we need to use 𝑔 ≥ 2 + log_2(2(𝑛 + 1)/∆) additional guard bits. This also takes into consideration the final rounding to the output format.
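As a quick aid for reusing this bound, a possible helper is sketched below; the function name and the use of a ceiling are additions of ours, and the generator may round the guard-bit count differently.

```python
# Small helper illustrating the guard-bit rule derived above,
# g >= 2 + log2(2(n+1)/Delta).  Names are ours, not the generator's API.
import math

def guard_bits(n: int, delta: float) -> int:
    """Extra fractional bits used on the internal datapaths."""
    return math.ceil(2 + math.log2(2 * (n + 1) / delta))

# Example: an E-fraction of maximum degree n = 4 evaluated with Delta = 1/8.
print(guard_bits(4, 0.125))   # 9
```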
Examples, Implementation and Discussion In this section, we consider fractions with fixed-point coefficients of 24, 32, and 48 bits: these coefficients are of the form 𝑖/2^w, with -2^w ≤ 𝑖 ≤ 2^w, where 𝑤 = 24, 32, 48. The target approximation error in each case is 2^{-w}, i.e., ∼ 5.96 • 10^-8, 2.33 • 10^-10 and 3.55 • 10^-15, respectively. Examples. All the examples are defined in the first column of Table 1. When choosing them we considered: • Functions useful in practical applications. The exponential function (Example 2) is a ubiquitous one. Functions of the form log_2(1 + 2^{±kx}) (as the one of Example 3) are useful when implementing logarithmic number systems. The erf function (Example 4) is useful in probability and statistics, while the Bessel function 𝐽 0 (Example 5) has many applications in physics. • Functions that illustrate the various cases that can occur: polynomials are a better choice (Example 3); rational approximation is better (Examples 1 and 2, Example 4 at higher radices, and Example 5 if 𝑟 = 2). We also include instances where the approximating E-fractions are very different from the minimax, unconstrained, rational approximations with similar degrees in the numerator and denominator (Examples 1 and 2). All the examples start with a radix 2 implementation, after which higher values of 𝑟 are considered. Table 1 displays approximation errors in the real coefficient and fixed-point coefficient E-fraction cases. Notice in particular the lattice-based approximation errors, which are generally much better than the naive rounding ones. We also give some complementary comments. Example 1. The type (4, 4) rational minimax unconstrained approximation error is 4.59 • 10^-16, around 5 orders of magnitude smaller than the E-fraction error. A similar difference happens in the case of Example 2, where the corresponding unconstrained minimax approximation has error 2.26 • 10^-16. Example 2. In this case, we are actually working with a rescaled input and are equivalently approximating exp(2𝑥), 𝑥 ∈ [0, 7/128]. Also, for 𝑟 = 8, the real coefficient E-fraction is the same as the E-polynomial one (the magnitude constraint for the denominator coefficients is 0). Example 3. Starting with 𝑟 = 8, we have to scale both the argument 𝑥 and the approximation domain by suitable powers of 2 for the E-method constraints to continue to hold (see end of Section 2.1). Example 4. As with the previous example, for 𝑟 = 16, 32 we have to rescale the argument and interval to get a valid E-polynomial. Example 5. By a change of variable, we are actually working with 𝐽 0 (2𝑥 -1/16), 𝑥 ∈ [0, 1/16]. If we consider 𝑟 ≥ 16, the 48 bits used to represent the coefficients were not sufficient to produce an approximation with error below 2^-48. Implementation. We have generated the corresponding circuits for each of the examples, and synthesized them. The target platform is a Xilinx Virtex6 device xc6vcx75t-2-ff484, and the toolchain used is ISE 14.7. The resulting circuit descriptions are in an easily readable and portable VHDL. For each of the examples we compare against a state-of-the-art polynomial approximation implementation generated by FloPoCo (described in [START_REF] De Dinechin | On fixed-point hardware polynomials[END_REF]). FloPoCo [START_REF] De Dinechin | Designing custom arithmetic data paths with FloPoCo[END_REF] is an open-source arithmetic core generator and library for FPGAs. It is, to the best of our knowledge, one of the few alternatives capable of producing the functions chosen for comparison. Table 2 presents the results. At the top of Table 2, for Example 1, we show the flexibility of the generator: it can achieve a wide range of latencies and target frequencies. The examples show how the frequency can be scaled from around 100MHz to 300MHz, at the expense of a deeper pipeline and an increased number of registers. The number of registers approximately doubles each time the circuit's period is reduced by a factor of 2. This very predictable behavior should help the end user make an acceptable trade-off between performance and required resources. The frequency cap of 300MHz is not inherent to the E-method algorithm, nor to the implementation.
Instead, it comes from pipeline performance issues of the bitheap framework inside the FloPoCo generator. We expect that once this bottleneck is fixed, our implementations will reach much higher target frequencies, without the need for any modification of the current implementation. Discussion. Examples 1 and 2 illustrate that for functions where classical polynomial approximation techniques, like the one used in FloPoCo, manage to find solutions of reasonably small degree, the resulting architectures are also highly efficient. This shows in the synthesis results: the implementations produced by FloPoCo (with polynomials of degree 6 in both cases) are at least twice as efficient in terms of resources. However, this is no longer the case when E-fractions can provide a better approximation. This is reflected by Examples 3 to 5, where we obtain a more efficient solution, by quite a large margin in some cases. For Example 5, Table 2 does not present any data for the FloPoCo implementation, as it does not currently support this type of function. There are a few remarks to be made regarding the use of a higher radix in the implementations of the E-method. Example 4 is an indication that the overall delay of the architecture reaches a point where it can no longer benefit from increasing the radix. The lines of Table 2 marked with an asterisk were generated with an alternative implementation of the CUs, which uses multipliers for computing the d^(j-1)_{i+1} • 𝑥 products. This is due to the exponential increase of the size of the multiplexers with the radix, while the equivalent multiplier only grows linearly. Therefore, there is a crossover point from which it is best to use this version of the architecture, usually at radix 8 or 16. Finally, the effects of truncating the last iterations become most obvious when the maximum degree 𝑛 is close to the number of required iterations 𝑚 in radix 𝑟. This effect can be observed for Examples 3 and 4, where there is a considerable drop in resource consumption between the use of radix 8 and 16, and 16 and 32, respectively. Summary and Conclusions A high-throughput system for the evaluation of functions by polynomials or rational functions using simple and uniform hardware is presented. The evaluation is performed using the unfolded version of the E-method, with a latency proportional to the precision. An effective computation of the coefficients of the approximations is given, and the best strategies (choice of polynomial vs. rational approximation, radix of the iterations) are investigated. Designs using a circuit generator for the E-method inside the FloPoCo framework are developed and implemented on FPGAs for five different functions of practical interest, using various radices. As it stands, and as our examples show for FPGA devices, the E-method is generally more efficient as soon as the rational approximation is significantly more efficient than the polynomial one. From a hardware standpoint, the results show it is desirable to use the E-method with high radices, usually at least 8. The method also becomes efficient when we manage to find a balance between the maximum degree 𝑛 of the polynomial or E-fraction and the number of iterations required for converging to a correct result, which we can control by varying the radix. An open-source implementation of our approach will soon be available online at https://github.com/sfilip/emethod.
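The summary above refers to the effective computation of the approximation coefficients, which Appendix A below analyzes through a modified differential correction algorithm (Algorithm 1). Purely as an illustration of that iteration, the following sketch poses one differential-correction step on a discretized interval as a linear program; the discretization, the use of scipy's linprog and all names are assumptions of ours, not the authors' implementation.

```python
# Illustrative sketch of one differential-correction step for E-fractions
# (see Appendix A), posed as a linear program over a discrete grid xs.
import numpy as np
from scipy.optimize import linprog

def diff_correction_step(f, xs, mu, nu, d, R_k):
    """One update R_k -> R_{k+1}.  R_k = (a_k, b_k), with Q_k(x) = 1 + sum b_k x^j."""
    a_k, b_k = R_k
    P_k = lambda x: sum(c * x**j for j, c in enumerate(a_k))
    Q_k = lambda x: 1 + sum(c * x**(j + 1) for j, c in enumerate(b_k))
    delta_k = max(abs(f(x) - P_k(x) / Q_k(x)) for x in xs)

    nvar = (mu + 1) + nu + 1            # variables: a_0..a_mu, b_1..b_nu, t
    A_ub, b_ub = [], []
    for x in xs:
        pa = [x**j for j in range(mu + 1)]
        pb = [x**(j + 1) for j in range(nu)]
        qk = Q_k(x)
        # |f*Q - P| - delta_k*Q <= t * Q_k, written as two one-sided rows
        A_ub.append([-v for v in pa] + [(f(x) - delta_k) * v for v in pb] + [-qk])
        b_ub.append(delta_k - f(x))
        A_ub.append(pa + [-(f(x) + delta_k) * v for v in pb] + [-qk])
        b_ub.append(f(x) + delta_k)

    bounds = [(None, None)] * (mu + 1) + [(-d, d)] * nu + [(None, None)]
    res = linprog(c=[0] * (nvar - 1) + [1], A_ub=A_ub, b_ub=b_ub,
                  bounds=bounds, method="highs")
    a = list(res.x[:mu + 1])
    b = list(res.x[mu + 1:mu + 1 + nu])
    return (a, b), delta_k

# Example: approximate exp on [0, 7/64] by a (3, 3) E-fraction with |b_j| <= 1/4,
# starting from R = 1 as in Algorithm 1.
xs = np.linspace(0, 7 / 64, 200)
R, err = ([1.0, 0, 0, 0], [0.0, 0, 0]), None
for _ in range(8):
    R, err = diff_correction_step(np.exp, xs, 3, 3, 0.25, R)
print("max error before last step:", err)
```

Each step is feasible (the current R_k with t = 0 satisfies the constraints), and, as Theorems 1 and 2 of the appendix show for the exact iteration, the error δ_k decreases monotonically towards the best E-fraction error.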
A Convergence of the E-fraction differential correction algorithm The E-fraction approximation problem can be formulated in terms of finding functions of the form 𝑅(𝑥) = 𝑃 (𝑥)/𝑄(𝑥) := ∑︀ 𝜇 𝑗=0 𝛼𝑗𝑥 𝑗 1 + ∑︀ 𝜈 𝑗=1 𝛽𝑗𝑥 𝑗 , (10) that best approximate 𝑓 , where the denominator coefficients are bounded in magnitude, i.e., max 1 𝑘 𝜈 |𝛽 𝑘 | 𝑑 > 0, (11) and 𝑄(𝑥) > 0, ∀𝑥 ∈ 𝑋. We denote this set as ℛ𝜇,𝜈 (𝑋). In this case, the differential correction algorithm is defined as follows: given 𝑅 𝑘 = 𝑃 𝑘 /𝑄 𝑘 ∈ ℛ𝜇,𝜈 (𝑋) satisfying ( 10) and [START_REF] Cheney | Introduction to Approximation Theory[END_REF], find 𝑅 = 𝑃/𝑄 that minimizes the expression 𝑀 over 𝑋 for any rational function satisfying [START_REF] Ercegovac | Digital Arithmetic[END_REF] and [START_REF] Cheney | Introduction to Approximation Theory[END_REF] where 𝑧 𝑘 is an element of 𝑋 minimizing 𝑄 𝑘 (𝑥) 𝑄 𝑘+1 (𝑥) . Since 𝛿 𝑘 converges to 𝛿 > 𝛿, [START_REF] Reemtsen | Modifications of the first Remez algorithm[END_REF] It follows that ∏︀ 𝑥∈𝑋 𝑄 𝑘 (𝑥) diverges as 𝑘 → ∞. This contradicts the normalization condition [START_REF] Cheney | Introduction to Approximation Theory[END_REF]. (𝑗) 𝑖 are selected so that the residuals |𝑤 (𝑗) 𝑖 | remain bounded. The digit selection is performed by rounding the residuals 𝑤 (𝑗) 𝑖 Figure 2 : 2 Figure 2: The basic Computation Unit (CU). (𝑗- 1 ) 1 𝑖+1 • 𝑥, from Equation (3). Since 𝑑 (𝑗-1) 𝑖+1 ∈ {-𝜌, . . . , 𝜌}, we can compute the products 𝑥 • 𝜌, 𝑥 • (𝜌 -1), 𝑥 • (𝜌 -2), . . . , only once, and then select the relevant one based on the value of 𝑑 (𝑗-1) Figure 3 : 3 Figure 3: The E-method circuit generator. max 𝑥∈𝑋 {︂ |𝑓 (𝑥)𝑄(𝑥) -𝑃 (𝑥)| -𝛿 𝑘 𝑄(𝑥) 𝑄 𝑘 (𝑥) }︂ , subject to max 1 𝑘 𝜈 |𝛽𝑗| 𝑑, where 𝛿 𝑘 = max𝑥∈𝑋 |𝑓 (𝑥) -𝑅 𝑘 (𝑥)|. If 𝑅 = 𝑃/𝑄 is not good enough, continue with 𝑅 𝑘+1 = 𝑅. The convergence properties of this variation are similar to those of the original differential correction algorithm. Let 𝛿 * = inf 𝑅∈ℛ𝜇,𝜈 (𝑋) ‖𝑓 -𝑅‖, where ‖𝑔‖ = max𝑥∈𝑋 |𝑔(𝑥)|. Theorem 1. If 𝑄 𝑘 (𝑥) > 0, ∀𝑥 ∈ 𝑋 and if 𝛿 𝑘 ̸ = 𝛿, then 𝛿 𝑘+1 < 𝛿 𝑘 and 𝑄 𝑘+1 (𝑥) > 0, ∀𝑥 ∈ 𝑋. Proof. Since 𝛿 𝑘 ̸ = 𝛿 * , there exists 𝑅 = 𝑃 /𝑄 ∈ ℛ𝜇,𝜈 (𝑋) s.t. ‖𝑓 -𝑅‖ = 𝛿 < 𝛿 𝑘 . Let Δ = min𝑥∈𝑋 ⃒ ⃒ 𝑄(𝑥) ⃒ ⃒ and 𝑀 = 1 + 𝑑 max𝑥∈𝑋 ∑︀ 𝜈 𝑘=1 ⃒ ⃒ 𝑥 𝑘 ⃒ ⃒ so that |𝑄(𝑥)| Algorithm 1 E-fraction EDiffCorr algorithm Input: 𝑓 ∈ 𝒞([𝑎, 𝑏]), 𝜇, 𝜈 ∈ N, finite set 𝑋 ⊆ [𝑎, 𝑏] with |𝑋| > 𝜇 + 𝜈, threshold 𝜀 > 0, coefficient magnitude bound 𝑑 > 0 𝑘 𝑥 𝑘 of 𝑓 over 𝑋 s.t. max 1 𝑘 𝜈 |𝑞 𝑘 | 𝑑 // Initialize the iterative procedure (𝑅 = 𝑃/𝑄) Output: approximation 𝑅(𝑥) = ∑︀ 𝜇 𝑘=0 𝑝 𝑘 𝑥 𝑘 1 + ∑︀ 𝜈 𝑘=1 𝑞 1: 𝑅 ← 1 2: repeat 3: While we generally obtain integer 𝑎 𝑗 and 𝑏 𝑗 which correspond to a good approximation, the solution is not always guaranteed to give a valid simple E-fraction. What happens is that, in many cases, some of the denominator coefficients in 𝑅 𝑠 are maximal with respect to the magnitude constraint in (6) (recall that the second line in (6) can be restated as |𝑞 𝑗 | 𝛼 -max(|𝑎|, |𝑏|)). In this context, the corresponding values of |𝑏 𝑗 | are usually too large. radix 𝑟 input format (𝑚𝑠𝑏 𝑖𝑛 , 𝑙𝑠𝑏 𝑖𝑛 ) output format (𝑚𝑠𝑏 𝑜𝑢𝑡 , 𝑙𝑠𝑏 𝑜𝑢𝑡 ) Circuit generator .vhdl 0 < ∆ < 1 (𝑝 𝑗 ) 0 𝑗<𝜇 and (𝑞 𝑗 ) 0 𝑗<𝜈 FPGA frequency Functional specification Performance specification Figure 1: Circuit generator overview. 2.2.2 A solution to a coefficient saturation issue Table 1 : 1 Approximation errors in the real coefficient and fixed-point coefficient E-fraction cases Function Type of error Δ 𝑟 (𝜇, 𝜈) 𝑚 Real coefficient E-fraction error Lattice-based error Naive rounding error Ex. 
1 √︀ 1 + (9𝑥/2) 4 , 𝑥 ∈ [0, 1/32] absolute 1 8 2 4 8 (4, 4) 32 5.22 • 10 -11 6.32 • 10 -11 8.25 • 10 -11 5.71 • 10 -11 4.93 • 10 -10 1.11 • 10 -10 1.11 • 10 -9 7 • 10 -11 1.78 • 10 -9 Ex. 2 exp(𝑥), 𝑥 ∈ [0, 7/64] relative 1 8 2 4 8 (3, 3) (4, 4) (5, 0) 32 1.64 • 10 -10 10 -12 1.16 • 10 -12 1.94 • 10 -10 1.11 • 10 -12 1.39 • 10 -12 3.24 • 10 -10 1.91 • 10 -11 1.74 • 10 -11 Ex. 3 log 2 (1 + 2 -16𝑥 ), 𝑥 ∈ [0, 1/16] absolute 1 2 2 4,8,16 (5, 5) 24 (5, 0) 1.98 • 10 -8 2.04 • 10 -8 2.33 • 10 -8 2.64 • 10 -8 4.37 • 10 -7 4.22 • 10 -7 Ex. 4 erf(𝑥), 𝑥 ∈ [0, 1/32] absolute 1 8 2 4 8,16,32 (5, 0) (4, 4) (4, 4) 48 2.92 • 10 -17 3.44 • 10 -17 1.34 • 10 -15 3.43 • 10 -17 4.23 • 10 -17 1.64 • 10 -15 1.67 • 10 -16 1.13 • 10 -16 2.7 • 10 -15 Ex. 5 𝐽0(𝑥), 𝑥 ∈ [-1/16, 1/16] relative 1 2 2 4,8 (4, 4) 48 (6, 0) 2.15 • 10 -17 1.23 • 10 -17 2.37 • 10 -15 2.37 • 10 -15 2.49 • 10 -15 2.53 • 10 -15 Table 2 : 2 Synthesis results for a Xilinx Virtex6 device Design Approach radix Resources LUT reg. Performance cycles@period(ns) 7,880 0 [email protected] 2 7,966 7,299 1,523 2,689 [email protected] [email protected] 6,786 5,202 [email protected] 4,871 0 [email protected] Ours 4 4,768 4,600 988 1,583 [email protected] [email protected] 4,853 3,106 [email protected] Ex. 1 4,210 0 [email protected] 3,875* 0 [email protected]* 8 5,307* 309 [email protected]* 5,184* 499 [email protected]* 4,707* 1,027 [email protected]* 994 0 [email protected] FloPoCo - 1,032 138 [email protected] 1,147 335 [email protected] 2 6,820 0 [email protected] Ex. 2 Ours 4 8 6,356 5,042 0 0 [email protected] [email protected] FloPoCo - 3,024 0 [email protected] 2 2,944 0 [email protected] 4 2,742 0 [email protected] Ex. 3 Ours 8 16 2,582 2,856 1,565* 0 0 0 [email protected] [email protected] [email protected]* FloPoCo - 3,622 0 [email protected] 2 19,564 0 [email protected] Ex. 4 Ours 4 8 23,052 21,179* 15,388* 0 0 0 [email protected] [email protected]* [email protected]* 16 12,878* 0 [email protected]* 32 3,909* 0 [email protected]* FloPoCo - 20,494 0 [email protected] 2 19,423 0 [email protected] Ex. 5 Ours 4 8 13,642 18,653 0 0 [email protected] [email protected] FloPoCo - - - - . By the definition of 𝑃 𝑘+1 and 𝑄 𝑘+1 , we havemax𝑥∈𝑋 {︂ |𝑓 (𝑥)𝑄 𝑘+1 (𝑥) -𝑃 𝑘+1 (𝑥)| -𝛿 𝑘 𝑄 𝑘+1 (𝑥) 𝑄 𝑘 (𝑥)This chain of inequalities tells us that𝛿 𝑘 𝑄 𝑘+1 (𝑥)/𝑄 𝑘 (𝑥) Δ(𝛿 𝑘 -𝛿)/𝑀, ∀𝑥 ∈ 𝑋(12)resulting in 𝑄 𝑘+1 (𝑥) > 0, ∀𝑥 ∈ 𝑋 and{|𝑓 (𝑥)𝑄 𝑘+1 (𝑥) -𝑃 𝑘+1 (𝑥)| -𝛿 𝑘 𝑄 𝑘+1 (𝑥)} /𝑄 𝑘 (𝑥) < 0, ∀𝑥 ∈ 𝑋, leading to |𝑓 (𝑥) -𝑃 𝑘+1 (𝑥)/𝑄 𝑘+1 (𝑥)| < 𝛿 𝑘 , ∀𝑥 ∈ 𝑋.The conclusion now follows by the definition of 𝛿 𝑘 . 𝛿 𝑘 → 𝛿 * as 𝑘 → ∞.Proof. By the previous theorem, we know that {𝛿 𝑘 } is a monotonically decreasing sequence bounded below by 0. It is thus convergent. Denote its limit by 𝛿 and assume that 𝛿 > 𝛿 * . Then, there exists 𝑅 = 𝑃 /𝑄 ∈ ℛ𝜇,𝜈 (𝑋) s.t.Let Δ and 𝑀 be as in the previous theorem. By the aforementioned chain of inequalities, we get 𝛿 𝑘 -Δ 𝑀 (𝛿 𝑘 -𝛿) 𝑄 𝑘 (𝑧 𝑘 ) 𝑄 𝑘+1 (𝑧 𝑘 ) , }︂ max𝑥∈𝑋 {︃ 𝑄 𝑘 (𝑥) ⃒ 𝑓 (𝑥)𝑄(𝑥) -𝑃 (𝑥) ⃒ ⃒ ⃒ -𝛿 𝑘 𝑄(𝑥) }︃ max𝑥∈𝑋 {︂[︂⃒ ⃒ ⃒ ⃒ 𝑓 (𝑥) - 𝑃 (𝑥) 𝑄(𝑥) ⃒ ⃒ ⃒ ⃒ -𝛿 𝑘 ]︂ 𝑄 𝑘 (𝑥) 𝑄(𝑥) }︂ max𝑥∈𝑋 {︂ (𝛿 -𝛿 𝑘 ) 𝑄 𝑘 (𝑥) 𝑄(𝑥) }︂ = -(𝛿 𝑘 -𝛿) min𝑥∈𝑋 𝑄(𝑥) 𝑄 𝑘 (𝑥) - Δ 𝑀 (𝛿 𝑘 -𝛿). Theorem 2. ⃦ ⃦ 𝑓 -𝑅 ⃦ ⃦ = 𝛿 < 𝛿. 
⃒ ⃒ ⃒ ⃒ 𝑓 (𝑥) - 𝑃 𝑘+1 (𝑥) 𝑄 𝑘+1 (𝑥) ⃒ ⃒ ⃒ ⃒ -𝛿 𝑘 - Δ 𝑀 (𝛿 𝑘 -𝛿) 𝑄 𝑘 (𝑥) 𝑄 𝑘+1 (𝑥) , ∀𝑥 ∈ 𝑋 and 𝛿 𝑘+1 - it follows from[START_REF] Cheney | Two new algorithms for rational approximation[END_REF] that there exists a positive integer 𝑘0 such that𝑄 𝑘 (𝑧 𝑘 ) 𝑄 𝑘+1 (𝑧 𝑘 ) 𝑐 |𝑋| , ∀𝑘 𝑘0.(16)Using (15) and (16), we have 𝑄 𝑘 (𝑧 𝑘 ) 𝑄 𝑘+1 (𝑧 𝑘 ) ∏︁ 𝑥∈𝑋,𝑥̸ =𝑧 𝑘 𝑄 𝑘 (𝑥) 𝑄 𝑘+1 (𝑥) 𝑐 |𝑋| (︂ 1 𝑐 )︂ |𝑋|-1 = 𝑐, and ∏︁ 𝑥∈𝑋 𝑄 𝑘+1 (𝑥) 𝑄 𝑘 (𝑥) 1 𝑐 2. implies that lim 𝑘→∞ 𝑄 𝑘 (𝑧 𝑘 ) 𝑄 𝑘+1 (𝑧 𝑘 ) = 0. (14) From (12) we get 𝑄 𝑘+1 (𝑥) 𝑄 𝑘 (𝑥) Δ 𝑀 (𝛿 𝑘 -𝛿)/𝛿 𝑘 𝑐𝑄 𝑘 (𝑥), ∀𝑥 ∈ 𝑋, (15) provided that 𝑐 Δ 𝑀 (1 -𝛿/𝛿) Δ 𝑀 (𝛿 𝑘 -𝛿)/𝛿 𝑘 . Taking 𝑐 = min {︂ 1 2 , Δ 𝑀 (1 -𝛿/𝛿) }︂ , Acknowledgments This work was partly supported by the FastRelax project of the French Agence Nationale de la Recherche, EPSRC (UK) under grants EP/K034448/1 and EP/P010040/1, the Royal Academy of Engineering and Imagination Technologies.
01681583
en
[ "sde", "sdu.stu", "sdu.envi" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01681583/file/BCON-16-361%20Revised%20draft%20Ait%20hamza%20et%20al.pdf
Mohamed Ait Hamza email: [email protected] Hicham Lakhtar Hafssa Tazi Abdelmajid Moukhli Odile Fossati-Gaschignard Lucie Miche Sevastianos Roussos Zahra Ferji Abdelhamid El Mousadik Thierry Mateille Mohamed Aït Hamza Lucie Miché Sebastianos Roussos Abdelhamid El Mousadik Hassan Boubaker email: [email protected] Diversity of nematophagous fungi in Moroccan olive nurseries: Highlighting prey-predator interactions and efficient strains against root-knot nematodes Keywords: Culture substrate, Ecology, Fungal antagonists, Olive tree, Root-knot nematode published or not. The documents may come L'archive ouverte pluridisciplinaire Introduction The olive tree (Olea europaea L.) is widely distributed throughout the Mediterranean Basin. In Morocco alone, the olive-growing area is estimated at 650,000 ha and produces 120,000 tons of oil and 150,000 tons of table olives per year (Ministry of Agricultural and Marine Fisheries, 2009). Planting material comes from several nurseries distributed throughout the olive-producing areas and olive plantlets are certified free of pathogens (e.g., Verticillium dahlia) and parasites (e.g., plant-parasitic nematodes). Nevertheless, standard health practices are not applied in all nurseries and seasonal and informal nurseries coexist. Several plant-parasitic nematode species (PPN) have been found to be associated with olive trees [START_REF] Castillo | Plant-parasitic nematodes attacking olive trees and their management[END_REF][START_REF] Ali | Plant-parasitic nematodes associated with olive tree (Olea europaea L.) with a focus on the Mediterranean Basin: A review[END_REF][START_REF] Hamza | Plant-parasitic nematodes associated with olive tree in Southern Morocco[END_REF], including root-knot nematodes (RKN, Meloidogyne spp.) that were shown to impact olive growth [START_REF] Afshar | Effects of the root-knot nematodes Meloidogyne incognita and M. javanica on olive plants growth in glasshouse conditions[END_REF]. RKN have been reported to be major pests on olive trees, mainly in nurseries where irrigation conditions are favorable to their multiplication [START_REF] Nico | Incidence and population density of plant-parasitic nematodes associated with olive planting stocks at nurseries in southern Spain[END_REF][START_REF] Sanei | Incidence of plant-parasitic nematodes associated with olive planting stocks at nurseries in northern Iran[END_REF]. Moreover, most RKN infestations in olive orchards result from contaminated plant material produced in uncertified nurseries [START_REF] Nico | Incidence and population density of plant-parasitic nematodes associated with olive planting stocks at nurseries in southern Spain[END_REF]. To protect nurseries, vegetative propagation of olive is a crucial step to prevent plant pest dispersal. While some industrial olive-producing countries use synthetic substrates to cultivate olive trees, other countries use more traditional olive cultivation techniques, planting roots in soil substrates. In countries that have implemented large-scale planting programs (e.g., 1,220,000 ha in the Morocco Green Plan, 2015), the "nursery risk" is a concern, with two main consequences: (i) the weakening of olive plants before their transplantation into orchards; and (ii) the introduction of pathogens into orchards. 
The nematode risk is now controlled in nurseries, either by multiplication of resistant varieties [START_REF] Palomares-Rius | Nematode community populations in the rhizosphere of cultivated olive differs according to the plant genotype[END_REF] or by inoculating microbial antagonists in the substrates, such as mycorrhizae [START_REF] Castillo | Protection of olive planting stocks against parasitism of root-knot nematodes by arbuscular mycorrhizal fungi[END_REF]. Thus, in view of the large-scale development of olive cultivation in Morocco, sanitization of the substrates must be a priority. Some previous studies highlighted that certain substrates can be suppressive depending on their composition, either because of microbial antagonisms or of toxicity. For example, the addition of forest residues provides organic matter that could contribute to PPN suppression by increasing soil microflora [START_REF] Rodriguez-Kabana | Biological control of nematodes: soil amendments and microbial antagonists[END_REF]. Composted dry cork, especially in nurseries, could be effective for M. incognita suppression due to the toxicity of the products released (e.g., ammonia, phenolic compounds) [START_REF] Nico | Control of root-knot nematodes by composted agro-industrial wastes in potting mixtures[END_REF]. Moreover, the introduction of microbial antagonists into substrates would strengthen the sustainability of preventive techniques and help introduce biocontrol agents into orchards. Microbial antagonists of PPN include nematophagous fungi (NF), raising expectations for their use in integrated pest management [START_REF] Waller | From discovery to development: current industry perspectives fo the development on novel methods of helminth control in livestock[END_REF][START_REF] Larsen | Biological control of nematodes parasites in sheep[END_REF][START_REF] Maingi | Control of gastrointestinal nematodes in goats on pastures in South Africa using nematophagous fungi Duddingtonia flagrans and selective anthelmintic treatments[END_REF]. These fungi act by antibiosis, parasitism or predation [START_REF] Imerglik | Recherches préliminaires sur la spécificité du piégage des nématodes par des hyphomycètes prédateurs[END_REF][START_REF] Gaspard | Nematophagous fungi associated with Tylenchulus semipenetrans and the citrus rhizosphere[END_REF]. Fungi trap nematodes using adhesive networks or buttons. Some NF capture their prey with constrictor rings before strangling them and others with hyphae networks that produce sticky compounds. The specific enzymes then digest the nematodes. Recognition and attachment of the mycelium to the nematode cuticle are mainly due to compatible glycoproteins such as lectins [START_REF] Nordbring-Hertz | Nematophagous fungi[END_REF]. Nevertheless, edaphic factors such as soil pH, temperature, moisture and structure influence their efficiency [START_REF] Brown | Principles and practice of nematode control in crops[END_REF]. Fungal diversity can increase in stressed or polluted sites [START_REF] Korol | Recombination variability and evolution[END_REF] in order to adapt to changing environments [START_REF] West | The fourth dimension of life: fractal geometry and allometric scaling of organisms[END_REF]. In Arthrobotrys oligospora, genetic diversity varies according to environmental conditions. 
Their strains could generate recombinant genotypes by crossing with native strains, thus enhancing their environmental adaptability and parasitizing ability [START_REF] Zhang | Genetic diversity and recombination in natural populations of the nematode-trapping fungus Arthrobotrys oligospora from China[END_REF]. However, the reproductive strategies of fungi like A. oligospora can change depending on the population. According to [START_REF] Cook | Making greater use of introduced microorganisms for biological control of plant pathogens[END_REF], the search for microorganisms from the rhizospheric soil of a specific crop could lead to the isolation of effective antagonists against pathogens that could be adapted to the plant species as well as to particular environmental conditions. Considering that olive seedlings in Morocco are generally cultivated in non-sanitized substrates consisting of soil material from various habitat origins, the recovery of native NF from substrates may be of great interest in order to develop RKN biocontrol agents adapted to nursery conditions. In this context, this work aims to (i) evaluate the nematode populations in substrates from olive nurseries in Morocco; (ii) isolate and characterize NF able to control RKN (Meloidogyne spp.); (iii) determine the occurrence and diversity of nematode-associated NF and discuss prey-predator interactions; and (iv) investigate their in vitro predatory potential towards M. javanica juveniles and eggs, and their usefulness as biocontrol agents for olive protection in nurseries and in orchards. Materials and methods Site description and olive plantlet sampling Soil samples were collected in spring 2013 and 2014 from 25 commercial olive nurseries located in the main olive-producing areas in Morocco (Fig. 1 and Table 1): the Jbala, Guerouane, Haouz and Souss regions. The nurseries were selected for their plantlet production, the cultivars grown and the rearing substrates used. In each nursery and for each variety, five olive plantlets (Olea europaea subsp. europaea) growing in plastic bags were sampled. Information about the origin and the preparation of the growth substrates and about the cultivars was recorded. A total of 305 olive plantlets were collected and maintained in the laboratory and kept under greenhouse conditions (12 h light at 25 °C; 12 h dark at 20°C). Nematode extraction and quantification A 250-cm 3 substrate subsample was removed from the rhizosphere of each olive plantlet and used for nematode extraction using the [START_REF] Oostenbrink | Estimating nematode populations by some selected methods[END_REF] elutriation procedure (ISO 23611-4). Free-living nematodes (FLN) and plant-parasitic nematodes (PPN), which specifically exhibit a stylet for plant-cell feeding, were enumerated in 5-cm 3 counting chambers [START_REF] Merny | Les techniques d'échantillonnage des peuplements de nématodes dans le sol[END_REF] under a stereomicroscope (×60 magnification). Nematode population levels were expressed as the number of individuals per dm 3 of fresh substrate. Among PPN, rootknot nematodes belonging to the Meloidogyne genus [START_REF] Mai | Plant-parasitic nematodes: a pictorial key to genera, 5th edn[END_REF] were counted. 
Isolation of nematophagous fungi NF were isolated from solid substrate samples infested with RKN using the soil sprinkling technique [START_REF] Duddington | Notes on the technique of handling predacious fungi[END_REF] as modified by [START_REF] Santos | Detection and ecology of nematophagous fungi from Brazil soils[END_REF]. The direct soil powdering on media was preferable to the modified Baermann method (aqueous soil suspensions) [START_REF] Hernández-Chavarría | A simple modification of the Baermann method for diagnosis of strongyloidiasis[END_REF] because it provided more concentrated material for fungal isolation. Moreover, isolation success was increased by sprinkling cold soil (stored at 4°C) on culture medium at 37 to 40°C [START_REF] Davet | Detection and isolation of soil fungi[END_REF]. Substrate aliquots from each olive plantlet were spread on a tray to be air-dried. One gram was then sprinkled on the surface of Petri dishes containing water-agar (WA 2% w/v) supplemented with antibiotics (0.05% streptomycin-sulphate and 0.05% chloramphenicol). Three replicates were done per olive plantlet. A 1-mL suspension containing approximately 3,000 M. javanica second-stage juveniles (J2) and 10 eggs produced in the laboratory was added as fungal bait, according to the procedure used by [START_REF] Drechsler | Some hyphomycetes parasitic on free-living terricokms nematodes[END_REF] Morphological and molecular characterization of the fungi The initial identification of the fungi was based on colony morphology and microscopic characteristics. Slide sub-cultures from pure NF cultures were observed under a dissecting microscope (up to x100 magnification). Genera and species were assigned according to specialized morpho-taxonomical keys [START_REF] Cooke | Nematode-trapping species of Dactylella and Monacrosporium[END_REF][START_REF] Haard | Taxonomic studies on the genus Arthrobotrys Corda[END_REF][START_REF] Barron | The nematode-destroying fungi: Canadian Biological Publications Ltd[END_REF][START_REF] Yu | Taxonomy of nematode-trapping fungi from Orbiliaceae, Ascomycota[END_REF][START_REF] Philip | Nematophagous fungi: Guide by Philip Jacobs, BRICVersion online[END_REF]. Sequence analyses of the ITS (internal transcribed spacer) region in the ribosomal RNA gene cluster were performed to confirm the identity of the NF species. The DNA was extracted from 50-200 mg of mycelium (fresh weight) using the NucleoSpin ® Plant II Genomic DNA Purification Kit (Promega ® ) according to the manufacturer's instructions. PCR reaction was performed according to the method described by [START_REF] White | Amplification and direct sequencing of fungal ribosomal RNA genes for phylogenetics[END_REF]: the ITS rDNA gene cluster was amplified using the primers ITS1 (5´TCC GTA GGT GAA CCT GCG G 3´) and ITS4 (5´TCC TCC GCT TAT TGA TAT GC 3´). The PCR amplification was carried out using the GeneAmpR PCR System 9700 (Applied Biosystems ® ). Twenty µl of reaction mixture contained 2 µl (10 ng) of template DNA, 1 µl of each ITS1 and ITS4 primer (10 mM), 4 µl PCR buffer, 2.4 µl MgCl2 (25 mM), 0.6 µl dNTPs (10 mM), 0.1 µl BSA (0.1 mg/ml) and 0.2 units of GoTaq ® DNA polymerase. PCR cycling conditions consisted of an initial denaturation step at 94°C for 3 min, 35 cycles of denaturation at 94°C for 30 sec, annealing at 55°C for 30 sec and elongation at 72°C for 10 min. PCR products were checked for length, quality and quantity by agarose gel electrophoresis (1% (w/v) in 0.5x Tris-Acetate-EDTA (TAE). 
PCR products were sequenced from both ends by Eurofins MWG GmbH (Ebersberg, Germany), using the same ITS primers. CHROMAS LITE v2.1.1 (Technelysium Pty Ltd.) software was used to edit and assemble DNA sequences. BLAST similarity searches were performed in the non-redundant nucleotide database of GenBank [START_REF] Altschul | Gapped BLAST and PSI-BLAST: a new generation of protein database search programs[END_REF] to identify/verify species or genus affiliation of collected isolates. Sequences were aligned with ITS sequences of reference strains obtained from GenBank. Subsequently, the alignment was used to perform the phylogenetic tree with PHYLOGENY.FR [START_REF] Dereeper | Phylogeny. fr: robust phylogenetic analysis for the non-specialist[END_REF] and Mega 6 [START_REF] Tamura | MEGA4: molecular evolutionary genetics analysis (MEGA) software version 4.0[END_REF] using the neighbor-joining method [START_REF] Saitou | The neighbor-joining method: a new method for reconstructing phylogenetic trees[END_REF] and the Jukes-Cantor correct distance model [START_REF] Jukes | Evolution of protein molecules[END_REF]. The phylogenetic tree was obtained from data using one of three equally parsimonious trees through 1,000 bootstrap replicates [START_REF] Felsenstein | Confidence intervals on phylogenetics:an approach using bootstrap[END_REF] with a heuristic search consisting of 10 random-addition replicates for each bootstrap replicate. Diversity of the fungal communities Four diversity indices were calculated to assess NF communities: total number of isolates found in each soil sample (N); richness (S = number of species in the community); Shannon-Wiener local diversity index (H'=-Σ (pi.lnpi), where pi is the proportion of isolates with the species i); and evenness (E = H'/ln(S)), which quantifies the numerical equality of populations in communities. Pathogenicity analyses An olive population of the RKN M. javanica (detected in 72% of the nurseries surveyed and dominant in orchards) [START_REF] Ali | Trend to explain the distribution of root-knot nematodes Meloidogyne spp. associated with olive trees in Morocco[END_REF] was reared on RKN-susceptible tomato (cv. Roma) in a greenhouse (12 h light at 25°C; 12 h dark at 20°C). Fungal strains that exhibited trapping, adhesive or encysting organs were sub-cultured on WA in 9-cm-diameter Petri dishes. One week later, 100 second-stage juveniles (J2) of M. javanica (the only free form in the soil) were washed five times with 0.05% streptomycin-sulfate in sterilized distilled water, introduced into each fungal sub-culture and maintained at 25°C in darkness. Three Petri dishes per fungal strain were considered as replicates. Fungal predation structures were observed and predated/dead nematodes were counted after four days under a microscope (x100 magnification). Similar procedures were used to study predation of M. javanica eggs by specialized egg parasites such as Paecilomyces and Pochonia fungi. Dishes with nematodes but without fungi were considered as control replicates. Statistical analyses Mean values were analyzed by one-way ANOVA and Kruskall-Wallis tests were used for all pair-wise multiple comparisons. NF community patterns were explored through a Principal Component Analysis (PCA) of the diversity indices. Region grouping was tested using Monte-Carlo tests on PCA eigenvalues (randtest, ade4). 
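The diversity indices defined above (N, S, H' and E) are straightforward to reproduce. As a purely illustrative sketch (the study's own computations were done in R, as noted below; the Python function name and the example counts are ours), they could be computed as:

```python
# Minimal sketch of the diversity indices used above (N, S, Shannon H', evenness E),
# computed from per-species isolate counts for one community sample.
import math

def diversity_indices(counts):
    """counts: number of isolates per species in one sample."""
    counts = [c for c in counts if c > 0]
    n_total = sum(counts)                     # N, total number of isolates
    s = len(counts)                           # S, species richness
    h = -sum((c / n_total) * math.log(c / n_total) for c in counts)   # H'
    e = h / math.log(s) if s > 1 else 0.0     # E = H'/ln(S)
    return n_total, s, h, e

# Hypothetical sample dominated by one species (e.g. a P. lilacinus-rich substrate):
print(diversity_indices([36, 5, 3, 2, 1, 1]))
```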
Calculations were performed and graphics prepared using R language (readxl, base and ade4 packages) (R Development Core Team, 2011;[START_REF] Chessel | The ade4 package. I. One-table methods[END_REF][START_REF] Dray | The ade4 package: implementing the duality diagram for ecologists[END_REF]Dufour, 2007, Wickham, 2016), with a level of significance = 0.05). A rarefaction regression was used to analyze the dependence between root-knot nematodes (Meloidogyne spp.) and nematophagous fungi. Results Nematofauna A non-significant gradient was present in plant-parasitic-nematodes (PPN), with a north-south increase of the population levels (Table 2). A parallel significant gradient was revealed in root-knot nematodes (RKN, Meloidogyne spp.). Free-living nematodes (FLN) were three times more abundant in the Souss olive nurseries than in the other regions. The ratios between FLN and PPN were different between all regions, the highest ratio being found in the Souss region. Nematophagous species and phylogenetic diversity Several fungal isolates were recovered from the 305 soil samples examined. Observation of characteristic conidia and traps around dead M. javanica juveniles revealed 149 soil samples positive for NF. Morphological identification using microculture techniques revealed 73 NF strains belonging to 11 genera. In order to confirm the characterization of the fungal strains, the ITS regions of rDNA were sequenced. Five species (Catenaria anguillulae, Nematoctonus leiosporus, Haptoglossa heterospora, Dactylaria sp. and Monacrosporium microphoides) were excluded from the sequencing because it was not possible to purify them. The BLAST test showed that the ITS sequences of all sequenced strains were at least 99% similar to the corresponding GenBank reference sequences (Table 3). The phylogenetic analysis including ITS sequences of the Moroccan NF isolates and 16 reference sequences of identified close relatives revealed five distinct clusters (Figs. 2 A-E Diversity patterns Because of their scarcity (only one nursery surveyed), the substrate samples from the Jbala region were excluded from the dataset prior to running analyses. Richness (S) and local diversity (H') were correlated to the PC1 axis, while the PC2 axis was related to numbers (N) and to evenness (E) of fungal isolates on its positive and negative sides, respectively (Fig. 3A). Region grouping was significant in the whole analysis and on the PC2 axis (Fig. 3B). Isolates were more numerous in the Souss region, whereas fungal communities were more numerically alike in the Guerouane region. The Haouz region had lower NF richness and diversity (non-significant PC1 coordinates). Indices were affected by the north-south distribution of the nurseries (Figs. 3C-3E-3D). The percentage of samples with NF was lower in the Haouz region than elsewhere (Table 5). Both trapping and endoparasitic fungi occurred more often in the Souss region than in the others regions, and no endoparasitic species were found in the Jbala region. The rarefaction regression established between the occurrence of RKN (Meloidogyne spp.) and the occurrence of NF (Fig. 4) indicated a significant positive correlation, regardless of the region sampled. 3.4. In vitro efficiency of the nematophagous strains NF were distinguished according to their ability to kill M. javanica J2s (Fig. 5). Talaromyces assiutensis killed all juveniles in all replicates. The Orbiliaceae species (Arthrobotrys spp., Dreschlerella spp., Monacrosporium spp.) were efficient against M. 
javanica since they trap 50 to 80% of the J2s using adhesives networks, buttons, constricting rings and hyphae networks. Paecilomyces and Trichoderma strains killed 30 to 50% of the J2s. Fusarium oxysporum strains were less efficient (less than 20% of dead J2s). P. lilacinus and P. chlamydosporia strains infected all the M. javanica eggs. Discussion Our first objective was to evaluate nematode populations in substrates from olive nurseries. In all regions of Morocco, PPN abundance was greater than 1.4 nematodes/dm 3 of soil and FLN abundance greater than 1.9. As a comparison, [START_REF] Hamza | Plant-parasitic nematodes associated with olive tree in Southern Morocco[END_REF] found 0.2 to 5.1 PPN/dm 3 of soil and 0.3 to 4.3 FLN/dm 3 of soil in 23 Souss and Haouz orchards. It can thus be hypothesized that the multiplication of nematode populations may be boosted by acidity and by hydrophilic and non-degraded organic matter [START_REF] Neher | Nematode communities in soils of four farm cropping management systems[END_REF][START_REF] Manlay | Relationships between abiotic and biotic soil properties during fallow periods in the sudanian zone of Senegal[END_REF][START_REF] Ou | Vertical distribution of soil nematodes under different land use types in an aquic brown soil[END_REF][START_REF] Mcsorley | Effect of disturbances on trophic groups in soil nematode assemblages[END_REF]. The usual dominance of FLN in olive soils could be due to the origin of organic substrates (mountain soils, peat and manure) used in most nurseries [START_REF] Castillo | Protection of olive planting stocks against parasitism of root-knot nematodes by arbuscular mycorrhizal fungi[END_REF]. FLN are known to dominate in soil substrates not yet used for agriculture [START_REF] Hillocks | Associations between soilborne pathogens and other soil-inhabiting microorganisms[END_REF]. The FLN dominance in the Souss region, one of the most intensively cultured areas in Morocco, was thus unexpected. The FLN/PPN ratios were balanced in all regions except in the Souss where most of the soil substrates come from cropped areas (vegetables and citrus fruits). Because of the sandy texture of the soils in the Souss region, culture practices include high amounts of organic matter (especially cattle manure or tomato leaf compost), leading to an increase in FLN populations and to a decrease in PPN populations because organic matter is unsuitable for PPN [START_REF] Clark | Agronomic, economic, and environmental comparison of pest management in conventional and alternative tomato and corn systems in northern California[END_REF][START_REF] Hominick | Nematodes[END_REF][START_REF] Hu | Abundance and diversity of soil nematodes as influenced by different types of organic manure[END_REF]. These mechanisms may explain the FLN/PPN ratio that is twice as high in the Souss region compared to the other regions The high percentages of RKN (Meloidogyne spp.) in Souss olive nurseries may be justified by the dominance throughout the area of vegetable crops that are highly susceptible to these nematodes [START_REF] Sikora | Plant parasitic nematodes in subtropical and tropical agriculture[END_REF][START_REF] Netscher | Les nématodes parasites des cultures maraîchères au Sénégal[END_REF]. Our second objective was to isolate and characterize nematophagous fungi able to control RKN (Meloidogyne spp.). 
The success of fungal strain isolation from soils may be correlated with soil temperature, moisture and organic matter content [START_REF] Akhtar | Roles of organic soil amendments and soil organisms in the biological control of plant-parasitic nematodes: a review[END_REF][START_REF] Cayrol | La lutte biologique contre les nématodes phytoparasites[END_REF]. The greatest difficulty encountered was in isolating pure NF strains because of the rapid growth of plant-pathogenic fungi and saprophytes such as Fusarium, Alternaria and Aspergillus species. An increased pH of culture media may prevent the development of other microorganisms [START_REF] Gardner | Production of chlamydospores of the nematode-trapping Duddingtonia flagrans in shake flask culture[END_REF]. Moreover, we found that direct cold soil powdering on hot media (37 to 40°C), like in [START_REF] Kelly | Screening for the presence of nematophagous fungi collected from Irish sheep pastures[END_REF], was preferable to aqueous soil suspensions because it provided more concentrated fungal material [START_REF] Davet | Detection and isolation of soil fungi[END_REF][START_REF] Hernández-Chavarría | A simple modification of the Baermann method for diagnosis of strongyloidiasis[END_REF]. Dispersing agents were avoided because they inhibited the growth of some NF strains [START_REF] Davet | Detection and isolation of soil fungi[END_REF]. This survey provided native NF from Morocco for the first time. The large number of soil samples (305) allowed the detection of numerous NF strains, whereas former studies revealed one strain at best [START_REF] Bridge | Soil fungi: diversity and detection[END_REF]. The strains detected as nematophagous possessed different modes of action: adhesive networks, constricting rings, hyphal tips, adhesive conidia and mycotoxins [START_REF] Imerglik | Recherches préliminaires sur la spécificité du piégage des nématodes par des hyphomycètes prédateurs[END_REF][START_REF] Gaspard | Nematophagous fungi associated with Tylenchulus semipenetrans and the citrus rhizosphere[END_REF]. Despite this, ITS rDNA sequences could not be used to identify some NF strains due to their small size (Pochonia chlamydosporia, for example), BLAST tests were useful to confirm the morphological characterization of other strains. The integrated taxonomical analysis (morphological and molecular) of the NF provided a pool of 28 strains belonging to 19 species. The phylogenic analysis revealed that the ITS rDNA gene was able to distinguish species in a genus group (such as Trichoderma) but did not fully discriminate species belonging to the family Orbiliaceae, indicating the taxonomic proximity of Dreschlerella, Arthrobotrys and Monacrosporium species. Therefore, an integrative molecular analysis should be developed with other molecular markers [START_REF] White | Amplification and direct sequencing of fungal ribosomal RNA genes for phylogenetics[END_REF] in order to improve NF identification. Our third objective was to determine the occurrence and diversity of nematode-associated NF. All NF species except A. oligospora were detected in the Souss region where they were more dominant than in the other regions. This high diversity of NF might be due to the multiple habitat origins of the components (mountain, riverbank and field soils, cattle manure, plant compost, etc.) used to make substrates for root olive plantlets. 
We hypothesize that the microbial richness detected in the Souss region corresponds to the high endemic plant diversity characteristic of the Macaronesian region [START_REF] Médail | Glacial refugia influence plant diversity patterns in the Mediterranean Basin[END_REF][START_REF] Msanda | Biodiversité et biogéographie de l'arganeraie marocaine[END_REF]. This could also be due to higher organic matter concentration in the soils (i.e., saprophytic substrate for fungi that induce the formation and the activity of trapping structures) and to high PPN levels (i.e., parasitic substrates for fungi) [START_REF] Den Belder | Capture of plant-parasitic nematodes by an adhesive hyphae forming isolate of Arthrobotrys oligospora and some other nematode-trapping fungi[END_REF][START_REF] Singh | Evaluation of biocontrol potential of Arthrobotrys oligospora against Meloidogyne graminicola and Rhizoctonia solani in Rice (Oryza sativa L.)[END_REF]. The number of NF isolates globally increased southwards, whereas evenness decreased (PCA2 axis). It is known that different thermal regimes affect soil microbial diversity [START_REF] Bridge | Soil fungi: diversity and detection[END_REF]. The absence of seasonality and the higher minimum temperatures in the Souss region may explain these developments where irrigation counteracts the arid to semiarid climate. The high occurrence of NF and the high evenness detected in the Guerouane region may be linked to a more continental climate. More than half of the 36 P. lilacinus strains came from the Souss region where the NF richness was the highest but the evenness the lowest. In a restricted soil area, the abundance of P. lilacinus may cause the rarity of the other species. For a constant number of species, maximal diversity is achieved when species have an even distribution. The Haouz, Jbala and Guerouane regions were characterized by less PPN and FLN than in the Souss region, explaining the lower NF richness in those regions. Our fourth objective was to investigate the in vitro predation of the NF against M. javanica juveniles and eggs. Talaromyces assiutensis (strain UIZFSA-31), whose nematode predation was previously unknown, killed all the M. javanica juveniles in four days. The mode of action of T. assiutensis remains unknown but we hypothesize that the strain may produce specific mycotoxins. In vitro trapping tests prove that the Orbiliaceae species (Arthrobotrys spp., Dreschlerella spp., Monacrosporium spp.) were able to trap M. javanica juveniles. The predatory capacity of A. oligospora was similar to data found in the literature [START_REF] Singh | Evaluation of biocontrol potential of Arthrobotrys oligospora against Meloidogyne graminicola and Rhizoctonia solani in Rice (Oryza sativa L.)[END_REF]. The mechanisms involved during predation are well known [START_REF] Imerglik | Recherches préliminaires sur la spécificité du piégage des nématodes par des hyphomycètes prédateurs[END_REF][START_REF] Gaspard | Nematophagous fungi associated with Tylenchulus semipenetrans and the citrus rhizosphere[END_REF]. Recognition and attachment of the mycelium to the cuticle of the RKN juvenile is mainly due to compatible glycoproteins, e.g., lectins [START_REF] Duponnois | Effect of different west african species and strains of Arthrobotrys nematophagous fungi on Meloidogyne species[END_REF]. 
Trichoderma species are recognized as control agents against nematodes, and various mechanisms have been proposed to explain nematode killing, including antibiosis and enzymatic hydrolysis [START_REF] Sivan | Microbial control of plant diseases[END_REF][START_REF] Elad | Biological control of foliar pathogens by means of Trichoderma harzianum and potential modes of action[END_REF]. [START_REF] Thomas | Studies on the parasitism of Globodera rostochiensis by Trichoderma harzianum using low temperature scanning electron microscopy[END_REF] demonstrated direct interactions between T. harzianum and the cyst nematode Globodera rostochiensis: the fungus penetrated eggs in the cysts, leading to the death of the juveniles [START_REF] Sharon | Biological control of the root-knot nematode Meloidogyne javanica by Trichoderma harzianum[END_REF]. Precise information on the mechanisms involved is very limited, and this lack of understanding has hindered the selection of active strains and the development of improved biocontrol methods. Paecilomyces lilacinus and Pochonia chlamydosporia are especially powerful nematode egg parasites [START_REF] Cayrol | Etude préliminaire sur les possibilités d'utilisation des champignons parasites comme agents de lutte biologique[END_REF][START_REF] Irving | Variation between strains of the nematophagous fungus, Verticillium chlamydosporium Goddard. II. Factors affecting parasitism of cyst nematode eggs[END_REF], which could explain the relatively low predation rate obtained on juveniles. These species can also act on the movement of infested nematode juveniles via a paralytic toxin [START_REF] Cayrol | Study of the nematicidal properties of the culture filtrate of the nematophagous fungus Paecilomyces lilacinus[END_REF], purified and identified as acetic acid by [START_REF] Djian | Acetic acid: a selective nematicidal metabolite from culture filtrates of Paecilomyces lilacinus (Thom) Samson and Trichoderma longibrachiatum Rifai[END_REF]. This molecule is abundantly produced during fungal growth in liquid medium (ibid). Fusarium oxysporum strains, well-known plant pathogens, exhibited lower predation rates, but some studies revealed that they are partly able to kill nematodes by producing toxic sulfuric heterocycles (fusarenone and moniliformine) [START_REF] Ciancio | Nematicidal effects of some Fusarium toxins[END_REF][START_REF] Cayrol | La lutte biologique contre les nématodes phytoparasites[END_REF]. Moreover, the significant co-occurrence of RKN and NF highlighted by the rarefaction curve suggests a close interaction between prey (nematodes) and predators (fungi) in olive nurseries, as described by Lotka-Volterra models [START_REF] Barbosa | Ecology of predator-prey interactions[END_REF], probably because of parasitism induction by nematodes [START_REF] Jansson | Interactions between nematophagous fungi and plant-parasitic nematodes: attraction, induction of trap formation and capture[END_REF]. Even though the diversity of the NF detected in the four regions was variable at species and population levels, we may expect symmetric dynamics [START_REF] Marrow | Evolutionary instability in predator-prey systems[END_REF] due to possible co-evolutionary processes between nematodes and fungi. Such processes occur between competing organisms in an ever-changing environment, as described by the "Red Queen" hypothesis [START_REF] Van Valen | A new evolutionary law[END_REF][START_REF] Dawkins | Arms races between and within species[END_REF].
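Where the discussion above invokes Lotka-Volterra prey-predator models for the nematode-fungus co-occurrence, a minimal numerical sketch of such dynamics is given below; the parameter values, time step and initial populations are arbitrary illustrations on our part, not values fitted to the nursery data.

```python
# Minimal Lotka-Volterra sketch of coupled nematode (prey) / nematophagous
# fungus (predator) dynamics.  All parameter values are arbitrary illustrations.
import numpy as np

def lotka_volterra(n0, f0, alpha, beta, delta, gamma, dt=0.01, steps=20000):
    """Euler integration of dN/dt = alpha*N - beta*N*F and dF/dt = delta*N*F - gamma*F."""
    n, f = n0, f0
    traj = []
    for _ in range(steps):
        dn = (alpha * n - beta * n * f) * dt
        df = (delta * n * f - gamma * f) * dt
        n, f = n + dn, f + df
        traj.append((n, f))
    return np.array(traj)

traj = lotka_volterra(n0=100.0, f0=5.0, alpha=0.8, beta=0.05, delta=0.002, gamma=0.4)
print(traj[-1])   # final (nematode, fungus) state; the two populations cycle rather than settle
```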
Conclusion Olive nursery solid substrates are infested with PPN, including Meloidogyne species. Various predatory fungi were able to kill RKN. Consequently, before selecting NF strains as candidates for biocontrol, studies must be extended to a wider range of Meloidogyne species and populations present on olive trees in Morocco in order to verify their specificity. Indigenous NF strains were recovered from different substrates (different habitats), making it possible to undertake more research in order to understand the specificity of prey-predator interactions with more diverse PPN species. Predation efficiency in different cropping systems and in varied soil environments should also be explored in the future. Legends for figures and tables Figure 1 Distribution of the olive nurseries surveyed in Morocco. See Table 1 for more information. Olive plants are grown in 2-to-3 liter plastic bags filled with solid substrates from different sources (alluvial sandy soils, forest soils, loamy open-field soils) supplemented with different proportions of sand, peat fertilizer and animal manure. Plants are first grown in plastic greenhouses and then outside. They are watered by sprinklers and fertilized with Osmocote® (Everris Company™). ): Purpureocillium lilacinum, Trichoderma, Fusarium oxysporum, Talaromyces and Arthrobotrys-Dreschslerella-Monacrosporium (family Orbiliaceae). The combination of the morphological and the molecular analyses established that the isolated NF belonged to 19 species, eight families and six orders ( Figure 2 Figure 3 23 Figure2Neighbor-joining tree inferred from ITS rDNA sequences of the nematophagous Figure 4 4 Figure 4 Rarefaction curve for root-knot nematodes (Meloidogyne spp.) and nematophagous Figure 5 . 5 Figure 5. In vitro efficiency of nematophagous fungal strains: percentage of dead M. javanica Figure 2 .Figure 3 .Figure 4 . 234 Figure 2. Neighbor-joining tree inferred from ITS rDNA sequences of the nematophagous Figure 5 . 5 Figure 5. In vitro efficiency of nematophagous fungal stains: percentage of dead M. javanica Table 4 4 Paecilomyces lilacinusthat represented 36 strains (50% of the strains). Half of the P. lilacinus strains detected were isolated from the Souss region, and then gradually decreased northwards (12 in Haouz, four in Guerouane, two in Jbala). Arthrobotrys brochopaga, A. scaphoides, Monacrosporium thaumasium, P. lilacinus, F. oxysporum, T. harzianum, T. asperellum and T. longibrachiatum were encountered in the four regions. A. oligospora, Dactylaria sp., Haptoglossa heterospora, Monacrosporium microscaphoides, Nematoctunus leiosporus and Talaromyces assiutensis were very rare (one isolate in one region). All the fungal species except A. oligospora were detected in the Souss region. Only eight species were detected in the Jbala region. The Guerouane and Haouz regions hosted 11 and 12 species, respectively. ). Arthrobotrys was the most diversified genus with five species, followed by Trichoderma (three species). The abundance of fungal species was low (less than five strains per species) except for Table 1 1 Location and characteristics of the Moroccan olive nurseries surveyed. Number of samples for each cultivar in each geographic region. Table 2 2 Average density (number of nematodes/dm 3 of soil) of plant-parasitic nematodes (PPN) and free-living nematodes (FLN), percentages of samples infested with root-knot nematodes (RKN) and FLN/PPN ratios (a-d indicate significant groups, P < 0.05). 
Table 3 3 BLAST results of ITS rDNA sequences of the nematophagous fungi isolated. Table 4 4 Nematophagous fungi associated with the Moroccan olive nurseries surveyed. Table 5 5 Functional diversity of the nematophagous fungi (a-d indicate significant groups, P < 0.05). 639 Table 1 640 1 Location and characteristics of the Moroccan olive nurseries. Number of samples for each cultivar in each geographic region. 641 Geographic Location Climate City No. of Main habitat origin Olive cultivar No. of samples region nurseries of the substrates Jbala South-west face Sub humid climate, Ouazzane 1 Clay marls, Picholine marocaine 5 of the Rif Mountains temperature from -6° sand, forest soil Haouzia 5 to 32°C and topsoil Menara 5 Guerouane Sais Plateau, between More continental Meknes 4 Yellow sand, Picholine marocaine the Middle Atlas climate, temperature topsoil, Haouzia to the south and the Rif from -10° to 45°C mature manure Menara Mountains to the north and local Arbequina 5 compost Arbosana 5 Picual Picholine Languedoc 5 Haouz Northern slope Semiarid climate, Marrakech 5 Clay marls, Picholine marocaine of the High Atlas average temperatures sand, Arbequina Mountains from -6° to 49.6°C forest soil, Haouzia mountain soil Menara and topsoil Picholine Languedoc Arbosana 5 El Kelaa des Sraghna 3 Forest soil Picholine marocaine and topsoil Picholine Languedoc Menara Haouzia Sidi Abdellah Ghiat 1 Soil, clay and sand Picholine marocaine 5 Souss On the southern slope Arid constant climate, Agadir 8 Sand, topsoil Picholine marocaine of the High Atlas sunshine > 340 days a and peat moss Haouzia Mountains year, average Menara 5 temperatures from 14° Khmiss Aït Amira 2 Topsoil, peat Picholine marocaine to 25°C and manure Biougra 1 Peat, soil and perlite Menara 5 Table 2 642 2 Average density (number of nematodes/dm 3 of soil) of plant-parasitic nematodes (PPN) and 643 644 free-living nematodes (FLN), percentages of samples infested with root-knot nematodes 645 (RKN) and FLN/PPN ratios (a-d indicate significant groups, P < 0.05). Regions No. of PPN % of samples No. of FLN FLN/PPN (/dm 3 of soil) infested with RKN (/dm 3 of soil) Jbala 1,441 16.0 d 2,220 b 1.54 b Guerouane 1,527 20.0 c 1,914 b 1.25 c Haouz 2,003 58.1 b 2,081 b 1.04 d Souss 2,395 76.4 a 6,194 a 2.59 a P-value 0.153 0.000 0.000 0.000 Table 3 646 3 BLAST results of ITS rDNA sequences of the nematophagous fungi isolated. Distribution of the olive nurseries surveyed in Morocco. See Table1for more information. 647 Strains GenBank reference strains Acknowledgements This research was supported by a Ph.D. grant from the "Institut de Recherche pour le Développement" (Marseille, France). It was also funded by the PESTOLIVE project: Contribution of olive history for the management of soil-borne parasites in the Mediterranean Basin (ARIMNet action KBBE 219262), and by the BIONEMAR project: Development of fungal bionematicides for organic production in Morocco (PHC-Toubkal action 054/SVS/13). Conflict of interest statement We declare that we have no conflict of interest. Table 4 Nematophagous fungi associated with the Moroccan olive nurseries surveyed. Table 5 Functional diversity of the nematophagous fungi (a-d indicate significant groups, P < 0.05). Region
01498180
en
[ "phys" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01498180/file/levitons_fqhe%20%281%29.pdf
Jérôme Rech Dario Ferraro Thibaut Jonckheere Luca Vannucci Maura Sassetti Thierry Martin

Minimal Excitations in the Fractional Quantum Hall Regime

Keywords: PACS numbers: 73.23.-b, 42.50.-p, 71.10.Pm, 72.70.+m, 73.43.Cd

Because of its potential application to quantum information processing, time-dependent quantum transport in open coherent nanostructures attracts prodigious attention. Recent years have seen the emergence of several attempts to manipulate elementary charges in quantum conductors [1][2][3][4]. This opened the way to the field of electron quantum optics (EQO) [5], characterized by the preparation, manipulation and measurement of single-particle excitations in ballistic conductors. In this context, levitons -the time-resolved minimal excitation states of a Fermi sea -were recently created and detected in a two-dimensional electron gas [4,6], 20 years after being theoretically proposed [7][8][9]. These many-body states are characterized by a single particle excited above the Fermi level, devoid of accompanying particle-hole pairs [10]. The generation of levitons via voltage pulses does not require delicate circuitry and has thus been put forward as a solid candidate for quantum bit applications, in particular the realization of electron flying qubits [11,12]. Interaction and quantum fluctuations strongly affect low-dimensional systems, leading to dramatic effects like spin-charge separation and fractionalization [13][14][START_REF] Giamarchi | Quantum Physics in One Dimension[END_REF]. These remarkable features were investigated by looking at both time-resolved current [16][17][18] and noise measurements [19][20][21][22][23]. While the emergence of many-body physics and the inclusion of interactions [24][25][26][27] were recently addressed in the framework of EQO, a conceptual gap still remains when it comes to generating minimal excitations. This is particularly true when the ground state is a strongly correlated state, as are the edge channels of a fractional quantum Hall (FQH) system [28], a situation which has remained largely unexplored so far for time-dependent drives [29]. The building blocks of such chiral conductors are no longer electrons but instead anyons, which have a fractional charge and statistics [30]. For Laughlin filling factors [31], these anyons are Abelian quasiparticles, but more exotic situations involving non-Abelian anyons [32] are predicted. Our understanding of these nontrivial objects would benefit from being able to excite only a few anyons at a time [33], allowing us to study their transport and exchange properties, and to combine them through interferometric setups. This calls for the characterization of minimal excitations in the FQH regime. In this letter, we study levitons in the edge channels of the fractional quantum Hall regime by analyzing the partition noise at the output of a quantum point contact (QPC). Our results rely on a dual approach combining perturbative and exact calculations of the noise in a Hanbury-Brown and Twiss (HBT) [34,35] configuration. We also provide results in the time domain, investigating leviton collisions with Hong-Ou-Mandel (HOM) [4,36] interferometry. Consider a FQH bar (see Fig.
1) with Laughlin filling factor ν = 1/(2n + 1) (n ∈ N), described in terms of a hydrodynamical model [37] by the Hamiltonian ( = 1) H = v F 4π dx   µ=R,L (∂ x φ µ ) 2 - 2e √ ν v F V (x, t) ∂ x φ R   , (1) where the bosonic fields φ R,L propagate along the edge with velocity v F and are related to the quasiparticle annihilation operator as ψ R,L (x) = U R,L √ 2πa e ±ik F x e -i √ νφ R,L (x) (with a a cutoff parameter and U a Klein factor), and V (x, t) is an external potential applied to the upper edge at contact 1. Working out the equation of motion for the field φ R , (∂ t + v F ∂ x ) φ R (x, t) = e √ νV (x, t) , one can relate it to the unbiased case using the transformation φ R (x, t) = φ (0) R (x, t) + e √ ν t -∞ dt V (x , t ), (2) with x = x -v F (t -t ), and φ (0) R is the free chi- ral field, φ (0) R (x, t) = φ (0) R (x -vt, 0). Focusing first on the regime of weak backscattering (WB), the tunneling Hamiltonian describing the scattering between counter-propagating edges at the QPC can be written, in terms of the transformed fields, Eq. ( 2), as H T = Γ(t)ψ † R (0)ψ L (0) + H.c., where we introduced Γ(t) = Γ 0 exp ie * t -∞ dt V (t ) [38], with the bare tunneling constant Γ 0 , the fractional charge e * = νe and assuming a voltage V (t) applied over a long contact, in accordance with the experimental setup [4], allowing us to simplify t -∞ dt V (v F (t -t), t ) t -∞ dt V (t ). The applied time-dependent voltage consists of an AC and a DC part V (t) = V dc + V ac (t), where by definition V ac averages to zero over one period T = 2π/Ω. The DC part indicates the amount of charge propagating along the edge due to the drive. The total excited charge Q over one period is then: Q = T 0 dt I(t) = ν e 2 2π T 0 dtV (t) = qe, (3) where the fractional conductance quantum is G 0 = νe 2 /2π and the number of electrons per pulse is q = e * V dc Ω . The AC voltage generates the accumulated phase experienced by the quasiparticles ϕ(t) = e * t -∞ dt V ac (t ), characterized by the Fourier components p l of e -iϕ (t) . In a 1D Fermi liquid, the number of electron-hole excitations resulting from an applied time-dependent voltage bias is connected to the current noise created by the pulse scattering on a QPC [7,9,[START_REF] Lee | Orthogonality catastrophe in a mesoscopic conductor due to a time-dependent flux[END_REF] which acts as a beamsplitter, as in a HBT setup [34,35]. For FQH edge states however, scattering at the QPC is strongly non-linear as it is affected by interactions. Special care is thus needed for the treatment of the point contact, and the definition of the excess noise giving access to the number of excitations. The quantity of interest is the photo-assisted shot noise (PASN), i.e. the zero-frequency current noise measured from contact 3, and defined as S = 2 dτ T 0 d t T δI 3 ( t + τ 2 )δI 3 ( t - τ 2 ) ( 4 ) where δI 3 (t) = I 3 (t)-I 3 (t) and the output current I 3 (t) reduces, since contact 2 is grounded, to the backscattered S 0 = 2 T e * Γ 0 v F 2 Ω Λ 2ν-2 as a function of the number of electron per pulse q, for different reduced temperatures θ and filling factor ν = 1/3, in the case of a square (bottom), a cosine (middle) and a periodic Lorentzian drive with half-width at half-maximum η = W/T = 0.1 (top). current I B (t), readily obtained from the tunnel Hamiltonian I B (t) = ie * Γ(t)ψ † R (0, t)ψ L (0, t) -H.c. . (5) When conditions for minimal excitations are achieved in the perturbative regime, excitations should be transmitted independently, leading to Poissonian noise. 
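For concreteness, the photo-assisted coefficients p_l introduced above can be evaluated numerically for any periodic drive, directly from their definition as Fourier components of e^{-iϕ(t)}. The short sketch below is our own illustration rather than part of the original analysis: it assumes a cosine drive of the form V(t) = V_dc(1 - cos Ωt), a convention consistent (up to an overall sign of p_l, which does not affect P_l = |p_l|²) with the Bessel-function coefficients quoted in the appendix.

```python
import numpy as np
from scipy.special import jv

def photoassisted_probabilities(q, lmax=10, nt=4096):
    """P_l = |p_l|^2 with p_l the Fourier components of exp(-i*phi(t)),
    evaluated here for a cosine drive V(t) = V_dc*(1 - cos(Omega*t))."""
    wt = np.linspace(0.0, 2.0*np.pi, nt, endpoint=False)   # Omega*t over one period
    phi = -q*np.sin(wt)                                    # phi(t) = e* int dt' V_ac(t') for this drive
    ls = np.arange(-lmax, lmax + 1)
    p = np.array([np.mean(np.exp(1j*l*wt - 1j*phi)) for l in ls])
    return ls, np.abs(p)**2

ls, P = photoassisted_probabilities(q=1.0)
print(np.max(np.abs(P - jv(ls, -1.0)**2)))   # ~1e-16: matches the cosine-drive result p_l = J_l(-q)
print(P.sum())                               # probabilities sum to ~1
```

The same routine applies to square or Lorentzian pulse trains once ϕ(t) is tabulated over one period; the resulting P_l are precisely the quantities entering the Poissonian-noise criterion discussed next.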
It is thus natural to characterize minimal excitations as those giving a vanishing excess noise at zero temperature: ∆S = S -2e * I B (t) , (6) where I B (t) is the backscattered current averaged over one period. Using the zero-temperature bosonic correlation func- tion φ R/L (τ )φ R/L (0) c = -log (1 + iΛτ ), this excess noise is computed perturbatively up to order Γ 2 0 , yield- ing [38] ∆S = 2 T e * Γ 0 v F 2 Ω Λ 2ν-2 1 Γ(2ν) × l P l |l + q| 2ν-1 [1 -Sgn (l + q)] , (7) where Λ = v F /a is a high-energy cutoff and P l = |p l | 2 is the probability for a quasiparticle to absorb (l > 0) or emit (l < 0) l photons, which depends on the considered drive [38]. These probabilities P l also depend on q, as the AC and DC components of the voltage are not independent. Indeed, we are interested here in a periodic voltage V (t) consisting of a series of identical pulses, with V (t) close to 0 near the beginning and the end of each period. This implies that the AC amplitude is close to the DC one. Our formalism could also be used to perform a more general analysis by changing these contributions independently. In particular, fixing the DC voltage and changing the AC amplitude allows us to perform a spectroscopy of the probabilities themselves. Conversely, changing the DC voltage at fixed AC amplitudes, we can reconstruct the tunneling rate associated with each photo-assisted process [10] in the same spirit as finite frequency noise calculations [START_REF] Ferraro | [END_REF]. However, this broader phenomenology does not provide any additional information concerning the possibility of creating minimal excitation by applying periodic pulses. In Fig. 2, we show the variation of the excess noise as a function of q, for several external drives at ν = 1/3 and various reduced temperatures θ = k B Θ/Ω (Θ the electronic temperature). At θ = 0, only the periodic Lorentzian drive leads to a vanishing excess noise, and only for integer values of q. This confirms that as in the 1D Fermi liquid, and as mentioned in earlier work [9], optimal pulses have a quantized flux and correspond to Lorentzians of area dtV = m2π/e * (with m an integer number of fractional flux quanta). More intriguingly, however, this vanishing of ∆S occurs for specific values of q: while levitons in the FQH are also minimal excitations, they do not carry a fractional charge and instead correspond to an integer number of electrons. This shows that integer levitons are minimal excitation states even in the presence of strong electron-electron interactions, and that it is not possible to excite individual fractional quasiparticles using a properly quantized Lorentzian voltage pulse in time. Indeed, it is easy to note that, under these conditions, at q = ν (single quasiparticle charge pulse) no specific feature appears in the noise and ∆S = 0. While fractional minimal excitations may exist, they cannot be generated using either Lorentzian, sine or square voltage drives. Close to integer q the behavior of ∆S is strongly asymmetric. While a slightly larger than integer value leads to vanishingly small excess noise, a slightly lower one produces a seemingly diverging contribution. Indeed, exciting less than a full electronic charge produces a strong disturbance of the ground state, and ultimately leads to the generation of infinitely many particle-hole excitations, which is reminiscent of the orthogonality catastrophe [7,[START_REF] Lee | Orthogonality catastrophe in a mesoscopic conductor due to a time-dependent flux[END_REF]41]. 
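The structure of Eq. (7) can be made explicit numerically. The sketch below (again ours, not taken from the paper) evaluates the dimensionless sum Σ_l P_l |l+q|^{2ν-1}[1 - Sgn(l+q)] for a cosine drive, using the coefficients p_l = J_l(-q) quoted in the appendix; the example uses a non-integer q, so that no l = -q term appears and the zero-temperature sum stays finite.

```python
import numpy as np
from scipy.special import jv

def excess_noise_sum(q, nu=1.0/3.0, lmax=50):
    """Dimensionless sum of Eq. (7), sum_l P_l |l+q|^(2nu-1) [1 - Sgn(l+q)],
    i.e. Delta_S in units of (2/T) e* (Gamma_0/v_F)^2 (Omega/Lambda)^(2nu-2) / Gamma(2nu),
    evaluated here for a cosine drive where p_l = J_l(-q)."""
    ls = np.arange(-lmax, lmax + 1)
    P = jv(ls, -q)**2
    # only photo-assisted processes with l + q < 0 contribute at zero temperature
    weight = np.abs(ls + q)**(2.0*nu - 1.0)*(1.0 - np.sign(ls + q))
    return np.sum(P*weight)

print(excess_noise_sum(q=0.5))   # nonzero: a cosine drive always produces extra particle-hole pairs
```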
For comparison with experiments, we compute the excess noise at θ = 0. This calls for a modified definition of ∆S (in order to discard thermal excitations): ∆S = S -2e * I B (t) coth q 2θ , (8) which coincides with Eq. ( 6) in the θ → 0 limit. The finite temperature results (see Fig. 2) cure some inherent limitations of the perturbative treatment at θ = 0 (diverging behavior close to integer q). The noiseless status of the Lorentzian drive is confirmed, as ∆S 0 at low enough temperature for some values of q (yet shifted compared to the θ = 0 ones). Our perturbative analysis is valid when the differential conductance is smaller than G 0 . This condition can be achieved on average (Γ 0 is then bounded from above), but it is not fulfilled in general when the voltage drops near zero because of known divergences at zero temperature. In order to go beyond this WB picture, we now turn to an exact non-perturbative approach for the special filling ν = 1/2. While this case does not correspond to an incompressible quantum Hall state, it nevertheless provides important insights concerning the behavior of physical values of ν beyond the WB regime. The agreement between the two methods in the regime where both are valid makes our results trustworthy. We thus extend the refermionization approach for filling factor ν = 1/2 [42,43] to a generic AC drive [38]. Starting from the full Hamiltonian expressed in terms of bosonic fields, one can now write the tunneling contribution introducing a new fermionic entity, ψ(x, t) ∝ e i(φ R (x,t)+φ L (x,t))/ √ 2 . Solving the equation of motion for ψ(x, t) near x = 0, one can define a relation between this new field taken before (ψ b ) and after (ψ a ) the QPC ψ a (t) = ψ b (t) -γΩe iϕ(t)+iqΩt t -∞ dt e -γΩ(t-t ) × e -iϕ(t )-iqΩt ψ b (t ) -H.c. , (9) allowing us to treat the scattering at the QPC at all orders. Expressing the current and noise in terms of ψ a and ψ b , and using the standard correlation function ψ † b (t)ψ b (t ) = dω 2πv F e iω(t-t ) f (ω) (with f the Fermi function), we derive an exact solution for both the backscattered current and PASN. As the DC noise at a QPC does not remain Poissonian when its transmission increases, our definition of ∆S is further extended to treat the non-perturbative regime. In the ν = 1 case, where an exact solution exists, it is standard to compare the PASN to its equivalent DC counterpart [4,9] obtained with the same V dc , and V ac = 0. Here, in order to account for the nontrivial physics involved at the QPC in the FQH, it makes sense to compare our PASN to the DC noise which one obtains for the same charge transferred at the QPC, over one period of the AC drive. At zero temperature, ∆S is redefined as: ∆S = S -2e * I B + (e * ) 2 T 2γ sin T γe * I B , (10) where e * = νe = e/2, and γ = |Γ0| 2 πav F Ω is the dimensionless tunneling parameter. This definition coincides with the Poissonian one Eq. ( 6) at low γ, in that it vanishes for the same values of q. Results for ∆S at ν = 1/2 are presented in Fig. 3. At low γ, structures appear as a function of q, which are very similar to the perturbative calculation (Fig. 2). For the Lorentzian drive only, the excess noise approaches zero close to integer values of q in the tunneling regime γ 1. When increasing γ the position of these minima gets shifted and the excess noise eventually becomes featureless, independently of the AC drive. 
In the γ → +∞ limit (not shown) the Lorentzian drive shows signatures of Poissonian electron tunneling at the QPC occurring at q multiples of ν, consistent with the duality property of the FQH regime [42]. This Poissonian behavior, not observed for other drives, is also confirmed by the strong backscattering perturbative treatment. At finite temperature, our results are almost unaffected for θ γ, while larger temperatures tend to smear any variations in q. Levitons can also be explored in the time domain through electronic Hong-Ou-Mandel (HOM) interferometry [4,6]. By driving both incoming channels, one can study the collision of synchronized excitations onto a beam-splitter, as two-particle interferences reduce the current noise at the output, leading to a Pauli-like dip. Fig. 4 shows the normalized HOM noise ∆Q [24] as a function of the time delay τ between applied drives. While this does not constitute a diagnosis for minimal excitations, it reveals the special nature of levitons in the WB regime, as the normalized HOM noise is independent of temperature and filling factor [38,44], reducing at q = 1 to the universal form: ∆Q(τ ) = sin 2 πτ T sin 2 πτ T + sinh 2 (2πη) . ( 11 ) The same universal behavior is also obtained for fractional q = ν in the strong backscattering regime (tunneling of electrons at the QPC). Interestingly, although the HOM noise and the PASN are very different from their Fermi liquid counterparts, an identical expression for ∆Q(τ ) was also obtained in this case [10] (where it is viewed as the overlap of leviton wavepackets). Finally, in addition to the excess noise, the timeaveraged backscattering current I B (t) also bears peculiar features. In contrast to the Ohmic behavior observed in the Fermi liquid case, I B (t) shows large dips for integer values of q (see Fig. 5). These dips are present for all types of periodic drives, and cannot be used to detect minimal excitations. However, the spacing between these dips provides an alternative diagnosis (from DC shot noise [19,20]) to access the fractional charge e * of Laughlin quasiparticles, as q is known from the drive frequency and the amplitude V dc [45]. Real-time quasiparticle wave packet emission has thus been studied in a strongly correlated system, showing the existence of minimal excitations (levitons) in edge states of the FQH. These occur when applying a periodic Lorentzian drive with quantized flux, and can be detected as they produce Poissonian noise at the output of a Hanbury-Brown and Twiss setup in the weak backscattering regime. Although FQH quasiparticles typically carry a fractional charge, the charge of these noiseless excitations generated through Lorentzian voltage pulse corresponds to an integer number of e. Furthermore, our findings are confirmed for arbitrary tunneling using an exact refermionization scheme. Remarkably enough, in spite of the strong interaction, two FQH leviton collisions bear a universal Hong-Ou-Mandel signature identical to their Fermi liquid analog. Possible extensions of this work could address more involved interferometry of minimal excitations as well as their generalization to non-Abelian states. Note added in proofs: During the completion of this work, it came to our attention that a simple argument can rule out the possibility of minimal excitations beyond the results presented here. Starting from Eq. ( 7), one readily sees that a minimal excitation (∆S = 0) can only be realized if P l = 0 for all l ≤ -q, independently of the filling factor. 
At ν = 1, it was shown [7][8][9] that minimal excitations were associated with quantized Lorentzian pulses, so that this type of drive is the only one satisfying the constraint of vanishing P l . Since this condition is independent of ν, it follows that also at fractional filling, minimal excitations can only be generated using Lorentzian drives with quantized charge q ∈ Z. quasiparticle annihilation operators ψ µ (x, t) (µ = R, L) through the bosonization identity ψ R/L (x, t) = U R/L √ 2πa e ±ik F x e -i √ νφ R/L (x,t) , ( B4 ) where a is a short distance cutoff and U µ are Klein factors. Focusing first on the case where no QPC is present, one can derive the following equations of motion for the bosonic fields (∂ t + v F ∂ x ) φ R (x, t) = e √ νV (x, t) (B5) (∂ t + v F ∂ x ) φ L (x, t) = 0 (B6) It follows that the effect of the external voltage bias can be accounted for by a rescaling of the right-moving bosonic field φ R (x, t) = φ (0) R (x, t) + e √ ν t -∞ dt V (x , t ) (with φ (0) R the solution in absence of time dependent voltage) or alternatively by a phase shift of the quasiparticle operator of the form ψ R (x, t) -→ ψ R (x, t) e -iνe t -∞ dt V (x ,t ) (B7) where x = x -v F (t -t ). Accounting for this phase shift, the tunneling Hamiltonian which describes the scattering of single quasiparticles at the QPC (x = 0) in the weak backscattering regime is given by H T = Γ 0 exp iνe t -∞ dt V (v F (t -t), t ) ψ † R (0)ψ L (0) + H. c ., with the bare tunneling constant Γ 0 . Considering for simplicity the experimentally motivated situation of a long contact, located at a distance d from the QPC, one can write the bias voltage as V (x, t) = V (t)θ (-x -d) where V (t) is a periodic timedependent voltage. The tunneling Hamiltonian can then be simplified as H T = Γ 0 exp iνe t-d v F -∞ dt V (t ) ψ † R (0)ψ L (0) + H.c. (B8) Note that the time delay d/v F can safely be discarded as it corresponds to a trivial constant shift in time of the external drive. The backscattering current is readily obtained from H T , after defining Γ(t) = Γ 0 exp ie * t -∞ dt V (t ) and e * = νe: I B (t) = ie * Γ(t)ψ † R (0, t)ψ L (0, t) -H.c. . (B9) Expanding to order Γ 2 0 and taking the average over one period, the backscattering current becomes I B (t) = - 2ie * T Γ 0 2πa 2 +∞ -∞ dτ e 2νG(-τ ) × T 0 d t sin e * t+ τ 2 t-τ 2 dt V (t ) , (B10) with G(t -t ), the connected bosonic Green's function G(t -t ) = φ µ (0, t)φ µ (0, t ) c , which reads at zero temperature G(t -t ) = -log 1 + i v F (t-t ) a . The unsymmetrized shot noise is written in terms of I B (t) [45] as S(t, t ) = I B (t)I B (t ) -I B (t) I B (t ) . To second order Γ 2 0 , the zero-frequency time-averaged shot noise becomes: S = 2 dτ T 0 d t T S t + τ 2 ; t - τ 2 = e * Γ 0 πa 2 dτ e 2νG(-τ ) × T 0 d t T cos e * t+ τ 2 t-τ 2 dt V (t ) . (B11) The excess shot noise at zero temperature then takes the form ∆S = S -2e * I B (t) = e * Γ 0 πa 2 dτ T 0 d t T exp [2νG (-τ )] × exp ie * t+ τ 2 t-τ 2 dt V (t ) . (B12) Splitting the voltage into its DC and AC part, and using the Fourier coefficients p l introduced in Eq. (A1), one has ∆S = e * Γ 0 πa 2 l |p l | 2 dτ e i(l+q)Ωτ +2νG(-τ ) = 2 T e * Γ 0 v F 2 1 Γ(2ν) Ω Λ 2ν-2 × l P l |l + q| 2ν-1 [1 -Sgn (l + q)] , (B13) where the zero-temperature expression of G(-τ ) has been used, which allows to perform the integration. 
As in the main text, we also introduced the notations q = e * V dc Ω for the charge per pulse, Λ = v F /a for the high-energy cutoff of the chiral Luttinger liquid theory and P l = |p l | 2 for the probability for a quasiparticle to absorb (l > 0) or emit (l < 0) l photons [10]. At finite temperature, ∆S needs to be slightly amended, ∆S = S -2e * I B (t) coth q 2θ = - 2 T e * Γ 0 v F 2 2θ Γ(2ν) 2πΩθ Λ 2ν-2 × l P l Γ ν + i l + q 2πθ 2 sinh l 2θ sinh q 2θ , (B14) in order to get rid of the thermal noise (∆S → 0 for large temperature). There, the reduced temperature is θ = k B Θ Ω (Θ the electron temperature) and the finitetemperature expression of G (-τ ) has been employed. ) This form of H T (specific to ν = 1/2) allows us to refermionize the e iφ-field, and to decouple φ + , following Ref. [42]. A new fermionic field ψ(x) and a Majorana fermion field f , which satisfy {f, f } = 2 and {f, ψ(x)} = 0, are introduced. These obey the equations of motion: H 0 = v F 4π r=± dx (∂ x φ r ) 2 , (D1) H T = Γ(t) e iφ-(0) 2πa + Γ * (t) e -iφ-(0) 2πa . ( D2 -i∂ t ψ(x, t) =iv F ∂ x ψ(x, t) + Γ(t) √ 2πa f (t)δ(x), (D3) -i∂ t f (t) =2 1 √ 2πa [Γ * (t)ψ(0, t) -H.c.] . (D4) Solving this set of equations near the position x = 0 of the quantum point contact (QPC), one can relate the fields ψ b and ψ a corresponding to the new fermionic field taken respectively before and after the QPC: ψ a (t) =ψ b (t) -γΩe iϕ(t)+iqΩt t -∞ dt e -γΩ(t-t ) × e -iϕ(t )-iqΩt ψ b (t ) -H.c. . (D5) The backscattered current is the difference of the leftmoving current after and before the QPC: I B (t) = ev F 2 ψ † b (t)ψ b (t) -ψ † a (t)ψ a (t) . ( D6 ) After some algebra, the time-averaged backscattered current becomes: I B = - e T γ l P l Im Ψ 1 2 + γ -i(q + l) 2πθ , (D7) where Ψ(z) is the digamma function, and the dimensionless tunneling parameter is γ = |Γ0| 2 πav F Ω . Similarly, the zero-frequency time-averaged shot noise defined in Eq. (B11) takes the form: Finally, the excess shot noise is obtained using the known θ = 0 DC results [42] for the backscattered current I B dc and corresponding zero-frequency noise S dc : The excess noise at θ = 0 associated with an arbitrary drive V (t) is then defined as the difference between the photo-assisted shot noise (PASN) and the DC noise S dc (Q T ) obtained for the same charge Q T = T I B transferred during one period of the AC drive ∆S = S -2e * I B + (e * ) I B dc = e 2π ξ arctan eV dc 2ξ , ( In the present work, we apply the voltage to a long contact, so that V 1 (x, t) = θ(-x -d)V (t). This leads to a phase shift of the form Φ 1 = t -∞ dt V 1 (x -v F (t -t ), t ) = t -∞ dt θ(-x + v F (t -t ) -d)V (t ) = t-x+d v F -∞ dt V (t ). (F2) Following now Ref. 9, one can recast the single particle Hamiltonian defined in their Eq. ( 4) into a form similar to the one presented here in Eq. (B3) provided that one defines the applied voltage drive as V 2 (x, t) = v F δ(x + d) t -∞ dτ V (τ ). This, in turn, leads to the phase shift Φ 2 = t -∞ dt V 2 (x -v F (t -t ), t ) = t -∞ dt v F δ(x -v F (t -t ) + d) t -∞ dτ V (τ ) = t-x+d v F -∞ dτ V (τ ) (F3) One thus readily sees that at the level of the phase shift experienced by the quasiparticles as a result of the external drive, the protocol presented in Ref. 9 and the one presented in the text are completely equivalent. Figure 1 . 1 Figure 1. Main setup: a quantum Hall bar equipped with a QPC connecting the chiral edge states of the FQH. 
The left-moving incoming edge is grounded at contact 2 while the right-moving one is biased at contact 1 with a time-dependent potential V (t). Figure 2 . 2 Figure 2. Excess noise in units of S 0 = 2 Figure 3 . 3 Figure 3. Rescaled excess noise ∆S/γ in units of e 2T as a function of the number of electrons per pulse q, for different values of the dimensionless tunneling parameter γ = |Γ 0 | 2 πav F Ω . Results are obtained at zero temperature, with filling factor ν = 1/2, in the case of a square (bottom), a cosine (middle) and a periodic Lorentzian drive with η = 0.1 (top). Figure 4 . 4 Figure 4. Normalized HOM noise ∆Q at q = 1, as a function of the time delay τ between pulses. Results are presented in the WB case at ν = 1/3 and θ = 0.1, for a square, a cosine and a periodic Lorentzian drive with η = 0.1. Inset: HOM setup with applied drives on both incoming arms. Figure 5 . 5 Figure 5. Averaged backscattered current IB(t) as a function of the number of electrons per pulse q, in the case of a square, cosine and periodic Lorentzian drive with η = 0.1 and θ = 0.1. Results are presented for ν = 1/3 in the perturbative regime in units of I 0 = e * T Γ 0 v F 2 Ω Λ 2ν-2 (top) and for the exact treatment at ν = 1/2 in units of e T (bottom). S = e 2 T 4γ 2 klmp * k p l p * l+m p k+m m 2 22 Re where we introduced ξ = |Γ0| 2 πav F . This allows to express the DC noise, not as a function of the applied bias, but rather as a function of the charge Q ∆t = ∆t I B dc transferred through the QPC over a given time interval ∆tS dc (Q ∆t ) = e Q ∆t ∆t - ACKNOWLEDGMENTS We are grateful to G. Fève, B. Plaçais, P. Degiovanni and D.C. Glattli for useful discussions. This work was granted access to the HPC resources of Aix-Marseille Université financed by the project Equip@Meso (Grant No. ANR-10-EQPX-29-01). It has been carried out in the framework of project "1shot reloaded" (Grant No. ANR-14-CE32-0017) and benefited from the support of the Labex ARCHIMEDE (Grant No. ANR-11-LABX-0033) and of the AMIDEX project (Grant No. ANR-11-IDEX-0001-02), all funded by the "investissements d'avenir" French Government program managed by the French National Research Agency (ANR). Appendix A: External drives and corresponding Floquet coefficients The applied drive V (t) is split into a DC and an AC part V (t) = V dc + V ac (t), where by definition V ac (t) averages to zero over one drive period T . The AC voltage is handled through the accumulated phase experienced by the quasiparticles ϕ(t) = e * t -∞ dt V ac (t ) (with the fractional charge e * = νe). We use the Fourier decomposition of e -iϕ(t) , defining the corresponding coefficients p l as e ilΩt e -iϕ (t) . (A1) We focus on three types of drives: where rect(x) = 1 for |x| < 1/2 (= 0 otherwise), is the rectangular function, and η = W/T (W is the half-width at half-maximum of the Lorentzian pulse). The corresponding Fourier coefficients of Eq. (A1) read, for non-integer q = e * V dc Ω : Cosine p l = J l (-q) , (A5) Lorentzian Appendix B: Current and noise in the weak backscattering regime Fractional quantum Hall (FQH) edges at filling factor ν = 1/(2n + 1) are described in terms of a hydrodynamical model [37] through the Hamiltonian of the form where we apply a bias V (x, t) which couples to the charge density of the right moving edge state. Here the bosonic fields satisfy [φ R,L (x), φ R,L (y)] = ±iπSgn(x -y). 
These bosonic fields propagate along the edge at velocity v F and are directly related to the corresponding The number of excitations created in a onedimensional system of free fermions by the applied timedependent drive V (t) is given by the number of electrons and holes where n F (k) is the Fermi distribution and ψ k is a fermionic annihilation operator in momentum space. Using the bosonized description for ν = 1 [see Eq. (B4)], the number of electrons and holes becomes Minimal excitations correpond to a drive which excites a single electron, while no particle-hole pairs are generated (N h = 0). Generalizing this to a chiral Luttinger liquid (a FQH edge state) [37], this excitation should correspond to a vanishingly small value of the quantity The excess noise [defined in Eq. (B12)] is thus identified as the most suitable quantity to study minimal excitations. At θ = 0, one recovers in N precisely the excess noise, up to a prefactor which depends on the tunneling amplitude [see Eq. (B12)] [9]. At θ = 0, thermal excitations contribute substantially to N , motivating us to include a correction [the coth factor in Eq. (B14)] which gets rid of this spurious contribution in the high temperature limit. A periodic voltage bias is now applied to both rightand left-moving incoming arms of the QPC. We focus here on single leviton collisions with identical potential drives, up to a tunable time delay τ . The drives are periodic Lorentzians with a single electron charge per pulse (q R = q L = 1). Using a gauge transformation, this amounts to computing the noise in the case of a single total drive V Tot (t) = V (t) -V (t -τ ) applied to the right incoming branch only. In the context of electron quantum optics, a standard procedure is to compare this so-called Hong-Ou-Mandel (HOM) noise to the Hanbury-Brown and Twiss (HBT) case where single levitons scatter on the QPC without interfering, leading to the definition of the following normalized HOM noise [5,24] Thermal fluctuations are eliminated by subtracting the vacuum contribution S vac to each instance of the noise. Taking advantage of the gauge transformation, we use the expressions for the noise established earlier in the perturbative and the exact cases. This calls for a new set of Fourier coefficients pl associated with the total drive where z = e -2πη and the Fourier coefficients p l corresponding to a periodic Lorentzian drive V (t) with q = 1 take the form with θ H (x) the Heaviside step function. In the WB case, the PASN is given by Eq. (B11), which gives at finite temperature while the vacuum contribution reduces to Combining the results from Eqs. (E1) through (E6), we obtain for the normalized HOM noise Remarkably, this result is independent of both temperature and filling factor. Indeed, thermal contributions factorize in the exact same way in the numerator and denominator, leading to a universal profile. This result also corresponds to that of Ref. [4] for ν = 1. Appendix F: Applying the voltage bias to a point-like or a long contact In the main text, we focus on the experimentally relevant case of a long contact [4] where electrons travel a long way through ohmic contacts before reaching the mesoscopic conductor, accumulating a phase shift along the way. In a previous work [9] however, the authors consider applying the voltage pulse through a point-like contact. Here we show that these two approaches are equivalent. 
Indeed, one can see starting from the Hamiltonian (B1), and solving the corresponding set of equations of motion for the fields that the external bias can be accounted for by implementing a phase shift of the quasiparticle operator, which we recall here ψ R (x, t) -→ ψ R (x, t) e -iνe t -∞ dt V (x-v F (t-t ),t ) . (F1) Several choices for the external drive V (x, t) are thus acceptable, provided that the integral t -∞ dt V (xv F (t -t ), t ) leads to the same result. Indeed, this phase shift is the only meaningful physical quantity, which gives us some freedom in the choice of V (x, t). We further consider two different options.
01774479
en
[ "chim.anal" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01774479/file/revised_elsarticle_main_h3o.pdf
Majda Mekić Brice Temime-Roussel Anne Monod Rafal S Strekowski email: [email protected]

Quantification of gas phase methyl iodide using H 3 O + as the reagent ion in the PTR-MS technique

Keywords: Methyl iodide, CH 3 I, H 3 O +, PTR-MS, collision rate

In this work, the proton transfer reaction between the hydronium ion (H 3 O + ) and methyl iodide (CH 3 I) is studied to investigate if consistent quantification of gas phase CH 3 I is possible in humid air. The neutral CH 3 I molecule was chosen because this compound is of environmental importance in the field of nuclear power plant safety and nuclear energy. Water was used as a reagent ion source in a conventional Innsbruck PTR-MS to produce H 3 O + reagent ions. The use of H 3 O + ions allows for fast, sensitive and specific detection of gas phase CH 3 I via a proton-transfer reaction. The instrument response was linear in the tested 5 to 96 ppbV range and the PTR-MS sensitivity was observed to be humidity dependent. The observed sensitivity ranged from 1.6 to 3.3 cps/ppb as the relative humidity decreased from 63 to 15% at T = 23 °C. A typical H 3 O + primary ion signal was 10 7 cps and the normalized sensitivity was in the range between 0.16 and 0.33 ncps/ppb. The instrument CH 3 IH + ion background rate was 6.8 ± 1.4 cps and the dwell time was 1 second. The detection limit was calculated as 3 times the standard deviation of the background level and ranged between 1.3 and 3.8 ppb. The theoretical collision rate based on the dipole moment and molecular polarizability is calculated. The theoretical collision rate is compared with the experimentally obtained values. The results indicate that the PTR-MS technique is a good analytical method to detect and quantify gas phase CH 3 I concentrations.

Introduction

The atmospheric importance of molecular iodine and alkyl iodides was first suggested by Chameides and Davis (1982). [START_REF] Chameides | Iodine: Its possible role in tropospheric photochemistry[END_REF] It is now well understood that iodine and iodine-containing volatile organic compounds play an important role in the oxidizing capacity of the troposphere [START_REF] Bloss | Impact of halogen monoxide chemistry upon boundary layer oh and ho 2 concentrations at a coastal site[END_REF], aerosol formation and in the ozone depleting cycles in the troposphere [START_REF] Davis | Potential impact of iodine on tropospheric levels of ozone[END_REF][START_REF] Mcfiggans | A modeling study of iodine chemistry in the marine boundary layer[END_REF] and stratosphere. [START_REF] Solomon | On the role of iodine in ozone depletion[END_REF] In addition to atmospheric interest, the presence of organic iodides has gained an increased interest in the field of nuclear industry safety, in order to better understand the chemical processes responsible for the formation of different fission products if a major nuclear power plant accident of the Three Mile Island (U.S.A.) [START_REF] Cline | Measurements of 129 i and radioactive particulate concentrations in the tmi-2 containment atmosphere during and after the venting[END_REF], Chernobyl (Ukraine) [START_REF] Noguchi | Physicochemical speciation of airborne 131i in japan from chernobyl[END_REF] or Fukushima (Japan) [START_REF] Kinoshita | Assessment of individual radionuclide distributions from the fukushima nuclear accident covering central-east japan[END_REF] type were to occur again.
It is now known that iodine and methyl iodide are two of the more critical fission products that are released from UO 2 feul during a major nuclear power plant accident in light water form of the fission product iodine (I 2 ) that is difficult to retain by post-accident filtration systems. [START_REF] Ball | Behaviour of iodine project: Final report on organic iodide studies[END_REF] Further, methyl iodide is currently used in the nuclear industry field to test the organic iodine capture ability and ageing on the performance of emergency charcoal filters. [START_REF] Nacapricha | Quality control of nuclear charcoals: Particle size effect and trapping mechanism[END_REF][START_REF] Wren | Methyl iodide trapping efficiency of aged charcoal samples from bruce-a emergency filtered air discharge systems[END_REF] Methyl iodide has additional applications in the agriculture industry where it has been introduced as a fumigant pesticide used to control insects, plant parasitic metabolites, soil-bome pathogens and weed seeds. [START_REF] Waggoner | Methyl iodide: an alternative to methyl bromide for insectary fumigation[END_REF] To date, most environmental CH 3 I measurements are based on gas chromatographic (GC) separation equipped with a mass spectrometer (MS) or an electron capture detector (ECD). A more recent atmospheric CH 3 I measurement method is based on resonant fluorescence (RF) spectroscopy. [START_REF] Bale | Novel measurements of atmospheric iodine species by resonance fluorescence[END_REF] While GC-ECD (LD∼10 ppt) and GC-MS (LD∼100 ppt) methods are two very sensitive analytical techniques used to detect gas phase CH 3 I in the laboratory and field settings, they do not offer the time resolution and response needed to measure rapid flux or concentration changes in the gas phase. The newer RF technique used to detect gas phase CH 3 I in the laboratory and more recently in the field is fast, sensitive and selective but remains a research-grade instrument that requires advanced technical expertise and skills. The chemical ionization mass spectrometric (CIMS) technique has the potential for fast, sensitive, specific and real time CH 3 I measurements when rapid changes in the gas phase mixing ratios need to be known or monitored continuously. The proton-transfer-reaction mass spectrometry (PTR-MS) is a type of CIMS instrument. The PTR-MS combines the concept of chemical ionization [START_REF] Field | Chemical ionization mass spectrometry. i. general information[END_REF] with the flow-drift-tube technique. [START_REF] Mcfarland | Flow-drift technique for ion mobility and ion-molecule reaction rate constant measurements. ii. positive ion reactions of n + , o + , and h + 2 with o 2 and o + with n 2 from thermal to [inverted lazy s] 2 ev[END_REF] While protontransfer-reaction mass spectrometry (PTR-MS) is often used for sensitive detection of volatile organic compounds (VOCs), it can be applied only to gas-phase compounds with proton affinities (PA) higher than that of water, PA(H 2 O) = 691.0 ± 3 kJ mol -1 [START_REF] Hunter | Evaluated gas phase basicities and proton affinities of molecules: an update[END_REF][START_REF]Webbook de chimie NIST[END_REF], and the proton transfer from H 3 O + to the analyte molecule is efficient if the difference in proton affinities is larger than ∼ 35 kJ mol -1 . 
[START_REF] Bouchoux | A relationship between the kinetics and thermochemistry of proton transfer reactions in the gas phase[END_REF] The proton affinity of CH 3 I, PA(CH 3 I) = 691.7 kJ mol -1 , [START_REF] Hunter | Evaluated gas phase basicities and proton affinities of molecules: an update[END_REF][START_REF]Webbook de chimie NIST[END_REF] is only slightly higher than that of water. Since the difference in proton affinities between water and methyl iodide is very small, the CIMS technique based on proton transfer reaction of hydronium (H 3 O + ) reagent ions with CH 3 I has only been used once [START_REF] Spanel | Selected ion flow tube studies of the reactions of h3o+, no+, and o2+ with several aromatic and aliphatic monosubstituted halocarbons[END_REF] in a laboratory setting to detect methyl iodide. Further, the one previous work has been carried out in dry helium flow only. In this work it is shown that H 3 O + ions may still be employed to detect gas phase CH 3 I. The use of water (H 2 O) as a source of reagent ions in the PTR-MS instrument to detect CH 3 I is proposed. H 3 O + reagent ions are used for sensitive and specific detection of gas phase CH 3 I in humidified and dry air. H 3 O + ion is shown to be a good proton source and soft chemical ionization reagent. The proposed PTR-MS technique appears to be a good tool for online analyses of relatively fast changing concentrations of methyl iodide. Experimental The proton transfer reaction between the H 3 O + ions and CH 3 I was studied using a commercial quadrupole PTR-MS mass analyzer (Ionicon Analytik GmbH, Innsbruck, Austria). The experimental details that are particularly relevant to this work are given below. Generation of gas phase CH 3 I. The gas phase CH 3 I concentration was generated using the gas saturation method, one of the oldest and most versatile ways of studying heterogeneous equilibria involving low vapor pressure compounds, first developed by Regnault in 1845. [START_REF] Regnault | études sur l'hygrométrie[END_REF] The gas saturation method used in this work is similar to the one described originally by [START_REF] Markham | A compact gas saturator[END_REF]. [START_REF] Markham | A compact gas saturator[END_REF] Briefly, nitrogen carrier gas was allowed to flow though the volume containing the CH 3 I sample that itself was mixed with glass beads and supported on a fritted glass surface. The saturator volume itself was immersed in a temperature controlled fluid and kept at a constant temperature using a thermostat with an accuracy of ±0.1K. The temperature inside the saturator volume was measured using a Type-J thermocouple (Omega) with an accuracy of ±0.1K. The carrier gas was allowed to enter the saturator volume, equilibrate with the sample and was then allowed to exit through a capillary passageway and allow to flow through a glass tube. The geometry of the exiting glass tube was such that the diameter of the glass tube increased with increasing length. This was done to avoid any sample condensation as the sample and the carrier gas were allowed to leave the saturator system. Concentration of CH 3 I at the exit of the saturation system was calculated from the given vapor pressure, mass flow rates, pressure within the saturator and the total pressure. 
The Antoine type equation used to calculate the vapor pressure of CH 3 I is log p = -20.3718 -1253.6/T + 13.645 log T -2.6955 × 10 -2 T + 1.6389 × 10 -5 T 2 where the pressure p is in units of mmHg and the temperature T is in Kelvin and in the range from 207.7K to 528.00K. [START_REF] Yaws | Chemical properties handbook : physical, thermodynamic, environmental, transport, safety, and health related properties for organic and inorganic chemicals[END_REF]. Under normal operating conditions, the saturator was kept at T = 268 K. At this temperature, the vapor pressure of CH 3 I within the saturator was calculated to be 109.31 mmHg. This vapor pressure was then further diluted using nitrogen gas carrier gas and a system of mass flow controllers to obtain the desired concentration. PTR-MS instrument. A commercial PTR-MS (Ionicon Analytic GmbH, Innsbruck, Austria) was used to study the H 3 O + reagent ion ionization process with CH 3 I. [START_REF] Hansel | Proton transfer reaction mass spectrometry: on-line trace gas analysis at the ppb level[END_REF][START_REF] Lindinger | On-line monitoring of volatile organic compounds at pptv levels by means of proton-transfer-reaction mass spectrometry (ptr-ms) medical applications, food control and environmental research[END_REF] The reaction chamber pressure (p drift ) was 2.11 mbar, drift tube voltage was 601V and the drift tube temperature was approximately 310K, albeit it was not controlled. The corresponding E/N ratio was 134 Td (1Td=10 -17 Vcm 2 ) where E is the electric field strength (E) applied to the drift tube and N is the buffer gas density. This E/N ratio value was chosen to limit clustering of the H 3 O + reagent ions with H 2 O because it is known that the resulting cluster ions (m/Q = 37.0501 and m/Q = 55.0395) may act as reagent ions, and, as a result, limit signal intensity. [START_REF] Hansel | Proton transfer reaction mass spectrometry: on-line trace gas analysis at the ppb level[END_REF][START_REF] Lindinger | On-line monitoring of volatile organic compounds at pptv levels by means of proton-transfer-reaction mass spectrometry (ptr-ms) medical applications, food control and environmental research[END_REF] The effect of the water cluster ion formation on signal intensity [START_REF] De Gouw | Validation of proton transfer reaction-mass spectrometry (ptr-ms) measurements of gas-phase organic compounds in the atmosphere during the new england air quality study (neaqs) in 2002[END_REF] was assessed by changing the water content within the carrier gas. Here, nitrogen gas was allowed to pass through a bubbler filled with deionized water at room temperature. The resulting relative humidity (%RH) was measured at the inlet of the PTR-MS using a temperature-humidity mini probe (HygroClip-SC04, Rotronic International). The relative humidity was varied between 0 and 60%. Data analysis. PTR-MS data files were imported using the Technical Data Management (TDM) Excel add-in for Microsoft Excel. Mass spectra were recorded up to m/Q 200 and the integration rate was 1s. No specific mass calibration was performed. However, stability of the hydronium ion signal at m/Q 19 was checked. The Excel raw data files were exported and analyzed using Igor Pro 6.37 commercial software. Materials. The nitrogen carrier gase used in this study was generated using the N2LCMS 1 Nitrogen Generator (Claind S.r.l., Italy). Iodomethane was purchased at Acros Organics (Belgium) and the stated minimum purity was 99%. 
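As a quick illustration of the numbers involved in the generation scheme described above, the sketch below (ours) evaluates the Antoine-type expression at the saturator temperature and estimates the overall dilution required to bring the saturator output into the ppbV range. The total saturator pressure and the target mixing ratio used in the example are assumptions, not values reported in this work.

```python
import math

# Sketch (ours): CH3I vapor pressure from the Antoine-type expression quoted above,
# and the overall dilution needed to reach the ppbV range. The total saturator
# pressure (assumed 760 mmHg) and the target mixing ratio are our assumptions.
def ch3i_vapor_pressure_mmHg(T):
    """log10(p/mmHg) = -20.3718 - 1253.6/T + 13.645 log10(T) - 2.6955e-2 T + 1.6389e-5 T^2,
    valid from 207.7 K to 528 K."""
    return 10.0**(-20.3718 - 1253.6/T + 13.645*math.log10(T)
                  - 2.6955e-2*T + 1.6389e-5*T**2)

p_vap = ch3i_vapor_pressure_mmHg(268.0)   # ~108.6 mmHg, consistent with the ~109 mmHg quoted for the saturator
x_sat = p_vap/760.0                       # CH3I mole fraction leaving the saturator (assumed 760 mmHg total)
target_ppbv = 50.0                        # somewhere in the 5-96 ppbV calibration range
dilution = target_ppbv*1e-9/x_sat         # overall dilution factor set by the mass flow controllers
print(f"p_vap = {p_vap:.1f} mmHg, x_sat = {x_sat:.3f}, required dilution ~ {dilution:.1e}")
```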
To limit any photo-catalytic or thermal decomposition during storage, CH 3 I original container bottles were stored under dark conditions at T = 6 • C. Deionized water with a resistivity greater than 18M was prepared by allowing tap water to pass first through a reverse osmosis demineralization filter (ATS Groupe Osmose) and then through a commercial deionizer (Milli-pore, Milli-Q). Results Experimental determination of k (H3O + +CH3I) Based on the original work of Lindinger and coworkers (1998) [START_REF] Lindinger | On-line monitoring of volatile organic compounds at pptv levels by means of proton-transfer-reaction mass spectrometry (ptr-ms) medical applications, food control and environmental research[END_REF], the CH 3 I gas phase concentration may be calculated using the following equation. [ CH 3 I] cm -3 = 1 kt • CH 3 I -H + H 3 O + • Tr H3O + Tr CH3I-H + (1) In the equation ( 1) above, CH 3 I-H + and H 3 O + are the ion count rates, k is the rate coefficient of the proton transfer reaction of the H 3 O + reagent ion with CH 3 I and t is the residence or reaction time of the ion within the drift tube (∼ 100 µs). [START_REF] Cappellin | Proton transfer reaction rate coefficients between h 3 o + and some sulphur compounds[END_REF] The reaction time t is calculated using the length of the drift tube l, ion mobility µ, electric field E and the electric potential applied to the drift tube U drift . Given that l t = µ • E = µ • U drift l (2) therefore, t = l 2 µ • U drift (3) The length of the drift tube l = 9.3 cm and U drift is in units of Volts. The ion mobility µ listed in equations 2 and 3 is calculated using the following equation 4 µ = µ 0 • p 0 p drift • T drift T 0 (4) where the reduced mobility µ 0 = 2.8 cm 2 V -1 s -1 , pressure p 0 = 1013.25 mbar, and temperature at standard conditions T 0 = 273.15 K. As a result, the reaction time within the drift-tube may be calculated using the following equation ( 5) t = l 2 µ 0 • U drift • T 0 T drift • p 0 p drift (5) where T drift is the drift-tube temperature in Kelvin and p drift is the drift-tube pressure in millibar. Using the ideal gas law, the number of air molecules per cm 3 within the drift-tube volume may be calculated using the following equation 6 air cm -3 = N A 22400 • T 0 T drift • p 0 p drift (6) where the Avogadro's number N A = 6.022 × 10 23 mole -1 . As a result, the mixing ration of CH 3 I in the gas phase detected by the PTR-MS may be determined using the following equation. [ CH 3 I] ppbV = 10 9 k • 22400 • µ 0 • U drift N A • l 2 • T 2 drift T 2 0 • p 2 0 p 2 drift • CH 3 I -H + H 3 O + • Tr H3O + Tr CH3I-H + (7) The equation ( 7) may be simplified by including the constant factors listed above to give the following equation 8. [ If the rate coefficient for the proton transfer reaction of the H 3 O + ion with CH 3 I is known, the gas phase mixing ratio of methyl iodide may be determined using the PTR-MS technique without prior calibration by simply using MS signal ion counts and given operating instrument parameters. To the best of our knowledge, there is no rate coefficient listed in the literature for the proton transfer reaction between H 3 O + ion and CH 3 I. As a result, the k value has been determined experimentally and theoretically. 
The equation [START_REF] Kinoshita | Assessment of individual radionuclide distributions from the fukushima nuclear accident covering central-east japan[END_REF] above may be rewritten in the following form to give k exp , that is, the experimental rate coefficient for the proton transfer H 3 O + + CH 3 I reaction. CH 3 I] ppbV = 1.657 × 10 -11 • U drift • T 2 drift k • p 2 drift • CH 3 I -H + H 3 O + • Tr H3O + Tr CH3I-H + (8) k exp = 1.657 × 10 -11 • U drift • T 2 drift [CH 3 I] ppbV • p 2 drift • CH 3 I -H + H 3 O + • Tr H3O + Tr CH3I-H + (9) Equation ( 9) may then be used to determine k if CH 3 I mixing ratio is known. A typical signal intensity profile of selected ions is shown in Figure 1. As shown in Figure 1, m/Q 21 refers to the hydronium ion H 18 3 O + , m/Q 37 refers to the water complex ion H 3 O + (H 2 O) (other water clusters have been ignored [START_REF] Schwarz | Determining concentration patterns of volatile compounds in exhaled breath by ptr-ms[END_REF]), m/Q 143 refers to the CH 3 I-H + ion and m/Q 142 refers to the CH 3 I + ion. The primary ions H 3 O + at m/Q 19 were not detected but were calculated based on the ion count rates of H 18 3 O + at m/Q 21 using equation 10. count rate m/Q 19 = (count rate m/Q 21) × 500 Transmission H 18 3 O + ion (10) In equation 10 above, the constant 500 is the isotope ratio that reflects the isotope ratio of H 18 3 O + to H 16 3 O + and the intensity of the H 18 3 O + ion is corrected by its transmission efficiency. Product ions of reactions caused by minor impurities, namely NO + and O + 2 , have been shown to have negligible intensities as compared to all other product ions and were not considered. [START_REF] Schwarz | Determining concentration patterns of volatile compounds in exhaled breath by ptr-ms[END_REF] The ion signal intensity at m/Q 143 shown in Figure 1 is used to relate the PTR-MS response to methyl iodide mixing ratio. A typical calibration plot is shown in Figure 2. As shown in Figure 2, the MS response is linear for the given methyl iodide mixing ratios. Under the typical experimental conditions used, no fragmentation of the product ion was observed. That is, only the mother ion at m/Q 143 that refers to the CH 3 I-H + ion was detected. The difference in proton affinities between water and methyl iodide is very small. As a result, there is no sufficient excess of energy to fragment the product ion during the proton transfer ionization process. However, as shown in Figure 1 the signal intensity of the m/Q 142 that refers to the CH 3 I + ion was observed to increase slightly with increasing methyl iodide mixing ratio. It is not believed that the m/Q 142 is a result of the fragmentation of the methyl iodide mother ion. It is known that a small amount of the O + 2 ion is always present within the drift tube (see Figure 1). The O + 2 reagent ion will react with methyl iodide via an electron transfer to produce m/Q 142 ion signal that refers to the CH 3 I + ion. Comparison of reported measured rate coefficients for the proton transfer reaction between H 3 O + and CH 3 I at different humidities is shown in Table 1. The methyl iodide mixing ratio, [CH 3 I] saturator , shown in Table 1 was determined at the exit of the saturation system and calculated from the given temperature, vapor pressure, mass flow rates, pressure within the saturator and total pressure. 
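The working equations above lend themselves to a compact numerical implementation. The sketch below is our own illustration: the instrument parameters are those quoted in the text, while the ion count rates used in the example are placeholders rather than measured values.

```python
# Sketch (ours) of the working equations. Instrument parameters are the ones quoted
# in the text; the count rates and transmission factors below are placeholders.
MU0 = 2.8       # reduced mobility of H3O+, cm^2 V^-1 s^-1
P0  = 1013.25   # mbar
T0  = 273.15    # K
L   = 9.3       # drift-tube length, cm

def reaction_time_s(u_drift_V, t_drift_K, p_drift_mbar):
    """Eq. (5): ion residence time in the drift tube."""
    mu = MU0*(P0/p_drift_mbar)*(t_drift_K/T0)     # Eq. (4)
    return L**2/(mu*u_drift_V)                    # Eq. (3)

def ch3i_ppbv(i143, i19, k_cm3s, u_drift_V=601.0, t_drift_K=310.0,
              p_drift_mbar=2.11, tr_h3o=1.0, tr_ch3ih=1.0):
    """Eq. (8): CH3I mixing ratio from the CH3IH+ (m/Q 143) and H3O+ (m/Q 19)
    count rates, the rate coefficient k (cm^3 s^-1) and the transmission factors."""
    return (1.657e-11*u_drift_V*t_drift_K**2/(k_cm3s*p_drift_mbar**2)
            *(i143/i19)*(tr_h3o/tr_ch3ih))

print(reaction_time_s(601.0, 310.0, 2.11))        # ~9.4e-5 s, close to the ~100 us quoted in the text
# hypothetical count rates: 1e7 cps of H3O+ and 50 cps of CH3IH+
print(ch3i_ppbv(i143=50.0, i19=1.0e7, k_cm3s=2.28e-9))   # ~0.5 ppbV for these illustrative inputs
```

Solving the same relation for k instead of the mixing ratio gives Eq. (9), so the routine above also serves to extract k_exp once the delivered mixing ratio is known.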
Theoretical determination of k (H3O + + CH3I)

The theoretical value of the collision rate constant for the proton transfer reaction between H 3 O + and CH 3 I was calculated using the average dipole orientation (ADO) theory. [START_REF] Su | Ion-polar molecule collisions: the effect of ion size on ion-polar molecule rate constants; the parameterization of the average-dipole-orientation theory[END_REF][START_REF] Su | Theory of ion-polar molecule collisions. comparison with experimental charge transfer reactions of rare gas ions to geometric isomers of difluorobenzene and dichloroethylene[END_REF] The ADO rate coefficient, k ADO , is given by

$k_{\mathrm{ADO}} = q\sqrt{\dfrac{\pi\alpha}{\mu\varepsilon_0}} + C\,\dfrac{q\mu_D}{\varepsilon_0}\sqrt{\dfrac{1}{2\pi\mu k_B T}}$   (11)

where µ D and α are the dipole moment and polarizability of the neutral molecule, respectively, q is the fundamental charge, ε 0 is the permittivity of free space, µ is the reduced mass of the colliding partners (here, H 3 O + and CH 3 I), T is the temperature within the drift-tube and k B is the Boltzmann constant. The dipole locking constant C is between 0 and 1 and is a function of µ D /√α and temperature. [START_REF] Su | Parameterization of the average dipole orientation theory: temperature dependence[END_REF] The structure of methyl iodide was taken from the NIST database. [START_REF]Webbook de chimie NIST[END_REF] In this work, the dipole moment µ D = 1.620 Debye and the polarizability α = 7.325 Å3. [START_REF]Webbook de chimie NIST[END_REF] The theoretical rate constant for the proton-transfer reaction between H 3 O + and CH 3 I is calculated to be k ADO = 2.28 × 10 -9 cm 3 s -1 .

Discussion

MS ion signal

It was observed that the ion signal sensitivity decreased as a function of relative humidity. The relationship between sensitivity and relative humidity is shown in Figure 3. As shown in Figure 3, the CH 3 IH + ion signal was normalized to the signal of the H 3 O + ion (m/Q 19) because the reagent ion signal also changes with humidity. Since the CH 3 IH + ion signal is proportional to the H 3 O + ion abundance, this normalization is needed to isolate the humidity dependence of the CH 3 IH + ion itself. The observed drop in sensitivity with increasing relative humidity is best explained by water cluster formation at low drift-tube E/N ratios. For example, the plot of the fraction of the dihydrate cluster in the drift tube as a function of relative humidity and drift-tube E/N value is shown in Figure 4. It can be seen in Figure 4 that a low drift-tube E/N value promotes clustering of H 3 O + ions with H 2 O. The formation of water clusters may be problematic since they may act as reagent ions. To limit cluster formation within the drift-tube, all experiments were performed at E/N = 140 Td. In a similar study, Cappellin et al. (2012) [START_REF] Cappellin | On quantitative determination of volatile organic compound concentrations using proton transfer reaction time-of-flight mass spectrometry[END_REF] reported that under high electric field strength values, protonated water cluster ion formation is strongly suppressed. Further, since the proton transfer reaction between the H 3 O + ion and CH 3 I is only slightly exothermic, and possibly endothermic given the uncertainties in the proton affinities, the proton transfer reaction between the water cluster and CH 3 I will most likely not occur. For example, the water dimer, (H 2 O) 2 , has a proton affinity of 808 ± 6 kJ mol -1 .
As a result, the increased affinity of the water cluster means that some reactions that occur with H 3 O + ions will not take place with H 3 O + (H 2 O) ions. Rate constants The agreement between the theoretical value for the rate constant for the proton-transfer reaction between H 3 O + and CH 3 I (k ADO ) and the experimental value (k exp ) is very bad (see Table 1). This is not surprising given that the proton-transfer reaction H 3 O + + CH 3 I is only slightly exothermic (∆PA = 0.7 kJ mol -1 ). Further, given the uncertainties in proton affinities (±3 kJ mol -1 ), the reaction itself may be endothermic. It has been argued that the proton-transfer from H 3 O + to the analyte molecule in the drift-tube of the PTR-MS is efficient only when the molecule has a higher proton affinity and that the difference in proton affinities is larger than 35 kJ mol -1 . [START_REF] Bouchoux | A relationship between the kinetics and thermochemistry of proton transfer reactions in the gas phase[END_REF] The experimental values for the proton transfer reaction between H ratio, where [CH 3 I] PTR-MS is the theoretically predicted CH 3 I mixing ratio estimated by the PTR-MS measurements (assuming k ADO = 2.28 × 10 -9 cm 3 s -1 [START_REF] Lindinger | Proton-transfer-reaction mass spectrometry (ptr-ms): on-line monitoring of volatile organic compounds at pptv levels[END_REF]) and [CH 3 I] saturator is the CH 3 I mixing ratio determined at the exit of the saturation system and calculated from the given vapor pressure, mass flow rates, pressure within the saturator and total pressure. The possible explanation for the low k exp /k ADO ration is a reverse reaction of the product CH 3 I-H + ion with the molecules of H 2 O even under low humidity conditions present within the drift-tube. This hypothesis is supported by the data shown in Figure 3. The proton transfer reaction between H 3 O + and CH 3 I has been studied briefly by [START_REF] Spanel | Selected ion flow tube studies of the reactions of h3o+, no+, and o2+ with several aromatic and aliphatic monosubstituted halocarbons[END_REF]. [START_REF] Spanel | Selected ion flow tube studies of the reactions of h3o+, no+, and o2+ with several aromatic and aliphatic monosubstituted halocarbons[END_REF] These authors calculated the k exp /k collision = 0.5, where k collision is the collision rate constant in dry helium carrier gas calculated based on the work of [START_REF] Su | Parametrization of the ion-polar molecule collision rate constant by trajectory calculations[END_REF]. [START_REF] Su | Parametrization of the ion-polar molecule collision rate constant by trajectory calculations[END_REF] In the work presented here, the k exp /k ADO = 0.02 in dry synthetic air. At this point, we cannot determine the discrepancy between the two values. Since [START_REF] Spanel | Selected ion flow tube studies of the reactions of h3o+, no+, and o2+ with several aromatic and aliphatic monosubstituted halocarbons[END_REF] In a scenario similar to this work, Cappellin and coworkers (2014) [START_REF] Cappellin | Ethylene: Absolute real-time high-sensitivity detection with ptr/sri-ms. the example of fruits, leaves and bacteria[END_REF] determined the rate coefficient for the reaction of ethylene with H 3 O + . Similar to the results obtained in this work, Cappellin and coworkers' theoretical value k ADO for the ethylene + H 3 O + reaction does not agree with the k exp value. 
These authors reported an experimental reaction rate coefficient between 1.7 × 10 -11 cm 3 s -1 and 3.4 × 10 -11 cm 3 s -1 , far below the collision rate (1.4 × 10 -9 cm 3 s -1 ). [START_REF] Cappellin | Ethylene: Absolute real-time high-sensitivity detection with ptr/sri-ms. the example of fruits, leaves and bacteria[END_REF] Similar to the results observed in this work, the reaction of ethylene with H 3 O + is very inefficient. The authors attribute the observed inefficiency to the (quasi) endothermicity of the reaction. The results obtained in this work for the proton-transfer reaction of H 3 O + with methyl iodide are consistent with those obtained by Cappellin and coworkers. [START_REF] Cappellin | Ethylene: Absolute real-time high-sensitivity detection with ptr/sri-ms. the example of fruits, leaves and bacteria[END_REF] The observed difference between k exp and k ADO for the H 3 O + + CH 3 I reaction does not imply that the PTR-MS technique is a poor choice for the detection of gas phase CH 3 I. As shown in Figures 1 and 2, with a good calibration curve, the PTR-MS technique is a well-suited analytical method to detect and quantify rapid changes in CH 3 I gas phase concentrations.

Conclusion

A commercial Ionicon Analytic PTR-MS was used to study the H 3 O + reagent ion ionization process with methyl iodide. The proton transfer reaction between the H 3 O + reagent ion and CH 3 I produces a specific ion at m/Q 142 (CH 3 I-H + ) that allows for fast and sensitive detection of gas phase methyl iodide. The instrument response for CH 3 I was shown to be linear in the 5 - 96 ppbV range. The collision rate k ADO , calculated from a dipole moment µ(CH 3 I) = 1.620 D [18] and a molecular polarizability α(CH 3 I) = 7.325 Å 3 [18], is 2.28 × 10 -9 cm 3 s -1 . The experimental collision rate k exp for the proton transfer reaction H 3 O + + CH 3 I → [CH 3 I-H] + + H 2 O is in the range (1.79 - 4.90) × 10 -11 cm 3 s -1 . The observed difference between the theoretical and experimental values of the rate constant is assumed to be the result of the fact that the proton transfer reaction between the hydronium ion and methyl iodide is only slightly exothermic and, given the uncertainties in the proton affinities, may even be endothermic.

Figure 1: Typical signal ion counts (Hz) of selected ions.

Figure 2: Plot of the methyl iodide calibration results. The CH 3 I signal (m/Q 142) has been normalized by m/Q 19. The solid line is obtained from a linear least squares analysis; its slope is (2.21 ± 0.04) × 10 -7 and the intercept is (3.85 ± 2.5) × 10 -7 (errors are 2σ, precision only).

Figure 3: Plot of the signal ion sensitivity as a function of relative humidity.

Figure 4: Plot of the fraction of the dihydrate cluster in the drift tube versus relative humidity. Experimental conditions: T = 298 K, E/N = 140 Td (open circles), E/N = 127 Td (closed circles). The percentage of dihydrate clusters was calculated as [H 2 O-H 3 18 O + ] × 250 / ([H 3 18 O + ] × 500) × 100.
Table 1: Comparison of the measured rate coefficients for the proton transfer reaction between H 3 O + and CH 3 I at different humidities.

[CH 3 I] saturator range a | %RH b | k exp a,c | k exp /k ADO
5 - 94 | dry  | 4.90 ± 1.70 | 0.021
75     | 11.4 | 4.55 ± 1.02 | 0.020
75     | 15.5 | 3.53 ± 0.41 | 0.015
5 - 96 | 20.1 | 2.54 ± 1.57 | 0.011
5 - 95 | 21.5 | 2.69 ± 2.37 | 0.012
5 - 96 | 22.6 | 2.05 ± 0.40 | 0.008
75     | 23.3 | 2.97 ± 0.54 | 0.013
75     | 30.5 | 2.55 ± 0.22 | 0.011
75     | 36.5 | 2.34 ± 0.25 | 0.010
75     | 62.8 | 1.79 ± 0.20 | 0.008

a Units: [CH 3 I] saturator in ppbV, k exp in 10 -11 cm 3 s -1 .
b Experimental uncertainty in %RH values is ±0.5 %RH.
c Uncertainty is ±2σ, precision only.

Acknowledgement

This research was supported by the Mitigation of Releases to the Environment (MiRE) project launched by the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) and funded by the French National Research Agency (ANR) under the convention number 11-RSNR-0013. We gratefully acknowledge this support.
T Jonckheere J Rech A Zazunov R Egger T Martin Hanbury Brown and Twiss noise correlations in a topological superconductor beam splitter Keywords: numbers: 73.23.-b, 72.70.+m, 74.45.+c We study Hanbury-Brown and Twiss current cross-correlations in a three-terminal junction where a central topological superconductor (TS) nanowire, bearing Majorana bound states at its ends, is connected to two normal leads. Relying on a non-perturbative Green function formalism, our calculations allow us to provide analytical expressions for the currents and their correlations at subgap voltages, while also giving exact numerical results valid for arbitrary external bias. We show that when the normal leads are biased at voltages V1 and V2 smaller than the gap, the sign of the current cross-correlations is given by -sgn(V1 V2). In particular, this leads to positive cross-correlations for opposite voltages, a behavior in stark contrast with the one of a standard superconductor, which provides a direct evidence of the presence of the Majorana zero-mode at the edge of the TS. We further extend our results, varying the length of the TS (leading to an overlap of the Majorana bound states) as well as its chemical potential (driving it away from half-filling), generalizing the boundary TS Green function to those cases. In the case of opposite bias voltages, sgn(V1 V2) = -1, driving the TS wire through the topological transition leads to a sign change of the current cross-correlations, providing yet another signature of the physics of the Majorana bound state. I. INTRODUCTION In the last two decades, Majorana fermions, 1 concepts which initially were the strict property of particle physics, found some correspondence in condensed matter physics settings. Instead of looking whether an elementary particle, such as the neutrino, qualifies as a Majorana fermion, nanoscience physicists are now wondering whether a complex many body electronic system with collective excitations could bear such strange objects: a fermion whose annihilation operator is (sometimes trivially) related to its creation counterpart. Indeed, Kitaev 2 showed that a one dimensional wire with tight-binding interactions and p-wave pairing exhibits Majorana fermions at its boundaries. Recent reviews give a broad summary of this work and its consequences in condensed matter physics. [3][4][5] There has been an ongoing effort to study experimentally whether this toy model has some correspondence in physical systems. Among strong candidates are nanowires with Rashba and Zeeman coupling put in proximity to a BCS superconductor, [6][7][8][9][10] and chains of iron atoms deposited on top of a lead surface. 11 In Refs. 9 and 10 the signature for the presence of a Majorana fermion at the edge of a one dimensional nanowire consists of a zero bias anomaly in the current voltage characteristics. These results call for the exploration of more involved transport settings and geometries, where the behavior of the Majorana fermion can be fully investigated and characterized. Among these settings, multi-terminal hybrid devices offer unique perspectives. Multi-terminal devices have often played an important role for exploring the electronic transport properties of mesoscopic devices. 
They allow to perform experiments in close analogy with quantum optics scenarios: in the Hanbury-Brown and Twiss 12 (HBT) experiment for instance, photons impinging on a half-silvered mirror are either transmitted or reflected, and the crossed correlations of intensities from these two outputs are measured, yielding a positive signal due to the bunching of photons emitted from a thermal source. Transposed to condensed matter setups, the sign of the HBT crosscorrelations reveals meaningful information concerning the physics at play. For a DC biased three-terminal normal conductor, the electronic analog of the HBT experiment was studied theoretically and experimentally two decades ago. [13][14][15][16][17] This was analyzed in terms of currentcurrent crossed correlations (noise): fermion antibunching (resulting from Pauli principle) leads to a negative HBT noise signal. In the context of conventional BCS superconductivity, three-terminal devices consisting of a superconductor connected to two normal leads were also investigated. In such devices, a Cooper pair can be transferred as a whole in one of the normal leads (via Andreev arXiv:1611.03776v2 [cond-mat.mes-hall] 2 Mar 2017 Reflection or AR), or it can be split into its two constituent electrons in opposite leads (via Crossed Andreev Reflection or CAR). The HBT noise correlations can thus be negative or positive depending on whether AR or CAR is dominant. 18,19 By adding appropriate filters (in energy or spin) to the device in order to rule out AR in each lead, positive noise crossed correlations, due solely to CAR processes, can be guaranteed. 20,21 Experimental evidence for Cooper pair splitting in BCS superconductors has been found both in non-local current measurements [22][23][24] as well as in noise correlation measurements 25 with a device analogous to what was proposed in Refs. 20 and 26. In this work, we study HBT noise correlations for a pair splitter, where a TS nanowire, rather than a standard BCS superconductor, is connected to two biased normal leads. A schematic view of the setup is shown in Fig. 1. Because of the presence of the Majorana zeromode for the TS, the AR and CAR processes are strongly affected. Studying the current and the HBT crossedcorrelations in this setup, one can expect to explore properties and manifestations of the Majorana bound state. This setup was previously studied using scattering theory and tight-binding numerical calculations. [27][28][29] However, these works concentrated on the specific case of equal voltages for the normal leads and focused on voltages below the gap. Our goal is to provide a complete description of the system, exploring the behavior of the current and current correlations, for arbitrary values of the voltages, thus capturing both the effect of the Majorana bound state and the high-energy quasiparticles. Below the gap, we confirm that a TS beam splitter has negative HBT correlations when the two normal leads have equal voltages. 28,29 More importantly, we also predict a reversal of the sign of the current correlations when voltages are changed from equal to opposite, a feature that is directly related to the properties of the Majorana bound state at the end of the TS nanowire. We perform the calculations within a phenomenological tunnel Hamiltonian approach, using the boundary Keldysh Green function of a semi-infinite TS. Solving non-perturbatively the Dyson equation, this allows us to obtain analytical formulas for the current and current correlations. 
This approach was introduced in Ref. 30, and can be used to treat a system composed of an arbitrary number of leads. The boundary Keldysh Green function of the TS nanowire encapsulates all the properties of the TS boundary. The corresponding density of states shows a zero-energy peak, associated with the Majorana zero-mode, and a non-zero density of states above the proximity-induced gap without BCS singularity. Moreover, we can further emphasize the role played by the Majorana bound state by generalizing the TS boundary Green function to the case of a finite length nanowire, or one with an arbitrary chemical potential (adjustable doping). For a TS of finite length L, the two Majorana bound states localized at the opposite ends of the nanowire can overlap, leading to a vanishing of its effect for small enough L, which we confirm by looking at the L-dependence of the HBT correlations. Also, tuning the chemical potential of the TS allows us to drive the transition from a topological regime to a trivial (nontopological) one, which is also manifest in the current correlations. The structure of the paper is as follow. In Sec. II we introduce the Hamiltonian model and detail the formulas for the transport properties. Sec. III is devoted to our main results. First, analytical expressions and numerical results for the current and differential conductances are presented and discussed. These quantities can readily be measured in experimental setups. We then provide explicit expressions for the current correlations in the subgap regime, along with numerical results at any bias. A detailed qualitative discussion of the noise behavior, in relation to the particular properties of the Majorana bound state is also given. Sec. IV focuses on finite size effects as well as the impact of tuning the chemical potential of the nanowire, driving it away from half-filling. In App. A, the derivation of the boundary Green function for a TS nanowire with a finite bandwidth and arbitrary chemical potential is presented. General analytical expressions for the current and current correlations are provided in App. B. Finally, App. C discusses subtleties of the microscopic tunneling model, and establishes its equivalence with the scattering matrix formalism for subgap voltages. II. MODEL AND FORMALISM We consider a three-terminal device in a T-shaped geometry, as illustrated in Fig. 1, where the end of a topological superconductor (TS) nanowire is contacted by two normal-conducting (N) leads. In a general case, two different voltages V 1,2 are applied across the N-TS contacts, while the TS wire is assumed to be grounded. The full Hamiltonian is given by H = H T S + H N + H t , (1) where the first two terms describe the TS and two Nleads, respectively, and H t is a tunneling Hamiltonian connecting all three leads to each other (see below for details). We model the TS wire as a semi-infinite 1D spinless p-wave superconductor, corresponding to the continuum version of a Kitaev chain 2,3 in the wide-band limit. The Hamiltonian of the TS wire located at x > 0 reads H T S = ∞ 0 dx Ψ † T S (x) (-iv F ∂ x σ z + ∆σ y ) Ψ T S (x), ( 2 ) where ∆ is a proximity-induced pairing gap, assumed to be real, the Nambu spinor Ψ T S (x) = (c r , c † l ) T combines right-and left-moving fermions, with annihilation field operators c r (x) and c l (x), respectively, v F is the Fermi velocity and σ x,y,z are the Pauli matrices in Nambu space. In what follows, we use units with k B = v F = = 1. In this work, following the approach of Ref. 
30, we formulate the transport problem in terms of boundary Keldysh Green functions (GFs) describing the leads which are coupled together by tunneling processes. For such a noninteracting setup with point-like tunneling contacts, the exact boundary GFs can be obtained by solving the Dyson equation to all orders in the tunnel couplings. Below we briefly review the boundary GF approach (see Ref. 30 for details) and summarize relevant formulas needed for the calculation of transport observables in the three-terminal N-TS-N junction. The boundary Keldysh GF at x = 0 for the TS wire is defined as follows: ǧT S (t -t ) = -i T C Ψ(t)Ψ † (t ) , (3) where the Nambu spinor Ψ = (c, c † ) T contains the boundary fermion operator c = [c l + c r ](x = 0), and T C denotes Keldysh time ordering. Explicitly, the Fourier transforms of the retarded and advanced GFs for the uncoupled TS wire in the topologically nontrivial phase derived in Ref. 30 are g R/A T S (ω) = ∆ 2 -(ω ± i0 + ) 2 σ 0 + ∆σ x ω ± i0 + , (4) where R/A corresponds to +/-and σ 0 is the unity matrix in Nambu space. Importantly, this simple expression for the retarded/advanced boundary GF of a TS wire captures the zero-energy Majorana bound state as well as continuum quasiparticles, which allows for studying both subgap and above-gap transport on equal footing. The Keldysh component g K T S (ω) is expressed via the retarded and advanced components as g K T S (ω) =(1 -2n F (ω)) g R T S (ω) -g A T S (ω) , (5) where n F (ω) = e ω/T + 1 -1 is the Fermi function with temperature T . Throughout the paper we use the chemical potential µ T S of the (grounded) TS wire as a reference energy level and set µ T S = 0. In App. A, we give a derivation of the boundary Green function for a Kitaev chain with the finite bandwidth and arbitrary values for the band filling, while the wide-band expression (4) exhibiting particle-hole symmetry corresponds to half filling. In the same manner we construct the Keldysh GFs for the normal leads. Within the wide-band approximation, taking into account that the N-TS tunnel coupling effectively involves only one spin component in the normal conductor, the retarded/advanced GF for the normal electrodes follows from Eq. ( 4) by putting ∆ = 0, g R/A N (ω) = ∓iσ 0 , (6) Correspondingly, the Keldysh component g K N (ω) is determined via g R/A N (ω) by a relation similar to Eq. ( 5) but with the respective chemical potential µ N in the Fermi function (matrix in Nambu space), g K N (ω) = -2i [1 -2n F (ω -µ N σ z )] σ 0 . (7) In the voltage-biased junction, µ N is shifted with respect to µ T S = 0 by the dc voltage across the N-TS contact. In terms of the boundary fermions c j representing the three leads, with j = 0 for the TS wire and j = 1, 2 for the normal-conducting electrodes, the tunneling Hamiltonian takes the form 30 H t = 1 2 j,j Ψ † j W jj Ψ j , (8) with Ψ j = (c j , c † j ) T the boundary Nambu spinor and W = W † is the tunneling matrix in lead and Nambu space. In lead space, we impose that W has vanishing diagonal elements W jj = 0 for all j, while the offdiagonal elements of W are matrices in Nambu space, W jj = λ jj σ z , with a hopping amplitude λ jj . For our setup when two normal electrodes are connected to the central TS lead, the only non-vanishing couplings are λ 0j = λ * j0 = λ j for j = 1, 2. Without loss of generality, λ 1,2 can always be chosen real, and in the case of a single tunnel junction they determine the normal transmission probability of the respective N-TS contact. 
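For orientation, the boundary Green function of Eq. (4) is simple enough to evaluate numerically. The following Python sketch is not taken from the paper; it merely plots the boundary spectral weight −(1/π) Im Tr g R TS (ω), with the +i0 + prescription replaced by a small broadening η, and exhibits the two features used repeatedly below: the zero-energy peak associated with the Majorana end state and the smooth continuum above the gap, without a BCS singularity.

```python
import numpy as np
import matplotlib.pyplot as plt

Delta = 1.0          # proximity-induced gap (energy unit)
eta = 1e-2 * Delta   # small artificial broadening, stands in for the +i0^+ prescription

sigma0 = np.eye(2, dtype=complex)
sigmax = np.array([[0, 1], [1, 0]], dtype=complex)

def g_TS_retarded(omega):
    """Boundary retarded GF of the semi-infinite TS wire, Eq. (4), evaluated at omega + i*eta."""
    z = omega + 1j * eta
    return (np.sqrt(Delta**2 - z**2) * sigma0 + Delta * sigmax) / z

omegas = np.linspace(-3 * Delta, 3 * Delta, 2001)
spectral = [-np.trace(g_TS_retarded(w)).imag / np.pi for w in omegas]

# Sharp peak at omega = 0: Majorana end state. Weight at |omega| > Delta: continuum
# quasiparticles, with no BCS-like divergence at the gap edge.
plt.plot(omegas / Delta, spectral)
plt.xlabel(r"$\omega/\Delta$")
plt.ylabel("boundary spectral weight (arb. units)")
plt.show()
```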
30 Once the tunneling matrix W is specified, the full Keldysh GF Ǧ of the system follows from the Dyson equation Ǧ = (ǧ -1 -W ) -1 , (9) with the Keldysh matrix W = diag(W, -W ), and the "uncoupled" Keldysh GF ǧ is diagonal in lead space. From the tunneling Hamiltonian (8), it is straightforward to get the Heisenberg operator for the current flowing from lead j to the contact region, Îj (t) = i e j =j Ψ † j (t)σ z W jj Ψ j (t), (10) while the dc current I j = Îj (t) is I j = 1 2 e ∞ -∞ dω 2π j =j tr N σ z W jj G K j j (ω) , (11) where tr N is the trace over Nambu space. The Keldysh component of the full GF, G K , is given by 30 G K (ω) =G R (ω)F (ω) -F (ω)G A (ω) + G R (ω) [F (ω)W -W F (ω)] G A (ω), ( 12 ) where F jk (ω) = δ jk [1 -2n F (ω -µ j σ z ) ] contains the distribution functions of the uncoupled leads with the respective chemical potentials µ j . Finally, the HBT correlations are readily obtained through the same formalism by computing the zerofrequency cross-correlations of the above-defined current operator. Quite generally, the current correlations at zero frequency are defined as S jj = ∞ -∞ dτ δ Îj (τ )δ Îj (0) , (13) with δ Îj (t) = Îj (t) -I j . In terms of the full GF, these current correlations are given by 30 S jj = ∞ -∞ dω 2π j1 =j j2 =j tr N λ jj1 G -+ j1j2 (ω)λ j2j G +- j j (ω) -G -+ j1j (ω)λ j j2 G +- j2j (ω) , (14) where G -s s = (1/2) G K + s G R -G A with s = ±. III. RESULTS As the system consists of three electrodes, with 0-1 and 0-2 couplings only, it is possible to solve explicitly the Dyson equation, Eq. ( 9), in order to obtain an analytical expression for the full Green function Ǧ. Using Eqs. ( 11) and ( 13), one can then derive an explicit form for the average current I j , and the current correlations S jj . The expressions we give below depend on the couplings λ 1 , λ 2 between the TS and the normal electrodes 1 and 2, on the voltages V 1 and V 2 of the two normal electrodes, and on the temperature through the Fermi function n F (x). A natural quantity appearing in these formulas is Λ = λ 2 1 + λ 2 2 , which is related to the total transmission probability τ between the TS and the normal leads: τ = 4Λ 2 /(1 + Λ 2 ) 2 . For all the results presented in this work, we focus on the case 0 < Λ < 1, which covers all possible values of the transmission τ ∈ [0, 1]. Taking Λ > 1 would give the same value of τ , but for a different realization of the physical system -see the discussion in App. C for more details. A. Current and differential conductance The current flowing through normal lead j = 1, 2 can be written in the simple form: I j = e h ∞ -∞ dω k=1,2 s=± s n F (ω -s eV k )J jk (ω) (15) where J jj and J j,k =j determine, respectively, the local and non-local differential conductances at zero temperature. Focusing e.g. on current I 1 , it can be separated into a "direct" contribution related to the chemical potential of electrode 1 (J 11 ), and a "non-local" contribution related to the chemical potential of electrode 2 (J 12 ). The explicit expressions for the differential conductances J 11 (ω) and J 12 (ω) for |ω| < ∆ are while the expressions for |ω| > ∆ are given in App. B. One can see that in the low-voltage regime, the contribution to I 1 from J 11 is linear in V , while the one from J 12 scales as ∼ V 3 therefore not contributing to the linear conductance in the zero-temperature limit. 
This means in particular that in the low voltage limit, when coupling to the TS occurs through the Majorana bound state only, the current in one normal electrode is not influenced by the voltage in the other one. 31 One can also check that when setting λ 2 = 0, Eq. ( 16) gives back the known formula for a single N-TS junction. While J 12 trivially vanishes in this case, 30 Fig. 2 shows the local and non-local differential conductances J 11 (ω) and J 12 (ω), for three values of the transmission τ . Focusing for simplicity on the zerotemperature limit, the differential conductance G 1 = dI 1 /dV 1 is given by J 11 (ω) = 4λ 2 1 Λ 2 (1 -Λ 4 ) 2 ω 2 ∆ 2 + 4Λ 4 -J 12 (ω), J 12 (ω) = 2λ 2 1 λ 2 2 1 -Λ 4 ω 2 ∆ 2 (1 -Λ 4 ) 2 ω 2 ∆ 2 + 4Λ 4 , (16) J 11 reduces to 1/(1 + ω 2 /Γ 2 ) for |ω| < ∆, with Γ = 2∆Λ 2 /(1 -Λ 4 ). G 1 (V 1 ) = 2e 2 h J 11 (V 1 ) 2e 2 h λ 2 1 /Λ 2 1 + V 2 1 /Γ 2 . ( 17 ) The factor λ 2 1 /Λ 2 reduces to 1/2 for equal couplings λ 1 = λ 2 , and is otherwise related to the asymmetry of the couplings. The local differential conductance J 11 (ω) has a shape which is similar to the one of a simple N-TS junction, with a peak associated with the Majorana bound state, broadened by the couplings to the normal electrodes. Indeed, the Lorentzian factor (1 + V 2 1 /Γ 2 ) -1 in Eq. ( 17) is reminiscent of the well-known conductance peak of width Γ, and height 2e 2 /h. 32,33 Here, the contribution of this peak is split between the two normal electrodes, resulting in an extra factor 1/2 in the equal coupling case, so that the zero-voltage differential conductance is simply e 2 /h for each electrode. The non-local differential conductance J 12 is shown on the right panel of Fig. 2, for the case of a symmetric junction, and for three different values of the total transmission τ . J 12 is negligible for very small transmission (τ = 0.1). For larger transmissions, it starts from 0 at ω = 0, and is positive for |ω| < ∆. J 12 increases as ω gets closer to τ = 0 .9 V 1 =V 2 =V V 1 =-V 2 =V FIG. 3. Current I1 (in units of e∆/h) vs voltage V = V1 in the case of equal (red dashed curve) and opposite (blue full curve) bias voltages V1,2 at zero temperature, λ1 = λ2 and several values of τ . In all figures, voltages are given in units of ∆. ∆, then abruptly changes sign above the gap, becoming negative for |ω| > ∆. To obtain simple, easily readable formulas for the current, it is useful to take the zero temperature limit, and consider symmetric couplings λ 1 = λ 2 = Λ/ √ 2. We then get for the current at voltages below the gap: I 1 = eΓ 2h 1 + 1 + Γ 2 ∆ 2 tan -1 eV 1 Γ + 1 -1 + Γ 2 ∆ 2 eV 1 -eV 2 Γ + tan -1 eV 2 Γ ( 18 ) which for equal voltages V 1 = V 2 = V reduces to: I 1 = e h Γ tan -1 eV Γ ( 19 ) Fig. 3 shows numerical results for the current I 1 as a function of V for a symmetric junction in the case of equal (V 1 = V 2 = V ) and opposite voltages (V 1 = -V 2 = V ). Here we focused on three different values of the total transmission between the TS and the normal leads (τ = 0.1, 0.5, 0.9) and for simplicity we considered the case of zero temperature. The differences in the current I 1 between the V 2 = V 1 and V 2 = -V 1 cases can be understood from the properties of the non-local differential conductance J 12 [see Eq. ( 16) and the right panel of Fig. 2]. For small transparency (τ = 0.1), J 12 is very small, and the two currents cannot be distinguished in the figure. For larger transparency, the effect of J 12 is only noticeable for large enough voltage. 
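The difference between the equal- and opposite-bias currents discussed here can be made quantitative directly from Eqs. (15) and (16). The Python sketch below is not from the paper: at zero temperature and for subgap voltages, Eq. (15) reduces to integrals of J 1k (ω) between −eV k and +eV k , so that the sign of J 12 below the gap immediately controls which bias configuration carries the larger current. The transparency τ = 0.5, the symmetric couplings and the Λ < 1 branch are choices made here for illustration.

```python
import numpy as np
from scipy.integrate import quad

Delta = 1.0                                    # gap (energy unit)
tau = 0.5                                      # total N-TS transmission (illustrative choice)
Lam2 = (2 - tau - 2 * np.sqrt(1 - tau)) / tau  # Lambda^2 from tau = 4*Lambda^2/(1+Lambda^2)^2, branch Lambda < 1
lam1 = lam2 = np.sqrt(Lam2 / 2.0)              # symmetric couplings lambda_1 = lambda_2
Gamma = 2 * Delta * Lam2 / (1 - Lam2**2)       # width of the Majorana-induced conductance peak

def J12(w):
    """Non-local differential conductance of Eq. (16), valid for |w| < Delta."""
    den = (1 - Lam2**2)**2 * w**2 / Delta**2 + 4 * Lam2**2
    return 2 * lam1**2 * lam2**2 * (1 - Lam2**2) * (w**2 / Delta**2) / den

def J11(w):
    """Local differential conductance of Eq. (16), valid for |w| < Delta."""
    den = (1 - Lam2**2)**2 * w**2 / Delta**2 + 4 * Lam2**2
    return 4 * lam1**2 * Lam2 / den - J12(w)

def I1(V1, V2):
    """Subgap current in lead 1 (units of e*Delta/h): zero-temperature reduction of Eq. (15)."""
    i_local, _ = quad(J11, -V1, V1)
    i_nonlocal, _ = quad(J12, -V2, V2)
    return i_local + i_nonlocal

V = 0.8 * Delta
print("J12 at omega = 0.5*Delta :", J12(0.5 * Delta))   # positive below the gap
print("I1, equal voltages       :", I1(V, V))           # larger
print("I1, opposite voltages    :", I1(V, -V))          # smaller, by twice the integral of J12
print("Eq. (19) check, Gamma*arctan(V/Gamma):", Gamma * np.arctan(V / Gamma))
```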
As J 12 (ω) > 0 for |ω| < ∆, the current for V 2 = V 1 is larger than the one for V 2 = -V 1 . The difference between the two currents is maximal for V close to ∆, when J 12 reaches its maximum, before decreasing at higher voltages since J 12 (ω) becomes negative at high energy. For arbitrary subgap voltages, the current I 2 is readily obtained from Eq. ( 18) upon exchanging V 1 and V 2 . For symmetric couplings λ 1 = λ 2 , one can easily convince oneself that I 2 = I 1 in the specific case of equal voltages, while in the opposite voltage case, one has I 2 = -I 1 . B. Hanbury-Brown and Twiss cross-correlations As we did for the current, we derive here analytical expressions for the current auto-and cross-correlations S jj , and we are particularly interested in the HBT crosscorrelations S 12 between the currents flowing from the central TS to the two normal leads. As the formulas get rapidly long and cumbersome, we give here the zerotemperature expression valid for voltages below the gap ∆, and for equal couplings λ 1 = λ 2 = λ (thus Λ 2 = λ 2 1 + λ 2 2 = 2λ 2 ) . More general formulas are given in App. B. Introducing as before the broadening Γ = 2∆Λ 2 /(1-Λ 4 ), and taking without loss of generality |V 1 | ≥ |V 2 |, we have: S 12 (V 1 , V 2 ) = e 2 h Γ 2 4∆ 2 1 + 1 2 Γ 2 Γ 2 + (eV 1 ) 2 |eV 1 | + 1 2 Γ 2 |eV 2 | Γ 2 + (eV 2 ) 2 - 3Γ 2 tan -1 |eV 1 | Γ - Γ 2 tan -1 |eV 2 | Γ -sgn(V 1 V 2 ) |eV 2 | (eV 2 ) 2 + 2Γ 2 + 2∆ 2 (eV 2 ) 2 + Γ 2 -2Γtan -1 |eV 2 | Γ (20) The last term of this expression is proportional to -sgn(V 1 V 2 ), with a coefficient which is always positive, independently of the voltages. As it turns out, this term is dominant, and gives the sign of S 12 for all voltages below the gap. In the limit of small voltages |V 1 |, |V 2 | Γ, S 12 becomes S 12 - 1 2 e 2 h sgn(V 1 V 2 )|eV 2 | (21) Importantly, this means that the HBT cross-correlations are positive when the two voltages V 1 and V 2 have opposite signs. The expression for the correlations S 12 in Eq. ( 20) can be further simplified for some specific choices of the bias voltages. For equal voltages V 1 = V 2 , we have S 12 (V 1 = V 2 = V ) = - 2e 2 h Γ 2 4 |eV | (eV ) 2 + Γ 2 , ( 22 ) which coincides with existing results. 28 Conversely, for opposite voltages, we have: V τ=0.9 S 12 (V 1 = -V 2 = V ) = 2e 2 h Γ 2 4 |eV | (eV ) 2 + Γ 2 + 2Γ 2 + (eV ) 2 (eV ) 2 + Γ 2 |eV | ∆ 2 - 2Γ ∆ 2 tan -1 |eV | Γ . (23) V τ=0.1 V 1 =V 2 =V V 1 =-V 2 =V -1. V τ=0.5 V 1 =V 2 =V V 1 =-V 2 =V -1. V 1 =V 2 =V V 1 =-V 2 =V -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 -0.08 -0.06 -0.04 -0.02 0.00 0.02 0.04 V 1 =V 2 =V V 1 =-V 2 =V C. Discussion of auto-and crossed-correlations In order to better understand the behavior of the HBT noise correlations S 12 , it is useful to discuss all the noise contributions, in particular the autocorrelations S 00 , S 11 and S 22 are also considered. From the definition of the noise, Eq. ( 13), and using -I 0 = I 1 + I 2 , we have the relation: 14 S 00 = S 11 + S 22 + 2 S 12 (24) so that is is enough to consider S 00 , S 11 and S 12 to achieve a full characterization. For simplicity, we consider in what follows the regime of symmetric couplings (λ 1 = λ 2 ), in the zero-temperature limit. 
Equal voltages For equal voltages V 1 = V 2 = V , the analytical expressions for the various noise correlations (in units of e 2 /h) are S 00 =2Γ tan -1 |eV | Γ - |eV |/Γ 1 + (eV /Γ) 2 0 ( 25 ) S 11 =Γtan -1 |eV | Γ - 1 2 |V | 1 + (eV /Γ) 2 |eV | 2 (26) S 12 = - 1 2 |eV | 1 + (eV /Γ) 2 - |eV | 2 ( 27 ) where the final expressions are obtained in the lowvoltage limit eV Γ. In terms of the total coupling Γ, the auto-correlation noise S 00 in the TS lead has the same expression as for a single N-TS junction. 30 Focusing on Eqs. ( 25)-( 27), the behavior of the autoand cross-correlations at low voltage |eV | Γ can be understood from the basic properties of the coupling of the normal electrodes to the Majorana bound state. From the point of view of the TS, the two normal electrodes are at the same potential and thus act as a single one for the total current I 0 , so that the total conductance has a peak of height 2e 2 /h [see Eq. ( 17) and discussion below]. As a consequence, much like in the single N-TS junction, the total current I 0 is noiseless at low voltage eV Γ, which is confirmed by Eq. ( 25). This total current I 0 = 2(e 2 /h)V is partitioned here with equal prob- ability between the currents I 1 and I 2 (see Fig. 5) : I 1 = I 2 = (e 2 /h)V. ( 28 ) These two currents are thus equivalent to the transmitted and backscattered current from a quantum point contact with incoming current I 0 and transmission T = 1/2, reflection R = 1 -T = 1/2. This implies that the autocorrelations S 11 and S 22 correspond to the noise associated with currents resulting from random partitioning, 34 leading to (restoring units) S jj ≡ eI j (1 -T ) = e 2 h |eV | 2 (j = 1, 2) (29) which coincides with Eq. ( 26) to lowest order in V . Finally, the HBT noise S 12 corresponds to the correlation between the two partitioned currents I 1 and I 2 . Due to the fermionic nature of the electrons, these two currents are totally anti-correlated (see Fig. 5), yielding a negative correlation noise. 13,14 Following Eq. ( 24), and using that I 0 is noiseless, one sees that the HBT correlations and the autocorrelations are simply related as S 12 = -S 11 , which agrees with Eq. ( 27) to lowest order. Opposite voltages For opposite voltages V 1 = -V 2 = V the auto-and cross-correlations take the form (in units of e 2 /h): S 00 =2Γtan -1 |eV | Γ 2|eV | (30) S 11 =Γtan -1 |eV | Γ - |eV |/2 1 + (eV /Γ) 2 -f (V, Γ) |eV | 2 (31) S 12 = 1 2 |eV | 1 + (eV /Γ) 2 + f (V, Γ) |eV | 2 ( 32 ) where the final expressions correspond to the low-voltage regime eV Γ, and we introduced f (V, Γ) = Γ 2 2∆ 2 eV 2Γ 2 + (eV ) 2 (eV ) 2 + Γ 2 -2Γtan -1 |eV | Γ ( 33 ) While this opposite voltage case has a behavior strikingly different from its equal voltage counterpart, it can still be understood with the same ingredients, by taking into account that the coupling to the Majorana bound state is perfectly electron-hole symmetric. Indeed, when normal lead 2 is biased at voltage -V rather than V , it can be seen as a reservoir of holes biased at voltage V coupled to the Majorana bound state. The behavior of the system is thus the same as for the equal voltage case, except that electrons are now replaced by holes for the current I 2 . Picturing the total current from the TS as a stream of particles (thus disregarding the charge), this stream is still noiseless, with one particle (electron or hole) emitted during each time interval /eV . 
The currents I 1 and I 2 still result from the random partitioning of such a noiseless stream of particles, with electrons for I 1 and holes for I 2 , so that I 1 = -I 2 = (e 2 /h)V (34) As a consequence the autocorrelation noises S 11 = S 22 = (e 3 /h)|V |/2 are identical to their counterparts in the equal voltage case, to lowest order in eV /Γ [see Eq. ( 31)]. Much like the equal voltage case, the two currents I 1 and I 2 are totally anticorrelated, which leads to the same expression for the HBT correlation noise S 12 , only with the opposite sign, as the carriers in the two leads now bear opposite charges (see Fig. 5). Finally the total noise S 00 (which accounts for the charge of the carriers) corresponds to the current noise of a noiseless stream of particles (one particle -electron or hole-transmitted at each time interval /eV ) but with particles which can be either electron or holes. According to Eq. ( 24) this creates a total charge noise S 00 = 2(e 3 /h)|V |, which coincides with Eq. (30). The equal and opposite voltage regimes thus have similar cross-correlation noises but with opposite signs, a direct consequence of the peculiar properties of the Majorana bound state, which by definition does not distinguish electrons from holes. IV. FINITE LENGTH AND DOPING EFFECTS The results shown in the previous sections have been obtained for the case of a semi-infinite topological superconductor bearing a (single) Majorana bound state at its end, with the boundary Green function for the TS nanowire computed in the wide-band limit at half-filling, see Eq. ( 4). We wish to address here the case of a finite size TS whose two Majorana bound states (one at each extremity) overlap, leading to radically different behavior for the HBT correlations. Secondly, this TS wire can be doped in such a manner that it becomes a trivial superconductor devoid of topological effects. In this section, we thus briefly discuss how the noise correlations are modified when going beyond the approximations used in the preceding sections. A. Varying the length of the TS nanowire The retarded/advanced GF for a TS wire of length L (-L/2 < x < L/2) computed near x = L/2 in the wideband limit is given by 30 g R/A (ω) = ω tanh(ζ ω L) ζ ω σ 0 -tanh(ζ ω L)∆σ x (ω ± i0 + ) 2 -2 ω , (35) where ζ ω = √ ∆ 2 -ω 2 and ω = ∆/cosh(ζ ω L) with v F = = 1. The finite TS has a Majorana bound state localized at each end. When the length of the TS is much larger than the typical scale of the Majorana bound state, which is of the order of the superconducting coherence length ξ 0 = v F /∆, the two end-state wave functions practically do not overlap and one recovers the same result as for the infinite length GF. However, with decreasing L, this overlap becomes important, and we expect to lose the behavior specific to the presence of a Majorana bound state. In Fig. 6, we show the noise correlations S 12 as a function of the voltage V for the opposite voltage case, at transmission τ = 0.5, and for several values of the TS length L (in units of v F /∆). We see that when L 1, the results obtained for V 1 = V = -V 2 are identical (positive HBT correlations) to the ones obtained in the previous section with the infinite length Green function. However, when L is of order 1, the overlap of the two Majorana end states becomes important, and S 12 turns negative around V = 0, over a range of voltage which increases as L decreases. B. 
Varying the chemical potential of the TS nanowire Another important parameter is the intrinsic chemical potential of the topological superconductor, which depends in a real nanowire on the values of the proximity induced coupling, the magnetic field, etc. In our approach, this is modeled by the chemical potential µ of the Kitaev chain. In the calculations presented so far we always set µ = 0, corresponding to half-filling of the chain. By varying this parameter at finite bandwidth, it is possible to go away from half-filling and therefore to drive the system from the topological phase to a trivial one where no Majorana bound state is present. 3 In order to observe this transition, one needs to rederive the Green function for a Kitaev chain beyond the wide-band limit, with arbitrary values of the gap ∆, the hopping parameter t 0 and the chemical potential µ. Explicit formulas for this Green function, and details of the derivation are provided in App. A. Fig. 7 shows the noise correlation S 12 for the opposite (top panel) and equal voltage case (bottom panel), for a chain with a hopping parameter t 0 = 10∆, and chemical potential µ varying from 0 to 13∆. When µ is increased, while still below the bandwidth t 0 , the correlations S 12 are reduced in absolute value, but keep the same qualitative features, with a dip (peak) around V = 0 for opposite (equal) voltages. In the opposite voltage case (top panel), S 12 is always positive for V close to 0, and the decrease at large voltage becomes more pronounced as µ is increased. In the equal voltage case (bottom panel), the correlations S 12 are essentially scaled down when µ is increased, with a notable asymmetry between V > 0 and V < 0 appearing at large µ. However, the behavior becomes qualitatively different when µ reaches the value of the bandwidth t 0 , as the peak around V = 0 disappears for µ > t 0 . In the opposite voltage case (top panel of Fig. 7), S 12 becomes negative for all V , even close to zero voltage. This is consistent with our previous interpretation: the positive cross-correlations at low voltage are associated with the coupling to the Majorana bound state and this feature 7. Current correlations S12 vs V = V1 in the setup with an infinite TS wire for several values of its chemical potential µ (each curve is labeled by the corresponding value of µ/∆). The top (bottom) panel shows the case of opposite (equal) voltages. We consider a symmetric junction (λ1 = λ2) with a total transparency τ = 0.5, and a hopping strength (TS wire bandwidth) t0 = 10∆. The black curve (labeled "ref") on each plot corresponds to the wide-band limit [using Eq. ( 4)] and serves as a reference for comparison. There is a clear transition when µ reaches the bandwidth t0: the peak behavior around V = 0 disappears when µ > t0, signaling the absence of a Majorana bound state for µ > t0. S 12 (V) V V 1 =V 2 =V disappears for µ ≥ t 0 , when we cross into the trivial non-topological phase. In the equal voltage case (bottom panel), the specific behavior which was present for small V also disappears, and the correlations S 12 become fully asymmetric, almost vanishing for V > 0. Note that we avoid on purpose to compute the noise S 12 at the precise value of the transition µ = t 0 . Indeed the spatial extent of the Majorana bound state diverges in this case, 2 and the overlap of the Majorana bound states at the two ends of the system could become important, with a behavior similar to the one presented in Fig. 6. V. 
CONCLUSIONS In this work, we have explored the properties of a topological superconductor nanowire, including a Majorana bound state at its end, by coupling it to two biased normal leads. We computed the currents and the Hanbury-Brown and Twiss current cross-correlations, and showed that the sign of such correlations have a very peculiar dependence on the two voltages V 1 and V 2 of the normal leads: the correlations are negative when the voltages have the same sign, and become positive when the voltages have opposite signs. In addition, for voltages smaller than the coupling between the TS and the normal leads, the correlations for the equal and opposite voltage cases are exactly opposite. This behavior is in stark contrast with the one observed with a conventional BCS superconductor, where typically correlations are positive for voltages of the same sign only. [START_REF] Büttiker | Quantum Noise in Mesoscopic Physics[END_REF] This is directly related to the properties of the Majorana bound state, which by definition makes no difference between electrons and holes. Changing the sign of one of the voltages is equivalent to replacing a reservoir of electrons with a reservoir of holes -the coupling to the Majorana bound state is unaffected, simply leading to a change of sign of the correlations. The crossed correlations of a TS wire below the gap at equal voltages are similar to the fermionic version of the HBT experiment in normal metals, with differences showing only at V > Γ. [13][14][15][16][17] There, the filled Fermi sea which injects electrons at the two outputs is totally noiseless, and electron partitioning leads to negative crosscorrelations. For the TS beam splitter presented here, it is therefore crucial to also probe the HBT noise at opposite voltages to exhibit its positive sign, in order to rule out a "trivial" interpretation in terms of normal fermionic leads. Positive correlations in this configuration may imply -as for the BCS Cooper pair splitter 20 -that the electron/hole pairs emitted in the two normal metal leads coupled to the TS via the Majorana fermion may form an entangled state. Calculations were performed using a Keldysh boundary Green function approach, based on a Hamiltonian formalism. 30 Each electrode is then represented by its boundary Green function, and the electrodes are coupled through a tunneling Hamiltonian. Solving the Dyson equation gives exact (non-perturbative) simple formulas for the current and the current correlations in terms of the full Green function of the system. The boundary Green function approach used here is fully equivalent to the scattering matrix approach for voltages below the gap (see App. C), but also allows us to easily access the regime of voltages above the gap. With this method, we were also able to consider more general situations, simply by adapting the boundary Green function used for the TS nanowire. We considered a TS nanowire of finite length and also studied the effect of varying the chemical potential. Our results show the existence of a crossover when the length L becomes smaller than v F /∆ (the typical length associated with the Majorana bound state), with positive correlations becoming negative for small length. This behavior is due to the hybridization of the two Majoranas at the ends of the TS nanowire, which then behave as a regular fermion. This confirms that the unusual sign observed for the current correlations for a long (or semi-infinite) TS nanowire is specific to the presence of a Majorana bound state. 
Next, we considered the case where the TS nanowire is represented by a Kitaev chain with a finite bandwidth, and variable chemical potential µ (the corresponding boundary Green function is derived in App. A). By varying µ, one can explore the transition from a topological superconductor to a conventional one. Our results show that above the transition, the cross-correlations become negative at all voltage, and the specific features due to the Majorana bound state (peak at low voltage) disappear. We believe that these results, obtained by placing the TS nanowire in a three-lead hybrid system, give access to unique properties of the TS nanowire, and of Majorana bound states, which would be more difficult to characterize in a simple two-lead setup. The specific dependence of the sign of the correlations as a function of the voltages could provide an extremely firm experimental proof of the presence of a Majorana bound state in a TS nanowire. Because of its versatility and efficiency, the same boundary Green function approach could be used to consider other multi-terminal setups involving one or several TS nanowires, or more involved effects, such as interactions. Appendix A: Boundary GF of semi-infinite Kitaev's chain In this Appendix, we provide a derivation of the retarded/advanced GF used in Sec. IV B, which describes quasiparticle excitations at the boundary of a semiinfinite TS wire. The derivation is performed for arbi-trary values of the chemical potential µ and (normalstate) conduction bandwidth t 0 , so that by varying µ one can drive the wire through a topological transition at µ = ±t 0 . 2,3 This generalizes the derivation outlined in Ref. 30 for the case µ = 0. Following the strategy of Ref. 30, we first compute a "bulk" GF for a homogeneous wire of infinite length, and then the boundary GF is obtained from the Dyson equation of the wire interrupted by a local potential scatterer. The TS wire is modeled by a Kitaev chain 2,3 representing an effectively spinless single-channel p-wave superconductor. In terms of fermion operators c x on a 1D lattice with site numbers x (we set the lattice constant to unity), the model Hamiltonian reads H K = 1 2 x -t 0 c † x c x+1 + ∆c x c x+1 + H.c. -µ x c † x c x , ( A1) where t 0 > 0 is the hopping matrix element, ∆ > 0 is the p-wave pairing amplitude and µ is the chemical potential. Imposing periodic boundary conditions c x = c x+N , with the number of lattice sites N → ∞, and passing to momentum space, c x = N -1/2 k e ikx ψ k , the "bulk" Hamiltonian H takes the standard Bogoliubov-de Gennes form H K = 1 2 π -π dk 2π Ψ † k h k Ψ k , h k = k σ z + ∆ k σ y , (A2) where Ψ k = ψ k , ψ † -k T R/A xx (ω) = π -π dk 2π e ik(x-x ) (ω -h k ) -1 , (A3) where the frequency ω should be understood as ω + i0 + (ω -i0 + ) for the retarded (advanced) GF. Introducing a new integration variable z = -cos(k), some algebra yields: g R/A xx (ω) = 1 2π(∆ 2 -t 2 0 ) 1 -1 dz √ 1 -z 2 D(z, ω) s=± ω + (t 0 z -µ)σ z + s∆ 1 -z 2 σ y × -z + is 1 -z 2 x-x , (A4) with D(z, ω) = (z -Q + )(z -Q -) Q ± (ω) = -µt 0 ± µ 2 t 2 0 -(∆ 2 -t 2 0 )(ω 2 -∆ 2 -µ 2 ) ∆ 2 -t 2 0 . (A5) The boundary GF of a semi-infinite Kitaev chain located at x > 0 can be obtained from the "bulk" GF for the translationally invariant model (A1) by adding a local impurity of strength U at site x = 0, which results in the Hamiltonian HK = H K + U c † 0 c 0 . The "full" retarded/advanced GF gR/A xx (ω) then obeys the following Dyson equation: gν=R/A xx (ω) = g ν xx (ω) + g ν x0 (ω)U σ z gν 0x (ω). 
(A6) In the limit U → ∞, i.e., when one effectively cuts the wire into two semi-infinite pieces, Eq. (A6) yields for the boundary GF 30 defined as G(ω) = g11 (ω) G ν=R/A (ω) = g ν 00 (ω) -g ν 10 (ω) [g ν 00 (ω)] -1 g ν 01 (ω). (A7) Thus, for computing the boundary GF (A7) one only needs to evaluate Eq. (A4) for x = x and nearestneighbor sites x -x = ±1. Using the following integrals with a complex-valued parameter a, Im a = 0, 1 π 1 -1 dz √ 1 -z 2 (z -a) = -1/a 1 -1/a 2 , ( A8 ) and for n = 1, 2 1 π 1 -1 dz z n √ 1 -z 2 (z -a) = a n-1 1 - 1 1 -1/a 2 , (A9) after some algebra we obtain from Eq. (A4): g ν=R/A 00 (ω) = (ω -µσ z )F -1 (ω) + t 0 σ z F 0 (ω), g ν ±1,0 (ω) = g ν 0,∓1 (ω) = ±i∆F -1 (ω)σ y -(ω -µσ z )F 0 (ω) + (t 0 σ z ± i∆σ y ) [1 -F 1 (ω)] , (A10) where F m=0,±1 (ω) = 1 (t 2 0 -∆ 2 )(Q + (ω) -Q -(ω)) × s=± sQ m s (ω) 1 -1/Q 2 s (ω) . (A11) The boundary GF of the semi-infinite Kitaev chain then follows by inserting Eq. (A10) into Eq. (A7). Eq. ( A10) is an extension of the result of Ref. 30 to the general case of µ = 0. In particular, for µ = 0 and assuming the wide-band limit t 0 max(∆, |ω|) the above expressions in Eq. (A10) simplify to g ν=R/A 00 (ω) = ω t 0 √ ∆ 2 -ω 2 σ 0 , g ν ±1,0 (ω) = 1 t 0 √ ∆ 2 -ω 2 ∆ 2 -ω 2 σ z ∓ i∆σ y , (A12) and then using Eq. (A7) one recovers the boundary GF (4) quoted in Sec. II. Appendix B: Analytical formulas for the current and noise This Appendix contains more general formulas for the currents and current correlations, which were too lengthy to be shown in the main text. The expressions (16) for the local and non-local differential conductances can be extended for energies above the gap, leading to the following general forms J 11 (ω) =          2λ 2 1 λ 2 2 (Λ 4 -1) ω 2 ∆ 2 +4λ 2 1 Λ 2 (1-Λ 4 ) 2 ω 2 ∆ 2 +4Λ 4 |ω| < ∆ -2λ 4 1 Λ 4 +2Λ 2 1-∆ 2 ω 2 -2∆ 2 ω 2 +1 +2λ 2 1 (3Λ 4 +1) 1-∆ 2 ω 2 +Λ 2 (Λ 4 +3)-2∆ 2 Λ 2 ω 2 Λ 4 +2Λ 2 1-∆ 2 ω 2 +1 2 |ω| > ∆ (B1) J 12 (ω) =          -2λ 2 1 λ 2 2 (Λ 4 -1) ω 2 ∆ 2 (1-Λ 4 ) 2 ω 2 ∆ 2 +4Λ 4 |ω| < ∆ -2λ 2 1 λ 2 2 Λ 4 +2Λ 2 1-∆ 2 ω 2 -2∆ 2 ω 2 +1 Λ 4 +2Λ 2 1-∆ 2 ω 2 +1 2 |ω| > ∆ (B2) Similarly, one can obtain closed-form expressions for the noise cross-correlations S 12 at zero temperature for generic values of the coupling constants λ 1 , λ 2 and voltages V 1 , V 2 , thus generalizing Eq. ( 20). In the subgap regime |V 1 |, |V 2 | ≤ This relation generalizes the equivalent one for a simple junction composed of two leads. 30,[START_REF] Cuevas | [END_REF] We see that when Λ goes from 0 to 1, τ also varies from 0 to 1, so the whole range of transparencies is covered by taking Λ in [0, 1]. From Eq. (C2), one also sees that for a given Λ in [0, 1], the value 1/Λ gives the same value of the transparency τ . However, as we show below, taking Λ > 1 leads to a different physical realization of the system. Indeed, choosing the value τ of the total transparency between the TS and the normal leads, even for a symmetric system, does not totally specify the system. This can be understood simply from the scattering matrix formalism, as noted by Valentini and co-workers 29 . Writing the scattering matrix for a three-lead system, where the two lateral leads (lines/columns 1-2) are symmetrically connected to a central one (line/column 3) with a transparency τ = √ 1 -r 2 , we have S =   • • t • • t t t r   (C3) with r 2 + 2t 2 = 1. Imposing the unitarity of the S matrix gives the values of the coefficients in the 1-2 block s 12 [written as dots in Eq. 
(C3)], which represent direct reflection/transmission in the subsystem of the two lateral leads. There are two possible solutions (written here for simplicity with real coefficients) s 12,+ = 1 2 (1 -r) -(1 + r) -(1 + r) (1 -r) (C4) and s 12,-= 1 2 -(1 + r) (1 -r) (1 -r) -(1 + r) (C5) and we note S + and S -the complete scattering matrix corresponding to the choice of s 12,+ and s 12,-respectively. The difference between the two choices can be understood for example by taking r close to 1 (very poor transmission to the central lead). Then the amplitude of direct transmission between 1 and 2 [elements (1,2) and (2,1)] are very different for S + and S -: for S -, it is (1 -r)/2, which is close to 0; while for S + it is -(1 + r)/2 which is close to one in absolute value. This means that for a given transmission to the central lead (which fixes r), the currents I 1 and I 2 have totally different values for the two choices S + or S -as soon as the voltages V 1 and V 2 are not equal (if V 1 = V 2 , then the direct transmission between leads 1 and 2 has no consequence). The impact of the choice of S + or S -is illustrated in Fig. 8, with a plot showing the current I 1 (V ) in the opposite voltage configuration (V 1 = -V 2 = V ), for Λ = 1/4 and Λ = 4. The two values of Λ correspond to the same transparency τ 0.22 between the TS and the two normal leads, but the currents I 1 and I 2 = -I 1 are very different for the two values of Λ. For Λ = 4 the current I 1 is much larger as V increases, because of the direct current going from normal lead 1 to normal lead 2. The two choices for S correspond precisely to the choice Λ < 1 (→ S -) or Λ > 1 (→ S + ) in the microscopic calculation. The relation between Λ and r is: Λ = 1 -r 1 + r (Λ < 1) or Λ = 1 + r 1 -r (Λ > 1) (C6) or equivalently r = 1 -Λ 2 1 + Λ 2 (Λ < 1) or r = Λ 2 -1 Λ 2 + 1 (Λ > 1) (C7) with 0 < r < 1, and Λ 2 = λ 2 1 +λ 2 2 is given from the Hamiltonian. As a proof of these relations, we show below that the currents obtained from the scattering matrices S + and S -, for voltages smaller than the gap, coincide with the expressions obtained from our Hamiltonian calculation in terms of Λ [see Eq. ( 16)]. The details of the scattering matrix calculation follows Ref. 28. Channels 1 and 2 correspond to the two normal leads, while channel 3 is related to the superconductor. We first construct the s ee and s he 2x2 matrices describing the scattering between the normal leads, of an electron into an electron (ee) or a hole (he). These are 28 : s ee = s 12 -a(ω) 2 r 1 + r 2 a(ω) 2 t 2 t 2 t 2 t 2 (C8) s he = a(ω) 2 1 + r 2 a(ω) 2 t 2 t 2 t 2 t 2 (C9) where a(ω) = exp[-i arccos(ω/∆)] is the amplitude for Andreev reflection at energy ω, s 12 is given by Eq. (C4) or (C5), and r = √ 1 -2t 2 is the reflection amplitude from the superconductor. For simplicity, we consider a symmetric system, and we take r and t real. The expression for the current I 1 in terms of the scattering matrix elements is 28 where the ± sign refers to the choice of (s 12,+ ) or (s 12,-). One can show that these expressions are equal to 2 J 11 (ω) and 2 J 12 (ω) from Eq. ( 16) with λ 1 = λ 2 = Λ/ √ 2 if the relation between r and Λ is r = 1 -Λ 2 1 + Λ 2 (for s 12,-) or r = Λ 2 -1 Λ 2 + 1 (for s 12,+ ) (C13) For a given transparency τ , taking Λ > 1 thus represents a system where there is a strong, direct link between the lateral leads 1 and 2, which is not the system we are studying here. This explains why, in all the results presented in this work, we consider 0 < Λ < 1 only. FIG. 1 . 
1 FIG. 1. Schematic view of the setup: a grounded TS nanowire is tunnel coupled (with hopping amplitudes λ1, λ2) to two normal-conducting (N1, N2) leads which are biased at voltages V1 and V2, respectively. 1 FIG. 2 . 12 FIG. 2. Zero-temperature local J11(ω) (left panel) and nonlocal J12(ω) (right panel) differential conductances (in units of 2e 2 /h) vs ω/∆ for λ1 = λ2 and several values of the total transmission probability τ . 9 FIG. 4 . 94 FIG.4. HBT cross-correlations S12 (in units of e 2 /h) vs V = V1 in the case of equal (red dashed curve) and opposite (blue full curve) voltages V1,2 for λ1 = λ2 and several transparencies τ (as noted on each panel), at zero temperature. The bottom right panel combines all curves to show the overall scale. S12 is negative (positive) for equal (opposite) bias voltages, and the values of S12 are simply opposite in sign in the two cases for |V | Γ. Fig. 4 4 Fig. 4 shows the HBT cross-correlation noise S 12 for equal and opposite voltages, computed for three different values of the transmission probability τ . One can see that the cross-correlations in these two cases are simply opposite as long as eV Γ, with negative (positive) values of S 12 for the equal (opposite) voltage case. For |eV | larger than the gap ∆, S 12 is always a decreasing function of |V |, which, for the opposite voltage case, eventually becomes negative for |eV | ∆ (not shown), as the TS behaves essentially as a normal electrode at such high voltages. FIG. 5 . 5 FIG. 5. Schematic picture of the current partitioning between the TS and two normal leads (N1,2) at low voltage |eV |Γ. Electrons (holes) are shown as full (empty) circles. Left panel: the case of equal voltages, where a noiseless stream of electrons from the Majorana bound state is partitioned between the two normal leads, with perfect anti-correlations of the two electron streams. Right panel: the case of opposite voltages where lead N2 is biased at potential -V , so that electrons are emitted into N1 while holes are emitted into N2. The two fermion streams are perfectly anticorrelated, which leads to positive cross-correlation noise. The arrows indicate directions of quasiparticle motion. 5 FIG. 6 . 1 , 561 FIG.6. Cross-correlations S12 vs V = V1 in the case of opposite bias voltages (V1 = -V2) for a symmetric junction (λ1 = λ2) with total transmission τ = 0.5 and several values of the TS wire length L. Each curve is labeled by the corresponding value of L in units of ξ0 = vF /∆. For L 1, S12 is identical to the infinite TS case, while for L ∼ 1 the cross-correlations S12 become negative over the range |V |/∆ (ξ0/L)2 is a Nambu spinor subject to the reality constraint Ψ k = σ x Ψ * -k , k = -t 0 cos(k) -µ is the kinetic energy, ∆ k = ∆ sin(k) is the Fouriertransformed pairing, and Pauli matrices σ x,y,z act in Nambu space. Correspondingly, in coordinate space the retarded/advanced Nambu GF of Ψ(x) = c x , c † x T for the Kitaev model (A1) is given by g V 1 =-V 2 =VFIG. 8 . 128 FIG.8. Panels (a) and (b) : schematic illustration of the behavior of the system for the choice of Λ < 1 and Λ > 1 , in the case of a strong reflection (r close to 1). While Λ = Λ0 < 1 and Λ = 1/Λ0 > 1 correspond to the same transparency between the TS and the normal leads, the transparency of the direct channel between the two normal leads is totally different in the two cases. Panel (c) : plots of the current I1(V ) in the V1 = -V2 = V configuration, for Λ = 4 and Λ = 1/4. 
The current I1 is expressed in terms of the squared scattering amplitudes |s^{ee}_{1j}|², |s^{he}_{1j}|², |s^{eh}_{1j}|² and |s^{hh}_{1j}|² (j = 1, 2), weighted by the lead distribution functions n_{F,2e}(ω) and n_{F,2h}(ω) [Eq. (C10)], where n_{F,1e}(ω) is the Fermi function of electrode 1, with n_{F,1h}(ω) = 1 - n_{F,1e}(-ω), and s^{hh}(ω) = [s^{ee}(-ω)]*, s^{eh}(ω) = [s^{he}(-ω)]*. Comparing with Eq. (15), we see that the two expressions of the current are identical if we have 1 - |s^{ee}_{11}|² + |s^{he}_{11}|² = 2 J11(ω) and -|s^{ee}_{12}|² + |s^{he}_{12}|² = 2 J12(ω). Using Eqs. (C8)-(C9), we get after some algebra (recalling that r² + 2t² = 1) the explicit forms of 1 - |s^{ee}_{11}|² + |s^{he}_{11}|² and -|s^{ee}_{12}|² + |s^{he}_{12}|² as functions of r and ω, with a ± sign depending on the choice of s_{12,±}.

ACKNOWLEDGMENTS

We acknowledge discussions with A. Levy Yeyati. The authors also acknowledge the support of Grant No. ANR-2014-BLANC ("one shot reloaded"). This work was granted access to the HPC resources of Aix-Marseille Université financed by the project Equip@Meso (Grant No. ANR-10-EQPX-29-01) and has been carried out in the framework of the Labex ARCHIMEDE (Grant No. ANR-11-LABX-0033) and of the AMIDEX project (Grant No. ANR-11-IDEX-0001-02), all funded by the "investissements d'avenir" French Government program managed by the French National Research Agency (ANR). This work has also been supported by Deutsche Forschungsgemeinschaft (Bonn) Grant No. EG 96/11-1. A. Z. is grateful to the CPT for hospitality during his visit to Marseille.

∆, this reads ... where we introduced δ = λ1² - λ2² and focused on voltages ...

Appendix C: Discussion of the case Λ > 1 vs. Λ < 1

Our microscopic model contains the two parameters λ1, λ2 describing the tunneling from the TS to normal electrodes 1 and 2. As shown in the formulas for the currents and current correlations, a natural parameter is Λ (with Λ² = λ1² + λ2²), which is related to the total transmission probability τ between the TS and the normal electrodes
01588172
en
[ "phys.cond.cm-msqhe", "chim.theo", "spi" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01588172/file/pi-pi_interactions.pdf
Jorge Trasobares Jérôme Rech Thibaut Jonckheere Thierry Martin Olivier Alévêque E Levillain Valentin Diez-Cabanes Yoann Olivier Jérôme Cornil Jean-Philippe P Nys O Aleveque R Sivakumarasamy K Smaali P Leclere A Fujiwara D Théron D Vuillaume & N Clément Estimation of π-π electronic couplings from current measurements Keywords: Cooperative effect, -interaction, transfer integral, molecular electronics, nanoelectrochemistry, coupled quantum dot de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Interactions between -systems 1 , 2 are involved in diverse and important phenomena, such as the stabilization of the double helical structure of DNA, 3 protein folding, 4 molecular recognition, 5 drug design, 6 and crystal engineering. 7 These interactions are of fundamental technological importance for the development of organic-based devices, 8 in particular for organic light-emitting diodes, 9 field-effect transistors 10 , or (bio-) molecular devices. [11][12][13][14][15][16] A key parameter in these interactions is the transfer integral (or electronic coupling energy) parameter t, which is included as t 2 in simple semiclassical formulations of charge carrier mobility. 17 In symmetric dimers, t is directly related to energy-level splitting of the highest occupied/lowest unoccupied molecular orbital (HOMO/LUMO) due to intermolecular interactions for hole and electron transport, respectively. 8 The parameter t has mainly been discussed by using photoelectron spectroscopy and quantum-chemical calculations. 18 -20 In the ideal scenario for (opto-)electronic applications, t should be deduced directly from electronic measurements in a device configuration and related to the molecular structure. Such knowledge of t would help us to understand and optimize charge transport through molecular systems. For example, cooperative effects, induced by molecule-molecule and molecule/electrode electronic couplings, are attracting substantial theoretical attention. 21 , 22 The distribution or fluctuation of t plays a key role in the charge transport through organic semiconductors or biomolecules by inducing charge localization or conformational gating effects. [23][24][25] A Gaussian distribution of t with a standard deviation (SD) in the range of the mean t is usually assumed from thermal molecular motions, 25 but remains to be confirmed experimentally. The experimental measurement of t could potentially be used as an ultrasensitive chemical characterization technique because t is expected to be more sensitive to molecular structural order than other physical constants such aselectrostatic interactions (φ) measured by Cyclic Voltammetry (CV) (Figure 1a). However, recent efforts to establish correlations between electrochemical and molecular electronics results [26][27][28][29][30][31] have neglected -intermolecular interactions. To reach these goals, two main issues need to be addressed. A first issue is related to disorder. Structural variability makes it difficult to extract t from electronic measurements because t is extremely sensitive to order at the angstrom level 8 . One recently implemented and elegant way to measure charge transport at the local scale is through photoinduced time-resolved microwave conductivity (TRMC), 32 but this contactless approach differs from the measurement of charge transport in a device configuration. 
The alternative approach is to reduce electrodes and organic layer dimensions. A second issue is that comparisons between experimental and theoretical charge transport data are usually qualitative. Even without molecular organization disorder, many parameters influence the measured current including molecule/molecule or molecule/electrode coupling and electron-vibration (phonon) interactions 8,33 . A recent theoretical proposal suggested additional degrees of freedom. Reuter et al. found that quantitative information on cooperative effects may be assessed by statistical analysis of conductance traces. 21,34 This approach is based on the Landauer Buttiker Imry formalism that typically is used in mesoscopic physics for the study of electron transport through quantum dots in the coherent regime. The related experimental model system is a single layer of -conjugated molecules (quantum dots), which is sandwiched between two electrodes. Thousands of molecular junctions are required for statistical analysis. The authors suggested that cooperative effects between molecules should provide asymmetrical conductance histogram spectra (Figure 1b). Histogram fitting may be achieved by considering the mean and SD of molecule site energies ( , δ ), molecule-electrode coupling (V, δV) and transfer integrals (<t>, δt). 21 This fitting differs from the usual experimental log-normal conductance histogram shape (normal distribution when conductance G is plotted in log scale) reported in single moleculebased molecular electronics (Figure 1b) (see Supplementary Note S1 in Supporting Information [SI] for a detailed history of conductance histograms in molecular electronics). 11, [35][36][37][38][39][40][41] Here, we explore -intermolecular interaction energies from the electrochemical perspective (coupling between charge distributions) and molecular electronic perspective (coupling between orbitals) using a large array of ferrocene (Fc)-thiolated gold nanocrystals. First, we show that the two peaks observed in voltammograms on these systems can be controlled by the nanocrystal diameter. Each peak corresponds to a dense or dilute molecular organization structure located at the top or side facets of the nanocrystals, respectively. Second, the dense molecular organization structure is resolved by Ultra-High-Vacuum Scanning Tunneling Microscopy (UHV-STM). This structure is used as a reference for estimating t from quantum chemical calculations at the Density Functional Theory (DFT) level. Based on current measurement statistics for ~3000 molecular junctions between the top of the nanocrystals and a conducting atomic force microscope (C-AFM) tip, we confirm the theoretical prediction of histograms that shape is affected by cooperative effects. 21 Furthermore, we extend the previously proposed tight-binding formalism to fit the histograms 21,34 . The estimated electronic coupling energy distribution for t is quantitatively compared with quantumchemical calculations. The φ and t obtained from CV traces and current histograms, respectively, are discussed on the basis of intermolecular distance fluctuations. Finally, we highlight the implications and perspectives of this study to molecular electronics, organic electronics and electrochemistry. Results Electrochemical characterization of Fc-thiolated gold nanocrystals We selected ferrocenylalkylthiol (FcC 11 SH) as an archetype molecule with a πconjugated head for electrochemistry 29,30,42,43 and molecular electronics. 
12,14,[29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46] CV is a powerful tool to gain insights into the molecular organization, extract surface coverage Γ, and evaluate the energy level of the HOMO (E HOMO ± δE HOMO ). In particular, as different molecular organization structures usually lead to multiple CV peaks, 47,48 the aim of this section is to demonstrate that molecules located at the top of the Fc-thiolated gold nanocrystals correspond to a single CV peak, from which φ can be extracted. 49 We have previously demonstrated the possibility of performing CV on Fc-thiolated gold nanocrystal surfaces, 14 although we studied only one dot diameter and did not investigated cooperative effects. CV cannot be performed at the single-dot level with these molecules because the currents are too weak (≤ fA range). 14,50,51 First, we assess E HOMO and dot-to-dot dispersion in E HOMO (δE HOMO ) by CV. The voltammogram is averaged over millions of nanocrystals with a few Fc molecules per nanocrystal (diluted in a C 12 SH matrix) to avoid cooperative effects (Figure 2d). The peak energy position is in the expected range for Fc molecules (0.41 eV vs Ag/AgCl). 12,14,47,42 Furthermore, the voltammogram width at half maximum (FWHM) is close to 90 mV for the main peak, i.e. the theoretical value in the absence of interaction between redox moieties. 27 This result suggests that δE HOMO is less than 45 meV (Figure S3). S6). Peak splitting can be observed [42][43][44] . The peak area is related to the total faradic charge and, therefore, to the number of molecules per dot. Only the number of molecules per nanocrystal related to peak 1 significantly varies with nanocrystal diameter D (Figure 2h). Based on a simple model with a truncated conical shape for dots (Figure 2c), we suggest that peak 1 corresponds to molecules at the top of the dot, whereas peak 2 corresponds to molecules on the side of the dot (Figure 2h, inset). Thus, the density of molecules is smaller on the sides (Γ~2 nm 2 /molecule) than on the top (Γ~0.39 nm 2 /molecule) of the nanocrystals (see Figure 2h for fits and Methods for details). In other words, a highly ordered structure corresponding to a single peak in the voltammogram can be successfully formed on the top of the gold nanocrystals. This hypothesis is consistent with FWHM ≥90 mV for peak 1 (global repulsion between Fc moieties in the electrolytic media used 56 ) and FWHM ≤90 mV for peak 2. The position and shape of the second peak can be explained by a local change of the environment (presence of Na + counter ions from negatively charged silica at the dot borders and pH>2; Figure 2i) and a modification of ion-pairing equilibrium 47,52 (fewer ClO 4 -ions at dot borders due to SiO -surface sites). CV on the smallest dots results in a single peak whose width is smaller than the width expected at room temperature without molecular interactions. This result could be technologically useful for improving the sensitivity of electrochemical biosensors beyond the thermal Nernst limit. [53][54][55] The strength of electrostatic interactions for molecules located at the top of the gold nanocrystals can be quantitatively assessed by the extended Laviron model 27,56,57 (see Supplementary Methods). Coulomb interactions (φ when Fc moieties are fully oxidized) tune the FWHMs of the voltammograms because they are modulated by the fraction of oxidized species. Reasonable fits can be obtained with φ = 4.5 meV for all dot diameters (see Table S1 in SI for fit parameters). 
The φ obtained from CV will be linked to t from the current measurements in the Discussion section. Estimation of t from quantum-chemical calculations The self-assembled monolayer (SAM) structure on a gold substrate has been resolved by UHV-STM (Figure 3a) and used as a reference for DFT calculations. The STM image shows a regular structure of elongated shapes corresponding to groups. The extracted average area per molecule (0.40 nm 2 ) is in agreement with our CV results and is slightly larger than the 0.36 nm 2 considered for a hexagonal structure with a diameter of 0.66 nm per Fc 58,59 . The area corresponds to a configuration in which Fc units are at the same level in the vertical position. Each Fc unit forms a tilt angle of 56° ± 15° with respect to the surface normal (Figure S7), consistent with estimates obtained by obtained by Near-edge X-Ray absorption fine structure spectroscopy (60° ± 5°) 60 and by molecular dynamics simulations (54° ± 22°). 60 When molecules are organized as in Figure 3b, t can be calculated by DFT for two neighboring Fc units (fragments). This simulation is only based on the Fc units and not on the full FcC 11 SH molecule because the contribution of the saturated part of the molecule to t is negligible. As structural fluctuations in monolayer organization are expected experimentally, we compute t a and t b between fragments of molecules 1 and 3 and molecules 1 and 2 at different positions along the X and Y axes. Figure 3c shows t b when molecule 2 moves along the X axis in a collinear geometry. t b strongly depends on displacements of molecule 2 at the angstrom level because t is related to the electronic (rather than spatial) overlaps between orbitals. [61][62][63][64] Maxima are in the 20-30 meV range. Figure 3d shows the evolution of t b as a function of the variation of the intermolecular distance d (δd) around the equilibrium position, without lateral displacement. The decay ratio b =1.94/ Å is close to the tunnel decay ratio in molecular electronics. Similar results are obtained for t a (cofacial geometry; see Figure S8). For consistency with our previous studies, the B3LYP functional (see Methods) has been chosen due to the good agreement with mobility values extracted from the TRMC technique. 32 A recent theoretical study illustrated that B3LYP behaves very similarly to long-range corrected functionals and that the size of the basis set has a weak impact on the calculated transfer integrals. 65 Overall, the results indicate that the -conjugated Fc molecules are electronically coupled and suggest that a signature of cooperative effects should be observed on current measurements. Cooperative effects on current histograms We have conducted the statistical study proposed in ref. 21 (i.e., current histograms). "Nano-SAMs"(i.e., SAMs with diameters of a few tens of nanometers) are ideal for this experiment. Use of nano-SAMs enables us to obtain sufficient molecules for cooperative effects, but limits the number of molecules to avoid averaging over many molecular structures, grain boundaries, and defects. The C-AFM, as the top electrode, 66 is swept over thousands of nanocrystals. 39 We previously showed that log-normal histograms are systematically obtained when such a statistical study is performed with nano-SAMs composed of alkyl chains without groups. 39 In contrast, as predicted in ref. 21, we find that the presence of cooperativity between π-conjugated orbitals (in the head group) affects the line shape of histograms. 
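The dimer energy-splitting picture behind these transfer integrals can be illustrated with a toy two-level calculation. The sketch below is not the DFT procedure used in the paper; it is a minimal Hückel-style model in which two degenerate fragment orbitals of energy ε couple through t, so that diagonalizing the 2x2 Hamiltonian gives levels ε ± t and a HOMO splitting of 2|t| (the estimate for symmetric dimers mentioned in the introduction). The orbital energy is an illustrative value, and t is taken of the order of the 20-30 meV maxima computed above.

```python
import numpy as np

eps = -5.0      # fragment HOMO energy in eV (illustrative value, not from the paper)
t = 0.030       # transfer integral in eV, of the order of the DFT maxima above

# Two identical fragments coupled by t: H = [[eps, t], [t, eps]]
H = np.array([[eps, t], [t, eps]])
levels = np.linalg.eigvalsh(H)

print("dimer levels (eV):", levels)                              # eps - t and eps + t
print("HOMO splitting (meV):", 1e3 * (levels[1] - levels[0]))    # equals 2|t|
# Reading the splitting off a symmetric-dimer calculation and dividing by two
# is the simple estimate of |t| referred to in the introduction.
```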
Figure 4a S2 for fitting parameters). In the case of 15-nm-diameter nanocrystals, a second peak, corresponding to another molecular organization structure 39 appears at a lower current in the histograms (Figure 4b). We suggest that this peak, which is barely seen in the histograms, is averaged on larger dots. Fitting parameters for the main current peak are almost unchanged (see Table S2). When FcC 11 molecules are diluted 1:1 with dodecanethiol molecules (C 12 SH) to reduce coupling between molecules, the lognormal histogram is recovered (Figure 4c), similarly to alkyl-chain-coated nanocrystals. 39 We tried to fit the current histograms using a coherent scattering formalism similar to the one proposed in ref. 21 the external potential, depends on these parameters accordingly (see SI Methods). To generate current histograms, V t ,V b ,t, are chosen from Gaussian distributions with predefined means and SDs (e.g. eq.1a) for each individual molecular junction. For t, we additionally considered eq 1b to explicitly consider the fluctuation of the intermolecular distance (see Figure 3d). meV, and δ =30 meV). Considering eq 1b for t gives an even better fit to experimental data with t 0 =0.34 eV, β = 1.96/Å, and SD(δd) = 0.8Å. When intermolecular coupling is suppressed (t = 0) while keeping other parameters constant to mimic the diluted monolayer (Figure 5c), the resulting log-normal histogram reproduces the experimental results (Figure 5d). Discussion In molecular or organic electronics, comparisons of experimental and theoretical charge transport data are usually qualitative. Therefore, any step towards a more quantitative analysis is important to the field. A strong coupling asymmetry of α=V t 2 /(V t 2 +V b 2 )≈0.9 was required to fit histograms (see Figure S10 in SI), as expected from the structure of the molecule and previous studies 12, 14 . The "large" values of V t and V b (molecular orbital energy broadening amounts of 100 meV and 15 meV, respectively) confirm our expectation of strong molecule/electrode couplings, which we previously exploited to obtain a high-frequency molecular diode 14 . Extracted distributions of t corresponding to best fits in Figure 5d are shown in Figure 6a. We have explored two t distributions corresponding to eq 1a and eq 1b. In both cases, maxima are found at t ≈ 35 meV, which is in the expected range from our DFT calculations. However, both deviate quantitatively from the theoretical distribution prediction for t based on thermal molecular motions (SD(t) ≈ <t> in eq 1a). 25 Using eq 1a, we find SD(t) ≈ 140 meV, suggesting that the structural fluctuations are larger than those generated from solely thermal motions (phonons). Structural fluctuations are explicitly considered with parameter δd in eq 1b. The extracted SD(δd) = 0.8Å is reasonable given that a more packed configuration for these monolayers is possible. 12 Based on these results, we suggest that Van der Waals interactions between alkyl chains, which compete with the -interactions in the molecular organization of such monolayers, 12 could play a role in the distribution of t. We stress that the number of molecules NxN, considered for current histograms generation, affects the quantitative extraction of t. An approximately 150 molecules are used in the experiment. A large enough N was required in the model to avoid overestimating the extracted value of t (Figure S11 in SI). 
At N=9, the extracted t depends to a lesser extent on the molecule/electrode coupling parameters, which reduces the error on the estimated t (Figure S11 in SI). We suggest that t ≈ 35±20 meV is extracted from the present model based on both t distributions and the possible error on V t and V b . From this quantitative analysis on t, we can discuss the results in the general contexts of charge transport in organic semiconductors 32,33,68 and chemical characterization tools. As high-mobility organic semiconductors are often composed of a -conjugated backbone substituted by one or more alkyl side chains, 32 as in the present study, a t distribution following eq 1b may be considered in charge transport models. Semiclassical theories of charge transport in organic semiconductors show that the electron transfer (hopping) rates along the -conjugated molecular planes scale as t 2 . Figure 6b represents such probability distributions for t 2 corresponding to the two t distributions shown in Figure 6a (related to eq 1a and 1b). Distributions have similar shapes in both cases, but the tail is narrower for the Gaussian distribution of t. In both cases, the broadened distribution of t 2 would open new hopping pathways. The exploration of -intermolecular interaction energies from both CV and current histograms using the same samples composed of a large array of Fc-thiolated gold nanocrystals enables a direct comparison of both techniques as chemical characterization tools. Parameters φ and t are different in nature, but both are related to the molecular organization. As for t with eq 1b, φ can be related to δd from a simple electrostatic model (Figure 6c, inset): φ=q.[1-(1+(r a /(d+δd)) 2 ) -0.5 ]/[4 ε 0 ε r (d+δd)] (2) r a is the counter-ion pairing distance, q is the elementary charge, ε 0 and ε r are the dielectric permittivity of vacuum and the relative permittivity of water, respectively. φ=4.5 meV, for d=7 Å and δd=0 Å, corresponds to an Fc-ClO 4 -ion pairing distance of 4.9 Å (5.5Å is expected from molecular dynamics simulations 69 ). With eq 2, a Gaussian distribution for δd implies a non-Gaussian distribution for φ (Figure 6c). Combining eq 2 and the extended Laviron model (see SI Methods), we see that such a distribution should induce a broadening of the CV peak, but only when d+δd approaches the ionpairing distance (Figure 6d). Therefore, CV would not be sufficiently sensitive to assess information on the small molecular organization fluctuations expected here (e.g. SD(δd) = 0.8Å from parameter t analysis). This feature illustrates the potential of using t as an ultra-sensitive chemical characterization parameter. In summary, we have investigated the possibility of assessing the -electronic couplings from charge transport measurements in a connected device, using a statistical analysis of current from a large array of Fc-thiolated gold nanocrystals. The results have been quantitatively compared to DFT calculations. Extracted parameters, including a molecule/electrode coupling asymmetry of 0.9 and t of 35 meV, were in the range of expectations. However, the distribution of t was broader than expected from the solely thermal fluctuations. This observation is attributed to structural fluctuations and to a variation of the intermolecular distance of 0.8 Å in the model. The results confirm the need for charge transport model to consider small structural fluctuations, even on the order of 1Å; however, CV does not have sufficient sensitivity to reveal such small fluctuations. 
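The link between the intermolecular distance fluctuation δd and the two observables φ and t can be made concrete with a short numerical sketch. This is only an illustration under stated assumptions: Eq. (2) is evaluated with the standard Coulomb prefactor e²/(4π ε0 εr) and an assumed εr ≈ 80 for the aqueous electrolyte, using the d = 7 Å and r_a = 4.9 Å quoted above; since Eq. (1b) itself is not reproduced here, the sketch simply assumes the exponential decay t = t0 exp(-β δd) suggested by Figure 3d, with t0 = 0.34 eV, β = 1.96/Å and SD(δd) = 0.8 Å. The point is qualitative: a Gaussian spread in distance leaves φ almost untouched but skews the distribution of t.

```python
import numpy as np

COULOMB_EV_NM = 1.44   # e^2 / (4*pi*eps0) in eV*nm
EPS_R = 80.0           # assumed relative permittivity of the aqueous electrolyte

def phi_meV(delta_d_nm, d_nm=0.70, ra_nm=0.49):
    """Eq. (2) with the assumed 4*pi*eps0*eps_r Coulomb denominator;
    d = 7 A and r_a = 4.9 A are the values quoted in the text."""
    dd = d_nm + delta_d_nm
    screening = 1.0 - 1.0 / np.sqrt(1.0 + (ra_nm / dd) ** 2)
    return 1e3 * COULOMB_EV_NM * screening / (EPS_R * dd)

def t_meV(delta_d_A, t0_eV=0.34, beta_per_A=1.96):
    """Assumed exponential form for eq (1b): t = t0 * exp(-beta * delta_d)."""
    return 1e3 * t0_eV * np.exp(-beta_per_A * delta_d_A)

print(f"phi(delta_d = 0) ~ {phi_meV(0.0):.1f} meV   (compare with the ~4.5 meV above)")

rng = np.random.default_rng(0)
delta_d = rng.normal(0.0, 0.8, size=200_000)          # SD(delta_d) = 0.8 A
t = t_meV(delta_d)
hist, edges = np.histogram(t, bins=400, range=(0.0, 200.0))
mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
print(f"most probable t ~ {mode:.0f} meV")
# The exponential dependence of t on a Gaussian delta_d produces a strongly
# skewed, heavy-tailed distribution, qualitatively like the eq-1b curve of Fig. 6a.
```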
This limitation may be overcome by measuring extremely small CV currents (on the single-dot level) and performing statistical analyses on φ (as predicted in Figure 6c). The origin of these structural fluctuations remains unclear, but it could be related to the competing van der Waals and π-π interactions due to the presence of alkyl chains. Overall, the present study provides insights into understanding π-intermolecular interactions in organic and (bio-)molecular devices. The findings confirm that Landauer-type coherent-scattering models, which are usually dedicated to low-temperature mesoscopic physics, are relevant at room temperature for molecular electronics, even in the presence of cooperative effects. Statistical current analysis could be applied to various systems, because current histograms represent a common approach in molecular junctions. The study of π-electronic couplings is a unique opportunity to link quantum chemistry, mesoscopic physics, organic electronics, and electrochemistry, indicating the importance of each subfield in the development of organic electronics.

Methods

Additional methodological information related to STM, gold nanodot fabrication, monolayer self-assembly, experimental conditions for CV and related fits, image treatment, DFT calculations and theoretical histogram generation is available in the SI Methods.

UHV STM

The high-resolution image was acquired at room temperature with a substrate biased at 2 V and at a constant current of 1 pA.

Areas of top and side facets of the nanocrystals

To estimate the number of molecules per peak, the area was computed from the following formula, based on Figure 2c:

π/4*(D - 2*h/tanξ)² + π*(D - h/tanξ)*h/sinξ   (3)

where the first term corresponds to the area on the top and the second term to the area on the sides of the nanocrystal. Reasonable fits are obtained with h = 2.7 nm and ξ = 30°, as expected from the nanocrystal structure.

C-AFM

We measured current-voltage characteristics by using C-AFM (Dimension 3100, Veeco) with a PtIr-coated tip on molecules 66 in N2 atmosphere. Each count in the statistical analysis corresponds to a single and independent gold nanodot. The tip curvature radius is about 40 nm (estimated by SEM), and the force constant is in the range of 0.17-0.2 N/m. C-AFM measurements were taken at loading forces of 15 and 30 nN for the smallest and largest dots, respectively, to keep a similar force per surface unit. As shown in ref. 14, a weak effect of the force is observed for these molecules in the range of 10-30 nN. In scanning mode, the bias is fixed and the tip sweep frequency is set at 0.5 Hz. With our experimental setup being limited to 512 pixels/image, these parameters lead to a typical number of 3000 counts for a 6×6 µm C-AFM image. In the presence of π-electronic couplings, current histogram peaks are well fitted with an asymmetric double sigmoidal function (4), where the values of the various parameters are presented in SI Table 2.

Landauer Imry Buttiker Formalism and histograms fits

The model has been adapted from ref. 21 to account for a large number of molecules and asymmetrical contacts (detail in SI Methods). It provides a good description of the fundamental aspects of electron transport via the computation of the energy-dependent transmission through the device. The formalism described in ref.
21 focused on the zerobias conductance (and at low temperature), a result which can be extended to the evaluation of the current at low bias, provided that one integrates the transmission over a range of energy given by the external potential. Here we have used this model with conditions of relatively high bias and at room temperature. The assumption that the Landauer approach remains applicable under such conditions is often made in the field of molecular electronics with a single-level model. 31 We believe that these assumptions are further justified due to the strong coupling of the Fc molecules, as evidenced by the level broadening estimated in the range of 100 meV. The generation of current histograms (instead of conductance histograms in ref 21) required an additional assumption (midpoint rule) to efficiency compute the 10 6 realizations (see SI Methods). The validity of this approximation has been confirmed for the present study (Figure S14 in SI). The process of fitting the line shape of the experimental histograms relies on a relatively large number of variables, which can be defined by a step by step procedure. We considered a site energy = 0.2 eV versus Fermi level at V bias =0 V, given the CV results and related energy band diagram proposed in ref. 14. An upper limit of . <45 meV was considered based on CV analysis (Figure S3d). First, we optimized the parameters for 3×3 molecules due to computational time. Because . does not significantly affect the extracted value of t (Figure S10), we considered δ =40 meV (similar to the value considered in ref. 21. V t and V b were adjusted to get a good current level and to reproduce the histogram shape. An optimal asymmetry factor of =0.9 was considered (Figure S10), in agreement with refs. 12 and 14. δV was tuned to fit the histogram line shape when t = 0 (mixed monolayer). δt was tuned with t while fitting asymmetric histograms line shapes. Current histograms were generated based on 10 6 realizations. When the number of molecules in the matrix is large (e.g. 9x9 molecules), histogram fitting takes several days. Efficient hardwares (i.e. Ising machine) is being developed to solve such problems efficiently. 70,71 Figure S2. (h) Graphs showing number of molecules per dot (obtained from (e-g)) averaged with data from reduction peak (Figure S6). Error bars are based on dispersion between CV (oxidation/reduction peak and various speeds). Data are fitted with eq 3 (truncated cone approximation). Inset: Schematic representation of molecular organization. Peak 1 (purple) and peak 2 (green) correspond to molecules on top and sides, respectively. (i) Schematic representation of Fc-thiolated gold nanocrystals in NaClO 4 electrolyte when all Fc are oxidized. S2. Figure Figure 2a-c (and Figure S1,S2 in SI) show the experimental setup with NaClO 4 electrolyte (0.1 M) facing Fc molecules and the Au nanocrystal electrodes.49 We have Figure Figure 2e-g show conventional CV results for nanocrystals (of different diameters) , with the additional consideration of asymmetrical coupling to the electrodes and the possibility of simulating up to 9×9 molecules (only two molecules were considered in ref.21). Figure5aillustrates the modeled system. Each Fc molecule is considered as a single-level quantum dot coupled to both electrodes. Dots are coupled together with coupling term t in a tight binding model. This coupling term is equivalent to the transfer integral in DFT. For simplicity, t is considered to be identical along both axes in the plane. 
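As a concrete illustration of this construction (the remaining ingredients, the N×N array of sites and the electrode couplings V_t and V_b, are specified just below), the following Python sketch builds a Landauer-type transmission for an array of single-level sites, each coupled to both electrodes, and samples the parameters from Gaussians to generate a current histogram. It is a schematic re-implementation of a ref.-21-style model, not the authors' code: the electrode couplings enter only through assumed level widths Γ_t ≈ 100 meV and Γ_b ≈ 15 meV (the broadenings quoted in the Discussion), their spreads are illustrative placeholders, wide-band leads and zero temperature are assumed, and e/h prefactors are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmission(E, eps_sites, t, gamma_t, gamma_b, N):
    """Landauer transmission T(E) = Tr[Gamma_t G Gamma_b G^dagger] for an N x N
    square array of single-level sites, each coupled to both electrodes
    (wide-band limit: per-site self-energy -i(Gamma_t + Gamma_b)/2)."""
    n = N * N
    H = np.diag(eps_sites).astype(complex)
    for i in range(N):
        for j in range(N):
            a = i * N + j
            if j + 1 < N:
                H[a, a + 1] = H[a + 1, a] = t      # coupling along one axis
            if i + 1 < N:
                H[a, a + N] = H[a + N, a] = t      # coupling along the other axis
    sigma = -0.5j * (gamma_t + gamma_b) * np.eye(n)
    G = np.linalg.inv(E * np.eye(n) - H - sigma)   # retarded Green's function
    Gt, Gb = gamma_t * np.eye(n), gamma_b * np.eye(n)
    return np.trace(Gt @ G @ Gb @ G.conj().T).real

def junction_current(N=3, bias=0.6, n_energy=40):
    """One random junction, parameters drawn from the Gaussians quoted in the text."""
    eps = rng.normal(0.20, 0.040, size=N * N)      # eps = 0.2 eV, delta_eps = 40 meV
    t = rng.normal(0.04, 0.14)                     # <t> = 0.04 eV, delta_t = 0.14 eV (eq 1a)
    gamma_t = abs(rng.normal(0.100, 0.010))        # assumed widths from V_t, V_b couplings
    gamma_b = abs(rng.normal(0.015, 0.002))
    energies = np.linspace(0.0, bias, n_energy)    # integrate over the bias window
    T = [transmission(E, eps, t, gamma_t, gamma_b, N) for E in energies]
    return np.trapz(T, energies)                   # current in arbitrary units

currents = np.array([junction_current() for _ in range(2000)])
hist, edges = np.histogram(np.log10(currents), bins=60)
# Setting t = 0 gives a nearly log-normal current histogram, while finite t and
# delta_t produce the asymmetric line shape discussed for the Fc monolayers.
```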
Each molecule within a molecular junction composed of N×N molecules has the same parameters (molecule orbital energy), V t , V b , (molecules coupling to top and bottom electrodes, respectively) and t. Cooperative effects, whose strengths are controlled by parameters V t ,V b ,t,N, cause a smearing out of the energy-dependent transmission coefficient shape, with a peak transmission being less than one21 (see FiguresS13-S15in SI for additional illustrations). The current, obtained from the integral of the transmission coefficient over a range of energy set by δd in eq 1b is chosen from a Gaussian distribution. A step-by-step fitting protocol is detailed in the Methods section and Figures S10, and S11 in SI. Figure 5b illustrates the possibility of generating histograms that reproduce the experiments. Optimized parameters for 9×9 molecules using eq 1a for t (t = 0.04 eV, V t = 0.401 eV, V b = 0.144 eV, δ = 40 meV, δt = 0.14 eV, and δV=22 meV) are in the range of those considered in ref.21 based on ref. 67 where t was 0.1 eV, V t = V l = 0.6 eV, δt = 75 meV, δV = 37 1 Figure 1 Signatures of cooperative effects with the introduction of parameters φ and t. (a) Schematic representation of CV results in the absence (black curve) and presence (orange curve) of Coulomb interactions between Fc molecules according to the Laviron model 27 based on the Frumkin isotherm. Inset: Schematic representation of the microscopic process. When Fc is oxidized (green cloud), it shifts the energy level of the neighboring molecule by φ. (b) Schematic representation of a theoretically predicted 21 conductance histogram in the absence (black curve) and presence (orange curve) of coupling between two molecules (tight binding model). Inset: Schematic representation of intermolecular coupling (t) related to charge transfer between adjacent molecules. Figure 2 2 Figure 2 Quantitative analysis of intermolecular interaction energy φ from CV (a) Schematic representation of intermolecular interaction in an electrochemical setup. Electrolyte is NaClO 4 (0.1 M). φ, Coulomb repulsion between adjacent molecules (see SI Methods); CE counter electrode; RE Ag/AgCl reference electrode; WE working electrode (gold nanocrystals). (b) SEM image of gold nanocrystal array on highly doped silicon. Scale bar = 200 nm. (c) Schematic representation of a nanocrystal cross-section based on ref.49. Parameters of interest are the nanocrystal diameter D, height h≈3 nm and angle ξ≈30°. (d) Square wave voltammogram (SwV) for (1:10) FcC 11 SH SAM diluted with C 12 molecules on an array of 15 nm diameter dots. Integration of the main peak area corresponds, after normalization (see SI Methods), to 5 to 15 FcC 11 SH molecules per dot. The curve is fitted with 2 peaks. The main peak has a FWHM~90 mV (φ≈0). (e-g) CV results (anodic peak) for arrays of FcC 11 SH-coated gold nanoelectrodes of different diameters (as indicated). FWHM correspond to sweep rate of 1V/s (oxidation peak). Fitting parameters are indicated in TableS2. Figure 3 3 Figure 3 Estimation of cooperative effects from supramolecular organization and DFT calculations (a) UHV-STM image of a SAM of FcC 11 SH molecules grafted on gold. Molecular structure is resolved and used as reference for full DFT calculations. Periodic black lines, with cell delimited by pink clouds, indicate positions of Fc molecules. (b) Cell composed of four FcC 11 SH molecules based on (a,b). A number is attributed to each molecule due structural anisotropy. 
(φ a ,t a ) and (φ b ,t b ) refer to interactions between molecules 1 and 3 and molecules 1 and 2, respectively. X and Y axes are aligned along molecules 3 and 1 and molecules 2 and 1, respectively. (c) Full DFT calculation of parameter t b between molecules 1 and 2. Position of molecule 2 is translated along the X axis to mimic disorder. Inset: Molecular configuration at each maximum for t b . (d) Evolution of t b as a function of the variation of intermolecular separation δd modulated from the initial geometry (normal displacement = 0 Å). Decay ratios b is indicated. Figure 4 4 Figure 4 Current histograms used to evaluate -intermolecular interaction energy (~ 3000 counts per histogram) Current histogram obtained at a tip voltage of -0.6 V for (a) 40 and (b) 15-nm diameter dots (with 5-nm diameter on top). Plain curve is the fit with asymmetric double sigmoidal function (eq 4). Dashed curve is the lognormal fit. Inset: Schematic view of the setup. (c) Same as (b) but with a (1:1) FcC 11 SH:C 12 SH-diluted SAM. Fitting parameters are shown in TableS2. Figure 5 5 Figure 5 Histograms fits with Landauer Buttiker Imry formalism. (a) Schematic representation of the model. Each molecule (quantum dot) is coupled to other molecules with coupling term t and coupled to top/bottom electrodes with coupling energies V t and V b , respectively. , molecule orbital energy. SDs of these parameters are used to generate histograms. Related experimental setup shown for clarity. (b) Experimental (V tip = -0.6 V) and simulated histograms (V t = 0.401eV, V b = 0.144eV, .V = 22meV, = 0.2eV, t = 0.04 eV, .t = 140 meV, t 0 = 0.34eV, β = 1.96/Å SD(δd)=0.8Å) considering 81 molecules. (c) Schematic representation of model with fewer molecular interactions. Parameters are same as in (a) except t = 0. Related experimental setup (diluted monolayer) is shown for clarity. (d) Experimental (V tip = -0.6 V) and simulated histograms (t = 0). Figure 6 6 Figure 6 Extracted distributions of t and implications for organic electronic and electrochemical field (a) Distribution of t obtained from best fits in Figure 5d with eqs 1a and 1b. Expected (Gaussian) distribution from solely thermal motions shown in grey. Each electronic level has an associated t that can be positive or negative, so the sign is of little importance. (b) t 2 distribution obtained from (a). (c) Estimated distribution of φ from eq 2 given an intermolecular distance fluctuation of δd=0.8Å. Inset: Schematic representation of the electrostatic model (equation 2). (d) Experimental and theoretical CV results (coupled eq 2 and eq S1) with δd=0, 0.8 and 3 Å. Energy level of Fc vs Ag/AgCl (E p ) is used to center the CV peak at 0. Acknowledgments The authors thank C.A. Nijhuis from NTU Singapore for discussions, C. Wahl from CPT for beginning simulations on current histograms, D. Guerin and A. Vlandas from IEMN for discussions related to electrochemical measurements, and T. Hayashi, K. Chida and T.Goto from NTT Basic Research Labs for fruitful discussions. J.T. thanks PhD funding from Marie Curie ITN grants and the EU-FP7 Nanomicrowave project and J.C thanks the iSwitch (GA No. 642196) project. We acknowledge support from Renatech (the French national nanofabrication network) and Equipex Excelsior. 
Work Associated content: Supporting information The Supporting Information is available free of charge on the ACS Publications website at DOI:10.1021/acs.nano-lett.xxx Corresponding Author: e-mail : [email protected] ToC Image The -electronic coupling energy parameter t is assessed from a statistical analysis of current histograms. (1) Hunter, C.A.; Sanders, J.K.M. J.Am.Chem.Soc. 1990, 112, 5525-5534. (2) Herman, J.; Alfe, D.; Tkatchenko, A.; Nat. Commun
01765816
en
[ "sdv", "shs" ]
2024/03/05 22:32:18
2017
https://amu.hal.science/hal-01765816/file/Alhanout%20hal-01765816.pdf
Kamel Alhanout Sok-Siya Bun Karine Retornaz Laurent Chiche Nathalie Colombini email: [email protected] Prescription errors related to the use of computerized provider order-entry system for pediatric patients Keywords: Drug Prescriptions, Medical order entry systems, Pediatrics, Medication errors Introduction Medication errors are a major source of risk and injury to patients, especially pediatric patients [START_REF] Engum | An evaluation of medication errors-the pediatric surgical service experience[END_REF][START_REF] Stultz | Prescription order risk factors for pediatric dosing alerts[END_REF]. The disadvantages of handwritten orders include illegibility, the timeconsuming process for pharmacy approval, and an absence of immediate notification systems regarding drug interactions and misuse [START_REF] King | The effect of computerized physician order entry on medication errors and adverse drug events in pediatric inpatients[END_REF][START_REF] Goldman | Beyond the basics: Refills by electronic prescribing[END_REF]. Hence, the replacement of handwritten orders by the computerized provider order-entry (CPOE) system is increasingly used in hospitals and health-care centers [START_REF] Arques-Armoiry | Most frequent drug-related events detected by pharmacists during prescription analysis in a University Hospital[END_REF][START_REF] Eslami | The impact of computerized physician medication order entry in hospitalized patients-A systematic review[END_REF]. Indeed, the use of CPOE provides undisputable advantages, such as the elimination of illegibility problems and the ability to include notification systems that assist in the prescription process [START_REF] Holdsworth | Impact of computerized prescriber order entry on the incidence of adverse drug events in pediatric inpatients[END_REF]. Nevertheless, data regarding errors resulting from CPOE are frequently reported, especially when the system is initially implemented [START_REF] Schwartzberg | We thought we would be perfect: medication errors before and after the initiation of Computerized Physician Order Entry[END_REF][START_REF] Samaranayake | Technology-related medication errors in a tertiary hospital: A 5-year analysis of reported medication incidents[END_REF]. Moreover, it has been noted that these errors can occur at any stage within the therapy management, including prescribing, dispensing, administering, and monitoring [START_REF] Schwartzberg | We thought we would be perfect: medication errors before and after the initiation of Computerized Physician Order Entry[END_REF]. The reasons behind such errors are in part due to various technical reasons such as confusing software, a screen overload as well as human-related issues such as lack of experience and training, cognitive overload or depersonalization between healthcare actors, as face to face communication is replaced by computer accessibility [START_REF] Hellot-Guersing | Erreurs médicamenteuses induites par l'informatisation de la prescription à l'hôpital : recueil et analyse sur une période de 4 ans[END_REF][START_REF] Malbranche | La sécurisation par l'informatisation des prescriptions : les médicaments administrés sont-ils ceux prescrits ?[END_REF]. 
During the last decade, errors resulting from recently implemented CPOE systems have been reported in many worldwide studies, including France [START_REF] Hellot-Guersing | Erreurs médicamenteuses induites par l'informatisation de la prescription à l'hôpital : recueil et analyse sur une période de 4 ans[END_REF][START_REF] Malbranche | La sécurisation par l'informatisation des prescriptions : les médicaments administrés sont-ils ceux prescrits ?[END_REF][START_REF] Chapman | Implementation of computerized provider order entry in a neonatal intensive care unit: Impact on admission workflow[END_REF][START_REF] Reynolds | Alerting strategies in computerized physician order entry: a novel use of a dashboard-style analytics tool in a children's hospital[END_REF]. Our study focused on the errors that occur when CPOE is used to prescribe drugs to pediatric patients. Hence, we researched and noted all pharmacist interventions (PharmInts) that involved different errors in prescriptions for pediatric patients over a 6-month period. The nature and frequency of these errors, and the drugs concerned noted in the PharmInt, were analyzed to evaluate the efficiency of the CPOE systems for pediatric use. Methods We performed a retrospective study that noted errors related to computerized orders carried out by the software Pharma® (Computer Engineering, France) in pediatric department between 31/05/2015 and 01/12/2015. Pharma® computerizes various steps within the therapy management, including prescription, pharmaceutical approval, and administration by the nursing staff. Interestingly, this CPOE system allows prescribers, pharmacists, and nursing staff to share notices, opinions, and various decisions on interventions. Prescribers are trained periodically after the implementation of CPOE in clinical departments. These training courses are offered but not imposed, to all prescribers every 6 months. A total of 924 patients were admitted into the pediatric department and registered into Pharma® during the study period. The patients' characteristics are presented in Figure 1. Prescriptions are made informatically by prescribers using Pharma®. Each patient has an account that is created by the hospital entry office where information such as the patient's name and age are found. When physicians prescribe, they must mention patient's bodyweight and choose appropriate medications. Prescribers then choose those medications from the list of drugs proposed by the CPOE software. Clinical pharmacists who use CPOE daily gave their approval when no prescription errors were found. In other cases, clinical pharmacists realized a PharmInt that needed to be read by the prescriber and/or nursing staff. Such a PharmInt was formulated as a "Memo" when it concerned a general remark, such as in the case of the absence of a patient's bodyweight. The PharmInt could also be formulated as a pharmaceutical opinion related to the prescribed drug. The pharmaceutical opinions were evaluated and then validated by the prescriber(s). When necessary, pharmacists could suspend drug delivery or request modifications of a prescription with the prescriber(s) consent. Results A total of 1297 computerized orders containing 4722 prescriptions were assessed by the pediatric department. From these prescriptions, a total of 302 pharmacist interventions (PharmInts) were carried out by clinical pharmacists (6.4%, Table 1, Figure 2). 
Of the 302 PharmInts, a total of 95 (31.5%) contained no data on the patient's bodyweight, which should have been provided by the prescriber (Table 1). After the PharmInt, information on bodyweight was then provided in 47 of these cases (15.6%); however, in the other 48 cases (15.9%), the information on bodyweight was never provided despite the PharmInt. Errors related to administration frequency of drugs such as paracetamol and phloroglucinol prescribed as to be "used when needed", accounted for 19.9% of total PharmInts (Table 1). For example, if physicians prescribe only 4 times daily without mentioning the precise interval, it is considered as a frequency administration error by the pharmacist. Prescribing an excessive dose occurred in 17.6% of PharmInts, inappropriate modifications of prescription unit accounted for 9.9% of PharmInts and incorrect dosage form for 8.3% of PharmInts (Table 1). PharmInts that highlighted the prescription of a contraindicated drug, the need for treatment monitoring, and/or the risk of a drug-drug interaction accounted for 4%, 3.3%, and 3.3 % of PharmInts, respectively (Table 1). Errors relating to incorrect dosing regimens and an absence of treatment duration were found in 1.3% and 1% of total PharmInts, respectively (Table 1). Of the 302 PharmInts, 255 concerned prescription errors and bodyweight missing not provided after PharmInt. Table 2 lists the drugs that were subject to 255 PharmInts and the errors associated with their prescriptions. Paracetamol (in its different forms: injectable, solid or liquid oral forms) was the main drug concerned and accounted for 36.3% of total PharmInts. Noted errors for this drug included an incorrect dosage form, co-administration of two paracetamol-containing drugs, modification of a prescription unit, errors in the frequency of administrations, or the absence of bodyweight data. Phloroglucinol and esomeprazole that appeared in PharmInts were mainly related to modifications to the prescription unit and inconsistent dosages. Notably, drugs presented in liquid oral forms, such as antibiotics or antipyretics, were frequently prescribed with modified prescription units. This was exemplified by paracetamol, which is habitually prescribed in dose-kg mode whereas some prescribers changed this to milliliter or milligram mode. This was also noted within two prescriptions for dalteparine, where prescribers used "UI anti-Xa" instead of "syringe", inserting a dose of 1 UI anti-Xa/day in the prescription instead of 1 syringe/day (each syringe contains 2500 UI anti-Xa). As noted in Table 2, inconsistent use of contradicted or non-used drugs for pediatric patients was noted for drugs such as ketoprofene (Profenid®, Bi-profenid®), dorzolamide /timolol eye drops (Cosopt®), Saccharomyces boulardii (Ultralevure®), dexamethasone/oxytetracycline (Sterdex®), and trimebutine (Debridat®). The prescription of inadequate dosage to pediatric patients, such as injectable paracetamol 1000 mg/100 mL instead of 500 mg/50 mL, or injectable methylprednisolone 120 mg, were also observed and the prescriber was notified. It should be noted that 52 of the 302 PharmInts (17.2%) had no response (validation or non-validation of notification) from the prescriber. 
Discussion Various studies have highlighted the risk of CPOE to produce errors that may be lifethreatening for patients [START_REF] Stultz | Prescription order risk factors for pediatric dosing alerts[END_REF][START_REF] Han | Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system[END_REF][START_REF] Jones | Implementing computerized prescriber order entry in a children's hospital[END_REF][START_REF] Nerich | Computerized physician order entry of injectable antineoplastic drugs: An epidemiologic study of prescribing medication errors[END_REF]. Thus, each healthcare center using a CPOE system must be aware of this situation [START_REF] Schwartzberg | We thought we would be perfect: medication errors before and after the initiation of Computerized Physician Order Entry[END_REF][START_REF] Hellot-Guersing | Erreurs médicamenteuses induites par l'informatisation de la prescription à l'hôpital : recueil et analyse sur une période de 4 ans[END_REF][START_REF] Bates | Preventing medication errors: a summary[END_REF][START_REF] Balasuriya | Computerized Dose Range Checking Using Hard and Soft Stop Alerts Reduces Prescribing Errors in a Pediatric Intensive Care Unit[END_REF][START_REF] Cl | A systematic review of the types and causes of prescribing errors generated from using computerized provider order entry systems in primary and secondary care[END_REF][START_REF] Prgomet | Impact of commercial computerized provider order entry (CPOE) and clinical decision support systems (CDSSs) on medication errors, length of stay, and mortality in intensive care units: a systematic review and meta-analysis[END_REF]. Our study focused only on errors that were encountered when using CPOE to manage pediatric patients. It should be noted that the lack of information regarding pediatric bodyweight was the most frequent error and was the main cause for a PharmInt in our study. Information on bodyweight is crucial for checking accurate drug dosages [START_REF] Jones | Implementing computerized prescriber order entry in a children's hospital[END_REF]. However, because bodyweight information is not mandatory when completing prescriptions via the CPOE system, clinicians often prescribe despite the absence of a patient's weight. Pharmacists usually notify the absence of bodyweight data and will indicate this on the Pharma® database, as well as making phone contact to approve the prescription. Consequently, data on bodyweight may be included later, although this is not always the case. The process of supplementing data is time consuming and reduces the advantages of using CPOE. This problem could be resolved by the software designer who could make it mandatory to provide information on a patient's bodyweight before a prescription can be filled. Prescribers would then be aware of this absence before pharmacy approval and thus be able to eliminate this problem. In our study, errors related to paracetamol prescriptions were recurrent as it is frequently prescribed to children. 
Although it might seem insignificant to prescribers, errors in administration frequency might be problematic as it can expose the patient to an overdose that can be highly toxic to pediatric patients [START_REF] Ji Westbrook | Stepped-wedge cluster randomised controlled trial to assess the effectiveness of an electronic medication management system to reduce medication errors, adverse drug events and average length of stay at two paediatric hospitals: a study protocol[END_REF][START_REF] De Giorgi | Risk and pharmacoeconomic analyses of the injectable medication process in the paediatric and neonatal intensive care units[END_REF]. Moreover, some drugs, such as paracetamol or phloroglucinol, are prescribed under the notice of "when needed". In these conditions, when information about time intervals between doses is absent, errors may occur, such as administrating the total dose (per day) all at once, and thus exposing the patient to toxic levels [START_REF] De Giorgi | Risk and pharmacoeconomic analyses of the injectable medication process in the paediatric and neonatal intensive care units[END_REF]. Other errors, such as an incorrect drug name or dose are potentially dangerous to all patients, but particularly when they are pediatric patients [START_REF] Holdsworth | Impact of computerized prescriber order entry on the incidence of adverse drug events in pediatric inpatients[END_REF][START_REF] Prgomet | Impact of commercial computerized provider order entry (CPOE) and clinical decision support systems (CDSSs) on medication errors, length of stay, and mortality in intensive care units: a systematic review and meta-analysis[END_REF]. As reported previously, these errors can occur when the prescriber chooses an incorrect drug from the list of drugs proposed by the CPOE software, which calls for prescribers, pharmacists, and nursing personnel to be more attentive [START_REF] Engum | An evaluation of medication errors-the pediatric surgical service experience[END_REF][START_REF] Stultz | Prescription order risk factors for pediatric dosing alerts[END_REF][START_REF] Arques-Armoiry | Most frequent drug-related events detected by pharmacists during prescription analysis in a University Hospital[END_REF][START_REF] Cl | A systematic review of the types and causes of prescribing errors generated from using computerized provider order entry systems in primary and secondary care[END_REF]. In our study, clinical pharmacists detected and informed the prescriber of errors using Pharma® via PharmInt which was either validated or not by the prescriber. However, as previously noted, many PharmInts that had an error were left without a response from the prescriber. Thus, pharmacists frequently need to phone or make personal contact with the prescriber before validation of a questionable prescription. This reduces the benefits of the CPOE, which is to enable more rapid and efficient communication between health-care actors and is fundamental in optimizing healthcare management [START_REF] Engum | An evaluation of medication errors-the pediatric surgical service experience[END_REF][START_REF] Arques-Armoiry | Most frequent drug-related events detected by pharmacists during prescription analysis in a University Hospital[END_REF][START_REF] Ji Westbrook | Stepped-wedge cluster randomised controlled trial to assess the effectiveness of an electronic medication management system to reduce medication errors, adverse drug events and average length of stay at two paediatric hospitals: a study protocol[END_REF]. 
Modification of the prescription unit has also been reported to be one of the CPOE errors [7]. In our study, we noted 30 cases of inappropriate modification of the prescription unit, which could lead to potential administration errors. For example, dalteparine should be prescribed either as 2500 UI anti-Xa/administration or as 1 syringe/administration, according to hospital recommendations. In Pharma®, "syringe" is the default unit for dalteparine. In two cases, "syringe" was modified by the prescriber to "UI anti-Xa", which led to a dose of 1 UI anti-Xa instead of 2500 UI anti-Xa.

Conclusion

We have researched the use of a CPOE in France, specifically with regard to pediatric patients, as they are particularly vulnerable to medical errors. Our work revealed several error types in prescribing for pediatric patients, mainly absence of bodyweight, incorrect frequency of administration and excessive doses. The role of better software design is pivotal to avoiding these errors. Consequently, we decided to make it mandatory to provide bodyweight data on the Pharma® form. In addition to optimizing the quality of CPOE entries, well-designed software, better-trained users, and improved communication among healthcare professionals will reduce errors. Clinical pharmacists should have a key role in educating prescribers to respect the required parameters and in elaborating computerized pediatric protocols. Finally, communication between healthcare actors can be improved by using CPOE system tools, such as comments and notes. These interventions can be rapidly shared when using this computerized method, helping to secure the ordering system and to improve the quality of healthcare management. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Figure 1: Characteristics of patients admitted into the pediatric department and registered into Pharma® during the study period.

Figure 2: Frequency of pharmacist interventions regarding errors encountered on the computerized prescriptions.

Table 1: Total pharmacist interventions regarding errors encountered on the computerized prescriptions.
Subject of PharmInt | Frequency of PharmInts (%)
Incorrect frequency of administration | 60 (19.9)
Excessive dose | 53 (17.6)
No weight available (a) | 47 (15.6)
No weight available (b) | 48 (15.9)
Inappropriate modification of prescription unit | 30 (9.9)
Incorrect dosage form | 25 (8.3)
Prescription of a contraindicated drug | 12 (4.0)
Need for treatment monitoring | 10 (3.3)
Presence of drug-drug interaction | 10 (3.3)
Wrong dosing regimen | 4 (1.3)
Absence of treatment duration | 3 (1.0)
Total | 302 (100)
PharmInt: pharmacist intervention. a: bodyweight was supplied to Pharma® due to the PharmInt. b: bodyweight was never supplied to Pharma®, despite the PharmInt.

Table 2: Pharmacist interventions and errors encountered with the prescribed drugs.
INN (commercial name) | Errors
Omeprazole 10 mg tablets (Mopral) | Incorrect dosage form | 1 (0.4)
Trimébutine 50 mg/5 mL injectable solution (non-proprietary name,
PharmInt: pharmacist intervention.
01774847
en
[ "info.info-ro" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01774847/file/Path%20Gaps%2023%20Feb%202018.pdf
Alexandre Dolgui email: [email protected] Mikhail Y Kovalyov email: [email protected] Alain Quilliot email: [email protected] Simple Paths with Exact and Forbidden Lengths Keywords: shortest path problem, longest path problem, exact path length, forbidden path length, computational complexity, approximation Introduction Let G = (V, A) be an arbitrary arc weighted directed multigraph, which we further call a graph, where V = {1, . . . , n} is the set of vertices and A is the set of arcs, |A| = m. Arc a^(r)_ij ∈ A is defined by its head vertex i ∈ V, tail vertex j ∈ V and its copy marker r, if there are several arcs with the same head and the same tail. A length l(a), which is an arbitrary integer number, is associated with each arc a ∈ A, and the length L(P) of a path P from one specified vertex of G to another is the total length of its arcs. A path with no vertex repetition is called simple. We study the following problems Exact Path(α), Path Gaps, Short Path Gaps and Long Path Gaps of finding a simple path from a given vertex s to a given vertex t in the graph G. Let integer numbers f_i, f̄_i, i = 1, . . . , k, be given such that L⁻_Σ ≤ f_1 ≤ f̄_1 < f_2 ≤ f̄_2 < · · · < f_k ≤ f̄_k ≤ L⁺_Σ, where L⁻_Σ = Σ_{a∈A, l(a)≤0} l(a) is the total non-positive arc length and L⁺_Σ := Σ_{a∈A, l(a)>0} l(a) is the total positive arc length. Here we assume that any sum is equal to zero if it is taken over an empty set. Denote [f, f̄] = {f, f + 1, . . . , f̄}. Problem Exact Path(α): Given a number α, find a simple path P from vertex s to vertex t in the graph G such that its length L(P) = α. Problem Path Gaps: Find a simple path P from vertex s to vertex t in the graph G such that the length L(P) ∉ [f_1, f̄_1] ∪ · · · ∪ [f_k, f̄_k]. We call the intervals [f_i, f̄_i], i = 1, . . . , k, forbidden (path length) gaps. The length of a path should not fall into these gaps. We also study three special cases of the problem Path Gaps: the case k = 2 of two forbidden gaps [f_1, f̄_1] and [f_2, f̄_2], the case k = 1 of a single forbidden gap [f_1, f̄_1], and a sub-case of the latter case, in which a single path length α is forbidden, that is, f_1 = f̄_1 = α. We denote these cases as Path-2-Gaps, Path-1-Gap and Path No(α), respectively. Problem Short (Long) Path Gaps: Differs from Path Gaps in that a shortest (longest) simple path with length not from the forbidden gaps has to be found. Occasionally, we will assume that graph G is undirected, which will be explicitly indicated. Problems Path Gaps and Short Path Gaps appear in route planning from one point of a network to another such that the arrival time does not fall into the given time intervals when the service required at the destination point is not available. This situation is similar to that in the vehicle routing problems with time windows, see, for example, Bräysy and Gendreau [START_REF] Bräysy | Vehicle routing problem with time windows, part I: Route construction and local search algorithms[END_REF] for the formulation and a survey of the results for these problems. The problem Long Path Gaps can be used for modeling a situation in which profits are collected over the route segments while traveling from one point of a network to another. If the total profit is less than a given value B > 0, then it is not worth traveling. The problem is to find a route such that the total profit is maximized and does not fall into the forbidden gap [0, B].
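To make the problem definitions above concrete, here is a minimal brute-force sketch in Python (our own illustration; the helper names and the toy instance are assumptions, not part of the paper). It enumerates all simple s-t paths of a small digraph and keeps those whose length avoids every forbidden gap, which is exactly the feasibility test behind Path Gaps; Short and Long Path Gaps would then take the minimum or maximum over the surviving lengths.

```python
def simple_paths(adj, s, t, path=None, length=0):
    """Yield (path, length) for every simple path from s to t.

    adj maps a vertex to a list of (successor, arc_length) pairs;
    parallel arcs appear as repeated pairs.
    """
    if path is None:
        path = [s]
    if s == t:
        yield list(path), length
        return
    for v, l in adj.get(s, []):
        if v not in path:                      # keep the path simple
            path.append(v)
            yield from simple_paths(adj, v, t, path, length + l)
            path.pop()

def path_gaps(adj, s, t, gaps):
    """Return the simple s-t paths whose length lies in no forbidden gap."""
    feasible = lambda L: all(not (lo <= L <= hi) for lo, hi in gaps)
    return [(p, L) for p, L in simple_paths(adj, s, t) if feasible(L)]

# Toy instance with forbidden gaps [2, 3] and [6, 7]:
adj = {1: [(2, 1), (3, 4)], 2: [(3, 1), (4, 3)], 3: [(4, 1)]}
print(path_gaps(adj, 1, 4, [(2, 3), (6, 7)]))   # keeps lengths 4 and 5, rejects 3
```

This exponential-time enumeration is only meant to pin down feasibility; the point of the paper is precisely that, on DAGs, the same information can be obtained pseudo-polynomially by dynamic programming.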
It is clear that the forbidden gap constraint is redundant if the goal is to find an exact solution. However, the problem is NP-hard in general, and the gap constraint is essential for an approximate solution. Vehicle routing problems with profits have been studied by Archetti et al. [START_REF] Archetti | Chapter 10: Vehicle routing problems with profits[END_REF]. In a situation of goods collection over segments of a path and their transportation in containers, there can be a requirement of container capacity utilization, leading to forbidden gaps. For example, assume that the capacity of any container is 30 goods, and each container is required to be filled with at least 25 goods. A fixed number of goods associated with a path segment has to be collected if this segment is visited. In this situation, the total number of collected goods should fall into the intervals [25,30], [50,60], [75,90], [100,120], [125,∞), and the forbidden intervals are [0,24], [31,49], [61,74], [91,99] and [121,124]. Feasible loads of the containers can be decided after a feasible path has been determined. To the best of our knowledge, no literature exists on problems with forbidden objective value gaps. The only exception is our recent paper [START_REF] Dolgui | Knapsack problem with objective value gaps[END_REF] on a 0-1 knapsack problem with forbidden objective function values. On the other hand, several exact value (or exact cost) combinatorial problems have been studied in the literature, which concern the existence of a solution with a given objective function value. The exact value assignment problem has been studied by Papadimitriou and Yannakakis [START_REF] Papadimitriou | The complexity of restricted spanning tree problems[END_REF], who proved its NP-completeness in the ordinary sense, and Karzanov [START_REF] Karzanov | Maximum matching of given weight in complete and complete bipartite graphs[END_REF], who developed a polynomial time algorithm for the case of 0-1 costs. Pseudo-polynomial time algorithms for the exact value spanning tree problem, the exact value perfect matching problem on planar graphs, the exact value cycle sum problem on planar directed graphs and the exact value cut problem on planar or toroidal graphs have been presented by Leclerc [START_REF] Leclerc | Polynomial time algorithms for exact matching problems[END_REF] and Barahona and Pulleyblank [START_REF] Barahona | Exact aborescences, matchings and cycles[END_REF]. A number of computational complexity and algorithmic results for the exact weight (maximum) independent set problem on various classes of graphs have been obtained by Milanic and Monnot [START_REF] Milanic | The exact weighted independent set problem in perfect graphs and related graph classes[END_REF]. Computational complexity of the exact weight subgraph problems, in which the number of vertices of the subgraph is a constant, has been studied by Vassilevska and Williams [START_REF] Vassilevska | Finding, minimizing, and counting weighted subgraphs[END_REF] and Abboud and Lewi [START_REF] Abboud | Exact weight subgraphs and the k-sum conjecture[END_REF]. Lopéz et al.
[START_REF] Lopéz | Solving the target-value search problem[END_REF] studied the problem Exact Path(α), which they showed to be NP-complete and suggested modifications of the goal search (A * ) and bidirectional search algorithms for the solution. The problem Exact Path(α) can also be formulated as a special case of a constrained path problem, which is to maximize path length, subject to the constraint that the path length does not exceed a certain value, or to minimize the path length, subject to the constraint that the path length is at least a certain value. There exists a bulk of the literature on the constrained and bi-objective path problems, see, for example, Joksch [START_REF] Joksch | The shortest route problem with constraints[END_REF], Dial [12], Hansen [START_REF] Hansen | Bicriterion path problems, in: Theory and applications[END_REF], Aneja et al. [START_REF] Aneja | Shortest chain subject to side constraints[END_REF], Desrochers [START_REF] Desrochers | La fabrication d'horaires de travail pour les conducteurs d'autobus par une methode de generation de colonnes[END_REF], Warburton [START_REF] Warburton | Approximation of Pareto optima in multiple-objective, shortest path problems[END_REF], Hassin [START_REF] Hassin | Approximation schemes for the restricted shortest path problem[END_REF], Lorenz and Raz [START_REF] Lorenz | A simple efficient approximation scheme for the restricted shortest path problem[END_REF], Ergun et al. [START_REF] Ergun | An improved FPTAS for the restricted shortest path problem[END_REF], Righini and Salani [START_REF] Righini | Symmetry helps: bounded bi-directional dynamic programming for the elementary shortest path problem with resource constraints[END_REF], Boland et al. [START_REF] Boland | Accelerated label setting algorithms for the elementary resource constrained shortest path problem[END_REF], Garcia [START_REF] Garcia | Resource constrained shortest paths and extensions[END_REF], Tsaggouris and Zaroliagis [START_REF] Tsaggouris | Multiobjective optimization: Improved FPTAS for shortest paths and non-linear objectives with applications[END_REF] and Bruegem et al. [START_REF] Breugem | Analysis of FPTASes for the multiobjective shortest path problem[END_REF]. We will present new observations about the relations between the problem Exact Path(α) and other path problems with the forbidden gaps. The rest of the paper is organized as follows. In the next section, we describe connections of the new problems with the earlier studied problems that provide a fairly complete computational complexity classification of the new problems and some new algorithmic results. Section 3 studies the case in which the graph is acyclic and the arc lengths are non-negative. While the problem Exact Path(α) and any of the problems Path Gaps, Short Path Gaps and Long Path Gaps with at least two forbidden gaps are NP-hard for this special case, an efficient approximation scheme is suggested, which delivers a solution with value close to the optimum but, possibly, violating a gap constraint with a given relative error. Polynomial time algorithms are presented for a more restrictive special case of these problems, in which forbidden gaps are polynomially bounded but the arc lengths are not. The paper concludes with a table of the obtained results and suggestions for future research. 
Connections with the earlier studied problems Observe that if graph G contains directed cycles, then the problems Exact Path(α) and Path-1-Gap are NP-complete in the strong sense even if the arc lengths are all equal to one, because the NP-complete problem Hamiltonian Path (Garey and Johnson [START_REF] Garey | Computers and intractability: a guide to the theory of NPcompleteness[END_REF]) reduces to Exact Path(α) by setting α = n − 1 and it reduces to Path-1-Gap by setting [f_1, f̄_1] = [1, n − 2]. We further abbreviate directed acyclic (multi)graph to DAG. Assume that G is a DAG. In this case, Exact Path(α) is NP-complete in the ordinary sense, as was mentioned by Lopéz et al. [START_REF] Lopéz | Solving the target-value search problem[END_REF]. Furthermore, it is pseudo-polynomially solvable by the following folkloric dynamic programming algorithm, denoted as DP-All-Lengths. This algorithm scans vertices in a topological order (cf. Cormen et al. [START_REF] Cormen | Introduction to algorithms[END_REF]) and constructs paths from vertex s to the successor vertices j. A state (j, f) is associated with a path from vertex s to vertex j, where f is the length of this path. If a complete path P_0 from vertex s to vertex t with length L(P_0) goes via a vertex j and a sub-path of P_0 is in the state (j, f), then any (incomplete) path in this state can be extended to a complete path P from s to t with the same length L(P) = L(P_0). Algorithm DP-All-Lengths recursively generates the states (j, f). Let S(j) denote the set of states (j, f) generated for vertex j, which differ by the path lengths f. The initialization is S(s) = {(s, 0)}. Vertex s is labeled. In the general recursion step, a vertex j is identified whose predecessor vertices i are all labeled. Since G is a DAG, such a vertex always exists. The set S(j) of states of the identified vertex is calculated as S(j) := {(j, f + l(a^(r)_ij)) | a^(r)_ij ∈ A, (i, f) ∈ S(i)}. After that, vertex j is labeled and the next vertex with all predecessor vertices labeled is identified. Ultimately, the set S(t) is generated, and the corresponding paths from s to t are found by backtracking. If a state (t, α) ∈ S(t), then the problem Exact Path(α) has a solution. Otherwise, it has no solution. The algorithm DP-All-Lengths can be implemented to run in O(m(|L⁻_Σ| + L⁺_Σ)) time.
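The state recursion of DP-All-Lengths described above can be sketched as follows; this is our own minimal Python illustration (function names and the toy instance are assumptions), which collects the achievable path lengths but omits the backtracking step that recovers an actual path.

```python
from collections import defaultdict
from graphlib import TopologicalSorter

def dp_all_lengths(arcs, s, t):
    """arcs: iterable of (i, j, l) triples of a DAG; parallel arcs are allowed.

    Returns the set {f : (t, f) is a state of DP-All-Lengths}, i.e. the lengths
    of all paths from s to t.
    """
    preds = defaultdict(list)            # j -> list of (i, l(a)) over arcs a = (i, j)
    deps = defaultdict(set)              # j -> set of predecessor vertices
    for i, j, l in arcs:
        preds[j].append((i, l))
        deps[j].add(i)
    lengths = defaultdict(set)           # the state sets S(j)
    lengths[s] = {0}                     # initialization S(s) = {(s, 0)}
    for j in TopologicalSorter(deps).static_order():
        for i, l in preds[j]:
            lengths[j] |= {f + l for f in lengths[i]}
    return lengths[t]

arcs = [(1, 2, 1), (1, 3, 4), (2, 3, 1), (2, 4, 3), (3, 4, 1)]
print(sorted(dp_all_lengths(arcs, 1, 4)))      # -> [3, 4, 5]
```

Checking whether Exact Path(α) has a solution then amounts to testing whether α belongs to the returned set, and the size of every S(j) is bounded by the number of distinct path lengths, which is where the pseudo-polynomial running time comes from.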
Observe that the problem Exact Path(α) is a special case of the problem Path-2-Gaps: an instance of Exact Path(α) is an instance of Path-2-Gaps with gaps [f_1, f̄_1] = [L⁻_Σ, α − 1] and [f_2, f̄_2] = [α + 1, L⁺_Σ]. Therefore, Path-2-Gaps is NP-complete in the strong sense for graphs with directed cycles and unit arc lengths and it is NP-complete in the ordinary sense for DAGs with non-negative arc lengths. Furthermore, any instance of Path-2-Gaps can be solved by solving the same instance of Exact Path(α) for all α which are not from the gaps. Therefore, Path-2-Gaps is pseudo-polynomially solvable for DAGs, and it is polynomially solvable for DAGs with polynomially bounded absolute values of arc lengths. Since the problem Path-2-Gaps is a special case of any of the problems Path Gaps, Short Path Gaps and Long Path Gaps, the latter problems are NP-hard in the strong sense for graphs with directed cycles and unit arc lengths and they are NP-hard in the ordinary sense for DAGs with non-negative arc lengths. Let there be an algorithm with time complexity O(T) for the problem Exact Path(α). By running this algorithm for α = L⁻_Σ, L⁻_Σ + 1, . . . , L⁺_Σ, any of the problems Path Gaps, Short Path Gaps and Long Path Gaps can be solved in O(T(|L⁻_Σ| + L⁺_Σ)) time. If G is a DAG, then, by scanning path lengths in the set S(t) generated by the algorithm DP-All-Lengths, and selecting appropriate lengths and corresponding paths, all the problems Path Gaps, Short Path Gaps and Long Path Gaps can be solved in O(m(|L⁻_Σ| + L⁺_Σ)) time. Below more results are established for the problems Path-1-Gap and Path No(α). Observation 1 Problem Path-1-Gap reduces to solving both classic shortest and longest path problems. Proof. Note that an instance of the problem Path-1-Gap and the corresponding instance of any of the two above mentioned classic path problems have a solution only if a path from s to t exists. Assume that this is the case. Let L_short and L_long denote the lengths of the shortest and longest simple paths, respectively, in the classic problems. If [L_short, L_long] ⊆ [f_1, f̄_1], then the instance of the problem Path-1-Gap has no solution. Otherwise, if L_short < f_1, then the shortest simple path is a solution of the instance of the problem Path-1-Gap, and if L_long > f̄_1, then the longest simple path is a solution of this instance. Since both classic shortest path and longest path problems can be solved in O(n + m) time for DAGs (cf. Cormen et al. [10]), the following corollary follows. Corollary 1 If G is a DAG, then the problem Path-1-Gap, and hence the problem Path No(α), can be solved in O(n + m) time. Recently, several publications appeared that study the so-called Next-to-Shortest Path problem, which asks for a path from s to t in a graph G of the second shortest length. Lalgudi and Papaefthymiou [START_REF] Lalgudi | Computing strictly-second shortest paths[END_REF] proved that this problem is strongly NP-complete if graph G contains directed cycles and arc lengths are non-negative. It also follows from their results that the problem is solvable in O(n + m) time if G is a DAG in this case. Computational complexity is open if the graph contains directed cycles and the arc lengths are strictly positive. The latter case is solvable in O(n^3) time for planar graphs, as shown by Wu and Wang [START_REF] Wu | The next-to-shortest path problem on directed graphs with positive edge weights[END_REF]. If the graph is undirected, then the problem is polynomially solvable. For strictly positive edge lengths, algorithms with running times O(n^3 m), O(n^3) and O(n^2) were successively presented by Krasikov and Noble [START_REF] Krasikov | Finding next-to-shortest paths in a graph[END_REF], Li et al. [START_REF] Li | Improved algorithm for finding next-to-shortest paths[END_REF] and Kao et al. [START_REF] Kao | A quadratic algorithm for finding next-to-shortest paths in graphs[END_REF]. For non-negative edge lengths, an O(n^6 m) time algorithm is presented by Zhang and Nagamochi [START_REF] Zhang | The next-to-shortest path in undirected graphs with nonnegative weights[END_REF]. Below we will assume that the path in the Next-to-Shortest Path problem is required to be simple. Observation 2 Problem Path No(α) reduces to solving both the classic shortest path problem and the Next-to-Shortest Path problem. Proof. Similar to the proof of Observation 1, assume without loss of generality that a path from s to t exists. Let L_short and L_short + δ, δ > 0, denote the lengths of the shortest and next-to-shortest simple paths, respectively. If L_short ≠ α, then the shortest simple path is a solution of the instance of the problem Path No(α). If L_short = α, then the next-to-shortest simple path is a solution of this instance. Corollary 2 The following special cases of the problem Path No(α) are polynomially solvable: • if graph G is directed and planar and arc lengths are strictly positive, then Path No(α) can be solved in O(n^3) time [START_REF] Wu | The next-to-shortest path problem on directed graphs with positive edge weights[END_REF]; • if graph G is undirected and arc lengths are strictly positive, then Path No(α) can be solved in O(n^2) time [START_REF] Kao | A quadratic algorithm for finding next-to-shortest paths in graphs[END_REF]; • if graph G is undirected and arc lengths are non-negative, then Path No(α) can be solved in O(n^6 m) time [START_REF] Zhang | The next-to-shortest path in undirected graphs with nonnegative weights[END_REF]. Let us now show that for graphs with directed cycles and non-negative arc lengths the problem Path No(α) is difficult. Observation 3 If graph G contains directed cycles and the arc lengths are all equal to 0 but one arc length is equal to 1, then any of the problems Exact Path(α), Path No(α), Path-1-Gap, Path-2-Gaps, Path Gaps, Short Path Gaps and Long Path Gaps is NP-complete in the strong sense. Proof. Fortune et al. [START_REF] Fortune | The directed subgraph homeomorphism problem[END_REF] proved that the problem Two Disjoint Paths is NP-complete in the strong sense. In this problem, given vertices s_1, t_1, s_2 and t_2 of a directed graph with directed cycles, the question concerns the existence of a simple path from s_1 to t_1 and a simple path from s_2 to t_2 such that these two paths have no common vertex. It can easily be verified that an instance of Two Disjoint Paths has a solution if and only if Exact Path(α) for α = 1 (respectively, Path No(α) for α = 0) has a solution for the same graph as in Two Disjoint Paths but with an extra arc (t_1, s_2) whose length is equal to one and with all other arc lengths equal to zero. Therefore, Exact Path(α) for α = 1 and Path No(α) for α = 0 are NP-complete in the strong sense. The other problems mentioned in the observation are generalizations of either the problem Path No(α) or the problem Exact Path(α) (for Path-2-Gaps), and therefore, they cannot be easier. It is worth noting that the problem Two Disjoint Paths in an undirected graph is solvable in almost linear time by the algorithm of Tholey [START_REF] Tholey | Solving the 2-disjoint paths problem in nearly linear time[END_REF]. DAGs with non-negative arc lengths In this section, we assume that G is a DAG and the arc lengths are non-negative integer numbers. The latter assumption implies L⁻_Σ = 0. For this special case, the problem Exact Path(α) and any of the problems Path Gaps, Short Path Gaps and Long Path Gaps with at least two forbidden gaps are NP-hard, as indicated in Section 2. In the following sub-section, an approximation scheme is described for this special case, which delivers a solution with value close to the optimum with a given relative error ε, but, perhaps, inside a forbidden gap. In Sub-section 3.2, we suggest polynomial time algorithms for a more restrictive special case of these problems, in which the forbidden gap values are polynomially bounded but the arc lengths are not. Approximation Let ε be an arbitrary given number such that 0 < ε ≤ 1. We first suggest a family of algorithms (approximation scheme), each of which is specified by ε and a positive integer number β, and is denoted as DP_ε,β, such that, for any instance of the problem Exact Path(α) with α ∈ {0, 1, . . . , β} which has a solution, algorithm DP_ε,β finds in O(m/ε) time a (simple) path from s to t in the DAG G with non-negative integral arc lengths, whose length F^(ε,α,β) satisfies the relations F^(ε,α,β) ≤ β and |α − F^(ε,α,β)| ≤ εβ. Algorithm DP_ε,β is a modification of the algorithm DP-All-Lengths. The modification concerns the generation of the sets of states S(j). After the set S(j) = {(j, f + l(a^(r)_ij)) | a^(r)_ij ∈ A, (i, f) ∈ S(i)} has been produced, states (j, f) with path lengths f ≥ β + 1 are excluded from it and the set S^(1)(j) = {(j, f) | (j, f) ∈ S(j), f ≤ β} is generated.
Further, the set S^(1)(j) is partitioned into disjoint subsets X^(1)(j), X^(2)(j), . . . , X^(u)(j) such that |f_1 − f_2| ≤ εβ for any states (j, f_1) and (j, f_2) from the same subset. This partitioning can be done in O(|S^(1)(j)|) time by calculating the value ⌊f/(εβ)⌋ for each (j, f) ∈ S^(1)(j) and assigning states with the same value ⌊f/(εβ)⌋ to the same subset X^(h)(j) such that h = ⌊f/(εβ)⌋. The number of the subsets does not exceed O(1/ε). Since empty subsets are of no interest, they are removed and the non-empty subsets are re-numbered X^(1)(j), X^(2)(j), . . . , X^(u)(j). In each non-empty subset X^(h)(j), the minimum and maximum numbers, f^(h,j)_min and f^(h,j)_max, which are the same number if |X^(h)(j)| = 1, are selected, h = 1, . . . , u. Thus, for any state (j, f) ∈ S^(1)(j), there is an index h ∈ {1, . . . , u} such that f^(h,j)_min ≤ f ≤ f^(h,j)_max and f^(h,j)_max − f^(h,j)_min ≤ εβ. Finally, the set S^(2)(j) = {(j, f^(h,j)_min), (j, f^(h,j)_max) | h = 1, . . . , u} is generated, the original set S(j) is updated such that S(j) := S^(2)(j), vertex j is labeled, and the next vertex with all predecessor vertices labeled is identified. We have |S^(2)(j)| ≤ 2u = O(1/ε). Therefore, |S^(1)(j)| ≤ Σ_{i∈K_j} |S^(2)(i)| = O(|K_j|/ε), where K_j is the set of vertices immediately preceding j in G, and |K_j| is the indegree of j, for any j ∈ V. Recall that the running time of the optimal algorithm DP-All-Lengths is O(m(|L⁻_Σ| + L⁺_Σ)), where m = Σ_{j∈V} |K_j| and O(|L⁻_Σ| + L⁺_Σ) is an upper bound on the number of distinct path lengths. Similarly, the running time of the approximation algorithm DP_ε,β can be evaluated as O(Σ_{j∈V} |K_j| U), where U is an upper bound on the number of distinct "rounded" path lengths associated with the same vertex. We have U ≤ O(1/ε), therefore, DP_ε,β can be implemented to run in O(m/ε) time. We next prove an important property of the algorithm DP_ε,β. Theorem 1 If the problem Exact Path(α), α ≤ β, has a solution, then there exists a state (t, F^(ε,α,β)) ∈ S(t) in the algorithm DP_ε,β and the corresponding path with length F^(ε,α,β) such that F^(ε,α,β) ≤ β and |α − F^(ε,α,β)| ≤ εβ. Proof. Let path (j_1, . . . , j_r) be a solution of Exact Path(α), where j_1 = s and j_r = t. The proof can be given by an induction on j_i. Assume that, for a state (j_i, f) in DP-All-Lengths, 1 ≤ i ≤ r − 1, preceding the final state (t, α), there exists a state (j_i, f_1) in the algorithm DP_ε,β such that f_1 ≤ f ≤ β and |f_1 − f| ≤ εβ. This assumption is satisfied for j_i = s. Then there exist states (j_i, f^(h,j_i)_min) and (j_i, f^(h,j_i)_max) in DP_ε,β such that f^(h,j_i)_min ≤ f_1 ≤ f^(h,j_i)_max ≤ β and f^(h,j_i)_max − f^(h,j_i)_min ≤ εβ. These two relations, together with f_1 ≤ f ≤ β and |f_1 − f| ≤ εβ, imply that for either f_2 = f^(h,j_i)_min or f_2 = f^(h,j_i)_max we have f_2 ≤ f ≤ β and |f_2 − f| ≤ εβ. The state (j_i, f_2) is generated in DP_ε,β. Further, if the state (j_i, f) is extended to a state (j_{i+1}, f + l), preceding the final state (t, α), in DP-All-Lengths, then the state (j_i, f_2) is extended to the state (j_{i+1}, f_2 + l) in DP_ε,β such that f_2 + l ≤ f + l and |(f_2 + l) − (f + l)| ≤ εβ. Repetition of this inductive argument for all vertices j_1, . . . , j_r completes the proof. We now describe our ultimate approximation scheme {E_ε}. For any given ε, 0 < ε ≤ 1, algorithm E_ε consists of applications of the algorithm DP_ε/2,β for β ∈ W := {0, (1 + ε/2), (1 + ε/2)^2, . . . , (1 + ε/2)^w, L⁺_Σ},
where w is defined to satisfy (1 + ε/2)^w < L⁺_Σ and (1 + ε/2)^(w+1) ≥ L⁺_Σ. By taking the logarithm with base two of both sides of the latter relation and assuming that w is a real number, we obtain that this relation is satisfied for w = (log_2 L⁺_Σ)/log_2(1 + ε/2) − 1. Since log_2(1 + ε/2) ≥ ε/2 for 0 < ε ≤ 2, we know that w ≤ 2(log_2 L⁺_Σ)/ε − 1. Thus, |W| ≤ 2(log_2 L⁺_Σ)/ε, and algorithm E_ε runs in O((m/ε^2) log_2 L⁺_Σ) time. The following theorem establishes properties of the solutions delivered by the algorithm E_ε. Theorem 2 For any instance of any of the problems Path Gaps, Short Path Gaps and Long Path Gaps on a DAG with non-negative arc lengths that has a solution, algorithm E_ε finds a solution (path from s to t) whose length F^(ε) satisfies |F^(ε) − F| ≤ εF, where F ∈ {F_0, F*}, F_0 is the value of any feasible solution of the problem Path Gaps and F* is the optimal value in the problems Short Path Gaps and Long Path Gaps. Proof. Observe that, for any value α, 0 ≤ α ≤ L⁺_Σ, there exists a value β ∈ W such that α ≤ β ≤ α(1 + ε/2). By Theorem 1, for these α and β algorithm DP_ε/2,β will find a solution with value F^(ε/2,α,β) for the problem Exact Path(α), which satisfies |F^(ε/2,α,β) − α| ≤ εβ/2. Taking into account β ≤ α(1 + ε/2) and 0 < ε ≤ 1, we obtain |F^(ε/2,α,β) − α| ≤ ε(1 + ε/2)α/2 ≤ εα. By substituting α with F, we see that the statement of the theorem is satisfied by setting F^(ε) = F^(ε/2,F,β_0), where β_0 is the number from the set W such that F ≤ β_0 ≤ F(1 + ε/2). It follows from Theorem 2 and the definition of the absolute value that if there exists a feasible solution of the problem Path Gaps or an optimal solution of any of the problems Short Path Gaps and Long Path Gaps with value F, F ∈ {F_0, F*}, such that f̄_i + 1 ≤ F ≤ f_{i+1} − 1, i ∈ {1, . . . , k}, then (1 − ε)(f̄_i + 1) ≤ (1 − ε)F ≤ F^(ε) ≤ F(1 + ε) ≤ (1 + ε)(f_{i+1} − 1), which means that ε is a guaranteed relative error of the gap constraint violation. This type of approximation with respect to the bounding constraints has been used, for example, by Brucker et al. [START_REF] Brucker | Batch scheduling with deadlines on parallel machines[END_REF] and Cheng et al. [START_REF] Cheng | Bicriterion single machine scheduling with resource dependent processing times[END_REF] for NP-hard scheduling problems. By a similar argument, if the optimal value F* of any of the problems Short Path Gaps and Long Path Gaps is at least εF* units away from any forbidden value, then the algorithm E_ε is guaranteed to find a feasible solution, in which case {E_ε} is a Fully Polynomial Time Approximation Scheme (FPTAS), see, for example, the FPTASs of Hansen [START_REF] Hansen | Bicriterion path problems, in: Theory and applications[END_REF], Hassin [START_REF] Hassin | Approximation schemes for the restricted shortest path problem[END_REF], Lorenz and Raz [START_REF] Lorenz | A simple efficient approximation scheme for the restricted shortest path problem[END_REF] and Tsaggouris and Zaroliagis [START_REF] Tsaggouris | Multiobjective optimization: Improved FPTAS for shortest paths and non-linear objectives with applications[END_REF] for the constrained and multi-objective shortest path problems. The two-stage approach (algorithm DP_ε,β first and then its application for β ∈ W) makes the approximation scheme {E_ε} different from the existing FPTASs.
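The two ingredients of the scheme described above, the merging of states whose lengths differ by at most εβ inside DP_ε,β and the geometric grid W of trial values β used by E_ε, can be sketched as follows. This is our own illustrative Python code, not the authors' implementation; in particular, the rounding of the grid points to integers is an assumption.

```python
from math import floor

def trim_states(lengths, eps, beta):
    """From each bucket of width eps*beta keep only the smallest and largest
    length: this builds the sets S^(2)(j) of the algorithm DP_{eps,beta}."""
    buckets = {}
    for f in lengths:
        if f > beta:                      # states with f >= beta + 1 are dropped
            continue
        h = floor(f / (eps * beta)) if beta > 0 else 0
        lo, hi = buckets.get(h, (f, f))
        buckets[h] = (min(lo, f), max(hi, f))
    trimmed = set()
    for lo, hi in buckets.values():
        trimmed.update((lo, hi))
    return trimmed                        # at most O(1/eps) values remain

def grid_W(eps, L_sigma_plus):
    """Geometric grid of trial values beta used by the scheme E_eps."""
    W = {0, L_sigma_plus}
    b = 1.0
    while b < L_sigma_plus:
        W.add(round(b))                   # integer rounding of grid points is illustrative
        b *= 1 + eps / 2
    return sorted(W)

print(sorted(trim_states({3, 4, 5, 11, 12, 18, 60}, eps=0.2, beta=50)))  # -> [3, 5, 11, 18]
print(grid_W(0.5, 100))
```

Running the trimmed dynamic program once for every β in W, with error parameter ε/2, is exactly the two-stage construction that yields the running time bound stated in Theorem 3 below.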
Note that recognizing the fact that the optimal value F* is at least εF* units away from any forbidden value for the problem Short Path Gaps with two forbidden intervals [0, α − εα] and [α + εα, L⁺_Σ] is as difficult as the problem Exact Path(α), which is NP-complete, see Section 2. The results of this sub-section are summarized in the following theorem. Theorem 3 If G is a DAG and arc lengths are non-negative, then the problems Path Gaps, Short Path Gaps and Long Path Gaps possess an approximation scheme {E_ε} with running time O((m/ε^2) log_2 L⁺_Σ), which finds a solution, possibly infeasible, with any given relative error ε with respect to the optimal objective value and the gap constraints. The type of approximation in this sub-section is related to the concept of resource augmentation, which is used in the analysis of approximation algorithms, see Lucarelli et al. [START_REF] Lucarelli | Online nonpreemptive scheduling in a resource augmentation model based on duality[END_REF]. According to this concept, an approximation algorithm is allowed to find a solution in a domain which is broader than the feasible domain of the original problem. Polynomially bounded forbidden path lengths Let I denote the input length of any of the problems Path Gaps, Short Path Gaps and Long Path Gaps in binary encoding, I = O(Σ_{a∈A} log_2 |l(a)| + Σ_{i=1}^{k} (log_2 |f_i| + log_2 |f̄_i|)), and let P_I denote a polynomial of I. If the absolute values |l(a)| of the arc lengths are bounded by the polynomial P_I, then the problems Path Gaps, Short Path Gaps and Long Path Gaps are solvable by the algorithm DP-All-Lengths in O(m^2 P_I) time for DAGs, because L⁻_Σ = 0 and L⁺_Σ ≤ mP_I in this case. In this sub-section, we study the case in which f̄_k ≤ P_I. We stress that the arc lengths are not assumed to be polynomially bounded. This case cannot be solved by a direct application of the algorithm DP-All-Lengths. Let us call an arc short if its length does not exceed P_I. Otherwise, an arc is called long. Recall that the considered graph is a DAG and the arc lengths are non-negative. First of all, we apply the Breadth-First-Search algorithm (cf. Cormen et al. [START_REF] Cormen | Introduction to algorithms[END_REF]) to modify the original graph in O(n + m) time so that every arc belongs to at least one path from s to t. If the new graph contains no long arcs, then the problems Path Gaps, Short Path Gaps and Long Path Gaps can be solved by the algorithm DP-All-Lengths in O(m^2 P_I) time, because L⁻_Σ = 0 and L⁺_Σ ≤ mP_I. Assume that the new graph contains at least one long arc. In this case, an optimal solution of the classic longest path problem is an optimal solution for the problem Long Path Gaps. Indeed, either the former solution includes a long arc, and therefore, it is feasible with respect to the gaps, or its length is at least the length of a long arc, and again, it is feasible with respect to the gaps. Recall that a long arc is present in the new graph and consider now any of the problems Path Gaps and Short Path Gaps. There are two sub-cases to consider with respect to a feasible (respectively, optimal) solution of this problem: 1) this solution includes no long arc, and 2) it includes at least one long arc. For the sub-case 1), remove long arcs from the graph and apply algorithm DP-All-Lengths to find a feasible (for Path Gaps) or optimal (for Short Path Gaps) solution in O(m^2 P_I) time. For the sub-case 2), solve at most m classic shortest path problems, each of which is specified by a long arc a. The length of this arc is re-set to be a sufficiently small number, for example, l(a) := −L⁺_Σ. In this case, any shortest path will necessarily go via the arc a. Denote such a path as P_a. Since arc a is originally long, path P_a is feasible with respect to the gaps. A feasible solution for the problem Path Gaps or an optimal solution for the problem Short Path Gaps is the best solution with respect to the original arc lengths among at most m + 1 solutions: one solution for the sub-case 1) and at most m solutions for the sub-case 2). It can be found in O(m^2 P_I + mn) time. Thus, the case with non-negative arc lengths and f̄_k ≤ P_I is as easy as the case with polynomially bounded absolute values of the arc lengths, and the following theorem holds. Theorem 4 If G is a DAG, arc lengths are non-negative and forbidden path lengths are polynomially bounded such that f̄_k ≤ P_I, then the problem Long Path Gaps is solvable in O(m^2 P_I) time and any of the problems Path Gaps and Short Path Gaps are solvable in O(m^2 P_I + mn) time.
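The arc re-weighting used in sub-case 2) above, forcing a shortest path through a chosen long arc by giving it length −L⁺_Σ, can be sketched as follows. This is our own illustration (names, the threshold and the toy instance are assumptions); it assumes non-negative original lengths and, as in the text, that every arc lies on some path from s to t.

```python
from collections import defaultdict
from graphlib import TopologicalSorter

INF = float("inf")

def dag_shortest(arcs, s, t):
    """Shortest s-t path length in a DAG; negative arc lengths are allowed."""
    preds, deps = defaultdict(list), defaultdict(set)
    for i, j, l in arcs:
        preds[j].append((i, l))
        deps[j].add(i)
    dist = defaultdict(lambda: INF)
    dist[s] = 0
    for j in TopologicalSorter(deps).static_order():
        for i, l in preds[j]:
            if dist[i] + l < dist[j]:
                dist[j] = dist[i] + l
    return dist[t]

def shortest_through_long_arcs(arcs, s, t, threshold):
    """For every long arc (length > threshold), re-set its length to -L_sigma_plus
    so that any shortest path uses it, and return the best original length found."""
    L_plus = sum(l for _, _, l in arcs if l > 0)
    best = INF
    for k, (i, j, l) in enumerate(arcs):
        if l <= threshold:
            continue
        modified = list(arcs)
        modified[k] = (i, j, -L_plus)          # the re-set length from the text
        d = dag_shortest(modified, s, t)
        if d < 0:                              # the returned path really uses arc k
            best = min(best, d + L_plus + l)   # undo the re-setting of arc k
    return best

arcs = [(1, 2, 1), (2, 4, 2), (1, 3, 100), (3, 4, 5)]
print(shortest_through_long_arcs(arcs, 1, 4, threshold=10))   # -> 105, the path 1-3-4
```

Together with one run of DP-All-Lengths on the graph with the long arcs removed (sub-case 1), taking the best of the at most m + 1 candidates gives the bound of Theorem 4.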
Conclusions and suggestions for future research The main results of this paper are summarized in Table 1. There, "NP-h" and "sNP-h" abbreviate "NP-hard" and "strongly NP-hard", and "SPP" and "LPP" abbreviate "shortest path problem" and "longest path problem". In the future, it is interesting to study path problems with exact and forbidden lengths for various specific graph classes.
01666875
en
[ "shs.gestion" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01666875/file/Gerke%20%26%20al%2C%20SMR%2C%202018%20PP.pdf
Anna Gerke Kathy Babiak Geoff Dickson Michel Desbordes Developmental processes and motivations for linkages in cross-sectoral sport clusters Authors Keywords: interorganisational relationships, interorganisational networks, sport cluster, cross-sectoral Interorganisational linkages are a widely studied topic in sport management. However, most researchers focus on public or non-profit organisations and analyse one focal organisation rather than a network of interrelated organisations. The purpose of this study was to address both of these shortcomings by investigating interorganisational linkages in sport clusters, a type of cross-sectoral network. The authors address three main questions: (a) what is the nature of interorganisational linkages in sport clusters; (b) how do linkages in sport clusters develop; and (c) what are the organisational motivations for creating or joining linkages in sport clusters? A multiple case study approach explores two sailing clusters in France and New Zealand. Results show that interorganisational relationships tend to be formalised, while interorganisational networks tend to be informal. A circular development process from formal relationships to formal networks via informal relationships and networks was detected. Reciprocity is the most prevalent motive for the development of all types of interorganisational linkages. This research contributes to sport management practice by showcasing the potential multitude and variety of interorganisational linkages in a cross-sectoral sport context, which are foundations for cooperation and collaboration. The theoretical contribution lies in conceptualising the IOR development process and different motivational patterns as antecedents. Introduction Sport systems are complex and often vary in form, structure, and purpose across different countries. The actors within sport systems typically include for-profit organisations (e.g., sport equipment firms), non-profit organisations (e.g., amateur sport clubs), public organisations (e.g., Ministries of Sport), governing bodies (e.g., national sport federations), and unorganised stakeholders (e.g., customers of a sport brand) [START_REF] Petry | Sport systems in the countries of the European Union: Similarities and differences[END_REF][START_REF] Shilbury | Considering future sport delivery systems[END_REF]. Previous research on sport systems focuses on policy issues in elite and professional sports [START_REF] De Bosscher | Explaining international sporting success: An international comparison of elite sport systems and policies in six countries[END_REF][START_REF] Dickson | League expansion and interorganisational power[END_REF][START_REF] Dickson | Multi-level governance in an international strategic alliance[END_REF], governance aspects in non-profit sport organisations [START_REF] Ferkins | The role of the board in building strategic capability: Towards an integrated model of sport governance research[END_REF][START_REF] Inglis | Roles of the board in amateur sport organizations[END_REF], or the increased professionalisation of nonprofit sport organisations [START_REF] Macris | Belief, doubt, and legitimacy in a performance system: National sport organization perspectives[END_REF].
Few sport management researchers have focused on the group of for-profit organisations that manufacture sport equipment, despite their relevance to the development and commercialisation of sport [START_REF] Slack | The social and commercial impact of sport, the role of sport management[END_REF]. While the literature on interorganisational relationships (IORs) in sport is growing [START_REF] Misener | Understanding capacity through the processes and outcomes of interorganizational relationships in nonprofit community sport organizations[END_REF][START_REF] Wäsche | Interorganizational cooperation in sport tourism: A social network analysis[END_REF], the scholarly focus has primarily been on discrete cases of focal organisations and their partners [START_REF] Dickson | League expansion and interorganisational power[END_REF][START_REF] Frisby | The organizational dynamics of under-managed partnerships in leisure service departments[END_REF]. The management of linkages between organisations involved in sport is increasingly important and complex due to the heterogeneity of activities, goals, and outcomes. Researchers have examined the motives for IORs in cross-sector relationships in elite sport (e.g., access to resources, legitimacy seeking, reciprocity, and strategic positioning) and also in community sport development programs [START_REF] Misener | Understanding capacity through the processes and outcomes of interorganizational relationships in nonprofit community sport organizations[END_REF]. Scholars have also explored the challenges of balancing competition and collaboration [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Babiak | Challenges in multiple cross-sector partnerships[END_REF][START_REF] Marlier | Capacity building through cross-sector partnerships: A multiple case study of a sport program in disadvantaged communities in Belgium[END_REF]. Other researchers have focused on IORs and key management practices in professional sport (Cousens, Babiak, & Slack, 2000) and across municipal recreation. Given that collaboration plays a central role in the delivery of activities produced by different cluster members, a deeper understanding of the drivers of this form of structuring merits further investigation. To that end, in this study, we address three main questions: (a) what is the nature of interorganisational linkages in sport clusters; (b) how do linkages in sport clusters develop; and (c) what are the organisational motivations for creating or joining linkages in sport clusters? Conceptual Framing of Interorganisational Linkages IORs are established through interactions or transactions between two organisations with the common aim of serving mutually beneficial purposes [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. The simplest IOR is an economic transaction; however, this type of IOR is rarely subject to IOR research. We acknowledge simple transactional IORs as a first level of relationships, and focus on repeated or regular interactions between the same parties (transactional or other) that are based on trust and collaboration aiming at mutually beneficial purposes. Interorganisational networks (IONs) develop as soon as two IORs are linked; hence, three or more organisations are involved [START_REF] Dickson | League expansion and interorganisational power[END_REF][START_REF] Provan | Interorganizational networks at the network level: A review of the empirical literature on whole networks[END_REF]. Similar to IORs, the simplest form of IONs is based on economic transactions.
However, also in terms of IONs, we focus on mid- or long-term, collaborative or cooperative, trust-based IONs with a shared goal or mutually beneficial purposes [START_REF] Wäsche | Regional Sports Tourism Networks: A Conceptual Framework[END_REF]. Most existing sport cluster studies simply apply [START_REF] Porter | Clusters and Competition[END_REF] cluster model to a sport context [START_REF] Chetty | On the crest of a wave: The New Zealand boat-building cluster[END_REF][START_REF] Stewart | Cluster theory and competitive advantage: The Torquay surfing experience[END_REF], while some develop the concept of a sport cluster [START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF][START_REF] Shilbury | Considering future sport delivery systems[END_REF]. However, all of these studies focus on determinants and features of cluster development rather than on dynamics and interactions between cluster members. This is where we extend knowledge on sport clusters. The geographical concentration of interconnected companies and associated institutions in one field, usually denominated as industrial districts [START_REF] Marshall | Industry and Trade[END_REF] or clusters [START_REF] Porter | Clusters and Competition[END_REF], provides a rich empirical context to study IORs and IONs (Capo-Vicedo, Exposito-Langa, & Molina-Morales, 2008; [START_REF] Connell | Knowledge integration and competitiveness: A longitudinal study of an industry cluster[END_REF][START_REF] Gomes | Behind innovation clusters: Individual, cultural, and strategic linkages[END_REF]). Formalisation of Interorganisational Linkages Interorganisational linkages are characterised by ambiguity and uncertainty due to different structures, cultures, functional capabilities, cognitive frames, terminologies, management styles, and philosophies. This is especially the case when organisations have different histories, belong to different industries, and possess dissimilar belief systems [START_REF] Vlaar | Coping with problems of understanding in interorganizational relationships: Using formalization as a means to make sense[END_REF]. [START_REF] Dana | Evolution de la coopétition dans un cluster: le cas de Waipara dans le secteur du vin[END_REF] argue that formalisation strengthens collaborations and creates a collaborative spirit. The formalisation of interorganisational linkages also reduces misunderstanding, especially in the formative stages [START_REF] Vlaar | Coping with problems of understanding in interorganizational relationships: Using formalization as a means to make sense[END_REF]. [START_REF] Gomes | Behind innovation clusters: Individual, cultural, and strategic linkages[END_REF] argue that informal relationships between individuals and organisations will also strengthen a collaboration. The level of IOR formalisation differs in terms of process and interaction outcomes. Interorganisational processes can be formalised through planning, projecting, codifying, and enforcing exchanges. The outcomes of formalisation are contracts, rules, procedures, and plans [START_REF] Vlaar | Coping with problems of understanding in interorganizational relationships: Using formalization as a means to make sense[END_REF]. It is not always the case that both processes and outcomes are formalised. In some cases, formalisation of linkages may create lock-in effects, preventing organisations from joining collective initiatives [START_REF] Gadde | Strategizing in industrial networks[END_REF].
The sport context may offer unique insights into the nature and dynamics of interorganisational engagement, and sport management research can glean interesting insights in this area. For instance, some studies have explored the role and level of formalisation of IORs in the sport sector and have found that institutional pressures and changes in sport systems (e.g., shifts in governmental funding for sports) have led to increases in the creation and formalisation of interorganisational linkages [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Shilbury | Considering future sport delivery systems[END_REF]. Additionally, challenges related to capacity (e.g., lack of human resources, collaborative systems, communication channels) may play a role in the implementation and formalisation of IORs for non-profit sport organisations engaging in cross-sector linkages [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Misener | Understanding capacity through the processes and outcomes of interorganizational relationships in nonprofit community sport organizations[END_REF]. Finally, people involved in sport tend to have similar beliefs and modes of functioning, and hence might not insist on the formalisation of interorganisational linkages, instead relying on interpersonal trust and faith that the partners will uphold their relationship responsibilities [START_REF] Allen | Sport as a vehicle for socialization and maintenance of cultural identity: International students attending American universities[END_REF][START_REF] Walters | Implementing corporate social responsibility through social partnerships[END_REF]. [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF] investigated multiple motives for IOR formation amongst collaborating cross-sector organisations and found complex and interrelated drivers for partnership formation which mapped onto [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF] analytical framework of IOR formation. Those factors were asymmetry, reciprocity, necessity, legitimacy, efficiency, and stability. These constructs were derived from theoretical explanations including institutional pressures, resource dependency, strategic management, power, and political forces. Given this theoretical breadth, we found [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF] analytical framing to be a robust foundation from which to examine motives for IOR development in sport clusters. Motivations for creating or joining interorganisational linkages might influence the formalised nature (informal versus formal) and configuration (IOR versus ION) that linkages take. Therefore, we analyse motivational patterns for IOR/ION formation with regard to their formalisation and configuration. Development and Motives for Interorganisational Linkages The following paragraphs briefly outline [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF] motivational factors.
Asymmetry refers to the desire or potential to exercise power or control over another organisation and its resources [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. Asymmetry reflects a power-dominated approach to linkage formation. An organisation with an asymmetry motive considers its environment as unjust, unequal, manipulated, and full of information distortion, exploitation, coercion, and conflict. Therefore, the only way to resolve the inequity is via power, control, and domination [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. Reciprocity reflects the pursuit of collaborative advantage [START_REF] Huxham | Doing things collaboratively: realising the advantage or succumbing inertia[END_REF]. Organisations in alliances, partnerships, or networks work more effectively and efficiently than isolated counterparts [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Porter | Clusters and Competition[END_REF]. An organisation seeking reciprocity pursues cooperation, collaboration, and coordination. The theory of collaborative advantage and relational strategy explains how joint efforts are superior to independent actions [START_REF] Dyer | The relational view: Cooperative strategy and sources of interorganizational competitive advantage[END_REF][START_REF] Granovetter | Economic action and social structure: The problem of embeddedness[END_REF][START_REF] Huxham | Doing things collaboratively: realising the advantage or succumbing inertia[END_REF]. Necessity as a motive is manifest when organisations create IORs to meet necessary legal or regulatory requirements. This becomes relevant in situations where organisations need to comply with certain rules and regulations. In some cases, it might be easier to build partnerships with governing bodies, industry associations, or other regulatory bodies in order to comply with those regulations [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. Efficiency refers to an organisation's attempts to improve its internal input/output ratio through collaborative activity. The efficiency motive might be driven by desires to increase buying power, consolidate and maximise the use of resources, and achieve economies of scale [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. Stability is a motive for IORs for organisations that seek predictability and dependability of resource flows. Uncertainty of funding and the increased number of organisations providing the same or similar services encourage sport organisations to create long-term IORs [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF]. Legitimacy is the final motive underpinning IORs.
Organisations are exposed to external pressures (economic, social, or political) to which they need to respond to appear legitimate [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. In the sport context, it means that sport organisations engage in linkages to legitimise both organisational and collective goals [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF]. Methods Case study research permits the creation of close links between science and reality, and a strong connection between theory, method, and data [START_REF] Dubois | Case research in purchasing and supply management: Opportunities and challenges[END_REF]. The multiple case study method permits theory development and enhances theories' robustness through literal replication [START_REF] Eisenhardt | Building theories from case study research[END_REF][START_REF] Yin | Case Study Research: Design and Methods[END_REF]. Two similar cases, a sailing cluster in France (SAILBRIT) and one in New Zealand (SAILAUCK), are the subject of this research. Cases are analysed in parallel according to the research questions to strengthen findings, rather than sequentially in order to deepen findings [START_REF] Yin | Case Study Research: Design and Methods[END_REF]. Data were collected using an abductive approach [START_REF] Dubois | Systematic combining: An abductive approach to case research[END_REF]. The abductive approach refers to a continuous interplay between theory and empirical data enacted by the researcher, moving between theory, data, and back to theory again [START_REF] Dubois | Systematic combining: An abductive approach to case research[END_REF]. Pierce's (1903, cited in Burks, 1946) theory of abduction puts forward abduction as the process that invents and discovers explanatory hypotheses concerning isolated phenomena rather than merely testing and establishing the explanatory value of hypotheses (induction) or developing measurable consequences for universal hypotheses (deduction). Research Context The research contexts were industrial agglomerations in a particular geographical area whose actors shared an interest in the same or similar sport(s). These sport clusters were chosen because their membership is diverse and organisational interconnectedness is high [START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF][START_REF] Shilbury | Considering future sport delivery systems[END_REF]. Both sailing clusters were comprised of interconnected organisations including product or service firms, professional racing teams, amateur sailing clubs, public organisations, governing bodies, and tertiary institutions. Organisations were selected according to the cluster member typology developed by [START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF]. Both clusters feature a high density of organisations in the same sport (i.e., sailing) in a specific geographically denominated area.
Previous research has demonstrated that sailing clusters are relatively common [START_REF] Chetty | On the crest of a wave: The New Zealand boat-building cluster[END_REF][START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF][START_REF] Glass | Innovation and interdependencies in the New Zealand custom boat-building industry[END_REF][START_REF] Sarvan | Network based determinants of innovation performance in yacht building clusters[END_REF]. Other studies on sport clusters have examined industries in outdoor sports including horse-riding [START_REF] Parker | Land-based economic clusters and their sustainability: The case of the horseracing industry[END_REF], skateboarding [START_REF] Kellett | A comparison between mainstream and action sport industries in Australia: A case study of the skateboarding cluster[END_REF], and surfing [START_REF] Stewart | Cluster theory and competitive advantage: The Torquay surfing experience[END_REF][START_REF] Warren | Making things in a high-dollar Australia: The case of the surfboard industry[END_REF]. While most sport cluster studies apply [START_REF] Porter | Clusters and Competition[END_REF] framework focusing on structure and features, this article investigates cluster dynamics. SAILBRIT is the sailing cluster located in southern Brittany in the northwest of France. SAILBRIT comprises about 110 affiliated cluster members (Eurolarge Innovation, 2016). There is a cluster governing body, which is primarily publicly funded. The governing body is responsible for the administration, promotion, and growth of SAILBRIT. Given the historical and cultural significance of sailing to Brittany, the sailing industry is economically important to the region. An important driver for the sailing industry was the local government's decision to invest in maritime infrastructure and to dedicate industrial space to the maritime industry. SAILAUCK is the sailing cluster in Auckland, New Zealand. Most cluster organisations (CLORs) are located close to the marinas adjacent to the central business district. CLORs were not affiliated with a specific sailing governing body, but most were formally linked via a marine trade and export group, which consisted of approximately 450 members including other non-sailing marine businesses (e.g., fishing and kayaking) (NZ Marine, 2016). Sailing and ocean navigation are integral cultural and social institutions in New Zealand, given its island geography. Sailing is integrated in schools and social events, and is an important economic contributor. Part of New Zealand's culture and education is involving children in sailing courses, participating in leisure or competitive sailing, watching major sailing events, or working in the marine industry. 5.2. Data Collection Four types of data sources were used [START_REF] Chetty | On the crest of a wave: The New Zealand boat-building cluster[END_REF][START_REF] Yin | Case Study Research: Design and Methods[END_REF]. Interviews (n=54) and observations (n=12) served as the primary data sources. Secondary data were scanned, and 36 documents were identified as relevant and hence analysed (27 organisational documents and 9 archival documents). Data collection took place in 2012 and 2013. The majority of the interviews (86%) were conducted face-to-face, the remainder via telephone or video call. Interviews were conducted with senior executives, marketing managers, or research and development (R&D) managers. In larger organisations, we interviewed several managers.
The average interview duration was 48 minutes. Data collection comprised at least two interviews with representatives of each of the 10 types of CLORs, as in [START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF]. Interviewees were primarily involved in interorganisational linkages. In the French case, the cluster manager identified key actors in the cluster. In the New Zealand case, interviewee selection relied upon the snowball method [START_REF] Miles | Qualitative data analysis[END_REF]. The interview questions probed the CLORs' involvement in interorganisational linkages (e.g., To what extent are you linked to other CLORs?); the nature of the interorganisational linkages (e.g., How are you connected with these CLORs? Describe the relationships between your organisation and other CLORs.); and motivations and intentions for involvement in interorganisational linkages (e.g., Why and how have these linkages developed? To what extent are there regular interactions?). The principal investigator transcribed all interviews. Interview transcripts were sent to participants for verification. Interviewees either offered amendments (SAILBRIT 56%, SAILAUCK 41%) or confirmed the transcripts without amendments.

The first author attended trade shows, amateur and professional sport events, product trials, professional seminars, and networking events to collect observational data. Data collected during observations included photographs, explorative interviews, advertisements, event programs, and newspaper articles. Results from observations were summarised in reports for further analysis. Observations also served as a starting point to contact interviewees. Secondary data included organisational information and archival data. Organisational information referred to CLOR-authored presentations, brochures, catalogues, websites, Internet blogs, advertising material, and product descriptions. Archival data included third-party authored information, such as specialist journals, industry reports, and mainstream media publications. This secondary information served to identify interview candidates and to provide an initial screening of CLORs' involvement in interorganisational linkages.

5.3. Data Analysis

All interview transcripts, observation reports, and selected secondary data were analysed and coded using NVivo. For the first research question, a number of themes were identified deductively from the literature, while additional themes appeared and were added during the coding process [START_REF] Dubois | Systematic combining: An abductive approach to case research[END_REF]. The starting point for the first research question (what is the nature of interorganisational linkages in sport industry clusters?) was to identify interorganisational linkages between CLORs. Interviewees referred to links with one other organisation (i.e., IOR), or to linkages with several other CLORs (i.e., ION) [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF][START_REF] Provan | Interorganizational networks at the network level: A review of the empirical literature on whole networks[END_REF][START_REF] Warren | The interorganizational field as a focus for investigation[END_REF]. Further subthemes that emerged were informal and formal variants of both IORs and IONs [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Vlaar | Coping with problems of understanding in interorganizational relationships: Using formalization as a means to make sense[END_REF]. While the level of formality in reality is surely nuanced, a dichotomous coding system provided suitable clarity and distinction. After the coding analysis, we conducted frequency counts for each theme and type of source within each theme [START_REF] Babiak | Challenges in multiple cross-sector partnerships[END_REF]. This organised the data and assisted with quotation retrieval.
To ensure trustworthiness and credibility of the coding results, tables of coded references were cross-checked by the co-authors. To address the second research question (how do interorganisational linkages develop in sport clusters?), we conducted an inductive analysis across all data. We compared our findings with previous research on patterns of linkage creation to identify differences and similarities. Finally, to address the third research question (what are the organisational motivations for creating or joining sport clusters?), we returned to a deductive approach. We used the framework of motivational patterns of [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF] as the analysis scheme.

Findings

In the following sections we provide evidence that consecutively addresses the three research questions: the nature of interorganisational linkages in clusters, the development process of interorganisational linkages, and the motivational patterns for creating or joining them.

6.1. Typical Nature of Interorganisational Linkages

The analysis revealed that both IORs and IONs were prevalent amongst CLORs in both cases. IORs tended to be formalised, while IONs typically remained informal. Tables 1 and 2 provide an overview of each theme's frequency and the number and type of sources within each theme for SAILBRIT and SAILAUCK. Evidence for the different types of linkages is discussed in the following paragraphs. Insert Table 1 about here. Insert Table 2 about here.

Formal IORs. Formal IORs between CLORs manifested themselves in various manners. One-off transactional IORs between buyer and supplier, between service provider and client, or between subcontractor and client reflect the presence of simple transactional IORs that are rooted in market mechanisms. This research, however, focuses on more complex IORs based on trust and collaboration; for example, competing core equipment manufacturers joined forces formally to secure bigger contracts because one firm alone could not have responded to the bid. Formal IORs for other than purely economic reasons included agreements and partnerships aiming at joint innovation and R&D. IORs emerged between a company and a university laboratory or research institute, or between a supplier and a professional or amateur sport organisation. The mechanisms through which these IORs were formalised included the commissioning of a research study, joint funding of a doctoral student, collaboration on specification sheets, or confidentiality agreements (e.g., for joint product development). The following quote illustrates how a professional sailing team provides input for innovation by specifying, via specification sheets, the features required to improve marine equipment: For five years the partnership has been quite structured. That means that every year we make a specification sheet for them with our needs, such as the type of navigation software that we like to use, the functionalities that we would like to have, or the type of interface we need (SAILBRIT, professional sailing team). Professional and amateur sport entities maintained formal IORs with core equipment manufacturers, marine accessories, services, and systems suppliers through sponsoring agreements and for product testing purposes, as one respondent indicated: "We make quite formal criteria to formalise their needs and their feedback" (SAILBRIT, marine service/consulting firm).
Quasi-government agencies and non-profits such as Chambers of Commerce, public authorities for economic development, industry associations, sport federations, or dedicated cluster governing bodies played a key role in fostering and implementing formal IORs. The cluster or industry governing bodies maintained formal IORs through funding programs or membership. For example, Eurolarge Innovation was an organisation dedicated to developing the ocean racing cluster in Brittany (Eurolarge Innovation, 2016), and NZ Marine was the industry association for the New Zealand marine industry (NZ Marine, 2016). In both cases CLORs paid a yearly fee, depending on the organisation's size, to benefit from the services provided.

Informal IORs. The data from our cases showed that informal IORs evolved from formal IORs (e.g., regular purchases from the same supplier) through long-term exchanges and interactions that created affinities and built strong interpersonal relationships. The following quotation illustrates this with an example of an initially purely contractual relationship that evolved towards informal mutual exchanges and aid: Yes, indeed with certain clients it happens that we discuss issues equally outside of purely commercial relationships and then if there are problems to solve, we direct each other to people that we know and that can be of help. Some of my clients will also recommend my services to people that are potentially interested. That is quite possible (SAILBRIT, marine service/consulting firm). Our data revealed that informal exchange was possible through geographical and social proximity, and a high frequency of exchange and interaction. This proximity permitted unscheduled and random encounters that fostered the formation of informal IORs, which was highlighted as a major advantage of the sailing cluster, as evidenced by the following quotation: I think that the main advantage of having this grouping of competences in the territory, which is not very big, is that it allows to easily discuss with people and to be able to look for competencies not too far away. It's true that this makes life easier (SAILBRIT, marine service/consulting firm). Despite the evidence of formalised relationships, many CLORs relied heavily on trust, social capital, and confidence in ongoing interactions. Some administrators attributed little value to formalised interorganisational linkages, especially once they had become informal. The director of a marine equipment firm testifies: I think that the contract is only a paper. It has only little value, even if we arrived at a conflict concerning an aspect in the contract, I don't think that (contract) would play such an important role (SAILBRIT, marine equipment firm). Formalised roles and responsibilities served only as a potential safety net; partners in the sport clusters interacted through informal exchanges and relied much less on formal contracts. In the cases studied, structural similarity, common culture, complementary management styles, and collective experience and background created mutual commitment and trust. Dyadic relationships, both formal and informal in nature, were critical in the sailing clusters to build IONs.

Informal ION. Our analysis uncovered strong evidence for informal IONs in both cases. The data indicated the importance of historical and socio-economic circumstances for network development in sport clusters; as such, informal IONs served to facilitate formal business exchanges. A shipyard director explains the advantages of informal linkages: It happens so much smoother when you have good working relationships [with maintenance firms, customers, the teams, suppliers].
This does not necessarily mean regularly meeting up and going out for lunch or dinner; it is frequent communication and the odd catch-up meeting around a table to discuss any issues or sharing of good ideas and then getting back to work (SAILAUCK, shipyard). Many respondents discussed the importance of knowledge sharing through informal networking in the cluster. This was facilitated through a shared macro-structure of language, social ties, and a set of standards and values. One cluster member summarises this special atmosphere with the following words: It is a family, let's say, this is the 'Silicon Valley' of sailing, here. That is why we all speak the same language, we speak about the same things. At the end of the day, even if the names of the big teams change, Groupama, Virbac, Banque Pop, etc., the people that work inside, the ones that are part of it, they are part of the family (SAILBRIT, marine service/consulting firm). The analysis revealed two distinct constellations of informal IONs: informal IONs consisting of similar organisations and informal IONs consisting of different but complementary organisations. Informal IONs between similar CLORs were formed through informal interaction, by providing expertise, market information, and support to each other. This was expressed by one interviewee referring to other consulting firms offering similar services when he stated, "I see that more like an exchange between these people and like a primordial source of information because otherwise I would never have had access to this information." (SAILBRIT, marine service/consulting firm). Informal IONs amongst complementary organisations were formed through interactions between different CLORs that were originally linked through market mechanisms. Their interactions evolved and then exceeded formal agreements through interpersonal trust, mutual respect, citizenship, and long-standing knowledge of each other. These former business partners developed interlinked informal IORs and even IONs once the formal IOR activities had ceased. Informal IONs between heterogeneous CLORs emerged when ocean racing teams prepared for important competitions. Professional sailing teams and athletes need not only physical preparation and training; they also need to construct a race boat to compete. A boat-building project requires various complementary actors: shipyards, naval architects, rig and sail makers, marine equipment firms, and service/consulting firms. These actors need to work together because the boat-building project is complex. This insight is key for a boat-building and racing project, as highlighted in the following quotation by a rig/sail maker: The client, and when we say the client we mean Team New Zealand, realised that it is very important to actually engage the suppliers and make them part of the whole [development] process (SAILAUCK, rig/sail maker). The involvement of specialised CLORs that usually worked physically close together in the shipyard or on the testing grounds was a strong lever for the creation of informal IONs. This might be a specificity of sport clusters, as the linking elements were the professional sport teams. Sport athletes and teams are organised in teams or clubs, which makes them easier to identify, target, and involve in the development process than individual customers.
Social bonds such as family ties or friendships fostered the creation of informal IONs between CLORs: "You find that a lot of people know each other at a personal level" (SAILAUCK, governing body). Informal IONs provided access to external competences, capabilities, and knowledge without entering into formal relationships; for instance, a marine equipment firm representative (SAILBRIT) stated: "It is about working your network and making the networking function." A rig/sail maker (SAILBRIT) explained: "I think that it is very interesting for the smaller firms to be able to join us when we work and research because we have a technical development that the others do not have." Continuous collaboration aiming at a common goal (constructing a fast and safe ocean racing boat) created a temporary special atmosphere and environment characterised by a concentration of diverse and in-depth sailing expertise. This was described by a sail maker in SAILAUCK as "A big library... It's just a continuous cycle of building of knowledge. It's quite a unique sort of environment." Informal interorganisational exchange thus created IONs within a boat-building project. Staff rotation, meaning that an employee changes company within the cluster, was common amongst CLORs. This facilitated informal network building and knowledge diffusion in the sailing clusters through both knowledge sharing (i.e., intended) and knowledge spillovers (i.e., unintended). Spatial proximity of organisations permitted informal face-to-face meetings and the development of cooperation and collaboration without the need for formalisation. The marine governing body in Auckland claims: Fifty percent of the gain is actually the informal connections that they make with other companies. So, it's a huge value, meeting companies that otherwise they wouldn't or in an environment that is conducive for them to talk about their problems or opportunities with competitors or maybe complementary companies (SAILAUCK, governing body). A unique characteristic of informal IONs in the sailing clusters was the trickle-down effect of knowledge, information, skill, and technology between different professional and amateur sport organisations: "I never really saw or saw very little that the other boats were using things that they would have got from the America's Cup or the Around-the-world-race. I didn't see other boats doing that, other than us" (SAILAUCK, rig/sail maker). The physical closeness between amateur and professional sport organisations and the specialised systems, accessory, and services suppliers permitted knowledge transfer amongst different CLORs with minimal transaction costs. This process was facilitated through cross-functional roles of key persons in companies, professional or amateur sport organisations, and governing bodies, and ultimately created informal IONs between profit and non-profit organisations. These were maintained through interpersonal relationships that had emerged over time, even when cross-functional assignments had come to an end.

Formal ION. One type of formal ION was the membership-based governing body. There was a cluster governing body for ocean racing technology in SAILBRIT (Eurolarge Innovation), which was a formalised ION of around 110 firms and related institutions. Smaller formal IONs formed amongst the members to find solutions for shared problems and to share investment costs.
A marine service firm in SAILBRIT created a shared online communication and information platform in collaboration with marine service providers, equipment suppliers, and customers. Other firms shared the cost of a hull maintenance and cleaning facility. The starting point for these formal IONs was often the cluster governing body. On a regular basis there were formal opportunities to meet and exchange with other CLORs during industry events, seminars, workshops, and exhibitions: [It is very important that] the firms know each other and that there is regularly room where they can meet in an informal way around some seminars and interprofessional encounters, etc. (SAILBRIT, education/research institute). In contrast, CLORs in SAILAUCK could not rely on a dedicated sailing cluster governing body and therefore had fewer formal IONs. One example was a formal ION to defend the CLORs' interests towards the city council regarding access to waterfront-based industrial land. Another was the creation of a training program for professions in the marine industry.

Differences and Similarities of Interorganisational Linkages across Cases. SAILBRIT and SAILAUCK provided similar evidence for the strong prevalence of formal IORs and informal IONs, and weaker evidence for informal IORs. The main difference was the prevalence of formal IONs in SAILBRIT through a formalised cluster governing body dedicated to the ocean sailing industry and subsequent smaller formal IONs. There was no equivalent governing body for SAILAUCK, though the marine industry association is worth acknowledging. In contrast to SAILBRIT, SAILAUCK functioned as a self-governed system. This may be due to cultural differences and a different perception of the role of the state in the economy. Similar results as in SAILBRIT were found in other French cluster studies. French companies are willing to accept state intervention for economic development [START_REF] Berthinier-Poncet | Gouvernance et innovation dans les clusters à la française[END_REF][START_REF] Bocquet | Gouvernance et innovation au sein des technopôles. Le cas de Savoie Technloac[END_REF]. The cluster governing body in SAILBRIT was set up and funded mainly by local state authorities (top-down) but with strong consultation of cluster members (bottom-up). In SAILAUCK, the marine industry association was only to a minor extent funded by public money; instead, a private initiative of industry and companies delivered most of the funding (bottom-up) [START_REF] Viederyte | Maritime cluster organizations: Enhancing role of maritime industry development[END_REF]. These different starting points influenced CLORs' attitudes, commitment, and willingness to invest time or money in the formal ION of the cluster. More active commitment to the formal ION as a governance structure is evident in SAILBRIT, while SAILAUCK members seem to rely on relational governance in an autonomous system.

6.2. Developmental Processes of Interorganisational Linkages

The typical development cycle of different interorganisational linkages, from formal IOR, to informal IOR, to informal ION, to formal ION, is depicted in Figure 1. This pattern was evident in both sailing clusters; however, only SAILBRIT reached the fourth stage (formal ION). Insert Figure 1 about here. Formal IORs turned into informal IORs, even though they were initially purely economically driven linkages. Informal IORs between complementary firms appeared to develop from IORs rooted in market mechanisms.
Individuals in CLORs that had business IORs or other formal IORs over a longer period of time started to get to know each other very well. This resulted in the creation of informal IORs that then evolved into a large informal ION through the multiplication of IORs in a geographically and socially proximate environment. This phenomenon was facilitated through the parallel involvement of individuals in several CLORs and led to smoother interactions, less "red tape" and paperwork, and longer-lasting relationships: I was a Chairman in the Olympic Committee, that was a very formal role but there are also a lot of informal roles where people in those organisations will ring us up for advice. I am not anymore in any of those roles but we still communicate a lot with them, and that has nothing to do with my company (SAILAUCK, shipyard). In the case of SAILBRIT, an informal ION developed into a formal ION, the cluster governing body. This institution provided a new platform for the development of formal IORs. The cluster governing body was a platform and intermediary to link potential buyers and suppliers in the cluster. At the same time, CLORs made informal contacts during events and via the network of the cluster governing body. The fourth stage, a formalised interorganisational network dedicated to the sport cluster, did not exist in SAILAUCK. Understanding the structure and nature of these linkages helps uncover features of their configuration, interaction, and exchange norms. Gleaning insights into the motivations for organisations to engage in linkages in the cluster aids in comprehending the impetus and rationale for engaging at all in a cluster.

6.3. Motivational Patterns for Interorganisational Linkages

The motivational patterns suggested by [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF] were applied in the context of the sailing clusters. The results are illustrated in Table 3 and explained in the paragraphs below. Insert Table 3 about here.

Asymmetry was an antecedent for formal ION formation. For example, joining a governing body was sometimes driven by the motive of asymmetry. The work of the marine industry association in SAILAUCK was organised in committees. The firms that were involved in these committees could exercise influence in their specific domain, which impacted all member organisations concerned by the specific topic. However, joining the cluster governing body in SAILBRIT also meant gaining access to knowledge, and eventually technology, of other CLORs, since the governing body organised knowledge-sharing events. CLORs that joined an industry or cluster governing body risked losing control of the shared resources. Reciprocity was a motive for the formation of informal IORs. CLORs developed informal IORs with other CLORs to access and exchange knowledge and resource bases: I think that it is very interesting for the small companies to be able to join us in our work because we have capacities in terms of calculations, technical development, and technology that others do not have. So in associating themselves with us they have the possibility to develop themselves as well (SAILBRIT, rig/sail maker). Reciprocal motives for linkage formation primarily aimed at establishing collaborative advantage and included, for example, collaborations for research and development, product testing, sponsoring, informal information and knowledge transfer, and collective promotion at trade shows or sport events.
IORs based on reciprocity often led to the development of informal IONs. In a network perspective, reciprocity becomes a more abstract phenomenon where contributions to the network do not necessitate direct compensation in return. A marine service provider from SAILAUCK referred to an ION in terms of joint commercialisation: "So it's a combined effort, people work very well together. I think all the players understand that combined as a group you are better to go to the world and you create more business as a combined group then you going individually." Another example is cooperation in product development: "We were talking about a helm provider and a systems provider working more closely together. We are able to concentrate on the best things that we can do ourselves and feed ideas" (SAILAUCK, marine equipment firm). As shown in Table 3, reciprocity occurs as a motive for all types of linkages (i.e., formal and informal IOR and ION) and hence is the most evident motivational pattern for creating or joining interorganisational linkages in the investigated sport clusters. Necessity as a motivational pattern resulted mostly in formal IORs, which in some cases led to formal IONs. For example, necessity was reflected in the formal ION that was formed with the creation of the cluster governing body in SAILBRIT. There were three different public authorities that jointly funded the cluster governing body for the local ocean racing industry. The cluster governing body necessarily had close formal IORs with these public authorities and depended on them for future funding. However, CLORs that had joined the cluster governing body were also a funding source through membership fees. Public authorities would offer certain funding possibilities only to consortia or partnerships of CLORs and hence encouraged the formation of linkages. Necessity was also evident as a motive for formal IORs because certain CLORs depended on clients or suppliers to run their activity. For example, shipyards in SAILBRIT entered formal IORs to respond to a larger order of boat hulls that otherwise neither of them could have fulfilled alone. The regulations of the tender required the shipyards to work together. Efficiency as a motivational pattern to create or join interorganisational linkages led, in the first step, to the development of mostly informal IORs between CLORs with interdependencies, but these quickly added up to informal IONs. Efficiency as a motive was evident, for example, in larger boat-building projects. Due to the close cooperation and spatial proximity of different firms on the boat construction site, knowledge and information exchanges were more efficient; better solutions at the intersections of different parts (e.g., sail and mast) were found; and collaborative synergies were optimised through direct face-to-face meetings and communication. Firms that were involved in boat-building projects worked closely together over a longer period of time, which increased efficiency through better mutual understanding: I think part of the reason why Team New Zealand has been a successful team over the years to some degree is because we are one of the first to realise that you should not just treat all the components that go onto the boat as separate entities. They all affect each other (SAILAUCK, shipyard). Legitimacy motives led primarily to formal ION formation. For example, CLORs joined the cluster governing body and participated in collective actions in the quest for legitimacy (e.g., through collective participation in trade shows).
External pressures driving firms to join the cluster governing body were primarily economic but also social. Firms in SAILBRIT joined the cluster governing body in order to be associated more obviously with ocean racing as an established sport and industry. This provided them with an attractive image and could serve as a showcase to attract new clients, even from sectors other than ocean racing. The motive of legitimacy was less evident in the SAILAUCK case as there was no dedicated cluster governing body for the ocean racing industry. Stability as a motivational pattern to join interorganisational linkages was mostly evident in informal IORs, as they served mainly as a source of information and reassurance. Stability was the motive for those CLORs that entered informal IORs primarily to have access to information. These relationships tended to be based on trust and reliance between CLORs that had established a long-term relationship. Informal IORs served as a source of expertise, market information, new knowledge, and security against anticipated changes. Informal IORs assured access to information which might be crucial to the survival of the mostly small- and medium-sized enterprises (SMEs) in the sport industry.

Discussion

The majority of CLORs were private SMEs that had formal IORs with local customers or suppliers. The clusters included not only firms but also governing bodies, universities, research institutes, and amateur or professional sport organisations [START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF]. Firms developed formal IORs not only with other companies but also with professional and amateur sport organisations. Shipyards, marine equipment firms, rig/sail makers, marine service firms, media/communication firms, and naval architects had either informal or both informal and formal IORs with professional sport organisations, occasionally with amateur sport clubs, but also with governing bodies and research/education institutes. Informal IORs developed from formal IORs through regular social interaction and exchanges which created affinities and interpersonal relationships. Formal IORs between complementary CLORs tended to lead to informal IORs due to frequent interaction facilitated by geographical and social proximity [START_REF] Capo-Vicedo | Improving SME Competitiveness Reinforcing Interorganisational Networks in Industrial Clusters[END_REF]. Informal IORs between similar, sometimes competing, firms developed mostly from interpersonal linkages through family and friends [START_REF] Chetty | Role of inter-organizational networks and interpersonal networks in an industrial district[END_REF]. A congruent understanding of identity in relation to others (i.e., congruent sense making) and psychological contracts amongst the involved parties contributed to the establishment of IORs [START_REF] Ring | Developmental processes of cooperative interorganizational relationships[END_REF]. Formalisation of IORs was not valued as important because personal relationships and informal psychological contracts substituted or complemented formal IORs. Conditions for linkage development were trust and a shared macrostructure through a common language, social ties, and a set of standards and values [START_REF] Gomes | Behind innovation clusters: Individual, cultural, and strategic linkages[END_REF].
These findings improve the understanding of cross-sector linkages in complex environments including for-profit, non-profit, public, and governing organisations by determining the nature of interorganisational linkages and their developmental process [START_REF] Babiak | Challenges in multiple cross-sector partnerships[END_REF][START_REF] Misener | Understanding capacity through the processes and outcomes of interorganizational relationships in nonprofit community sport organizations[END_REF][START_REF] Shilbury | Considering future sport delivery systems[END_REF]. IORs tend to be formalised, while IONs remained informal in nature. As ocean racing competitions usually take several years of preparation, teams prepare the sailors and the boat construction two to three years in advance. There are a number of formal IORs between the firms contributing to the boat-building project. In addition to the formal boat-building project with numerous contractual IORs, informal IORs emerge that build up to an informal ION between the involved parties through family bonds, friendships, informal exchanges, and the joint practice of sailing. The formal IORs and informal IONs in the boat-building project mutually influence and reinforce each other. The findings show that formal IORs were often a starting point from which to develop informal IORs due to frequent direct contact and exchanges. The involvement of the same organisation in various informal IORs in a geographically and socially proximate environment results in the development of informal IONs. This confirms previous research claiming that an ION is a result of several interlinked IORs [START_REF] Warren | The interorganizational field as a focus for investigation[END_REF] and that interorganisational linkages are not always driven by economic considerations [START_REF] Granovetter | Economic action and social structure: The problem of embeddedness[END_REF][START_REF] Marshall | Industry and Trade[END_REF]. A better understanding of this dynamic development of formal and informal linkages can help organisations in sport clusters to consciously develop valuable IONs. Both sailing industry clusters were characterised by an evolutionary process of formal and informal IORs and IONs. While IORs tended to be formal in the beginning, there was a recurrent pattern of formal IORs turning into informal IORs once the formal agreement ceased. This is in contrast to what [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF] found in her study on the network of a non-profit organisation. She observed that relationships tended to start informally and become formalised once trust was established. This can be explained by the limited capacity (i.e., time, human resources, financial resources, and so on) of non-profit organisations to formalise relationships from the beginning, whereas in a sport cluster many actors are private firms that are more used to formalising relationships related to economic activities. Informal IORs also developed from social ties such as family bonds and friendships, and from practising sport together. The accumulation of informal IORs in a geographically and socially proximate area allowed cluster members to meet informally and spontaneously [START_REF] Capo-Vicedo | Improving SME Competitiveness Reinforcing Interorganisational Networks in Industrial Clusters[END_REF]. This permitted the creation of informal IONs amongst CLORs based on initially informal IORs.
Informal IONs provided access to economic activity and growth potential, since mutual recommendation and the sharing of contracts (i.e., subcontracting of work to direct competitors) were common in the informal IONs. Motives, therefore, were a lack of internal capacity paired with the intention to keep business in the cluster rather than losing it to firms outside the cluster. In recognition of this, different public authorities provided funds to create a cluster governing body, a formal ION, in SAILBRIT. The formal cluster governing body consisted of for-profit, non-profit, public, and governing organisations that had shared interests in developing economic activities around ocean racing. This example emphasises the role of industry associations as intermediaries, facilitators, and levers of IONs [START_REF] Gerke | Towards a sport cluster model: The ocean racing cluster in Brittany[END_REF][START_REF] Watkins | National innovation systems and the intermediary role of industry associations in building institutional capacities for innovation in developing countries: A critical review of the literature[END_REF]. In SAILAUCK, there was a marine industry association that focused on a larger audience and did not provide specific services for the sailing industry. Therefore the formal ION in SAILBRIT had more relevance and importance for CLORs than the one in SAILAUCK. SAILBRIT is an example of a formalised, hierarchical form of relational governance, while the case of SAILAUCK reflects an autonomous, market-like form of relational governance [START_REF] Bell | The organization of regional clusters[END_REF][START_REF] Provan | Modes of governance[END_REF][START_REF] Von Corswant | Organizing interactive product development[END_REF]. Linkages formed because organisational goals were either congruent (e.g., building a fast boat in the boat-building projects) or complementary (e.g., using the image of ocean racing teams for marketing purposes while the ocean racing team benefits from the company's expertise). Of the six determinants of IOR development suggested by [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF], the most evident motive for joining or creating IORs or IONs, formal and informal, was reciprocity. Reciprocity, the pursuit of coordination, cooperation, and collaboration in IORs, was evident as a motive in IORs and IONs formed for research collaborations, in joint bids for funding or tenders, in reciprocal informal exchange of knowledge and information, in joint presentations at trade shows or on web sites, and in joint approaches to problem solutions during boat-building projects. These findings show that relational governance dominates in the studied sport clusters [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF]. These findings are interesting in light of the claims of [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF] concerning motives for organisations to join cross-sectoral IORs, where she emphasised that corporate managers' personal interests, values, or beliefs led to motives for developing IORs among non-profit sport organisations. While [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF] concentrated primarily on IORs, this study also investigated IONs.
An ION consists of several connected IORs, which has the potential to lead to anonymity and a loss of control over any resources that are shared within the ION. In the sailing cluster, one CLOR might share knowledge with another CLOR but then lose control over the further dissemination of its knowledge. This creates interdependencies between organisations and might lead to companies refraining from engaging in IONs [START_REF] Gadde | Strategizing in industrial networks[END_REF]. On the other hand, interdependencies due to complementarity can lead to more resilience to external shocks, owing to the mutual support resulting from interdependencies [START_REF] Boschma | Towards an evolutionary perspective on regional resilience[END_REF]. Asymmetry was only evident as a motive for formal ION development, as cluster members sought to be able to exercise influence over other CLORs in collective projects or through taking ownership of shared resources. The necessity motive for IONs was linked to the dependence on founding partners of the cluster governing body and their funding [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF]. Legitimacy motives for IOR formation were comparable to those of trade and industry associations (e.g., enhancing CLORs' image collectively) or lobbying groups (e.g., increasing cluster visibility towards public authorities) [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. The affiliation of CLORs to a cluster governing body was attractive because it helped to improve CLORs' image and their marketing and communication activities. The stability motive for organisations was associated with informal IONs. Social integration in the industry and the psychological support from other firms in the same industry sector with similar beliefs, objectives, and problems provided members with perceived and some real security. Finally, efficiency motives were evident in the informal IONs of boat-building projects. The close and direct cooperation on a shared physical site (the boat yard) provided easier and more efficient means of meeting and exchange. This resulted in more efficient problem-solving and the harmonisation of interrelated processes.

Conclusions

This research extends knowledge on IORs in the sport management literature. The theoretical contributions are (a) illustrating the multitude and variety of interorganisational linkages in cross-sectoral sport contexts; (b) identifying and explaining a circular development process of different types of interorganisational linkages; and (c) identifying different motivational patterns as antecedents for the formation of different types of interorganisational linkages. Furthermore, we contribute to the general cluster literature by providing insights on the underlying factors of the sustainability of interorganisational linkages, which are crucial for the sustainability of clusters [START_REF] Li | Network Characteristics and Firm Performance: An Examination of the Relationships in the Context of a Cluster[END_REF]. We provide a nuanced perspective on the variety of interorganisational linkages in the cross-sectoral context of sport clusters.
To date, researchers have focused either on dyadic IORs [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Babiak | Challenges in multiple cross-sector partnerships[END_REF][START_REF] Cousens | Beyond sponsorship: Re-framing corporatesport relationships[END_REF][START_REF] Dana | Evolution de la coopétition dans un cluster: le cas de Waipara dans le secteur du vin[END_REF][START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF][START_REF] Ring | Developmental processes of cooperative interorganizational relationships[END_REF][START_REF] Vlaar | Coping with problems of understanding in interorganizational relationships: Using formalization as a means to make sense[END_REF] or on IONs [START_REF] Camagni | Inter-firm industrial networks: The costs snd benefits of cooperative behaviour[END_REF][START_REF] Capo-Vicedo | Improving SME Competitiveness Reinforcing Interorganisational Networks in Industrial Clusters[END_REF][START_REF] Chetty | Role of inter-organizational networks and interpersonal networks in an industrial district[END_REF][START_REF] Provan | Interorganizational networks at the network level: A review of the empirical literature on whole networks[END_REF]. We suggest a clearer distinction, and consequently a clearer terminology and understanding of interorganisational linkages and their nuanced differences, by distinguishing formal IORs, informal IORs, formal IONs, and informal IONs. While IORs tend to be formalised in sport clusters, IONs were more likely to remain informal in nature. Formal IORs tend to develop into informal IORs. Over time an informal ION emerges. The synergy potential of the informal ION encourages the institutionalisation of the ION into a formal governing body (formal ION). The development process of interorganisational linkages, and consequently the determinants that favour the development of interorganisational linkages, have been studied from various perspectives. IOR development can be based on interpersonal relationships [START_REF] Chetty | Role of inter-organizational networks and interpersonal networks in an industrial district[END_REF] and can be driven by specific motivational patterns [START_REF] Oliver | Determinants of interorganizational relationships: Integration and future directions[END_REF]. Essential ingredients for development and longevity include congruent sensemaking and psychological contracts [START_REF] Ring | Developmental processes of cooperative interorganizational relationships[END_REF]. We complement previous research on cross-sectoral IORs [START_REF] Babiak | Determinants of interorganizational relationships: The case of a Canadian nonprofit sport organization[END_REF][START_REF] Babiak | Challenges in multiple cross-sector partnerships[END_REF] by focusing on a cross-sectoral empirical context dominated by SMEs: sailing clusters. We propose a circular framework (Figure 1) that highlights the perpetuity and renaissance of IOR and ION development. Furthermore, our data suggest that reciprocity was the main motive for joining or developing interorganisational linkages. In studying the motives in sport clusters, we provide insights into the central motives reported by organisations for joining an IOR or ION in a highly competitive environment. The findings of this research provide practical insight for managers of interorganisational linkages in cross-sectoral contexts (e.g., sport clusters).
Insights about IORs and IONs reveal alternatives to purely competitive approaches for the strategic management of for-profit, non-profit, and public organisations. We advance knowledge and evidence for practitioners about the possibilities and motives for entering interorganisational linkages. Coordination, cooperation, or even collaboration are alternative strategies for engaging with and managing an organisation's interactions with the external environment. We provide recommendations for managers in sport clusters and show how the interaction and development of interorganisational linkages with non-profit organisations, tertiary institutions, and sport entities benefit companies and vice versa. Engagement in formal IORs with CLORs opens up research collaborations and sponsoring opportunities, and provides access to larger contracts or external funding through collective bids. Involvement in informal IORs with CLORs provides stability through permanent access to crucial information and knowledge that organisations could not access alone, or only with difficulty, via formal ways. An informal ION provides access to other CLORs' resources, which allows new combinations of a variety of resources leading to innovative solutions [START_REF] Schumpeter | Capitalism, socialism and democracy[END_REF]. A formal ION permits CLORs to promote each other collectively and reduces costs through joint investment, economies of scale, or augmented purchasing power. The findings from this research are limited in their generalisation to other contexts. [START_REF] Yin | Case Study Research: Design and Methods[END_REF] argues that literal replication of findings across similar case settings strengthens theory. While there are some differences across the two cases, primarily due to cultural differences, most of the results were congruent across the two cases. The findings show that sailing industries are similar across national and cultural borders because the sport determines to some extent the beliefs, values, management styles, modes of functioning, and philosophies of organisations, managers, and employees [START_REF] Gomes | Behind innovation clusters: Individual, cultural, and strategic linkages[END_REF]. Another notable limitation is that this study is cross-sectional. We asked participants to reflect on organisational dynamics that had occurred in the past (i.e., not real-time or in-situ experiences and engagements with IORs and IONs). Thus, these reflections and interpretations may be influenced by the passage of time (recent or longer term). Insights about ongoing dynamics between cluster partners could be uncovered by longitudinal studies.
Figures

Figure 1: Typical development cycle of interorganisational linkages, from formal IOR, to informal IOR, to informal ION, to formal ION.

Tables

Table 1: Findings from case study SAILBRIT
Theme | Number of times theme appears in data | Number of sources within theme | Interviews | Observ. | Org. Info. | Archiv. Data
Formal relationships | 115 | 29 | 27 | 0 | 0 | 2
Informal relationships | 59 | 24 | 22 | 0 | 1 | 1
Formal networks | 59 | 22 | 19 | 1 | 1 | 1
Informal networks | 134 | 37 | 27 | 4 | 4 | 2

Table 3: Motives/reasons for interorganisational linkages
Type of linkage | Motives/reasons emerging from data | Motives according to Oliver (1990)
Formal IOR | Commercial agreements and transactions (purchase and subcontracting), research collaborations, sponsoring contracts, confidentiality agreements, joint bids for funding or tenders | Reciprocity, necessity
Informal IOR | Historically developed social ties, family bonds, friendships, informal knowledge and information transfer/exchange, possibility to offer joint product packages, access to expertise | Reciprocity, stability
Formal ION | Cluster governing body/association, research consortium, joint bids for tenders/funding, joint stands and presentation at trade shows, reduced cost of investment | Asymmetry, reciprocity, necessity, legitimacy
Informal ION | Boat-building projects, networking meetings, shared clients/markets, shared problems | Reciprocity, efficiency
Note: IOR = interorganisational relationship; ION = interorganisational network.
01774857
en
[ "info.info-im", "sdv.ib.ima" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01774857/file/SynCTSB_hal.pdf
C Goubet, M Langer, F Peyrin, J F P J Abascal

LOW-DOSE SYNCHROTRON NANO-CT VIA COMPRESSED SENSING

Keywords: Synchrotron CT, low-dose, compressed sensing, split Bregman

Synchrotron phase nano-CT is a very useful technique for studying bone diseases, which requires investigating bone at the cellular level. Nevertheless, imaging biological tissue at this resolution is challenging due to the very high radiation dose. Compressed sensing provides a framework that permits the reconstruction of an image from a limited amount of data. The most promising way to reduce radiation exposure in X-ray CT is to reduce the number of projections. The aim of this study is to assess the use of compressed sensing to reduce dose in synchrotron phase nano-CT for bone applications. In this paper, we address the tomographic reconstruction step by posing a total variation problem and solving it with the Split Bregman formulation. To assess the proposed method we created different low-dose imaging scenarios by reducing the number of projections, and tested them on several bone samples. The proposed method allowed accurate reconstruction using 1/4th of the projections, preserving bone features, details, and a high signal-to-noise ratio.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement N° 701915. It was also performed within the framework of the LabEx PRIMES (ANR-11-LABX-0063) of Université de Lyon. We also acknowledge the support of the ANR project SALTO (ANR-17-CE19-0011-01).

INTRODUCTION

X-ray CT imaging is a technique of choice to investigate bone in diseases such as osteoporosis. Bone has a sophisticated hierarchical organization, from the organ scale to the nano scale, and its strength depends on features at all scales [START_REF] Seeman | Bone Quality The Material and Structural Basis of Bone Strength and Fragility[END_REF][START_REF] Schneider | Towards quantitative 3D imaging of the osteocyte lacunocanalicular network[END_REF]. The feasibility of X-ray techniques to analyze bone tissue at the cellular scale has been demonstrated using ptychography [START_REF] Dierolf | Ptychographic X-ray computed tomography at the nanoscale[END_REF], synchrotron micro-CT [START_REF] Pacureanu | Nanoscale imaging of the bone cell network with synchrotron X-ray tomography: optimization of acquisition setup[END_REF] and phase nano-CT [START_REF] Langer | X-ray phase nanotomography resolves the 3D human bone ultrastructure[END_REF]. Nevertheless, imaging biological tissue in 3D at the nano scale remains very challenging. For instance, bone imaging at a resolution of 60 nm [START_REF] Langer | X-ray phase nanotomography resolves the 3D human bone ultrastructure[END_REF] required acquiring a very large number of projections and long acquisition times (1.9 hours to acquire 2999 projections at four focus-to-sample distances), which translated into a dose equivalent of $8 \times 10^7$ Gy. Exposing the sample to a large radiation dose during multiple hours can affect the sample, creating motion artefacts and compromising resolution and image quality. In addition, access to synchrotron radiation is limited. Facing these constraints in dose and acquisition time requires new strategies and algorithms that allow for low-dose nano-CT. Compressed sensing (CS) provides a framework that permits the reconstruction of an image from a limited amount of data.
The most promising way to reduce radiation exposure in X-ray CT is to reduce the number of projections acquired [START_REF] Candès | Stable signal recovery from incomplete and inaccurate measurements[END_REF][START_REF] Pan | Why do commercial CT scanners still employ traditional, filtered backprojection for image reconstruction?[END_REF][START_REF] Abascal | A novel prior-and motion-based compressed sensing method for small-animal respiratory gated CT[END_REF]. CS has been used to reduce the number of projections in micro-CT [START_REF] Li | A compressed sensing-based iterative algorithm for CT reconstruction and its possible application to phase contrast imaging[END_REF][START_REF] Zhao | High-resolution, low-dose phase contrast X-ray tomography for 3D diagnosis of human breast cancers[END_REF][START_REF] Liu | Recent advances in synchrotron-based hard xray phase contrast imaging[END_REF], but only a few studies have addressed synchrotron nano-CT [START_REF] Villanueva-Perez | Contrast-transfer-function phase retrieval based on compressed sensing[END_REF][START_REF] Melli | A compressed sensing based reconstruction algorithm for synchrotron source propagationbased X-ray phase contrast computed tomography[END_REF]. The goal of this study is to assess the use of compressed sensing to reduce dose in synchrotron phase nano-CT for bone applications. In this paper, we only address the tomographic reconstruction step. To this aim, we propose a total variation problem and solve it using the Split Bregman formulation, which is an efficient formulation for solving L1-based problems [START_REF] Osher | An Iterative Regularization Method for Total Variation-Based Image Restoration[END_REF][START_REF] Goldstein | The Split Bregman Method for L1-Regularized Problems[END_REF]. Similar proximal methods and splitting algorithms have been proposed to solve large-scale problems due to their scalability potential [START_REF] Chamorro-Servent | Use of Split Bregman denoising for iterative reconstruction in fluorescence diffuse optical tomography[END_REF][START_REF] Onose | Scalable splitting algorithms for big-data interferometric imaging in the SKA era[END_REF]. To assess the proposed method for bone imaging, we created different low-dose imaging scenarios by reducing the number of projections and tested it on several bone samples.

MATERIAL AND METHODS

Compressed sensing formulation

CS ensures accurate image reconstruction from undersampled data under certain conditions [START_REF] Candès | Stable signal recovery from incomplete and inaccurate measurements[END_REF]. If $p$ represents the data corresponding to a low number of projections, $R$ the forward operator equivalent to a slice-by-slice 2D Radon transform (for parallel geometry), and $f$ the unknown image, assumed sparse under a known transformation $\Psi$, then $f$ can be recovered by solving the following convex problem:

$$\min_f \|\Psi f\|_1 \quad \text{such that} \quad \|Rf - p\|_2^2 \le \sigma^2, \qquad (1)$$

where $\sigma^2$ accounts for noisy data. In this work we consider the 2D gradient transform $\Psi = \nabla$, which leads to the total variation functional, whose isotropic formulation is given by $\|\nabla f\|_1 = \sum \sqrt{(\nabla_x f)^2 + (\nabla_y f)^2}$.
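As an illustration of the sparse-view setting of Eq. (1), the following sketch (not part of the original paper) simulates undersampled parallel-beam projections of a toy phantom and evaluates the isotropic TV term with NumPy and scikit-image; the phantom, the image size, and the use of scikit-image's radon/iradon pair are illustrative assumptions, not the authors' acquisition or code.

```python
# Minimal sketch: sparse-view projection simulation and isotropic TV (assumptions noted above).
import numpy as np
from skimage.transform import radon, iradon

def isotropic_tv(f):
    """Isotropic total variation ||grad f||_1 using forward differences."""
    gx = np.diff(f, axis=1, append=f[:, -1:])   # horizontal gradient (replicated boundary)
    gy = np.diff(f, axis=0, append=f[-1:, :])   # vertical gradient
    return np.sum(np.sqrt(gx**2 + gy**2))

# Toy 256x256 slice standing in for a phase map of bone tissue (hypothetical phantom).
f_true = np.zeros((256, 256))
f_true[96:160, 96:160] = 1.0

n_full = int(np.pi / 2 * f_true.shape[0])        # "fully sampled" number of projections
for factor in (2, 4, 7, 10):                     # low-dose scenarios as in the paper
    angles = np.linspace(0.0, 180.0, n_full // factor, endpoint=False)
    p = radon(f_true, theta=angles, circle=True)     # undersampled sinogram, i.e. R f
    f_fbp = iradon(p, theta=angles, circle=True)     # classical FBP baseline
    print(factor, isotropic_tv(f_fbp))
```

For strongly undersampled angle sets, the FBP baseline typically shows streaks that inflate the TV value, which is the behaviour the TV-regularised reconstruction below is designed to suppress.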
To solve problem (1) we use the Split Bregman formulation, which efficiently handles L1-based constrained problems [START_REF] Osher | An Iterative Regularization Method for Total Variation-Based Image Restoration[END_REF][START_REF] Goldstein | The Split Bregman Method for L1-Regularized Problems[END_REF][START_REF] Abascal | Fluorescence diffuse optical tomography using the split Bregman method[END_REF]. The Split Bregman formulation splits the L1-norm and L2-norm terms in such a way that each can be solved analytically in a separate step. To allow for this splitting, we introduce new variables, x and y, and formulate a new problem that is equivalent to (1):

(f^{k+1}, x^{k+1}, y^{k+1}) = arg min_{f,x,y} ‖(x, y)‖₁ + (λ/2) ‖x − ∇_x f − b_x‖₂² + (λ/2) ‖y − ∇_y f − b_y‖₂² + (µ/2) ‖Rf − p^k‖₂²,   (2)
p^{k+1} = p^k + p − R f^{k+1},   (3)
b_x^{k+1} = b_x^k + ∇_x f^{k+1} − x^{k+1},   (4)
b_y^{k+1} = b_y^k + ∇_y f^{k+1} − y^{k+1}.   (5)

Note that the L2 and L1 terms are now independent of each other. The L2 part leads to a linear problem, which is solved analytically, and the L1 parts are given by shrinkage formulae. For a detailed solution we refer to [START_REF] Goldstein | The Split Bregman Method for L1-Regularized Problems[END_REF][START_REF] Abascal | A novel prior-and motion-based compressed sensing method for small-animal respiratory gated CT[END_REF]. The resulting reconstruction algorithm will be referred to as TV-SB.
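As an illustration of the scheme (2)-(5), the sketch below performs one outer Split Bregman sweep. It is a schematic reading of the method rather than the implementation used in this work: the projector R and its adjoint Rt (backprojection) are assumed to be supplied by the caller, and the quadratic f-subproblem is solved approximately by a few gradient steps with a hand-picked step size instead of the exact linear solve mentioned above.

import numpy as np

def grad2d(f):
    # Forward-difference gradient (same convention as in the previous sketch).
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:-1, :] = f[1:, :] - f[:-1, :]
    gy[:, :-1] = f[:, 1:] - f[:, :-1]
    return gx, gy

def div2d(gx, gy):
    # Discrete divergence, the negative adjoint of grad2d.
    dx = np.zeros_like(gx)
    dy = np.zeros_like(gy)
    dx[0, :] = gx[0, :]
    dx[1:-1, :] = gx[1:-1, :] - gx[:-2, :]
    dx[-1, :] = -gx[-2, :]
    dy[:, 0] = gy[:, 0]
    dy[:, 1:-1] = gy[:, 1:-1] - gy[:, :-2]
    dy[:, -1] = -gy[:, -2]
    return dx + dy

def shrink(gx, gy, thresh):
    # Isotropic soft-thresholding: closed-form solution of the L1 subproblem.
    mag = np.sqrt(gx ** 2 + gy ** 2)
    scale = np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale

def tvsb_sweep(f, x, y, bx, by, pk, p, R, Rt, mu, lam, n_inner=20, step=1e-3):
    # One outer iteration of the TV Split Bregman scheme, cf. (2)-(5).
    for _ in range(n_inner):
        # Approximate minimisation over f of the smooth L2 terms in (2).
        gx, gy = grad2d(f)
        grad = mu * Rt(R(f) - pk) + lam * div2d(x - gx - bx, y - gy - by)
        f = f - step * grad
    gx, gy = grad2d(f)
    # Shrinkage step on the split gradient variables (the L1 part).
    x, y = shrink(gx + bx, gy + by, 1.0 / lam)
    # Bregman updates of the data term and of the split variables, (3)-(5).
    pk = pk + p - R(f)
    bx = bx + gx - x
    by = by + gy - y
    return f, x, y, bx, by, pk

Starting from f = Rt(p) (or zeros) and x = y = bx = by = 0, the sweep is repeated until the residual ‖Rf − p‖₂ falls below the noise level σ; the weights λ and µ play the same role as in (2).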
Data and image analysis
Data were acquired at the European Synchrotron Radiation Facility (ESRF) on beamline ID16A [START_REF] Yu | Assessment of imaging quality in magnified phase CT of human bone tissue at the nanoscale[END_REF]. The samples corresponded to bone acquired at different propagation distances. A phase retrieval step was applied to each data set of four projections to obtain a phase map. The set of phase maps retrieved at each projection angle was then reconstructed by filtered back projection (FBP) to obtain the 3D phase image of size 2048³ with a voxel size of 120 nm [START_REF] Langer | Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography[END_REF]. The original image had a size of 2048³ and occupied 60 GB. In order to assess the algorithm in a wide range of scenarios, we created smaller images by extracting three VOIs from the volume reconstructed using fully-sampled data. To evaluate the method on relevant structures of interest in bone at the cellular scale, we selected features representative of the osteo-canalicular system, which plays a major role in bone physiology. The first target represents an osteocyte lacuna, the osteocyte being the most abundant cell type in bone tissue; the second represents an osteocyte lacuna including calcium deposits; and the third targets canaliculi, which are the small channels connecting the osteocyte lacunae (target 1 is displayed in Fig. 1 and targets 2 and 3 are displayed in Fig. 2). For each of these VOIs, the projections were simulated numerically. Low-dose scenarios were created by reducing the number of projections by 1/2, 1/4, 1/7 and 1/10. An acquisition was considered fully sampled when the number of projections was equal to π/2 times the image size.

Comparison of methods and assessment of image quality
FBP and TV-SB algorithms were evaluated in terms of several metrics. We used the root mean squared error (RMSE), the peak signal to noise ratio (PSNR) and a streak artefact measure (SAM) [START_REF] Abascal | A novel prior-and motion-based compressed sensing method for small-animal respiratory gated CT[END_REF]. In addition, reconstructed images were assessed by visual inspection to evaluate the preservation of edges and bone features. Edge preservation was displayed using Canny edge detection. The RMSE is given by ‖f̂ − f‖ / ‖f‖, where f is the target image and f̂ the reconstructed image, and the PSNR by 10 log₁₀(L²/MSE), where MSE is the mean squared error and L is the range of the values of the image pixels. The SAM is defined as TV(f̂ − f) = ‖∇(f̂ − f)‖₁. We assessed TV-SB and FBP in all scenarios for target 1. Then, we selected the achievable dose reduction and evaluated both methods on the other two targets, 2 and 3.
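For reference, a possible reading of these three figures of merit in code form is given below (our interpretation of the definitions above; the exact normalisation used in the evaluation, for instance the choice of L, may differ).

import numpy as np

def rmse(f_rec, f_ref):
    # Relative error ||f_rec - f_ref|| / ||f_ref||, reported below as a percentage.
    return np.linalg.norm(f_rec - f_ref) / np.linalg.norm(f_ref)

def psnr(f_rec, f_ref):
    # Peak signal to noise ratio 10 log10(L^2 / MSE), with L the pixel value range.
    mse = np.mean((f_rec - f_ref) ** 2)
    L = f_ref.max() - f_ref.min()
    return 10.0 * np.log10(L ** 2 / mse)

def sam(f_rec, f_ref):
    # Streak artefact measure: isotropic total variation of the error image.
    e = f_rec - f_ref
    gx = np.zeros_like(e)
    gy = np.zeros_like(e)
    gx[:-1, :] = e[1:, :] - e[:-1, :]
    gy[:, :-1] = e[:, 1:] - e[:, :-1]
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))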
RESULTS
The different metrics for target 1 are displayed in Table 1. TV-SB presented an RMSE of 2% for 1/10th of the projections, while FBP led to an RMSE of 6% already for one half of the projections and to 11% for 1/10th of the projections. For PSNR, good image quality was considered to correspond to values above 70 dB. While TV-SB had a PSNR above 70 for all scenarios, FBP presented lower values in all cases. SAM is the most interesting metric as it measures aliasing artefacts and fake edges. TV-SB led to low values of SAM across scenarios. On the contrary, FBP led to a value of SAM larger than 5 for half of the projections and to a three-fold increase for 1/10th of the projections. Visual inspection corroborates these results. Figure 1 shows the images reconstructed using FBP and TV-SB and their corresponding edges for the different scenarios for target 1. For half of the projections, FBP presented noise and artefacts and lost the edges with the lowest contrast. TV-SB preserved the most relevant details up to 1/4th of the projections and retained the edges with the largest contrast for up to 1/10th of the projections. Table 2 and Figure 2 show the results of reconstructions by FBP and TV-SB for 1/4th of the projections for all targets. As in target 1, TV-SB retains image quality and bone details for these samples.

DISCUSSION
We proposed a TV-SB algorithm for low-dose synchrotron phase nano-CT and validated it on bone data. The results show that TV-SB allows accurate reconstruction using up to 1/4th of the projections and that the highest contrasts can be preserved with 1/10th of the projections. On the contrary, traditional FBP reconstruction presented noise and blurred edges and details already when using half of the projections. The total amount of dose that can be reduced with the proposed method depends on the criterion selected for image quality. For bone imaging, post-processing requires bone details and low-contrast details to be preserved. Here, we found that the compressed sensing method could allow a four-fold reduction in the number of projections. This could translate to a four-fold reduction in dose and a significant decrease in acquisition time, which is crucial given the limited access to synchrotron beam time. In addition, decreasing acquisition time has the potential to reduce motion artefacts, which limit resolution and image quality. In this work we investigated low-dose protocols using compressed sensing. However, this work is subject to a few limitations. The methods were assessed on small numerical phantoms since evaluation of the real data set (60 GB) was unfeasible with the current version of the algorithm, due to extended computation times. Future work will address the implementation of the proposed algorithm on a cluster, exploiting the efficient projection and backprojection operators that have been developed for ESRF data [START_REF] Mirone | The PyHST2 hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities[END_REF]. Similar proximal methods and splitting algorithms have been proposed to solve large scale problems because of their scalability potential [START_REF] Chamorro-Servent | Use of Split Bregman denoising for iterative reconstruction in fluorescence diffuse optical tomography[END_REF][START_REF] Onose | Scalable splitting algorithms for big-data interferometric imaging in the SKA era[END_REF]. In addition, reducing the number of projections would lead to reduced computation times. In this work we reached a four-fold reduction in the number of projections using TV-SB, but a larger reduction could be obtained by using a higher order total variation method [START_REF] Bredies | Total Generalized Variation[END_REF] or another sparsity-promoting functional.

Fig. 1. First and second rows: reconstructed images using FBP and TV-SB, respectively, for data with projections reduced by 1/2, 1/4, and 1/10. Third and fourth rows: edges corresponding to the previous images. We consider here target 1.

Fig. 2. Reference image and reconstructed images with FBP and TV-SB for 1/4th of the projections for targets 2 and 3.

Table 1. Results using FBP and TV-SB algorithms for 1/2, 1/4, 1/7, and 1/10 of the total number of projections.
number of projections   1/2    1/4    1/7    1/10
RMSE (FBP)              6%     7%     9%     11%
PSNR (FBP)              68     67     64     62
SAM (FBP)               5.7    8.4    14.2   18
RMSE (TV-SB)            1%     1%     2%     2%
PSNR (TV-SB)            83     82     79     77
SAM (TV-SB)             1.4    1.7    2      2.5

Table 2. Results using TV-SB for 1/4th of the total number of projections and for the three targets.

ACKNOWLEDGMENTS
We thank the beamline team at ESRF ID16A, Alexandra Pacureanu and Peter Cloetens from ESRF, and Boliang Yu and Cécile Oliver from CREATIS.
01774864
en
[ "sdu.stu.gm" ]
2024/03/05 22:32:18
2018
https://normandie-univ.hal.science/hal-01774864/file/Bertranetal2018.pdf
Pascal Bertran email: [email protected] Eric Andrieux Mark D Bateman Marianne Font Kevin Manchuel Déborah Sicilia
Features caused by ground ice growth and decay in Late Pleistocene

Introduction
Over the past decade, the creation of a database of relict periglacial features in France allowed documentation of the maximum Pleistocene extent of permafrost and made it possible to delineate permafrost types at the scale of the whole territory [START_REF] Bertran | Distribution and chronology of Pleistocene permafrost features in France: database and first results[END_REF][START_REF] Bertran | Pleistocene involutions and patterned ground in France: examples and analysis using a GIS database[END_REF] (Andrieux et al., 2016a, 2016b). Ice wedge pseudomorphs, which indicate at least widespread discontinuous permafrost, were only observed north of latitude 47.5°N in lowlands (Fig. 1). Farther south, between latitudes 47.5°N and 43.5°N, the main features listed are involutions and thermal contraction cracks filled with aeolian sand (sand wedges) at the periphery of coversands. The lack of ice wedge pseudomorphs suggests that soil temperature was too high to allow ice bodies to grow over long time periods. Therefore, this latitudinal band is considered to have been affected by sporadic permafrost. South of 43.5°N, no periglacial features have been reported, and permafrost was probably completely absent even during the coldest phases of the Glacial. In the area affected by widespread permafrost, the existence of other types of ground ice (interstitial, segregation, injection, icing, firn) appears highly plausible by analogy with modern Arctic environments. Platy structures caused by segregation ice lenses in fine-grained sediments have been widely reported, particularly in loess (e.g., [START_REF] Van Vliet | Correlation between fragipans and permaforst with special reference to silty Weichselian deposits in Belgium and northern France[END_REF][START_REF] Vliet-Lanoë | Le niveau à langues de Kesselt, niveau repère de la stratigraphie du Weichsélien supérieur européen: signification paléoenvironnementale et paléoclimatique[END_REF][START_REF] Antoine | Last interglacial-glacial climatic cycle in loess-palaeosol successions of North-Western France[END_REF]). In contrast, no indisputable evidence of the growth or decay (thermokarst) of large bodies of segregation or injection ice is known. Potential thermokarst structures have been reported in the literature but remain debated. Shallow rounded depressions attributed to the melting of pingos or lithalsas have been described by many authors, particularly in the vicinity of Bordeaux and in the Landes district (SW France) [START_REF] Boyé | Les lagunes du plateau landais[END_REF][START_REF] Legigan | L'élaboration de la formation du Sables des Landes.
Dépôt résiduel de l'environnement sédimentaire Pliocène-Pléistocène centre aquitain[END_REF], as well as in the Paris Basin [START_REF] Michel | Description de formations quaternaires semblables à des « diapirs » dans les alluvions de la Seine et de la Marne près de Paris[END_REF][START_REF] Michel | Dépressions fermées dans les alluvions anciennes de la Seine à 100 km au S-E de Paris[END_REF][START_REF] Courbouleix | Mares, mardelles et pergélisol: exemple des dépressions circulaires de Sologne[END_REF][START_REF] Lécolle | Que faire des dépressions fermées?[END_REF][START_REF] Vliet-Lanoë | Quaternary thermokarst and thermal erosion features in northern France: origin and palaeoenvironments[END_REF]. In SW France, a periglacial origin of the depressions, locally called 'lagunes', has recently been invalidated [START_REF] Texier | Genèse des lagunes landaise: un point sur la question[END_REF][START_REF] Becheler | L'origine tectono-karstique des lagunes de la région Villagrains-Landiras[END_REF] and has been shown to be mainly related to limestone dissolution (doline) below the coversands. Some shallow depressions correspond to deflation hollows upwind from parabolic dunes or to flooded areas following the dam of small valleys by dunes [START_REF] Sitzia | Chronostratigraphie et distribution spatiale des dépôts éoliens du Bassin Aquitain[END_REF]. In the Paris Basin, the authors acknowledge the difficulty of demonstrating a thermokarst origin. Alternative hypotheses (karst, anthropogenic activity) remain problematic to eliminate in the majority of cases. Detailed analysis and dating of the filling of depressions from NE France [START_REF] Etienne | The origin of closed depressions in Northeastern France: a new assessment[END_REF] has, for example, led to an anthropogenic origin (marl extraction to amend fields during the Medieval period). Convincing thermokarst remnants have been identified in a German loess sequence at Nussloch in the Rhine valley, ca. 50 km from the French border [START_REF] Antoine | Les processus thermokarstiques: marqueurs d'épisodes de réchauffement climatique rapides au cours du Dernier Glaciaire dans les séquences loessiques ouest-européennes. Oral Presentation[END_REF][START_REF] Kadereit | The chronological position of the Lohne Soil in the Nussloch loess section -re-evaluation for a European loess-marker horizon[END_REF]. The structures correspond to gullies some tens of metres in width with ice wedge pseudomorphs locally preserved at the bottom. They are interpreted as erosional features caused by the melting of an ice wedge network on the slope according to a well-documented model in modern environments [START_REF] Seppälä | Piping causing thermokarst in permafrost, Ungava Peninsula, Québec, Canada[END_REF][START_REF] Fortier | Observation of rapid drainage system development by thermal erosion of ice wedges on Bylot Island, Canadian Arctic Archipelago[END_REF]. Until now, no similar structure has been reported from the French territory. As part of the SISMOGEL project (which involves Electricity De France (EDF), Inrap, and the universities of Bordeaux and Caen), various sites showing deformations in Quaternary sediments were reevaluated. Two of them, Marcilly-sur-Seine and Gourgançon, located in an alluvial context in the Paris Basin, have been studied in detail through the survey of quarry fronts and are the subject of this article. 
Similar sites are then identified in northern France by using information from the aerial photographs available on Google Earth, topographical data from the 5-m DEM of the Institut Géographique National (IGN), and borehole data stored in the Banque du Sous-Sol (BSS) of the Bureau des Recherches Géologiques et Minières (BRGM). Overall, this study provides new evidence of permafrost-induced ground deformations in France and strongly suggests that thermokarst played a significant and probably largely underestimated role in the genesis of Late Pleistocene landscapes. Geomorphological context of the study region The investigated sites are located 110 to 130 km ESE from Paris in the upper Cretaceous chalk aureole of the basin (Fig. 2). This area remained unglaciated during the Pleistocene cold periods but experienced phases of permafrost development. Because of limited loess deposition (the area was at the southern margin of the north European loess belt; [START_REF] Bertran | A map of Pleistocene aeolian deposits in Western Europe, with special emphasis on France[END_REF], remnants of periglacial landscapes are still easily readable in aerial photographs and most of the polygons caused by thermal contraction cracking of the ground and soil stripes caused by active layer cryoturbation found in France from aerial survey are concentrated in this latitudinal band (Andrieux et al., 2016a). Single grain OSL dating of the infilling of sand wedges and composite wedge pseudomorphs from sites located in the Loire valley showed that thermal contraction cracking occurred repeatedly during Marine Isotopic Stages (MIS) 4, 3, 2, and early MIS 1 (Younger Dryas) [START_REF] Andrieux | The chronology of Late Pleistocene thermal contraction cracking derived from sand wedge OSL dating in central and southern France[END_REF]. In contrast, available chronological data on ice wedge pseudomorphs preserved in loess sequences of northern France strongly suggest that perennial ice (i.e., permafrost) was able to develop only during shorter periods of MIS 4 to 2 and that the largest pseudomorphs date to between 21 and 31 ka [START_REF] Locht | La séquence loessique Pléistocène supérieur de Savy (Aisne, France): stratigraphie, datations et occupations paléolithiques[END_REF][START_REF] Antoine | Les séquences loessiques Pléistocène supérieur d'Havrincourt (Pas-de-Calais, France): stratigraphie, paléoenvironnements, géochronologie et occupations paléolithiques[END_REF]. By contrast to northern Europe where most of the identified thermokarst structures have been dated to the very end of MIS 2 and the Lateglacial (Pissart, 2000b), similar structures in the Paris Basin, if present, should be significantly older and, thus, may potentially have 2018) and corresponds to the modelled LGM isotherm (Max-Plank Institute PMIP3 model, courtesy of K. Saito) that best fits the southern limits of ice wedge pseudomorphs. LGM glaciers are from [START_REF] Ehlers | Quaternary Glaciations. Extent and Chronology, Part I: Europe[END_REF] for the Alps and the Pyrenees and from [START_REF] Hughes | The last Eurasian ice sheetsa chronological database and time-slice reconstruction[END_REF] for the British-Scandinavian Ice Sheet. left much poorly preserved evidence in the landscape. Thermokarst develops today in ice-rich permafrost, typically in poorly drained valley bottoms, large deltas, and lake margins and in Yedoma-type formations in high latitude regions where abundant syngenetic ice formed during the Pleistocene. 
The Weichselian alluvial terraces (generally referred to as Fy on geological maps) of the main rivers crossing the Paris Basin are potentially suitable contexts for searching thermokarst structures. These terraces have been largely exploited for gravel production around Paris since the 1950s and provided evidence of periglacial structures [START_REF] Michel | Description de formations quaternaires semblables à des « diapirs » dans les alluvions de la Seine et de la Marne près de Paris[END_REF][START_REF] Michel | Dépressions fermées dans les alluvions anciennes de la Seine à 100 km au S-E de Paris[END_REF]. These quarries are no more accessible today. The quarries of Marcilly-sur-Seine (still in activity) and Gourgançon are located upstream and provide a good opportunity to investigate former potentially ice-rich fluvial deposits. Methods The sections were water jet and manually cleaned, and detailed photographs were taken. The stratigraphy was based on visual inspection and measurement of the sections. Three samples for grain size analysis were taken from the basal lacustrine unit in Marcilly-sur-Seine. The samples were processed in the PACEA laboratory (Université de Bordeaux, France) using a Horiba LA-950 laser particle size analyser. The pretreatment includes suspension in sodium hexametaphosphate (5 g/L) and hydrogen peroxide (35%) for 12 h, and 60 s of ultrasonification to achieve optimal dispersion. The Mie solution to Maxwell's equations provided the basis for calculating particle size using a refractive index of 1.333 for water and 1.55i-0.01i for the particles. An undisturbed block of lacustrine sediment was also sampled and vacuum impregnated with polyester resin following the method described by [START_REF] Guilloré | Méthode de fabrication mécanique et en série des lames minces[END_REF] to prepare a thin section. The AMS radiocarbon dating on bulk lacustrine silt sampled in Marcilly-sur-Seine was made by Beta Analytic (Miami, USA). Optically Stimulated Luminescence (OSL) dating was carried out on sand from the same site at the Luminescence Dating Laboratory of the University of Sheffield (UK). The OSL sample was collected by hammering into the freshly exposed section a metal tube (60 mm in diameter, 250 mm long). To avoid any potential light contamination that may have occurred during sampling, 2 cm of sediment located at the ends of the tube was removed. The remainder of the sample was sieved and chemically treated to extracts 90 to 180 μm diameter quartz grains as per [START_REF] Bateman | An absolute chronology for the raised beach deposits at Sewerby, E. Yorkshire, UK[END_REF]. The dose rate was determined from analysis undertaken using inductively coupled plasma mass spectroscopy (ICP-MS) at SGS Laboratories, Montréal (Canada). Adjacent lithostratigraphic units of host sediment were also analysed to establish their γ dose contribution to the sample dated as per [START_REF] Aitken | Thermoluminescence Dating[END_REF]. Conversions to annual dose rates were calculated as per [START_REF] Adamiec | Dose-rate conversion factors update[END_REF] for α and γ, and per [START_REF] Marsh | Monte Carlo determinations of the beta dose rate to tooth enamel[END_REF] for β, with dose rates attenuated for sediment size and palaeomoisture contents (Table 1). For the latter, given the presence in the sediment of features characteristic to the melting of ice, a value of 20 ± 5% was assumed. 
This is a value close to the saturation of sediment in water, and the absolute error of ±5% is incorporated to allow for past changes. Cosmic dose rates were determined following [START_REF] Prescott | Cosmic ray contributions to dose rates for luminescence and ESR dating: large depths and long-term variations[END_REF]. The OSL measurements were undertaken on 9.6 mm single aliquot discs in a Risø automated luminescence reader. The purity of extracted quartz was tested by stimulation with infrared light as per [START_REF] Duller | Distinguishing quartz and feldspar in single grain luminescence measurements[END_REF]. Equivalent dose (De) determination was carried out using the Single-Aliquot Regenerative-dose (SAR; [START_REF] Murray | The single aliquot regenerative dose protocol: potential for improvements in reliability[END_REF]; Table 1). The sample displayed OSL decay curves dominated by the fast component, had good dose recovery, low thermal transfer, and good recycling. Twenty-four De replicates were measured for the sample, and these showed the De distribution was unimodal with a low overdispersion (OD; b20%), therefore the age was extracted using the Central Age Model (CAM; [START_REF] Galbraith | Optical dating of single and multiple grains of quartz from Jinmium Rock Shelter, Northern Australia: part I, experimental design and statistical models[END_REF]. The final age, with 1σ uncertainties, is therefore considered a good burial age for the sediment sampled. Results Marcilly-sur-Seine Geomorphological setting Marcilly-sur-Seine (48.5411°N, 3.7234°E) is located in the Seine valley near its confluence with the Aube River in the Paris Basin (Fig. 2). The local substrate comprises alluvium overlying upper Cretaceous chalk. The studied cross sections cut the Fy terrace (geological map at 1:50,000, infoterre.brgm.fr), which dominates the Holocene floodplain (Fz) by 2 to 3 m (Fig. 3). The wide Fy terrace exhibits an undulating topography as shown by the 5-m DEM (IGN), which contrasts with the even topography of the Fz floodplain. The main recognisable topographical features consist either in shallow depressions b1 m deep or in small conical mounds especially on the edge of the terrace (Fig. 4). Shallow sinuous channels also cross the entire surface. In aerial photography, Fy appears irregularly covered with subcircular or elongated dark spots a few tens of metres to 150 m in length (Fig. 5). This type of structure is lacking on the Fz floodplain, which is crossed by large abandoned channels filled with fine-grained, dark-coloured sediments. Stratigraphy The observations were made on two trenches, the main (Section 1) about 2 m deep and 100 m long oriented east/west, the other (Section 2) 1.5 m deep and 28 m long oriented northwest/southeast. The stratigraphy of Section 1 comprises the following units, from the bottom to the top (Fig. 6): [1] Sandy gravel alluvium. They are only punctually exposed at the surface in the quarry and are not visible in the trench. When visible, the dominant lithofacies [START_REF] Miall | The Geology of Fluvial Deposits[END_REF] consists of trough cross-bedded gravel (Gt) with interstratified sand beds. According to available boreholes from the BSS and observation of the main quarry front, the alluvial deposits form a 5-7 m thick sheet overlying the chalk substrate. [2] A laminated silt unit up to 2 m thick. The laminae are a few millimetres to 1 cm thick (Fig. 7A). 
The grain size is polymodal (probably because of the mixing of different laminae during sampling), and the main modes range between 13 μm (fine silt) and 80 μm (fine sand) (Fig. 8). Small fragments of vegetal tissues and insect cuticle are scattered in the detrital material (Fig. 9). This unit is interpreted as organic-poor lake deposits (Fl). A root porosity associated with ferruginous precipitation is also present but poorly developed. The upper part of this unit is structured in millimetre-thick lamellae (platy structure) caused by segregation ice lenses (Fig. 7B), and the lamination is totally obliterated (facies Fm). [3] A sandy gravel unit (Gt, Sh) about 1 m thick, showing an upward fining trend (Fig. 7C). It corresponds to fluvial deposits that fill a channel eroding the underlying fine-grained unit. A thin ferruginous pan develops at the contact between the units. [4] Massive sandy gravel deposits (Gm) 1 m thick overlying the alluvium. Locally, the sediment contains a large proportion of fine particles, and the gravels are scattered in a sandy silt matrix (matrix support, Dmm). Some sand levels form involutions with a massive structure (facies Sm). This unit is interpreted as slumped alluvial and lacustrine deposits. [5] Sand (Sh) and laminated or massive and silt deposits (Fl, Fm) with a platy structure unconformably cover unit [2] in the western part of the trench, where they can reach 2 m in thickness. This unit also corresponds to lake deposits. Because of truncation caused by quarry works, its stratigraphical relationship with units [3] and[4 In the western part of the trench, the section shows laminated silts (unit [5]) extending over several tens of metres. This unit is locally affected by normal faults with an offset of a few centimetres. A recumbent fold involving sand and silt beds is also visible (Fig. 6). At the western end, a small cross-section transverse to the main trench exposes a sandy gravel unit showing planar cross stratification with a dip of 30 to 33°. A tilted block of bedded sand is interstratified in this unit, which is interpreted as a small delta (Fig. 10B). Laterally, laminated silts cover the deltaic sands. The beds show a 20°plunge but become progressively horizontal about 10 m to the east (Fig. 10C). The lack of onlap structures indicates that the plunge resulted mostly from post-sedimentary deformation caused by the collapse of the central part of the lake deposits. The second trench (Section 2) also shows strongly deformed sandy gravel interstratified with fine-grained lake deposits (Fig. 11). Deformation is pervasive in this trench and in other locations in the quarry. It comprises (i) inverse faults associated with the subsidence of sandy gravel units (Fig. 12A), (ii) overturned folds in sandy gravel or silt (Fig. 12B), (iii) involutions, and (iv) tilted and faulted deltaic sands (Fig. 12C). Chronological data Radiocarbon dating of lake silts collected at the bottom of the main trench (Fig. 6) provided an age of 20,320 ± 70 BP (Beta-470,451), i.e., after calibration (Intcal13 calibration curve, [START_REF] Reimer | Intcal13 and Marine13 radiocarbon age calibration curves 0-50,000 years cal BP[END_REF] between 24,645 and 24,120 a. cal BP (2σ). 
This age corresponds to Greenland stadial GS-3 [START_REF] Rasmussen | A stratigraphic framework for abrupt climatic changes during the Last Glacial period based on three synchronized Greenland ice-core records: refining and extending the INTIMATE event stratigraphy[END_REF], one of the coldest periods of the Last Glacial [START_REF] Hughes | A stratigraphical basis for the Last Glacial Maximum (LGM)[END_REF]. The OSL dating of unit [3] sands (location in Fig. 6) was also carried out from which an age of 16.6 ± 0.9 ka (Shfd 17,101) was obtained. This places the late phase of fluvial deposition within Greenland Stadial GS-2.1a. Interpretation The site of Marcilly-sur-Seine shows lake deposits resting on the lower terrace (Fy) of the Seine River. The low organic content of the silts suggests that the banks were poorly vegetated and that the biological productivity in the lake was weak. Lamination preservation also indicates a near absence of bioturbation on the lake bottom. Because the lake was shallow, these features indicate an environment unfavourable to biological activity, probably a periglacial context in agreement with the numerical ages obtained. In such a context, the hypothesis of a thermokarst origin can be proposed. It is supported by the following arguments: -According to the widely accepted scheme for northern Europe, the rivers adopted a braided pattern during the Last Glacial [START_REF] Antoine | Response of the Selle river to climatic modifications during the Lateglacial and Early Holocene (Somme Basin, Northern France)[END_REF][START_REF] Briant | Climatic control on Quaternary fluvial sedimentology of a Fenland Basin river, England[END_REF][START_REF] Vandenberghe | The fluvial cycle at cold-warm-cold transitions in lowland regions: a refinement of theory[END_REF]. The accumulation of fine-grained particles in abandoned channels is typically reduced [START_REF] Miall | The Geology of Fluvial Deposits[END_REF], and the formation of thick lake deposits seems unlikely in this kind of fluvial environment. -Lake silts formed after a phase of ice wedge degradation associated with sediment subsidence and fracturing. The development of shallow thermokarst lakes (typically 1-5 m; [START_REF] Hinkel | Thermokarst Lakes on the Arctic coastal plain of Alaska: geomorphic controls on bathymetry[END_REF] caused by the melting of ice wedge networks is a common process in permafrost-affected floodplains of modern Arctic milieus. Drainage occurs as a result of erosion of the lake margin by fluvial channels, or because of the decay of ice wedge polygons in adjacent land [START_REF] Mackay | Catastrophic lake drainage, Tuktoyaktuk peninsula area, District of Mackenzie[END_REF][START_REF] Jones | Observing a catastrophic thermokarst lake drainage in Northern Alaska[END_REF], or else because of permafrost thaw under the lake [START_REF] Yoshikawa | Shrinking thermokarst ponds and groundwater dynamics in discontinuous permafrost near Council, Alaska[END_REF]. The presence of ice wedge pseudomorphs in the Fy alluvium is attested in many sites in the study area [START_REF] Michel | Périglaciaire des environs de Paris[END_REF]Fig. 13). The mound-like topography observed on the edge of the Fy terrace (Fig. 
4) can also be interpreted as remnants of degraded ice wedge polygons (badland thermokarst reliefs; [START_REF] French | The Periglacial Environment[END_REF][START_REF] Kokelj | Advances in thermokarst research[END_REF][START_REF] Steedman | Spatio-temporal variation in high-centre polygons and ice-wedge melt ponds, Tuktoyaktuk coastlands, Northwest Territories[END_REF], and the shallow sinuous valleys between these reliefs are likely to be meltwater channels [START_REF] Fortier | Observation of rapid drainage system development by thermal erosion of ice wedges on Bylot Island, Canadian Arctic Archipelago[END_REF]. -Fluvial channels built small deltas in the lake. The lake centre collapsed and the laminated deposits were deformed. Tilting of the deltas during their edification indicates that subsidence may have been partly synsedimentary. This would result from progressive permafrost melting during widening of the thermokarst lake [START_REF] Morgenstern | Evolution of thermokarst in East Siberian ice-rich permafrost: a case study[END_REF]. The large recumbent folds are original structures rarely reported in the literature. Related structures have been described by Pissart (2000aPissart ( , 2000b) ) in ramparts surrounding Younger Dryas lithalsa scars in Belgium. According to [START_REF] Pissart | The potential lateral growth of lithalsas[END_REF], the growth of segregation ice mounds in the context of discontinuous permafrost would cause vertical and lateral thrusting of the surrounding sediments. The circular ramparts that remain after ice melting originate from the combined action of lateral thrusting during lithalsa growth and of active layer slumping on the hillside. Trenches in the ramparts show folds induced by slumping and often normal and reverse faults. Mound collapse during thaw causes subsidence of the deformed sediments, and the hinge of the folds then becomes subhorizontal. In the context of Marcillysur-Seine, the growth of ice-cored mounds during periods of permafrost development appears highly probable and would have been responsible by part for the formation of pools. According to [START_REF] Wolfe | Lithalsa distribution, morphology and landscape associations in the Great Slave Lowland, Northwest Territories, Canada[END_REF] in Canada, the lithalsas develop mainly in fine-grained deposits favourable to ice segregation, especially in glaciomarine or glaciolacustrine clayey silt deposits in wet lowlands. They reach 1 to 10 m in height and have a rounded or elongated shape (lithalsa plateaus and ridges). This type of context appears similar to that inferred at Marcilly-sur-Seine. Fig. 14 depicts the main sedimentary phases identified in Marcillysur-Seine. Ice wedge formation predates 24 ka cal BP and may correspond to the main phases of ground ice development (31-25 ka) as identified from the loess sections in northern France [START_REF] Antoine | Les séquences loessiques Pléistocène supérieur d'Havrincourt (Pas-de-Calais, France): stratigraphie, paléoenvironnements, géochronologie et occupations paléolithiques[END_REF][START_REF] Bertran | Distribution and chronology of Pleistocene permafrost features in France: database and first results[END_REF]. Gourgançon Geomorphological setting Gourgançon (48.6840°N, 4.0380°E) corresponds to an old quarry in the Fy alluvial terrace of the Maurienne River, a small tributary of the Aube River. The river watershed is entirely located in Cretaceous terrains, and therefore, the fluvial deposits are mostly calcareous. 
The local substrate is composed of Santonian (c4) and Campanian (c5) chalk, which forms hilly relief up to 50 m above the valley (Fig. 15). The chalk is affected by faults near the site [START_REF] Baize | Non-tectonic deformation of Pleistocene sediments in the eastern Paris basin, France[END_REF]. The discontinuous loess cover and the underlying fragmented chalk are frequently affected by cryoturbation, which forms soil stripes on slopes. The IGN aerial photographs make it possible to identify soil stripes in many fields surrounding the study site, particularly in areas where the Campanian substrate outcrops (Fig. 16). Gourgançon has been the subject of previous publications [START_REF] Baize | Non-tectonic deformation of Pleistocene sediments in the eastern Paris basin, France[END_REF][START_REF] Benoit | Quaternary faulting in the central Paris basin: evidence for coseismic rupture and liquefaction[END_REF][START_REF] Vliet-Lanoë | Quaternary thermokarst and thermal erosion features in northern France: origin and palaeoenvironments[END_REF], and divergent interpretations were proposed to explain the origin of the deformations. Stratigraphy The stratigraphy comprises the following units, from the bottom to the top (Fig. 17): [1] Poorly stratified chalk gravel (Gm), mostly exposed in the SW part of the quarry with a maximum thickness of 3 m. This unit is interpreted as alluvium. [2] Dominantly horizontally bedded sand and small gravel (Sh) (Fig. 18A,B). Lenses with planar cross bedding (Sp, current ripples) or massive lenses (Sm, probably related to sedimentary mass flows) are also visible. This unit is 1 to 3 m thick and mostly develops at both ends of the outcrop. [3], [4] Laminated silt and fine sand (Fh) (Fig. 18C,D) showing by place a prismatic structure. These units develop in the central part of the outcrop where they reach almost 3 m thick. Lamination is mostly horizontal but shows a significant dip in the NE part of the cross section. In this area, the lower unit [3] has a strong dip (16-20°) and is affected by brittle deformation. The upper unit [4] rests unconformably on unit [3] and dips at a smaller angle (5-7°). Bedding at the top of the lower unit is distorted and evanescent. Deformation is interpreted as resulting from slumping of the silts. [5] Up to 1 m thick sand and small gravel with planar cross-bedding (Sp) passing laterally to unit [4] (Fig. 18C). Units [2] to [5] are interpreted as lake deposits similar to those observed at Marcilly-sur-Seine. According to Van Vliet-Lanoë et al. (2016), the prismatic structure would reflect the development of reticulate ice in the silts. The SW zone of the outcrop, where the silt units are lacking, probably represents a delta fed by inputs coming from the nearby hillslope or, possibly, by alluvial deposits from the Maurienne River. A second delta, later covered by laminated silts, is also visible in the NE part of the section. The foresets [5] reflect delta progradation toward the SW during the final evolution of the lake. Deformation Widespread deformation affects the deposits. Two events can be identified: the first located to the NE is synsedimentary; the second to the SW is postsedimentary. The structures are organised in a similar way and comprise: -A network of symmetric bell-shaped reverse faults (Figs. 17,18A). 
In the SW part of the cross section, which is the most legible, the fault structure is located just above a depression in alluvial deposits, which have been injected by a large body of unstratified, upwardfining sand. The injection has a globular shape with protrusions interpreted as dykes. -A network of conjugate normal faults developed laterally to the reverse faults (Fig. 18B). The first generation of faults developed between two phases of lake sedimentation (Fig. 19) and followed a bulging of the deposits, which caused their slump. The heaved deposits were truncated, and the later lacustrine unit was deposited unconformably on the former. The second faulting event to the SW intersects the whole sequence and has therefore developed at the very end of lake infilling. Chronological data Because of the lack of organic material and the calcareous composition of the deposits, the chronological framework available for this section is limited. The OSL dating of sand from unit [2] was previously tried by CIRAM (CIRAM, 2014), and enough quartz grains were retrieved. The sample gave an age of 13.57 ± 0.56 ka, contemporaneous with the Bölling-Alleröd interstadial (Greenland Interstadial (GI) 1; [START_REF] Rasmussen | A stratigraphic framework for abrupt climatic changes during the Last Glacial period based on three synchronized Greenland ice-core records: refining and extending the INTIMATE event stratigraphy[END_REF] at the end of the Last Glacial. However, since this age reflects the last exposure to light of the quartz grains, i.e., the time of burial, this OSL age would imply that deposition of the overlying sediments, including the lake deposits, would have taken place during the Lateglacial or the Holocene. The lithofacies, however, is not compatible with such an age when compared to other regional alluvial records [START_REF] Pastre | Lateglacial and Holocene fluvial records from the central part of the Paris Basin (France)[END_REF][START_REF] Antoine | Response of the Selle river to climatic modifications during the Lateglacial and Early Holocene (Somme Basin, Northern France)[END_REF], and deposition in an earlier phase of the Last Glacial must be favoured. Incorrect γ-ray dose rate assessment because of sediment heterogeneity could lead to age underestimation by a few millennia. The similarity of the sedimentary sequence with that of Marcilly-sur-Seine also strongly suggests that lake sedimentation occurred during the Last Glacial. Interpretation As in Marcilly-sur-Seine, the sedimentary sequence shows lake deposits overlying coarse-grained alluvium. Deposition took place in a periglacial context and reticulate ice developed in shallow lake sediments. Consequently, thermokarst may be proposed as the most plausible factor for lake formation. Brittle deformation affected the lacustrine units. 
The deformation pattern, which associates a network of bell-shaped reverse faults and normal faults, has already been described from laboratory experiments aimed at reproducing the subsidence of a block under a soft cover [START_REF] Sanford | Analytical and experimental study of simple geologic structures[END_REF] or the formation of a caldera above a magmatic chamber [START_REF] Roche | Sub-surface structures and collapse mechanisms of summit pit[END_REF][START_REF] Walter | Formation of caldera periphery faults: an experimental study[END_REF][START_REF] Geyer | Relationship between caldera collapse and magma chamber withdrawal: an experimental approach[END_REF][START_REF] Coumans | Caldera collapse at near-ridge seamounts: an experimental investigation[END_REF]. In these experiments, bell-shaped fractures form in granular material above the chamber, and annular tension cracks (normal faults) starting from the surface accommodate the collapse laterally. Further development of the fractures up to the surface is accompanied by downward movement of the lower blocks toward the cavity (reverse faulting) (Fig. 19A). Successive fractures are created as the cavity collapses and fills. In the case of uneven vertical stress due to surface reliefs, [START_REF] Coumans | Caldera collapse at near-ridge seamounts: an experimental investigation[END_REF] showed that fracturing may develop asymmetrically above the cavity, and a system of conjugate normal faults forms preferentially in the highest side (Fig. 20B). The fault distribution at Gourgançon shows that two zones of collapse developed: one to the NE between two phases of lacustrine sedimentation; the other to the SW during a final phase of lake filling. The SW structure is centred above a sand injection, showing that high interstitial water pressure occurred leading to hydraulic fracturing and sand fluidization (Ross et al., 2011b). The association between injection and faulting of the overlying sediments strongly suggests that the two phenomena are genetically linked. Therefore, ground subsidence following the collapse of a cavity created by the emptying of a liquefied deep sand layer seems to be the most plausible factor at the origin of faulting. Excess water pressures may be related to different contexts. In nonperiglacial environments, interstitial water pressures higher than hydrostatic hardly develop in freely drained coarse-grained materials unless an external stress is applied. In particular, liquefaction of watersaturated sand, hydraulic fracturing, and fluidization have been reported as a consequence of earthquakes [START_REF] Youd | Liquefaction, flow, and associated ground failure[END_REF][START_REF] Audemard | Survey of liquefaction structures induced by recent moderate earthquakes[END_REF][START_REF] Obermeier | Field occurrences of liquefaction-induced features: a primer for engineering geologic analysis of paleoseismic shaking[END_REF][START_REF] Thakkar | Internal geometry of reactivated and non-reactivated sandblow craters related to 2001 Bhuj earthquake, India: a modern analogue for interpreting paleosandblow craters[END_REF]. 
In periglacial environments, excess water pressure may occur either because of permafrost aggradation at the expense of an unfrozen ground pocket (talik), e.g., during refreezing of sediments in a drained lake in the context of continuous permafrost (closed system), or through gravity-induced water flow in a thawed layer beneath or within the frozen ground (open system) [START_REF] Mackay | focus: permafrost geomorphology[END_REF][START_REF] Mackay | Pingo growth and collapse, Tuktoyaktuk Peninsula area, western Arctic coast, Canada: a long-term field study[END_REF][START_REF] Yoshikawa | Notes on open-system pingo ice, Adventdalen, Spitsbergen[END_REF]. Hydraulic fracturing and water injection followed by its transformation into ice gives rise to massive ice sills overlain by a few decimetre-thick sedimentary cover (pingos, seasonal frost blisters). These can reach several meters in height. Continuous permafrost (and, therefore, the formation of closed system pingos) during the Last Glacial is unlikely in the Paris Basin (Andrieux et al., 2016a). However, the palaeoclimatic (widespread discontinuous permafrost) and geomorphological contexts (alluvium at the foot of a slope) was favourable to the development of open system pingos or frost blisters (e.g., [START_REF] Pollard | Formation of seasonal ice bodies[END_REF][START_REF] Yoshikawa | Notes on open-system pingo ice, Adventdalen, Spitsbergen[END_REF][START_REF] Worsley | Geomorphology and hydrogeological significance of the Holocene pingos in the Karup Valley area, Traill Island, northern east Greenland[END_REF]. In the examples investigated in modern Arctic environments, ground water was confined between the permafrost and the frozen part of the active layer in an alluvial fan or plain. Excess water pressure resulted from gravity flow between the feeder zone and the site. The growth of ice mounds in a fluvial channel led to its abandonment by the river [START_REF] Worsley | Geomorphology and hydrogeological significance of the Holocene pingos in the Karup Valley area, Traill Island, northern east Greenland[END_REF]. In the NE fault zone, no injection structure was observed and the mechanism responsible for collapse and fracturing is less obvious. Tilting of laminated silts, indicative of bulging, followed by slumping provide clear evidence that a mound formed laterally in the lacustrine deposits. This mound developed probably after lake drainage and exposition of the sediments to frost, leading to the growth of segregation ice (lithalsa) or injection ice (or both as is the case for many modern ice mounds according to [START_REF] Harris | Pingos and pingo scars[END_REF]. The lack of obvious injection features may be result from the inappropriate location of the cross section with respect to the structure or from the absence of a sand layer prone to liquefaction at depth. Active layer slumping suitably explains tilting of lacustrine silts [unit 3], soft-sediment deformation observed at the top of this unit, and truncation. Subsequent collapse and fracturing caused by ice melting was followed by resumption of lake sedimentation. Other potential thermokarst structures in alluvial context in the Paris Basin Cross sections in alluvial deposits from the Last Glacial potentially hosting thermokarst structures (except for ice wedge pseudomorphs) are rare. To overcome this difficulty, other indices have been sought to try mapping the areas affected by thermokarst. 
These indices are based on the detailed topographical data available from the 5-m DEM (IGN) and on the aerial photographs accessible in Google Earth. The thermokarst features at Marcilly-sur-Seine are associated with a pitted or undulating topography and a spotted pattern on aerial photographs. This pattern typifies the whole Fy terrace near the Seine-Aube confluence (cf. [START_REF] Vliet-Lanoë | Quaternary thermokarst and thermal erosion features in northern France: origin and palaeoenvironments[END_REF]. Dark spots correspond to finegrained wet (lacustrine) deposits, while light spots indicate that coarser well-drained alluvial materials are exposed. Similar features have, therefore, been sought in other areas of the Paris Basin. If possible, the presence of potential lake deposits has been verified through the borehole data stored in the BSS (BRGM). The identified sites are plotted in Fig. 21. All are located in upper Cretaceous terrains north of latitude 48°N, in an area with abundant ice wedge pseudomorphs (Andrieux et al., 2016a). These features are sometimes associated with other periglacial structures, such as polygons in nearby alluvial deposits (Fig. 22), or soil stripes on slopes. In some sites, available boreholes show fine-grained light-coloured levels, generally described as 'grey clays' (Fig. 23). These deposits, 0.5 to 3 m thick, appear most often at the top of the alluvial sequence, or more rarely are interstratified in alluvial sand and gravel. They contrast with Holocene channel fillings, which usually have a dark colour because of their high content in organic matter and are similar to the lacustrine silts observed at Marcilly-sur-Seine. [START_REF] Michel | Dépressions fermées dans les alluvions anciennes de la Seine à 100 km au S-E de Paris[END_REF] also describes 'marly silts' associated with depressions thought to be of thermokarst origin in the Fy terrace in an area located near Villierssur-Seine, 20 to 40 km west of Marcilly-sur-Seine. Discussion Origin of the brittle deformation The sites of Marcilly-sur-Seine and Gourgançon show that thermokarst lakes developed during the Last Glacial in alluvial deposits in the Paris Basin. In the first site, thermokarst is clearly associated with the melting of an ice wedge network. At least two phases of thermokarst development followed by a phase of lake drainage, alluvial deposition, and segregation ice growth (platy structure) can be identified. According to some authors [START_REF] French | The Periglacial Environment[END_REF], such an evolution can occur autocyclically without any climate forcing. When water does not freeze up to the lake bottom in winter, the underlying permafrost degrades (formation of a talik beneath the lake) either partially or totally in areas of thin discontinuous permafrost [START_REF] Yoshikawa | Shrinking thermokarst ponds and groundwater dynamics in discontinuous permafrost near Council, Alaska[END_REF]. Within the frame of the French Pleistocene, the succession of stadials and interstadials probably played a major role in permafrost evolution [START_REF] Antoine | Les séquences loessiques Pléistocène supérieur d'Havrincourt (Pas-de-Calais, France): stratigraphie, paléoenvironnements, géochronologie et occupations paléolithiques[END_REF][START_REF] Bertran | Distribution and chronology of Pleistocene permafrost features in France: database and first results[END_REF] and may explain the cyclic development of thermokarst in the floodplain. 
The finegrained lacustrine deposits have themselves promoted the growth of segregation ice mounds. These have resulted in significant deformation of the sediments. Ductile deformation developed mainly caused by slumping of the lifted active layer on hillsides. The associated features are intersected by pervasive brittle deformation. According to the contextual analysis, a periglacial origin is the most parsimonious hypothesis to explain fracturing. The faults are attributed to sediment settlement after melting of ice wedges and segregation or injection ice bodies. Because of the scarcity of natural cross sections, faulting has been rarely reported from modern permafrost regions. Mention of steeply dipping, ice-filled reverse faults has been made by [START_REF] Calmels | Internal structure and the thermal and hydrological regime of a typical lithalsa: significance for permafrost growth and decay[END_REF] from cores in a lithalsa from northern Quebec (Canada). Large subvertical ice-filled fractures were also observed by [START_REF] Wünnemann | Observations on the relationship between lake formation, permafrost activity and lithalsa development during the last 20 000 years in the Tso Kar Basin, Ladakh, India[END_REF] in a lithalsa section from India. According to [START_REF] Calmels | Internal structure and the thermal and hydrological regime of a typical lithalsa: significance for permafrost growth and decay[END_REF], the faults would have developed during the growth of ice lenses following permafrost aggradation. They would have been initiated by cryodessiccation cracks, and the offset would have resulted from the differential growth of ice lenses. Normal and reverse faults have been described in Pleistocene pingo and lithalsa scars by [START_REF] Kasse | Weichselian Upper Pleniglacial aeolian and ice-cored morphology in the southern Netherlands (Noort-Brabant, Groote Peel)[END_REF] and Pissart (2000aPissart ( , 2000b)). In these cases, thaw settlement was thought to be the main factor involved in faulting. Thaw settlement-induced normal faulting in the sandy host material of Pleistocene and Holocene ice wedge pseudomorphs is also commonly reported (e.g., [START_REF] Murton | Ice wedges and ice wedge casts[END_REF]. The origin of brittle deformation frequently observed in the Pleistocene alluvium of the Paris Basin has been strongly debated in the literature and different hypotheses have been proposed. [START_REF] Coulon | Mise en évidence et approche microtectonique des déformations quaternaires en Champagne: implications géodynamiques et conséquences hydrographiques[END_REF], [START_REF] Benoit | Tectoniques rissienne et fini-würmienne/holocène dans la basse terrasse de la rivière Aube (Longueville-sur-Aube), dans le sud-est du bassin de Paris, France[END_REF] and [START_REF] Benoit | Quaternary faulting in the central Paris basin: evidence for coseismic rupture and liquefaction[END_REF] favoured a seismic hypothesis. Fracturing was thought to reflect the propagation of deepseated faults through superficial sediments during earthquakes. Sand injections would have been triggered by local liquefaction of the sediment caused by seismic vibrations. [START_REF] Baize | Non-tectonic deformation of Pleistocene sediments in the eastern Paris basin, France[END_REF] considered the hypothesis of dissolution of the underlying limestone (karst formation) to be the most likely to explain the faults observed at Gourgançon. 
They reject a seismic hypothesis, mainly because of (i) the low regional seismicity both for the recent and the historical periods; (ii) the large cumulated offset of the faults (N1 m), which would imply a high magnitude earthquake unlikely to occur in the geodynamical context of the Paris Basin; and (iii) the mismatch between movements recorded by the faults affecting the Pleistocene deposits and those in the Mesozoic chalk substrate. Since then, further cleaning of the quarry front highlighted the symmetrical nature of the reverse fault network, which fits well with the collapse of sediments over a cavity. Some arguments weaken the karst hypothesis, however. These are (i) chalk karstification is generally limited, although not entirely absent [START_REF] Rodet | Karst et évolution géomorphologique de la côte crayeuse à falaises de la manche. L'exemple du massif d'aval[END_REF]; (ii) a faulting phase occurred between two phases of lacustrine silt deposition; the glacial periods were, however, not favourable to dissolution because the production of CO 2 in soils by living organisms remained low (e.g., [START_REF] Ford | Karst in cold environments[END_REF]; the deposits are carbonate-rich and the ground water was probably saturated with respect to calcite; (iii) the strong local dip of silt layers and the presence of an erosional surface within the deposits show that these have been affected by a phase of bulging, which is hardly explainable within the frame of the karst hypothesis; and (iv) karst does not account for the association between fracturing and the injection of fluidised sand in the centre of the fault structure. The scenario proposed by Van Vliet-Lanoë et al. ( 2016) favoured a periglacial origin for the faults. Accordingly, fracturing would be caused by sliding of the deposits into a depression left by ice melting, possibly from a lithalsa. The movement would have occurred over a sliding plane formed at the base of the lacustrine silts, and the arched shape of the faults would be related to later deformation by frost-creep. However, this mechanism does not take into account the symmetric development of the faults, which excludes horizontal spreading as the main process but is in agreement with the model of collapse above a cavity. The sand injection was interpreted by [START_REF] Vliet-Lanoë | Quaternary thermokarst and thermal erosion features in northern France: origin and palaeoenvironments[END_REF] as slow soft-sediment deformation following ice melting. Such a hypothesis seems equally unlikely, as it does not account for the isolated nature of the structure, which contrasts with classical load cast observed in periglacial contexts [START_REF] Vandenberghe | Cryoturbations: a sediment structural analysis[END_REF][START_REF] Vandenberghe | Cryoturbation structures[END_REF][START_REF] Bertran | Pleistocene involutions and patterned ground in France: examples and analysis using a GIS database[END_REF], and for the lack of evidence for slow deformation of watersaturated material such as bedding deformed parallel to the structure outlines. In contrast, the sand body shows a lack of bedding, compatible with sand fluidization, an upward fining that testifies to settling of the particles from a suspension, and protrusions, which indicate hydraulic fracturing of the host sediment. These features are thought to be more indicative of sudden intrusion of water-suspended sand through the overlying layers than of slow sediment deformation upon thawing. 
Pattern and distribution of thermokarst structures
Although the formation of lakes in connection with the melting of ice wedges in low-lying areas is well documented from today's Arctic environments, no similar structure has been described so far in Europe except for a few sites from the Netherlands and eastern Germany [START_REF] Van Huissteden | Detection of rapid climate change in the Last Glacial fluvial successions in The Netherlands[END_REF][START_REF] Bohncke | Rapid climatic events as recorded in Middle Weichselian thermokarst lake sediments[END_REF]. In those sites, the lake infillings comprise organic silt layers (gyttja) a few decimetres thick and alluvial and aeolian sand. According to [START_REF] Bohncke | Rapid climatic events as recorded in Middle Weichselian thermokarst lake sediments[END_REF], the basal lake deposits are affected by involutions that would have formed during permafrost degradation. Contrary to Marcilly-sur-Seine, the overlying lacustrine units do not exhibit any significant deformation, possibly because of their low thickness and of rapid burial during the subsequent stadial. If the hypothesis of lithalsa formation at Marcilly-sur-Seine is correct, we note that the former mounds did not generate ramparts clearly identifiable in the field or from the 5-m DEM. In addition, the pattern in aerial photography does not reveal any obvious circular structure, as initially expected, but mostly irregular dark and light-coloured spots. At Gourgançon, the low quality of the DEM and the disturbances caused by quarrying do not make it possible to identify specific reliefs. Circular ramparts (sometimes elongated along slopes) are considered the best criterion for identifying scars of ice-cored mounds, and many examples have been reported from northern Europe ([START_REF] Watson | Remains of pingos in Wales and the Isle of Man[END_REF][START_REF] Pissart | Remnants of periglacial mounds in the Hautes Fagnes (Belgium). Structure and age of the ramparts[END_REF]; Pissart, 2000a, 2000b; [START_REF] Kasse | Weichselian Upper Pleniglacial aeolian and ice-cored morphology in the southern Netherlands (Noort-Brabant, Groote Peel)[END_REF][START_REF] Ballantyne | The Periglaciation of Great Britain[END_REF]; Ross et al., 2011a). The few dated examples show, however, that these ramparted structures are quite recent, i.e., Younger Dryas (MIS 1) or the very end of the Last Glacial (late MIS 2) (review in Pissart, 2000b). Erosion by a wide range of geomorphological processes (slumping, frost creep, overland flow, fluvial processes, deflation) may explain the faint reliefs still surrounding late MIS 2 scars ([START_REF] Gans | Pingo scars and their identification[END_REF][START_REF] Kasse | Weichselian Upper Pleniglacial aeolian and ice-cored morphology in the southern Netherlands (Noort-Brabant, Groote Peel)[END_REF]) and the almost total disappearance of the ramparts in older scars. According to Pissart (2000a), the formation of lithalsa plateaus rather than isolated mounds may also be involved in the lack of circular structures left by ice melting. In Belgium, this author described areas with circular ramparts coexisting with areas of very confused topography, probably corresponding to the degradation of lithalsa plateaus.
The association of lake deposits, evidence for a periglacial context, undulating or pitted topography, and abundant ductile and brittle deformation of the lacustrine layers is assumed here to be the most reliable criterion for the identification of Pleistocene lithalsas and lithalsa plateaus. The alluvial sites potentially affected by thermokarst in the Paris Basin are distributed north of latitude 48°N, in a zone that has yielded abundant ice wedge pseudomorphs in upper Cretaceous terrains. Unexpectedly, the search for similar structures in other regions of northern France was unsuccessful. In addition, laminated mineral lacustrine deposits on Pleistocene terraces have never been reported in the literature to our knowledge. The reason may be lithology. Lower Cretaceous terrains (mostly composed of sand, clay, and marl) have delivered large amounts of fine-grained particles to the water courses that cross them. Fine-particle accumulation in alluvial plains downstream gave rise to deposits highly susceptible to the formation of ice wedges and segregation ice. River incision in their lower course, as a consequence of sea level lowering during the glacial, was not favourable to broad sedimentation of fine-grained particles, and the almost exclusive supply of large elements (flint pebbles) by the upper Cretaceous chalk led to the deposition of dominantly coarse-grained alluvial material, in which ice growth was limited.
Conclusion
The Last Glacial fluvial sequences of the Seine and Maurienne rivers show laminated lacustrine deposits overlying alluvial sandy gravel. A thermokarst origin of the lakes is supported by abundant traces of ground ice, particularly ice wedge pseudomorphs beneath the lacustrine layers at Marcilly-sur-Seine, and by synsedimentary deformation features caused by thaw settlement. These features include both brittle deformation (normal and reverse faults) resulting from ground subsidence caused by ice melting and ductile deformation caused by slumping of the sediments heaved by the growth of ice-cored mounds. These correspond to lithalsas (or lithalsa plateaus) at Marcilly-sur-Seine and to open-system pingos or lithalsas at Gourgançon. At least two generations of thermokarst are recorded in each quarry. They could reflect the Dansgaard-Oeschger millennial climate variability typical of the Last Glacial. The structures studied in quarries are associated with a typical undulating topography and a spotted pattern in aerial photographs. The search for similar patterns in the Paris Basin indicates that many other potential thermokarst sites exist in the Last Glacial terrace (Fy) of rivers located north of 48°N where they cross the lower Cretaceous sands and marls. In some sites, the presence of organic-poor, fine-grained deposits presumably of lacustrine origin was confirmed by borehole data. The site distribution coincides in part with that already known for ice wedge pseudomorphs. The lack of identifiable thermokarst in large areas of northern France could be related to the coarser grain size of the alluvial deposits. The discovery of lake deposits also opens up new possibilities for documenting the palaeoenvironments of the Last Glacial in the Paris Basin from pollen, insect remains, and other biomarkers, as these are still poorly known from continental records. This aspect, together with the precise dating of the deposits, should prompt further investigation.
Fig. 1. Distribution of Pleistocene periglacial features in France, from Andrieux et al. (2016b), and neighbouring countries, from Isarin et al. (1998). The southern limit of widespread discontinuous permafrost is taken from Andrieux et al. (2018) and corresponds to the modelled LGM isotherm (Max-Planck Institute PMIP3 model, courtesy of K. Saito) that best fits the southern limits of ice wedge pseudomorphs. LGM glaciers are from [START_REF] Ehlers | Quaternary Glaciations. Extent and Chronology, Part I: Europe[END_REF] for the Alps and the Pyrenees and from [START_REF] Hughes | The last Eurasian ice sheets - a chronological database and time-slice reconstruction[END_REF] for the British-Scandinavian Ice Sheet.
Fig. 2. Simplified geological map of the Paris Basin (BRGM, infoterre.brgm.fr), and location of the study sites. The periglacial features listed in Andrieux et al. (2016b) are indicated.
Fig. 3. Topography of the Marcilly-sur-Seine area, from the 5-m DEM (IGN). (A) Elevation; the Fy terrace is in pale rose to red colour; (B) shaded topography. The rectangles correspond to the areas enlarged in Figs. 4 and 5.
Fig. 4. Detailed topography (A) and aerial view (B) of the Fy terrace near Marcilly-sur-Seine (IGN/Google Earth). The location of the area is indicated in Fig. 3; ch - shallow channel, cm - conical mound, dep - depression, q - quarry.
] remains unclear. We suppose here that unit [5] postdates unit [3].
4.1.3. Deformation
Abundant deformation structures can be observed throughout the trench. They consist of:
- A vertical structure about 0.4 m in width cutting through the basal grey-blue lacustrine silt [unit 2] and filled with massive oxidized silt (Fig. 7D). The surrounding beds are curved downward symmetrically on either side of the structure. This depression, visible on both sides of the trench, is interpreted as an ice wedge pseudomorph. Approximately 10 m to the west, a second depression may correspond to another ice wedge pseudomorph.
- Ductile deformation affects the deposits, particularly in the eastern part of the trench. It can be seen both in the silt [2] and the sandy gravel [3] units, which form a recumbent fold (Fig. 10A). The slumped levels [4] overlie the folded unit. These features testify to the deformation of water-saturated sediments.
- Faults intersect the deformed beds. The faults are predominantly normal and indicate the collapse of sediments above the ice wedge pseudomorphs over a width of several metres. Laterally, conjugate normal faults delineate small grabens in the lake silts due to lateral spreading of the deposits.
- Cracks without vertical displacement, sometimes underlined by secondary carbonate accumulation, develop from the top of the section. They are associated with a well-developed platy structure. The fissures are about 1.5 m high and are a few metres apart. They are interpreted as thermal contraction cracks postdating sediment deformation.
Fig. 5. Composite aerial view (IGN/Google Earth) of the Fy terrace near Saint-Just-Sauvage. The location of the area is indicated in Fig. 3.
Fig. 7. Close-up views of the main sedimentary units, Marcilly-sur-Seine. (A) Oxidized laminated silt (unit 2); (B) massive silt with a platy structure inherited from segregation ice lenses (top of unit 2); (C) bedded sand and fine gravel (unit 3); (D) deformed silt and bedded sand above an ice wedge pseudomorph. The location of the photographs is shown in Fig. 6.
Fig. 9. Microfacies of lake deposits, unit [2], Marcilly-sur-Seine, plane polarised light. (A) Laminated silts; the lamination is partly disrupted (v: vesicles); (B) fragment of insect cuticle in laminated fine silts.
Fig. 10. (A) Recumbent fold in sand (unit 3) covered by a diamictic layer (unit [4]); (B) planar cross-bedded sand (delta); a tilted and deformed block of bedded sand is visible at the base; (C) laminated lacustrine silt; lamination is subhorizontal to the left and dips up to 20° to the right of the trench. The deltaic sands shown in (A) are located at the right end of the trench.
Fig. 12. (A) Reverse faults in alluvial sand and gravel; (B) overturned fold in bedded lacustrine sand and silt; (C) normal faults in deltaic sand. The location of (C) is indicated in Fig. 11. All photos are from P. Benoit.
Fig. 14. Schematic reconstruction of the main sedimentary phases recorded at Marcilly-sur-Seine.
Fig. 15. 1:50,000 geological map of the Gourgançon area (BRGM) and location of soil stripes listed in Andrieux et al. (2016a, 2016b).
Fig. 16. Soil stripes in IGN/Google Earth aerial photographs near Gourgançon. (A) Champfleury2 (48.6225°N, 4.0041°E), (B) Gourgançon7 (48.6611°N, 4.0129°E). The feature location is shown in Fig. 14.
Fig. 18. Close-up view of (A) reverse faults and sand injection in bedded sand (unit 2); (B) conjugate normal faults in bedded sand; (C) foresets (unit 5); (D) lacustrine silts (unit 4).
Fig. 19. From bottom to top: faulted sand (unit 2), slumped silt (unit 3), and slightly dipping laminated silt (unit 3) lying unconformably over unit [2].
Fig. 20. (A) Experimental bell-shaped faults developed above a cavity in a sand box, from Geyer et al. (2006); (B) asymmetrical collapse under a sloping surface, from Coumans and Stix (2016).
Fig. 21. Location of potential thermokarst sites and boreholes showing supposed lake deposits in the Paris Basin. Ice wedge pseudomorphs are from Andrieux et al. (2016a, 2016b).
Fig. 22. Aerial view of the Varennes-sur-Seine site (IGN/Google Earth) showing the transition between former ice wedge polygons and depressions of various shapes, probably of thermokarst origin (P: pits at the intersection of ice wedges, TL: thermokarst lakes).
Fig. 23. Schematic stratigraphy of two boreholes showing potential lake deposits, from BSS (BRGM), and interpretation. BSS000WFPH - Barbey, BSS000UHFB - Saint-Just-Sauvage.
Table 1. OSL-related data and age of the sampled site.
Sample code: Shfd17101; K (%): 0.6; U (ppm): 1.37; Th (ppm): 4.20; Cosmic dose (μGy a⁻¹): 178 ± 9; Total dose (Gy kyr⁻¹) (a): 0.94 ± 0.05; De (Gy) (b): 15.59 ± 0.23; N (c): 24; OD (%): 9; Age (ka): 16.6 ± 0.90.
(a) Corrected for γ contribution from adjacent sediments to that sample. See text for details. (b) De based on the central age model. (c) N refers to the number of aliquots that met quality control criteria.
Acknowledgements
This work has been funded by the SISMOGEL project involving Electricité De France, Inrap, and the universities of Bordeaux and Caen. We acknowledge all the people who contributed to the study, particularly P. Benoit, A. Queffelec, and J.C. Plaziat. The Société des Carrières de l'Est - Etablissement Morgagni, owner of the quarry, is also warmly acknowledged for its help in the field. Jef Vandenberghe and two anonymous reviewers are also thanked for their comments, which contributed to greatly improving the manuscript.
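A quick consistency check of Table 1: the OSL age follows from the equivalent dose divided by the total dose rate. The Python sketch below reproduces that arithmetic with a standard first-order error propagation; it is only an illustrative cross-check, not the central-age-model computation used for the reported value.

```python
# Cross-check of the OSL age in Table 1: age (ka) = D_e (Gy) / dose rate (Gy/ka).
# First-order (relative) error propagation is used for illustration only.

def osl_age(de_gy, de_err, rate_gy_per_ka, rate_err):
    age = de_gy / rate_gy_per_ka
    # relative uncertainties added in quadrature (standard first-order estimate)
    rel = ((de_err / de_gy) ** 2 + (rate_err / rate_gy_per_ka) ** 2) ** 0.5
    return age, age * rel

age, err = osl_age(15.59, 0.23, 0.94, 0.05)
print(f"OSL age: {age:.1f} +/- {err:.1f} ka")   # ~16.6 +/- 0.9 ka, matching Table 1
```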
01774887
en
[ "phys.cond.cm-gen", "phys.cond.cm-scm" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01774887/file/pouliquenPG2017.pdf
François Guillard, Yoël Forterre, Olivier Pouliquen
Segregation in sheared granular flows: forces on a single coarse particle
In order to better understand the mechanism governing segregation in granular flows, the force experienced by a large particle embedded in a granular flow made of small particles is studied using discrete numerical simulations. An empirical expression of the segregation force is proposed as a function of the stress distribution. A rich phenomenology is observed in flows of polydispersed granular material. When flowing, large and small particles have a tendency to migrate to different regions. Important progress has been made in describing polydispersed granular flows, and a framework based on a mixture theory has been developed [START_REF] Gray | A theory for particle size segregation in shallow granular free-surface flows[END_REF][START_REF] Schlick | Granular segregation in circular tumblers: theoretical model and scaling laws[END_REF][START_REF] Hill | Segregation in dense sheared flows: gravity, temperature gradients, and stress partitioning[END_REF]. However, the segregation flux in those models remains empirical and would benefit from a better understanding of the segregation phenomenon. The objective of the present numerical study is to examine the case of a single coarse particle in a bath of small particles, in order to understand the forces experienced by the coarse particle [START_REF] Guillard | Scaling laws for segregation forces in dense sheared granular flows[END_REF]. The configuration of interest is sketched in Fig. 1a. A 2D granular medium made of particles of size d_s is sheared between two rigid rough plates under a confining pressure P_0 and in the presence of gravity. The discrete element simulations are made using the open-source software LIGGGHTS 3.0. To study segregation, a particle of diameter d_c larger than d_s is introduced in the layer. To accurately measure the force experienced by the test particle, we mimic the existence of an optical trap: the coarse particle is attached by a spring to its vertical position, but is free to move horizontally. When the stationary regime is reached, the coarse particle flows with the bulk but remains on average at the same altitude, which directly gives access to the segregation lift force F_z. A large number of simulations have been carried out and the segregation force F_z has been measured for different confining pressures P_0, shear velocities V, positions in the layer z_0, coarse particle sizes d_c, and gravity conditions. We have shown that, depending on the stress gradient, the segregation force can change sign (Fig. 1b), but that a simple scaling law can be proposed for the segregation force as a function of the normal and shear stress distributions P(z) and τ(z) in the shear flow:
F_z = -(π d² / 4) a(d_l/d_s) [ -f(µ) ∂P/∂z + g(µ) ∂|τ|/∂z ], with µ = τ/P,
where f(µ), g(µ) and a(d_l/d_s) are functions that we have measured. Such an expression may be useful when introduced in recent models for polydispersed flows. However, more studies are needed to better understand the physical origin of the measured segregation force.
Figure 1. a) Sketch of the configuration: a coarse particle is vertically trapped in a harmonic potential; b) rescaled segregation force as a function of µ = τ/P for vertical and horizontal gravity.
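The scaling law above can be evaluated numerically once the stress profiles P(z) and τ(z) are known. The Python sketch below shows one way to do this with discretized profiles; the functions f(µ), g(µ) and a(d_l/d_s) were measured by the authors but are not tabulated here, so the lambdas used below are placeholders, not the published fits.

```python
import numpy as np

# Illustrative evaluation of the segregation-force scaling law quoted above:
#   F_z = -(pi d^2 / 4) * a(d_l/d_s) * [ -f(mu) dP/dz + g(mu) d|tau|/dz ],  mu = tau/P.
# f, g and a are placeholders here, not the authors' measured functions.

def segregation_force(z, P, tau, d, size_ratio,
                      f=lambda mu: 1.0,          # placeholder for measured f(mu)
                      g=lambda mu: 1.0,          # placeholder for measured g(mu)
                      a=lambda r: 1.0):          # placeholder for measured a(d_l/d_s)
    mu = tau / P                                  # local stress ratio
    dP_dz = np.gradient(P, z)                     # normal-stress gradient
    dtau_dz = np.gradient(np.abs(tau), z)         # shear-stress gradient
    area = np.pi * d**2 / 4.0                     # 2D particle "volume"
    return -area * a(size_ratio) * (-f(mu) * dP_dz + g(mu) * dtau_dz)

# Example: pressure increasing with depth and a uniform stress ratio mu = 0.4
z = np.linspace(0.0, 0.1, 200)                    # m
P = 1000.0 + 2.5e4 * (0.1 - z)                    # Pa
tau = 0.4 * P                                     # Pa
Fz = segregation_force(z, P, tau, d=5e-3, size_ratio=2.0)
print(Fz[:3])
```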
01774910
en
[ "spi.nano" ]
2024/03/05 22:32:18
2017
https://theses.hal.science/tel-01774910/file/TAO_2017_diffusion.pdf
Keywords: Energy harvesting, Piezoelectric semiconducting nanowires, Nanogenerators, Surface Fermi level pinning, Atomic force microscopy, Electromechanical measurements
First, I would like to thank my director Mireille Mouis and co-directors Laurent Montès and Gustavo Ardila at IMEP-LaHC for their guidance and help at all times. By following my research closely, they shared their knowledge, skills and experience in scientific research to help me overcome the difficulties encountered during the past three years. In addition to advising at least
Résumé
Supplying energy to networks of miniaturised sensors raises a fundamental question, since their autonomy is an increasingly important quality criterion for users. It even becomes crucial when these networks are intended for infrastructure monitoring (avionics, machines, buildings…) or for medical or environmental monitoring. Piezoelectric materials make it possible to exploit the otherwise unused mechanical energy that is abundant in the environment (vibrations, deformations related to movements or to air flows…). They can thus contribute to making these sensors energetically autonomous. In the form of nanowires (NWs), piezoelectric materials offer a sensitivity that allows very weak mechanical excitations to be exploited. They can also be integrated, possibly on flexible substrates. In this thesis we are interested in the potential of nanowires made of piezoelectric semiconducting materials, such as ZnO or III-V compounds, for the conversion of mechanical energy into electrical energy. These have recently been the subject of a relatively large number of studies, with the realisation of promising nanogenerators (NGs). Many questions remain, however, with, for example, notable contradictions between theoretical predictions and experimental observations. Our objective is to deepen the understanding of the physical mechanisms that define the piezoelectric response of semiconducting NWs and of the associated NGs. The experimental work relies on the fabrication of generators of the VING (Vertical Integrated Nano Generator) type and on their characterisation. To this end, an electromechanical characterisation system was built to evaluate the performance of the fabricated NGs and the thermal effects under a controlled compressive force. The Young's modulus and the effective piezoelectric coefficients of GaN, GaAs and ZnO NWs and of ZnO-based core/shell NWs were also evaluated in an atomic force microscope (AFM). The ZnO nanowires are obtained by chemical growth in a liquid medium on rigid (Si) or flexible (stainless steel) substrates, and are then integrated to form a generator. The design of the VING device relied on simulations neglecting the influence of free carriers, as in most published studies. We then deepened the theoretical work by simulating the full coupling between mechanical, piezoelectric and semiconducting effects, this time taking free carriers into account. Taking surface Fermi level pinning into account allows us to reconcile theoretical and experimental observations.
In particular, we propose an explanation for the fact that size effects appear experimentally for diameters at least 10 times larger than the values predicted by ab-initio simulations, and for the fact that the response of the VING is asymmetric depending on whether the substrate on which it is integrated is bent convexly or concavely.
General Introduction
Energy autonomy in networks of small sensors is one of the key quality parameters for end-users. It is even critical when addressing applications in structural health monitoring (avionics, machines, buildings…), or in medical or environmental monitoring applications. There are many sources of energy that can be harvested, depending on the specific environment. For structure and environmental monitoring, a large number of sensors increases the response quality. But their number can become an issue, either because of the amount of wiring or because of battery management cost. Harvesting energy from the environment can bring a smart solution where each node could ideally become self-powered, or in other words, energetically autonomous. For implantable medical sensors, autonomy is an even more crucial requirement. Piezoelectric materials make it possible to exploit the otherwise wasted mechanical energy which is abundant in our environment (e.g. from vibrations, deformations related to movements or air fluxes). Thus, they can contribute to the energy autonomy of those small sensors. In the form of nanowires (NWs), piezoelectric materials offer a high sensitivity allowing very small mechanical deformations to be exploited. Piezoelectric NWs have recently provided a promising improvement for electronics, sensing and energy harvesting nanosystems [1][2][3]. Among them, semiconducting NWs have attracted more and more attention due to their excellent piezoelectric properties, large-scale synthesis and compatibility with Si-based integration. Piezoelectric generators based on vertically integrated ZnO NWs (VING) have shown great potential for harvesting mechanical energy from the environment ever since they were first demonstrated [3]. An increasing number of publications have recently appeared on these nanostructures, and promising nanogenerators (NGs) have been reported (Fig. I). However, there remain many contradictions between experimental results and our theoretical understanding of ZnO NW-based devices when realistic doping levels are considered. Chief among them are (1) the decent and length-dependent experimental performance of ZnO NWs [4], whereas analytical and computational studies anticipate that the output of NWs under compression is reduced to a few millivolts and is expected to be length-independent because of the screening effect [5,6]; (2) the enhanced piezoelectric coefficients [7] measured for ZnO NWs with diameters much larger than anticipated by the ab-initio method [8]; and (3) the dissymmetric piezoelectric response of bent NGs under tensile and compressive strain observed experimentally [9]. In this thesis, the work was mainly focused on piezoelectric semiconducting NWs, such as ZnO, GaAs and GaN, and on their implementation into NGs. We concentrated on one of the most promising NG structures, based on vertically integrated NWs (the so-called VINGs).
As shown in Fig. II, the study targeted three aspects: measurements of the mechanical and electromechanical properties of individual piezoelectric NWs using AFM techniques; integration of ZnO NW arrays on different substrates to fabricate NGs and their characterization; and a computational simulation study of NGs. The investigations will be introduced and discussed in three chapters:
(1) In chapter I, we describe the background of research on semiconducting piezoelectric NWs for energy harvesting and sensing: autonomous systems and sensor networks. The physical basics of piezoelectricity are also introduced. Piezoelectric NGs integrating semiconducting NWs are considered as a solution to self-powering.
(2) In chapter II, we use the finite element method (FEM) as the tool for simulation studies of ZnO NW-based NGs. Piezoelectricity and semiconductor physics are coupled in the simulations. We investigate NGs with various matrix materials under compression and bending, where material properties for nanoscale geometry are also considered. We also put forward the theory of surface Fermi level pinning to explain the difference between the theoretical results and the experimental observations.
(3) In chapter III, to investigate the energy-harvesting capability, we conduct electromechanical characterization on both individual NWs and NGs. ZnO NWs were synthesized by the chemical bath deposition method in our lab. They were grown on Si wafers and stainless steel foil, and then embedded into different matrices to form rigid and flexible NGs. AFM-assisted techniques were applied to individual ZnO, GaN, GaAs and ZnO-based core-shell NWs. We also built a measurement system to characterize the piezoelectric response of NGs under controlled compression. For the NGs working under flexion, we performed preliminary measurements, which support the surface Fermi level pinning hypothesis.
Chapter I. From Piezoelectricity towards Nanogenerators for Autonomous Systems
I.1 Energy harvesting for autonomous systems
As technologies have developed rapidly over recent decades, low-energy-consuming and portable/wireless miniature electronics have emerged and grown in importance. One example of such devices is wearable medical and autonomous assistive devices [START_REF] Yang | Converting Biomechanical Energy into Electricity by a Muscle-Movement-Driven Nanogenerator[END_REF][START_REF] Mateu | Optimum Piezoelectric Bending Beam Structures for Energy Harvesting using Shoe Inserts[END_REF][START_REF] Riemer | Biomechanical energy harvesting from human motion: theory, state of the art, design guidelines, and future directions[END_REF]. In many cases, such devices are operated with wired or wireless sensor-transducer-actuator configurations that are powered by batteries [START_REF] Mallela | Trends in cardiac pacemaker batteries[END_REF][START_REF] Lanmuller | Multifunctional Implantable Nerve Stimulator for Cardiac Assistance by Skeletal Muscle[END_REF]. However, batteries need to be either replaced or recharged periodically, have a limited lifetime, and are difficult to miniaturize. Besides, in some cases the cost of wiring can be considerable. For instance, a study has evaluated the cost of installing wiring to each sensor in a commercial building at $200, while the typical cost of a wireless sensor node could be lower than $10 [START_REF] Beeby | Energy harvesting for autonomous systems[END_REF].
Efforts are required in two aspects: (1) minimizing energy requirements without losing functionality; (2) harvesting ambient energy to recharge the batteries, or even to directly power the specific electrical loads. The combination of low-power circuits, new materials integration and 3D processing technologies makes possible the development of autonomous systems. These autonomous systems harvest energy from various energy sources depending on the environment, such as light, heat, or mechanical vibration. The design of an autonomous system includes an energy harvester, an energy management unit to rectify the power, and an energy storage unit (a microbattery or a supercapacitor) that stores the harvested energy and delivers the power required to switch on functional units like sensors and transceivers (Fig. 1.1) [START_REF] Wang | Self-Powered Nanosensors and Nanosystems[END_REF]. Energy harvesting can power a wide variety of autonomous systems such as wireless sensors [START_REF] Gilbert | Comparison of energy harvesting systems for wireless sensor networks[END_REF][START_REF] Esu | Feasibility of a fully autonomous wireless monitoring system for a wind turbine blade Renew[END_REF], biomedical implants [START_REF] Hannan | Energy harvesting for the implantable biomedical devices: issues and challenges[END_REF], military monitoring devices for harsh combat or training conditions [START_REF] Cremers | Military: Batteries and Fuel Cells Encyclopedia of Electrochemical Power Sources[END_REF], structure-embedded instrumentation [START_REF] Mascarenas | Experimental studies of using wireless energy transmission for powering embedded sensor nodes[END_REF], remote weather stations [START_REF] Sharma | Cloudy Computing: Leveraging Weather Forecasts in Energy Harvesting Sensor Systems[END_REF], and electronic devices such as portable calculators, watches, and Bluetooth headsets. Taking the wireless sensor network as an example, where battery lifetime is the major limitation on performance, one can evaluate the value of energy autonomy. For a wireless sensor node with the basic functionality of sending data to a remote location for processing, the minimum power requirements can be estimated using a mixture of currently available off-the-shelf technology and devices that are the current state of the art in research. Three basic elements are considered: the STLM20 temperature sensor from ST Micro [START_REF]Anon 2006 Ultra-low current 2.4 V precision analog temperature sensor datasheet[END_REF], an ADC reported by Sauerbrey et al. [START_REF] Sauerbrey | A 0.5-v 1--μW successive approximation adc[END_REF], and an IEEE 802.15.4a standard-compliant ultra-wide-band transmitter from IMEC [25]. Assuming a low data rate (1 kbps), Mitcheson et al. have suggested a total power consumption for the sensor node of 10 μW - 20 μW, or even 1 μW - 2 μW or less if the other components are also duty cycled [START_REF] Mitcheson | Energy Harvesting From Human and Machine Motion for Wireless Electronic Devices[END_REF] (a simple duty-cycle estimate of such a budget is sketched below). This level of power can be harvested by ever smaller structures and, eventually, nanostructures.
I.2 Semiconductor based energy harvesters
Energy harvesting is without doubt a very attractive technique for a wide variety of autonomous systems. Several energy harvesting methods, such as photovoltaics (PV), thermoelectric generators and electromechanical transducers, are suitable for autonomous systems, depending on the working environment.
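As a back-of-the-envelope illustration of the duty-cycling argument made above for the wireless sensor node (10-20 μW at 1 kbps, down to 1-2 μW with aggressive duty cycling), the following sketch averages an assumed active power over a sleep/wake cycle. The numbers are illustrative assumptions, not datasheet figures for the STLM20, the Sauerbrey ADC or the IMEC transmitter.

```python
# Rough duty-cycle estimate of a wireless sensor node's average power draw,
# showing why microwatt-level budgets become reachable when the node sleeps
# most of the time. All numbers are illustrative assumptions only.

def average_power(p_active_uw, p_sleep_uw, duty_cycle):
    """Average power (uW) for a node that is active a fraction `duty_cycle` of the time."""
    return duty_cycle * p_active_uw + (1.0 - duty_cycle) * p_sleep_uw

# e.g. 5 mW while sensing/transmitting, 0.5 uW asleep, active 0.2% of the time
print(average_power(p_active_uw=5000.0, p_sleep_uw=0.5, duty_cycle=0.002))  # ~10.5 uW
```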
The typical characteristics of some common energy-harvesting transducers are summarized in Table 1.1, where the available power of various ambient energy sources is compared [START_REF] Yildiz | Potential Ambient Energy-Harvesting Sources and Techniques[END_REF]. Driven by the need for lower cost (wireless) and higher integration, energy harvesting with nanostructures has emerged. Among these approaches, energy harvesters based on semiconductor NWs and nanosheets attract more and more attention owing to the possibility of further miniaturization and compatibility with IC integration. In the next subsections, we briefly describe the most promising technologies for energy harvesting.
I.2.1 Photovoltaics (PV)
A PV cell is a solid-state device that converts the energy of light directly into electricity through the photovoltaic effect. When photons are absorbed, they transfer their energy to electrons in the filled valence band (VB) and promote these electrons to higher energy states in the empty conduction band (CB). As there are no energy states in the band gap, only photons with energies above the band gap can cause the transfer of electrons from the VB into the CB. Thus, absorbed photons in semiconductors create pairs of negative electrons (in the CB) and positive holes (in the VB). In a solar cell, the photoelectrons and photoholes formed upon absorption of light are separated by an electric field and move to opposite sides of the cell structure, where they are collected and can feed a load circuit. First-generation solar cells are typically synthesized using inorganic wafer materials such as silicon. Of the 1.7 × 10^5 TW of solar energy that reaches the Earth's surface, approximately 600 TW is of practical value, and 60 TW of power could be generated by using solar farms that are only 10% efficient [START_REF] Schiermeier | Energy alternatives: Electricity without carbon[END_REF]. Accounting for 90% of the market, first-generation cells are currently the mainstream solar cells and have found applications in small-scale devices such as solar panels on roofs, pocket calculators and water pumps. Second-generation PV cells were developed during the mid-1970s. These cells are mostly composed of thin films of amorphous or polycrystalline compound semiconductors [START_REF] Carabe | Thin-film-silicon solar cells[END_REF]. These devices were initially designed to be high-efficiency, multiple-junction PV cells (or tandem cells) [START_REF] Dimroth | High-efficiency solar cells from III-V compound semiconductors[END_REF]. However, these cells have a lower efficiency than the wafer-based Si solar cells, but with a higher cost. Third-generation solar cells are the cutting edge of solar technology. They are broadly defined as semiconductor devices that do not rely on the conventional p-n junction. Most third-generation solar cells are still in the research stage and include dye-sensitised solar cells (DSSCs), heterojunction cells, quantum dot cells, polymer solar cells, and hot-carrier cells [START_REF] Hardin | The renaissance of dye-sensitized solar cells[END_REF]. Based on nanostructures, they optimize the efficiency and considerably decrease the production costs. For instance, the efficiency of DSSCs sensitised by Ru compounds adsorbed on nanocrystalline TiO2 has reached 11%-12% [START_REF] Chiba | Dye-Sensitized Solar Cells with Conversion Efficiency of 11.1%[END_REF][START_REF] Buscaino | A mass spectrometric analysis of sensitizer solution used for dye-sensitized solar cell[END_REF].
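Since only photons with energy above the band gap create electron-hole pairs, the usable part of the solar spectrum for a given absorber follows directly from E = hc/λ. The sketch below computes the absorption cutoff wavelength for a few common absorbers; the band-gap values are standard textbook figures quoted for illustration, not data from this thesis.

```python
# Longest wavelength a semiconductor absorber can use: lambda_max = h*c / E_gap.
# Band-gap values are common textbook figures, quoted here only for illustration.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

band_gaps_ev = {"c-Si": 1.12, "CdTe": 1.45, "a-Si": 1.7, "GaAs": 1.42}

for material, eg in band_gaps_ev.items():
    print(f"{material}: absorbs photons with wavelength below {H_C_EV_NM / eg:.0f} nm")
# c-Si: ~1107 nm, CdTe: ~855 nm, a-Si: ~729 nm, GaAs: ~873 nm
```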
The maximum efficiencies of all solar conversion technologies are presented in Fig. 1.2 [36]. Overall, PV energy conversion is a well-known, integrated-circuit-compatible technology that offers higher power output levels when compared with the other energy-harvesting mechanisms. Nevertheless, its power output is strongly dependent on environmental conditions, in other words on varying light intensity.
I.2.2 Thermoelectric generators
The thermoelectric effect, also referred to as the Seebeck effect, is the direct conversion of temperature differences to electric voltage and vice versa. A thermoelectric device creates a voltage when there is a temperature difference across its two sides. Conversely, when a voltage is applied to it, it creates a temperature difference. Thermoelectric generators (TEGs) are simply thermoelectric modules that convert a temperature gradient across the device, and the resulting heat flow through it, into a voltage via the Seebeck effect. The reverse of this phenomenon, known as the Peltier effect, produces a temperature differential by applying a voltage and is familiarly used in thermoelectric coolers (TECs). The polarity of the output voltage depends on the polarity of the temperature differential across the TEG; reversing the hot and cold sides of the TEG will reverse the output voltage polarity. The efficiency of a thermoelectric material, which determines the performance of the TEG, is often described by a dimensionless number called the figure of merit ZT, ZT = σS²T/κ, where T is the absolute temperature, σ and κ are the electrical and thermal conductivity, respectively, and S is the Seebeck coefficient. Materials with high ZT are desired in thermoelectric generator design; in practice, however, it is difficult to increase ZT because increasing S often leads to a simultaneous decrease of σ, and increasing σ also increases κ through the Wiedemann-Franz law. Among bulk thermoelectric materials, Bi2Te3 alloys have the highest ZT, at about 1.0 at 300 K. Over the three decades before 1990, there was only a 10% increase in ZT because the change of one of its parameters adversely affects the others. In 1993, Hicks and Dresselhaus demonstrated that ZT > 2 can be achieved with low-dimensional materials such as quantum wells and quantum wires [START_REF] Hicks | Thermoelectric figure of merit of a one-dimensional conductor[END_REF][START_REF] Hicks | Effect of quantum-well structures on the thermoelectric figure of merit[END_REF]. Since then, it has become possible to envisage micro-thermoelectric generators (μTEGs) [START_REF] Hauser | Elaboration de super-réseaux de boîtes quantiques à base de SiGe et développement de dispositifs pour l'étude de leurs propriétés thermoélectriques[END_REF][START_REF] Stein | Croissance et caractérisation de super-réseaux de boites quantiques à base de siliciures métalliques et SiGe pour des applications thermoélectriques[END_REF]. A μTEG composed of n-type and p-type Bi2Te3 NW arrays was fabricated by Wang et al. [START_REF] Wang | A new type of low power thermoelectric micro-generator fabricated by nanowire array thermoelectric material[END_REF]. The NW arrays were grown by electrochemical deposition of Bi2Te3 into the nanopores of an alumina template. The measurements showed that the Seebeck coefficients α of the p-type and n-type Bi2Te3 NW arrays, with a diameter of about 50 nm, were about 260 and -188 μV/K, respectively, at 307 K.
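The figure of merit defined above is straightforward to evaluate. The sketch below computes ZT = σS²T/κ for rough, bulk-like parameter values of a Bi2Te3-type alloy near room temperature, chosen only to show that ZT ≈ 1 is the expected order of magnitude; they are not measurements from the works cited here.

```python
# Thermoelectric figure of merit: ZT = sigma * S^2 * T / kappa.
# The parameter values are illustrative, bulk-like orders of magnitude for a
# Bi2Te3-type alloy near room temperature, not data from the cited references.

def figure_of_merit(sigma_s_per_m, seebeck_v_per_k, kappa_w_per_mk, temperature_k):
    return sigma_s_per_m * seebeck_v_per_k**2 * temperature_k / kappa_w_per_mk

zt = figure_of_merit(
    sigma_s_per_m=1.0e5,       # electrical conductivity (S/m)
    seebeck_v_per_k=200e-6,    # Seebeck coefficient (V/K)
    kappa_w_per_mk=1.5,        # thermal conductivity (W/(m K))
    temperature_k=300.0,
)
print(f"ZT ~ {zt:.2f}")  # ~0.8, close to the ZT ~ 1 quoted for bulk Bi2Te3
```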
Yang et al. designed a μTEG adopting the TSMC 0.35 μm BiCMOS process, with two poly-silicon layers, one poly-SiGe layer and three metal layers (3P3M) [START_REF] Yang | Application of quantum well-like thermocouple to thermoelectric energy harvester by BiCMOS process Sensors[END_REF]. In Yang's work, μTEGs of different lengths and widths were tested, and the one with a size of 60 μm × 4 μm had the largest power factor (0.251 μW/(cm² K²)) and voltage factor (10.042 V/(cm² K)). Compared with similar works [START_REF] Strasser | Micromachined CMOS thermoelectric generators as on-chip power supply Sensors[END_REF][START_REF] Huesgen | Design and fabrication of MEMS thermoelectric generators with high temperature efficiency Sensors[END_REF][START_REF] Yang | Development of a thermoelectric energy harvester with thermal isolation cavity by standard CMOS process Sensors[END_REF], this CMOS-integrated μTEG has an outstanding power factor as well as voltage factor. Meanwhile, n-type Bi2Te3 and p-type (Bi1-xSbx)Te3 thermoelectric thin films have also been used as pressure and gas concentration sensors [START_REF] Giani | Thermoelectric microsensor for pressure and gas concentration measurement[END_REF]. Similarly, Al Khalfioui et al. realized anemometers based on periodic structures elaborated by a flash evaporation technique on a polyimide substrate, using Bi2Te3-Sb2Te3 (p-type) and Bi2Te3-Bi2Se3 (n-type) materials, which have a high figure of merit [START_REF] Al Khalfioui | Anemometer based on Seebeck effect Sensors[END_REF]. Similar to PV cells, the μTEG is also limited by its working environment: the power generation depends largely on the temperature gradient that the environment can offer.
I.2.3 Mechanical-electrical conversion for autonomous systems
Besides light and thermal sources, energy from vibrations, human movements, impacts or any other mechanical source is very present in daily applications and can be exploited. Table 1.2 shows how much energy is available, and yet wasted, from ambient vibrations. It is then interesting to develop an effective way to absorb the mechanical energy from the environment and convert it into electrical energy. The devices that dominate the market for harvesting mechanical energy are based on the electrostatic, electromagnetic and piezoelectric effects. Electrostatic devices are easily integrated using MEMS technologies, but the fact that they typically require an initial charge, and the low energy density that they produce, have limited their use to research without industrial applications, to the best of our knowledge. Electromagnetic devices have higher power densities but are difficult to integrate because of limitations on magnet miniaturization; for this reason, only bulky devices have been commercialized [48]. Piezoelectric transduction offers a good compromise in terms of power density and compatibility with Si integration, and nowadays industrial MEMS harvesters can be found in the market [START_REF]MicroGen Systems[END_REF]. Typical piezoelectric harvesters use bulk ceramic materials (such as PZT [START_REF]Piezo Ceramic Technology[END_REF]) or thin films (such as AlN [START_REF] Van Schaijk | Piezoelectric AlN energy harvesters for wireless autonomous transducer solutions[END_REF]). The term "nanogenerator" (NG) was first introduced for piezoelectric generators based on ZnO NWs [3].
Later, the word "NG" was adopted by other ultra-small energy harvesting systems that can convert mechanical, microwave, biochemical or thermal energy into electricity [START_REF] Bisotto | Microwave based nanogenerator using the ratchet effect in Si/SiGe heterostructures[END_REF][START_REF] Hansen | Hybrid nanogenerator for concurrently harvesting biomechanical and biochemical energy[END_REF]. Recently, this concept has also been used for the emerging triboelectric NGs (TENGs).
I.2.3.1 Triboelectric nanogenerators (TENG)
TENGs are devices taking advantage of the triboelectric effect to convert movements into electric power. The triboelectric effect, also known as triboelectric charging, is a type of contact electrification in which certain materials become electrically charged after they come into frictional contact with a different material. For example, when thin films of PET plastic and a metal come into contact with each other, both of them become charged. A current will flow between them, which can be harvested to charge a battery. As shown in Fig. 1.3, there are four fundamental operating modes for TENGs: (1) vertical contact-separation mode; (2) in-plane contact-sliding mode; (3) single-electrode mode; and (4) freestanding triboelectric-layer mode [START_REF] Wang | Triboelectric nanogenerators as new energy technology for self-powered systems and as active mechanical and chemical sensors[END_REF]. The first triboelectric generator was fabricated by stacking two polymer sheets, PET and Kapton, with metal films deposited on the top and bottom of the assembled structure [START_REF] Fan | Flexible triboelectric generator[END_REF]. Such a flexible polymer generator gave an output voltage of up to 3.3 V at a power density of ∼10.4 mW/cm³. More recently, the two contact surfaces have been patterned with nanoscale structures to increase the surface area, and thus the friction between the materials. A TENG was developed by utilizing the contact electrification between a polytetrafluoroethylene (PTFE) thin film and a layer of TiO2 nanomaterial (NW and nanosheet) array [START_REF] Lin | Enhanced triboelectric nanogenerators and triboelectric nanosensor using chemically modified TiO2 nanomaterials[END_REF]. TENGs have been applied in a variety of self-powered sensing systems, such as vibration detection [START_REF] Zhong | Fiber-Based Generator for Wearable Electronics and Mobile Medication[END_REF], tracking of a moving object (location, velocity and acceleration) [START_REF] Chen | Triboelectric Nanogenerators as a Self-Powered Motion Tracking System[END_REF][START_REF] Han | Self-powered velocity and trajectory tracking sensor array made of planar triboelectric nanogenerator pixels[END_REF][START_REF] Su | Triboelectric sensor for self-powered tracking of object motion inside tubing[END_REF], or fine displacements in MEMS [START_REF] Zhou | Nanometer resolution self-powered static and dynamic motion sensor based on micro-grated triboelectrification[END_REF]. This energy-harvesting concept is revolutionary because of its simplicity in terms of materials and fabrication. One possible drawback of this kind of device is that its working principle relies on movable parts in contact or friction, which could affect device lifetime and reliability.
I.2.3.2 Piezoelectric nanogenerators (PENG)
The other way to convert mechanical movement into electrical power is to use the piezoelectric effect.
Piezoelectric materials can work in two different ways depending on the contact: (1) With conductive contacts, the conduction current is modulated by the strain and is proportional to the polarization-induced charges. In this case, the integrated material can only work as a sensor, and it requires a power supply to operate. (2) With insulating or Schottky contacts (or in the case of an insulating piezoelectric material), a displacement current is generated when there is a change in strain. The current is then proportional to the time variation of the polarization-induced charge. There is no need for an external power supply, and piezoelectric generators usually work in this way, acting as either mechanical energy harvesters or self-powered mechanical sensors. Table 1.3 compares different mechanical energy harvesting techniques according to their complexity, energy density, size, and encountered problems. We focus on the PENG in this thesis; details will be introduced in the following sections. The name "piezoelectricity" was first proposed by Hankel [START_REF] Hankel | Uber die aktino- und piezoelektrischen eigenschaften des bergkrystalles und ihre beziehung zu den thermoelektrischen[END_REF] in 1881 to describe a phenomenon discovered a year before by the brothers Pierre and Jacques Curie. The prefix "piezo" means pressure in Greek, thus the meaning of the whole word is "electricity by pressure". The phenomenon found in their experiments was that positive and negative charges appeared on several parts of the crystal surfaces when the crystal was compressed in different directions; it was analyzed according to the crystal symmetry. In 1881, one year after their discovery, the Curie brothers verified the existence of the reverse effect, predicted by Lippmann in the same year. Basically, the rule is that if certain materials are able to generate an electric charge under stress, they will also deform mechanically, under similar circumstances, in an electric field. In a word, piezoelectricity describes the ability of some materials to convert between mechanical and electrical energy.
I.3.2 The piezoelectric effect
The two processes described above are summarized as the direct and reverse piezoelectric effects. They can be explained by the arrangement of ions in the crystal structure. Fig. 1.4a shows a 2D lattice scheme of a simple two-element molecular material (AB). In a static state, the centers of gravity of the negative and positive charges of each molecule coincide. Thus, their "macro" effects on the crystal cancel each other, and the molecule behaves as electrically neutral. When the material is placed in a stress field, its internal atomic structure can be deformed. The positive and negative centers of gravity of the molecules are thus separated and form little dipoles (Fig. 1.4b-c). Dipoles with opposite directions inside the material are mutually cancelled, leaving a distribution of bound charge on the surfaces. That is how the material is polarized. The polarization creates an electric field inside the material, which can be used to transform the mechanical energy stored in the material into electrical energy.
I.3.3 Piezoelectric materials
Piezoelectric materials refer to a series of materials that can exhibit the piezoelectric, pyroelectric or ferroelectric effect. In general, all of these materials undergo a small physical deformation when subjected to an external force, an electric field, or a temperature change.
If the deformation results in a change in electric polarization, we say it gives rise to the piezoelectric, ferroelectric or pyroelectric effect. In fact, the specific symmetry of the crystal unit cell determines whether the material exhibits these effects. Based on orientation only, the lattice structures of crystals are divided into 32 point groups. The relationship between polarization behavior and crystal structure is shown in Fig. 1.5. Eleven classes are centrosymmetric, thus nonpolar, and do not possess a finite polarization or dipole moment. The other 21 classes are noncentrosymmetric, possessing no center of symmetry, which is the necessary requirement for the occurrence of piezoelectricity. However, one of these 21 classes, though noncentrosymmetric, possesses other combined symmetry elements and therefore shows no piezoelectricity. In half of the remaining 20 classes, polarization can be induced by a mechanical stress; the representative material is quartz. The other half is permanently polar and thus can exhibit piezoelectric as well as pyroelectric effects by possessing a spontaneous polarization. ZnO, CdS and most of the III-V compounds belong to this category. A subgroup within these 10 classes possesses not only spontaneous polarization, but also reversible polarization. This last category exhibits all three effects - ferroelectric, piezoelectric, and pyroelectric - with examples such as PZT, PMN-PT and PVDF.
I.4 PENGs for energy harvesting or sensing
Harvesting ambient mechanical energy at the nanometer scale holds great promise for powering small electronics and achieving self-powered electronic devices. Indeed, PENGs are widely used for energy harvesting and sensing. So far, two kinds of mechanical loading have been applied to piezoelectric NWs: in the first, the NWs are strained along their axes; in the second, the NWs are bent, generating a potential difference between the sides of the NW. Most of the applications can be sorted into these two families.
I.4.1 PENG strained along the NW's axis
PENGs working in this way have contacts on the two ends of the NWs to collect the potential. The NWs can therefore be connected in parallel, as is done for laterally integrated NGs (LING) and vertically integrated NGs (VING).
I.4.1.1 Laterally integrated NGs (LINGs)
A first demonstration of using a single ZnO NW as a NG was reported in 2006 by Wang and his group [3]. Later, laterally packaged NGs with either a single wire or bonded NWs appeared with the development of fabrication techniques [START_REF] Yang | Power generation with laterally packaged piezoelectric fine wires[END_REF][START_REF] Xu | Self-powered nanowire devices[END_REF][START_REF] Hu | High-Output Nanogenerator by Rational Unipolar Assembly of Conical Nanowires and Its Application for Driving a Small Liquid Crystal Display[END_REF]. In a single wire generator (SWG), a single ZnO NW is fixed by silver paste on a flexible polyimide film as a doubly-clamped beam (Fig. 1.6a). When a SWG is stretched and released with a strain of around 0.05%-0.1%, the generated open-circuit voltage reaches 20 mV - 50 mV [START_REF] Yang | Power generation with laterally packaged piezoelectric fine wires[END_REF].
Laterally grown ZnO NWs are fabricated using masks on the top surface of the ZnO seed layer that guide the NWs to grow on the side walls [START_REF] Qin | Growth of Horizontal ZnO Nanowire Arrays on Any Substrate[END_REF][START_REF] Xu | Patterned growth of horizontal ZnO nanowire arrays[END_REF]. The maximum output voltage peak of this structure is 1.26 V [START_REF] Xu | Self-powered nanowire devices[END_REF]. This value has been improved by using a rational unipolar assembly of 30 µm long, 1 µm wide conical ZnO wires dispersed randomly in a polymer matrix [START_REF] Hu | High-Output Nanogenerator by Rational Unipolar Assembly of Conical Nanowires and Its Application for Driving a Small Liquid Crystal Display[END_REF]. The model of the device was a capacitor-like plate structure with ZnO conical NWs packaged by PMMA as the dielectric medium, with one end fixed and a transverse mechanical force applied at the other end. The potential difference between the top and bottom electrodes of such a generator was calculated as a function of the NW density; the voltage was as high as 2.5 V when the density was 9×10⁵ mm⁻² [START_REF] Hu | High-Output Nanogenerator by Rational Unipolar Assembly of Conical Nanowires and Its Application for Driving a Small Liquid Crystal Display[END_REF]. The same group reported an effective approach, named the scalable sweeping-printing method, for fabricating a flexible high-output LING from vertically aligned ZnO NWs (Fig. 1.6b). The reported peak output power density can reach ~11 mW/cm³ [START_REF] Zhu | Flexible high-output nanogenerator based on lateral ZnO nanowire array[END_REF]. This high output power can be comparable to actual MEMS and macro-devices (see Table 1.1), although better figures of merit are required to compare the different performances and technologies, for instance including the mechanical input (strain, acceleration, force…). Another group built a LING based on PZT nanofibers, which were laterally suspended between Pt wires and packaged with a thick layer of polydimethylsiloxane (PDMS), as high-output NGs (Fig. 1.7a) [START_REF] Chen | 1.6 V nanogenerator for mechanical energy harvesting using PZT nanofibers[END_REF]. Fig. 1.7b shows a typical voltage output of this NG under periodic tapping of the PDMS surface. The potential amplitude increased monotonically with the strain applied to the PDMS slab. The highest output potential was ~1.6 V. In another work, a similar structure was built as a high-output NG, where PZT nanoribbons were printed onto the surface of a PDMS slab and interdigitated electrodes (with 25 μm spacing) were patterned on top of the nanoribbon arrays (Fig. 1.7c) [START_REF] Qi | Nanotechnology-enabled flexible and biocompatible energy harvesting[END_REF]. The frequency-dependent piezoelectric output is shown in Fig. 1.7d and e. The higher output produced by higher-frequency tapping is possibly explained by fundamental piezoelectric theory relating current to strain rate [START_REF] Sirohi | Fundamental Understanding of Piezoelectric Strain Sensors[END_REF][START_REF] Chang | Direct-write piezoelectric polymeric nanogenerator with high energy conversion efficiency[END_REF]. At a 3.2 Hz tapping frequency, the open-circuit voltage reached ~25 mV and the short-circuit current was ~40 nA, so the optimal output power was ~10 nW for the NG with a size of ~1 cm². Although these PZT NGs have promising performance, they need to be poled by applying an electric field in advance.
The vertically integrated NG (VING) structure is designed to harvest compressive and bending mechanical energy. It was first used in a self-powered system with wireless data transmission by Wang and his group, where the mechanical energy is harvested and converted into electrical energy, then stored in a capacitor to drive the other devices in the system: sensors, data processors or data transmitters (Fig. 1.8) [START_REF] Hu | Self-powered system with wireless data transmission[END_REF]. The VING structure presented in their work is composed of a five-layer flexible plate with PMMA-surrounded ZnO NWs grown vertically on both sides of a polymer substrate and electrodes deposited on the top and bottom of the plate. Compared to lateral structures, VING fabrication is far easier, with few lift-off steps. Several substrates can be used (Si, plastics, metals, etc.) and large-scale production is possible [3,[START_REF] Choi | Mechanically Powered Transparent Flexible Charge-Generating Nanodevices with Piezoelectric ZnO Nanorods[END_REF][START_REF] Van Den Heever | The performance of nanogenerators fabricated on rigid and flexible substrates[END_REF][START_REF] Ebrahimi | Electrochemical Detection of Piezoelectric Effect from Misaligned Zinc Oxide Nanowires Grown on a Flexible Electrode Electrochim[END_REF][START_REF] Khan | Mechanical and piezoelectric properties of zinc oxide nanorods grown on conductive textile fabric as an alternative substrate[END_REF][START_REF] Chen | Gallium nitride nanowire based nanogenerators and light-emitting diodes[END_REF]. An ultrathin PENG with a total thickness of ~16 µm was fabricated as an active, or self-powered, sensor for monitoring local deformation on human skin [9]. The super-flexible NG was based on ZnO NWs with an anodic aluminum oxide (AAO) layer as insulator (Fig. 1.9a-b). The AAO layer was grown on an ultrathin Al foil prior to the growth of the ZnO NWs. This could lead to high sensitivity and durability of the NG, as well as a high-throughput process, owing to the covalent bonds formed by sharing oxygen atoms and the increased contact area between the AAO and the ZnO seed layer provided by the nanopores of the AAO layer. This NG could be pasted on the human eyelid and was able to detect the motion of the eyeball.
I.4.2 PENG working with bent NWs
Bent piezoelectric NWs generate a potential difference between their opposite sides: the contracted side and the extended side (Fig. 1.10). The problem in harvesting energy with bent NWs is how to place the contacts. The first attempt was to integrate a Pt-coated serrated electrode with vertically aligned ZnO NWs to convert ultrasonic waves into electricity, as schematically shown in Fig. 1.11a [START_REF] Wang | Direct-current nanogenerator driven by ultrasonic waves[END_REF]. The ultrasonic wave drove the electrode up and down to bend and/or vibrate the NWs. The serrated electrode acted as an array of metal tips to create, collect, and output electricity from the bent NWs. A cross-sectional SEM image of the packaged NW arrays is shown in Fig. 1.11b. The output power per unit of area was 10 mW/cm². This type of NG, built with NWs grown on an area of 1 cm², has been demonstrated to be able to operate up to 1000 nanodevices fabricated with one NW or nanotube [START_REF] Huang | Logic Gates and Computation from Assembled[END_REF][82][START_REF] Chen | Bright infrared emission from electrically induced excitons in carbon nanotubes[END_REF].
Another work aiming to use bent piezoelectric NWs for sensing applications was carried out by E. Perez [START_REF] Perez | Matrice de nanofils piézoélectriques interconnectés pour des applications capteur haute résolution : défis et solutions technologiques[END_REF]. He designed and fabricated a force-sensitive pixel consisting of an individual ZnO NW, selectively grown on a material stack representative of the targeted process, with side electrodes (Fig. 1.12).
I.4.3 Nanopiezotronics
Besides the two major working modes, some studies have relied on conductive contacts and the piezotronic effect. The piezotronic effect uses the piezopotential created in piezoelectric materials as a "gate" voltage to tune or control the charge-carrier transport properties. It is used to fabricate a new class of electronic components, such as piezoelectric field-effect transistors (PE-FETs), piezoelectric diodes (PE diodes) and sensors. These devices form the fundamental components of nanopiezotronics. Wang et al. reported a PE-FET designed by connecting a ZnO NW across two electrodes that can apply a bending force to the NW [2]. The electric field generated by the piezoelectric effect across the NW serves as the gate controlling the electric current flowing through the NW.
I.5 Conclusion
In this chapter, we illustrated the importance and promising future of energy harvesting techniques. By harvesting energy from the environment, they can provide power to a variety of autonomous systems for wireless sensor networks, biomedical implants, or wearable personal electronics. We also introduced different energy harvesting methods based on semiconductors, such as PV cells, TEGs and different mechanical energy harvesters. Finally, we focused on the fundamentals and applications of PENGs, which are the main devices studied and developed in this thesis.
Chapter II Analytical and Modeling Study of Piezoelectric Nanowires and Nanogenerators
Analytical and computational modeling studies are important tools for investigations at the nanoscale, where experimental tools may be limited by the resolution of manipulation or the sensitivity of signal acquisition. In this chapter, we first introduce the modeling tools that are widely used in nanoscale research. Secondly, analytical and modeling studies reported in the literature on the material properties and piezoelectric response of individual piezoelectric semiconducting NWs are discussed, from which the merits of NWs are explained. The finite element method (FEM) is the modeling tool that we use for our work on NW-based NGs. Our FEM study is divided into three parts: first, NG cells integrating intrinsic piezoelectric NWs are investigated with various matrices under compression and bending, where material properties specific to nanoscale geometries are also considered; second, the existence and influence of the screening effect is examined by coupling the piezoelectric and semiconducting physics; and finally, we put forward the theory of surface Fermi level pinning to explain the differences between theoretical results and experimental observations. Although experimental studies have probed either the electrical or the mechanical behavior of NWs, the characterization of the coupled properties still faces challenges such as the manipulation of individual NWs, the contact resistance between the sample and the measurement tools, and the sensitivity of the output electrical signals.
In fact, different or even contradictory results have been obtained when measuring the piezoelectric coefficient d33 with different tools and methods [7,[START_REF] Christman | Piezoelectric measurements with atomic force microscopy[END_REF][START_REF] Fan | Template-assisted large-scale ordered arrays of ZnO pillars for optical and piezoelectric applications[END_REF][START_REF] Scrymgeour | Polarity and piezoelectric response of solution grown zinc oxide nanocrystals on silver[END_REF][START_REF] Zhu | Piezoelectric characterization of a single zinc oxide nanowire using a nanoelectromechanical oscillator[END_REF]. Compared with the bulk value of 12.4 pm/V for ZnO [7,[START_REF] Christman | Piezoelectric measurements with atomic force microscopy[END_REF], piezoresponse force microscopy (PFM) conducted on ZnO nanorods with diameters of 150 nm - 500 nm revealed an effective coefficient from 4.41 pm/V to 7.5 pm/V, smaller than the bulk value [START_REF] Fan | Template-assisted large-scale ordered arrays of ZnO pillars for optical and piezoelectric applications[END_REF][START_REF] Scrymgeour | Polarity and piezoelectric response of solution grown zinc oxide nanocrystals on silver[END_REF]. In contrast, a resonance shift method conducted on a 230 nm ZnO NW gave a value of 12000 pm/V, about 1000 times the bulk value [START_REF] Zhu | Piezoelectric characterization of a single zinc oxide nanowire using a nanoelectromechanical oscillator[END_REF]. Since these contradictory observations depend partly on the measurement methods, as well as on the materials of the substrate and of the probe contacting the NW, theoretical and computational studies help to explore the physical principles of piezoelectric NWs. Simulation tools operate at different levels, from the atomic structure to continuum media. First-principles calculations are employed to investigate the piezoelectricity of semiconducting piezoelectric NWs at the atomistic level, while the finite element method (FEM) is used to study the electromechanical behavior of both individual NWs and NW-based devices. Besides, molecular dynamics (MD) and continuum models are also applied to related studies in the piezoelectric field. Details of these simulation tools are introduced in Appendix I.
II.1 Study of piezoelectricity for intrinsic NWs
Piezoelectricity describes the ability of some materials to convert between mechanical and electrical energy. Piezoelectric materials are polarized under mechanical strain and the polarization is proportional to the strain. This is the direct piezoelectric effect. The converse effect is that a crystal is strained when an electric field is applied to it.
II.1.1 First-order piezoelectricity
Continuum media models are used to describe the piezoelectricity. In the first-order theory, the polarization is written as P_µ = Σ_αβ e_µαβ ε_αβ, where the indexes µ, α, β stand for the x, y and z axes of the Cartesian coordinate system. The mechanical and electrical behavior of a piezoelectric material can be modeled by two governing equations [START_REF] Sodano H A | A Review of Power Harvesting from Vibration Using Piezoelectric Materials Shock Vib[END_REF],
σ = [c]ε − [e]^T E        2.1
D = [e]ε + [κ]E        2.2
where σ is the stress, ε is the strain, E is the electric field, D is the electric displacement, [e] is the piezoelectric coupling matrix, [κ] is the dielectric matrix and [c] is the elasticity matrix, described by the tensor
[c] = | c11 c12 c13  0   0   0  |
      | c12 c11 c13  0   0   0  |
      | c13 c13 c33  0   0   0  |
      |  0   0   0  c44  0   0  |
      |  0   0   0   0  c44  0  |
      |  0   0   0   0   0  c66 |
In the case of ZnO, c11 = 209.7 GPa, c12 = 121.1 GPa, c13 = 105.1 GPa, c33 = 210.9 GPa, c44 = 42.47 GPa and c66 = 44.29 GPa are measured for a thin film [START_REF] Bateman | Elastic moduli of single crystal zinc oxide[END_REF]; the static relative dielectric constants are κ11 = 7.77 and κ33 = 8.91 [START_REF] Ashkenov | Infrared dielectric functions and phonon modes of high-quality ZnO films[END_REF]. The mechanical equilibrium condition is given by
−∇·σ = −∇·([c]ε) + ∇·([e]^T E) = −∇·([c]ε) − ∇·([e]^T ∇φ) = f_v        2.3
where f_v is the mechanical body force, equal to 0 in a static system. Considering the Poisson equation in dielectric materials,
∇·D = 0  ⇒  ∇·([e]ε) + ∇·([κ]E) = 0        2.4
the coupling equations that express the piezoelectric effect are then written as:
∇·([c]ε) + ∇·([e]^T ∇φ) = 0
∇·([κ]∇φ) − ∇·([e]ε) = 0        2.5
For the rest of the modeling study involving ZnO material properties, we will use the parameters mentioned above unless otherwise stated. Besides, second-order piezoelectricity and flexoelectricity will not be considered in our simulation work for the moment.
II.1.2 Second-order piezoelectricity and flexoelectricity
Second-order piezoelectricity, also called nonlinear piezoelectricity, was first put forward by Cibert et al. in 1992 and applied to CdTe [START_REF] Cibert | Piezoelectric fields in CdTe-based heterostructures[END_REF][START_REF] André | Nonlinear piezoelectricity: The effect of pressure on CdTe[END_REF]. The piezoelectric coefficient of CdTe-based quantum wells measured at large lattice mismatch was more than three times larger than the value reported for bulk CdTe [START_REF] André R | Non-linear piezoelectric effect in CdTe and CdZnTe[END_REF]. They measured a piezoelectric field that exhibited a nonlinear dependence on the elastic strain. Subsequent theoretical calculations showed that the piezoelectric tensor in CdTe strongly depends on the hydrostatic pressure, but very little on the traceless strain [START_REF] Dal Corso | Nonlinear piezoelectricity in CdTe[END_REF]. Bester et al. demonstrated the existence of second-order (nonlinear) piezoelectricity in III-V compounds and established the theoretical framework in which the second-order piezoelectric coefficient tensor is simplified in the wurtzite symmetry [START_REF] Bester | Effects of linear and nonlinear piezoelectricity on the electronic properties of InAs∕GaAs quantum dots[END_REF][100][START_REF] Prodhomme | Nonlinear piezoelectricity in wurtzite semiconductors[END_REF]. They determined the three first-order and the eight second-order piezoelectric coefficients for AlN, GaN, InN and ZnO by using a finite difference technique in combination with Density Functional Perturbation Theory (DFPT) within the Local Density Approximation (LDA). The second-order piezoelectric tensor is associated with the second term in Eq. 2.6:
P_µ = e_µαβ ε_αβ + ½ e_µαβγλ ε_αβ ε_γλ        2.6
where e_µαβ is the third-rank proper piezoelectric coefficient tensor of the unstrained material, while e_µαβγλ is a fifth-rank tensor defined below. The Greek indexes µ, α, β, γ and λ stand for the x, y and z axes of the Cartesian coordinate system. The six independent components of the strain tensor are given in the Voigt notation as:
ε1 = εxx, ε2 = εyy, ε3 = εzz, ε4 = 2εyz, ε5 = 2εxz, ε6 = 2εxy
Using Latin letters (j, k, ...) for the Voigt index, the polarization component can then be written as:
P_µ = e_µj ε_j + ½ e_µjk ε_j ε_k        2.7
where e_µj represents the first-order strain-induced piezoelectric tensor.
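As a quick numerical illustration of the first-order relations (Eqs. 2.1 and 2.2), the sketch below assembles the ZnO matrices in Voigt notation and evaluates the stress and electric displacement for a small axial strain under short-circuit conditions. The elastic constants are those quoted above; the piezoelectric constants e15, e31, e33 and the dielectric constants used here are commonly quoted thin-film values and should be read as assumptions, not as values given in this chapter.

```python
import numpy as np

# Voigt-notation sketch of Eqs. 2.1-2.2 for wurtzite ZnO (c-axis along z).
# Elastic constants are those quoted above; e and kappa are assumed typical values.
c11, c12, c13, c33, c44, c66 = 209.7e9, 121.1e9, 105.1e9, 210.9e9, 42.47e9, 44.29e9
c = np.array([[c11, c12, c13, 0, 0, 0],
              [c12, c11, c13, 0, 0, 0],
              [c13, c13, c33, 0, 0, 0],
              [0, 0, 0, c44, 0, 0],
              [0, 0, 0, 0, c44, 0],
              [0, 0, 0, 0, 0, c66]])               # Pa
e15, e31, e33 = -0.45, -0.51, 1.22                  # C/m^2 (assumed)
e = np.array([[0, 0, 0, 0, e15, 0],
              [0, 0, 0, e15, 0, 0],
              [e31, e31, e33, 0, 0, 0]])            # piezoelectric matrix
kappa = 8.85e-12 * np.diag([7.77, 7.77, 8.91])      # F/m (assumed)

eps = np.array([0, 0, 1e-4, 0, 0, 0])               # 0.01 % axial strain
E_field = np.zeros(3)                               # short-circuit condition
sigma = c @ eps - e.T @ E_field                     # Eq. 2.1
D = e @ eps + kappa @ E_field                       # Eq. 2.2
print("axial stress sigma_zz =", sigma[2], "Pa")
print("polarization D_z      =", D[2], "C/m^2")
```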
The nonlinear piezoelectric coefficient, which depends on the strain, is then defined from Eq. 2.7 as
e_µj(ε) = ∂P_µ/∂ε_j = e_µj + e_µjk ε_k        2.8
A more specific study of the nonlinear piezoelectricity of ZnO NWs was proposed by Al-Zahrani et al. [102]. Equilibrium values calculated with Density Functional Theory (DFT) were fitted to a second-order expansion of the form
P = e33 ε⊥ + 2 e31 ε∥ + B⊥⊥ ε⊥² + B∥∥ ε∥² + B⊥∥ ε⊥ ε∥        2.9
which expresses the strain dependence of the magnitude of the total piezoelectric polarization in the direction orthogonal to the growth plane. In this particular case of a cylindrical NW, εxx = εyy = ε∥ and εzz = ε⊥ are the strains parallel and perpendicular to the growth plane, respectively. The dependence of the total polarization on strain in the range −0.08 to +0.08, for both the linear and the nonlinear model, showed that the nonlinear polarization always takes either less negative or more positive values than the linear model. The linear and nonlinear models were tested on a ZnO NW subjected to a bending force. Over the same strain range, the polarization on a cross section changed from −0.08 C/m² to 0.06 C/m² for the nonlinear model and from −0.12 C/m² to 0 C/m² for the linear model (Fig. 2.1). The polarization difference was up to 15%, which is consistent with the calculations in the literature [START_REF] Prodhomme | Nonlinear piezoelectricity in wurtzite semiconductors[END_REF]. Nonlinear piezoelectricity plays an important role in semiconducting piezoelectric NWs at high strain. Our study does not include this effect, because the strain in our models is relatively small, so that second-order piezoelectricity has a weak influence. Another interesting phenomenon has been observed: a non-uniform strain field, i.e. the presence of strain gradients, can locally break inversion symmetry and induce polarization even in centrosymmetric crystals [103]. This is the flexoelectric effect. The flexoelectric coefficient exhibits a nonlinear interaction with the piezoelectric coefficient. Since it has a strong size dependency, a nonlinear increase of 3 - 4 times in the effective piezoelectric coefficient was suggested with decreasing NW diameter [START_REF] Wang | Piezoelectric nanogenerators-Harvesting ambient mechanical energy at the nanometer scale[END_REF]105]. A study on bending ZnO NWs showed that this enhancement becomes significant when the diameter is below 50 nm [105]. For a bending BaTiO3 beam, a 500% enhancement over bulk properties induced by flexoelectricity exists only for a beam thickness of 5 nm [103]. In our simulations, the diameter of the ZnO NWs is above 50 nm. Besides, instead of being bent, the ZnO NWs are actually compressed or elongated along their c-axis, therefore less strain gradient can be converted into polarization by the flexoelectric effect. Although neither second-order piezoelectricity nor flexoelectricity is applicable to our case for the moment, it would still be very interesting to include these effects in future simulations for a better understanding of the physical principles.
II.1.3 Mechanical properties and scaling rules for NWs
To assess the merits of using piezoelectric NWs in upcoming technologies, a simple analytical model explains the geometry scaling rules for NWs. An individual cylindrical NW is bent by a force F perpendicular to the central axis (Fig. 2.2a) or compressed by a force F along the central axis (Fig. 2.2b), with the bottom clamped to the ground. For a fixed applied force, the radius and the aspect ratio govern the strain and the strain energy generated within the NW; the corresponding formulas are collected in Fig. 2.2 below, and a numerical sketch of the scaling follows.
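The scaling argument can be sketched numerically before listing the formulas of Fig. 2.2. The sketch below uses standard clamped-beam and uniaxial-compression expressions for a cylindrical NW; the Young's modulus and the force are assumed values, and at 80 nN the linear formulas are only indicative, since (as discussed just below) the deformation is already in the nonlinear regime.

```python
import numpy as np

# Sketch of the scaling argument using standard clamped-beam and uniaxial-compression
# formulas (cf. Fig. 2.2) for a cylindrical NW of radius r and length L under a fixed
# force F. E and F are assumed values.
E = 140e9          # Pa, bulk-like Young's modulus (assumed)
F = 80e-9          # N, typical AFM-tip force quoted in the text

def bending(r, L):
    eps_max = 4 * F * L / (np.pi * E * r**3)            # max surface strain at the base
    deflection = 4 * F * L**3 / (3 * np.pi * E * r**4)  # tip deflection
    return eps_max, deflection

def compression(r, L):
    eps = F / (np.pi * E * r**2)                        # uniform axial strain
    return eps, eps * L                                 # strain and shortening

for scale in (1.0, 0.5):                                # shrink all dimensions by 2
    r, L = 25e-9 * scale, 600e-9 * scale
    eps_b, defl = bending(r, L)
    eps_c, dL = compression(r, L)
    print(f"scale {scale}: bending strain {eps_b:.2e} (deflection {defl*1e9:.0f} nm), "
          f"compressive strain {eps_c:.2e}")
```

Halving all dimensions at fixed force multiplies the bending strain by four and the deflection by two, which is the scaling benefit discussed below.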
Figure 2.2 Analytical formulas for a cylindrical NW of radius r and length L (Young's modulus E) loaded by a force F: (a) bending — strain along the z axis ε_zz = 4F(L − z)/(πEr³), maximum deflection δ = 4FL³/(3πEr⁴), strain energy U = 2F²L³/(3πEr⁴); (b) compression — strain along the z axis ε_zz = F/(πEr²), maximum deformation ΔL = FL/(πEr²), strain energy U = F²L/(2πEr²).
Hinchet et al. investigated the strain, stiffness and deformation of ZnO NWs as the size is scaled down at a given bending force [START_REF] Hinchet | Scaling rules of piezoelectric nanowires in view of sensor and energy harvester integration[END_REF]. The curves in Fig. 2.3 were plotted from analytical calculations under the linear assumption (i.e. a constant Young's modulus). When all dimensions are reduced by a common scaling factor, the strain increases as the inverse square of that factor and the deformation as its inverse. By reducing the scaling factor, namely the size of the NW, the strain therefore increases quickly. Thus we can either obtain a larger polarization with the same force for a thinner NW, or bend a thinner NW with a smaller force. This is how a NW can benefit from scaling down. In fact, a FEM simulation of a NW having a radius of 25 nm and a length of 600 nm (geometry parameters chosen to compare with fabricated NWs in experiments) shows that the mechanical deformation enters the nonlinear region when the force exceeds 20 nN. With a nonlinear deformation, the strain generated inside the NW is increased by around 1.6 times at 80 nN (a typical force applied by an AFM tip in the characterization of piezoelectric NWs) compared to the linear case shown in Fig. 2.3 [START_REF] Hinchet | Scaling rules of piezoelectric nanowires in view of sensor and energy harvester integration[END_REF]. This example clearly reveals the necessity of considering mechanical nonlinear effects in NWs. In another view, the surface bond saturation, rather than bulk nonlinear elastic effects, was considered to be responsible for the size effects of ZnO nanostructures [START_REF] Zhang | Young's moduli of ZnO nanoplates: Ab initio determinations[END_REF]. Agrawal et al. then studied the elasticity size effects in ZnO NWs with a combined experimental-computational approach [START_REF] Agrawal | Elasticity size effects in ZnO nanowires--a combined experimental-computational approach[END_REF]. The authors measured NWs with diameters ranging from 20.4 nm to 412.9 nm by applying a uniaxial tensile load using a nanoscale materials testing system inside a TEM, and calculated the Young's modulus of NWs with diameters ranging from 5 nm to 20 nm. The results are presented in Fig. 2.4: the Young's modulus decreases from 194 GPa to 169 GPa as the NW diameter increases from 5 nm to 20 nm, then it monotonically decreases from 160 GPa to 140 GPa and finally converges to the bulk value (140 GPa) as the diameter increases from 20 nm to 400 nm. This size effect is due to surface relaxation and the long-range interactions present in ionic crystals, resulting in surfaces much stiffer than the bulk [START_REF] Agrawal | Elasticity size effects in ZnO nanowires--a combined experimental-computational approach[END_REF]. Agrawal's experimental-computational results are consistent with the size effect reported by Chen et al. [START_REF] Chen | Size Dependence of Young's Modulus in ZnO Nanowires[END_REF].
Figure 2.3 Variation of the Young's modulus with wire diameter (the dashed line shows the experimentally reported bulk value of ∼140 GPa). [START_REF] Agrawal | Elasticity size effects in ZnO nanowires--a combined experimental-computational approach[END_REF]
Although the geometry size of the models considered in first-principles calculations is quite different, Yvonnet et al.
reported a similar nonlinear elasticity in wurtzite ZnO NWs [START_REF] Yvonnet | Characterization of surface and nonlinear elasticity in wurtzite ZnO nanowires[END_REF]. They computed by an ab initio method a Young's modulus that decreases from 420 GPa to 158 GPa as the diameter increases from 0.3 nm to 10 nm (Fig. 2.4a). Qi et al. demonstrated opposite size effects in pristine and hydrogen-passivated (H-passivated) ZnO NWs from first-principles calculations conducted in a tetragonal supercell [START_REF] Qi | Different mechanical properties of the pristine and hydrogen passivated ZnO nanowires[END_REF]. Their results show that the Young's modulus of pristine ZnO NWs is larger than that of bulk ZnO and decreases from 210 GPa to 160 GPa as the diameter increases from 0.7 nm to 1.6 nm. In contrast, for the H-passivated ZnO NWs, the Young's modulus is smaller than that of bulk ZnO and increases from 70 GPa to 85 GPa as the diameter increases from 1.1 nm to 2.0 nm (Fig. 2.4b). The underlying mechanism of this discrepancy is the coupling of the core nonlinear effect with the surface stress effect. The latter leads to significant axial elongations for pristine NWs and contractions for H-passivated NWs, producing the stiffening or softening of the ZnO NWs. It is worth pointing out that, in all the first-principles work, the Young's modulus shows a size effect only for NWs with diameters of a few nanometers; as the diameter exceeds this range, the Young's modulus converges to the bulk value. However, in Espinosa's study, the experimentally measured Young's modulus shows that the size effect already starts below 100 nm. The piezoelectricity of ZnO NWs with diameters ranging from 0.3 nm to 2.8 nm was also studied by employing first-principles methods [START_REF] Xiang | Piezoelectricity in ZnO nanowires: A first-principles study[END_REF]. The LDA of the exchange-correlation functional was used for the calculations. The piezoelectric constant increases monotonically as the ZnO NW radius decreases. These size effects were later studied in both ZnO and GaN NWs with radii ranging from 0.6 nm to 2.4 nm by Espinosa et al. [8]. First-principles DFT calculations were performed with the generalized gradient approximation (GGA), using the Perdew-Burke-Ernzerhof (PBE) functional and the revised PBE (RPBE) functional with double-ζ polarization (DZP) orbital basis sets. Their results show that GaN NWs present a larger and more extended size dependence than ZnO, and that the piezoelectric constants can be improved by 2 orders of magnitude compared to the bulk values [START_REF] Catti | Full piezoelectric tensors of wurtzite and zinc blende ZnO and ZnS by first-principles calculations[END_REF]116] if the NW diameter is reduced to less than 1 nm [8]. Other first-principles calculations of the piezoelectricity of ZnO, GaN and AlN NWs are compared in Table 2.1 to Table 2.3, where the tabulated quantity is the effective piezoelectric constant. These investigations are highly dependent on the chosen functionals. In addition, only a few hundred atoms (a few nanometers in diameter) can be evaluated due to computational limitations. Here we face a contradiction similar to the one found for the Young's modulus: according to the calculations, the piezoelectric coefficients are enhanced when the NW diameter is as small as a few nanometers, which is not consistent with experimental results [8].
II.1.5 Piezoelectric response of individual intrinsic NWs
Initially, researchers considered piezoelectric NWs, such as ZnO NWs, as intrinsic semiconductors.
Thus most of the early studies treat them using the physics of dielectric materials.
II.1.5.1 Bending individual intrinsic ZnO NWs
Wang et al. first proposed a continuum model for simulating the electrostatic potential in a laterally bent ZnO NW with a diameter of 50 nm and a length of 600 nm [START_REF] Gao | Electrostatic potential in a bent piezoelectric nanowire. The fundamental theory of nanogenerator and nanopiezotronics[END_REF]. A perturbation technique is introduced with the assumption that no free charge exists in the NW. To simplify the analytical solution, the NW is considered to be an insulator and to have a cylindrical shape and isotropic elastic constants. The maximum potential at the surface of the NW is given by Eq. 2.10,
φ_max(T, C) = ± [1/(π(κ0 + κ⊥))] [e33 − 2(1 + ν)e15 − 2ν e31] f_y/(E a)        2.10
where κ0 and κ⊥ are the vacuum and transverse dielectric constants, f_y is the lateral force, E is the Young's modulus, e33, e31 and e15 are the linear piezoelectric coefficients, ν is the Poisson's ratio and a is the NW radius. Fig. 2.5 shows the potential distribution for the ZnO NW at a lateral bending force of 80 nN. The voltage drop created across the cross section of the NW is around ± 0.3 V, with the compressive side having a negative voltage and the tensile side a positive voltage, which is high enough to support the working mechanism of NGs [3,120]. They also point out that if the experimentally measured elastic modulus [START_REF] Chen | Size Dependence of Young's Modulus in ZnO Nanowires[END_REF][START_REF] Song | Elastic property of vertically aligned nanowires[END_REF] and piezoelectric coefficient [7] are used in the above calculation, the potential would be multiplied by a factor of 3 - 4. Hinchet et al. also put forward an optimized configuration of electric contacts for bent NWs [START_REF] Hinchet | Scaling rules of piezoelectric nanowires in view of sensor and energy harvester integration[END_REF] (Fig. 2.7a). The potentials generated in the "bottom-bottom" and the "top-bottom" configurations are compared in Fig. 2.7b. The potential difference is linear in the "bottom-bottom" configuration and approximately fits bF + cF² in the "top-bottom" configuration, where F is the bending force applied to the NW. Larger sensitivities would be achieved with a top-bottom contact configuration. However, such a configuration remains technologically out of reach to date and would lose linearity at high forces. Perez et al. designed a representative elementary pixel of an arrayed force-sensing device based on an individual ZnO NW using the "bottom-bottom" electrode configuration [START_REF] Perez | Static finite element modeling for sensor design and processing of an individually contacted laterally bent piezoelectric nanowire[END_REF]. The pixel is constituted by a silicon base substrate, a ZnO seed layer, one vertical ZnO NW and two gold metallic electrodes placed at the NW base (Fig. 2.8a). In their work, the effects of the NW-electrode distance and of the electrode thickness resulting from the fabrication techniques were studied (Fig. 2.8b). They concluded that an output voltage of several hundred millivolts could be obtained in technologically relevant configurations in response to an 80 nN force.
II.2 FEM modeling of NGs based on intrinsic piezoelectric NWs
Individual NWs typically provide very low power. Sun et al. [START_REF] Sun | Fundamental study of mechanical energy harvesting using piezoelectric nanostructures[END_REF] proposed a dynamic analysis of individual NWs showing that the power generated by a single NW varies from 10⁻³ pW to 1 pW, depending on the damping ratio of the vibration.
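A one-line estimate puts these single-NW power levels in perspective: to supply even a microwatt-scale autonomous node, an enormous number of NWs has to contribute, which motivates the integrated structures discussed next. The target power below is an assumed figure used only for illustration.

```python
# Rough arithmetic based on the per-NW power range quoted above (10^-3 pW to 1 pW):
# number of NWs needed to supply an assumed microwatt-scale load.
P_target = 1e-6                      # W, assumed budget for a small autonomous node
for p_nw in (1e-15, 1e-12):          # W per NW (10^-3 pW and 1 pW)
    n_needed = P_target / p_nw
    print(f"per-NW power {p_nw:.0e} W -> about {n_needed:.0e} NWs needed")
```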
To increase the generated energy, NWs are mostly integrated laterally [START_REF] Xu | Self-powered nanowire devices[END_REF][START_REF] Qin | Growth of Horizonatal ZnO Nanowire Arrays on Any Substrate[END_REF][START_REF] Xu | Patterned growth of horizontal ZnO nanowire arrays[END_REF][START_REF] Hu | Self-powered system with wireless data transmission[END_REF][START_REF] Yang | Characteristics of output voltage and current of integrated nanogenerators[END_REF] or vertically [START_REF] Hu | Self-powered system with wireless data transmission[END_REF][START_REF] Choi | Mechanically Powered Transparent Flexible Charge-Generating Nanodevices with Piezoelectric ZnO Nanorods[END_REF][START_REF] Van Den Heever | The performance of nanogenerators fabricated on rigid and flexible substrates[END_REF][START_REF] Ebrahimi | Electrochemical Detection of Piezoelectric Effect from Misaligned Zinc Oxide Nanowires Grown on a Flexible Electrode Electrochim[END_REF][START_REF] Khan | Mechanical and piezoelectric properties of zinc oxide nanorods grown on conductive textile fabric as an alternative substrate[END_REF][START_REF] Chen | Gallium nitride nanowire based nanogenerators and light-emitting diodes[END_REF] on the substrate to form piezoelectric composite materials.
II.2.1 State of the art
NWs can be integrated into NGs using two methods: LING and VING.
II.2.1.1 Modeling study on LING
The development and applications of the LING have been introduced in Chapter I. It developed from the SWG, which consists of one ZnO micro-rod fixed by silver paste on a flexible polyimide film as a doubly clamped beam. The maximum output voltage peak of this structure was 1.26 V [START_REF] Xu | Self-powered nanowire devices[END_REF]. This value was improved by using a rational unipolar assembly of conical ZnO NWs [START_REF] Hu | High-Output Nanogenerator by Rational Unipolar Assembly of Conical Nanowires and Its Application for Driving a Small Liquid Crystal Display[END_REF]. The model of the device was a capacitor-like plate structure with conical ZnO NWs packaged in PMMA as dielectric medium, with one end fixed and a transverse mechanical force applied at the other end (Fig. 2.9a and b). The simulation was conducted without considering the coupling between the piezoelectric field and the induced charges in the electrode plates (first-order approximation), and the model of a pair of intrinsic NWs with opposite c-axes was analyzed. The potential difference between the top and bottom electrodes of such a generator was calculated as a function of the NW density, and the voltage was as high as 2.5 V when the density was 9×10⁵ mm⁻². In their work, the piezopotentials inside the two conical NWs are opposite in sign under compressive strain, but with a small separation of the charge centers in the z direction, which is the fundamental mechanism creating the induced charges at the top and bottom electrodes (Fig. 2.9c-e). The calculation for cylindrical NWs with zero conical angle (Fig. 2.9f) showed that the conical shape is the key for the device to work correctly.
II.2.1.2 Modeling study on VING
Theoretical and computational studies have been carried out to investigate the physical basics and the optimization guidelines of the VING design. Hinchet et al.
explored the working principle of the VING structure in compression mode and put forward design and guideline rules for performance improvement using FEM simulations [START_REF] Hinchet | Design and guideline rules for the performance improvement of vertically integrated nanogenerator[END_REF]. The VING is composed of NG cells, each including a single intrinsic ZnO NW surrounded by a PMMA matrix, under an applied compressive pressure of 1 MPa (Fig. 2.10a). The energy conversion mechanism was divided into three steps: mechanical energy transfer, mechanical-to-electrical energy conversion and, finally, electrical energy transfer to the output circuit (Fig. 2.10b). To simplify the simulation, the authors started with an individual NG cell, coupling the piezoelectric equations for the NW with the mechanical and electrostatic equations for the PMMA and the electrodes. For comparison, the geometry parameters of the NW were taken from Ref. [START_REF] Gao | Electrostatic potential in a bent piezoelectric nanowire. The fundamental theory of nanogenerator and nanopiezotronics[END_REF] (also see Fig. 2.10a) and varied, as well as the geometry ratio (defined as NW diameter/cell size), to optimize the NG performance. The divergence of the simulation results from the analytical calculation indicated a complex 3D effect arising from the non-homogeneous structure.
Figure 2.10 The yield (η) has been calculated for each step using the parameters of (a). d_x, E_x and ε_x are the thickness, Young's modulus and dielectric constant of layer x. The index eq indicates that the corresponding layer is modeled as a uniform equivalent medium. T is the stress and e33 the piezoelectric coefficient relevant to this strain configuration. [START_REF] Hinchet | Design and guideline rules for the performance improvement of vertically integrated nanogenerator[END_REF]
Simulations considering multiple NG cells (an NG matrix) were then performed [5]. The piezoelectric potential increased from 20 mV to 70 mV and approached saturation as the NG matrix size increased from 1 cell × 1 cell to 15 cells × 15 cells, indicating that edge effects and 3D dielectric losses are reduced for a matrix size of around 200 NWs (Fig. 2.11a and b). Compared to a bulk ZnO layer, the NG cell generated a piezoelectric potential and an electric energy varying as a function of the geometry ratio. The potential peak appeared at a geometry ratio equal to 0.4, while the energy peak appeared at 0.5. For a geometry ratio of 0.5, the piezoelectric potential was as high as 3.3 times that of the bulk ZnO layer, and the electric energy was about 5.6 times that of the bulk-layer generator (Fig. 2.11c-e). The authors also tested different materials as the top insulating layer, among which Si3N4 featured the best trade-off between mechanical energy loss and electric energy loss because of its large Young's modulus (250 GPa) and relatively high permittivity (9.7). It is worth pointing out that the authors demonstrated the decrease of the energy generation for larger NW diameters and the approach to a saturation value as the NW length increases. Semiconducting properties were then introduced into the FEM simulation. Given the size of the NWs, the full-depletion approximation for the free charges was applied, as the doping level was smaller than 5×10¹⁸ cm⁻³. The screening was then caused by the fixed ionized dopants and decreased the piezoelectric potential generated by the ZnO NWs from 68 mV to 1.4 mV as the dopant concentration increased from 1×10¹² cm⁻³ to 1×10¹⁸ cm⁻³.
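The order of magnitude of this screening can be checked with a back-of-envelope comparison between the piezoelectric polarization charge per unit area and the fixed ionized-donor charge integrated over the NW length (full depletion assumed). The strain and the piezoelectric constant below are assumed, illustrative values; the trend, not the exact numbers, is the point.

```python
# Back-of-envelope sketch (assumed values): compare the piezoelectric polarization
# charge density with the areal density of fixed ionized-donor charge in a fully
# depleted NW. When the dopant charge dominates, the piezopotential is largely screened.
q = 1.602e-19          # C, elementary charge
e33 = 1.22             # C/m^2, assumed piezoelectric constant
strain = 7e-6          # assumed axial strain (~1 MPa over ~140 GPa)
L = 600e-9             # m, NW length
sigma_piezo = e33 * strain                       # C/m^2, polarization charge density
for Nd_cm3 in (1e12, 1e14, 1e16, 1e18):
    sigma_dopant = q * (Nd_cm3 * 1e6) * L        # C/m^2 (1e6 converts cm^-3 to m^-3)
    print(f"Nd = {Nd_cm3:.0e} cm^-3: dopant/piezo charge ratio = "
          f"{sigma_dopant / sigma_piezo:.1e}")
```

With these assumptions the dopant charge overtakes the piezoelectric charge around 10¹⁴ cm⁻³ and exceeds it by several orders of magnitude at 10¹⁸ cm⁻³, consistent with the near-complete screening found in the simulation.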
The above simulations used the bulk piezoelectric coefficients; the piezoelectric potential would thus increase to 123 mV if the piezoelectric coefficient of ZnO NWs measured experimentally at the nanoscale were used [7].
II.2.1.3 Alignment of NWs in VING
In the modeling studies of VING, the NWs are usually considered to possess a perfect vertical orientation and a unified polarity. Although perfectly aligned NWs have been successfully fabricated [128], to our knowledge only slightly inclined piezoelectric NWs have been used to develop mechanical transducers [5]. In fact, the inclination of the NWs does temper the potential generation. We have discussed this effect in particular in one chapter of the book Future Trends in Microelectronics: Journey into the Unknown [START_REF] Tao | Will composite nanomaterials replace piezoelectric thin films for energy transduction applications? Future Trends in Microelectronics: Journey into the Unknown[END_REF]. The reference NG structure was formed by ZnO NWs (600 nm long) immersed in PMMA with a top insulating layer of Si3N4 (100 nm thick), which was found optimal compared to a PMMA top layer [5]. A single cell was evaluated under a compression of 1 MPa, including an increasing number of NWs having different inclination angles with respect to the ideal vertical case, starting from 1 NW up to 64 NWs. The NWs were assumed to grow along the c-axis in all the FEM simulations, which is important to define their piezoelectric properties; this was done by defining these properties with respect to a rotated Cartesian axis aligned with each inclined NW. In this section, the material properties of the ZnO thin film were used for the simulations.
a. Individual VING composite cell including a single NW
Fig. 2.12a shows the absolute value of the potential generated by the individual composite cell integrating an inclined NW as a function of the inclination angle. Two simulation conditions are compared: taking into account the Cartesian axis rotation to correct the c-axis, or neglecting the axis rotation. For low inclination angles (below about 5°) the curves are very close to each other and only a small error is produced if the c-axis correction is neglected (maximum error of 2% below 5° and 8% below 12°); the error then increases with the inclination angle, reaching a maximum of ≈85%. An optimal inclination angle can also be observed below 5°, which slightly increases the absolute electric potential at the top electrode. The potential is then strongly reduced, by 50% for an inclination of ≈30°. A comparison of the results with and without c-axis correction for a specific angle is depicted in Fig. 2.12b and c.
b. Individual VING composite cell including 2 NWs
When 2 NWs were included in a single cell, several inclination possibilities were evaluated. To simplify the study, only inclinations in the xz-plane were considered. A first series of simulations was conducted with only one inclined NW (Fig. 2.13a). The absolute value of the electric potential at the top electrode as a function of the inclination is shown in Fig. 2.13b. In general the results are similar to the previous case with a single NW, with an optimal inclination close to 5°. The same behavior is observed if the inclination angle is defined along the yz-plane instead of the xz-plane (Fig. 2.13).
A second set of simulations was conducted by inclining both NWs of the cell, only at small angles between 0° and 12° in the xz-plane, in the different situations described in Table 2.4. The results show no clear trend: in some cases the electric potential is slightly increased (increase of 1%) compared to the ideal situation (≈ 69 mV) and in some cases the potential is strongly decreased (reduction of 17%). More NWs per cell are needed to see whether a trend emerges. In order to verify the trend found with the simulations including 4 NWs, cells including 36 and 64 ZnO NWs were simulated (Fig. 2.15). The absolute value of the electric potential at the top electrode in both cases is close to 59 mV, representing a reduction of 14% from the ideal case. We expect that these simulations better represent the real trend because of the greater number of NWs composing actual devices. Only small inclination angles (<12°) have been considered; for this reason, and because of the increasing number of NWs, these simulations were made without correcting the c-axis of the NWs, so the results slightly underestimate the electric potential generated (by an estimated 8%). As a conclusion, the approach of considering a small number of NWs in a composite cell is only applicable if the NWs are vertical, providing the maximum electric potential. If real cases with slightly inclined NWs are considered, a large number of NWs in the cell is required to obtain appropriate simulation results. These simulations also show that integrating slightly inclined piezoelectric NWs would reduce the electric potential generated by as much as 14% compared to the integration of vertical NWs. [START_REF] Tao | Will composite nanomaterials replace piezoelectric thin films for energy transduction applications? Future Trends in Microelectronics: Journey into the Unknown[END_REF]
Table 2.4 Output electric potential (absolute value) of cells including two inclined NWs.
II.2.2 FEM simulations for VING with intrinsic ZnO NWs
To study the performance of the VING structure, we conduct computational simulations of nano-composites based on intrinsic ZnO NWs, in both compression and flexion modes. The matrix materials and the ZnO properties are varied, and the results are compared with conventional thin-film generators.
II.2.2.1 Material parameters for ZnO NWs
The material parameters of ZnO presented in section II.2.1 are measured on ZnO thin films. These parameters have been used for ZnO NWs in our modeling work because they are widely accepted, thanks to mature and convincing measurement techniques [START_REF] Bateman | Elastic moduli of single crystal zinc oxide[END_REF][START_REF] Carlotti | Acoustic investigation of the elastic properties of ZnO films[END_REF][START_REF] Ashkenov | Infrared dielectric functions and phonon modes of high-quality ZnO films[END_REF]. However, as 1D materials, NWs usually have parameters that differ from the bulk material, including the dielectric constant, the piezoelectric coefficient, the stiffness constants, etc.
a. Dielectric constant of ZnO NWs
Yang et al. [START_REF] Yang | Size Dependence of Dielectric Constant in a Single Pencil-Like ZnO[END_REF] used scanning conductance microscopy (SCM) to measure the dielectric constant of a single pencil-like ZnO NW with diameters ranging from 85 nm to 285 nm (Fig. 2.16a). As the diameter decreases, the dielectric constant of the ZnO NW was found to decrease from 6.4 to 2.7, which is much smaller than the bulk ZnO value of 8.9.
b. Piezoelectric coefficient (d33) of ZnO NWs
Zhao et al.
used PFM to measure the effective piezoelectric coefficient (d33) of an individual (0001)-surface-dominated ZnO nanobelt lying on a conductive surface [7]. Using bulk (0001) ZnO and x-cut quartz as references, the effective piezoelectric coefficient d33 of the ZnO nanobelt (360 nm in width and 65 nm in thickness) was found to be frequency dependent and varied from 14.3 pm/V to 26.7 pm/V, which is much larger than that of bulk (0001) ZnO, 9.93 pm/V (Fig. 2.16b).
c. Young's modulus and stiffness constant of ZnO NWs
The size effect of the Young's modulus of ZnO NWs has been discussed in section II.1.3. Combined computational-experimental results showed that the Young's modulus increases significantly when the diameter drops below 50 nm. On the other hand, the stiffness is also related to the surface status of the ZnO NW. Since those results are not fully validated, we still use the mechanical parameters of the ZnO thin film in our simulations. To sum up, in the rest of section II.3, we will use κ = 2.7 and d33 = 26.7 pm/V as the "nano dielectric constant" and "nano piezoelectric coefficient" of ZnO NWs.
d. Material parameters of matrix materials
In our simulations, different matrix materials have been used. Their Young's modulus, Poisson ratio and dielectric constant are listed in Table 2.5.
II.2.2.2 VING working under compression (intrinsic ZnO NWs)
a. Model description
For the VING in compression mode, the core functional part of the simplified model (named the NG cell) is one NW surrounded by matrix material, where the input pressure on the top surface is equal to the one applied to the whole structure (Fig. 2.17). The lateral surfaces are constrained by symmetry boundary conditions, which represent identical NG cells surrounding the target one. In this case, the NW is immersed in a layer of PMMA, which protects it from the electrical leakage or short circuits that could occur because of the semiconducting properties of the ZnO NW, and which also gives the device robustness. Then, the top and bottom surfaces are defined as electrodes to harvest the electrostatic energy generated. When the device is compressed, part of the input mechanical energy is stored inside the core piezoelectric NW, and is then converted into electric energy through the direct piezoelectric effect. Finally, the electric energy is driven out by the external circuit (not shown here). The initial cell size is 100 nm × 100 nm × 750 nm, with NW radius R = 25 nm and length L = 600 nm. The size is varied with the changing geometry ratio (NW diameter/cell width). The compressive pressure is 1 MPa. The electrical and piezoelectric properties of ZnO are varied respectively to study their influence. As mentioned above, the mechanical properties show a great scatter, so we do not consider the variation of the ZnO elastic modulus in the current simulations.
b. Effect of material properties of ZnO NWs
Fig. 2.18a shows the absolute value of the output potential varying with the previously defined ratio. The thin-film model generated a potential of around 10 mV. When the model of the composite NG cell was considered, the potential increased by 8 times (at ratio = 0.4) even though the NW was assumed to have the same properties as the thin film [START_REF] Bateman | Elastic moduli of single crystal zinc oxide[END_REF][START_REF] Carlotti | Acoustic investigation of the elastic properties of ZnO films[END_REF][START_REF] Ashkenov | Infrared dielectric functions and phonon modes of high-quality ZnO films[END_REF]. The simulation results correspond to Ronan Hinchet's modeling of the same NG cell [5].
The changing trends are the same and the optimum also appears at ratio = 0.4 in his work. There is a slight difference in the absolute values due to the selection of the material parameters. The enhancement is due to the soft PMMA matrix, which concentrates the strain inside the ZnO. Besides, because of the 3D dielectric losses and the deviation of the strain field around the NWs, a smaller size ratio improves the storage of mechanical energy in the NG but reduces the electrical energy stored. In contrast, a high size ratio increases the capacity to store electrical energy in the NG, at the expense of reducing the mechanical energy storage (Fig. 2.18b). The trade-off between these two contradictory effects produces a potential curve with a peak at ratio = 0.4. Since the dielectric constant decreases with the NW radius [START_REF] Yang | Size Dependence of Dielectric Constant in a Single Pencil-Like ZnO[END_REF], the electric energy loss through the top insulating layer is smaller in the NG cell. As the ratio increases, the dielectric constant becomes the major factor influencing the output potential. As a result, the potential of the NG cell with the nano dielectric constant keeps increasing with the size ratio. For the cell with the nano piezoelectric coefficients, the potential curve follows the changing trend of the cell with thin-film properties, but is increased by roughly two times. Finally, the potential of the NG cell with NW properties is enhanced by 22 times compared to the thin-film model and by 2.4 times compared to the NG cell with thin-film properties. The electric energy was calculated from the potential and the equivalent capacitance of this structure. The maximum shifts to a higher geometry ratio because of the larger capacitance (Fig. 2.18c). We defined the energy conversion ratio (efficiency) as
energy conversion ratio = (generated electric energy / input mechanical energy) × 100%
With the dielectric constant and piezoelectric coefficient of ZnO NWs, the energy conversion ratio is increased by as much as 5.9 times (Fig. 2.18d); this magnification reaches a factor of 212 compared to the ZnO thin film. Besides, there is also a shift of the optimum geometry ratio compared to the NG with thin-film properties when the nano piezoelectric constant and the nano dielectric constant are considered separately, while this shift is negligible for the NG combining both the NW piezoelectric and dielectric properties.
II.2.2.3 VING working under bending (intrinsic ZnO NWs)
As mentioned above, the VING can also work in flexion mode. Here we extend the study to the flexion mode using a thin plate as mechanical transducer. The hydrostatic pressure generates a force bending the membrane, compressing the NW from the sidewalls. Under strain, the NW active layer of the VING structure generates a potential difference which can be used to power an external circuit by means of a capacitive displacement current.
a. Model description and working principle
The VING structure working in flexion mode was integrated on flexible metallic foils (25 μm), acting also as bottom electrode, bent by a hydrostatic pressure as a doubly clamped plate (Fig. 2.19a). The VING structure is the same as when it is integrated on Si wafers: NWs grown on a seed layer are embedded in a matrix sandwiched between electrodes (Fig. 2.19b). Similarly, only one NG cell of the device was considered (Fig. 2.19c), with appropriate boundary conditions for fast and reliable FEM modeling. When the device membrane is bent as shown in Fig. 2.19a, the NG cell is compressed laterally and extended along the c-axis (Fig. 2.19c).
Since the NW/PMMA layer is far from the neutral plane, we assume that the cell has a quasi-constant strain at its sidewalls along the z direction when the device is bent by a hydrostatic pressure (100 Pa). The initial cell size is 100 nm × 100 nm × 750 nm, with NW radius R = 25 nm and length L = 600 nm. The size varies with the changing geometry ratio (NW diameter/cell width). The ZnO material parameters are varied as in the compression mode and their effects are discussed. Then different matrix materials are used, based on the simulation using the ZnO nano properties. In all studies, a simulation with a ZnO thin film is included as a reference. The main issue compared to the previous studies on the compression mode remains how the energy is transferred and converted. In the core cell, the energy conversion mechanism is divided into 3 steps [START_REF] Hinchet | Design and guideline rules for the performance improvement of vertically integrated nanogenerator[END_REF]: mechanical energy transfer, mechanical-to-electrical energy conversion and, finally, electrical energy transfer to the output circuit. In the first step, the total input mechanical energy ξ is considered to be composed of two parts: one part (ξ1) is the energy stored in the seed layer and the top insulating layer, and the other (ξ2) is the energy stored in the NW/PMMA compound. The energy that reaches the NW is the one used in the piezoelectric transfer process (ξNW). Thus, the mechanical energy transfer efficiency is expressed as ηm = ξ2/(ξ1 + ξ2), where the fraction of ξ2 actually reaching the NW is supposed to be constant. The major influencing factor is therefore the ratio of ξ1 to ξ2, whose reciprocal increases with (w1·E_NW)/(d·E_PMMA), where w1 and d are geometrical parameters (see Fig. 2.20a) and E is the Young's modulus. The energy conversion efficiency of the second step depends on the NW's electromechanical properties and is proportional to the square of the piezoelectric strain constant, e33² (Fig. 2.20b) [START_REF] Hinchet | Design and guideline rules for the performance improvement of vertically integrated nanogenerator[END_REF]. The third step brings dielectric losses to the device, with an efficiency (ηe) defined as 1/(1 + L1·ε2eq/(L2·ε1)), where L1 and L2 are geometrical parameters (Fig. 2.20).
b. Effect of material properties of ZnO NWs
The output potential of the NG cell with thin-film properties increases with the size ratio (Fig. 2.21a). This increase is fast when the geometry ratio is low, due to the increasing amount of piezoelectric functional core. The potential variation then tends to stabilize as a result of more functional core and a larger equivalent dielectric constant. Finally, the increase of the ZnO fraction overcomes the other effects and further enhances the potential. NG cells with nano properties show different behaviors in this case. The NG cell with only the nano piezoelectric coefficients generates a higher potential at a geometry ratio around 0.4. On the other hand, the NG cell with the nano dielectric constant presents a potential monotonically increasing with the geometry ratio. Since the nano dielectric constant of ZnO is smaller than that of PMMA, the electric energy loss decreases as the quantity fraction of ZnO increases (Fig. 2.21c). Under the combined influence of the nano piezoelectric and dielectric constants, the NG cell shows a changing trend similar to the one using thin-film properties, but with an increase of 3.5 times. Since the volume ratio of matrix material in the whole plate is small, the influence on the strain energy is negligible (Fig. 2.21b).
As a result, the energy conversion ratio follows the changing trend of the electric energy and reaches an optimum value at ratio = 0.5 for the NG with NW properties (Fig. 2.21d).
c. Effect of matrix material
Although using the "nano" properties improves the energy conversion and potential generation of the VING in flexion mode, according to the simulation results the performance is still not as good as that of ZnO thin-film generators. However, the performance of this composite material can be further improved by using different matrix materials. The analysis focuses on the displacement, strain tensor and electric potential distributions. The stiffness of the matrix material results in two different types of displacement and strain distributions. When a soft matrix material (PMMA) is used, the matrix is more compressed than the NW, as indicated by the displacement distribution (Fig. 2.22a). The strain tensor distribution (Fig. 2.22b) shows that the strain is concentrated in the matrix material instead of the NW. The opposite results from using a hard matrix (Al2O3) (Fig. 2.22c), where the strain is concentrated in the NW (Fig. 2.22d). This is consistent with the mechanical energy transfer efficiency (ηm): as the Young's modulus of the matrix material increases, the efficiency increases. More mechanical energy is transferred into the core NW by using a hard matrix, which influences the final potential and energy generation. As a result, the generated potential becomes higher than that of the thin film starting from a ratio of 0.3 - 0.5, and reaches a value 1.5 - 2.5 times larger than that of the ZnO thin film (Fig. 2.23a). Since the improvement is mainly due to the increase of the input strain and the reduction of the electric energy loss, the potential and the electric energy both increase with the geometry ratio. More details were discussed in a previous modeling study on a similar NG cell with ZnO NWs but using thin-film properties [132]. To clarify the energy transfer process, the Young's modulus, Poisson's ratio and relative permittivity of the matrix materials were varied independently. The former two parameters mainly influence the mechanical energy transfer and slightly affect the piezoelectric energy transfer. The relative permittivity affects the electrical energy transfer to the output circuit. The mechanical and electrical parameters of real matrix materials are listed in Table 2.5 in section II.3.2.1. First, the effects of the Young's modulus and of the Poisson's ratio on the potential generation were compared (Fig. 2.24a). The curves were plotted against one parameter while the two others were kept equal to those of PMMA. As the Young's modulus spans from 3 GPa (PMMA) to 400 GPa (Al2O3), the potential increases from 0.8 V to 4.4 V. In contrast, varying from 0.17 to 0.40, the change of Poisson's ratio only brings a potential difference of 1 V. The relationship between the relative permittivity and the electric potential is more complex. In fact, the relative permittivity of the matrix material is not the direct factor that influences the electrical energy transfer process: the efficiency ηe decreases with increasing εeq of the matrix/NW compound, and εeq is not a linear combination of the permittivities of the NW and of the matrix. As a result, the potential curve reaches a maximum value when the permittivity is close to 2.5 (Fig. 2.24b). Considering that the relative permittivity of real matrix materials varies from 2.09 (SiO2) to 9.7 (Si3N4), the resulting potential variation is less than 0.5 V.
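The permittivity dependence discussed above can be illustrated by reading the third-step efficiency of section II.2.2.3 as a series-capacitance voltage divider between the active NW/matrix layer and the top insulating layer. The thicknesses and the equivalent permittivity of the active layer below are assumed values; the comparison only illustrates why a high-permittivity top layer such as Si3N4 limits the dielectric loss.

```python
# Sketch: third-step (electrical transfer) efficiency read as a series-capacitance
# divider, eta_e = 1 / (1 + L_top * eps_active_eq / (L_active * eps_top)).
# Layer thicknesses and the equivalent permittivity of the active layer are assumed.
def eta_e(L_top, eps_top, L_active=650e-9, eps_active_eq=4.0):
    return 1.0 / (1.0 + (L_top * eps_active_eq) / (L_active * eps_top))

for name, eps_top in (("PMMA-like top layer", 3.0), ("Si3N4 top layer", 9.7)):
    print(f"{name}: eta_e ~ {eta_e(100e-9, eps_top):.2f}")
```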
In these simulations, the effect of the Young's modulus is thus more significant than the effects of the Poisson's ratio and of the relative permittivity.
II.2.3 Placement of the top electrode for VING in flexion mode
A preliminary mechanical simulation of the bending of a doubly clamped membrane showed that the strain in the active layer is negative in the middle of the membrane (in compression) and positive in the rest of the membrane (in extension) (Fig. 2.25). The potential generated in such a condition, without considering the top metallic contact, follows the strain: positive in the compressive part and negative in the extensive part. In real conditions, a metallic contact is necessary to provide an electrical connection to an external circuit. When this contact is included in the simulation (floating potential condition), such a device generates a potential close to 0 V because the positive potential compensates the negative one. Thus, this structure is not appropriate for a transducer. We came up with an improved structure, presented in Fig. 2.26. It consists of a thin layer of piezoelectric material integrated only in the central part of the membrane. In this case a positive potential is generated (Fig. 2.28a), which can be used by an electrical load. Adding a metallic contact on top of the whole structure creates an equipotential condition that reduces the voltage generated to 50% of the maximum potential generated without contact. This is further improved by adding the metallic layer only in the region where the piezoelectric material is integrated, which increases the potential by 30% (see Fig. 2.28a) and thus increases the energy conversion efficiency. In another case, we tried to collect the potential from both the compressive and the extensive regions of the membrane. This design includes a piezoelectric layer on those regions, as illustrated in Fig. 2.27. The voltages generated at the side electrodes (see Fig. 2.28b) can be added using a series connection in order to boost the potential and the global energy conversion efficiency. This simulation also showed that reducing the length of the piezoelectric layer by 2 times towards the maximum of strain increases the voltage 1.2 times, but also reduces the equivalent capacitance of the structure 2 times, thus reducing the overall energy produced. The new structure with 3 electrodes (lateral electrode length L = 2 mm) increases the generated voltage up to 2 times and increases the conversion efficiency up to 12 times compared to the structure with the single piezoelectric layer in the middle of the membrane. The reason for this improvement is the increase of both the global capacitance of the device and the voltage.
II.3 Coupling of piezoelectric and semiconducting properties in NGs
Initially, linear mechanical and piezoelectric NW and NG models were built to evaluate the NW capabilities as functional elements for emerging electromechanical devices within self-powered nanosystems. However, many piezoelectric NWs are semiconductors, such as ZnO, GaN and GaAs NWs. Under these circumstances, the semiconducting properties need to be considered, because they are responsible for screening effects that influence energy harvesting and sensing.
II.3.1 The screening effect in an individual NW
Although intrinsic III-N and ZnO are generally considered as insulators because of their moderately wide band gap, in fact those NWs are doped during the synthesis by defects and impurities, either purposefully for achieving certain functionalities or accidentally through the growth mechanism [134][START_REF] Gao | Compensation mechanism in N-doped ZnO nanowires[END_REF][START_REF] Lee | Depletion width engineering via surface modification for high performance semiconducting piezoelectric nanogenerators[END_REF][START_REF] Sinha | Synthesis and enhanced properties of cerium doped ZnO nanorods[END_REF][START_REF] Kim | Electrical transport properties of individual gallium nitride nanowires synthesized by chemical-vapor-deposition[END_REF][START_REF] Fan | Very low resistance multilayer Ohmic contact to n-GaN[END_REF]. Semiconducting properties need to be included in the simulations of piezoelectric NWs, since the dopants contribute to the conductivity of the specimen and therefore influence the electromechanical response. Gao and coworkers investigated the behavior of free charge carriers in a bent piezoelectric n-type ZnO NW under thermodynamic equilibrium conditions by introducing donors into the model [START_REF] Gao | Equilibrium potential of free charge carriers in a bent piezoelectric semiconductive nanowire[END_REF]. Gauss's law was combined with the mechanical equilibrium and the direct piezoelectric effect, as the free charge carriers redistribute in the electric field established by the polarization. The authors studied the distributions of the piezoelectric potential, free electron concentration and ionized donor concentration at different doping levels (0.6 × 10¹⁷ cm⁻³ ≤ N_D ≤ 2.0 × 10¹⁷ cm⁻³), assuming a flat Fermi level and a homojunction between the ZnO NW and the substrate. The results (shown in Fig. 2.29) lead to the conclusion that when an n-type ZnO NW is bent, the compressive side preserves its negative voltage, while the positive potential of the tensile side is partially screened by free electrons. The charge carriers accumulate at the tensile side and the compressive side is largely depleted. Besides, in order to compare with the situation where ZnO is considered as an insulator without any free charge carriers, the authors made a high-temperature approximation (T = T_high = 300000 K) and obtained a result consistent with their former work in ref [START_REF] Gao | Electrostatic potential in a bent piezoelectric nanowire. The fundamental theory of nanogenerator and nanopiezotronics[END_REF]. Similar effects were also examined in Araneo's simulations [141]. ZnO NWs of cylindrical and truncated conical shapes with typical doping levels were laterally bent by small input forces of 442 nN. For the cylindrical NWs, the piezopotential was increasingly screened as the doping level increased and, at the same time, the asymmetry between the small reduction of the negative voltage and the larger reduction of the positive voltage increased. Unlike the cylindrical NWs, the output piezopotential of a doped conical NW was comparable to, or even higher than, that of a purely dielectric NW when the doping level was around the typical effective donor concentration of 10¹⁷ cm⁻³.
Figure 2.29 Distributions of the piezoelectric potential, free electron concentration and ionized donor concentration in a bent n-type ZnO NW, for different donor concentrations 0.6 × 10¹⁷ cm⁻³ ≤ N_D ≤ 2.0 × 10¹⁷ cm⁻³. Reprinted with permission [START_REF] Gao | Equilibrium potential of free charge carriers in a bent piezoelectric semiconductive nanowire[END_REF].
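The accumulation/depletion picture can be reproduced qualitatively with an equilibrium Boltzmann estimate. The donor density and the local potential below are assumed, illustrative values; the enormous density obtained on the positive side simply signals that, self-consistently, that side cannot sustain the full unscreened potential, which is exactly the partial screening described above.

```python
import numpy as np

# Equilibrium Boltzmann sketch (assumed values): local electron density where the
# bending piezopotential raises (+phi) or lowers (-phi) the electrostatic potential.
kT = 0.0259            # eV at room temperature
n0 = 1e17              # cm^-3, assumed donor/electron density
for phi in (+0.3, -0.3):   # V, order of the bending piezopotential quoted earlier
    n = n0 * np.exp(phi / kT)
    side = "tensile (positive) side" if phi > 0 else "compressed (negative) side"
    print(f"{side}: n ~ {n:.1e} cm^-3")
```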
In their work, Romano et al. [6] performed a theoretical and finite element analysis of a vertically compressed NW under equilibrium conditions, considering a finite electrical conductivity. The simulation model was composed of a cylindrical ZnO NW, grounded and fixed at its base, compressed by a uniaxial force along the z axis and surrounded by air (Fig. 2.30). The far-field boundary conditions were a conductive thin film at the bottom of the NW and zero electric field far away from the NW. The authors computed the output piezopotential while varying the donor concentration, the geometry, the input force and the surrounding medium. The results showed that the output potential was screened by free electrons and that the reduction of the piezopotential depended on the donor concentration. The influence of the dielectric medium was also discussed: for a NW with a low doping level (e.g. N_D = 1 × 10^16 cm^-3), the reduction of piezopotential induced by the small dielectric constant of a given medium (e.g. PMMA) was negligible, while for a close-to-intrinsic NW the reduction could be significant. Generally, the screening effect resulting from the free charge carriers can be useful for sensing applications, as reported in [START_REF] Zhao | Biomolecule-adsorption-dependent piezoelectric output of ZnO nanowire nanogenerator and its application as self-powered active biosensor[END_REF][START_REF] Nie | The conversion of PN-junction influencing the piezoelectric output of a CuO/ZnO nanoarray nanogenerator and its application as a room-temperature self-powered active H₂S sensor[END_REF]. However, it decreases the energy generated by the piezoelectric NW. Efforts have therefore been made to reduce its influence by manipulating the depletion region. In Romano's work, the authors calculated the depletion region in the longitudinal direction, where the depletion width was much smaller than the length of the NWs. Under the full depletion approximation, there is no voltage drop outside the depletion region, which relaxes the requirement on the length of the NWs. Their results provide important guidelines for the design of high-efficiency PENGs. However, we wanted to re-examine these results in view of our own device configuration.

II.3.2 The screening effect in VINGs

To study the screening effect in the NGs, we introduced it into the FEM modeling using the FlexPDE software. We modify Eq. 2.5 by coupling it with the Poisson equation for semiconductors,

∇·D = ρ_e (2.12)
ρ_e = q (p − n + N_D − N_A) (2.13)

where ρ_e is the total charge density, n is the electron carrier density, p is the hole carrier density, N_A is the acceptor density, N_D is the donor (dopant) density and q is the elementary charge. With Eq. 2.8, we obtain the coupled equations of piezoelectric and semiconductor physics,

∇·(c S) + ∇·(e^T ∇φ) = 0
∇·(ε ∇φ) − ∇·(e S) = −q (p − n + N_D − N_A) (2.14)

where S is the strain, c the stiffness tensor, e the piezoelectric tensor, ε the permittivity and φ the electric potential. In our simulations, the ZnO NW is considered an n-type semiconductor; Eq. 2.14 is therefore further simplified by setting p and N_A equal to zero,

∇·(c S) + ∇·(e^T ∇φ) = 0
∇·(ε ∇φ) − ∇·(e S) = −q (N_D − n) (2.15)

Similar to what happens in an individual NW, the piezopotential generated by a NG cell is also influenced by the screening effect. A 2D NG cell model representing a nanowall generator under compression was simulated while the doping concentration of the ZnO NW was varied over several orders of magnitude (Fig. 2.31a). The piezopotential collected on the top of the structure is shown in Fig. 2.31b.
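To give an order of magnitude for the screening described by Eq. 2.15, the short sketch below compares the polarization charge appearing at the top face of a compressed ZnO NW with the charge that free electrons and ionized donors can supply along its length. This is only a back-of-the-envelope check: the piezoelectric and elastic constants are typical literature values for ZnO, and the 1 MPa load and 600 nm length are illustrative values, not parameters taken from the FEM model above.

    # Back-of-the-envelope screening estimate for a compressed ZnO NW (cf. Eq. 2.15).
    # e33 and c33 are typical literature values for ZnO; stress and length are illustrative.
    q    = 1.602e-19   # elementary charge [C]
    e33  = 1.2         # piezoelectric constant [C/m^2] (literature value, assumed)
    c33  = 210e9       # elastic constant [Pa] (literature value, assumed)
    T_zz = 1e6         # applied axial stress [Pa]
    L    = 600e-9      # NW length [m]

    S_zz      = T_zz / c33          # axial strain (uniaxial estimate)
    sigma_pol = e33 * S_zz          # bound polarization charge at the top face [C/m^2]
    n_sheet   = sigma_pol / q       # equivalent sheet density of elementary charges [1/m^2]

    for N_D_cm3 in (1e15, 1e16, 1e17):          # donor concentrations [cm^-3]
        N_D   = N_D_cm3 * 1e6                   # convert to [m^-3]
        depth = n_sheet / N_D                   # depth needed to supply the screening charge
        status = "<" if depth < L else ">"
        print(f"N_D = {N_D_cm3:.0e} cm^-3 : screening depth ~ {depth*1e9:5.1f} nm "
              f"({status} NW length {L*1e9:.0f} nm)")

Even at 10^15 cm^-3, the charge available within a few tens of nanometres below the top face is enough to compensate the piezo-induced sheet charge, which is why free-carrier screening suppresses the potential at all realistic doping levels unless the NW is depleted.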
At the lowest doping levels of the sweep, the domain of the NW was fully depleted and the polarization charges existed without being screened by free electrons, and only weakly by the few fixed charges from ionized dopants. In this case, the piezopotential was close to that of the same NG structure integrating an intrinsic ZnO NW. As the doping level increased, the ionized dopants started to screen the polarization charges; at intermediate doping levels the piezopotential was already reduced to a quarter of the intrinsic value, and at the highest doping levels it was almost fully screened by both free charges and fixed dopants.

II.4 Surface Fermi level pinning (SFLP) effect

According to the analytical and modeling studies of the screening effect in NGs, a real NG device should have severe problems supplying electric power, since the standard doping levels of ZnO NWs fall well within the range where screening dominates. However, several critical issues remain, because analytical and experimental results contradict each other. Chief among them are: (1) the decent, length-dependent performance measured for ZnO NWs [4], whereas analytical and computational studies predicted that the output of NWs under compression should be reduced to a few millivolts and be length-independent because of the screening effect [5,6]; (2) enhanced piezoelectric coefficients [7], measured for ZnO NWs with diameters beyond the range where size effects are anticipated by ab initio methods [8]; and (3) the dissymmetric piezoelectric response of NGs under tensile and compressive strain [9]. One possible explanation for these issues is the Surface Fermi Level Pinning (SFLP) effect.

II.4.1 SFLP effect on semiconducting NWs

When the periodic structure of a crystal lattice is terminated at a surface, electronic states specific to the surface are created. True surface states have wave functions that are peaked near the surface plane and decay in amplitude away from it, with typical energies inside the gap between the valence band and the conduction band. Since the total number of states is conserved, these surface states emerge from the conduction and valence bands, whose "gap" is connected in the imaginary k-plane. The nature of a surface state (its orbital character and symmetry) varies smoothly between that of the conduction band (for instance 4s-like) and that of the valence band (for instance 3p-like). In imaginary k-space there must therefore be a crossover point where donor-like and acceptor-like states switch over to one another. If the Fermi level lies above this energy, electrons accumulate in these states; if it lies below, holes accumulate. For the surface to remain locally charge-neutral, the Fermi level must lie at this crossover point: it is pinned there, hence the name Fermi level pinning, and the crossover point (the charge neutrality level) sets the pinning position. This effect commonly exists at the surface of III-V and II-VI semiconductor compounds [START_REF] Chakrapani | Electrochemical pinning of the fermi level: Mediation of photoluminescence from gallium nitride and zinc oxide[END_REF][START_REF] Lüth | Solid Surfaces, Interfaces and Thin Films[END_REF][START_REF] Van Weert | Large redshift in photoluminescence of p -doped InP nanowires induced by Fermi-level pinning[END_REF].
At the surface of ZnO NWs, adsorbed oxygen molecules become negatively charged ions by capturing free electrons from the n-type ZnO, and they form a low-conductivity depletion layer near the surface in which the screening effect is suppressed [START_REF] Lüth | Solid Surfaces, Interfaces and Thin Films[END_REF][START_REF] Kind | Nanowire ultraviolet photodetectors and optical switches[END_REF][START_REF] Li | Oxygen sensing characteristics of individual ZnO nanowire transistors[END_REF][START_REF] Li | Electronic transport through individual ZnO nanowires[END_REF][150][151]. The depletion phenomenon has been investigated on the cross-section of the NW. As shown in Fig. 2.32a, for a NW with a large radius the region close to the lateral wall is depleted while a neutral core remains in the center, whereas a NW with a small radius can be fully depleted. The critical radius for a fully depleted NW at N_D = 10^18 cm^-3 was calculated to be around 50 nm. Mouis et al. then provided a guideline on the relationship between NW size, doping level and depletion region, based on analytical work from a purely semiconducting point of view under the SFLP assumption [START_REF] Mouis | Title Materials Research Society (MRS) Spring Meeting & Exhibit[END_REF]. Fig. 2.32b presents the analytical depletion width as a function of the donor concentration and the radius of ZnO NWs. For a NW with a high doping level, the radius should be small to achieve full depletion of the cross-section; conversely, a NW with a large radius requires a lower doping level than a thin one. For instance, a ZnO NW with a radius of 25 nm would be fully depleted, and therefore less affected by the screening effect, if its doping level were lower than 5 × 10^18 cm^-3.

In the modeling, both intrinsic and n-doped ZnO NWs (N_D = 10^16 cm^-3 to 10^18 cm^-3) were investigated. To compare with the NG cell having a free Fermi level, two different situations were studied, where the Fermi level was pinned only on the top surface (for qualitative comparison with Romano's work) or on all surfaces (to consider a more realistic case). An equivalent surface charge was applied on the top surface, or on all surfaces, accordingly (Fig. 2.33).

In 2D axisymmetric cylindrical coordinates, the strain components are

S_rr = ∂u_r/∂r, S_θθ = u_r/r, S_zz = ∂u_z/∂z, S_rz = ∂u_r/∂z + ∂u_z/∂r (2.16)

where r, θ and z are the cylindrical coordinates and u_r and u_z are the displacements in the r and z directions, respectively. The stress components σ are then derived as

σ_rr = c_11 S_rr + c_12 S_θθ + c_13 S_zz + e_31 (∂φ/∂z)
σ_θθ = c_12 S_rr + c_11 S_θθ + c_13 S_zz + e_31 (∂φ/∂z)
σ_zz = c_13 S_rr + c_13 S_θθ + c_33 S_zz + e_33 (∂φ/∂z)
σ_rz = c_44 S_rz + e_15 (∂φ/∂r) (2.17)

If we consider a system in the mechanical static state (body force = 0), the governing equations are

(r direction): (1/r) ∂(r σ_rr)/∂r − σ_θθ/r + ∂σ_rz/∂z = 0 (2.18)
(z direction): (1/r) ∂(r σ_rz)/∂r + ∂σ_zz/∂z = 0 (2.19)

For the semiconductor Poisson equation, the piezoelectric polarization vector is

P = (P_r, P_z) (2.20)
P_r = e_15 S_rz (2.21)
P_z = e_31 S_rr + e_31 S_θθ + e_33 S_zz (2.22)

Then the Poisson equation can be written in the form

∇·(ε ∇V) − ∇·P = −q (N_D − n) (2.23)

In a static system, if we define the potential as V_0 when there is no mechanical input and as V when there is a mechanical input, the piezoelectric potential is obtained from

∇·(ε ∇V_0) = −q (N_D − n_0) (2.24)
∇·(ε ∇V) − ∇·P = −q (N_D − n) (2.25)
φ_piezo = V − V_0 (2.26)

where n_0 is the equilibrium electron density.

II.4.2.2 Electrical boundary conditions

The bottom of the NG cell is electrically grounded and an equipotential condition is applied to the top to simulate the electrode. In the modeling of SFLP, an equivalent surface charge σ_s is imposed on the interfaces between the NW and the matrix as

n·D = σ_s (2.27)

where n is the outward normal to the interface. Two assumptions can be used to calculate this equivalent surface charge.
a. Fermi level pinned at mid-gap

To simplify the simulation, the Fermi level is assumed to be pinned at mid-gap and described by an equivalent surface charge on the interface between the NW and the matrix. The surface potential is defined as

φ_s = E_g/(2q) − (k_B T/q) ln(N_C/N_D) (2.28)

where E_g is the band gap of bulk ZnO (3.37 eV), N_D is the doping concentration, N_C is the effective density of states of the conduction band, k_B is the Boltzmann constant and T is the room temperature. The equivalent surface charge can be approximated by its expression calculated under the planar assumption,

σ_s = −√(2 q ε N_D |φ_s|) (2.29)

which is used at the boundaries of the NW. The depletion width can then be estimated as

W = −σ_s/(q N_D) (2.30)

b. Surface trap density

In the second assumption, the surface charge is calculated from a given density of slow traps on the surface. For low trap densities, this assumption corresponds to a free Fermi level at the surface. For sufficiently high trap densities, the surface charge calculated from the traps becomes equivalent to a Fermi level pinned at mid-gap under the planar assumption when dealing with wide NWs. This second assumption has the advantage of being self-consistent when thin NWs are considered. The following simulation results were obtained using this assumption with a fixed trap density.

II.4.2.3 Mechanical boundary conditions

a. 2D axisymmetric NG cell under compression

The cuboid NG cell can be transformed into a cylindrical model in two ways (Fig. 2.35a and b). In order to avoid overlap with neighboring cells, the model in Fig. 2.35a is more appropriate for the 2D axisymmetric simulation. Some boundary condition problems come along with this transformation. As discussed in section II.3, for the 3D cuboid NG cell model under compression we introduce symmetry boundary conditions on the side walls, assuming that there is an identical cell on the other side, which is the case in periodic arrays. This symmetry boundary condition is no longer suitable for the cylindrical model, because only four lines (drawn in Fig. 2.35a) on the lateral wall of the new cell have zero displacement in the radial direction (r direction). On the other hand, the lateral wall is not a fully free surface either, because of the constraint of the outer imaginary cuboid. Two models were therefore simulated, with free and fixed lateral walls respectively. The comparison of the results showed a variation of the output piezopotential of less than 5%, which does not affect the trends (Fig. 2.36). In other words, there is some flexibility in this boundary condition, and we decided to use a mechanically free boundary condition for the NG cell under compression.

For the NG cell under bending, the modification is more significant. Here we assume a flexible NG disk with a fully clamped edge, bent by a hydrostatic pressure (Fig. 2.37a). In this case, we can prescribe a displacement over the whole lateral wall of the cylindrical NG cell located at the center of the disk. In section II.3.2.3 we analyzed the VING with intrinsic NWs under bending; in those simulations the input strain in the NG cell was considered constant. However, the strain component in the R direction (strain R) of the cell is not uniform (Fig. 2.37b): it actually varies linearly along the c-axis (Fig. 2.37c). For short NWs (L = 600 nm), the difference between the top and the bottom of the cell is about 5%. This value increases with the total length of the cell and can approach 20% in the model containing a long NW (L = 2 μm). Therefore, the input strain in the R direction is expressed as a linear function of the position z along the c-axis, S_R(z) = S_R0 + s·z.
To keep consistent with the deformation of the NG cell under compression, the input strain of bending NG is first considered to create an expansion in the R direction as shown in Fig. 2.37c, thus positive input strain/displacement. II.4.3 SFLP effect in VING under compression The NG cell model used in this section is similar to what has been used in section II.3, with a NW being embedded in PMMA matrix on a seed layer. The geometry ratio (the NW diameter/the NG cell width) was fixed at 0.5. The thickness of the top insulating layer and the seed layer were 100 nm and 50 nm, respectively. The radius and length of the NW were varied. The cell was compressed by a pressure of 1 MPa. II.4.3.1 NW with normal radius (R=100nm) SFLP effect creates depletion zone inside the NW extending from the interface towards the neutral center. The NW with a radius of 100 nm and a length of 600 nm was considered in the initial simulation, which is commonly obtained using chemical bath deposition (CBD). Fig. 2.38 shows qualitative maps of the polarization, potential and free electron carrier of NG cell under compression mode with SFLP on the top surface and on all surfaces, respectively. Three doping levels were chosen to present the changing trend. In the case of top SFLP, the NW was depleted from the top and the polarization in depleted zone played an important role in creating the output potential. The depletion width increases as the doping concentration decreases. As a result, the piezopotential was enhanced significantly at lower doping level. In the other case, where the equivalent surface charge was applied to all surfaces of the NW, the piezopotential was further improved. The depletion appeared not only on the top of the NW, but also on the lateral walls. At low doping level (10 16 cm -3 ), the NW was fully depleted, and therefore the potential was only screened by dopants. As the doping concentration increased, the neutral zone started from the center and extended to the lateral walls. Fig. 2.39a summarizes the piezopotential variation with the doping concentration for different SFLP conditions. As discussed in section II.4 previously, the screening effect would attenuate the potential generation even at quite low doping level. This was verified once again in the 2D axisymmetric model. We built a reference model by assuming the ZnO NW as an intrinsic semiconductor, thus only dielectric properties were considered. Compared to this reference, potential generated by the NG based on n-doped NW is reduced to 5% without any SFLP effect with N D = 10 16 cm -3 . Then we assumed that SFLP only existed on the top surface, and the potential was found out to be enhanced by 10 times at the same doping level. This magnification reached 15 times at the same doping level, if the Fermi level was also pinned at the lateral surface of the NW. Up to 10 17 cm -3 , the potential generated by the NW with SFLP was 50 times larger than the one with free Fermi level, and it accounted for 94% of the reference cell with insulating ZnO. Thus we find that the existence of SFLP largely expands the utilization of ZnO NWs with normal doping concentrations. The average length of ZnO NWs synthesized by the CBD method in our group was around 2 μm. Therefore we also run a series of simulations on the NG cell with longer NW (R = 100 nm, L = 2 μm). Fig. 2.39b shows the output piezopotential of the NG cell varying with the doping concentration of the NW. Apparently, the screening effect was more significant in a longer NW. 
Although the absolute values remained of the same order, the potential of the n-type NW with a free Fermi level was reduced by 98.6% compared to the insulating NW at low doping level. Similarly, for the NG with SFLP on all surfaces, the piezopotential remained equal to that of the insulating NW up to a doping level as high as N_D = 10^17 cm^-3. In the cases with a free surface Fermi level and with top SFLP, the output was the same for both lengths, consistently with Romano's report [6]. In contrast, with insulating ZnO the piezopotential increased with the length. When realistic doping levels are considered together with SFLP on all surfaces, the piezopotential is also found to be length-dependent and to increase with the length.

II.4.3.2 NW with small diameter

FEM simulations were also carried out for thinner NWs with 25 nm radius and 600 nm length (Fig. 2.40). For the free Fermi level and top SFLP cases, the piezopotential kept a consistently negative sign due to the negative polarization in the top zone. According to the relation between the depletion width and the surface charge, with SFLP on the lateral surface these NWs were nearly fully depleted from the sidewall at normal doping levels (N_D = 10^16 cm^-3 to 10^18 cm^-3). As a result, the potential generated with SFLP on all surfaces was almost equal to that obtained with the reference ZnO NW.

II.4.3.3 Geometry influence

When the length of the NW is increased while keeping all other parameters constant, the NG with top SFLP shows almost no variation in response (Fig. 2.41a); this has also been reported in Romano's work [6]. Increasing the length becomes interesting, however, for NGs with SFLP on all surfaces, because a large response is obtained even at high doping levels: Fig. 2.41b shows that the piezopotential is enhanced roughly threefold. On the other hand, an improvement can also be expected from a decrease of the NW radius. With top SFLP, the smaller the NW radius, the higher the piezopotential (Fig. 2.42a). With SFLP on all surfaces, a large response was obtained even at a doping level as high as 10^18 cm^-3 for thin NWs (Fig. 2.42b). Fig. 2.43 shows the effect of the NW radius more clearly. For a NW with a length of 2 μm, as long as the radius is small enough for the NW to be fully depleted, it can be considered as an insulating one, and the potential increases slowly as the radius decreases. Once the radius becomes large enough for a neutral core to form inside the NW, the potential decreases quickly. Therefore, decreasing the NW radius increases the maximum piezo-response but, more importantly, extends the range of acceptable doping levels towards higher values.

II.4.3.4 Comparison with the ZnO thin film

For reference, a ZnO thin film model with a thickness of 2 μm was also simulated in compression mode. For the ZnO thin film, the only interface where the SFLP effect exists is the top surface. As shown in Fig. 2.44, the NG cell with top SFLP had a slightly larger response than the thin film under the same pressure, because the strain in the NG cell is larger owing to the soft matrix material; however, the screening effect remained dominant. In contrast, with SFLP on all surfaces of the NW, the piezopotential was largely improved compared to a ZnO thin film in compression mode. At N_D = 10^17 cm^-3, the potential generated by the NG cell was as high as 7 times that generated by the thin film, while at N_D = 10^16 cm^-3 the potential was improved by a factor of 9.
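The geometry and doping trends described above can be rationalized with the planar-depletion estimates of section II.4.2.2. The sketch below evaluates Eqs. 2.28-2.30 for mid-gap pinning at a ZnO surface; the permittivity and the effective density of states are literature-type values assumed here, so the numbers should only be read as orders of magnitude.

    import math

    # Planar depletion estimate for a ZnO surface with the Fermi level pinned at mid-gap
    # (Eqs. 2.28-2.30). eps and N_C are literature-type values assumed for ZnO.
    q    = 1.602e-19            # elementary charge [C]
    kT_q = 0.0259               # thermal voltage at room temperature [V]
    Eg   = 3.37                 # ZnO band gap [eV]
    eps  = 8.66 * 8.854e-12     # static permittivity of ZnO [F/m] (assumed)
    N_C  = 3.7e18 * 1e6         # conduction-band effective density of states [m^-3] (assumed)

    for N_D_cm3 in (1e16, 1e17, 1e18):
        N_D   = N_D_cm3 * 1e6                                  # [m^-3]
        phi_s = Eg / 2 - kT_q * math.log(N_C / N_D)            # surface potential [V], Eq. 2.28
        sigma = -math.sqrt(2 * q * eps * N_D * phi_s)          # surface charge [C/m^2], Eq. 2.29
        W     = -sigma / (q * N_D)                             # depletion width [m], Eq. 2.30
        print(f"N_D = {N_D_cm3:.0e} cm^-3 : phi_s = {phi_s:.2f} V, W = {W*1e9:.0f} nm")

The resulting widths (roughly 40 nm at 10^18 cm^-3 and above 100 nm at 10^17 cm^-3) are consistent with the full depletion of the 25 nm radius NWs over the whole doping range considered and with the neutral core that appears in the 100 nm radius NWs at high doping.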
II.4.4 SFLP effect in VING under bending The NG cell model used in this section is the same as in compression mode. The geometry ratio (the NW diameter/the NG cell width) is also fixed at 0.5. The thickness of the top insulating layer and the seed layer is 100 nm and 50 nm, respectively. The radius and length of the NW are going to be changed. The input strain of the cell is calculated from a disk bending by a pressure of 100 Pa. With SFLP on all surfaces, the piezopotential reached 55 mV and kept this value as the doping level increased to 10 cm (Fig. 2.46). Within this range of doping levels, the NW was fully depleted. As the doping level continued to increase, the potential went through a slow decrease, which, unlike the wide NW, allowed NW with much higher doping concentration to be used as the generator. Besides, a similar effect of NW radius as in compression mode could also be observed. In the case of top SFLP, the model with the thinner NW generated higher piezopotential due to the strain that was concentrated more to the NW. In the case with SFLP on all interfaces, a smaller NW radius was able to enlarge the doping levels that could be used for NG devices. As a result, the SFLP effect had a more significant influence on the NW with a high aspect ratio. II.4.4.4 Non symmetric performance Until now, all the NG cells under bending were simulated with the pressure applied in the same direction as shown in Fig. 2.48a. Named after the strain state on the lateral wall of the cell, it is called stretching bending. Correspondingly, we have the other way to apply the bending pressure, which we called contractive bending (Fig. 2.48b). In fact, a non-symmetric performance exists in the NGs under contractive and stretching bending. In the model where R = 100 nm and L = 2 μm, by plotting the ratio between the contractive potential and the stretching potential, we found that it has been always over 1 (Fig. 2.48c). It means that there is a feedback influence of the surface charge on the potential generation. The negative charges on the surface can act as a contractive strain, confining the NG cell from the lateral extension. This feedback is weak when the doping concentrations is small, but it becomes obvious as the doping level increases. We will see in chapter III that a non-symmetry was also found in the experimental move. (c) II.5 Conclusion In this chapter, we mainly discussed analytical and computational modeling research on piezoelectric semiconducting NWs and NGs. In particular, ZnO NW is studied for its brilliant future in electromechanical application ever since 2006. The modeling investigation also started a decade ago as a powerful tool to design devices and guide the experimental activity. Most of the modeling work focused on individual NWs, developing from regarding the NWs as dielectric material to considering the screening effect of doped semiconductors. However, very few work had been done on the NW composites before our group. We started to investigate the piezoelectric behavior of VINGs based on ZnO NWs working under compression in 2010. Due to spontaneously doping during the synthesis, free carriers exist in ZnO NWs, which screen large part of the piezopotential generated. In the absence of SFLP, the piezopotential can be considered as fully cancelled for doping level higher than 10 cm , compared with the experimental doping level of 10 cm to 10 cm . Meanwhile, a few tens to a few hundred millivolts of output potential can still be observed in electromechanical measurements. 
R. Hinchet in our group worked on this field by coupling solid mechanics and electrostatic physics. He put forward an assumption that for thin NWs (R ≤ 25nm) the NG cell can be treated as dielectric material due to full depletion by SFLP, so that the only screening effect came from ionized dopants. In my thesis work, I reconsidered this assumption by taking SFLP into account in the framework of full coupling between semiconducting and piezoelectric equations. It turned out that the full depletion assumption did not hold for all dimensions and doping level. With SFLP on all surfaces, several observations from the simulation work fit the experimental results in the literature and are able to explain the contradiction which had been existing between the theory and experiments. Firstly, piezoelectric response of the NG cell did present a geometry dependency due to SFLP, indicating that better performance would come up with NWs with larger length and aspect ratio. Secondly, compared with ZnO thin film, NG cells generated higher potential, because they possessed more surfaces where SFLP contributed to decrease the screening effect. It offered a possible explanation to the higher piezoelectric coefficient experimentally measured for ZnO NWs. Finally, a non-symmetric performance of NG devices working in flexion mode was observed with SFLP in the simulation. A summary of major breakthroughs in modeling researches is presented below (Fig. 2.49). Chapter III Electromechanical Characterization of Piezoelectric Nanowires and Nanogenerators Although analytical and computational studies are helpful to analyze physical principles and provide guidelines for experiments, it is of prime importance to carry out the experimental work, with mechanical and electromechanical measurements, in order to assess our understanding of the operation mechanisms and to characterize the performance of piezoelectric NWs and NGs. This chapter starts with a short presentation of device fabrication. Vertical ZnO NW arrays were synthesized using chemical bath deposition and were then integrated to fabricate the NGs. After that, we give an overview of the methods that have been used to characterize the mechanical and electromechanical properties of piezoelectric NWs and NGs. We report on our measurement of the apparent Young's modulus and of the effective piezoelectric coefficients of semiconducting NWs using AFM-based techniques. We conducted as well electromechanical measurements on the rigid and flexible NGs introduced in Chapter III in order to evaluate their respective performance for energy generation. Finally, we end up with the observation of a dissymmetry, consistent with our simulation results, in the response of flexible NGs depending on bending direction. III.1 Synthesis of ZnO NWs and NGs The NGs under study were fabricated in the IMEP-LaHC cleanrooms. The process was adapted from the one previously developed during Ronan Hinchet's thesis, based on literature. Although some adjustments and variations were explored, our aim in this thesis was not to develop a new process but rather to have our own source of devices and to have the flexibility of changing technological parameters as required by our studies. In this work, ZnO NW arrays were grown on rigid p-type (100) Si wafer (500 μm) and flexible stainless steel foil (25 μm) using chemical bath deposition (CBD) method. 
We fabricated ZnO NW arrays with controlled morphology by adjusting the thickness of the seed layer, the precursor concentration, the temperature and the growth time. Then we designed the process to integrate the NWs into NGs. It was divided into three steps. First, the matrix and top layer material deposited. PMMA, Si 3 N 4 and Al 2 O 3 with controlled thickness were used for different NGs. This process aimed at finding the material that could provide best performance. Second, a 200 nm thick aluminum thin film was deposited on the top of PMMA by evaporation. The cross section view of the VING by this step is shown in Fig. 3.1a. The last step to build a working NG device was to package it with external electrodes and Cu wires. The top external electrode was an Al or Cu flat plate stuck on the evaporated Al thin film by silver paint to protect the surface from mechanical damage. The bottom external electrode was a conductive Al tape stuck on a glass plate and connected with the substrate by silver paint. Both external electrodes also offered a position to connect the wires so that the main specimen was kept flat. The VING integrated on a Si wafer could only work under compression (Fig. 3.1b), while the one on stainless steel foil could also work under flexion (Fig. 3.1c). More details about the NW morphology control, growth issue discussion and techniques for matrix deposition will be presented in Appendix II. III.2 Electromechanical measurement techniques for piezoelectric NWs and NGs III.2.1 Nanomechanical characterization methods Nano technologies have been used to investigate the mechanical properties of piezoelectric NWs, including MEMS in situ SEM/TEM [START_REF] Agrawal | Elasticity size effects in ZnO nanowires--a combined experimental-computational approach[END_REF][START_REF] Agrawal | Experimental-computational investigation of ZnO nanowires strength and fracture[END_REF][START_REF] Desai | Mechanical properties of ZnO nanowires Sensors Actuators[END_REF][START_REF] Bernal | Effect of growth orientation and diameter on the elasticity of GaN nanowires. 
A combined in situ TEM and atomistic modeling investigation[END_REF][START_REF] Brown | Tensile measurement of single crystal gallium nitride nanowires on MEMS test stages Sensors Actuators[END_REF], nanoindentation [START_REF] Huang | In situ nanomechanics of GaN nanowires[END_REF][START_REF] Feng | A study of the mechanical properties of nanowires using nanoindentation[END_REF][START_REF] Yang | Nanomechanical characterization of ZnS nanobelts[END_REF][START_REF] Li | Mechanical Properties of ZnS Nanobelts[END_REF], resonance techniques [START_REF] Chen | Size Dependence of Young's Modulus in ZnO Nanowires[END_REF][START_REF] Huang | In situ mechanical properties of individual ZnO nanowires and the mass measurement of nanoparticles[END_REF][163][164][165][START_REF] Gao | Higher-order harmonic resonances and mechanical properties of individual cadmium sulphide nanowires measured by in situ transmission electron microscopy[END_REF][START_REF] Gaevski | Non-catalyst growth and characterization of a -plane AlGaN nanorods[END_REF][START_REF] Kim | Determination of Mechanical Properties of Single-Crystal CdS Nanowires from Dynamic Flexural Measurements of Nanowire Mechanical Resonators[END_REF], AFM cantilever in situ SEM/TEM [169-173] and AFM based instruments [START_REF] Wen | Mechanical Properties of ZnO Nanowires[END_REF][START_REF] Ni | Young's modulus of ZnO nanobelts measured using atomic force microscopy and nanoindentation techniques[END_REF][START_REF] Chen | Mechanical elasticity of vapour-liquid-solid grown GaN nanowires[END_REF][177][START_REF] Ni | Elastic modulus of single-crystal GaN nanowires[END_REF][START_REF] Xiong | Force-deflection spectroscopy: A new method to determine the young's modulus of nanofilaments[END_REF]. These measurement methods can be sorted according to the loading mode, namely uniaxial tension or compression, and bending-based, where bending (either static or in resonance) and buckling are employed. Uniaxial methods operate by applying a controlled deformation at one end of the NW, while measuring the load at the other end [START_REF] Agrawal | Elasticity size effects in ZnO nanowires--a combined experimental-computational approach[END_REF][START_REF] Agrawal | Experimental-computational investigation of ZnO nanowires strength and fracture[END_REF][START_REF] Desai | Mechanical properties of ZnO nanowires Sensors Actuators[END_REF][START_REF] Bernal | Effect of growth orientation and diameter on the elasticity of GaN nanowires. A combined in situ TEM and atomistic modeling investigation[END_REF][START_REF] Brown | Tensile measurement of single crystal gallium nitride nanowires on MEMS test stages Sensors Actuators[END_REF]. Strain is usually measured by imaging of the sample, for example using in situ an electron microscope (SEM/TEM). Bending and buckling methods are usually easier to implement but data interpretation is more complex. Controllable nanostructure bending is achieved by atomic force microscopy (AFM), which also provides measurement of force, either in the lateral [START_REF] Wen | Mechanical Properties of ZnO Nanowires[END_REF] or vertical directions [START_REF] Ni | Young's modulus of ZnO nanobelts measured using atomic force microscopy and nanoindentation techniques[END_REF] or by a nano-manipulator pushing the NW until it buckles [START_REF] Xu | Mechanical properties of ZnO nanowires under different loading modes[END_REF]. III.2.2 Characterization of piezoelectricity in NWs The piezoelectricity is an electromechanical coupling. 
There are two ways to characterize piezoelectric materials, through the direct or the reverse effect. The most common methods for characterizing direct piezoelectricity in nanostructures involve bending or stretching the material with in situ measurement of the generated charge or electric potential. This is challenging because the charges or voltages tend to be small, which requires ultra-sensitive electronics. As an example of such a measurement, charge generation from a suspended, doubly clamped single-crystal BTO NW has been studied under a periodic tensile mechanical load; this approach provides a controlled experimental method for measuring the direct piezoelectric effect in nanostructures.

Conversely, the measurement of the reverse piezoelectric effect in nanostructures has mostly been performed by means of piezoresponse force microscopy (PFM) [181][START_REF] Kolosov | Nanoscale visualization and control of ferroelectric domains by atomic force microscopy[END_REF][183][START_REF] Wang | One-dimensional ferroelectric monodomain formation in single crystalline BaTiO3 nanowire[END_REF][START_REF] Wang | Ferroelectric and piezoelectric behaviors of individual single crystalline BaTiO3 nanowire under direct axial electric biasing[END_REF]. This method takes advantage of the scanning and cantilever-deflection measurement capabilities of the AFM. Fig. 3.3 schematically shows a set-up for probing the 3D piezoelectric tensor of a single c-axis GaN NW from the work of Jolandan et al., where scanning probe microscopy (SPM) was used to exploit the reverse piezoelectric effect [186]. The set-up consists of an AFM system, extended electronics including a lock-in amplifier, and a function generator (Fig. 3.3a). The NW was measured using two methods. In the first (Fig. 3.3b), the NW lies on an insulating surface with its two ends clamped by electric contacts, which are used to apply an axial electric field; in this configuration, the field is generated by applying a voltage across the two contacts. Twist of the cantilever measured the axial displacement, corresponding to the strain ε_33, and bending of the cantilever measured the out-of-plane displacement, corresponding to the strains ε_11 and ε_22, giving access to d_33 and d_31, respectively. In the second configuration (Fig. 3.3c), the NW was placed on a conductive substrate acting as an electrical ground. A voltage was applied between a conductive AFM probe and the grounded substrate to induce a transverse electric field in the NW; torsion of the cantilever then measured the displacement along the NW axis, corresponding to the shear strain, to obtain d_15. (Caption of Fig. 3.3: (b) an AC voltage is applied between the two electrodes, resulting in an axial electric field, with the long axis of the NW placed perpendicular to the AFM cantilever; (c) the NW lies on a Si substrate coated with a conductive Au layer, the electric field is applied between the tip of the conductive AFM probe and the grounded substrate, and torsion of the cantilever measures the induced shear strain, allowing identification of d_15 [186].)

III.2.3 Characterization techniques at IMEP-LaHC

Our group introduced an improved AFM-based method allowing the direct bending of cantilevered NWs without using clean-room techniques to prepare the as-grown samples [START_REF] Xu | An improved AFM cross-sectional method for piezoelectric nanostructures properties investigation: application to GaN nanowires[END_REF]. This method (Fig. 3.4) allows a precise selection of a NW, including its state before and after the experiment.
The application of a controlled force at a specific location of the NW allows us to combine mechanical and piezoelectric measurements. A high-input-impedance preamplifier is required to improve the voltage measurement accuracy. The detailed working principle is discussed in the next sections. We also built a set-up to characterize the piezoelectric performance of NG devices under compression: with precise force control, the relationship between the applied force and the generated potential can be established. This is detailed in section III.4.

III.3 Mechanical and electromechanical characterization of individual NWs using AFM techniques

III.3.1 Young's Modulus measurement of piezoelectric NWs

The method used to measure the Young's modulus of NWs derives from the solid mechanics of a beam clamped at one end. When scaling down to the nanoscale, some variables, such as the deflection of the beam, can no longer be measured directly. We therefore rework the Euler-Bernoulli equation, which describes the relationship between the deflection of the beam and the applied load.

III.3.1.1 Physical principle of Young's modulus measurement

The scheme in Fig. 3.5 shows the principle of the Young's modulus measurement with the AFM probe. The substrate is placed perpendicular to the AFM sample holder (z direction), so that a NW grown vertically on it lies along the x direction. The force is applied in the z direction by the AFM tip. The manipulation process contains two steps: first, a topographic scan of the sample surface is made to locate the object; second, a small force (~10 nN) is used to control the deflection of the object. The governing equation of a cantilever beam is

E I d²Δz(x)/dx² = M(x) (3.1)

where E is the Young's modulus, Δz(x) describes the deflection of the beam in the z direction at position x, I is the area moment of inertia and M is the bending moment. For positions within the strained region (0 ≤ x ≤ L), the bending moment is

M(x) = −F (L − x) (3.2)

To solve this equation, the boundary conditions are

Δz = 0 and dΔz/dx = 0 at the clamped end (x = 0) (3.3)
M = 0 and dM/dx = 0 at the free end (3.4)

where L is the distance from the clamped end to the point where the force F is applied. The deflection of the beam for 0 ≤ x ≤ L is then

Δz(x) = F x² (x − 3L) / (6 E I) (3.5)

and the absolute deflection at the force position is

|Δz_b| = F L³ / (3 E I) (3.6)

The force F is given by the product of the deflection Δz_c of the AFM cantilever and its spring constant k (shown in Fig. 3.5),

F = k Δz_c (3.7)

In the measurement, we fix the total deflection Z (the deflection of the AFM cantilever and of the beam combined) to a constant value, so that the deflection of the beam is

|Δz_b| = Z − Δz_c (3.8)

Finally, we obtain the following formula for the Young's modulus:

E = (k L³ / (3 I)) × 1/(|Δz_b| / Δz_c) (3.9)

III.3.1.2 Calibration of a polycrystalline silicon beam as a reference

To verify our approach for the Young's modulus measurement, a NEMS nano-switch consisting of a source (S), two gates (G), two drains (D) and a cantilever made of a polycrystalline Si beam was used in a first test (Fig. 3.6a). The top-view SEM image (Fig. 3.6b) presents the structure of the nano-switch. The Si beam is a rectangular block with a width a of 450 nm, a depth b of 900 nm and a total length L_tot of 20 μm, with an area moment of inertia I = a b³/12. The measurement of the Young's modulus uses the AFM tip as the force source.
By fixing the total deflection (deflection of the AFM cantilever and of the Si beam combined) and monitoring the deflection of the AFM cantilever, we can calculate the force applied to the beam and hence the Young's modulus at different positions along the beam. Fig. 3.7 shows the AFM topographic image of the Si beam and the positions where the force was applied (red arrows). The point marked by the black arrow was selected as the reference "zero point", because it is located at the fixed end, where the deflection remains zero under load. The effective length of the Si beam is the distance from this "zero point" to the other points. As the effective length L increased, the deflection of the AFM cantilever decreased while the deflection of the Si beam increased, because of the reduction in rigidity (Fig. 3.8a). To obtain the spring constant of the Si beam, we plotted the Δz/F curves for each point and applied a linear fit to them (Fig. 3.8b). According to Eq. 3.9, we then plotted the Young's modulus as a function of the force position along the Si beam (Fig. 3.9a). The Young's modulus increases along the beam and then flattens out, stabilizing around 150 GPa as the tip position approaches the free end. This value is in the range reported in the literature: the Young's modulus of polycrystalline Si films varies from 120 GPa to 200 GPa depending on the measuring method [START_REF] Schneider | Non-destructive characterization and evaluation of thin films by laser-induced ultrasonic surface waves[END_REF][START_REF] Tabata | Mechanical property measurements of thin films using load-deflection of composite rectangular membranes[END_REF][190][191]. One reasonable explanation of the variation of E along the beam is that the Si beam is over-etched at the fixed end during the fabrication process, so that the actual effective length L' is larger than the measured effective length L (Fig. 3.9b). This effect has a larger influence when the measurement is done near the fixed end, resulting in an underestimated Young's modulus. In Fig. 3.9a, a correction based on this over-etching problem is displayed as the red dashed line. Therefore, we only considered points belonging to the remaining three-quarters of the beam. As we can see, this Young's modulus measurement method is suitable for beam structures with a large aspect ratio.

III.3.1.3 Young's modulus of long GaAs NWs

As demonstrated above, when the AFM technique is used to measure the Young's modulus of micro-beam structures, the results are close to literature values. However, the limitation of this method is also evident: the aspect ratio of the beam (length over radius) should be large. In addition, the accuracy of the geometry measurement becomes a significant factor influencing the extracted Young's modulus as the tested structures scale down to the nanoscale. Here we apply this method to long GaAs NWs grown on Si wafers by our colleagues (J. Penuelas) from the Institut des Nanotechnologies de Lyon (INL) of Ecole Centrale de Lyon (ECL) using the solid-source molecular beam epitaxy (ss-MBE) method. SEM images show the morphology of the long GaAs NW arrays (Fig. 3.10a): the length is about 8 μm and the diameter varies from 160 nm to 220 nm. The AFM scanned a small region of the cross-section containing several GaAs NWs (Fig. 3.10b). Here we consider the NW as a cylindrical beam with I = π d⁴/64, where d is the diameter of the NW.
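As an illustration of the data reduction behind Eq. 3.9, the sketch below converts an AFM cantilever-deflection reading into a Young's modulus for a cylindrical NW using I = πd⁴/64. All numerical inputs are hypothetical values of the right order of magnitude, not the measured data of Figs. 3.11-3.13.

    import math

    # Young's modulus of a singly clamped cylindrical NW from an AFM bending test (Eq. 3.9).
    # All inputs below are hypothetical, order-of-magnitude values (not measured data).
    k      = 2.0        # spring constant of the AFM cantilever [N/m] (assumed)
    d      = 200e-9     # NW diameter [m] (assumed)
    L      = 4e-6       # effective length from the clamped end to the force position [m] (assumed)
    Z      = 100e-9     # imposed total deflection [m] (assumed)
    dz_tip = 13e-9      # measured AFM cantilever deflection [m] (assumed)

    I     = math.pi * d**4 / 64           # area moment of inertia of a cylindrical beam
    dz_nw = Z - dz_tip                    # NW deflection, Eq. 3.8
    F     = k * dz_tip                    # applied force, Eq. 3.7
    E     = F * L**3 / (3 * I * dz_nw)    # Young's modulus, Eqs. 3.6 / 3.9

    print(f"F = {F*1e9:.0f} nN, NW deflection = {dz_nw*1e9:.0f} nm, E = {E/1e9:.0f} GPa")

With these example numbers the extracted modulus comes out around 80 GPa, i.e. the order of magnitude expected for GaAs.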
The deflection of the AFM cantilever is proportional to the total deflection (Fig. 3.11b and 3.12b). Since point 0 is at the root of the NW, the AFM cantilever deflection is equal to the total deflection at this position; the difference between the cantilever deflection at point 0 and at each other point is then the deflection of the NW. With Eqs. 3.7 to 3.9, we plot the curves of NW deflection versus bending force (Fig. 3.11c and 3.12c). By applying a linear fit to these curves, we obtain the factor |Δz_b|/Δz_c needed to calculate the Young's modulus. The Young's modulus of the long GaAs NWs is plotted in Fig. 3.13: it increases with the length L and saturates at around 80 GPa, while the value for bulk GaAs is 85.5 GPa [192]. This confirms the validity of our approach.

III.3.2 Piezoelectric response of individual NWs

To investigate the properties of piezoelectric free-standing nanostructures, appropriate tools as well as convenient methods are required. Wang et al. provided an approach to study the piezoelectric properties of nanostructures using AFM [3]: they plotted a potential map as the AFM tip swept across vertically aligned ZnO NW arrays. In this method, the deformation of the NW cannot be precisely controlled, and damage may be done to the NWs. Our electromechanical measurements on individual NWs with controlled force were realized by improving the existing set-up. Building on the Young's modulus measurement, an electrical circuit including an amplifier and an oscilloscope was introduced to monitor the potential drop between the NW and the substrate while the NW is bent [START_REF] Xu | An improved AFM cross-sectional method for piezoelectric nanostructures properties investigation: application to GaN nanowires[END_REF].

III.3.2.1 Principle of electromechanical measurement on individual NWs

a. Experimental set-up

The experiments were carried out at room temperature and atmospheric pressure using an AFM (with a high-performance Digital Instruments Dimension 3100 controller from Veeco) and a high-frequency oscilloscope (Fig. 3.14a). The circuit of the whole device is shown schematically in Fig. 3.14b. When the NW is bent, the strain-induced polarization creates an electric field across the NW and charges accumulate at the surface. Since the substrate of the NWs is electrically grounded, the potential generated by the NW can be measured. As the samples to be tested (GaN, GaAs and ZnO NWs) are wide-band-gap semiconductors and a Schottky contact exists between the tip and the NW, a voltage preamplifier is required to improve the voltage measurement accuracy.

b. Theoretical piezoelectric response of an individual bending NW

The theoretical analysis of the bending of an individual piezoelectric NW has been studied by many researchers, and details are discussed in Chapter II. As a reference for our piezoelectric measurements by AFM, the following expression should be considered [START_REF] Gao | Electrostatic potential in a bent piezoelectric nanowire. The fundamental theory of nanogenerator and nanopiezotronics[END_REF],

φ_max(T,C) = ± (3/(4 ε_0 (1 + ε_r))) [e_33 − 2(1 + ν) e_31 − 2ν e_15] (R³/(E I)) F (3.10)

where φ_max(T) is the maximum potential at the surface of the NW on the tensile (T) side, with a positive sign, and φ_max(C) is the maximum potential on the compressed (C) side, with a negative sign; F is the lateral bending force, R the NW radius, I the area moment of inertia, E the Young's modulus, ν the Poisson ratio, ε_r the relative dielectric constant and e_33, e_31, e_15 the piezoelectric constants. If we define the effective piezoelectric coefficient under bending as

e* = e_33 − 2(1 + ν) e_31 − 2ν e_15 (3.11)
Eqs. 3.10 and 3.11 can then be combined and simplified as

φ_max(T,C) = ± (3/(π ε_0 (1 + ε_r))) (e*/(E R)) F (3.12)

For a given NW, if the dielectric constant, the Young's modulus and the geometry factors are known, the equations above can be written as

φ_max(T,C) = ± α* F, with α* = 3 e*/(π ε_0 (1 + ε_r) E R) (3.13)

For n-type semiconductors, a Schottky barrier is the potential energy barrier for electrons which builds up at a metal-semiconductor junction. It is proportional to the difference between the metal work function Φ_M and the semiconductor electron affinity χ,

φ_B ≈ Φ_M − χ (3.14)

However, for real semiconductor surfaces there are additional energy states at the surface, because the perfectly periodic lattice ends there and many bonds are not satisfied. These states can have a very high density and create a narrow distribution of energies within the band gap. The nature of these metal-induced gap states and their occupation by electrons tend to pin the Fermi level at a specific position in the band gap; this is the metal-induced Fermi level pinning effect. When the density of surface states is high, as it typically is, the potential barrier that develops is dominated by the location of the surface states in the semiconductor band gap rather than by the work function of the metal,

φ_B ≈ f E_g − k_B T ln(N_C/N_D) (3.15)

where f is a factor describing the position of the pinned Fermi level within the band gap and N_C is the effective density of states of the conduction band. In our AFM measurements, the probe of the AFM tip was coated with PtSi (details in Appendix III).

III.3.2.2 GaN NWs grown by the MBE method

The GaN NWs that we tested were prepared by our colleagues (R. Songmuang) at the Néel Institute in Grenoble. Fig. 3.15 shows the SEM and AFM images of the undoped GaN NW arrays. The diameter of the GaN NWs was 150 nm to 200 nm, while the length ranged from 600 nm to 800 nm. The white crosses in Fig. 3.15b mark the positions where the AFM tip contacts the NW and applies a force on it. During the measurement, the ramp force applied at the three points was controlled by the piezo-response system of the AFM, and the maximum value was set to a constant (~2.3 μN). The saturation piezopotential generated by the single GaN NW changed from 0.84 mV to 0.6 mV as the force position approached the root (the fixed end) (Fig. 3.16a): as the tip moved from point 1 to point 3, the effective length of the NW decreased, the deflection of the NW decreased as well, and this resulted in a lower potential. Taking the potential signal of point 1 as an example, we analyzed the relationship between the potential and the force (Fig. 3.16b). When the applied force was smaller than 480 nN, the oscilloscope could not acquire any piezoelectric response; on increasing the force further, the potential increased proportionally. This threshold results from the Schottky barrier between the GaN surface and the PtSi coating of the AFM probe. The electron affinity of GaN is 4.1 eV [193]. Assuming that the PtSi coating has a (001) orientation, its work function is 4.96 eV [194]. Using these values, the analytical barrier height is 0.86 eV, while experimental results give a value of around 0.83 eV for the PtSi-GaN contact [START_REF] Liu | Thermally stable PtSi Schottky contact on n-GaN[END_REF]. This means that in our experiment, before any piezoelectric signal can be detected, the NW must first generate enough potential to overcome this barrier. In the GaN NWs, this is reached for an applied force larger than 400 nN. Fig. 3.18a presents the quasi-linear relationship between the maximum deflection of the NW and the force, in agreement with Eq. 3.6, which states that the maximum deflection is proportional to the bending force.
Fig. 3.18b shows the measured potential as a function of deflection and force. A threshold was also observed in this case, although at a lower force of 54 nN (NW deflection of about 4.5 nm). To estimate the potential barrier in this case, we considered that the electron affinity of GaAs is 4.07 eV [START_REF] Sze | Physics of Semiconductor Devices[END_REF] and that the band gap is 1.27 eV at 300 K [START_REF] Shenai | Optimum semiconductors for high-power electronics[END_REF]. As for GaN, the surface states dominate the potential barrier. According to Eq. 3.15, with f = 0.6-0.7, the theoretical φ_B is about 0.76 eV, which is consistent with the measurements. Since φ_B for GaAs is smaller than for GaN, a lower force was needed before a piezoelectric signal could be acquired.

The total deflection being controlled by a ramp, the tip deflection increased linearly with the total deflection (Fig. 3.20a). The relationship between the NW deflection and the force was deduced from the cantilever deflection and spring constant (Fig. 3.20b). At small deflections, the potential generated by the ZnO NW was larger than that of the ZnO/PZT NWs, although its saturation value was only 20% of the latter (Fig. 3.20c). Beyond the threshold, the ZnO/PZT NWs produced a larger potential than the ZnO NWs for the same bending force (Fig. 3.20d).

The ZnO/BTO core-shell NWs were fabricated by the same CVD method and in the same group as the ZnO/PZT NWs. First, ZnO NWs were grown on a Si substrate by CBD; the sample was then cut into two parts, a BTO coating was deposited on one of them and the remaining part was kept as a reference. As shown in Fig. 3.21a and b, the diameter of the ZnO NWs varied from 70 nm to 190 nm, whereas the typical diameter of the ZnO/BTO core-shell NWs was about 300 nm, with a wide distribution from 200 nm up to 500 nm. The typical lengths of the ZnO and ZnO/BTO NWs were 3 μm and 4 μm, respectively. Target NWs were located by AFM topographic scanning and then bent by the cantilever at a chosen position (Fig. 3.21c and d). The electrical response is plotted in Fig. 3.21e and f. When the ramp force was applied to the NW repeatedly, potential pulses were generated by both samples. The maximum force applied was 120 nN for the ZnO NWs and 300 nN for the ZnO/BTO NWs. The maximum output of the ZnO NWs was around 6.5 mV, while that of the ZnO/BTO NWs could reach more than 30 mV.

The BTO shell was deposited in an attempt to improve the piezoelectric response of the ZnO NWs. To compare the piezoelectric behavior of the two kinds of NWs, two individual NWs were chosen as representative. From the relationship between cantilever deflection and total deflection (Fig. 3.22a), we verified that the NW deflection was proportional to the bending force (Fig. 3.22b). Since the BTO shell coats the ZnO NW, the area moment of inertia increases significantly with the enlarged radius (I ∝ R⁴). Therefore, for the same applied force, the deflection of the ZnO NW was much larger than that of the ZnO/BTO NW. However, for the same deflection, the ZnO/BTO NW generated a larger potential (Fig. 3.22c).

An experiment consisting in bending one ZnO/BTO NW at the same position with different forces was carried out to verify the force threshold. Despite the already mentioned reproducibility issue, it can be observed that the saturation potential decreased significantly as the force was reduced from 320 nN to 250 nN (Fig. 3.23a and b). When the force approached 200 nN, the measured potential was close to zero. This is consistent with the threshold observed above.
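Before comparing the different NWs, it is useful to evaluate the effective bending coefficient of Eq. 3.11 for a bare ZnO NW. The piezoelectric constants and the Poisson ratio used below are typical literature values for bulk ZnO, not values extracted from our measurements, so the result is only indicative.

    # Effective piezoelectric coefficient under bending (Eq. 3.11) for a bare ZnO NW.
    # e33, e31, e15 and nu are typical literature values for bulk ZnO (assumed).
    e33 = 1.22    # [C/m^2]
    e31 = -0.51   # [C/m^2]
    e15 = -0.45   # [C/m^2]
    nu  = 0.35    # Poisson's ratio

    e_eff = e33 - 2 * (1 + nu) * e31 - 2 * nu * e15   # Eq. 3.11
    print(f"effective bending coefficient e* = {e_eff:.2f} C/m^2")

The bending configuration mixes the axial and transverse responses, so e* comes out roughly twice e_33 alone; a stiff, strongly piezoelectric shell can raise it further, which is consistent with the higher coefficients reported below for the core-shell NWs.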
b. Comparison among different piezoelectric NWs

Fig. 3.25 summarizes the results of the previous measurements and compares the piezoelectric response of the different NWs. GaN, GaAs and ZnO NWs are compared in Fig. 3.25a. ZnO NWs with a cylindrical cross-section came from the Institut des Nanotechnologies de Lyon and are referred to as ZnO cylindrical; ZnO NWs with a hexagonal cross-section were fabricated in our group and are referred to as ZnO hexagonal. The ZnO NWs clearly generated a higher potential than the GaN and GaAs NWs. The threshold of these potential curves is related to the Schottky barrier height of the NW-PtSi contacts and to the piezoelectric coefficients: the higher the barrier, the larger the threshold. Correspondingly, the piezoelectric behavior of the ZnO-based NWs was also modified by depositing different shells, and even by the NW morphology (Fig. 3.25b).

A further analysis of the effective piezoelectric coefficients under bending requires the material properties and the geometry. Table 3.1 lists the Young's modulus and dielectric constant of GaN, GaAs, ZnO, BTO and PZT reported in the literature; these values were measured on thin-film samples. From these parameters we deduced the effective Young's modulus and the effective dielectric constant of the core-shell NWs using a linear combination of the properties of each material according to their volume ratio. The radius R was measured by AFM. The factor α* defined in Eq. 3.13 was then calculated using E, ε_r and R (Table 3.2). The effective piezoelectric coefficient under bending e* shown in Table 3.2 was calculated to evaluate the electromechanical properties of these NWs (Fig. 3.26 plots the values of Table 3.2 on a log scale). The best effective piezoelectric coefficients were observed for the ZnO/BTO and ZnO/PZT core-shell NWs; in general, the core-shell NWs had a larger e* than the homostructure NWs. On the other hand, compared with the GaAs and ZnO NWs, these core-shell NWs clearly had a larger threshold force, indicating that they are not sensitive to small forces.

III.4 Piezoelectric response of NGs under compression

Although the mechanical and electromechanical measurements on individual NWs helped us to understand their material properties and piezoelectric response, the performance of integrated NWs, namely NGs, is different, and specific measurement configurations and set-ups are needed to characterize the NG devices. In this section, the characterized devices are named after their serial number; only the parameters required for the discussion are mentioned, while the other details are listed in Appendix II.

III.4.1 Electromechanical measurement at room temperature

We built a system to measure the piezoelectric response of the NG while accurately controlling the compressive force applied to it. The set-up works at room temperature.

III.4.1.1 Measurement set-up and working principle

Fig. 3.27 presents the measurement set-up that we developed to measure the output potential of rigid NGs under compression. The main components are the actuator, the sample holder, the force load and the force sensor (Fig. 3.27a). The sample holder is supported and moved by a PC-controlled linear actuator, whose position can be adjusted manually in three directions (x, y and z). Above the sample holder, a ceramic rod acts as the force load to apply pressure on the sample during the measurement. The ceramic rod is in contact with a force sensor, which measures the force applied between the rod and the sample holder (or the sample on it).
The electrodes of the sample are connected to the voltage, current and charge preamplifiers via a transmission unit. The output electrical signal of the preamplifier is then converted into a digital signal by an analog-to-digital converter (ADC) connected to a PC. In addition, two software tools based on the LabVIEW platform were developed to control the actuator and to acquire the force and voltage signals.
As the strain variation stops, the flowing charges start to compensate the polarization charges inside the NWs, which results in an exponential decay of both the voltage and current signals. This is similar to a capacitor discharging into a resistive load. Finally, when the force is released, the polarization charges change again and a charge transfer process occurs.
In our measurement, the sample holder was connected to the actuator through a sphere joint (see Fig. 3.29a). Its purpose was to adjust the perpendicularity of the applied force on the sample, but its presence produced an overshoot of the compressive force (see Fig. 3.30a). The force release, however, does not present this overshoot and is well controlled by the actuator. For this reason, the voltages and currents reported in sections III.4 and III.5 were taken from the response to force release.
b. Instantaneous power
III.4.1.3 Piezoelectric response of NGs under compression at room temperature
The performance of NG devices working at room temperature was studied first. The open circuit voltage and short circuit current were measured to characterize the NG devices. The output power was also calculated using a variable resistive load.
a. Effect of fabrication parameters: top PMMA layer and precursor concentration
The thickness of the top insulating layer played an important role in determining the NG performance. To keep the functional devices undamaged, we could not measure the thickness of the top layer on a cross-section SEM image. We therefore deduced the thickness by combining measurements on reference devices with the theoretical thickness vs. spin speed parameters given in the data-sheet. The reference devices were prepared with the same PMMA deposition method, but they were cut to obtain the cross-section images. Table 3.3 shows the distribution of the potential generated by different NGs according to the PMMA thickness and the applied force. In general, NGs with densely packed NWs (geometry ratio larger than 0.8, obtained with a precursor concentration of 50 mM) had a lower performance in potential generation. This is consistent with the modeling results reported previously in our group [5,[START_REF] Tao | FEM modeling of vertically integrated nanogenerators in compression and flexion modes 10th Conference[END_REF]]. Meanwhile, because PMMA features a low hardness and a small dielectric constant, the control of the top PMMA layer thickness is critical. With too thin a layer, a short circuit forms between the ZnO NWs and the top electrode and reduces the performance. Conversely, thick layers decrease the capacitance and also reduce the performance, although for a different reason. This is what is observed in Fig. 3.31, where the generated piezopotential is plotted as a function of the PMMA thickness for a given force of 1.5 N.
b. RC circuit analysis
Graton et al. proposed an equivalent circuit model of the VING that takes into account the effect of the polymer matrix on the mechanical and electrical behavior of the generator (Fig. 3.32a) [205].
In their approach, several approximations on the elastic and electric fields allow the NW/polymer composite to be considered as an effective homogeneous medium whose physical properties are expressed in terms of φ, the volume fraction of piezoelectric material, and 1 − φ, the volume fraction of polymer. The electrical branch containing the output voltage and current (right part of the circuit) is an RC circuit with a characteristic capacitance $\bar{C}$ and a resistance R. With our measurement set-up, the electrical load is the impedance of the voltage amplifier when we measure the open circuit voltage. We can also estimate $\bar{C}$ according to the equation given by Graton et al. [205],
$\bar{C} = \dfrac{\bar{\varepsilon}\, A}{L}$ (3.17)
where the effective permittivity $\bar{\varepsilon}$ is expressed in [205] (Eq. 3.18) as a combination, weighted by φ, of the dielectric constants, the piezoelectric coefficients e and the stiffness components c of the ZnO NWs and of the polymer. Here A is the surface of the electrodes and L is the length of the NWs (i.e. the thickness of the NW/PMMA composite layer); the superscript s denotes the properties of the ZnO NWs and the overline denotes effective parameters. For our NG samples there is an extra insulating layer on top of the composite, which can be considered as a capacitance connected in series with $\bar{C}$. Therefore we add
$\dfrac{1}{\bar{C}_{\mathrm{eff}}} = \dfrac{1}{\bar{C}} + \dfrac{t_{\mathrm{top}}}{\varepsilon_{\mathrm{PMMA}}\, A}$ (3.19)
where $t_{\mathrm{top}}$ is the thickness of the top PMMA layer and $\varepsilon_{\mathrm{PMMA}}$ its permittivity. With a NW density equal to 2 × 10⁹ cm⁻² and an average radius equal to 60 nm, we get φ = 0.23. Since the Young's modulus of PMMA is in the range of 1.8 GPa to 3 GPa, we calculate the effective capacitance of NG sample In24B as
$\bar{C}_{\mathrm{eff}} = 5.9 \sim 8.6 \times 10^{-10}\ \mathrm{F}$ (3.20)
The effective capacitance can also be evaluated from the measured voltage. For a NG device working under compression, once the strain is changed, a displacement current flows through the electrical load. As shown in Fig. 3.32b, the voltage follows an exponential decay,
$V(t) = V_0 \exp\!\left(-\dfrac{t}{R\,\bar{C}_{\mathrm{eff}}}\right)$ (3.21)
$\ln\!\left(\dfrac{V}{V_0}\right) = -\dfrac{t}{R\,\bar{C}_{\mathrm{eff}}}$ (3.22)
The slope k = −33.58 s⁻¹ of Eq. 3.22 is measured from the curve in the inset of Fig. 3.32b. The impedance of the voltage amplifier is R = 100 MΩ, so the effective capacitance is
$\bar{C}_{\mathrm{eff}} = -\dfrac{1}{kR} = 3.0 \times 10^{-10}\ \mathrm{F}$ (3.23)
Both the theoretical and the experimental effective capacitances of the NG are of the order of 10⁻¹⁰ F. The difference may result from the assumptions made in the analytical calculation: the analytical model deals with a NG made of a lattice of equally spaced, vertically aligned NWs fully embedded in a polymer matrix, whereas in the real case the morphology and the alignment are not perfect and the PMMA matrix does not fill all the space in the lattice.
III.4.2 Electromechanical measurement with thermal cycles
Piezoelectric-material-based electronics are used in a plethora of industrial fields. The environments in which these devices are required to operate are becoming more demanding, with ambient temperatures being driven higher. For instance, one application where piezoelectric materials are used at higher temperatures is the innovative thermal energy harvesting approach put forward by Puscasu et al., which combines a first step of thermo-mechanical conversion by a bimetal oscillating between a hot and a cold surface and a second step of electromechanical conversion by a piezoelectric material snapped by the bimetal [START_REF] Puscasu | An innovative heat harvesting technology (HEATec) for above-Seebeck performance[END_REF]. It is therefore interesting to evaluate the effect of temperature on the performance of the piezoelectric NGs.
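Before moving on to the thermal-cycle measurements, the two capacitance estimates above (Eqs. 3.17–3.23) can be summarized in a short numerical sketch. Only the slope k = −33.58 s⁻¹ and the 100 MΩ amplifier impedance are taken from the measurement; the electrode area, layer thicknesses and permittivities below are placeholder assumptions, so the "series estimate" merely illustrates the reasoning of Eqs. 3.17 and 3.19 rather than reproducing the exact inputs of Eq. 3.20.

```python
# Hedged sketch of the RC analysis of the VING (section III.4.1.3.b):
# (i)  plate-capacitor estimate: composite layer in series with the top PMMA layer
# (ii) experimental estimate from the decay slope k of ln(V) versus t
EPS0 = 8.854e-12          # F/m

# --- (i) series-capacitance estimate (all values below are assumptions) ---
A        = 25e-6          # m^2, electrode area
L        = 2.0e-6         # m, NW / composite layer thickness
t_top    = 1.0e-6         # m, top PMMA layer thickness
eps_eff  = 5.0 * EPS0     # effective permittivity of the composite (Eq. 3.18)
eps_pmma = 3.0 * EPS0     # PMMA permittivity

C_composite = eps_eff * A / L                          # Eq. 3.17
C_top       = eps_pmma * A / t_top                     # top insulating layer
C_theory    = 1.0 / (1.0 / C_composite + 1.0 / C_top)  # Eq. 3.19, series combination

# --- (ii) experimental estimate from the exponential decay (Eqs. 3.21-3.23) ---
k     = -33.58            # 1/s, slope of ln(V) vs t (from the text)
R_amp = 100e6             # ohm, voltage amplifier impedance (from the text)
C_exp = -1.0 / (k * R_amp)

print(f"series estimate  : {C_theory:.2e} F")
print(f"from decay slope : {C_exp:.2e} F")   # ~3.0e-10 F, as in Eq. 3.23
```

Both estimates come out in the 10⁻¹⁰ F range, which is the consistency check made in the text; the residual gap between them is expected, given the idealized lattice assumed by the analytical model.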
III.4.2.1 Measurement set-up and working principle
To realize such a measurement, we refitted our set-up by adding a temperature control system. A thermoelectric (Peltier) module was inserted below the metallic sample holder (Fig. 3.33a) to heat the sample. The Peltier module was driven by a temperature controller with an external power supply (Fig. 3.33b). The West 4100+ microcontroller allowed the Peltier module to heat up or cool down the NG sample by switching the power supply on and off depending on the feedback of the temperature sensor stuck on the sample holder. Fig. 3.34a shows the connection of all these elements. The Peltier module of the heating system is sandwiched between two aluminum plates. The NG device is placed on the aluminum plate in contact with the hot side, while the sphere joint is under the aluminum plate in contact with the cold side (Fig. 3.34b).
III.4.2.2 Piezoelectric response of NGs with thermal cycles
The thermal study of the NG devices (Fig. 3.35a) was based on a thermal cycle including heating and cooling down. Within one thermal cycle, the device was heated from room temperature to several temperature plateau values (T₁, T₂ and T₃) and then cooled back to room temperature. At each step, the temperature was kept stable for a period long enough to measure the open circuit voltage, the short circuit current and the output power. Most of the study focused on NG No4. Different thermal cycles were performed on this device and its performance was evaluated.
First, the device NG No4 was subjected to thermal cycles with T₁ = 50 °C, T₂ = 80 °C and T₃ = 90 °C. The compression force was 2 N. We measured the open circuit voltage of this device before and after each thermal cycle. The initial potential was around 50 mV. During each thermal cycle, the potential improved as the temperature increased from T₁ to T₃. After each cycle, we recorded the potential once the device had completely cooled down. As shown in Fig. 3.35b, the potential generated by NG No4 was further improved by repeating the thermal cycles. By the end of the third cycle, the potential reached 500 mV, 10 times the initial value. The improvement induced by the thermal cycles was thus progressive. Meanwhile, we observed that this thermally induced enhancement had a long-term character: the open circuit voltage decreased to 290 mV three months after the thermal treatment and came back to the initial value (50 mV) after as long as six months.
Six months later we conducted another thermal treatment on the same device, with T₁ = 30 °C, T₂ = 50 °C and T₃ = 70 °C, and a force of 2 N. During the first thermal cycle, the open circuit voltage increased from 50 mV to 75 mV, a distinct improvement of 50% for V_oc (Fig. 3.36a). This was lower than in the first thermal treatment because the temperature of each step was also lower. This variation of the performance was also observed for other NG devices, such as NG In29A (Fig. 3.36b), for which V_oc increased by a factor of 7. According to our results, this thermally induced enhancement was repeatable and commonly found in the NGs.
III.4.2.3 Temperature influence on NG performance with electrical load
The piezoelectric response of NG No4 connected to an electrical load was also studied during the thermal cycle. Fig. 3.37 presents the measurement results of the first thermal cycle. Three steps of the thermal cycle were selected: before the cycle, at 70 °C, and after the cycle. Both the potential (Fig. 3.37a) and the instantaneous power (Fig.
3.37b) increased as the temperature rose from room temperature to 70 °C. After cooling down, the values were still larger than the initial ones. The optimum load of this device after one thermal cycle was around 50 MΩ.
III.4.3 Comparison between NGs and a PZT generator
As a mature piezoelectric ceramic, PZT is widely used for passive sensing, active transmitting and mechanical displacement applications. PZT is indeed a good traditional piezoelectric material, with a much larger piezoelectric constant than that of piezoelectric semiconductors. Commercial piezoelectric energy harvesters based on PZT in bulk or thin film form have thus been available for several years. However, PZT has several drawbacks: (1) it needs to be poled before use; (2) it is difficult to obtain as a thin film and is hardly compatible with CMOS processes; (3) it contains Pb, which is toxic. It is therefore interesting to compare our NG with a commercial PZT disk generator.
The voltage and power of NG No4 were measured three months after the first thermal treatment (see section III.4.2.2). Both the NG and the PZT disk generator were compressed by a 5 N force. As the resistive load was changed from a few kΩ to 1 GΩ, the voltage increased with the resistance and tended to saturate at the open circuit voltage (0.29 V). The instantaneous power was calculated from the potential across the load. Unlike the voltage, the instantaneous power presented a clear optimum of 13 nW at a load of 1 MΩ. Considering the volume of the NG, the power density could reach 85 μW/cm³ (Fig. 3.38a). Meanwhile, we also characterized the performance of the PZT disk generator as a reference. Fig. 3.38b shows the voltage and instantaneous power of this PZT device. The voltage followed the same trend as for the NG, but with a much larger saturation value of 2.4 V. On the other hand, although the PZT device delivered an optimum power of 2 μW at a load of 0.5 MΩ, its actual volume was much larger than that of the NG. The geometry, piezoelectric coefficient and power density of both generators are compared in Table 3.4. While the piezoelectric constant of PZT is 40 times larger than that of ZnO, the volume power density of the latter was 3.4 times higher owing to its small effective thickness.
Following these observations on NGs working under compression, we considered that it would be interesting to characterize the performance of flexible NGs under bending. The next section presents experimental results on this characterization.
III.5 Piezoelectric response of NGs under bending
VINGs integrated on flexible stainless steel foil can also work under bending. Measuring the potential with an accurate deflection control would give a better understanding of the physical principles; however, building such a set-up was beyond what could be addressed in the timeframe of this thesis. This section therefore presents our very first results, obtained manually. They give a basic understanding of the operation of flexible NGs and are very useful for further developments.
III.5.1 Measurement set-up of NGs under bending
During the measurement, the voltage signal generated by bending was acquired by an oscilloscope. Since the impedance of the oscilloscope (1 MΩ) is large compared to that of our flexible NGs (a few tens of kΩ), the measured voltage can be regarded as the output piezopotential of the NG. The NG device was bent by two methods.
In the first method, one edge of the plate was clamped and an airflow with a pressure of 0.5 psi was used to bend the device periodically at the other edge, with an approximate deflection of about 1 cm (Fig. 3.39a). In the second method, the device was bent manually, as shown in Fig. 3.39b, with a deflection of less than 1 cm. Fig. 3.39c and d present the piezoelectric response of the same NG to a periodically applied deflection, bent by the airflow and manually, respectively. Although the fluctuation of the voltage pulses was attenuated by the airflow bending, the maximum output potential of both measurements was around 20 mV.
Devices In31A, In32A, In35A and In20B were grown with the same process and parameters. Different PMMA deposition procedures were then selected to build the matrix and the top insulating layer, resulting in different thicknesses. The four devices were bent manually with an estimated deflection of 0.5 cm. As shown in Fig. 3.40a, the potential decreased from 0.15 V to 0.024 V as the top layer thickness increased. On the other hand, the reproducibility of the potential generation improved as the top insulator thickness increased (Fig. 3.40b): when the top layer was thin, charge leakage tended to dominate because of short circuits.
To investigate the optimum design for NGs working under bending, Al2O3 and Si3N4 were selected as matrix and top insulating materials. For comparison with PMMA, the NGs were bent manually with a deflection approximately equal to the one used with PMMA. The potential generated by the NG packaged with Al2O3 was stable and its peak value was around 1 V (an improvement of 200% compared to PMMA), as shown in Fig. 3.41a. It was higher than for most of the devices with PMMA and comparable with the best one. Meanwhile, the NG with Si3N4 as packaging material generated a stable potential of 0.6 V (an improvement of 100% compared to PMMA) (Fig. 3.41b). To achieve functional NGs with a PMMA matrix, we needed a top PMMA layer thicker than one micrometer, otherwise the performance would be crippled by the risk of short circuits; on the other hand, a thick top layer reduced the generated potential by up to 84%. The compromise between thin and thick layers made the synthesis control difficult. In contrast, the thickness of the Al2O3 and Si3N4 top layers was only about 300 nm. This is consistent with our simulations, which predicted that a thinner and harder insulating material would enhance the NG performance.
III.5.2.3 Non-symmetric potential generation phenomenon
As mentioned in Chapter II, a non-symmetric performance was observed for the NGs under bending, thanks to these preliminary electromechanical measurements on NGs built on stainless steel foil. The NGs were bent manually downwards (Fig. 3.42a) or upwards (Fig. 3.42b). However, the VING works as a capacitor when generating the potential, and the change of capacitance in bending mode is significant compared to the compression mode. Therefore, the measured potential was not only the piezoelectric potential: it was actually composed of a capacitance-change-induced fluctuation superimposed on the piezoelectric signal. In fact, we conducted a measurement, under the same experimental conditions, on a device where no ZnO NWs were integrated but only PMMA layers; the result showed that the capacitance-change-induced fluctuation was still present (Fig. 3.44a). We then tried to bend the VING device with a quick move.
The piezoelectric signal then appeared as a typical displacement voltage signal before the influence of the capacitance change (Fig. 3.44b). Still, it was extremely difficult to obtain a sufficiently precise and reproducible piezopotential by bending manually. There is a clear need for a more controlled experimental set-up, which is presently under development in the framework of a following thesis.
III.6 Conclusion
In this chapter, we first discussed the mechanical and electromechanical response of individual NWs. An AFM was used to directly bend cantilevered NWs in order to characterize their Young's modulus and piezoelectric response. We demonstrated that the Young's modulus measurement could be applied to beam-like nanostructures with large aspect ratio by bending the free end with a controllable force. The piezoelectric response was measured simultaneously, the voltage signal being acquired by a high input impedance preamplifier through conductive AFM tips. The potential-force relation was established in this way, offering a method to calculate the effective piezoelectric coefficients. In our work, III-V NWs such as GaN and GaAs NWs were characterized; ZnO and ZnO-based core-shell heterostructure NWs were also measured and compared. In our experiments, ZnO NWs presented larger effective piezoelectric coefficients than III-V NWs, and the core-shell ZnO/BTO and ZnO/PZT NWs provided a route to improve the piezoelectric properties even further.
Secondly, we characterized rigid and flexible NGs integrating ZnO NWs immersed in an insulating material (PMMA, Si3N4 or Al2O3). Concerning the NG devices (on Si), we built a system to measure their energy generation performance under compression by acquiring the open circuit voltage, the short circuit current and the instantaneous power on a resistive load. Thermal control was also added to this set-up (< 80 °C). We found that the temperature of the working environment influenced the output: the generated potential and power improved when the devices under compression were subjected to thermal cycles, consisting of a rise in temperature followed by cooling down to ambient temperature. After three of these cycles, the potential and the power were increased by 10 and 300 times, respectively. The improvement of the performance was studied over time, and it was found that the performance of the devices returned to its initial value after 6 months.
Flexible NGs were also characterized in preliminary experiments, using manual bending or an airflow to provide the bending force. Although simple, these tests provided guidelines on how to improve the performance. For instance, we verified that it is beneficial to use a harder packaging material, allowing the top layer to be thinned without the risk of short circuit formation between the NWs and the top electrode. Finally, the non-symmetric performance predicted by the computational study (Chapter II) seemed to be observed experimentally. However, for further investigation of flexible NGs we need to build a set-up with accurate control of the deflection, so as to obtain the deflection-potential relationship as well as a more quantitative analysis of the non-symmetric performance. This new bending set-up is under development for an accurate and reproducible characterization. Only then will it be possible to determine whether the non-symmetry between upward and downward bending observed in the experiments has the same origin as in our simulation.
Conclusions and perspectives
The development of technology expedites the emergence of low-power, wireless miniature electronics, from wireless sensor networks to portable personal electronic products. In most cases, these systems and devices are powered by batteries and/or are wired, which can be very costly. On the one hand, batteries need to be replaced or recharged periodically and have a limited lifetime. This may be manageable for some sensor networks: for example, a 46-node battery-powered sensor network has been deployed to measure the dynamics of the Golden Gate Bridge. However, when it comes to border monitoring, a large number of sensors (> 1000) is required along the US-Mexico border [START_REF] Beeby | Energy harvesting for autonomous systems[END_REF]. On the other hand, wiring can hinder the spread of sensor networks: wiring each sensor in a commercial building can increase the cost by 20 times compared with the use of wireless sensors. Therefore, efforts are required towards autonomous systems where ambient energy is harvested to recharge the batteries or even to directly power other units, such as the sensor, the ADC or the transmitter.
Among the variety of energy harvesting methods, electromechanical transducers based on the piezoelectric effect have attracted much attention. Conventional thin film piezoelectric devices can satisfy the power supply requirements with high generation and high energy conversion efficiency. Piezoelectric generators using ceramic compounds (such as PZT) can generate high voltages (up to 100 V [START_REF] Park K Il | Highly-efficient, flexible piezoelectric PZT thin film nanogenerator on plastic substrates[END_REF]) and have a high direct coupling efficiency between mechanical and electrical energy (up to 80% [208]). Driven by the demand for miniaturization and compatibility with the IC/MEMS industry, PENGs based on semiconducting NWs came into sight ten years ago. Although piezoelectric semiconductor NWs may never generate voltages as high as ceramic thin films, they have their own strengths for energy harvesting in autonomous systems. In fact, based on current research, a PENG might be able to meet the power consumption of a simple sensor node, which can work with an energy supply of 1 or 2 μW at a low data rate.
The objective of this thesis was the study of the physical principles and piezoelectric responses of semiconducting NWs, including III-V compounds, ZnO, ZnO core-shell heterostructure NWs, and ZnO-NW-based VINGs. The study was divided into two main aspects: a computational simulation study of VING devices, and the characterization of individual piezoelectric NWs using AFM techniques and of PENGs using a set-up built by ourselves. The VINGs based on ZnO NW arrays were prepared in our group: the CBD method was used to grow ZnO NWs on both p-type Si wafers and stainless steel foils, and the NWs were then vertically integrated into a dielectric matrix and a top insulating layer. These NWs and NGs were characterized, together with other samples from our collaborators.
Most of the simulation work in the literature focuses on individual NWs, evolving from treating the NWs as a dielectric material to considering the screening effect of doped semiconductors. However, very little work had been done on NW composites before our group.
To better understand how piezoelectric NWs work in a VING structure, we built NG cell models, consisting of a single NW immersed in an insulating matrix with proper boundary conditions, for VINGs working in compression and flexion modes. First, the ZnO NW used for the NG cell was assumed to be insulating, meaning that only dielectric material properties were included besides the piezoelectricity. Under this assumption, we analyzed the working principle of VINGs and investigated the effect of the matrix material, with an explanation from both the mechanical and the electrical points of view (section II.2). In fact, due to unintentional doping during the synthesis, free carriers screen a large part of the piezopotential generated by ZnO NWs. Simulation results showed that the piezopotential can be considered as fully cancelled for sufficiently high doping levels; yet a few tens to a few hundred millivolts of output potential can still be observed in electromechanical measurements. By taking SFLP into account in the framework of a full coupling between semiconducting and piezoelectric physics, we gave possible explanations to several unaddressed issues: (1) better performance is obtained with NWs of larger length and aspect ratio even at high doping levels; (2) NG cells generate a larger potential than a ZnO thin film even for diameters larger than predicted by first-principles calculations; (3) a non-symmetric performance of NG devices working in flexion mode is observed in the simulation when SFLP is included.
Although analytical and computational studies helped us to understand the physical principles and provided guidelines for the experimental activities, mechanical and electromechanical measurements are indispensable in the study of piezoelectric NWs and NGs. In this thesis, the Young's moduli and electromechanical responses of individual NWs were characterized under AFM. The Young's modulus of GaAs NWs with high aspect ratio was measured by bending the free end with a controllable force. The piezoelectric response was measured simultaneously as the NW was deformed. The potential-force relation was established in this way, offering a method to calculate the effective piezoelectric coefficients. GaN, GaAs, ZnO and ZnO-based core-shell heterostructure (ZnO/PZT, ZnO/BTO) NWs were measured and compared. ZnO NWs presented larger effective piezoelectric coefficients than III-V NWs, and the core-shell ZnO/BTO and ZnO/PZT NWs provided a route to improve the piezoelectric properties even further.
Meanwhile, we moved the electromechanical characterization to the device level for a more direct evaluation. We built a system to measure the energy generation performance of NGs under controlled compression and temperature, by acquiring the open circuit voltage, the short circuit current and the instantaneous power on a variable resistive load at different temperatures (< 80 °C). The performance of the devices was affected by the temperature: after several cycles of compression under heating followed by a return to ambient temperature, the performance of rigid NGs could be improved up to 10 times in open circuit voltage and 280 times in generated power. Flexible NGs were also characterized; preliminary results were obtained by bending the samples manually or under an air flow, keeping approximately the same deformation.
These brief results revealed two key points: first, a harder matrix can improve the energy generation performance, as predicted in the simulation; second, non-symmetric responses were observed in most of the functional flexible NGs, consistent with the SFLP theory we put forward in section II.4. To achieve more accurate measurements, a new characterization set-up for controlled and reproducible bending is currently under development.
Limited by what could be addressed in the timeframe of a thesis, several questions remain to be answered. For example, the effective piezoelectric coefficients measured by AFM were lower than the theoretical values, which might result from the doping conditions. If we assume that the NWs are doped at a typical level, then why can we still observe a generation of several to several tens of millivolts? Does SFLP confine the screening effect here? In this thesis, we also observed an interesting and promising enhancement of the NG performance due to thermal treatment, which raises further questions. Is the effect of this thermal treatment repeatable? Does the thermal treatment change characteristics of the NG other than potential and power, for instance the optimum load? And, most importantly, what is the physical principle behind this change? A possible answer to the last question is surface traps, which requires further investigation of the surface states between ZnO and PMMA.
In addition to research aimed at a better understanding of the fundamentals, exploiting new applications of PENGs could also be interesting for autonomous systems. Friction or contact between the electrodes/piezoelectric material and the substrate is inevitable during the operation of a PENG; this can drive a TENG by electrostatic induction, offering the possibility of combining a TENG with a PENG to supply more power to electronics and to realize self-powered active sensors. A hybrid piezoelectric-triboelectric NG can be an effective way of harvesting mechanical energy.
Molecular dynamics (MD) emerged as one of the first simulation methods, from the pioneering applications to the dynamics of liquids by Alder and Wainwright and by Rahman in the late 1950s and early 1960s. Thanks to revolutionary advances in computer technology and to algorithmic improvements, MD has subsequently become a valuable tool in many areas of physics and chemistry. MD simulations are increasingly being used to study the mechanical behavior of nanostructures, such as their uniaxial tension, failure strain and Young's modulus [START_REF] Gall | The Strength of Gold Nanowires[END_REF][215][216][START_REF] Branício | Large deformation and amorphization of Ni nanowires under uniaxial strain: A molecular dynamics study[END_REF][START_REF] Komanduri | Molecular dynamic simulations of uniaxial tension at nanoscale of semiconductor materials for micro-electro-mechanical systems (MEMS) applications[END_REF][START_REF] Ju | A molecular dynamics study of the tensile behaviour of ultrathin gold nanowires[END_REF]. Besides, more specific simulations have been conducted on piezoelectric NWs.
Size effects, defects and elastic properties of ZnO and GaN NWs investigated through MD models support their suitability for energy conversion applications [START_REF] Kulkarni | Orientation and size dependence of the elastic properties of zinc oxide nanobelts[END_REF][START_REF] Dai | Molecular dynamics simulation of ZnO nanowires: size effects, defects, and super ductility[END_REF][221][222][START_REF] Agrawal | Investigation of ZnO nanowires Strength and Fracture[END_REF].
• Continuum modeling
Some physical phenomena can be modeled by assuming that materials exist as a continuum, which means that the matter in the body is continuously distributed and fills the entire region of space it occupies. A continuum is a body that can be continually sub-divided into infinitesimal elements whose properties are those of the bulk material. Continuum mechanics deals with physical properties of solids and fluids that are independent of the particular coordinate system in which they are observed. The piezoelectric response of a macroscopic 3D system is well described by the continuum Landau-Devonshire model. This model does not adequately describe the piezoelectric response of a one-dimensional system, because the 1D Coulomb kernel is a short-range potential; it has been demonstrated that this short-range kernel leads to new physical effects in nanotubes and NWs [224]. To overcome the scale independence of the classical continuum theory, many higher-order theories, such as the strain gradient theory [START_REF] Zhao | Nonlinear microbeam model based on strain gradient theory[END_REF][START_REF] Akgöz | Strain gradient elasticity and modified couple stress models for buckling analysis of axially loaded micro-scaled beams[END_REF], couple stress theory [START_REF] Akgöz | Strain gradient elasticity and modified couple stress models for buckling analysis of axially loaded micro-scaled beams[END_REF]227], micropolar theory [228] and nonlocal elasticity theory [START_REF] Eringen | On differential equations of nonlocal elasticity and solutions of screw dislocation and surface waves[END_REF][START_REF] Eringen | Nonlocal polar elastic continua[END_REF], have been developed to characterize size effects in nanostructures by introducing an intrinsic length scale. Moreover, the aspect ratio and surface effects of piezoelectric NWs have also been investigated using advanced continuum models [224,[231][232][START_REF] Samaei | Timoshenko beam model for buckling of piezoelectric nanowires with surface effects[END_REF].
• Finite element method
The finite element method (FEM) is a numerical technique for finding approximate solutions to boundary value problems for partial differential equations (PDEs); it was originally developed for solving solid mechanics problems in the 1940s by Richard Courant. FEM solves an equation by approximating continuous quantities by a set of quantities at discrete points: the domain is divided into sub-regions usually called elements, and the joint points of the elements are called nodes (Fig. I.2a). Nowadays FEM is used for mechanical, thermal, fluidic, electrostatic or even chemical reaction problems. For complex geometries and boundary conditions these PDEs cannot be solved with analytical methods, but an approximation of the equations can be constructed, typically based upon different types of discretizations. For example, consider a function that is the dependent variable in a PDE (i.e., temperature, electric potential, pressure, etc.).
The function u can be approximated by a function u_h using a linear combination of basis functions according to the following expression:
$u \approx u_h = \sum_i u_i \, \psi_i$
Here, $\psi_i$ denotes the basis functions and $u_i$ denotes the coefficients of the functions that approximate u with u_h (Fig. I.2b). Many physical laws are expressed by mathematical equations that can be solved with FEM. For example, conservation laws such as charge conservation, mass conservation and momentum conservation are all expressed by the continuity equation in differential form:
$\dfrac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = \sigma$ (I.1)
where
- ρ is the amount of the quantity per unit volume,
- j is the flux of the quantity,
- t is time,
- σ is the generation of the quantity per unit volume per unit time.
Terms that generate (σ > 0) or remove (σ < 0) the quantity are referred to as "sources" and "sinks", respectively. The differential equation expresses a small change in a dependent variable with respect to a change in an independent variable (x, y, z, t). In the case of the continuity equation, ρ is the dependent variable, while x, y, z and t are the independent variables. In a Cartesian 3D coordinate system, the divergence of the flux j is written:
$\nabla \cdot \mathbf{j} = \dfrac{\partial j_x}{\partial x} + \dfrac{\partial j_y}{\partial y} + \dfrac{\partial j_z}{\partial z}$ (I.2)
Furthermore, the flux can usually be described by a constitutive relation:
$\mathbf{j} = -k \, \nabla \rho$ (I.3)
In the above equation, k represents the relevant coefficient and has different physical meanings for different conservation laws. Eqs. I.1–I.3 show that, as a PDE, the continuity equation includes derivatives with respect to more than one independent variable, each of them representing a change in one direction out of several possible directions. If we know the value of ρ at a time t₀, and the value of ρ or j at a certain position (x₀, y₀, z₀), we can apply initial and boundary conditions to Eq. I.1. In many situations, PDEs cannot be solved with analytical methods to give an expression such as ρ = f(x, y, z, t); an alternative is therefore to search for approximate numerical solutions through numerical models. This is the reason for the wide use of FEM simulations.
The NSL process relies on thin-film deposition and on low-damage dry etching using inductively coupled plasma (ICP). Ra et al. first demonstrated the feasibility of various applications based on semiconductor oxide NW devices fabricated by the NSL method [START_REF] Ra | Fabrication of ZnO nanowires using nanoscale spacer lithography for gas sensors[END_REF]. From Ra's work, it is easy to see that the NWs obtained by the NSL method are well aligned and are usually laterally integrated on the substrate. These NWs can also have a controlled geometry, with ultra-long or sub-10 nm thin morphologies, determined by mature thin-film deposition techniques. On the other hand, these features also exclude some application fields, such as vertically integrated devices for mechanical sensing and energy harvesting. In fact, NSL techniques are largely used in the fabrication of individual-NW electronics and of chemical or biological sensing devices [242,[START_REF] Choi | Fabrication of sub-10-nm silicon nanowire arrays by size reduction lithography[END_REF]].
AII.1.1.2 Inductively coupled plasma reactive ion etching (ICP-RIE)
Reactive ion etching (RIE) is a directional etching process utilizing ion bombardment to remove material. This etch process is commonly used in the manufacturing of printed circuit boards and other microfabrication procedures; the process is performed in a vacuum chamber and aggressively etches in a vertical direction.
Horizontal etching is purposefully minimized in order to leave clean, accurate corners. Generally, RIE uses a chemically reactive plasma to remove material deposited on wafers. The plasma is generated under low pressure (vacuum) by an electromagnetic field; high-energy ions from the plasma attack the wafer surface and react with it. The plasma can come from various power sources, among which the ICP is the one involved in the fabrication of miniature structures such as NWs. The ICP is employed as a high-density source of ions, which increases the etch rate, whereas a separate RF bias is applied to the substrate (silicon wafer) to create directional electric fields near the substrate (Fig. II.2a). In conventional RIE, the physico-chemistry cannot be tuned to obtain low-damage processes, because the density and the energy of the radicals cannot be adjusted separately. This is the main advantage of ICP-RIE, which uses an RF source power to ionize the gas and another RF chuck power (called RF platen power) to control the energy of the radicals. Such an ICP-RIE system achieves more anisotropic and lower-damage etch profiles. It has been reported that III-V compound, silicon and even carbon NWs can be obtained by ICP-RIE techniques [START_REF] Zhao | Nanometer-scale vertical-sidewall reactive ion etching of ingaas for 3-D III-V MOSFETs[END_REF][START_REF] Yu | Gallium Nitride Nanorods Fabricated by Inductively Coupled Plasma Reactive Ion Etching[END_REF][START_REF] Behnam | Nanolithographic patterning of transparent, conductive single-walled carbon nanotube films by inductively coupled plasma reactive ion etching[END_REF][START_REF] Wang | High optical quality GaN nanopillar arrays[END_REF][START_REF] Jalabert | High aspect ratio GaAs nanowires made by ICP-RIE etching using Cl2/N2 chemistry[END_REF][START_REF] Hausmann | Fabrication of diamond nanowires for quantum information processing applications Diam[END_REF][START_REF] Huang | Fabrication of GaN-based nanorod light emitting diodes using self-assemble nickel nano-mask and inductively coupled plasma reactive ion etching[END_REF]. Jalabert et al. reported an experimental study of GaAs etching by ICP-RIE based on Cl2:N2 chemistry [START_REF] Jalabert | High aspect ratio GaAs nanowires made by ICP-RIE etching using Cl2/N2 chemistry[END_REF]. To obtain high-aspect-ratio GaAs NWs, they prepared GaAs samples with arrays of microstructures (lines and dots) made by electron beam lithography, followed by the lift-off of a 30 nm thick Ni film. GaAs NW arrays 1 μm high and 30 nm in diameter were then fabricated by ICP-RIE with the Ni mask on top (Fig. II.2b). Compared to pure semiconductor NWs, ICP-RIE techniques are better suited to NWs with longitudinal heterostructures or even to individual-NW-based electronics. Huang and his coworkers reported a novel method to fabricate GaN-based nanorod light emitting diodes (LEDs) with controllable dimension and density using self-assembled Ni and Ni/Si3N4 nano-masks and ICP-RIE [START_REF] Huang | Fabrication of GaN-based nanorod light emitting diodes using self-assemble nickel nano-mask and inductively coupled plasma reactive ion etching[END_REF]. A 300 nm Si3N4 thin film was deposited on a GaN-based LED heterostructure film, followed by a Ni layer with a thickness ranging from 5 to 15 nm. The sample was subsequently subjected to rapid thermal annealing (RTA) under flowing N2 and then etched using a planar-type ICP-RIE system.
AII.1.2 Bottom-up method
Based on the existing research, bottom-up mechanisms can be grouped into two basic categories, each with two subcategories. One is based on the suppression of crystal growth in two directions to create one-dimensional structures, as is the case for selective area epitaxy (SAE) and oxide-assisted growth (OAG). SAE employs a mask layer with well-defined openings, out of which wires grow in a layer-by-layer mode (Fig. II.3a). The fact that growth continues one-dimensionally also above the mask layer, in contrast to lateral overgrowth, is attributed to the formation of slowly growing side facets with a low surface energy [START_REF] Ikejiri | Mechanism of catalyst-free growth of GaAs nanowires by selective area MOVPE[END_REF]. For this kind of growth, no seed particle is used. OAG is based on the use of the semiconductor and one of its oxides. During growth, phase separation occurs, leading to a core of the semiconductor material which is simultaneously covered by a passivating shell of its oxide (Fig. II.3b), suppressing lateral growth [START_REF] Zhang R Q | Oxide-Assisted Growth of Semiconducting Nanowires[END_REF]. Unlike other NWs, OAG NWs almost always have an oxide sheath. The quality of these NWs hinges on how effectively the oxide is segregated from the core to the peripheral surface of the droplet during growth.
The second group of 1D growth mechanisms is based on a significant local increase of the growth velocity in one direction, in many cases caused by a particle at the NW top [START_REF] Wacaser | Preferential interface nucleation: An expansion of the VLS growth mechanism for nanowires[END_REF]. For these particle-assisted growth (PAG) schemes, we may distinguish between seed particles containing elements constituting the wire (homo-particle assisted growth, Fig. II.3c) and seed particles of a different material (hetero-particle assisted growth, Fig. II.3d). In both cases, supersaturation of the growth system leads to material crystallizing rapidly at the particle/crystal interface. The seed particle can be either liquid (vapor-liquid-solid growth, VLS) or solid (vapor-solid-solid growth, VSS). Some researchers consider it more accurate to call the seed particle "quasi-liquid" or "quasi-solid" (vapor-quasi-liquid-solid or vapor-quasi-solid-solid growth, VQS), as it does not remain unchanged in the liquid (or solid) phase during the whole growth of the NWs. However, the basic principle is the same.
Caption of Fig. II.3: (b) Oxide-assisted growth: the semiconductor and its oxide are adsorbed on the surface, creating nucleation centers which separate into a semiconductor core and a passivating oxide shell. (c) Homo-particle growth: a seed particle is formed consisting of one or all elements used for wire growth; during growth both the length and the diameter increase, as the seed particle size is variable. (d) Hetero-particle growth: a seed particle (typically Au) is deposited prior to growth; during heating to the growth temperature the seed particle alloys with the substrate and/or with material from the gas phase; the particle size during growth is nearly constant [START_REF] Mandl | Growth mechanism of self-catalyzed group III-V nanowires[END_REF].
AII.1.2.1 Nanopole template-assisted growth
Nanopole template-assisted growth belongs to the first group, in which crystal growth in the lateral direction is suppressed to create 1D structures. It has been shown to be effective for the fabrication of ordered NW arrays of metals or semiconductors.
As dry method, it possesses advantages such as strict geometry control, versatility on different components and ease of dopant control. Thurn-Albrecht et al. has reported a method to rapidly and reliably fabricate NW arrays with densities in excess of 1 terabit per square inch, based on the self-assembled morphology in diblock copolymer thin films (Fig. II.4) [START_REF] Thurn-Albrecht | Ultrahigh-density nanowire arrays grown in self-assembled diblock copolymer templates[END_REF]. Another template ---Anodic aluminum oxide (AAO) ---is commonly used in the template-assisted growth of ZnO NWs [START_REF] Kovtyukhova | Layer-by-layer self-assembly strategy for template synthesis of nanoscale devices[END_REF][START_REF] Liu | High-density, ordered ultraviolet light-emitting ZnO nanowire arrays[END_REF]. AAO template can provide ordered porous structures with channel diameters ranging from 10 to 200 nm. As a thermally and chemically stable template, the growth of ZnO NWs in it could use solution-based methods, such as electroplating [START_REF] Li | Ordered semiconductor ZnO nanowire arrays and their photoluminescence properties[END_REF], electrophoretic deposition [START_REF] Wang | Preparation and characterization of nanosized ZnO arrays by electrophoretic deposition[END_REF], sol-gel [START_REF] Lakshmi | Sol-Gel Template Synthesis of Semiconductor Nanostructures[END_REF], and hydrothermal processes [START_REF] Shi | Photoluminescence of ZnO nanoparticles in alumina membrane with ordered pore arrays[END_REF]. AII.1.2.2 Methods based on VLS mechanism Among various PAG approaches used to grow NWs, most of them are based on VLS mechanism [START_REF] Wu | Direct observation of vapor-liquid-solid nanowire growth[END_REF] using chemical vapor deposition (CVD), metal organic chemical vapor deposition (MOCVD) [START_REF] Novotny | InP nanowire/polymer hybrid photodiode[END_REF], molecular beam epitaxy (MBE) [262], or laser-assisted catalytic growth [START_REF] Gudiksen | Growth of nanowire superlattice structures for nanoscale photonics and electronics[END_REF]. The VLS mechanism was proposed in the 1960s-1970s for large whisker growth [264,265], which later was developed for the growth of semiconductor NWs. The VLS mechanism relies on a vapor phase precursor of the NW material, which impinges on a liquid phase seed particle, from which unidirectional NW growth proceeds. The choice of an appropriate seed material has the benefit of allowing control over the diameter of the NWs produced, while the seed material can also significantly affect the crystalline quality of the NW [266,267]. According to the homologous or heterologous properties of the seed particle, the VLS growth is usually sorted as metal-catalyzed growth and self-catalyzed growth. (a) Metal-catalyzed VLS mechanism In the past decade, NWs of a variety of materials have been fabricated using VLS growth mechanism assisted by hetero-particles. Metal seed particles are widely used for NW growth on VLS mode, preferentially Au nanoparticles (sometimes Mn and Ni nanoparticles are also used) [START_REF] Martelli | On the growth of InAs nanowires by molecular beam epitaxy[END_REF][269][START_REF] Li | Nanowire electronic and optoelectronic devices[END_REF][START_REF] Hsieh | Time-course gait analysis of hemiparkinsonian rats following 6-hydroxydopamine lesion[END_REF]. The yields are well-controlled by modulating the size, position or pattern of seed particles. 
The VLS process can be divided into two main steps: 1) the formation of a small liquid droplet, and 2) the alloying, nucleation, and growth of the NWs. The growth starts on a clean, defect-free surface of a substrate, which is a semiconductor wafer in most cases. At first, metal clusters are deposited on the substrate; techniques for patterning control will be involved in this part. As pointed out above, gold is the metal catalyst used in most works. A thin layer of gold is deposited on the Si substrate followed by a short-time heating at about 700 ºC until the Au layer melts to form small droplets and combine with Si atoms. Then the substrate is heated in a reaction tube of chamber until the metal clusters melt and form liquid droplets, also called collectors. In order to lower the temperature used in the heating, the prevalent method is to replace the pure metal with an alloy of the metal and semiconductor materials, which could reach the suitable temperature in the range of 300 -1100 °C according to the target NWs. In a second step, a gas containing the elements of NWs flows through the reaction tube. The NWs grow at the interface of the collectors and the surface of the substrate. (b) Self-catalyzed VLS mechanism Nevertheless, for the combination with Si technology, this is problematic due to Au forming deep level traps in Si. Contamination and undesired dopants brought by hetero-particles strict the application of NWs to a great extent. Thus homo-particle assisted growth (or "self-catalyzed growth", SCG) attracts much more interests than ever before. The yields of this growth mechanism are usually called "self-catalyzed" or "self-catalytic" NWs, while the whole process, from the deposition of homo-particles to the particle-induced growth, can be considered as "self-organization system". The VLS growth of self-catalyzed NWs through supersaturation and precipitation at the liquid-solid interface can be explained by considering the composition-temperature phase diagram of the binary system. Besides, some routes belonging to the second group make use of the intrinsic property of the target material without particle-assistance, such as various chemical bath deposition (CBD) methods. AII.2 Chemical bath deposition method for the growth of ZnO NWs For the synthesis of ZnO NWs in our lab, chemical bath deposition (CBD) or so-called hydrothermal growth method has been used. CBD method has been proved effective and convenient in preparing ZnO NW arrays due to their low growth temperature, low cost and potential for scale up. Moreover, a remarkable diversity of ZnO 1D nanostructure, with different and controllable morphologies, can be obtained simply by changing the precursor chemicals, their concentrations, and/or the growth temperature. AII.2.1 Seed layer assisted growth mechanism Among all the CBD methods developed for the synthesis of ZnO NWs and nanorods, such as hydrothermal decomposition [START_REF] Li | Fabrication of ZnO nanorods and nanotubes in aqueous solutions[END_REF][START_REF] Peterson | Epitaxial chemical deposition of ZnO nanocolumns from NaOH solutions[END_REF], electrochemical reaction [START_REF] Yang | Electrochemical route to the synthesis of ultrathin ZnO nanorod/nanobelt arrays on zinc substrate[END_REF] and template-assisted sol-gel processes, seed layer assisted growth provide an effective route to increase the length and aspect ratio. 
Using a ZnO thin film as the seed layer effectively lowers the interfacial energy between the ZnO nuclei and the substrate [START_REF] Tian | Complex and oriented ZnO nanostructures[END_REF]. The nucleation barrier is thus reduced, facilitating the growth of small-diameter ZnO NWs. Meanwhile, the orientation and doping status of the seed layer dominate the orientation of the NWs through an epitaxial growth mechanism, improving the vertical alignment, and influence the semiconducting properties of the NWs through the diffusion of free carriers. In the growth process, Zn2+ and OH- are provided by the hydration of zinc nitrate hexahydrate (Zn(NO3)2·6H2O) and by hexamethylenetetramine (HMTA). When the precursor solution is heated, the hydrothermal reactions can occur as follows:
Zn(NO3)2·6H2O ↔ Zn2+ + 2NO3- + 6H2O (II.1)
(CH2)6N4 + 6H2O + Δ ↔ 6HCHO + 4NH3 (II.2)
NH3 + H2O ↔ NH4+ + OH- (II.3)
where (CH2)6N4 is HMTA (hexamine). Zn2+ ions are known to react readily with OH- to form the more soluble Zn(OH)2 complexes, which act as the growth unit of the ZnO nanostructures. Finally, ZnO is obtained by decomposition of Zn(OH)2. The key chemical reactions involved in the hydrothermal synthesis are therefore:
Zn2+ + 2OH- ↔ Zn(OH)2 (II.4)
Zn(OH)2 ↔ ZnO + H2O (II.5)
When the concentration of ZnO reaches supersaturation, ZnO crystal nuclei form and then grow according to the growth habit of ZnO crystals. The ZnO crystal exhibits partially polar characteristics: in the typical wurtzite structure, the (0001) plane is the basal polar plane, terminated by Zn, while the (000-1) plane is terminated by O, resulting in a divergence of the surface energy for large polar surfaces. The other, nonpolar, planes have a lower surface energy than the polar basal plane [276]. Under thermodynamic equilibrium conditions, the facets with higher surface energy are usually small in area, while the lower-energy facets are larger. In short, the direction perpendicular to the smaller facet grows fast, while the directions perpendicular to the larger facets grow slowly and thus dominate the final morphology [START_REF] Tang | Size-Controllable Growth of Single Crystal In(OH) 3 and In 2 O 3 Nanocubes[END_REF]. For the ZnO crystal, the growth rates V along the normal directions of the different index planes are ordered as V(0001) > V(10-10) > V(10-1-1) > V(10-11) > V(000-1); nanorod/nanowire morphologies are thus frequently obtained. It is believed that, in the chemical bath, HMTA preferentially attaches to the non-polar facets of the NWs, thereby exposing only the (0001) plane for epitaxial growth [START_REF] Sugunan | Zinc oxide nanowires in chemical bath on seeded substrates: Role of hexamine[END_REF]. It thus helps the NW to grow preferentially along the [0002] direction.
AII.2.2 Growth process of ZnO NWs on rigid and flexible substrates
In this work, ZnO NWs were grown on rigid p-type (100) Si wafers (500 μm) and on flexible stainless steel foils (25 μm). The synthesis process and the morphology control are discussed in this section.
AII.2.2.1 Growth procedures
We prepared both substrates with square shapes for easy handling and testing. The Si wafer was immersed into diluted HF acid to remove the top SiO2 layer. Both substrates were then cleaned by immersion into acetone (3 min), ethanol (3 min) and deionized water (5 min), and then dried with N2 gas. Fig. II.5a presents the scheme of the ZnO NW synthesis procedure. Firstly, a ZnO thin film serving as the seed layer was deposited over the entire surface by ALD.
Secondly, the specimen was fixed on a rectangular glass holder with Kapton tape. To prevent possible short circuits due to ZnO NWs grown on the edges falling down, the tape covered a width of about 0.1 cm along the four edges of the specimen.
AII.2.2.3 Deposition of the seed layer
ZnO seed layers can be deposited by many methods, such as ALD, magnetron sputtering, spin coating and "bath dip" chemical methods. To achieve ZnO thin films with controllable orientation and good quality, we mainly used the ALD technique for the deposition. ALD is an advanced thin film coating method used to fabricate ultrathin, highly uniform and conformal material layers for several applications. ALD uses sequential, self-limiting and surface-controlled gas phase chemical reactions to achieve control of the film growth in the nanometer/sub-nanometer thickness regime. Owing to the film formation mechanism (the gases do not react until they reach the surface, which means the film grows by consecutive atomic layers "up" from the surface), the ALD film is dense, crack-, defect- and pinhole-free, and its thickness as well as its structural and chemical characteristics can be precisely controlled on the atomic scale. The ALD process is digitally repeatable and can be performed at relatively low temperatures. This gives the possibility to construct not only single-material layers but also doped, mixed or graded layers and nanolaminates, while the low process temperature also allows the coating of sensitive materials such as plastics and polymers.
A basic schematic of the ALD process, including four steps, is shown in Fig. II.6a. In step 1, precursor A (in pink) is added to the reaction chamber containing the material surface to be coated by ALD. After precursor A has adsorbed on the surface, any excess is removed from the reaction chamber (step 2). Precursor B (in blue) is added (step 3) and reacts with precursor A to form one monolayer of compound AB on the surface (step 4). Precursor B is then cleared from the reaction chamber and this process is repeated as a cycle until the desired thickness is achieved. Most of the ZnO seed layers in our experiments were deposited using the ALD instrument shown in Fig. II.6b. The reaction takes place in the main reaction chamber. In this process, precursor A is deionized water and precursor B is diethylzinc. The precursor delivery temperature was around 150 °C and the growth temperature of the ZnO seed layer was 250 °C. Argon was used as carrier gas with a flow of 40 sccm. The chemical reaction in the chamber is given below:
Zn(C2H5)2 (g) + H2O (g) → ZnO (s) + 2 C2H6 (g) (II.6)
The thickness of the seed layer obtained by the ALD method is around 40 nm.
AII.3.2 Matrix and top insulating layer deposition
Polymers were first used as the buffer matrix and insulating layer in the VINGs to enhance the mechanical robustness and flexibility [START_REF] Hu | Self-powered system with wireless data transmission[END_REF]. In our earliest experiments, PMMA was used for VINGs working under both compression and bending. Later, different dielectric materials with higher rigidity and relative permittivity were included in the fabrication to improve the performance and to validate the computational studies described in section II.2.2 of Chapter II [5,132].
AII.3.2.1 PMMA deposition with spin coating method
a. Spin coating process
Spin coating generally involves the application of a thin film evenly across the surface of a substrate by coating a solution of the desired material in a solvent while the substrate is rotating.
The substrate is coated with the solution and then rotated at high speed so that the majority of the ink is flung off the side. The rotation of the substrate at high speed also means that the centripetal force, combined with the surface tension of the solution, pulls the liquid coating into an even covering. Finally, the solvent is evaporated, leaving the desired material on the substrate as an even layer. To deposit the PMMA matrix, we used MicroChem 495PMMA A2 and A6 resists, containing respectively 2% and 6% PMMA dissolved in anisole, together with hexamethyldisilazane (HMDS) as a primer. Several processes were selected to control the thickness of the PMMA layer. The first step of the whole procedure was to deposit the primer with a spin speed of 1000 rpm and a spin time of 45 s, in order to improve the adhesion between the resist and the sidewalls of the NWs. The matrix was deposited using the A2 resist and the top insulating layer using the A6 resist. In both cases, the resist "ink" was dispensed on the surface of the sample and left to rest for 90 s. The spin coater was then run at the chosen spin speed for the chosen time. Finally, the sample was placed on a hot plate at 180 °C for 90 s. This process was repeated several times to achieve the desired thickness.

Fig. II.16 shows the progressive change of the morphology as the default thickness increases. By comparing the diameters of the Al₂O₃-coated NWs and the pure ZnO NWs, one can observe that the thickness of the Al₂O₃ thin film on the sidewalls is smaller than the top-layer (default) thickness. When the default value is 15 nm, the sidewall thickness is around 10 nm. This difference is enhanced as the number of ALD cycles increases. When the thickness reaches 310 nm, the Al₂O₃ forms a top insulating layer covering the entire surface of the ZnO arrays; in this case, the sidewall thickness is estimated to be 150 nm.

AIII.3 AFM probes used for mechanical and electromechanical measurements

An AFM probe has a sharp tip at the free-swinging end of a cantilever protruding from a holder [282]. The dimensions of the cantilever are on the scale of micrometers, and the radius of the tip is usually on the scale of a few nanometers to a few tens of nanometers. The cantilever holder, often 1.6 mm by 3.4 mm in size, allows the operator to hold the AFM probe assembly with tweezers and fit it into the corresponding holder clips on the scanning head of the AFM. Most AFM probes are made of Si, but borosilicate glass and silicon nitride are also in use. Depending on the interaction under investigation, the tip surface of the AFM probe needs to be modified with a coating. Because of the nanoscale force modulation (a few hundred nN) and the small piezopotential (a few tens of mV) to be measured in our electromechanical experiments, the tip must be conductive, with a metallic or metallic-alloy coating. The force constant of the probe should be specially tailored for the force modulation, yielding very high force sensitivity while simultaneously enabling tapping-mode and lift-mode operation. The combination of a soft cantilever and a fairly high resonance frequency enables stable and fast measurements with reduced tip-sample interaction.

a. Platinum Iridium (PtIr5) AFM probes

Initially we used PtIr probes, which have an overall metallic coating (PtIr5) on both sides of the cantilever to increase the electrical conductivity of the tip (Fig. III.2a). These AFM probes are designed for non-contact or soft tapping mode imaging.
The PtIr5 coating is an approximately 25 nm thick double layer of chromium and platinum-iridium (PtIr5) applied to both sides of the cantilever. The tip-side coating enhances the conductivity of the tip and allows electrical contact. The detector-side coating enhances the reflectivity of the laser beam by a factor of about 2 and prevents light from interfering within the cantilever. The coating process is optimized for stress compensation and wear resistance. The bending of the cantilever due to stress is less than 3.5% of the cantilever length. The tip has a radius of curvature better than 25 nm, and the tip height is around 10-15 µm.

Résumé en français

Chapter IV. Piezoelectric generators based on semiconductor nanowires: simulations and experimental studies

Powering miniaturized sensor networks raises a fundamental question, since their autonomy is an increasingly important quality criterion for the user. It is even a crucial issue when these networks must provide infrastructure monitoring (avionics, machines, buildings, etc.) or medical or environmental monitoring. The quality of environmental or infrastructure monitoring can be improved by using a large number of sensors. However, this number can become a problem because of the amount of wiring or the cost of battery management. For such applications, harvesting energy from the environment can provide a smart solution in which each node could ideally become self-powered, in other words energetically autonomous.

Depending on the specific environment of the application, many energy sources can be converted into electricity: solar energy, heat or mechanical motion, among others. Piezoelectric materials make it possible to exploit the unused mechanical energy abundantly present in the environment (vibrations, deformations linked to movements or to air flows, etc.). In the form of nanowires (NWs), piezoelectric materials offer a sensitivity that allows very weak mechanical solicitations to be exploited. Piezoelectric NWs have recently emerged as a promising ingredient for electronics, sensing and energy-harvesting nanosystems. Among them, semiconductor NWs have attracted attention because of their excellent piezoelectric properties, their large-scale synthesis techniques, their possible integration on CMOS or on flexible substrates, and the convenience of tuning their electrical properties by doping.

In this thesis we are interested in the potential of nanowires made of piezoelectric semiconductor materials, such as ZnO or the III-V compounds, for the conversion of mechanical energy into electrical energy. Our objective is to deepen the understanding of the physical mechanisms that govern the piezoelectric response of semiconductor NWs and of the associated devices for energy-harvesting applications. To reach these objectives, we worked both on theoretical simulations and on experiments. The experimental work consisted, on the one hand, in fabricating and characterizing composite materials integrating vertically aligned ZnO NWs and, on the other hand, in characterizing the electromechanical properties of individual NWs (ZnO, GaN, GaAs, etc.) using an atomic force microscope (AFM).
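For orientation, the following minimal sketch (not taken from the thesis) shows the elementary conversion used in such AFM-based measurements, from the photodiode deflection signal to the force applied by the tip through Hooke's law. The spring constant and deflection sensitivity below are assumed nominal values, not the calibrated parameters of the probes described above.

```python
# Illustrative only: convert an AFM photodiode deflection signal into a tip force.
# Both constants are assumed nominal values, not calibrated thesis parameters.

SPRING_CONSTANT = 3.0            # cantilever stiffness k (N/m), assumed
DEFLECTION_SENSITIVITY = 50.0    # photodiode calibration (nm/V), assumed

def force_from_signal(photodiode_volts: float) -> float:
    """Tip force in newtons from the photodiode voltage, via F = k * deflection."""
    deflection_m = photodiode_volts * DEFLECTION_SENSITIVITY * 1e-9
    return SPRING_CONSTANT * deflection_m

if __name__ == "__main__":
    for volts in (0.5, 1.0, 2.0):
        print(f"{volts:.1f} V -> {force_from_signal(volts) * 1e9:.0f} nN")
```

With these assumed numbers, a 1 V deflection signal corresponds to roughly 150 nN, the same order of magnitude as the force modulation quoted above.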
The theoretical studies were based on the Finite Element Method (FEM), taking into account the full coupling between the mechanical, piezoelectric and semiconducting effects, including free carriers. By taking surface Fermi level pinning (SFLP) at the NW surface into account, we managed, for the first time, to reconcile theoretical and experimental observations from the literature. After a brief reminder of some basic concepts concerning piezoelectric devices, the following sections briefly summarize my thesis work. On the theoretical side, I present the theoretical framework and discuss the simulation results. The experimental sections then present the results obtained on individual NWs, as well as the fabrication and characterization of NW-based nanogenerators.

IV.1 From the piezoelectric effect to nanogenerators

Pierre and Jacques Curie discovered the piezoelectric effect in 1880. This effect describes the relation between a mechanical deformation and the creation of an electric charge in certain materials. One year later, the Curie brothers proved the existence of the converse effect, which had just been predicted by Lippmann: an electric field produces a mechanical deformation in these materials. Typical piezoelectric materials include quartz, semiconductors such as ZnO and the III-V compounds, ceramics such as PZT and BTO, and polymers such as PVDF. Depending on how they are electrically connected, piezoelectric semiconductor materials can operate in two different ways: (1) With conductive contacts, the conduction current is modulated by the deformation and is proportional to the polarization-induced charges. In this case the integrated material can only work as a sensor, and it needs a power supply to operate. (2) With insulating or Schottky contacts (or in the case of an insulating piezoelectric material), a displacement current is generated whenever the stress changes. The current is then proportional to the time variation of the polarization-induced charge. No external power supply is needed, and piezoelectric generators usually work in this way, acting either as mechanical energy harvesters (piezoelectric generators) or as self-powered mechanical sensors. Piezoelectric NWs can be integrated to build piezoelectric generators, also known as nanogenerators (NGs). They can be integrated either laterally (leading to so-called LINGs, laterally integrated nanogenerators) or vertically (leading to so-called VINGs, vertically integrated nanogenerators). In my thesis, the VING structure was chosen because of its performance compared to the LING, and because it is simpler to fabricate and more robust.

IV.2 Analytical study and modeling of piezoelectric nanowires and nanogenerators

Analytical modeling and numerical simulation are important tools for investigating properties at the nanoscale, where experimental tools may be partially limited by the resolution of the manipulation or by the sensitivity of the signal acquisition. In this section, I discuss the effects of the mechanical, electrical and semiconducting properties on the performance of VINGs.
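Before turning to the FEM results, a crude order-of-magnitude reference may be useful. The sketch below is not taken from the thesis: it evaluates the polarization charge and the ideal open-circuit voltage of a simple piezoelectric capacitor under uniaxial compression, using assumed bulk-like ZnO constants and ignoring the free-carrier screening and the surrounding matrix that the coupled simulations account for.

```python
# Back-of-the-envelope estimate for an idealized piezoelectric capacitor under
# uniaxial compression. All values are assumed, bulk-like ZnO constants; they
# are not the simulation parameters used in the thesis.

E33 = 1.2          # piezoelectric coefficient e33 (C/m^2), assumed
EPS_R = 8.9        # relative permittivity along the c-axis, assumed
EPS0 = 8.854e-12   # vacuum permittivity (F/m)
YOUNG = 140e9      # Young's modulus (Pa), assumed
THICKNESS = 2e-6   # active layer thickness (m), assumed
STRESS = 1e6       # applied compressive stress (Pa), assumed

strain = STRESS / YOUNG                              # uniaxial strain
polarization = E33 * strain                          # bound charge density (C/m^2)
voltage = polarization * THICKNESS / (EPS_R * EPS0)  # ideal open-circuit voltage

print(f"strain         = {strain:.2e}")
print(f"polarization   = {polarization:.2e} C/m^2")
print(f"open-circuit V = {voltage * 1e3:.0f} mV")
```

This idealized estimate (around a couple of hundred mV with these assumptions) is only an upper bound: the potentials actually computed and measured for NG cells are substantially lower once the dielectric matrix, the geometry and the semiconducting effects are included, which is precisely what the FEM study addresses.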
The finite element method (FEM) was the tool I used for my work on NW-based NGs. The simulation domain was reduced to an elementary cell of the NG, consisting of a single NW surrounded by a dielectric matrix, with substrate and contacts. Appropriate boundary conditions were used to represent the behavior of the whole NG device. Two operating modes were studied, with the NG cell working either in compression or in bending.

The theoretical work was divided into two parts. Our first objective was to establish guidelines for optimizing the response of the NG. To this end, we carried out a series of FEM simulations in which the geometry of the NG and the matrix material were varied. We also evaluated the influence of the ZnO properties, in order to account for the difference between the bulk coefficients and those found at the nanoscale by some authors. We found that the optimization depended on the operating mode; the main guidelines are summarized in Fig. 4.1. For this optimization, ZnO was considered intrinsic. However, ZnO NWs fabricated by chemical bath deposition (CBD) are normally n-type doped. This brings us to the second theoretical contribution of this thesis, in which we took into account the full coupling between the piezoelectric and semiconducting properties. In agreement with the few papers that consider this coupling, the piezoelectric response of the VING should be close to zero for experimental doping levels, which contradicts the observations. We proposed that the origin of this discrepancy lies in the fact that the Fermi level is very likely pinned at the ZnO surface.

This thesis presents, for the first time, a simulation study of the piezoelectric potential generated by ZnO NWs in which surface Fermi level pinning is taken into account (through a surface charge applied as a boundary condition at the interface between the ZnO and the dielectric matrix). Fig. 4.2 gives a qualitative illustration of the physical mechanism involved when the surface Fermi level is pinned at mid-gap on all interfaces between the NW and the matrix material. One can see that the piezopotential is generated only in the depleted regions, while free carriers screen the polarization charge in the neutral regions. With SFLP, the depletion region can extend over the whole length of the NW, even for fairly high doping levels (in the 10^17 cm^-3 range for NWs of 200 nm diameter). Under this assumption, several observations from the simulations now agree qualitatively with experimental results from the literature. We believe that SFLP is the key to resolving the contradictions that have been found between theory and experiment.

Several conclusions were drawn from this work. First, SFLP introduces a more realistic geometric dependence of the piezoelectric response of the NG cell, with improved performance for longer and thinner NWs. Second, compared with ZnO thin films, the NG cells generated a larger potential, because structuring the material into NWs increases the overall thickness of the region influenced by SFLP. (A rough estimate of the depletion width induced by such pinning is sketched below.)
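To make the full-depletion statement more concrete, the following sketch (not part of the thesis) evaluates the textbook one-sided depletion-width formula W = sqrt(2 ε φ_B / (q N_D)) for assumed mid-gap pinning. It is only indicative of orders of magnitude; the thesis itself relies on the fully coupled FEM model.

```python
# Order-of-magnitude check (assumed values, not thesis parameters): planar
# depletion width under mid-gap surface Fermi level pinning.
import math

Q = 1.602e-19      # elementary charge (C)
EPS0 = 8.854e-12   # vacuum permittivity (F/m)
EPS_R = 8.9        # ZnO relative permittivity, assumed
PHI_B = 1.7        # band bending for mid-gap pinning (V), roughly Eg/2, assumed

def depletion_width_nm(n_d_cm3: float) -> float:
    """One-sided planar depletion width for a donor density given in cm^-3."""
    n_d_m3 = n_d_cm3 * 1e6
    width_m = math.sqrt(2 * EPS_R * EPS0 * PHI_B / (Q * n_d_m3))
    return width_m * 1e9

for nd in (1e16, 1e17, 1e18):
    print(f"N_D = {nd:.0e} cm^-3 -> W ~ {depletion_width_nm(nd):.0f} nm")
```

With these assumed numbers, W is of the order of 100 nm or more for N_D around 10^17 cm^-3, consistent with the claim that a 200 nm diameter NW can remain essentially fully depleted when its surface Fermi level is pinned at mid-gap.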
This effect could even explain why a larger piezoelectric coefficient is measured for ZnO NWs. Finally, with SFLP, NG devices operating in bending were found to exhibit a non-symmetric response when bent upwards or downwards, as observed in experiments.

IV.4 Conclusions and perspectives

In this thesis, finite element simulations were carried out to study the optimization of VINGs operating in compression or in bending. They were also used to study the effects of the semiconducting properties of ZnO, among which SFLP induced by slow surface traps proved to be important for the performance. Besides the modeling, we successfully fabricated VINGs on different substrates and with different dielectric matrices. Finally, we characterized the piezoelectric response of individual NWs and of VINGs. ZnO NWs showed a good sensitivity to an applied force, which makes them a good choice for VINGs. The measurements of the energy output and of the annealing effects on VINGs operating in compression illustrated their advantages as energy generators. In addition, we observed some results that could be of interest for future work, such as the beneficial role of a high-temperature treatment.

From the materials point of view, ZnO-based core/shell NWs could be a key to improving the performance of VING devices, and also of other applications using piezoelectric semiconductor NWs. They allow us to take advantage of the high piezoelectric coefficients of a ceramic shell (PZT, BTO) without the difficulty of fabricating ceramic NWs or the need to apply high electric fields to pole these structures. For the future, we plan to improve the assumptions used in the description of the material properties. Nonlinear mechanical properties, second-order piezoelectricity and flexoelectricity should be taken into account. In addition, the dynamic behavior of the surface traps also plays an important role. Finally, an automated measurement bench should be developed to better characterize flexible devices operating in bending mode.

Keywords: Energy harvesting, Piezoelectric semiconductor nanowires, Nanogenerators, Fermi level pinning, Atomic force microscopy, Electromechanical measurements
-- Figure I I Figure I Number of publications on "piezoelectric nanowires" and "piezoelectric nanogenerators" in Google Scholar between years 2006 and 2015. Figure Figure II Researches and activities of the thesis. Figure 1 . 1 11 Figure 1.1 Schematic of an autonomous system. Figure 1 . 2 12 Figure 1.2 Chart of photovoltaic cell development from 1975 to 2015. [36] Figure 1 . 3 13 Figure 1.3 The four fundamental modes of TENGs: (a) vertical contact-separation mode; (b) in-plane contact-sliding mode; (c) single-electrode mode; and (d) freestanding triboelectric-layer mode. [54] Figure 1 . 4 14 Figure 1.4 Simple molecular model for explaining the piezoelectric effect: (a) unperturbed molecule; molecule subjected to (b) a compressive stress, and (c) a tensile stress. (adapted from [64]) Figure 1 . 5 15 Figure 1.5 Classification of crystals showing the classes with piezoelectric, pyroelectric, and ferroelectric effects. Figure 1 . 6 16 Figure 1.6 (a) Design of a SWG on a flexible substrate.The piezoelectric fine wire lies on a polymer (Kapton) substrate, with both ends tightly bonded to the substrate and outlet interconnects. Mechanical bending of the substrate creates tensile strain and a corresponding piezoelectric potential in the wire, driving electrons through the external load.[START_REF] Yang | Power generation with laterally packaged piezoelectric fine wires[END_REF] (b) Scheme of LING with demonstration of the output scaling-up when mechanical deformation is induced, where the "±" signs indicate the polarity of the local piezoelectric potential created in the NWs.[START_REF] Zhu | Flexible high-output nanogenerator based on lateral ZnO nanowire array[END_REF] Figure 1 . 7 17 Figure 1.7 (a) Schematic view of the PZT nanofiber generator. (b) Voltage output measured when a small Teflon stack was used to impart an impulsive load on the top of the PZT nanofiber generator. The inset in (b) shows the schematic of a Teflon stack tapping on the NG. [70] (c) Illustration of a flexible energy harvesting device, composed of a Kapton plastic substrate, PZT nanoribbons, and patterned interdigitated electrodes. Measured (d) open-circuit voltage and (e) short-circuit current as a function of tapping frequencies. [71] Figure 1 . 8 18 Figure 1.8 NG composed of a five-layer flexible plate with PMMA surrounded ZnO NWs grown vertically on both sides of a polymer substrate and electrodes deposited on both top and bottom of the plate. [74] Figure 1 . 9 19 Figure 1.9 (a) Scheme of the super-flexible NG based on ZnO NWs with AAO as an insulating layer. (b) Cross section SEM image of the super-flexible (c) NG. Super-flexible NG as an active sensor for detecting the motion of a human eye ball. The NG attached a right eyelid was driven by moving the eye ball from right (R), center (C), and to left (L) or from L, C, and R. Output voltage measured under (d) slow and (e) rapid eye movement. [9] Figure 1 . 10 110 Figure 1.10 Potential distribution in a bent NW as a result of the piezoelectric effect. Figure 1 . 1 Figure 1.11 ZnO NW-based PENGs. (a) Schematic diagram showing the design and structure of the NG. (b) Cross-section SEM image of the NG showing the integration of aligned NWs and the top electrode.Inset shows an NW that is forced by the electrode to bend.[START_REF] Wang | Direct-current nanogenerator driven by ultrasonic waves[END_REF] Figure 1 . 1 Figure 1.12 (a) schematic showing the modulation and (b) SEM top view image of a force sensitive pixel based on individual ZnO NW. 
Adapted from[START_REF] Perez | Matrice de nanofils piézoélectriques interconnectés pour des applications capteur haute résolution : défis et solutions technologiques[END_REF] Fig. 1 .Figure 1 . 11 Figure 1.13 (a) Principle of the PE-FET, in which the piezoelectric potential across the NW created by the bending force F replaces the gate in a conventional FET. The contacts at both ends are conductive. [86] (b) Corresponding I-V characteristics of the ZnO NW for different bending cases. This is the I-V curve of the PE-FET. [2] (c) The principle of the piezoelectric gated diode, in which one end is fixed and enclosed by a metal electrode, and the other end is bent by a moving metal tip. Both have Ohmic contact with ZnO. The piezoelectric potentialat the tensile surface acts like the p-n junction in a conventional diode.[86] (d) The sequence of SEM images of the ZnO NW at various bending angles and the corresponding I -V characteristics.[START_REF] He | Piezoelectric Gated Diode of a Single ZnO Nanowire[END_REF] [e] is the piezoelectric coefficient with the form, 51 ± 0.04 ⁄ , = 1.22 ± 0.04 ⁄ , = -0.45 ± 0.02 ⁄ are measured for ZnO thin film[START_REF] Carlotti | Acoustic investigation of the elastic properties of ZnO films[END_REF]. [κ] is the dielectric constant, are values of bulk ZnO Figure 2 . 2 Figure 2.1 (a) Dependence of the total polarization (C/m2) on strain in the range -0.08 to +0.08 according to the classic linear model (LM) and nonlinear (quadratic) model (NLM). (b) Variation of the polarization (C/m2) in a cross-section of a ZnO NW. The perpendicular (parallel) strain varied from -2.8% (+2.8%) to +2.8% (-2.8%). The calculated polarization of the NLM is on the left half and the LM on the right. [102] 2 2 Schematic of cylindrical NW under (a) bending and (b) compression. Figure 2 . 3 23 Figure 2.3 Effect of the size scaling down of a ZnO cantilever in terms of axial strain, axial deformation and stiffness under fixed lateral force (80 nN). The reference NW (scaling factor a = 1) features r = 25 nm and L = 600 nm).[START_REF] Hinchet | Scaling rules of piezoelectric nanowires in view of sensor and energy harvester integration[END_REF] Figure 2 . 4 24 Figure 2.4 (a) Young's modulus of wurtzite ZnO NWs obtained by full ab initio calculations and continuum model. [112] (b) Young's modulus of the pristine and H-passivated ZnO NWs as a function of the diameter.The inset is the equilibrium axial lattice constant L 0 (% strain from bulk) of two kinds of NWs as a function of the diameter.[START_REF] Qi | Different mechanical properties of the pristine and hydrogen passivated ZnO nanowires[END_REF] Figure 2 . 5 25 Figure 2.5 (a) side and (b) cross-sectional piezoelectric potential distribution for a ZnO NW with diameter of 50 nm and length of 600 nm at a lateral bending force of 80 nN. [120] Figure 2 . 6 26 Figure 2.6 (a) Configurations of electrodes: top-total (TT), bottom left (BL), bottom right (BR), bottom total (BT), bottom center (BC) and top-center (TC) contacts.[122] (b) Viewand schematic of a cylindrical NW (radius 150 nm and length 2 µm) surrounded by free space and subject to compressive force.[START_REF] Araneo | Piezo-semiconductive quasi-1D nanodevices with or without anti-symmetry[END_REF] (c) Piezopotential along the entire axis of the NW with floating and grounded base.[START_REF] Araneo | Piezo-semiconductive quasi-1D nanodevices with or without anti-symmetry[END_REF] Figure 2 . 7 27 Figure 2.7 (a) Schematics of contact design for a sensor. 
(b) Piezopotential varies with the bending force in the linear and nonlinear cases.In bottom contact configuration, the potential increases following a linear law (a = 7.3×10 6 V/N), whereas in the "top-bottom" contact case, ∆V=bF+cF 2 with b = 7.5±0.1×10 6 V/N and c = 3×10 13 V/N 2 .[START_REF] Hinchet | Scaling rules of piezoelectric nanowires in view of sensor and energy harvester integration[END_REF] Figure 2 . 8 28 Figure 2.8 (a) Schematics of the force-displacement sensing device pixel based on a vertical piezoelectric ZnO NW with contact (δ = 0) and non-contact (δ 0) electrode placement. (b) Potential distribution in the pixel for a 124 nm top displacement (F = 80 nN). Impact of NW-electrode distance and electrode thickness is presented.[START_REF] Perez | Static finite element modeling for sensor design and processing of an individually contacted laterally bent piezoelectric nanowire[END_REF] Figure 2 . 9 29 Figure 2.9 Schematic model showing (a) the conical NWs based LING device and (b) the setup for measuring the energy conversion. The conical NWs are under compressive strain during the deformation. (c) The unit cell and model used for calculating the potential distribution across the top and bottom electrodes of the NG with the presence of a pair of conical NWs. The corresponding cross sections, at which the potential distributions were exhibited, are indicated by dashed lines. The results are shown in (d) and (e), respectively.(f) Cross section output potential induced by perfect cylindrical NWs (e.g., zero conical angle).[START_REF] Hu | High-Output Nanogenerator by Rational Unipolar Assembly of Conical Nanowires and Its Application for Driving a Small Liquid Crystal Display[END_REF] Figure 2 . 2 Figure 2.10 (a) Structure of the VING before and after vertical compression. (d=thickness). Structure and dimensions of the NG cell in the reference case. The applied pressure is 1 MPa. (b) Diagram of the VING working principle.The yield (η) has been calculated for each step using the parameters of (a). d x , E x and ε x are the thickness, Young modulus and dielectric constant of layer x. Index eq indicates that the corresponding layer is modeled as a uniform equivalent medium. T is the stress and e 33 the piezoelectric coefficient relevant to this strain configuration.[START_REF] Hinchet | Design and guideline rules for the performance improvement of vertically integrated nanogenerator[END_REF] Figure 2 . 2 Figure 2.11 (a) FEM simulation of the distribution of the electric potential generated by 15 cells × 15 cells NG matrix embedded with PMMA and Si 3 N 4 , surrounded by air and strained under 1 MPa. (b) Difference of electric potential generated at the top electrode by a NG surrounded by air as a function of the NG matrix size. (c) Schematics of different NG structures using various materials and NWs densities. (d) Electric potential, and (e) electric energy of a NG core cell using different top insulating materials, function of the NWs density as the ratio parameter, also compared with a thin ZnO layer. Reprinted with permission [5]. Figure 2 . 12 FEM 212 Figure 2.12 FEM Simulation results for a VING individual composite cell as a function of the inclination angle to the vertical axis. A single ZnO NW is integrated inside the individual cell. (a) Absolute value of the piezoelectric potential generated. 
(b) Electric potential (mV) inside an individual cell taking into account the c-axis correction and (c) neglecting the c-axis correction.[START_REF] Tao | Will composite nanomaterials replace piezoelectric thin films for energy transduction applications? Future Trends in Microelectronics: Journey into the Unknown[END_REF] Figure 2 . 2 Figure 2.13 FEM simulations results of a single VING cell with 2 NWs with inclination angles on the xz plane: a) Displacement (nm) of one cell under compression including one vertical NW and one NW inclined at an angle of 11°. b) Electric potential at the top electrode of a single cell in function of the inclination angle to the vertical axis of one NW. c) Displacement (nm) of one cell under compression composed of one vertical NW and one NW inclined at an angle of 11° on the yz plane. [129] ZnO NWs in function of different sets of inclinations angles (lower than 12°) to the vertical axis. [129] NWs inclination conditions (<12°) Electric potential (mV) Ideal condition (vertical NWs) 68.9 Inclination towards the outside of the cell (different angles) 63.5 Inclination towards the middle of the cell(same angles) 68.3 Inclination towards the same side of the cell(small different angles) 69.5 Inclination towards the same side of the cell(same angles) 69 Inclination towards the same side of the cell(large different angles) 62.3 Inclination towards the outside of the cell(same small angles) 69.4 Inclination towards the outside of the cell(same large angles) 57 c. Individual VING composite cell including 4 NWs. In the case of 4 inclined NWs inside a composite cell, 4 different cases are simulated with inclination angles below 12° (see Fig. 2.14). The obtained potentials at the top electrode are presented in the same Figure. In this case a systematic reduction of the electric potential is observed, ranging from 1 to 13%. Figure 2 . 14 214 Figure 2.14 Results of the deformation (nm) of the individual cell with 4 ZnO NWs representing the different conditions of the simulations (with sets of inclinations angles lower than 12° to the vertical axis). The electric potential (absolute value) at the top electrode is presented for each condition. The simulated electric potential for the ideal condition with vertical NWs is 68.9mV for comparison. [129] Figure 2 . 2 Figure 2.15 FEM simulations results of the displacement (nm) of different single cells under compression with inclination angles below 12°. The electric potential (absolute value) at the top electrode is presented for each simulation: a) single cell composed of 36 ZnO NWs, b) single cell composed of 64 ZnO NWs.[START_REF] Tao | Will composite nanomaterials replace piezoelectric thin films for energy transduction applications? Future Trends in Microelectronics: Journey into the Unknown[END_REF] Figure 2 . 2 Figure 2.16 (a) Diameter dependence of the dielectric constant in the single pencil-like ZnO NW: (blue dot) experimental results, (red solid line) fitted results by core-shell composite NW model, and (green dashed line) dielectric constant of bulk ZnO. [130] (b)Frequency dependence of piezoelectric coefficient of ZnO nanobelt, bulk (0001) ZnO, and x-cut quartz. Only the piezoelectric coefficient of ZnO nanobelt is frequency dependent.[7] Figure 2 . 17 217 Figure 2.17 Scheme of (a) a composite piezoelectric material vertically integrated on a metallic substrate and (b) the NG cell in compression mode. Figure 2 . 
2 Figure 2.18 (a) Absolute value of potential, (b) surface density of elastic strain energy, (c) surface density of electric energy of NG cells working in compression mode using ZnO thin film properties, nano dielectric, piezoelectric constants and NW properties combining nano dielectric and piezoelectric constants, respectively. Results using a ZnO thin layer were calculated as reference. Figure 2 . 2 Figure 2.19 (a) Scheme of VING plate working under bending. [131] (b) Cross-section view of the VING structure. (c) Scheme of NG cell when the device is working under bending. [131] 11 Figure 2 . 20 11220 Figure 2.20 Schematic of energy transfer process within a NG cell at the center of a bending VING plate. (a) Mechanical energy transferred into ZnO NW. (b) Mechanical energy stored in the NW converted into electrical energy due to the piezoelectric effect. (c) Electrical energy transfer to the output circuit. Figure 2 . 21 221 Figure 2.21 Absolute value of (a) potential, (b) surface density of elastic strain energy, (c) surface density of electric energy and (d) energy conversion ratio of NG cells working in flexion mode using ZnO thin film properties, nano dielectric constant, nano piezoelectric constant and NW properties respectively. Results using a ZnO thin layer were calculated as reference. Figure 2 . 22 222 Figure 2.22 Topview map of (a) displacement and (b) strain tensor when PMMA is used. Topview map of (c) displacement and (d) strain tensor when Al 2 O 3 is used. Fig. 2 . 2 Fig. 2.23 displays the simulation results of NG cells using PMMA, SiO 2 , Si 3 N 4 , and Al 2 O 3 as matrix material respectively, as well as the comparison with a ZnO thin film. With higher rigidity, SiO 2 , Si 3 N 4 , or Al 2 O 3 matrix enhanced the input strain of the NG cell while the whole NG membrane was still bended by 100 Pa pressure (Fig. 2.23b). With larger relative permittivity, they reduced the energy loss passing through the top insulating layer (Fig. 2.23c).As a result, the potential generated was higher than the thin film starting from a ratio = 0.3 -0.5, and reached a value that was 1.5 -2.5 times larger than the ZnO thin film (Fig.2.23a). Since the improvement was majorly due to the increase of input strain and the Figure 2 . 23 223 Figure 2.23 Absolute value of (a) potential, (b) surface density of elastic strain energy, (c) surface density of electric energy and (d) energy conversion ratio of NG cells working in flexion mode using PMMA, SiO 2 , Si 3 N 4 , and Al 2 O 3 as matrix material respectively. Results using a ZnO thin layer were calculated as reference. Figure 2 . 2 Figure 2.24 (a) Potential changing trend with the increasing Young's modulus and Poisson's ratio of matrix materials. (b) Potential changing trend with the increasing relative permittivity of matrix materials. [132] Figure 2 . 2 Figure 2.25 (a) Deformation and side view of the reference piezoelectric transducer (not in scale) under a pressure of 100 Pa (the deformation is exaggerated in this figure). (b) Surface strain on the active layer of the membrane piezoelectric transducer under a pressure of 100 Pa [133]. Figure 2 . 2 Figure 2.26 Improved structure integrating a piezoelectric layer in the middle of the membrane (not in scale). a) Structure without a top electrode, b) structure including an electrode on top of the whole membrane, c) structure including an electrode only over the position of the piezoelectric layer. Figure 2 . 2 Figure 2.27 Further improvement of the transducer structure. 
In this case separate layers of piezoelectric materials are integrated in the regions were the strain is negative and positive. Figure 2 . 2 Figure 2.28 (a) Simulation results of the basic structure integrating a piezoelectric layer in the middle of the membrane. (b) Simulation results of the potential generated by a side piezoelectric layer. The other two piezoelectric layers where left without electrode to simplify the modeling. The resulting potential of the 3 electrodes can be connected in series in order to boost the electric potential. Figure 2 . 2 Figure 2.29 (a) Piezoelectric potential, (b) parameter η, (c) free electron concentration n, and (d) activated donor center concentration N D + cross the NW (R = 25 nm, L = 600 nm) Figure 2 . 30 230 Figure2.30 Piezoelectric potential distribution for a 1 × 10 16 cm -3 -doped ZnO NW with R = 150 nm and L = 4 μm pressed by a uniaxial compressive force (along the z axis).[6] Figure 2 . 2 Figure 2.31 (a) Scheme of a 2D NG cell model with geometry factors. (b) Piezopotential generated by a 2D NG cell model under compression simulated while the doping concentration of the ZnO NW varies from 10 to 10 . Figure 2 . 32 232 Figure 2.32 Scheme of the FLP assumption with (a) energy band diagram, and depletion domain distribution of cylindrical NWs with large and small radius. (b) Depletion width as a function of the donor concentration and the radius of ZnO NWs, calculated with the assumption of mid-gap Fermi level pinning.[START_REF] Mouis | Title Materials Research Society (MRS) Spring Meeting & Exhibit[END_REF] Figure 2 . 2 Figure 2.33 Scheme of NG cell model with equivalent surface charge applied to the boundaries as SFLP assumption. Models with FLP on top surface, on side surfaces and on all surfaces will be used in the following simulations. Figure 2 . 2 Figure 2.34 Schematic of 2D axisymmetric cylindrical model of the NG cell. Figure 2 . 35 235 Figure 2.35 Transformation of NG cell geometry from 3D cuboid model to 2D axisymmetric cylinder model. In the cross section view, (a) the circle is inscribed in the square and (b) the square is inscribed in the circle. Figure 2 . 2 Figure 2.36 Comparison of piezopotential generated by 2D axisymmetric NG cell under compression with fixed and free lateral walls (SFLP on all surfaces are applied) Figure 2 . 2 Figure 2.37 (a) Schematic of a flexible NG disk with all-clamped edge. Scheme of (b) the central NG cell in a bending device and (c) the strain X distribution on its sidewalls. Figure 2 . 38 238 Figure 2.38 Maps of polarization, potential and carrier concentration for NG cell with SFLP (a) on top interface and (b) on all interfaces, containing a NW with a radius of 100 nm and a length of 600 nm. = 5 × 10 . Figure 2 . 39 239 Figure 2.39 Piezopotential of NG cell having free Fermi level, FLP on top surface, FLP on lateral surfaces and on all surfaces, with a radius of 100 nm, and a length of (a) 600 nm; (b) 2 μm. = 5 × 10 . Figure 2 . 2 Figure 2.40 Piezopotential of NG cell having free Fermi level, FLP on top surface, FLP on lateral surfaces and on all surfaces, with a radius of 25 nm, and a length of 600 nm. = 5 × 10 . Figure 2 . 41 241 Figure 2.41 Geometry effect of the NW length on the output of the NG cell. Piezopotential comparison between NW with 600 nm length and 2 μm length in the case of (a) SFLP on top surface and (b) SFLP on all surfaces. = 5 × 10 . Figure 2 . 42 242 Figure 2.42 Geometry effect of the NW radius. 
Piezopotential comparison between NW with 100 nm and 25 nm radius in the case of (a) SFLP on top surface and (b) SFLP on all surfaces.= 5 × 10 . Figure 2 . 43 243 Figure 2.43 Variation of the piezopotential in function of the NW radius for N D = 10 17 cm -3 . = 5 × 10 . Figure 2 . 44 244 Figure 2.44 Piezopotential generated under compression (1 MPa) by a ZnO thin film cell with top SFLP, a NG cell with SFLP on all surfaces. The NW of the NG cell has a radius of 100 nm, and a length of 2 μm. = 5 × 10 . II. 4 . 4 . 1 Figure 2 . 45 441245 Figure 2.45 Piezopotential of NG cell having free Fermi level, FLP on top surface, and on all surfaces, with a radius of 100 nm, and a length of (a) 600 nm; (b) 2 μm. = 5 × 10. Figure 2 . 46 246 Figure 2.46 Piezopotential of NG cell in function of the doping concentration when the NW has free Fermi level, SFLP on top surface, and on all surfaces.= 5 × 10 . Figure 2 . 47 247 Figure 2.47 Comparison between the piezopotential generated by thin film with FLP effect on top surface, and NG cells with FLP effect on all surfaces in flexion (extensive bending) mode. The NW of the NG cell has R = 100 nm, L = 2 μm and = 5 × 10 . Figure 2 . 2 Figure 2.48 Schematics of NG plates under (a) stretching and (b) contractive bending. (c) Non-symmetric performance of the NG under contractive and stretching bending in FEM modeling. = 5 × 10 . Figure 3 . 1 31 Figure 3.1 (a) Cross-section view of SEM image showing the VING structure. Image of packaged NG device for working (b) under compression and (c) under flexion. Figure 3 . 2 32 Figure 3.2 (a) Schematic showing the experimental setup for the piezoelectric charge detection from an individual BTO NW. Inset shows the SEM image of the suspended NW under test. (b) Acquired output signal from the charge amplifier for a BaTiO 3 NW under a periodic tensile load.[180] .Figure 3 . 3 33 Figure 3.3 Schematics showing the setup for probing the 3D piezoelectric tensor of a single c-axis GaN NW. (a) The experimental setup includes an AFM, a function generator, and a lock-in amplifier. (b) Configuration for measuring d 33 and d 13 . The NW is laying on Si substrate with an insulating SiO 2 layer. The NW is clamped at two ends by metals contacts.An AC voltage is applied between these electrodes, resulting in an axial electric field. The long axis of the NW is placed perpendicular to the AFM cantilever. (c) In this configuration the NW is laying on a Si substrate coated with a conductive Au layer. The electric field is applied between the tip of the conductive AFM probe and the grounded substrate. Torsion of the cantilever measures the induced shear strain allowing identification of d 15 .[186] Figure 3 . 4 34 Figure 3.4 Direct piezoelectric measurement by lateral bending. (a) The experimental setup, including a high impedance preamplifier, and AFM probe. (b) Measured electric potential from an individual GaN NW vs. applied force.[START_REF] Xu | An improved AFM cross-sectional method for piezoelectric nanostructures properties investigation: application to GaN nanowires[END_REF] Figure 3 . 5 35 Figure 3.5 Schematic principle of the Young's modulus measurement conducted using an AFM tip. Figure 3 . 6 36 Figure 3.6 (a) 3D schematics and (b) top view SEM image of polycrystalline Si beam based nano-switches. Width a and depth b of the Si beam are marked in the figure. Figure 3 . 7 37 Figure 3.7 AFM topographic image of the Si beam. The red arrows marked the position of the applied force. Figure 3 . 
8 38 Figure 3.8 (a) The deflection of the AFM cantilever varying with the total deflection Z, (b) The deflection of the Si beam varying with the force applied at different positions. Figure 3 . 9 39 Figure 3.9 (a) Calculated Young's modulus along the Si beam with error bar. The red dash line presents the assumed correction by considering over-etching in fabrication. (b) Schematics showing geometry of the poly-Si beam when there is over-etching. Figure 3 . 3 Figure 3.10 (a) SEM image and (b) 3D AFM topographic image of GaAs NW arrays. Fig. 3 . 3 Fig. 3.11a and 3.12a shows the AFM topographic image of two NWs and the positions where the force was applied (red arrows). Similar to Si beam measurement, the effective length L of the different points is defined by the distance between each point and point 0. Figure 3 .Figure 3 . 33 Figure 3.11 (a) AFM image of the GaAs NW. The red arrows mark the position of the applied force on the first NW. (b) Deflection of the AFM cantilever varying with the total deflection Z, and (c) deflection of NW No1 varying with the force applied at different positions. Figure 3 . 3 Figure 3.13 Young's modulus of GaAs measured at different positions on the NW. Property of bulk GaAs is taken as a reference. Figure 3 . 3 Figure 3.14 (a) Photo of the measurement set-up for characterizing individual piezoelectric NWs. (b) The experimental setup, including a high impedance preamplifier, and AFM probe [187]. barrier of the PtSi-semiconductor NW contact Figure 3 . 3 Figure 3.15 (a) SEM image of undoped GaN NWs; (b) AFM topographic image of undoped GaN NWs. The cross symbols mark the positions where bending force is applied. Figure 3 . 3 Figure 3.16 (a) Piezopotential generated by bending the GaN NW with a force of 2.3 μN at three positions (point 1, 2 and 3 in Fig. 3.15) along it. The blue curve displays the force applied on the NW and the red curve represents the potential signal acquired by the oscilloscope. (b) The piezopotential generated by bending NW varies with the applied force. Figure 3 . 17 317 Figure 3.17(a) and (b) AFM topographic images of two ultra-long GaAs NWs with cross marks where the bending force is applied. (c) and (d) Piezoelectric response of GaAs NWs. The blue curve describes the force applied on the NW and the red curve represents the potential signal acquired by the oscilloscope. Figure 3.17(a) and (b) AFM topographic images of two ultra-long GaAs NWs with cross marks where the bending force is applied. (c) and (d) Piezoelectric response of GaAs NWs. The blue curve describes the force applied on the NW and the red curve represents the potential signal acquired by the oscilloscope. Figure 3 .Figure 3 . 33 Figure 3.18 (a) Relationship between the maximum efficient deflection and the bending force. (b) Piezopotential generated by the bending of the NW varies with the efficient deflection/ applied force. Figure 3 . 20 320 Figure 3.20 Analysis of the comparison between ZnO NW and ZnO/PZT NW. (a) Cantilever deflection increases linearly with the total deflection which is controlled by a ramp plot. (b) NW deflection is proportional to the bending force. Piezopotential varies with (c) NW deflection and (d) bending force. Figure 3 . 3 Figure 3.21 SEM cross-section view images of (a) ZnO NW arrays and (b) ZnO/BTO core-shell NW arrays. AFM topographic images of (c) ZnO NWs and (d) ZnO/BTO core-shell NWs with cross symbols marking where the bending force is applied. Piezoelectric response of (e) ZnO NWs and (f) ZnO/BTO core-shell NWs. 
The blue curve describes the force applied on the NW and the red curve represents the potential signal acquired by the oscilloscope. Figure 3 . 22 322 Figure 3.22 Analysis of the comparison between ZnO NW and ZnO/BTO NW. (a) Cantilever deflection increases linearly with the total deflection which is controlled by a ramp plot. (b) NW deflection is proportional to the bending force. Piezopotential varies with (c) NW deflection and (d) bending force. Figure 3 . 23 323 Figure 3.23 Piezoelectric response of ZnO/BTO NW under ramp bending force of (a) 320 nN and (b) 250 nN. Figure 3 . 3 Figure 3.24 AFM topographic images of ZnO NWs for periodic bending (a) at one fixed point and (b) at several points along the NW with white symbols marking where the bending force is applied. (c) Piezoelectric response of ZnO NW shown in (a). (d) Piezoelectric response of ZnO NW shown in (b). Figure 3 . 25 325 Fig.3.25 synthesizes all the results shown in previous measurements and compares the piezoelectric response of different NWs. GaN, GaAs and ZnO NWs were compared in Fig.3.25a. ZnO NWs with a cylindrical cross section came from Institut des Nanotechnologies de Lyon, cited as ZnO cylindrical. ZnO NWs with hexagonal cross section were fabricated in our group, later cited as ZnO hexagonal. Apparently, the ZnO NWs generated higher potential than GaN and GaAs NWs. The threshold of these potential curves was related to the Schottky barrier height of the NW-PtSi contacts and the piezoelectric coefficients. The higher the barrier was, the larger was the threshold. Correspondingly, the piezoelectric behavior of ZnO-based NWs were also modified by depositing different shells, and even by the NW morphology (Fig.3.25b). Figure 3 . 26 326 Figure 3.26 Effective piezoelectric coefficients of different NWs. Fig. 3 Figure 3 . 27 Figure 3 .Figure 3 .Fig. 3 . 3327333 Fig.3.27 presents the measurement set-up that we developed to measure the output potential of rigid NGs under compression. The main components are the actuator, the sample holder, the force load and the force sensor (Fig.3.27a). The sample holder is supported and moved by a PC-controlled linear actuator, whose position can be adjusted in three directions (x, y and z) manually. Above the sample holder, there is a ceramic rod acting as the force load to apply pressure on the sample during the measurement. The ceramic rod is in contact with a force sensor, which measures the force applied between the rod and the sample holder (or the sample on it). The electrodes of the sample connect to the voltage, current and charge preamplifiers via a transmission unit. Then the output electrical signal of the preamplifier is converted into a digital signal by an analog to digital converter (ADC) which is connected to a PC. Besides, two softwares based on Labview platform were developed to control the actuator and to acquire the force and voltage signals (Fig.3.27b). Figure 3 . 30 330 Figure 3.30 Typical piezoelectric response of NGs to periodic compressing/releasing force. (a) The compressing signal is larger due to the force overshoot. (b) Typical shape of the electrical signal pulse obtained after load release. Figure 3 . 31 331 Figure 3.31 Potential influenced by the top PMMA insulating layer as the NG is compressed by a force of 1.5 N. Figure 3 . 3 Figure 3.32 (a) Schematic representation and equivalent electrical circuit of a VING. [205] (b) Potential generated by one force pulse decaying exponentially with time. The inset presents the RC time constant. Figure 3 . 
33 333 Figure 3.33 Electromechanical measurement set-up under controlled temperature (a) Image of main components including the actuator, sample holder, force sensor, thermoelectric generator module (Peltier generator) and temperature sensor installed above the metallic holder. (b) Image of set-up to characterize rigid NGs under compression with temperature control system. Figure 3 . 3 Figure 3.34 (a) Schematics of the temperature control system. (b) Schematic of integrating the Peltier module into the sample holder. Figure 3 . 3 Figure 3.35 (a) Schematic of the process within one thermal cycle. (b) Open circuit potential generated by NG No4 at room temperature before and after each thermal cycles. Figure 3 . 3 Figure 3.36 (a) Open circuit voltage generated by NG No4 at each temperature step during the first thermal cycle. (b) Open circuit voltage improved by the temperature increase of NG In29A in the first cycle. Figure 3 . 3 Figure 3.37 (a) Potential and (b) instantaneous power generated by the NG device at room temperature, 70℃ and after cooling down within one thermal cycle. Figure 3 . 3 Figure 3.38 Potential, instantaneous power and power density generated by (a) ZnO NW based NG (NG No4) and (b) reference PZT commercial disk vary with the electric load Figure 3 . 39 339 Figure 3.39 Schematics of flexible NGs bent (a) by the airflow and (b) manually. Voltage signal acquired by the oscilloscope of flexible NGs bent (c) by the airflow and (d) by hands. Figure 3 . 3 Figure 3.40 (a) Potential generated by NGs varies with the thickness of the top layer. (b) Potential generated by flexible NGs with PMMA matrix and top insulating layers. NGs consist of NWs grown under the same conditions, but with different PMMA deposition process. Figure 3 . 41 341 Figure 3.41 Potential generated by flexible NGs with (a) Al 2 O 3 and (b) Si 3 N 4 matrix. Figure 3 .Figure 3 . 33 Figure 3.42 Flexible NG in the characterization bent (a) downwards (contractive bending) and (b) upwards (stretching bending), respectively. 3 3 Figure 3 . 3 Figure 3.44 (a) Potential of reference device without ZnO NWs, bent manually. (b) Piezopotential of VING device bent manually with quick deformation. Figure I. 2 2 Figure I.2 (a) Continuous quantity approximated by typical elements and nodes. (b) Function (solid blue line) approximated with (dashed red line), which is a linear combination of linear basis functions ( is represented by the solid black lines). The coefficients are denoted by through . Figure II.1 (a) Schematic diagram illustrating the fabrication processes of the ZnO NW device based on NSL: 1) thermal SiO 2 deposition, 2) α-carbon deposition (etch-stop layer), 3) PECVD of SiO 2 (sacrificial layer), 4) sacrificial layer patterning, 5) ZnO ALD, 6) top view after ZnO plasma etching, 7) sacrificial layer removal, and 8) ZnO NW device after metal electrode deposition. (b) Field-emission SEM image of ZnO NW arrays fabricated by NSL. The width and height of the ZnO NW were 70 and 100 nm, respectively. [241] Figure Figure II.2 (a) Schematic of ICP-RIE instruments. (b) SEM image of GaAs NW arrays etched by ICP-RIE with Ni as the mask with Cl 2 :N 2 of 8:1, at 5 mTorr, 60 -500 W of RF platen and source powers, respectively [248]. (c) Schematic illustration of GaN-based nanorod LEDs' process: using Ni/Si 3 N 4 as nano-masks formation. The reaction products after RTA and ICP-RIE etching, leading to the formation of GaN-based nanorod LEDs. 
(d) SEM image of GaN-based nanorod LEDs samples made by etching at fixed the RTA temperature 850 ℃, annealing time 1 min, Cl 2 /Ar flow rate of 50/20 sccm, ICP/Bias power of 400/100W, and chamber pressure of 0.67Pa for 3 min of etching time [250]. Figure II. 3 3 Figure II.3 Schematic representation of the four basic NW growth mechanisms: (a) Selective area epitaxy. An epitaxial layer nucleates in openings of a mask layer and continuously grows in height. Its lateral growth is restricted by low-energy facets. (b)Oxide-assisted growth. The semiconductor and its oxide are adsorbed on the surface, creating nucleation centers which separate into a semiconductor core and a passivating oxide shell. (c) Homo-particle growth. A seed particle is formed consisting of one or all elements used for wire growth. During growth both length and diameter increase as the seed particle size is variable. (d) Hetero-particle growth. A seed particle (typically Au) is deposited prior to growth. During heating to growth temperature the seed particle alloys with the substrate and/or material from the gas phase. Particle size during growth is nearly constant[START_REF] Mandl | Growth mechanism of self-catalyzed group III-V nanowires[END_REF]. Figure II. 4 4 Figure II.4 Schematic representation of high-density NW fabrication in a polymer matrix.(a) An asymmetric diblock copolymer annealed above the glass transition temperature of the copolymer between two electrodes under an applied electric field, forming a hexagonal array of cylinders oriented normal to the film surface. (b) After removal of the minor component, a nanoporous film is formed. (c) By electro-deposition, NWs can be grown in the porous template, forming an array of NWs in a polymer matrix.[START_REF] Thurn-Albrecht | Ultrahigh-density nanowire arrays grown in self-assembled diblock copolymer templates[END_REF] Figure II. 5 5 Figure II.5 (a) Scheme presenting the growth procedure of ZnO NW arrays by CBD method. (b) Specimen fixed on a glass holder with four edges covered by kapton tape. (c) Specimen tilted face down in the precursor solution. Figure II. 6 Figure II. 8 Figure II. 9 Figure II. 10 FigureFigure 68910 Figure II.6 (a) Schematics of basic reaction principle of ALD process. (b) Image of ALD instruments. b. Thickness control Currently, we are not able to measure the thickness of the top PMMA layer without damaging the sample. However, based on the experimental-theoretical combination diagram (Fig. II.14), we can estimate the thickness as listed in Table 3.1. Figure II. 14 14 Figure II.14 Experimental-theoretical combination diagram of the dependence of the thickness on the spin speed for A6 resist. Al 2 O 2 3 thin film was deposited by ALD using water and Tri Methyl Aluminum (TMA) () as precursors, which may be employed for the controlled deposition using the two-half reactions:[START_REF] Dillon | Surface chemistry of Al2O3 deposition using Al(CH3)3 and H2O in a binary reaction sequence[END_REF]281] mechanisms are summarized in Fig. II.15. Theoretically, the thickness of the film per deposition cycle should be typically 1 monolayer, in this case, 0.18 nm per cycle. Figure II. 15 15 Figure II.15 Mechanisms for the surface chemistry of Al 2 O 3 controlled deposition using TMA and H 2 O in a binary reaction sequence.[START_REF] Dillon | Surface chemistry of Al2O3 deposition using Al(CH3)3 and H2O in a binary reaction sequence[END_REF] Figure II. 
16 16 Figure II.16 Top-view SEM images of ZnO NWs embedded into Al 2 O 3 matrix and top insulating layer with a thickness of (a) 0 nm, (b) 15 nm, (c) 50 nm, (d) 140 nm, (e) 205 nm, and (f) 310 nm. Figure III. 1 1 Figure III.1 Schematic of the structure and feedback system of AFM, presenting the working principle. Figure III. 2 2 Figure III.2 SEM image of the AFM tip with (a) PtIr5 coating and (b) PtSi coating. Figure 4 . 1 41 Figure 4.1 Potentiel (en valeur absolue) généré par des cellules NG fonctionnant (a) en compression et (b) en flexion. Les matériaux de matrice sont PMMA, SiO 2 , Si 3 N 4 et Al 2 O 3 respectivement. Les résultats obtenus avec une couche mince de ZnO sont indiques pour référence. (c) Optimisation de la configuration du VING en modes compression et flexion. Figure 4 . 2 42 Figure 4.2 Cartes de polarisation, potentiel et concentration de porteurs pour les cellules NG avec SFLP sur toutes les interfaces. Nit = 5  10 11 cm -2 V -1 . Le rayon du NF est de 100 nm et la longueur est de 600 nm. Le matériau de la matrice est le PMMA. Figure 4 . 3 43 Figure 4.3 (a) Vue en coupe de l'image MEB montrant la structure du VING. Image du dispositif NG finale pour travailler (b) en compression et (c) en flexion. Figure 4 . 4 44 Figure 4.4 (a) Principe de fonctionnement du système de caractérisation électromécanique des NFs piézoélectriques individuels. (b) Images topographiques AFM des NFs ZnO. (c) Réponse piézoélectrique des NFs ZnO. Figure 4 . 5 45 Figure 4.5 (a) Réponse piézoélectrique typique de VING sous compression. (b) Potentiel de circuit ouvert généré par un VING à température ambiante avant et après chaque cycle thermique. Figure 4 . 6 46 Figure 4.6 Potentiel de sortie de VING sous flexion manuelle composée d'un effet capacitif et d'une réponse piézoélectrique. Table 1 . 1 11 Characteristics of common energy-harvesting transducers (adapted from Yildiz et al. [27]) Table 1 . 2 12 Mechanical energy available from the ambient environment and human activities Inspired by the recent observation of superior material properties of nanostructures, eletromechanical transducers based on NWs and nanosheets are developed to meet the need of miniature autonomous systems with high integration. The concept of Nanogenerator (NG) technology was first introduced by Wang et al. in 2006 as they fabricated a NG based on piezoelectric effect Energy source Order of magnitude of potential power density ( ⁄ ) Washing machine 58 Mechanical Microwave oven 23 vibration External windows 2.8 Refrigerator 0.35 Human motion 10 ~ 10 3 Airflow ~ 10 2 Acoustic noise ~ 0.1 Table 1 . 3 13 Comparison of mechanical energy harvesting techniques (adapted from[START_REF] Yildiz | Potential Ambient Energy-Harvesting Sources and Techniques[END_REF]) Complexity of process Energy density Current size Problems Very high voltage and Electrostatic Low 4 ⁄ Integrated need of adding charge source Electro-magnetic Very high 24.8 ⁄ macro Very low output voltage Piezoelectric High 35.4 ⁄ macro Low output voltage TENG In the lab 31.2 ⁄ [55] Integrated/ macro high output voltage but low output current, durability problem PENG In the lab 33 ⁄ [62] Integrated/ macro Low output voltage, stability problem I.3 Fundamentals of piezoelectricity I.3.1 What is piezoelectricity? Table 2 . 2 1 Effective first-principles piezoelectric parameter (in C/m 2 ) of ZnO NW, with different diameters (in nm). Atoms Diameter (nm) 12 0.3 48 108 192 → 300 432 588 3.9 bulk ∞ Refs. 
LDA 4.83 4.39 4.49 4.74 4.68 3.47 [114] Functionals PWGGA 1 CRYSTAL 4.78 4.07 3.99 [117] RPBE 50.4 18.1 1.18 [8] PWGGA 7.32 3.96 3.12 2.82 2.50 2.36 1.70 [118] Table 2.2 Effective first-principles piezoelectric parameter (in C/m 2 ) of GaN NW, with different diameters (in nm). Atoms Diameter (nm) 12 0.3 48 108 192 → 300 432 588 3.8 ∞ bulk Refs. PSP 2 1 51.0 23.3 0.255 [8] Functionals PSP2 PBE0 25.8 7.6 6.00 3.13 2.36 0.554 [8] 1.15 [118] PWGGA 5.30 2.70 2.03 1.72 1.54 1.43 0.94 [118] Table 2.3 Effective first-principles piezoelectric parameter (in C/m 2 ) of AlN NW, with different diameters (in nm). Atoms Diameter (nm) 12 0.3 48 108 192 → 300 432 588 3.8 ∞ bulk Refs. PWGGA 8.53 5.14 4.59 4.63 4.19 [117] Functionals PBE PBE0 8.66 5.17 9.15 4.87 3.73 4.24 [117] 1.86 [118] PWGGA 8.69 4.62 3.53 3.03 2.75 2.57 1.73 [118] Table 2 . 5 25 Mechanical and electrical parameters of different matrix materials. PMMA SiO 2 Si 3 N 4 Al 2 O 3 Young's modulus (GPa) 3 73.1 250 400 Poisson's ratio 0.40 0.17 0.23 0.22 Relative permittivity 3.0 2.09 9.7 5.7 II.2.2.2 VING working under compressive pressure (intrinsic ZnO NWs) Table 3 . 1 31 Young's modulus E and dielectric constant / of GaN, GaAs, ZnO, BTO and PZT materials. Data is taken form thin film measurement. Material GaN GaAs ZnO BTO PZT E (GPa) 181 [198] 85.5 [192] 129 128 [199] 125-190 [200] / 9.5 [201] 13.1 [192] 7.77 950-1200 [202,203] 650 [204] Table 3.2 Effective Young's modulus , effective dielectric constant / , radius R, factor * , effective piezoelectric coefficient and force threshold of GaN, GaAs, ZnO, ZnO/BTO and ZnO/PZT NWs. and are deduced from Table 3.1 with linear combination. NWs GaN GaAs ZnO Cylindrical ZnO Hexagonal ZnO/BTO ZnO/PZT E eff (GPa) 181 85.5 129 129 128 146 / 9.5 13.1 7.77 7.77 714 289 R (nm) 54 118 109 116 142 143 * ( • ⁄ • ) 3.50 2.53 2.92 2.74 0.03 0.06 e eff ( ⁄ ) 0.004 0.01 0.085 0.236 9.572 5.076 (nN) 420 54 80 80 200 150 Fig. 3.26 plots of Table Table 3 . 3 33 Distribution of potential generated by NGs with different geometry ratio according to the top layer thickness and applied force. Series number PMMA thickness (μm) Potential (mV) Precursor concentration (mM) Force (N) In28B 0.5 20.7 30 1.5 In29A 1 26.1 30 1.5 In24A 1 37.0 30 3 In24B 1.2 61.6 30 3 In19B 1.3 10.9 30 3 In27A 1.2 21.0 50 6 In27D 1.8 12.1 50 6 In26C 1.8 11.7 50 1.5 Table 3 . 4 34 Piezoelectric constant, geometry parameters and power density of PZT device and NG No4 Sample d 33 (pC/N) t (μm) A (cm 2 ) P (nW) P/t (μW/cm) P/A (μW/cm 2 ) P d (μW/cm 3 ) PZT 400 250 3.14 2000 80 0.64 25 VING 9.93 1.5 1 13 93 0.014 85 Table II . II 1 VING samples with different PMMA deposition speeds. Estimated and measured thickness is listed. The acceleration time is 9 s, and the spin time is 60 s. The substrates are Si wafer, unless otherwise illustrated. Sample reference A2 (rpm) 750 1000 A6 (rpm) 900 850 750 Estimated thickness (μm) Growth concentration (mM) In028B x2 x1 0.5 In029A x2 x2 1 In024A x2 x2 1 In024B x2 x2 1.2 In027A x2 x2 1.2 In027D x4 x3 1.8 In026C x2 x3 1.8 In031A x2 x2 1 In032A x2 x3 1.5 In035A x4 x4 2 In020B x4 x4 2.4 Measured thickness (μm) In026B x4 x2 1.2 In019B x2 x2 1.3 In018B x2 x2 1.5 AII.3.2.2 Al 2 O 3 deposition with ALD method a. 
Deposition mechanism

Exchange functionals of the CRYSTAL software: Perdew-Wang'91 (PWGGA)
Pseudopotentials (PSP) functional

Acknowledgements
Xavier Mescot, Martine Gri, Antoine Gachon, Corinne Perret

Appendix I: Tools for Computational Modeling

• First-principles calculation method

The first-principles approach is a way to study the ground-state properties and excitation spectrum of a many-electron system by searching for the eigenfunctions and eigenvalues of the Hamiltonian with a parameter-free approximation. Generally, the first-principles approach is suitable for small molecules without any adjustable parameters. However, severe approximations have to be introduced to solve it for problems with many electrons. The most successful first-principles method is density functional theory (DFT) within the local (spin-) density approximation (L(S)DA) [START_REF] Kohn | Self-Consistent Equations Including Exchange and Correlation Effects[END_REF]210], where the many-body problem is mapped onto a non-interacting system with a one-electron exchange-correlation potential which is approximated by that of the homogeneous electron gas. LDA has proved to be very efficient for extended systems, such as large molecules and solids. Recently, many researchers have explored the possibility of predicting the ferroelectric and piezoelectric behavior of solids by first-principles techniques. A number of diverse bulk properties, elastic constants, polarization, and piezoelectric constants of the zinc-blende (ZB) and wurtzite (WZ) III-V nitrides AlN, GaN, and InN are predicted from first principles within DFT using the plane-wave ultra-soft pseudopotential method, within both the LDA and the generalized gradient approximation (GGA) to the exchange-correlation functional [211]. The complete piezoelectric tensors of both the ZB and WZ polymorphs of ZnO and ZnS have then been computed by ab initio periodic linear combination of atomic orbitals (LCAO) and DFT methods [START_REF] Catti | Full piezoelectric tensors of wurtzite and zinc blende ZnO and ZnS by first-principles calculations[END_REF]212]. Later, first-principles studies were applied to III-V and II-VI NW structures using hexagonal atom positions similar to the model shown in Fig. I.1 [8,[START_REF] Xiang | Piezoelectricity in ZnO nanowires: A first-principles study[END_REF][START_REF] Mitrushchenkov | Piezoelectric Properties of AlN , ZnO , and Hg x Zn 1 -x O Nanowires by First-Principles Calculations[END_REF][START_REF] Hoang | First-principles based multiscale model of piezoelectric nanowires with surface effects[END_REF][START_REF] Zhao | First-principles calculations of AlN nanowires and nanotubes: atomic structures, energetics, and surface states[END_REF]].

• Molecular dynamics modeling

Molecular dynamics (MD) modeling is concerned with the description of the atomic and molecular interactions that govern the microscopic and macroscopic behaviors of physical systems. The atoms and molecules interact for a fixed period of time to give a view of the dynamical evolution of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where the forces between the particles and their potential energies are calculated using interatomic potentials or molecular mechanics force fields.
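As an aside, the integration loop mentioned above can be illustrated with a few lines of code. The fragment below shows a single velocity-Verlet step for one particle in one dimension; it is purely illustrative, with a harmonic force standing in for a realistic interatomic potential, and all names and parameter values are ours rather than those of any simulation package used in this work.

-- Illustrative only: one velocity-Verlet step for a single particle in 1D.
-- A harmonic potential stands in for a realistic interatomic potential.
data State = State { position :: Double, velocity :: Double } deriving Show

force :: Double -> Double           -- toy force law F(x) = -k x
force x = -springK * x
  where springK = 1.0

-- One integration step with time step dt for a particle of mass m.
verletStep :: Double -> Double -> State -> State
verletStep dt m (State x v) = State x' v'
  where
    a  = force x  / m
    x' = x + v * dt + 0.5 * a * dt * dt
    a' = force x' / m
    v' = v + 0.5 * (a + a') * dt

main :: IO ()
main = mapM_ print (take 5 (iterate (verletStep 0.01 1.0) (State 1.0 0.0)))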
MD Appendix II: Fabrication of Piezoelectric Semiconducting Nanowire Arrays and Vertical Integrated Nanogenerators Two basic methods are involved in the synthesis of semiconducting NWs: top-down method and bottom-up method. Chemical bath deposition (CBD) belongs to the latter and has been used to grow ZnO NWs in our group. In this chapter, we mainly introduce how to fabricate ZnO NWs and to integrate the NGs based vertically aligned ZnO NW arrays. Concerning the ZnO NWs, detailed processes are described, while X-ray diffraction (XRD) and SEM techniques are used to assist the morphology control. Then the ZnO NW arrays are embedded into different dielectric matrices using several techniques such as the spin coating, the sputtering and the atomic layer deposition (ALD), covered by a metal electrode. AII.1 Synthesis techniques of piezoelectric semiconductor NWs From the analytical and computational study to the application in real life, controlled fabrication of NWs with desired longitudinal or axial structures is critical to integrating NWs into various application platforms [START_REF] Thurn-Albrecht | Ultrahigh-density nanowire arrays grown in self-assembled diblock copolymer templates[END_REF][START_REF] Novotny | InP nanowire/polymer hybrid photodiode[END_REF][236][START_REF] Wu | Semiconductor Nanolasers and NanoLEDs[END_REF]. Nowadays there are many different methods for fabricating semiconductor NWs. They are commonly placed into two categories, namely, the top-down and bottom-up approaches. The top-down approach relies on dimensional reduction through selective etching and various nanoimprint techniques. While the bottom-up approach starts with individual atoms and molecules and builds up the desired nanostructures. AII.1.1 Top-down methods Top-down etching techniques have been used for fabricating large devices with complex vertical structures for a long time. Their merits focus on well aligning, positioning, integrating and interfacing NWs to macro systems with high yields and repeatability. AII.1.1.1 Nanoscale spacer lithography (NSL) Spacer patterning is a technique employed for patterning features with line width smaller than can be achieved by conventional lithography. In the most general sense, the spacer is a layer that is deposited over a pre-patterned feature, often called the mandrel. The spacer is subsequently etched back so that the spacer portion covering the mandrel is etched away while the spacer portion on the sidewall remains. Direct patterning techniques using spacer lithography (SL) have been reported to avoid the problems associated with the alignment of the NWs [START_REF] Hua | Polymer imprint lithography with molecular-scale resolution[END_REF][START_REF] Ge | Cross-linked polymer replica of a nanoimprint mold at 30 nm half-pitch[END_REF][START_REF] Liu | Large area, 38 nm half-pitch grating fabrication by using atomic spacer lithography from aluminum wire grids[END_REF]. Large-density NW arrays of complex vertical structures could be fabricated by using these approaches with sizes of tens of nanometers and a lateral resolution of about 2 nm. These methods are expected to be more plausible and suitable for the large-scale manufacturing of NW-based devices. NSL consists of photolithography, thin film could lead to reduced performance on the integrated devices. The concentration change in the solution during the reaction could be responsible for part of it. To solve these problems, we need further study of the synthesis techniques. 
AII.3 Fabrication of rigid and flexible VINGs based on ZnO NWs The fabricated ZnO NWs need to be integrated into a device packaged for further characterization and application. In this section I will describe the process of integration and improvement of the fabricated devices including new dielectrics as matrix material for the composite. AII.3.1 Process from NW arrays to VINGs The process from NW arrays to VING device working under compression can be divided into three steps (Fig The bottom external electrode is a conductive Al tape stuck on a glass plate and connected with the substrate by silver paint. Both external electrodes also offer a position to connect the wires so that the main specimen is kept to be flat. The integration of VING working under bending is more simple. It follows the same process as the VING under compression, but as the substrate is already metallic, the bottom contact is taken directly from it. AII.3.3 Top Al electrode deposition with evaporation method After the deposition of matrix and top insulating layer, a 200 nm Al thin film was deposited on the top of the sample by electron beam evaporator as an electrode. The electron gun current was 180 mA, giving a deposition rate of 0.25 nm/s. AII.4 conclusion In this work, we have successfully synthesized vertically aligned ZnO NWs on both p-type Si wafer and stainless steel foil by CBD methods. The vertical alignment corresponded to the c-axis of ZnO lattice. Influence of growth process parameters were discussed and a preliminary guideline on the morphology control was given. Contact mode AFM operates by scanning a tip attached to the end of a cantilever across the sample surface while monitoring the change in cantilever deflection with a split photodiode detector. The tip contacts the surface through the adsorbed fluid layer on the sample surface. A feedback loop maintains a constant deflection between the cantilever and the sample by vertically moving the scanner at each (x, y) data point to maintain a "set point" deflection. By maintaining a constant cantilever deflection, the force between the probe and the sample remains constant. b. Non-contact mode AFM The cantilever is oscillated at a frequency which is slightly above the cantilever's resonance frequency typically with an amplitude of a few nanometers (<10nm), in order to obtain an AC signal from the cantilever. The probe does not contact the sample surface, but oscillates above the adsorbed fluid layer on the surface during scanning The cantilever's resonant frequency is decreased by the van der Waals forces, which extend from 1nm to 10nm above the adsorbed fluid layer, and by other long range forces which extend above the surface. The decrease in resonant frequency causes the amplitude of oscillation to decrease. The feedback loop maintains a constant oscillation amplitude or frequency by vertically moving the scanner at each (x, y) data point until a "set point" amplitude or frequency is reached. The distance that the scanner moves vertically at each (x, y) data point is stored by the computer to form the topographic image of the sample surface. c. Tapping mode AFM Tapping mode AFM operates by scanning a probe attached to the end of an oscillating cantilever across the sample surface. The schematic in Fig. III.1 explains the working principle. The cantilever is oscillated at or slightly below its resonance frequency with an amplitude ranging typically from 20nm to 100nm. 
The probe lightly "taps" on the sample surface during scanning, contacting the surface at the bottom of its swing. The feedback loop
01774941
en
[ "info", "info.info-ni" ]
2024/03/05 22:32:18
2015
https://inria.hal.science/hal-01774941/file/978-3-319-19282-6_6_Chapter.pdf
Luca Padovani Tzu-Chun Chen Andrea Tosatto Type Reconstruction Algorithms for Deadlock-Free and Lock-Free Linear π-Calculi We define complete type reconstruction algorithms for two type systems ensuring deadlock and lock freedom of linear π-calculus processes. Our work automates the verification of deadlock/lock freedom for a non-trivial class of processes that includes interleaved binary sessions and, to great extent, multiparty sessions as well. A Haskell implementation of the algorithms is available. Introduction Type systems help finding potential errors during the early phases of software development. In the context of communicating processes, typical errors are: making invalid assumptions about the nature of a received message; using a communication channel beyond its nominal capabilities. Some type systems are able to warn against subtler errors, and sometimes can even guarantee liveness properties as well. For instance, the type systems presented in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] for the linear π-calculus [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF] ensure well-typed processes to be deadlock and lock free. Such stronger guarantees come at the cost of a richer type structure, hence of a greater programming effort, when programmers are supposed to explicitly annotate programs with types. In this respect, type reconstruction becomes a most wanted tool in the programmer's toolkit: type reconstruction is the procedure that automatically synthesizes, whenever possible, the types of the entities used by a program; in particular, the types of the channels used by a communicating process. In the present work, we describe type reconstruction algorithms for the type systems presented in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF], thereby automating the static deadlock and lock freedom analysis for a non-trivial class of communicating processes. A deadlock is a configuration with pending communications that cannot complete. A paradigmatic example of deadlock modeled in the π-calculus is illustrated below (νa, b)( a?(x).b!x | b?(y).a!y ) (1.1) where the input on a blocks the output on b, and the input on b blocks the output on a. The key idea used in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] for detecting deadlocks, which is related to earlier works by Kobayashi [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF], is to associate each channel with a number -called level -specifying the relative order in which different channels should be used. In (1.1), this mechanism requires a to have smaller level than b in the left subprocess, and greater level than b in the right one. Since no level assignment can simultaneously satisfy both requirements, (1.1) is flagged as ill typed. This mechanism does not prevent locks, namely configurations where some communication remains pending although the process as a whole can make progress. A deadlock-free configuration that is not lock free is (νa)( *c?(x).c!x | c!a | a!42 ) (1.2) where the communication pending on a cannot complete. There are no interleaved communications on different channels in (1.2), therefore the level-based mechanism spots no apparent issue. 
The idea put forward in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] to reject (1.2) is to also associate each channel with another number -called ticket- specifying the maximum number of times the channel can travel in a message. With this mechanism in place, (1.2) is ill typed because a would need an infinite number of tickets to travel infinitely many times on c. Finding appropriate levels and tickets for the channels used by a process can be difficult. We remedy this difficulty with three contributions. First, we develop complete type reconstruction algorithms for the type systems in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] so that appropriate levels and tickets are synthesized automatically, whenever possible. The linear π-calculus [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF], for which the type systems are defined, can model a variety of communicating systems with both static and dynamic network topologies. In particular, binary sessions [START_REF] Dardha | Session types revisited[END_REF] and, to a large extent, also multiparty sessions [18, technical report], can be encoded in it. Second, we purposely use a variant of the linear π-calculus with pairs instead of a polyadic calculus. While this choice has a cost in terms of technical machinery, it allows us to discuss how to deal with structured data types, which are of primary importance in concrete languages but whose integration in linear type systems requires some care [START_REF] Padovani | Type reconstruction for the linear π-calculus with composite and equi-recursive types[END_REF]. We give evidence that our algorithms scale easily to other data types, including disjoint sums and polymorphic variants. Third, we present the algorithms assuming the existence of type reconstruction for the linear π-calculus [START_REF] Igarashi | Type reconstruction for linear π-calculus with I/O subtyping[END_REF][START_REF] Padovani | Type reconstruction for the linear π-calculus with composite and equi-recursive types[END_REF]. This approach has two positive upshots: (1) we focus on the aspects of the algorithms concerning deadlock and lock freedom, thereby simplifying their presentation and the formal study of their properties; (2) we show how to combine in a modular way increasingly refined type reconstruction stages and how to address some of the issues that may arise in doing so. In what follows we review the linear π-calculus with pairs (Section 2) and the type systems for deadlock and lock freedom of [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] (Section 3). Such type systems are unsuitable to be used as the basis for type reconstruction algorithms. So, we reformulate them to obtain reconstruction algorithms that are both correct and complete (Section 4). Then, we sketch an algorithm for solving the constraints generated by the reconstruction algorithms (Section 5). We conclude presenting a few benchmarks, further connections with related work, and directions of future research (Section 6). The algorithms have been implemented and integrated in a tool for the static analysis of π-calculus processes. The archive with the source code of the tool, available at the page http://di.unito.it/hypha, includes a wide range of examples, of which we can discuss only one in the paper because of space constraints.
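To make the level intuition concrete before moving on, consider the constraints induced by (1.1): the left component requires lvl(a) < lvl(b) and the right one requires lvl(b) < lvl(a). The toy sketch below is purely illustrative, all names are ours, and it is unrelated to the actual analysis developed in Sections 4 and 5; it simply searches a small range of candidate levels and finds no assignment, mirroring the fact that (1.1) is ill typed.

-- Illustrative only: the level constraints arising from example (1.1) are
-- unsatisfiable.  The bound on candidate levels is an arbitrary choice.
import Data.Maybe (listToMaybe)

type Channel = String

data LevelConstraint = Lt Channel Channel    -- level of first < level of second

-- a?(x).b!x forces lvl a < lvl b, while b?(y).a!y forces lvl b < lvl a.
constraints :: [LevelConstraint]
constraints = [Lt "a" "b", Lt "b" "a"]

satisfies :: [(Channel, Int)] -> LevelConstraint -> Bool
satisfies env (Lt c d) =
  case (lookup c env, lookup d env) of
    (Just n, Just m) -> n < m
    _                -> False

-- Brute-force search for a level assignment within a small range.
solve :: [Channel] -> [LevelConstraint] -> Maybe [(Channel, Int)]
solve chans cs = listToMaybe [ env | env <- assignments chans, all (satisfies env) cs ]
  where
    assignments []       = [[]]
    assignments (c : cr) = [ (c, n) : env | n <- [0 .. 3], env <- assignments cr ]

main :: IO ()
main = print (solve ["a", "b"] constraints)   -- prints Nothing: no assignment exists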
The simply-typed linear π-calculus with pairs

The process language we work with is the asynchronous π-calculus extended in two ways: (1) we generalize names to expressions to account for pairs and other data types; (2) we assume that names are explicitly annotated with simple types possibly inferred in a previous reconstruction phase ("simple" means without level/ticket decorations). We annotate free names instead of bound names because, in a behavioral type system, each occurrence of a name may be used according to a different type. Typically, two distinct occurrences of the same linear channel are used for complementary I/O actions. We use m, n, . . . to range over integer numbers; we use sets of variables x, y, . . . and channels a, b, . . . ; names u, v, . . . are either channels or variables; we let polarities p, q, . . . range over subsets of {?, !}; we abbreviate {?} with ?, {!} with !, and {?, !} with #. Processes P, Q, . . . , expressions e, f, . . . , and simple types t, s, . . . are defined below:

Process P, Q ::= 0 | e?(x).P | e!f | P | Q | (νa)P | *P
Expression e, f ::= n | u t | (e,f) | fst(e) | snd(e)
Simple type t, s ::= int | p[t] | p[t] * | t × s

Expressions include integer constants, names, pairs, and the two pair projection operators fst and snd. Simple types are the regular, possibly infinite terms built using the rightmost productions in the grammar above and include the type int of integers, the type p[t] of linear channels to be used according to the polarity p and carrying messages of type t, the type p[t] * of unlimited channels to be used according to the polarity p and carrying messages of type t, and the type t × s of pairs whose components have respectively type t and s. Recall that linear channels are meant to be used for one communication, whereas unlimited channels can be used for any number of communications. We require every infinite branch of a type to contain infinitely many occurrences of channel constructors. For example, the term t satisfying the equation t = ?[t] is a valid type while the one satisfying the equation t = t × int is not. We impose this requirement to simplify the formal development, but it can be lifted (for example, the implementation supports ordinary recursive types such as lists and trees). Since we are only concerned with type reconstruction, we do not give an operational semantics of the calculus. The interested reader may refer to [START_REF] Kobayashi | Linearity and the pi-calculus[END_REF][START_REF] Padovani | Type reconstruction for the linear π-calculus with composite and equi-recursive types[END_REF] for generic properties of the linear π-calculus and to [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] for the formalization of (dead)lock freedom. We conclude this section with a comprehensive example that is representative of a class of processes for which our type systems are able to prove deadlock and lock freedom.

Example 2.1 (full duplex communication). The term

*c?(x).(νa)( fst(x)!a | snd(x)?(y).c!(a,y) ) | c!(e, f ) | c!( f ,e)

(where we have omitted simple type annotations) models a system composed of two neighbor processes connected by channels e and f . The process spawned by c!(e, f ) uses e for sending a message to the neighbor. Simultaneously, it waits on f for a message from the neighbor. The process spawned by c!( f ,e) does the opposite. Each exchanged message consists of a payload (omitted) and a continuation channel on which subsequent messages are exchanged.
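Before examining the example further, note that the syntax above can be transcribed almost literally into a datatype definition. The following Haskell sketch is ours and deliberately simplified with respect to the implementation of the tool: polarities are represented as lists (so # is [Recv, Send] and the empty polarity is []), and the possibly infinite simple types are represented by lazily tying the knot.

module Syntax where

-- Polarities ? and !; a polarity in the paper is a subset of {?, !}.
data Polarity = Recv | Send deriving (Eq, Show)

type Name = String

data SimpleType
  = TInt
  | TChan  [Polarity] SimpleType     -- linear channel   p[t]
  | TChanU [Polarity] SimpleType     -- unlimited channel p[t]*
  | TPair  SimpleType SimpleType     -- t × s

data Expr
  = EInt  Int
  | EName Name SimpleType            -- names carry their simple type annotation
  | EPair Expr Expr
  | EFst  Expr
  | ESnd  Expr

data Process
  = Idle                             -- 0
  | Input  Expr Name Process         -- e?(x).P
  | Output Expr Expr                 -- e!f
  | Par    Process Process           -- P | Q
  | New    Name Process              -- (νa)P
  | Repl   Process                   -- *P

-- Thanks to laziness, a regular infinite simple type such as t = ?[t]
-- can be represented as a cyclic value:
tRec :: SimpleType
tRec = TChan [Recv] tRec

We now return to the full-duplex example.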
Above, each process sends and receives a fresh continuation a. Once the two communications have been performed, each process iterates with a new pair of corresponding continuations. Type systems for deadlock and lock freedom In this section we review the type systems ensuring deadlock and lock freedom [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] for which we want to define corresponding reconstruction algorithms. Both type systems rely on refined linear channel types of the form p[t] n m where the decorations n and m are respectively the level and the tickets of a channel with this type. Intuitively, levels are used for imposing an ordering on the input/output operations performed on channels: channels with lower level must be used before channels with higher level; tickets limit the number of "travels" for channels: a channel with m tickets can be sent at most m times in a message. From now on, we use T , S, . . . to range over types, which have the same structure and constructors as simple types, but where linear channel types are decorated with levels and tickets. We write T for the stripping of T , namely for the simple type obtained by removing all level and ticket decorations from T . For example, ?[int × ![int] n m ] * = ?[int × ![int]] * . Note that • is a non-injective function. We need some auxiliary operators. First, we extend the notion of level from channel types to arbitrary types. The level of a type T , written |T |, is an element of the set Z ∪ {⊥, } ordered in the obvious way and formally defined thus: |T | def =          ⊥ if T = p[S] * and ? ∈ p n if T = p[S] n m and p = / 0 min{|T 1 |, |T 2 |} if T = T 1 × T 2 otherwise (3.1) As an example, we have |int × ?[![int] 1 0 ] 0 0 | = min{|int|, |?[![int] 1 0 ] 0 0 |} = min{ , 0} = 0. Intuitively, the level of T measures the inverse urgency for using values of type T in order to ensure (dead)lock freedom: the lowest level (and highest urgency) ⊥ is given to unlimited channels with input polarity, for which we want to guarantee input receptiveness; finite levels are reserved for linear channels; the highest level (and lowest urgency) is given to values such as numbers or channels with empty polarity whose use is not critical as far as (dead)lock is concerned. Note that |T | is well defined because every infinite branch of T has infinitely many channel constructors. We also need an operator to shift the topmost levels and tickets in types. We define $ n m T def =      p[S] n+h m+k if T = p[S] h k ($ n m T 1 ) × ($ n m T 2 ) if T = T 1 × T 2 T otherwise (3. 2) so that, for example, we have $ 2 1 (int × ?[![int] 1 0 ] 0 0 ) = int × ?[![int] 1 0 ] 2 1 . Next, we define an operator for combining the types of different occurrences of the same object. If an object is used according to type T in one part of a process and according to type S in another part, then it is used according to the type T + S overall, where T + S is inductively defined thus: T + S def =                int if T = S = int (T 1 + S 1 ) × (T 2 + S 2 ) if T = T 1 × T 2 and S = S 1 × S 2 (p ∪ q)[T ] n h+k if T = p[T ] n h and S = q[T ] n k and p ∩ q = / 0 (p ∪ q)[T ] * if T = p[T ] * and S = q[T ] * undefined otherwise Table 1. Typing rules for the deadlock-free (k = 0) and lock-free (k = 1) linear π-calculus. Typing rules for expressions Γ e : t [T-INT] Γ n : int un(Γ ) [T-NAME] Γ , u : T u T : T un(Γ ) Type combination is partial and is only defined when the combined types have the same structure. 
In particular, channel types can be combined only if they have equal message types; linear channel types can be combined only if they have disjoint polarities and equal level. Also, the combination of two channel types has the union of their polarities and, in the case of linear channels, the sum of their tickets. For example, a channel that is used both with type ?[int] 0 1 and with type ![int] 0 2 is used overall according to the type ? [int] 0 1 + ![int] 0 2 = #[int] 0 3 . Lastly, we define type environments Γ , . . . as finite maps from names to types written u 1 : T 1 , . . . , u n : T n . As usual, dom(Γ ) is the domain of Γ and Γ 1 , Γ 2 is the union of Γ 1 and Γ 2 when dom(Γ 1 ) ∩ dom(Γ 2 ) = / 0. We extend type combination to type environments: Γ 1 + Γ 2 def = Γ 1 , Γ 2 if dom(Γ 1 ) ∩ dom(Γ 2 ) = / 0 (Γ 1 , u : T ) + (Γ 2 , u : S) def = (Γ 1 + Γ 2 ), u : T + S We let |Γ | def = min{|Γ (u)| | u ∈ dom(Γ )} be the level of a type environment, we write un(Γ ) if |Γ | = and un(Γ ) if un(Γ ) and Γ has no top-level linear channel types. Note that un(Γ ) is strictly stronger than un(Γ ). For example, if Γ def = x : int × / 0[int] 0 0 we have un(Γ ) but not un(Γ ) because Γ (x) has a top-level linear channel type. The type systems for deadlock and lock freedom are defined by the rules in Table 1 deriving judgments Γ e : T for expressions and Γ k P for processes. The type system for deadlock freedom is obtained by taking k = 0, whereas the type system for lock freedom is obtained by taking k = 1 and restricting all levels in linear channel types to be non negative. We illustrate the typing rules as we work through the typing derivation of the replicated process in Example 2.1. The interested reader may refer to the implementation or [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] for more examples and detailed descriptions of the rules. Let T and S be the types defined by the equations T = ![S] 0 0 × ?[S] 0 0 and S = ?[S] 1 1 . We build the derivation bottom up, from the judgment stating that the whole process is well typed. Since the process is a replicated input, we apply [T-IN*] thus: c : ?[T ] * c : ?[T ] * c : ![T ] * , x : T 1 (νa)(fst(x)!a | snd(x)?(y).c!(a,y)) c : #[T ] * 1 *c?(x).(νa)(fst(x)!a | snd(x)?(y).c!(a,y)) In applying this rule we have Γ 2 = c : ![T ] * so the side condition un(Γ 2 ) of [T-IN*] is satisfied: since a replicated input process is permanently available, its body cannot contain any free linear channel except those possibly received through the unlimited channel. The side condition un(Γ 2 ), which is stronger than simply un(Γ 2 ), makes sure that a replicated input process does not contain linear channels and therefore is level polymorphic. We will see a use of this feature at the very end of the derivation. The continuation of the process gains visibility of the message x with type T and is a restriction of a linear channel a. Hence, the next step is an obvious application of [T-NEW]: c : ![T ] * , x : T, a : #[S] 1 3 1 fst(x)!a | snd(x)?(y).c!(a,y) c : ![T ] * , x : T 1 (νa)(fst(x)!a | snd(x)?(y).c!(a,y)) (3.3) We guess level 1 and 3 tickets for a. The rationale is that a is a continuation channel that will be used after the channels in x, which have level 0, so a must have strictly positive level. Also, in Example 2.1 the channel a travels three times. At this point the typing derivation forks, for we deal with the parallel composition of two processes. 
This means that we have to split the type environment in two parts, each describing the resources used by the corresponding subprocess in (3.3). We have Γ = Γ 1 + Γ 2 where Γ def = c : ![T ] * , x : T, a : #[S] 1 3 Γ 1 def = x : ![S] 0 0 × / 0[S] 0 0 , a : ?[S] 1 2 Γ 2 def = c : ![T ] * , x : / 0[S] 0 0 × ?[S] 0 0 , a : ![S] 1 1 Observe that Γ is split in such a way that: c only occurs in Γ 2 , because it is only used in the right subprocess in (3.3); in each subprocess, the unused linear channel in the pair x is given empty polarity; the type of the continuation a has input polarity (and 2 tickets) in Γ 1 and output polarity (and 1 ticket) in Γ 2 . The type of a in Γ 1 is the same as $ 0 1 S, and we use the latter form from now on. We complete the typing derivation for the left subprocess in (3.3) using Γ 1 and applying [T-OUT]: x : ![S] 0 0 × / 0[S] 0 0 fst(x) : ![S] 0 0 a : $ 0 1 S a : $ 0 1 S x : ![S] 0 0 × / 0[S] 0 0 , a : $ 0 1 S 1 fst(x)!a 0 < |$ 0 1 S| = 1 The side condition 0 < 1 ensures that the message has higher level than the channel on which it travels, according to the intuition that the message can only be used after the communication has occurred and the message has been received. In this case, the level of fst(x) is 0 that is smaller than the level of a, which is 1. Shifting the tickets from the type of a consumes one of its tickets, meaning that after this communication a gets closer to the point where it must be the subject of a communication. Concerning the right subprocess in (3.3), we use Γ 2 above and apply [T-IN] to obtain x : / 0[S] 0 0 × ?[S] 0 0 snd(x) : ?[S] 0 0 c : ![T ] * , a : ![S] 1 1 , y : S 1 c!(a,y) c : ![T ] * , x : / 0[S] 0 0 × ?[S] 0 0 , a : ![S] 1 1 1 snd(x)?(y).c!(a,y) 0 < 1 The side condition 0 < 1 checks that the level of the linear channel used for input is smaller than the level of any other channel occurring free in the continuation of the process. In this case, c has level because it is an unlimited channel with output polarity, whereas a has level 1. To close the derivation we must type the recursive invocation of c. We do so with an application of [T-OUT*]: c : ![T ] * c : ![T ] * a : ![S] 1 1 , y : S (a,y) : $ 1 1 T c : ![T ] * , a : ![S] 1 1 , y : S 1 c!(a,y) ⊥ < 1 The side condition ⊥ < 1 ensures that no unlimited channel with input polarity is sent in the message. This is necessary to guarantee input receptiveness on unlimited channels. There is a mismatch between the actual type $ 1 1 T and the expected type T of the message. The shifting on the tickets is due, once again, to the fact that 1 ticket is required and consumed for the channels to travel. The shifting on the levels realizes a form of level polymorphism whereby we are allowed to send on c a pair of channels with level 1 even if c expects a pair of channels with level 0. This is safe because we know, from the side condition of [T-IN*], that the receiver of the message does not own any linear channel except those possibly contained in the message itself. Therefore, the exact level of the channels in the message is irrelevant, as long as it is obtained by shifting of the expected message type. Level polymorphism is a key distinguishing feature of our type systems that makes it possible to deal with non-trivial recursive processes. Type reconstruction We now face the problem of defining a type reconstruction algorithm for the type system presented in the previous section. 
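The decorated types of this section and the three auxiliary operators, the level (3.1), the shift (3.2), and the combination +, can be transcribed quite directly. The fragment below is a simplified sketch of ours, not the code shipped with the tool: structural equality stands in for equality of possibly infinite types, so it is only adequate for finite types.

module Types where

import Data.List (intersect, union)

data Polarity = Recv | Send deriving (Eq, Show)

data Type
  = DInt
  | DChan  [Polarity] Type Int Int   -- linear channel p[T] with level n and tickets m
  | DChanU [Polarity] Type           -- unlimited channel p[T]*
  | DPair  Type Type
  deriving (Eq, Show)

-- Levels live in Z extended with ⊥ and ⊤; the derived Ord gives Bottom < Finite n < Top.
data Level = Bottom | Finite Int | Top deriving (Eq, Ord, Show)

-- Definition (3.1).
level :: Type -> Level
level (DChanU p _)     | Recv `elem` p = Bottom
level (DChan  p _ n _) | not (null p)  = Finite n
level (DPair t s)                      = min (level t) (level s)
level _                                = Top

-- Definition (3.2): shift the topmost levels and tickets.
shift :: Int -> Int -> Type -> Type
shift n m (DChan p t h k) = DChan p t (h + n) (k + m)
shift n m (DPair t s)     = DPair (shift n m t) (shift n m s)
shift _ _ t               = t

-- The partial combination operator + of Section 3.
combine :: Type -> Type -> Maybe Type
combine DInt DInt = Just DInt
combine (DPair t1 t2) (DPair s1 s2) = DPair <$> combine t1 s1 <*> combine t2 s2
combine (DChan p t n h) (DChan q s n' k)
  | n == n' && null (p `intersect` q) && t == s = Just (DChan (union p q) t n (h + k))
combine (DChanU p t) (DChanU q s)
  | t == s = Just (DChanU (union p q) t)
combine _ _ = Nothing

In the reconstruction algorithms these operations are not applied to fully decorated types: they are reformulated on type expressions, as described in the next section.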
The input of the algorithm is a process P where names are explicitly annotated with simple types, possibly resulting from a previous reconstruction stage [START_REF] Igarashi | Type reconstruction for linear π-calculus with I/O subtyping[END_REF][START_REF] Padovani | Type reconstruction for the linear π-calculus with composite and equi-recursive types[END_REF]. Notwithstanding such explicit annotations, the typing rules in Table 1 rely on guesses concerning (i) the splitting of type environments, (ii) levels and tickets that decorate linear channel types, and (iii) how tickets are distributed in combined types. We address these issues using standard strategies. Concerning (i), we synthesize type environments for expressions and processes by looking at the free names occurring in them. Concerning (ii) and (iii), we proceed in two steps: first, we transform each simple type t in P into a type expression T that has the same structure as t, but where we use fresh level and ticket variables in every slot where a level or a ticket is expected; we call this transformation dressing. Then, we accumulate (rather than check) the constraints that these level and ticket variables should satisfy, as by the side conditions of the typing rules (Table 1). Finally, we look for a solution of these constraints. It turns out that the accumulated constraints can always be expressed as an integer programming problem for which there exist dedicated solvers. There is still a subtle source of ambiguity in the procedure outlined so far. We have remarked that stripping is a non-injective function, meaning that different types may be stripped to the same simple type. For example, if we take T = ?[T ] 1 1 and S = ?[T ] 0 0 we have T = S = s where s = ?[s]. Now, if we were to reconstruct either T or S from s, we would have to dress s with level and ticket variables in every slot where a level or a ticket is expected. But since s is infinite, such dressing is not unique. For example, T = ?[T] η 1 θ 1 and S = ?[T] η 2 θ 2 are just two of the infinitely many possible dressings of s with level and ticket variables: in T we have used two distinct variables η 1 and θ 1 , one for each slot; in S we have used four. The problem is that from the dressing T we can only reconstruct T , by taking η 1 = θ 1 = 1, whereas from the dressing S we can reconstruct both T (by assigning all variables to 1) as well as S, by taking η 1 = θ 1 = 1 and η 2 = θ 2 = 0. This means that the choice of the number of integer variables we use in dressing (infinite) simple types constrains the types that we can reconstruct from them, which is a risk for the completeness of the type reconstruction algorithms. To cope with this issue, we dress simple types lazily, only to their topmost linear channel constructors, and we put fresh type variables in place of message types, leaving them undressed. It is only when the message is used that we (lazily) dress its type as well. The introduction of fresh type variables for message types means that we redo part of the work already carried out for reconstructing simple types [START_REF] Padovani | Type reconstruction for the linear π-calculus with composite and equi-recursive types[END_REF]. This appears to be an inevitable price to pay to have completeness of the type reconstruction algorithms, when they build on top of (instead of being performed together with) previous stages. 
To formalize the algorithms, we introduce countable sets of type variables α, β and of integer variables η, θ ; type expressions and integer expressions are defined below: Type expression T, S : := int | α | p[T] λ τ | p[T] * | T × S Integer expression λ , ε, τ ::= n | η | ε + ε | ε -ε Type expressions differ from types in three ways: they are always finite, they have integer expressions in place of levels and tickets, and they include type variables α denoting unknown types awaiting to be lazily dressed. Integer expressions are linear polynomials of integer variables. We say that T is proper, written prop(T), if all the type variables in T are guarded by a channel constructor. For example, both int and p[α] * are proper (all type variables occur within channel types), but α and int × α are not. Since the level and tickets of a type expression are solely determined by its top-level linear channel constructors, properness characterizes those type expressions that are "sufficiently dressed" so that it is possible to extract their level and to combine them with other type expressions, even if these type expressions contain type variables. We now revisit and adapt all the auxiliary operators and notions defined for types to type expressions. Recall that the level of T is the minimum level of any topmost linear channel type in T , or ⊥ if T has a topmost unlimited channel type with input polarity. Since a type expression T may contain unevaluated level expressions, we cannot compute a minimum level in general. However, a quick inspection of Table 1 reveals that minima of levels always occur on the right hand side of inequalities, and an inequality like n < min{m i | i ∈ I} can equivalently be expressed as a set of inequalities {n < m i | i ∈ I}. Following this observation, we define the level |T| of a proper type expression T as the set of level expressions that decorate the topmost linear channel types in T, and possibly the element ⊥. Formally: |T| def =          {⊥} if T = p[S] * and ? ∈ p {λ } if T = p[S] λ τ and p = / 0 |T 1 | ∪ |T 2 | if T = T 1 × T 2 / 0 otherwise (4.1) We write un(T) if |T| = / 0, in which case T denotes an unlimited type. Shifting for proper type expressions is defined just like for types, except that we symbolically record the sum of level/ticket expressions instead of computing it: $ λ τ T def =      p[S] λ +λ τ+τ if T = p[S] λ τ ($ λ τ T 1 ) × ($ λ τ T 2 ) if T = T 1 × T 2 T otherwise (4.2) Because type expressions may contain type and integer variables, we cannot determine a priori whether the combination of two type expressions is possible. For instance, the combination of ?[T] λ 1 τ 1 and ![S] λ 2 τ 2 is possible only if T and S denote the same type and if λ 1 and λ 2 evaluate to the same level. We cannot check these conditions right away, when T, S and the level expressions contain variables. Instead, we record these conditions into a constraint. Constraints ϕ, . . . are conjunctions of type constraints T = S (equality relations between type expressions) and integer constraints ε ≤ ε (inequality relations between integer expressions). Formally, their syntax is defined by Constraint ϕ ::= true | T = T | ε ≤ ε | ϕ ∧ ϕ We write ε < ε in place of ε + 1 ≤ ε and ε = ε in place of ε ≤ ε ∧ ε ≤ ε; if E = {ε i } i∈I is a finite set of integer expressions, we write ε < E for the constraint i∈I ε < ε i ; finally, we write dom(ϕ) for the set of type expressions occurring in ϕ. 
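These syntactic categories, together with the properness predicate and the level function (4.1), admit a direct transcription. As before, the sketch below is ours and simplified with respect to the tool; the level of a type expression is rendered as a list of level expressions, possibly containing ⊥.

module TypeExpressions where

type TypeVar = Int
type IntVar  = Int

data Polarity = Recv | Send deriving (Eq, Show)

-- Integer expressions: linear polynomials of integer variables.
data IntExp
  = ILit Int
  | IVar IntVar
  | IAdd IntExp IntExp
  | ISub IntExp IntExp
  deriving (Eq, Show)

-- Type expressions: finite, with integer expressions in level/ticket slots
-- and type variables awaiting lazy dressing.
data TypeExp
  = XInt
  | XVar   TypeVar
  | XChan  [Polarity] TypeExp IntExp IntExp   -- linear channel p[T] with level/ticket expressions
  | XChanU [Polarity] TypeExp                 -- unlimited channel p[T]*
  | XPair  TypeExp TypeExp
  deriving (Eq, Show)

-- Constraints: conjunctions of type equalities and integer inequalities.
data Constraint
  = CTrue
  | CEqT TypeExp TypeExp
  | CLeq IntExp IntExp
  | CAnd Constraint Constraint
  deriving (Eq, Show)

-- A type expression is proper when every type variable is guarded by a channel constructor.
proper :: TypeExp -> Bool
proper (XVar _)    = False
proper (XPair t s) = proper t && proper s
proper _           = True

-- The level of a proper type expression (definition (4.1)).
data LevelExp = LBottom | LExp IntExp deriving (Eq, Show)

levels :: TypeExp -> [LevelExp]
levels (XChanU p _)     | Recv `elem` p = [LBottom]
levels (XChan  p _ l _) | not (null p)  = [LExp l]
levels (XPair t s)                      = levels t ++ levels s
levels _                                = []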
The combination operator T S for type expressions returns a pair R; ϕ made of the resulting type expression R and the constraint ϕ that must be satisfied for the combination to be possible. The definition of mimics exactly that of + in Section 3, except that all non-checkable conditions accumulate in constraints: T S def =                    int ; true if T = int and S = int (p ∪ q)[T ] λ τ+τ ; T = S ∧ λ = λ if T = p[T ] λ τ and S = q[S ] λ τ and p ∩ q = / 0 (p ∪ q)[T ] * ; T = S if T = p[T ] * and S = q[S ] * R 1 × R 2 ; ϕ 1 ∧ ϕ 2 if T = T 1 × T 2 and S = S 1 × S 2 and T i S i = R i ; ϕ i undefined otherwise Like type combination, also is a partial operator: T S is undefined if T and S are structurally incompatible (e.g., if T = int and S = p[int] * ) or if T and S are not proper. When T S is defined, though, the resulting type expression is always proper. We use ∆, . . . to range over type expression environments (or just environments, for short), namely finite maps from names to type expressions, and we inherit all the notation introduced for type environments. We let |∆| def = u∈dom(∆) |∆(u)| and write un(∆) if |∆| = / 0 and ∆ has no top-level linear channel type in its range. By now, the extension of to environments is easy to imagine: when defined, ∆ 1 ∆ 2 is a pair ∆; ϕ made of the resulting environment ∆ and of a constraint ϕ that results from the combination of the type expressions in ∆ 1 and ∆ 2 . More precisely: ∆ 1 ∆ 2 def = ∆ 1 , ∆ 2 ; true if dom(∆ 1 ) ∩ dom(∆ 2 ) = / 0 (∆ 1 , u : T) (∆ 2 , u : S) def = ∆, u : R ; ϕ ∧ ϕ if ∆ 1 ∆ 2 = ∆; ϕ and T S = R; ϕ The last notion we need to formalize, before introducing the reconstruction algorithms, is that of dressing. Dressing a simple type t means placing fresh integer variables in the level/ticket slots of t. Formally, we say that T is a dressing of t if t ↑ T is inductively derivable by the following rules which pick globally fresh variables: int ↑ int α fresh p[t] * ↑ p[α] * α, η, θ fresh p[t] ↑ p[α] η θ t i ↑ T i (i=1,2) t 1 × t 2 ↑ T 1 × T 2 Note that the decoration of t with fresh integer variables stops at the topmost channel types in t and that message types are left undecorated. By definition, the dressing of a simple type is always a proper type expression. We can now present the type reconstruction algorithms, defined by the rules in Table 2. The rules in the upper part of the table derive judgments of the form e : T ∆; ϕ, stating that e has type T in the environment ∆ if the constraint ϕ is satisfied. The expression e is the only "input" of the judgment, while T, ∆, and ϕ are synthesized from it. There is a close correspondence between these rules and those for expressions in Table 1. Observe the use of where + was used in Table 1, the accumulation of constraints from the premises to the conclusion of each rule and, most notably, the dressing of the simple type that annotates u in [I-NAME]. Type expressions synthesized by the rules are always proper, so the side conditions in [I-FST] and [I-SND] can be safely checked. The rules in the lower part of the table derive judgments of the form P k ∆; ϕ, stating that P is well typed in the environment ∆ if the constraint ϕ is satisfied. The parameter k plays the same role as in the type system (Table 1). The process P and the parameter k are the only "inputs" of the judgments, and ∆ and ϕ are synthesized from them. All rules except [I-WEAK] have a corresponding one in Table 1. 
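To fix ideas before discussing the rules in more detail, here is one possible rendering of the dressing relation ↑ and of the combination operator on type expressions. The representation is ours: freshness is threaded through a plain counter, and type and integer variables are drawn from the same counter since they live in separate namespaces, so the tool's internal data structures may well differ.

module Reconstruction where

import Data.List (intersect, union)

data Polarity = Recv | Send deriving (Eq, Show)

data SimpleType
  = TInt | TChan [Polarity] SimpleType | TChanU [Polarity] SimpleType
  | TPair SimpleType SimpleType

data IntExp  = ILit Int | IVar Int | IAdd IntExp IntExp | ISub IntExp IntExp
  deriving (Eq, Show)

data TypeExp
  = XInt | XVar Int | XChan [Polarity] TypeExp IntExp IntExp
  | XChanU [Polarity] TypeExp | XPair TypeExp TypeExp
  deriving (Eq, Show)

data Constraint
  = CTrue | CEqT TypeExp TypeExp | CLeq IntExp IntExp | CAnd Constraint Constraint
  deriving (Eq, Show)

-- Dressing t ↑ T: decorate the topmost channel constructors with fresh
-- variables, leaving message types as fresh type variables.
dress :: Int -> SimpleType -> (TypeExp, Int)
dress n TInt         = (XInt, n)
dress n (TChanU p _) = (XChanU p (XVar n), n + 1)
dress n (TChan  p _) = (XChan p (XVar n) (IVar (n + 1)) (IVar (n + 2)), n + 3)
dress n (TPair t s)  = let (t', n1) = dress n t
                           (s', n2) = dress n1 s
                       in  (XPair t' s', n2)

cEq :: IntExp -> IntExp -> Constraint
cEq e f = CAnd (CLeq e f) (CLeq f e)

-- Combination of type expressions: the combined expression plus the constraint
-- under which the combination is legal; Nothing signals a structural clash.
combineX :: TypeExp -> TypeExp -> Maybe (TypeExp, Constraint)
combineX XInt XInt = Just (XInt, CTrue)
combineX (XPair t1 t2) (XPair s1 s2) = do
  (r1, c1) <- combineX t1 s1
  (r2, c2) <- combineX t2 s2
  return (XPair r1 r2, CAnd c1 c2)
combineX (XChan p t l h) (XChan q s l' k)
  | null (p `intersect` q) =
      Just (XChan (union p q) t l (IAdd h k), CAnd (CEqT t s) (cEq l l'))
combineX (XChanU p t) (XChanU q s) = Just (XChanU (union p q) t, CEqT t s)
combineX _ _ = Nothing

With these two ingredients, each rule of Table 2 amounts to synthesizing environments for the subterms, combining them, and accumulating the generated constraints.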
Like for expressions, environments are combined through the combination operator and constraints accumulate from premises to conclusions. We focus on the differences with respect to the typing rules. In rule [T-IN], the side condition verifies that the level of the channel e on which an input is performed is smaller than the level of any channel used for typing the continuation process P. This condition can be decomposed in two parts: (1) no unlimited channel with input polarity must be in P; this condition is necessary to ensure input receptiveness on unlimited channels in the original type system [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] and is expressed in [I-IN] as the side condition ⊥ ∉ |∆ 2 |, which can be checked on type expression environments directly; (2) the level of e must satisfy the ordering with respect to all the linear channels in P; this is expressed in [I-IN] as the constraint λ < |∆ 2 |, where λ is the level of e. The same side condition and constraint are found in [I-OUT].

Table 2 (excerpt):
[I-PAIR] e i : T i ∆ i ; ϕ i (i=1,2) ⟹ (e 1 ,e 2 ) : T 1 × T 2 ∆; ⋀ 1≤i≤3 ϕ i (∆ 1 ∆ 2 = ∆; ϕ 3)

Reconstruction rules for processes P k ∆; ϕ
[I-WEAK] P k ∆; ϕ ⟹ P k ∆, u : T; ϕ (un(T), prop(T))
[I-IDLE] 0 k / 0; true
[I-PAR] P i k ∆ i ; ϕ i (i=1,2) ⟹ P 1 | P 2 k ∆; ⋀ 1≤i≤3 ϕ i (∆ 1 ∆ 2 = ∆; ϕ 3)
[I-IN] e : ?[T] λ τ ∆ 1 ; ϕ 1   P k ∆ 2 , x : S; ϕ 2 ⟹ e?(x).P k ∆; ⋀ 1≤i≤3 ϕ i ∧ T = $ -λ 0 S ∧ λ < |∆ 2 | (⊥ ∉ |∆ 2 |, ∆ 1 ∆ 2 = ∆; ϕ 3)
[I-OUT] e : ![T] λ τ ∆ 1 ; ϕ 1   f : S ∆ 2 ; ϕ 2 ⟹ e!f k ∆; ⋀ 1≤i≤3 ϕ i ∧ T = $ -λ -k S ∧ λ < |∆ 2 | (⊥ ∉ |∆ 2 |, ∆ 1 ∆ 2 = ∆; ϕ 3)
[I-NEW] P k ∆, a : #[T] λ τ ; ϕ ⟹ (νa)P k ∆; ϕ
[I-IN*] e : ?[T] * ∆ 1 ; ϕ 1   P k ∆ 2 , x : S; ϕ 2 ⟹ *e?(x).P k ∆; ⋀ 1≤i≤3 ϕ i ∧ T = S (un(∆ 2 ), ∆ 1 ∆ 2 = ∆; ϕ 3)
[I-OUT*] e : ![T] * ∆ 1 ; ϕ 1   f : S ∆ 2 ; ϕ 2 ⟹ e!f k ∆; ⋀ 1≤i≤3 ϕ i ∧ T = $ -η -k S (⊥ ∉ |∆ 2 |, ∆ 1 ∆ 2 = ∆; ϕ 3, η fresh)
[I-NEW*] P k ∆, a : #[T] * ; ϕ ⟹ (νa)P k ∆; ϕ

In [T-IN], [T-OUT], and [T-OUT*], shifting is used for updating message levels, consuming tickets, and realizing level polymorphism. In rules [I-IN], [I-OUT], and [I-OUT*], analogous shiftings are performed on type expressions, except that they are inverted and recorded in constraints. For example, when typing the continuation P of a process e?(x).P using [T-IN], if e has type ?[T ] n m then the type of x is required to be $ n 0 T . In the reconstruction algorithm, we record this requirement as the constraint T = $ -λ 0 S, where S is the type synthesized for x in P. We invert the shifting because shifting is defined only on proper type expressions, and in [I-IN] (and the other rules mentioned) only S is guaranteed to be proper, while T in general is not. Finally, note that [I-WEAK] has no corresponding rule in Table 1. This rule is necessary because the premises of [I-IN], [I-IN*], [I-NEW], and [I-NEW*] assume that bound names occur in their scope. Since type environments are generated by the algorithm as it works through an expression or a process, this assumption may not hold if a bound name is never used in its scope. Naturally, the type T of an unused name must be unlimited, whence the constraint un(T). We also require T to be proper, to preserve the invariant that all environments synthesized by the algorithms have proper types. In principle, [I-WEAK] makes the rule set in Table 2 not syntax directed, which is a problem if we want to consider this as an algorithm. In practice, the places where [I-WEAK] may be necessary are easy to spot (in the premises of all the aforementioned rules for the binding constructs). What we gain with [I-WEAK] is a simpler presentation of the rules. To state the properties of the reconstruction algorithm, we need a notion of constraint satisfiability. A variable assignment σ is a map from type/integer variables to types/integers.
We say that σ covers X if σ provides assignments to all the type/integer variables occurring in X, where X may be a constraint, a type/integer expression, or an environment. When σ covers X, the application of σ to X, written σ X, substitutes all type/integer variables according to σ and evaluates all integer expressions in X. When σ covers ϕ, we say that σ satisfies ϕ if σ ϕ is derivable by the rules: σ true σ T = S σ T = σ S σ ε ≤ ε σ ε ≤ σ ε σ ϕ i (i=1,2) σ ϕ 1 ∧ ϕ 2 Whenever we apply an assignment σ to a set of type expressions in reference to a derivation that is parametric on k, we will implicitly assume that all integer expressions in ticket slots evaluate to non-negative integers and that, if k = 1, all integer expressions in level slots evaluate to non-negative integers. The value of k and the set of type expressions will always be clear from the context. The reconstruction algorithm is correct, namely each derivation obtained through the algorithm such that the resulting constraint is satisfiable corresponds to a derivation in the type system: Theorem 4.1 (correctness). If P k ∆; ϕ and σ ϕ and σ covers ∆, then σ ∆ k P. The algorithm is also complete, meaning that if there exists a typing derivation for the judgment Γ k P, then the algorithm is capable of synthesizing an environment ∆ from which Γ can be obtained by means of a suitable variable assignment: Theorem 4.2 (completeness). If Γ k P, then P k ∆; ϕ for some ∆, ϕ, and σ such that σ ϕ and Γ = σ ∆. Note that the above results do not give any information about how to verify whether there exists a σ such that σ ϕ and, in this case, how to find such σ . These problems will be addressed in Section 5. We conclude this section showing the reconstruction algorithm at work on the replicated process in Example 2.1. ![α 1 ] η 1 θ 1 × / 0[α 2 ] η 2 θ 2 ?[α 3 ] η 3 θ 3 α 1 = ?[α 3 ] η 3 -η 1 θ 3 -k ∧ η 1 < η 3 (2) ![α 4 ] * ![α 5 ] η 5 θ 5 ?[α 6 ] η 6 θ 6 α 4 = ![α 5 ] η 5 -η 4 θ 5 -k × ?[α 6 ] η 6 -η 4 θ 6 -k (3) / 0[α 7 ] η 7 θ 7 × ?[α 8 ] η 8 θ 8 α 8 = ?[α 6 ] η 6 -η 8 θ 6 ∧ η 8 < η 5 (4) ![α 1 ] η 1 θ 1 +θ 7 × ?[α 2 ] η 2 θ 2 +θ 8 #[α 3 ] η 3 θ 3 +θ 5 α 1 = α 7 ∧ α 2 = α 8 ∧ α 3 = α 5 ∧ η 1 = η 7 ∧ η 2 = η 8 ∧ η 3 = η 5 (5) #[α 9 ] * α 9 = ![α 1 ] η 1 θ 1 +θ 7 × ?[α 2 ] η 2 θ 2 +θ 8 ∧ α 9 = α 4 Table 4. Constraint entailment rules. [S-LEVEL] ϕ 1 0 ≤ λ p[T] λ τ ∈ dom(ϕ) [S-TICKET] ϕ k 0 ≤ τ p[T] λ τ ∈ dom(ϕ) [S-CONJ] ϕ 1 ∧ ϕ 2 k ϕ i i ∈ {1, 2} [S-SYMM] ϕ k T = S ϕ k S = T [S-TRANS] ϕ k T = R ϕ k R = S ϕ k T = S [S-CHAN] ϕ k p[T] λ 1 τ 1 = p[S] λ 2 τ 2 ϕ k T = S ∧ λ 1 = λ 2 ∧ τ 1 = τ 2 [S-CHAN*] ϕ k p[T] * = p[S] * ϕ k T = S [S-PAIR] ϕ k T 1 × T 2 = S 1 × S 2 ϕ k T 1 = S 1 ∧ T 2 = S 2 Each subprocess triggers one rule of the reconstruction algorithm which synthesizes a type environment and possibly generates some constraints. Table 3 summarizes the parts of the environments and the constraints produced at each step of the reconstruction algorithm with parameter k. We have omitted the step concerning the restriction on a, which just removes a from the environment and introduces no constraints. Constraint solving We sketch an algorithm that determines whether a constraint ϕ is satisfiable and, in this case, computes an assignment that satisfies it. The presentation is somewhat less formal since the key steps of the algorithm are instances of well-known techniques. The algorithm is structured in three phases, saturation, verification, and synthesis. 
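Before describing the three phases in detail, the following sketch previews the shape of the integer part of the problem. It is illustrative only: all names are ours, and a naive bounded search stands in for the external ILP solver to which the actual tool delegates this phase.

-- Illustrative stand-in for the verification phase: check satisfiability of
-- integer constraints by searching assignments of 0..bound to the variables.
import Data.Maybe (listToMaybe)

data IntExp = ILit Int | IVar Int | IAdd IntExp IntExp | ISub IntExp IntExp
data IntConstraint = Leq IntExp IntExp

type Assignment = [(Int, Int)]      -- integer variable |-> value

eval :: Assignment -> IntExp -> Maybe Int
eval _ (ILit n)   = Just n
eval s (IVar x)   = lookup x s
eval s (IAdd a b) = (+) <$> eval s a <*> eval s b
eval s (ISub a b) = (-) <$> eval s a <*> eval s b

holds :: Assignment -> IntConstraint -> Bool
holds s (Leq a b) = case (eval s a, eval s b) of
                      (Just n, Just m) -> n <= m
                      _                -> False

verify :: Int -> [Int] -> [IntConstraint] -> Maybe Assignment
verify bound vars cs =
  listToMaybe [ s | s <- candidates vars, all (holds s) cs ]
  where
    candidates []       = [[]]
    candidates (v : vr) = [ (v, n) : s | n <- [0 .. bound], s <- candidates vr ]

-- Ticket constraints of the form k <= θ arise from Table 3; with k = 1 and
-- variables 3, 5, 6 standing for θ3, θ5, θ6 a satisfying assignment is found.
main :: IO ()
main = print (verify 3 [3, 5, 6] [ Leq (ILit 1) (IVar v) | v <- [3, 5, 6] ])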
The constraint ϕ produced by the reconstruction algorithm does not necessarily mention all the relations that must hold between integer variables. For example, the constraint η 3η 1 = η 6η 8 ∧ θ 3k = θ 6 is implied by those in Table 3, but it appears nowhere. Finding all the integer constraints entailed by a given ϕ, regardless of whether such constraints are implicit or explicit, is essential because we use an external solver for solving them. The aim of the saturation phase is to find all such integer constraints. Table 4 defines an inference system for deriving entailments ϕ k ϕ . The parameter k plays the same role as in the type system. Rules [S-LEVEL] and [S-TICKET] introduce nonnegativity constraints for integer expressions that occur in level and ticket slots; level expressions are required to be non-negative only for lock freedom analysis, when k = 1; rule [S-CONJ] decomposes conjunctions; rules [S-SYMM] and [S-TRANS] compute the symmetric and transitive closure of type equality; finally, [S-CHAN], [S-CHAN*], and [S-PAIR] state expected congruence rules. We let ϕ def = ϕ k ϕ ϕ . Clearly ϕ can be computed in finite time and is satisfiable by the same assignments as (i.e., it is equivalent to) ϕ. The verification phase checks whether ϕ is satisfiable and, in this case, computes an assignment σ int that satisfies the integer constraints in it. In ϕ all the integer constraints are explicit. These are typical constraints of an integer programming problem, for which it is possible to use dedicated (complete) solvers that find a σ int when it exists (our tool supports GLPK3 and lpsolve4 ). When this is the case, the type constraints in ϕ are satisfiable if, for each type constraint of the form T = S, either T or S are type variables, or T and S have the same topmost constructor, i.e. they are either both int, or both unlimited/linear channel types with the same polarity, or both product types. The synthesis phase computes an assignment that satisfies ϕ. This is found by applying σ int to all the type constraints in ϕ, by choosing a canonical constraint of the form α = T where T is proper for each α ∈ dom( ϕ), and then by solving the resulting system {α i = T i } of equations. By [4, Theorem 4.2.1], this system has exactly one solution σ type and now σ int ∪ σ type ϕ. There may be type variables α for which there is no α = T constraint with T proper. These type variables denote values not used by the process, like a message that is received from one channel and just forwarded on another one. These variables are assigned a type that can be computed canonically. Example 5.1. The constraints shown in Table 3 entail 0 ≤ θ 3k and 0 ≤ θ 5k and 0 ≤ θ 6 -k namely k ≤ θ 3 and k ≤ θ 5 and k ≤ θ 6 must hold. When k = 0, these constraints can be trivially satisfied by assigning 0 to all ticket variables. When k = 1, from the type of a at step (4) of the reconstruction algorithm we deduce that a must have at least 2 tickets. Indeed, a is sent in two messages. It is only considering the remaining processes c!(e, f ) and c!( f ,e) that we learn that y is instantiated with a. Then, a needs one more ticket, to account for the further and last travel in the recursive invocation c!(a,y). Concluding remarks A key distinguishing feature of the type systems in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF] is the use of polymorphic recursion. 
Type reconstruction in presence of polymorphic recursion is notoriously undecidable [START_REF] Kfoury | Type reconstruction in the presence of polymorphic recursion[END_REF][START_REF] Henglein | Type inference with polymorphic recursion[END_REF]. In our case, polymorphism solely concerns levels and reconstruction turns out to be doable. A similar situation is known for effect systems [START_REF] Amtoft | Type and effect systems: behaviours for concurrency[END_REF], where polymorphic recursion restricted to effects does not prevent complete type reconstruction [START_REF] Amtoft | Type and behaviour reconstruction for higher-order concurrent programs[END_REF]. We have conducted some benchmarks on generalizations of Example 2.1 to Ndimensional hypercubes of processes using full-duplex communication. The table below reports the reconstruction times for the analysis of an hypercube of side 5 and N varying from 1 to 4. The table details the dimension, the number of processes and channels, and the times (in seconds) spent for linearity analysis [START_REF] Padovani | Type reconstruction for the linear π-calculus with composite and equi-recursive types[END_REF], constraint generation (Section 4) and saturation, solution of level and ticket constraints (Section 5). The solver used for level and ticket constraints is GLPK 4.48 and times were measured on a 13" MacBook Air running a 1.8 GHz Intel Core i5 with 4 GB of 1600MHz DDR3. Reconstruction times scale almost linearly in the number of channels as long as there is enough free main memory. With N = 4, however, the used memory exceeds 10GB causing severe memory (de)compression and swapping. The running time inflates consequently. We have not determined yet the precise causes of such disproportionate consumption of memory, which the algorithms do not seem to imply. We suspect that they are linked to our naive implementation of the algorithms in a lazy language (Haskell), but a more rigorous profiling analysis is left for future investigation. Integer programming problems are NP-hard in general, but the time used for integer constraint resolution appears negligible compared to the other phases. As suggested by one reviewer, the particular nature of such constraints indicates that there might be more clever way of solving them, for example by using SMT solvers. Our work has been inspired by previous type systems ensuring (dead)lock freedom for generic π-calculus processes [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF] and corresponding type reconstruction algorithms [START_REF] Kobayashi | Type-based information flow analysis for the pi-calculus[END_REF]. These type systems and ours are incomparable: [START_REF] Kobayashi | A type system for lock-free processes[END_REF][START_REF] Kobayashi | A new type system for deadlock-free processes[END_REF] use sophisticated behavioral types that provide better accuracy with respect to unlimited channels as used for modeling mutual exclusion and concurrent objects. On the other hand, our type systems exploit level polymorphism for dealing with recursive processes in cyclic topologies, often arising in the modeling of parallel algorithms and sessions. Whether and how the strengths of both approaches can be combined together is left for future research. A more thorough comparison between these works can be found in [START_REF] Padovani | Deadlock and lock freedom in the linear π-calculus[END_REF]. 
There is a substantial methodological difference between our approach and those addressing sessions, particularly multiparty sessions [START_REF] Honda | Multiparty asynchronous session types[END_REF][START_REF] Deniélou | Multiparty session types meet communicating automata[END_REF]. Session-based approaches are top-down and type-driven: types/protocols come first, and are used as a guidance for developing programs that follow them. These approaches guarantee by design a number of properties, among which (dead)lock freedom when different sessions are not interleaved. Our approach is bottom-up and program-driven: programs come first, and are used for inferring types/protocols. The two approaches can integrate and complement each other. For example, type reconstruction may assist in the verification of legacy or third-party code (for which no type information is available) or in checking the impact of code changes due to refactoring and/or debugging. Also, some protocols are hard to describe a priori. For example, describing the essence of full-duplex communications (Example 2.1) is far from trivial [START_REF] Deniélou | Multiparty session types meet communicating automata[END_REF]. In general, processes making use of channel mobility (delegation) and session interleaving, or dynamic network topologies with a variable number of processes, are supported by our approach (within the limits imposed by the type systems), but are challenging to handle in top-down approaches. Inference of progress properties akin to lock freedom for session-based calculi has been studied in [START_REF] Mezzina | How to infer finite session types in a calculus of services and sessions[END_REF][START_REF] Coppo | Inference of global progress properties for dynamically interleaved multiparty sessions[END_REF], although only finite types are considered in these works. The reconstruction of global protocol descriptions from local session types has been studied in [START_REF] Lange | Synthesising choreographies from local session types[END_REF][START_REF] Lange | From communicating machines to graphical choreographies[END_REF]. In this respect, our work fills the remaining gap and provides a reconstruction tool from processes to local session types. We plan to investigate the integration with [START_REF] Lange | Synthesising choreographies from local session types[END_REF][START_REF] Lange | From communicating machines to graphical choreographies[END_REF] in future work.

Example 2.1 (full-duplex communication). The term

    *c?(x).(νa)( fst(x)!a | snd(x)?(y).c!(a,y) ) | c!(e,f) | c!(f,e)

… this is expressed in [I-IN] as the side condition ⊥ ∈ |∆₂|, which can be checked on type expression environments directly; (2) the level of e must satisfy the ordering with respect to all the linear channels in P; this is expressed in [I-IN] as the constraint λ < |∆₂|, where λ is the level of e. The same side condition and constraint are found in [I-OUT]. An excerpt of the reconstruction rules for expressions covers the rule for names and the rules [I-FST] and [I-SND]: from e : T × S with environment ∆ and constraint ϕ they derive fst(e) : T and snd(e) : S, with the same environment and with constraints ϕ ∧ un(S) and ϕ ∧ un(T), respectively.

Example 4.1. Below is the replicated process in Example 2.1, where we have numbered and named the relevant rules used by the algorithm as it visits the process bottom-up, left-to-right:

    *c?(x).(νa)( fst(x)!a | snd(x)?(y).c!(a,y) )
    (1) [I-OUT]   (2) [I-OUT*]   (3) [I-IN]   (4) [I-PAR]   (5) [I-IN*]

Table 2. Type reconstruction rules for expressions and processes. Reconstruction judgments for expressions associate an expression e with a type T, an environment ∆ and a constraint ϕ; for instance, [I-INT] assigns n : int with the empty environment and constraint true, and [I-PAIR] handles pairs.
Table 3. Type environment and constraints generated for the process in Example 2.1 (columns: step i, the types of c, x, a, y, and the constraint generated at each step).

Benchmark results for the hypercube of side 5 (times in seconds):

    N   Processes   Channels   Linearity   Gen.+Sat.   Levels   Tickets   Overall
    1           5          8       0.021       0.006    0.002     0.003     0.032
    2          25         80       0.128       0.051    0.009     0.012     0.200
    3         125        600       1.439       0.844    0.069     0.124     2.477
    4         625       4000      33.803      26.422    1.116     3.913    65.254

Typing rules for expressions (Γ ⊢ e : T):
    [T-PAIR]  Γ_i ⊢ e_i : T_i (i = 1,2)   ⟹   Γ_1 + Γ_2 ⊢ (e_1,e_2) : T_1 × T_2
    [T-FST]   Γ ⊢ e : T × S               ⟹   Γ ⊢ fst(e) : T                      if un(S)
    [T-SND]   Γ ⊢ e : T × S               ⟹   Γ ⊢ snd(e) : S                      if un(T)

Typing rules for processes (Γ ⊢_k P):
    [T-IN]    Γ_1 ⊢ e : ?[T]^n_m   and   Γ_2, x : $^n_0 T ⊢_k P   ⟹   Γ_1 + Γ_2 ⊢_k e?(x).P    if n < |Γ_2|
    [T-OUT]   Γ_1 ⊢ e : ![T]^n_m   and   Γ_2 ⊢ f : $^n_k T        ⟹   Γ_1 + Γ_2 ⊢_k e!f        if n < |Γ_2|
    [T-IN*]   Γ_1 ⊢ e : ?[T]*      and   Γ_2, x : T ⊢_k P         ⟹   Γ_1 + Γ_2 ⊢_k *e?(x).P   if un(Γ_2)
    [T-OUT*]  Γ_1 ⊢ e : ![T]*      and   Γ_2 ⊢ f : $^n_k T        ⟹   Γ_1 + Γ_2 ⊢_k e!f        if ⊥ < |Γ_2|
    [T-IDLE]  Γ ⊢_k 0                                                                           if un(Γ)
    [T-PAR]   Γ_1 ⊢_k P   and   Γ_2 ⊢_k Q                         ⟹   Γ_1 + Γ_2 ⊢_k P | Q
    [T-NEW]   Γ, a : #[T]^m_n ⊢_k P                               ⟹   Γ ⊢_k (νa)P
    [T-NEW*]  Γ, a : #[T]* ⊢_k P                                  ⟹   Γ ⊢_k (νa)P

GLPK: http://www.gnu.org/software/glpk/
lpsolve: http://sourceforge.net/projects/lpsolve/

Acknowledgments. The authors are grateful to the reviewers for their detailed comments and useful suggestions. The first two authors have been supported by Ateneo/CSP project SALT. The first author has also been supported by ICT COST Action IC1201 BETTY and MIUR project CINA.
01774943
en
[ "info", "info.info-ni" ]
2024/03/05 22:32:18
2015
https://inria.hal.science/hal-01774943/file/978-3-319-19282-6_3_Chapter.pdf
Francesco L De Angelis email: [email protected] Giovanna Di email: [email protected] Marzo Serugendo Logic Fragments: a coordination model based on logic inference Chemical-based coordination models have proven useful to engineer self-organising and self-adaptive systems. Formal assessment of emergent global behaviours in self-organising systems is still an issue, most of the time emergent properties are being analysed through extensive simulations. This paper aims at integrating logic programs into a chemical-based coordination model in order to engineer self-organising systems as well as assess their emergent properties. Our model is generic and accommodates various logics. By tuning the internal logic language we can tackle and solve coordination problems in a rigorous way, without renouncing to important engineering properties such as compactness, modularity and reusability of code. This paper discusses our logic-based coordination model and shows how to engineer and verify a simple pattern detection example and a gradient-chemotaxis example. Introduction Coordination models have been proven useful for designing and implementing distributed systems. They are particularly appealing for developing self-organising systems, since the shared tuple space on which they are based is a powerful paradigm to implement self-organising mechanisms, particularly those requiring indirect communication (e.g. stigmergy) [START_REF] Viroli | A framework for modelling and implementing self-organising coordination[END_REF]. Chemical-based coordination models are a category of coordination models that use the chemical reaction metaphor and have proven useful to implement several types of self-organising mechanisms [START_REF] Zambonelli | Developing pervasive multiagent systems with nature-inspired coordination[END_REF]. A well-known difficulty in the design of self-organising systems stems from the analysis, validation and verification (at design-time or run-time) of so-called emergent properties -i.e. properties that can be observed at a global level but that none of the interacting entities exhibit on its own. Few coordination models integrate features supporting the validation of emergent properties, none of them relying on the chemical metaphor. In this paper, we propose to enrich a chemical-based coordination model with the notion of Logic Fragments (i.e. a combination of logic programs). Our logic-based coordination model allows agents to inject Logic Fragments into the shared space. Those fragments actually define on-the-fly ad hoc chemical reactions that apply on matching data tuples present in the system, removing tuples and producing new tuples, possibly producing also new Logic Fragments. Our model is defined independently of the logic language used to define the syntax of the Logic Fragment, an actual instantiation and implementation of the model can use its own logic(s). The advent of new families of logic languages (e.g. [START_REF] Vitória | Modeling and reasoning in paraconsistent rough sets[END_REF]) has enriched the paradigm of logic programming, allowing, among other things, practical formalisation and manipulation of data inconsistency, knowledge representation of partial information and constraints satisfaction. By combining those logics with a chemical-based coordination model, we argue that global properties can be verified at design time. Section 2 discusses related works, section 3 presents our logic-based coordination model. 
Section 4 shows two case studies: a simple pattern recognition example and another one with the gradient and chemotaxis patterns. Finally, section 5 concludes the paper. Related works Chemical-based coordination models An important class of coordination models is represented by so-called chemical-based coordination models, where "chemical" stands for the process of imitating the behaviours of chemical compounds in chemical systems. Gamma (General Abstract Model for Multiset mAnipulation) [START_REF] Banâtre | The gamma model and its discipline of programming[END_REF] and its evolutions historically represents an important chemical-inspired coordination model. The core of the model is based on the concept of virtual chemical reactions expressed through condition-action rewriting pairs. Virtual chemical reactions are applied on input multisets which satisfy a condition statement and they produce as output multisets where elements are modified according to the corresponding action (like for chemical compounds); the execution of virtual chemical reactions satisfying a condition pair is nondeterministic. Gamma presents two remarkable properties: (i) the constructs of the model implicitly support the definition of parallel programs; (ii) the language was proposed in the context of systematic program derivation and correctness as well as termination of programs is easy to prove ( [START_REF] Dershowitz | Proving termination with multiset orderings[END_REF]). Its major drawback is represented by the complexity of modeling real large applications. The SAPERE model [START_REF] Castelli | Pervasive middleware goes social: The sapere approach[END_REF] (Figure 1a) is a coordination model for multiagent pervasive systems inspired by chemical reactions. It is based on four main concepts: Live Semantic Annotations (LSAs), LSA Tuple Space, agents and eco-laws. LSAs are tuples of types (name, value) used to store applications data. For example, a tuple of type (date, 04/04/1988) can be used to define a hypothetical date. LSAs belonging to a computing node are stored in a shared container named LSA Tuple Space. Each LSA is associated with an agent, an external entity that implements some domainspecific logic program. For example, agents can represent sensors, services or general applications that want to interact with the LSA space -injecting or retrieving LSAs from the LSA space. Inside the shared container, tuples react in a virtual chemical way by using a predefined set of coordination rules named eco-laws, which can: (i) instantiate relationships among LSAs (Bonding eco-law); (ii) aggregate them (Aggregate eco-law); (iii) delete them (Decay eco-law) and (iv) spread them across remote LSA Tuples Spaces (Spreading eco-law). Spontaneous executions of ecolaws can be fired when specific commands (named operators) are present in tuple values. When a tuple is modified by an eco-law, its corresponding agent is notified: in this way, agents react to virtual chemical reactions according to the program they implement. The implementation of the SAPERE model, named SAPERE middleware, has been proven to be powerful enough and robust to permit the development of several kinds of real distributed self-adaptive and self-organising applications, as reported in [START_REF] Zambonelli | Developing pervasive multiagent systems with nature-inspired coordination[END_REF]. 
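As a toy illustration of these ingredients, the sketch below models LSAs as name/value tuples in a shared space and applies a decay eco-law to them; the class names and the decay policy are assumptions made for the example and do not reflect the SAPERE middleware API.

```python
# Illustrative sketch (not the SAPERE API): LSAs as dictionaries of
# (name, value) pairs living in a shared LSA tuple space, with a decay
# eco-law that removes annotations once their time-to-live expires.

class LSASpace:
    def __init__(self):
        self.lsas = []          # the shared container of LSAs

    def inject(self, **properties):
        self.lsas.append(dict(properties))

    def decay_eco_law(self):
        """Decrement the ttl property of every LSA carrying one and
        drop the LSAs whose ttl has reached zero."""
        for lsa in self.lsas:
            if "ttl" in lsa:
                lsa["ttl"] -= 1
        self.lsas = [lsa for lsa in self.lsas if lsa.get("ttl", 1) > 0]

space = LSASpace()
space.inject(date="04/04/1988")           # plain data LSA
space.inject(temperature=21.5, ttl=2)     # sensor reading that decays
space.decay_eco_law()
space.decay_eco_law()
print(space.lsas)                         # the sensor reading is gone
```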
Nevertheless, the model does not aim at proving correctness or emergence of global properties programs built on it: this means that proving correctness of applications may turn to be a complex task. Formal approaches for tuple based coordination models Coordination models based on tuple spaces are amenable to several kinds of analytical formalisation. PoliS [START_REF] Ciancarini | A coordination model to specify systems including mobile agents[END_REF] is a coordination model based on multiset rewriting in which coordination rules consume and produce multisets of tuples; rules are expressed in a Chemical Abstract Machine style [START_REF] Berry | The chemical abstract machine[END_REF]. In PoliS, properties can be proved by using the PoliS Temporal Logic and the PoliMC model checker. Tuples centres [START_REF] Omicini | From tuple spaces to tuple centres[END_REF] allow the use of a specification language (named RespecT) to define computations performed in the tuple space. Computations are associated with events triggered internally because of reactions previously fired or during the execution of traditional input/output operations by agents. RespecT is based on first-order logic and unification of unitary clauses (tuple templates) and ground atoms (tuples) represent the basic tuple matching mechanism. In the ACLT model [START_REF] Denti | Logic tuple spaces for the coordination of heterogeneous agents[END_REF], the tuple space is treated as a container of logic theories, which can be accessed by logic agents to perform deduction processes. Again, the first-order logic and unification of unitary clauses and ground atoms is used as matching mechanism; the model offers specific input-output primitives tailored to provide different meaning for unification by allowing a certain control in selecting the set of unitary clauses to be treated as facts in case of backtracks or temporary missing information in the deduction process. In our model we do not express coordination in terms of rewriting rules; moreover, the logic layer is enhanced by considering several types of logic languages. 3 Logic-and chemical-based coordination model Definition of the model The chemical-based coordination model we present in this paper is designed to exploit several important features of the models cited above in the context of self-organising and self-adaptive applications; our goal is to define a coordination model with the following characteristics: (i) coordination algorithms can be described in an sufficiently abstract way starting from high-level specifications; (ii) the constructs used to express coordination algorithms are amenable to formal analysis of their correctness, they incentivize the decoupling of logic from implementation and they meet software engineering properties such as modularity, reusability and compactness. The rationale leading the definition of our coordination model can be synthesized as the adoption of Kowalski's terminology [START_REF] Kowalski | Algorithm = logic + control[END_REF]: algorithm = logic + control. This formulation promotes the dichotomy of algorithms in: (i) logic components (formulae) that determine the meaning of the algorithm, the knowledge used to solve a problem (i.e. what has to be done) and (ii) control components, which specify the manner the knowledge is used (i.e. how it has to be done). 
The coordination model we define (Figure 1b) is a generalization of the SAPERE model with two additional features: (i) LSAs can store not only data tuples but actual logic programs (Section 3.2); (ii) the bonding eco-law is replaced by a new one named Logic eco-law, which is in charge of executing logic programs and performing the bonding actions. The remaining components of the model are exactly the same as the ones of the SAPERE model. The virtual chemical reactions among tuples taking place in the shared container are now driven by logic inferences processes, which produce either data tuples or new logic programs during the "execution" of logic programs (Figure 1c). This process brings the idea promulgated by [START_REF] Kowalski | Algorithm = logic + control[END_REF] in the context of chemical-based coordination models: the logic components of an algorithm are expressed in terms of logic programs, here embedded in LSAs, which can react among each other in a chemical fashion. Similarly, agents implement the control components (written in a programming language such as Java), and they perform computations according to the knowledge inferred by logic programs. This approach to separation and mapping of concepts helps designing coordination algorithms from an abstract point of view. On the one hand, algorithms are thought as interactions of atomic logic entities which define the meaning (in Kowalski's terminology) of subparts of the original algorithm. On the other hand, once logic entities have been defined, a specific problem-solving strategy can be chosen to be implemented for each subpart of the original problem. The intuition of using logic programs is twofold: (i) tuples exchanges represent the basic mechanism to carry out indirect communication among agents, thus the state and the evolution of a coordination process can be defined by analysing the set of tuples in the containers; (ii) tuples are used as inputs (facts) and produced as outputs of logic programs (models and formulae obtained by resolution rules). By considering points (i) and (ii), logic programs provide a natural formal tool to express coordination, allowing for inferred formulae to state relationships among entities of the system, depicting the evolution of coordination processes and proving system properties. Logic programs Logic programs [START_REF] Nilsson | Logic, Programming, and PROLOG[END_REF] are sets of logic formulae and are expressed in a logic language (e.g. first-order logic). Executing a logic program means either: (i) providing queries to the program and testing whether they logically follow from the program by using a proof engine (logic inference) or (ii) inferring all sentences that logically follow from the program (logic semantics). An interpretation of a formal language is an interpretation (see [START_REF] Nilsson | Logic, Programming, and PROLOG[END_REF]) of constants, predicate and functions of the language over a given domain. The truth-value of a logic sentence is determined by the interpretation of the logic connectives. Given a logic program P , a model is an interpretation M such that every formula in P is true (depicted as M |= P ). Here we are interested in Herbrand interpretations ( [START_REF] Nilsson | Logic, Programming, and PROLOG[END_REF]): (i) the implicit domain is the Herbrand Universe, the closure of the set of constants under all the functions symbols of the language; (ii) constants are interpreted as themselves and every function symbol as the function it refers to. 
In classical 2-valued logic programs, Herbrand interpretation can be defined through sets of atoms implicitly interpreted as true. Example: [START_REF] Nilsson | Logic, Programming, and PROLOG[END_REF]. Clauses are implicitly universally quantified. This is a definite logic program (i.e. containing Horn clauses): x is a variable, c is a constant and here they range over an (implicitly) defined domain. P = (C(x) ← A(x), B(x); A(c) ← ; B(c) ← ; ) is a defin- ite logic program The first rule is composed of the head C(X) and the body A(X), B(X) and it can be read as "C(X) is true if both A(X) and B(X) are true". Rules with empty bodies ( ) are named facts and they state sentences whose heads must be considered satisfied; in this case A(c) and B(c) hold. M = {A(c), B(c), C(c)} is a model for the program in the example, because it satisfies all the rules. Logic languages In our model, logic programs are executed by the Logic eco-law. An important point in our approach is the generality of the coordination model w.r.t. the logic. We consider only logic languages that support Herbrand's interpretations, whereas we do not put any constraint on the inference methods or the semantics. Both inference methods and semantics are treated as parameters associated with logic programs. From the practical point of view, for each logic language we require the implementation of a dedicated Logic eco-law that executes the corresponding logic programs. This feature makes possible to use, possibly simultaneously: (i) Logic Fragments In our model, logic programs are embedded in logic units named Logic Fragments. The following set of definitions will be used to clarify the concept. We assume that P rop, Const and V ar are finite mutually disjoint sets of relation symbols, constants and variables respectively. We will identify variables with letters x, y, . . . and constants with letters a, b, . . . . Definition 1 (Literals, Ground Literals): A literal P is an expression of type P (X 1 , . . . , X n ) or ¬P (X 1 , . . . , X n ) where P ∈ P rop and X i ∈ (Const ∪ V ar) for i = 1, . . . , n. A ground literal is a literal without variables. The set of all ground literals w.r.t. a set Const is denoted G(Const). The power set of G(Const) is depicted P(G). Definition 2 (Valuations): A valuation w is a function from V ar to Const that assigns a constant c i to each variable x i . The set of all possible valuations is depicted as W = {w|w : V ar → Const}. Definition 3 (Instances of Literal): If P is a literal and w is a valuation, with Pw we identify the ground literal where every variable of P has been replaced by a constant according to the definition of w. Pw is named an instance of P . We denote I P = { Pw |w ∈ W} ⊆ G(Const). Definition 4 (Logic Programs): A logic program is a set of logic formulae written in a logic language using: (i) literals P1 , ..., Pn defined over P rop, Const, V ar and (ii) logic operators. Definition 5 (A-generator): Given a literal P (X 1 , . . . , X n ), an A-generator w.r.t. a function U : Const n → {T, F } is the finite set: P U (X 1 , . . . , X n ) = {P (c 1 , . . . , c n ) ∈ I P (X 1 ,...,Xn) |U (c 1 , . . . , c n ) = T }. Example: A U (X) = {A(X)|X ∈ {a, b, c}} = {A(a), A(b), A(c)}, with U (a) = U (b) = U (c) = T . Definition 6 (I-generator): Given a literal P (X 1 , . . . , X n ), an I-generator w.r.t a function V : P(G) → P(G) and a finite set H ⊆ P(G) is the set: P H,V (X 1 , . . . , X n ) = {P (c 1 , . . . 
, c n ) ∈ I P (X 1 ,...,Xn) ∩ V (H)} If V is omitted, we assume that V (H) = H (identity function). Example: if N = {2, 3, 4} and V (N ) = {Even(x)|x ∈ N ∧x is even}, then Even N,V (X) = {Even(2), Even(4)}. The rationale of such definitions is to provide the program with a set of facts built from conditions holding on tuples stored in the container. The unfolding of these generators produces new facts for the interpretation of the logic program. By LF we identify the algebraic structure of Logic Fragments, recursively defined as follows: is a special symbol used only in Logic Fragments to depict all the tuples in the container (both LSAs and Logic Fragments). M is the identifier of the way P is "executed" (we will use M = A for the Apt-van Emden-Kowalski and M = K for the Kripke-Kleen semantics). e P is named constituent of the Logic Fragment and it is interpreted as a set of tuples used as support to generate the facts for the program. S is a set of A,I-generators used to derive new facts from P . The function ϕ : P(G) → {T, F } returns T if the tuples represented by the constituent e p satisfy some constraints; the logic program is executed if and only ϕ(e P ) = T (Def. 8). ϕ T is constant and equal to T . For style reason, we will write P M (e P , S, ϕ) instead of (P, M, e P , S, ϕ). Every Logic Fragment is executed by the Logic eco-law; its semantics is defined by using the function v L . Definition 8 (Semantic function): v L : LF → P(G) ∪ { } associates the fragment with the set of tuples inferred by the logic program (consequent) or with , which stands for undefined interpretation. L denotes the set of actual tuples in the container before executing a Logic Fragment. Operators are ordered w.r.t. these priorities: grouping (highest priority), composition, and (lowest priority). v L is recursively defined as follows: I) v L ( ) L II) v L (e) v L (e) III) v L e 1 ... e n n≥2 if ∃i ∈ {1, . . . , n}.v L e i = n i=1 v L e i otherwise IV) v L e 1 ... e n n≥2 i∈I v L e i if I = {e i |v L (e i ) = , 0 ≤ i ≤ n} = ∅ otherwise V) v L P M (e P , S, ϕ) Q Q is the consequent of P M and it is defined as follows: if M is not compatible with the logic program P or if v L (e p ) = or if ϕ v L (e p ) = F then Q = . ϕ "blocks" the execution of the program as long as a certain condition over e p is not satisfied. Otherwise, based on S = {P H 0 ,V 0 0 (X 01 , ..., X 0t 0 ), ..., P Hn,Vn n (X n1 , ..., X ntn ), P 0 (Y 01 , ..., Y 0z 0 ), ... P m (Y m1 , ..., Y mzm )}, the Logic eco-law produces the set of facts F s = n i=0 P v L (H i ),V i i (X i1 , . . . , X it i ) ∪ m i=0 P i (Y i1 , . . . , Y iz i ). A,I-generators are then used to define sets of ground literals for the logic program which satisfy specific constraints; during the evaluation, for every set H i we have either H i = e p or H i = . Q is finally defined as the set of atoms inferred by applying M on the new logic program P = P ∪ {l ← |l ∈ F s}, enriched by all the facts contained in F s. Note that there may be no need to explicitly calculate all the literals of A,I-generators beforehand: the membership of literals to generators sets may be tested one literal at a time or skipped because of the short-circuit evaluation. Intuitively, composing two Logic Fragments means calculating the inner one first and considering it as constituent for the computation of the second one. Parallel-and ( ) means executing all the Logic Fragments them in a row or none, whereas Parallel-or ( ) means executing only those ones that can be executed at a given time. 
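One possible, much simplified reading of this semantic function is sketched below; the encoding of fragments as nested tuples, the use of Python sets for tuples and of None for the undefined interpretation are assumptions of the sketch rather than part of the model.

```python
# Simplified sketch of the semantic function v_L on the three operators.
# A fragment is the whole container, a composition (program run on the
# tuples of a constituent), a parallel-and, or a parallel-or; None plays
# the role of the undefined interpretation.

def v(fragment, container):
    kind = fragment[0]
    if kind == "all":                      # the symbol standing for the whole container
        return set(container)
    if kind == "par_and":                  # defined only if every sub-fragment is
        parts = [v(f, container) for f in fragment[1]]
        return None if any(p is None for p in parts) else set().union(*parts)
    if kind == "par_or":                   # keeps whatever sub-fragments are defined
        parts = [v(f, container) for f in fragment[1]]
        defined = [p for p in parts if p is not None]
        return set().union(*defined) if defined else None
    if kind == "prog":                     # composition: run a program on the constituent
        _, program, constituent, guard = fragment
        facts = v(constituent, container)
        if facts is None or not guard(facts):
            return None
        return program(facts)              # consequent of the logic program

mark = lambda facts: {("seen", t) for t in facts}       # a toy "logic program"
frag = ("par_or", [("prog", mark, ("all",), lambda fs: len(fs) > 0)])
print(v(frag, {"N(22)", "N(12)"}))
```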
Update of the container In our model, all the Logic Fragments are carried on a snapshot image of the container, i.e. given a Logic Fragment e in the container, if v L (e) = , then it is evaluated as an atomic operation (every symbol in the sub Logic Fragments which composes e is always translated with the same set of actual tuples). Multiple Logic Fragments ready to be evaluated are computed in a non-deterministic order. The tuples inferred by the logic programs (with all used facts) are inserted in the container only when the evaluation of the whole logic program terminates. At that point, the Logic eco-law injects the inferred tuples in the container and notifies the end of inference process to the agent. The Logic Fragment is subject to a new evaluation process as soon as the set F s changes due to updates of the shared container, but there are no concurrent parallel evaluations of the same Logic Fragment at a given time (unless it appears twice); this aspect can potentially hide tuples updates in the evaluation process (Section 5). The representation of the functions associated with A,I-generators depends on the implementation. Case studies By using Logic Fragments we can easily tackle interesting coordination problems and properties. Additional examples are reported in [START_REF] De Angelis | Towards a logic and chemical based coordination model[END_REF]. Palindrome recognition As a first example we show an easy pattern recognition scenario. Assuming that an agent A inserts positive integers into the container, we want to discover which ones are palindromic numbers (i.e. numbers that can be read in the same way from left to right and from right to left). We assume that these integers are represented by tuples of type N (a), where a is a number, e.g. N (3) represents the number 3. Agent A inserts the Logic Fragment LF p : P A p ( , {N , T estP alin}, ϕ p ). P p is the logic program in Code 1.1, evaluated with the Apt-van Embden Kowalski semantics (A). The set S of A,I-generators is composed of two elements: N contains all literals N (a) (numbers) existing in the container ( ); T estP alin(x) contains all the literals of type T estP alin(a), where a is a positive palindromic number less then d max . These two sets of literals are treated as facts for P p . According to ϕ, P p is executed as soon as a number N (a) is inserted into the container. The rule of the logic program P p states that a number a is a palindromic number (P alin(a)) if a is a number (N (a)) and a passes the test for being palindromic (T estP alin(a)). We consider the tuple space shown in Figure 2a and2b. At the beginning, agent A injects LF p (Figure 2a). At a later stage A injects N (22) and the Logic Fragment is then executed. In this case, N is evaluated as {N (22)}. Moreover, T estP alin(a) will contain T estP alin(22), because it is palindromic. This means the consequent Q of LF p contains P alin(22), along with all the facts generated by the A,I-generators used in the logic program. If now agent A injects N (12), the Logic Fragment is re-executed and N is evaluated as {N (22), N (12)}. This second number does not satisfy the palindromic test (N (12) ∈ T estP alin(x)), so the 12 will not be considered as palindromic. Finally A injects N (414) and during the re-execution of LF p we obtain: N = {N (22), N (12), N (414)} and N (414) ∈ T estP alin(x), so the consequent Q will contain P alin(22) and P alin(414) (Figure 2b). 
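The example can be rendered executable in a few lines; in the sketch below literals are encoded as (predicate, argument) pairs and the bound d_max = 1000 is an assumption made for the example.

```python
# Sketch of the palindrome Logic Fragment LF_p evaluated bottom-up.
# Literals are encoded as (predicate, argument) pairs; d_max is assumed.

D_MAX = 1000

def generators(container):
    """Facts produced by the A,I-generators of LF_p."""
    numbers = {("N", a) for (p, a) in container if p == "N"}
    test_palin = {("TestPalin", a) for a in range(1, D_MAX)
                  if str(a) == str(a)[::-1]}
    return numbers | test_palin

def consequent(container):
    """Palin literals inferred by P_p = { Palin(x) <- N(x), TestPalin(x) }.
    The full consequent Q also contains the generated facts."""
    facts = generators(container)
    return {("Palin", a) for (p, a) in facts
            if p == "N" and ("TestPalin", a) in facts}

container = set()
for n in (22, 12, 414):                    # agent A injects N(22), N(12), N(414)
    container.add(("N", n))
    print(sorted(consequent(container)))   # Palin(22), then Palin(22) and Palin(414)
```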
Note that if numbers were injected by agents different from A (like a sensor), the same reactions would take place. Proof sketch: The property above states that by using the Logic Fragment LF p we are able to correctly find out all the palindromic integers. Thanks to the logic programs and the semantic of Logic Fragments, we can easily verify that if such integers exist in the container then their literals are inferred in Herbrand model of P p . Moreover, given that such literals are only generated by LF p , if such literals exist in the model then there must be the associated palindromic integers in the shared space. Gradient and Chemotaxis patterns -general programs In this second example we use Logic Fragments to implement the gradient and chemotaxis design patterns ( [START_REF] Fernandez-Marquez | Description and composition of bio-inspired design patterns: a complete overview[END_REF]), which are two bio-inspired mechanisms used to build and follow shortest paths among nodes in a network. The chemotaxis is based on the gradient pattern. A gradient is a message spread from one source to all the nodes in the network, carrying a notion of distance from the source (hop-counter). Gradient messages can also carry user-information. Once a node receives a gradient from a known source whose hop-counter is less than the local one (i.e. a new local shortest-path has been found), the node updates its local copy of the hopcounter (aggregation) and spreads it towards the remaining neighbours with a hop-counter incremented by one unit. In these terms, the gradient algorithm is similar to the distance-vector algorithm. The chemotaxis pattern resorts to gradient shortest-paths to route messages towards the source of the gradient. We can implement the gradient and chemotaxis patterns by using an agent A gc associated with the Logic Fragment: , where a and c are as above and e is the previous node in the path, this will be used to route the chemotaxis message downhill towards the source. LF gc is composed of several Logic Fragments; the parallel-or operator makes the agent A gc to react simultaneously to chemotaxis and gradients messages. The innermost fragment e Pa = P K n ( , S n , ϕ n ) is executed when a gradient message is received from a neighbour ( can be executed directly but the parallel-and operator blocks the execution of outer fragments until P K n ( , S n , ϕ n ) finishes); it initializes the GP ath tuple for the source of the gradient. By using the composition operator, the literals inferred in the model of P n , along with all the tuples in the container (fragment ) are then treated as constituent for the fragment e Pa = P A a (e Pe , S a , ϕ T ), i.e. they are used to generate facts for the program P a . This one is used to aggregate the hop-counter for the source with the one stored in the local container. e Pa is finally treated as constituent for the fragment e Pg = P A g (e Pa , S g , ϕ T ). Note that aggregation happens before spreading, imposing an order on the reactions. P A g is used to verify whether the gradient message must be spread to the neighbours. If so, a literal spreadGradient(a, local, d, c, b) is inferred during the computation of its semantics, where local is translated with the name of the current node. Simultaneously, the Logic Fragment P A ch ( , S ch , ϕ ch ) is executed as soon as a chemotaxis message is received (described as Cmsg(f, g), with f content of the message and g ID of the receiver). 
That Logic Fragment uses the local copy of the hop-counter to infer the next hop to which the chemotaxis message must be sent (the relay node). If the local hop-counter exists, a literal sendChemo(f, g, h) is generated in the model of P_ch, with h representing the ID of the next receiver of the chemotaxis message. Otherwise, the message remains in the container until such a literal is finally inferred. All the literals contained in the consequent Q of LF_gc are used by the agent A_gc to manage the control part of the algorithm (implemented by the agent in its host language, e.g. Java). Properties of the resulting algorithm can be proved at two levels: (i) local properties, concerning the evolution of the hop-counter caused by the applications of the aggregation function; (ii) global properties, based on the local property holding in each node (e.g. we prove the creation of the shortest path). The details are reported in [START_REF] De Angelis | Towards a logic and chemical based coordination model[END_REF]. Additional studies focusing on the integration of spatial-temporal logics in Logic Fragments are needed to prove the analogous statement when considering mobile nodes.

Conclusion and Future works

In this paper we have presented a chemical-based coordination model based on a logic framework. Virtual chemical reactions are led by logic deductions, implemented in terms of combinations of logic programs. This approach combines the benefits of a chemical-based coordination model with the expressiveness of several distinct types of logic languages used to formalise coordination logically. Intuitively, even though no formal verification or validation methods were presented, the rationale behind the proof of the correctness of a coordination algorithm follows from a formalisation of the system properties to be proved in terms of logical formulae. This paves the way for at least two kinds of formal analysis: (i) what-if assessment: coordination events can be modeled in terms of injected/removed tuples, and deduced literals can be used to test the satisfaction of the system property formulae. This first kind of verification can be done at design time, to assess properties of the whole system under certain conditions (events), and partially at run time, to infer how the system will evolve assuming a knowledge restricted to a certain subset of locally perceived events; (ii) the second type of design-time analysis starts from the literals that satisfy the property formulae and proceeds backwards, to derive the events that lead the system to that given state. Future work will focus on such aspects, to derive formal procedures for correctness verification of algorithms built on top of Logic Fragments. Several kinds of logics present interesting features to model and validate coordination primitives: (i) paraconsistent logics (e.g. [START_REF] Vitória | Modeling and reasoning in paraconsistent rough sets[END_REF]) and (ii) spatial-temporal logics, to assert properties depending on location and time parameters of system components. We also plan to realise an implementation of the model, including several semantics for Logic Fragments taking inspiration from the coordination primitives presented in [START_REF] Denti | Logic tuple spaces for the coordination of heterogeneous agents[END_REF].
Figure 1: The generalization of the SAPERE model.

Definition 7 (Logic Fragments LF): (I) ⋆ ∈ LF, where ⋆ is the special symbol denoting all the tuples in the container; (II) (grouping) if e ∈ LF then (e) ∈ LF; (III) (parallel-and) if e_1, e_2 ∈ LF then their parallel-and composition is in LF; (IV) (parallel-or) if e_1, e_2 ∈ LF then their parallel-or composition is in LF; (V) (composition) if P is a logic program, M an execution modality, S a set of A,I-generators, ϕ : P(G) → {T, F} and e_P ∈ LF, then (P, M, e_P, S, ϕ) ∈ LF.

Lemma 1 (Properties of operators): Given a, b ∈ LF, write a ≡ b when v_L(a) = v_L(b) for every set of literals L. Then, for any a, b, c ∈ LF: (I)–(III) parallel-and is idempotent (a combined with itself is equivalent to a), commutative and associative; (IV)–(VI) the same three properties hold for parallel-or; (VII) a distributivity law relates the two operators.

For the palindrome example, ϕ_p(⋆) = T as soon as some instance N(X)w is present in the container, and TestPalin(x) is the A-generator {TestPalin(a) | a is a positive palindromic number less than d_max}.

Logic code 1.1 (definite logic program P_p):
    Palin(x) ← N(x), TestPalin(x)

Figure 2: Evolution of the container for the example of Section 4.1.

For the gradient/chemotaxis example, the complete fragment combines the three gradient programs by composition (with P_n evaluated under the Kripke-Kleen semantics and P_a, P_g under the Apt-van Emden-Kowalski semantics), in parallel-or with the chemotaxis program:
    LF_gc :  P_g^A( P_a^A( P_n^K(⋆, S_n, ϕ_n), S_a, ϕ_T ), S_g, ϕ_T )   parallel-or   P_ch^A(⋆, S_ch, ϕ_ch)

Logic code 1.2 (program P_n, next-hop initialization):
    GPath(x, d_max, null) ← ¬existsGPath(x)
Logic code 1.3 (program P_a, aggregation):
    cmpGradient(x1, x2, y1, y2, z) ← Gmsg(x1, x2, y1, z), GPath(x1, y2, w)
    updateGPath(x1, y1, x2, z) ← cmpGradient(x1, x2, y1, y2, z), less(y1, y2)
Logic code 1.4 (program P_g, spreading):
    spreadGradient(x1, local, z, y, x2) ← updateGPath(x1, y, x2, z)
Logic code 1.5 (program P_ch, chemotaxis):
    sendChemo(m, x, w) ← Cmsg(m, x), GPath(x, y, w)

ϕ_n(⋆) = T ⇔ ∃w : Gmsg(x1, x2, y, z)w ∈ ⋆        ϕ_ch(⋆) = T ⇔ ∃w : Cmsg(x, y)w ∈ ⋆
S_ch = {Cmsg^⋆, GPath^⋆}     S_g = {updateGPath^{e_Pg}}     S_n = {existsGPath^{⋆,V}, Gmsg^⋆}     S_a = {Gmsg^{e_Pa}, GPath^{e_Pa}, less}
less(x, y) = {less(a, b) | a < b, a, b ∈ {1, ..., d_max}}
existsGPath(x)^{⋆,V} = {¬existsGPath(a) ∈ I_{¬existsGPath(x)} ∩ V(⋆)}
V(⋆) = {¬existsGPath(a) | ∃w : Gmsg(a, x, y, z)w ∈ ⋆ ∧ ¬∃ GPath(a, y, w)w ∈ ⋆}

(i) several types of logic programs (e.g. definite and general logic programs, several types of DATALOG or DATALOG-inspired programs) associated with two-valued, multi-valued (e.g. Belnap's logic) or paraconsistent logics; (ii) several inference procedures (e.g. SLD, SLDNF) and semantics (e.g. Apt-van Emden-Kowalski, Kripke-Kleen, stable, well-founded model semantics) [START_REF] Fitting | Fixpoint semantics for logic programming a survey[END_REF][START_REF] Kowalski | Linear Resolution with Selection Function[END_REF][START_REF] Apt | Contributions to the theory of logic programming[END_REF][START_REF] Emden | The semantics of predicate logic as a programming language[END_REF][START_REF] Vitória | Modeling and reasoning in paraconsistent rough sets[END_REF].

Gradients are represented by tuples Gmsg(a, b, c, d) where a is the ID of the source, b is the ID of the last sender of the gradient, c is the hop-counter and d is the content of the message. Values null, d_max and local are considered constants. Local hop-counters are stored in tuples of type GPath(a, c, e). We consider the network of Figure 3; the Logic Fragment can be used to provide the gradient and chemotaxis functionalities as services to other agents running on the same nodes. Assuming that agent A_Gm on node A wants to send a query message m_1 to all the nodes of the network, it creates and injects the gradient message Gmsg(A, A, 0, m_1).
At this point a reaction with LF_gc takes place, generating in the consequent Q of LF_gc the literals GPath(A, 0, A) (semantics of P_n) and spreadGradient(A, A, m_1, 0, A) (semantics of P_g). The second literal causes the spreading of the gradient message to nodes B and C. Similar reactions take place in the remaining nodes. If we assume that the gradient passing through node D is the first one to reach E, then GPath(A, 3, D) is inferred in the consequent Q on node E. When the gradient message coming from B reaches E, updateGPath(A, 1, B, m_1) is inferred in the semantics of program P_a, so the hop-counter tuple is updated to GPath(A, 2, B). Now assuming that agent A_Cm on node E wants to send a reply message m_2 to node A, it creates and injects a chemotaxis message Cmsg(m_2, A). On the basis of the tuple GPath(A, 2, C), the literal sendChemo(m_2, A, C) is inferred in the model of P_ch, so the message is sent to node B. Similar reactions take place on node B, which finally sends the chemotaxis message to node A.

Figure 3: The example network of five nodes (A–E); each node hosts the agent A_gc with the Logic Fragment LF_gc and its local GPath tuple towards the source A (hop counter 0 at A, 1 and 2 elsewhere), and the gradient message Gmsg(A, A, 0, m_1) is injected at node A.
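A minimal replay of this walkthrough is sketched below, with a per-node GPath store and direct Python renderings of the behaviour of programs P_n, P_a, P_g and P_ch; the topology, the scheduling order and the recursive propagation are assumptions of the example, not part of the model.

```python
# Toy replay of the gradient/chemotaxis example on a fixed topology.
# Each node keeps a GPath tuple (hop counter towards A, previous node).

D_MAX = 10
NEIGHBOURS = {"A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "D"],
              "D": ["B", "C", "E"], "E": ["B", "D"]}
gpath = {}

def on_gradient(node, hop, sender):
    """Programs P_n, P_a, P_g: initialise, aggregate, then spread."""
    current, _ = gpath.get(node, (D_MAX, None))      # P_n: GPath(x, d_max, null)
    if hop < current:                                # P_a: aggregation via less(y1, y2)
        gpath[node] = (hop, sender)
        for nxt in NEIGHBOURS[node]:                 # P_g: spreadGradient
            if nxt != sender:
                on_gradient(nxt, hop + 1, node)

def on_chemotaxis(node, target):
    """Program P_ch: route the message downhill using the GPath tuples."""
    if node == target:
        return [node]
    _, prev = gpath[node]                            # sendChemo(m, x, w)
    return [node] + on_chemotaxis(prev, target)

gpath["A"] = (0, "A")                                # source injects Gmsg(A, A, 0, m1)
for n in NEIGHBOURS["A"]:
    on_gradient(n, 1, "A")
print(gpath)                                         # shortest hop counters per node
print(on_chemotaxis("E", "A"))                       # chemotaxis path from E back to A
```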
00177502
en
[ "phys.cond.cm-gen", "phys.meca.acou", "spi.acou" ]
2024/03/05 22:32:18
2005
https://hal.science/hal-00177502/file/papier_laser2.pdf
Eric Larose email: [email protected] Arnaud Derode Dominique Clorennec Ludovic Margerin Michel Campillo Passive Retrieval of Rayleigh Waves in Disordered Elastic Media When averaged over sources or disorder, cross-correlation of diffuse fields yield the Green's function between two passive sensors. This technique is applied to elastic ultrasonic waves in an open scattering slab mimicking seismic waves in the Earth's crust. It appears that the Rayleigh wave reconstruction depends on the scattering properties of the elastic slab. Special attention is paid to the specific role of bulk to Rayleigh wave coupling, which may result in unexpected phenomena like a persistent time-asymmetry in the diffuse regime. I. INTRODUCTION Whatever the type of waves involved, knowing the Green's function of an heterogeneous medium is the key to many essential applications like imaging, communication or detection. In the last twenty years or so, mesoscopic physics has intensely studied wave phenomena in strongly disordered media: weak and strong localization, radiative transfer and diffusion approximation etc. [1]. But the exact Green's function of a complex medium is not easily tractable, and usually theoreticians study ensemble-averaged quantities like statistical correlations of intensities or wave fields. Moreover, from an experimental point of view, it is not always possible to measure the Green's functions of a complex medium because it requires controllable arrays of sources and receivers that do not perturb the medium. In the context of laboratory ultrasound, this is (nearly) routine; but in other fields of wave physics like seismology, distant sources are not controllable. In that respect, a good deal of publications followed recent works by Weaver. He proposed to cross-correlate the diffuse wave fields obtained at two passive sensors and showed it yields the elastic Green's function between the two receivers, as if one of the receivers was a source [2,3]. That correlations performed on passive sensors should yield the wave travel times is not that new. This principle was applied to helioseismology in the 90's where it provided tomographic images of the Sun's interior [4,5]. Beyond travel time reconstruction, Weaver's experimental retrieval of exact Green's functions was a real breakthrough. Weaver's work has been followed by different contributions. Some theoretical works are based on the ergodic approximation where time and ensemble average coincide. This approach is very useful for estimating the role of scattering in the correlation asymmetry [6,7]. When several sources are available, another possibility is to average correlations over sources without moving the receivers. This was done in underwater acoustics [8]. Sabra et al [9] showed the possibility of recovering the entire Green's function of the sea waveguide (especially the late contributions of multiples) and proposed a model for estimating the signal-to-noise ratio of the correlations. Derode et al [10] proposed to interpret the Green's function reconstruction in terms of a Time-Reversal analogy, and showed that all the benefits of time-reversal devices in multiple scattering media could be fruitfully applied to "passive imaging" from correlations [11,12]. 
This idea is based on the mathematical principle of the representation theorem [START_REF] Aki | Quantitative Seismology[END_REF], equivalently referred to as the Helmholtz-Kirchhoff theorem [START_REF] Cassereau | [END_REF]: if a perfect series of receivers has been sensing the wave field for ages, then one can mathematically have access to the wave field anytime and anywhere in the area enclosed by the sensors. Very recently, this principle was fully developed and applied to elastic waves in open media by Wapenaar [15]. From the very start of Weaver's work, the correlation principle was successfully applied to seismic waves [17,18,19]. The most energetic part of the Green's function between two seismic stations (Rayleigh and Love wave trains) was retrieved from passive correlations of either coda waves or records of seismic noise. As we mentioned earlier, obtaining impulse responses without a controllable source is of high interest in seismology since it gives the possibility of simulating very energetic and punctual earthquakes everywhere around the Earth, and do imaging without a controllable source ("passive imaging"). Results are quite encouraging although several theoretical problems remain unsolved. They concern issues very specific to the Earth. In particular elastic sources (earthquakes) are naturally never arranged in such a way that they would perfectly surround a couple of seismic stations. In addition, for distant earthquakes, the propagation directions of incident bulk waves are mostly vertical, and in a vertically layered medium they would not couple with Rayleigh waves. FIG. 1: 10 6 → 1 scaled analogy between the Earth crust and our ultrasonic experiment. In the crust, the scattering mean free path was estimated in Mexico [16] at 1 Hz: ℓ * = 30 km. In the aluminum waveguide we numerically found ℓ * = 5.5 mm at 1 MHz. So, why do we observe a Rayleigh wave train in the correlation of waves generated by bulk sources? The Green's Function retrieval is linked to equipartition, which is due to wave scattering. So, what is the influence of scattering within the Earth crust in that process? Here we present experimental results obtained in the lab with laser-induced and laser-detected ultrasonic waves propagating in a heterogeneous slab mimicking the Earth crust. Lots of previous experimental articles applied the "passive imaging" technique to acoustic waves. Here we propose to investigate the emergence of the Green's function in the correlations of elastic waves propagating in a solid heterogeneous medium with a free surface. Because field experiments are tedious and natural environment are mostly unknown (especially the scattering properties), we propose to build an Earth crust model at the scale 1/10 -6 (presented in fig. 1). Waves will be sensed at ultrasonic frequencies at the free-surface of an elastic open medium. In our experiment surface waves are not initially excited. This is different from the work of Malcolm et al [7], where ultrasonic Rayleigh waves were generated on the same surface they were measured. In addition they used a finite cylindrical medium with possibly round trip wave trains whereas our experiment is conducted in a nearly open medium. In our configuration, we would not expect the Rayleigh wave to be reconstructed in the correlation, except if scattering is present and mode conversion between Rayleigh and bulk waves occurs. 
To verify this assumption and study the role of mode conversion, we used two samples of identical dimensions, one with scatterers the other without. The next section describes the experimental setup and the propagation medium. Section III presents a short theoretical study of the scattering properties (scatteringcross sections σ of a single scatterer and the transport mean-free paths ℓ * P and ℓ * S of the heterogeneous sample). This study is supported by experimental measurements. In section IV, the passive imaging technique is applied to records acquired at the free surface. Time and frequency analysis are proposed, and a brief discussion on the timesymmetry of the correlations concludes the article. II. EXPERIMENTAL SET-UP FIG. 2: Experimental setup : The propagation medium is an aluminum block drilled with 54 vertical cylindrical holes (diameter: 4 mm). A Q-switched Nd:YAG laser shoots on the bottom side 100 pulses (24 ns duration and 280 MW/cm 2 intensity each). On the top side a heterodyne interferometer senses the vertical displacement of the aluminum/air interface. This measure is repeated at 7 different locations (X0 = 0 mm→ X6 = 60 mm) for each source position. Plasticine was stuck on the edges and the four lateral sides to mimic absorbing boundary conditions and avoid the generation of Rayleigh waves by mode conversion at the edges. The experimental setup is depicted in fig. 2. It was designed to mimic the propagation and scattering of elastic waves through the Earth crust, at ultrasonic frequencies (0.8-3.2 MHz). A duraluminum slab whose dimensions are roughly 10 -6 those of the crust was used. Fifty-four cylindrical holes (radius a = 2 mm) were drilled at random along direction y so that the waves propagating through the slab could undergo multiple scattering. The 2-D spatial Fourier transform of the hole positions was calculated and is almost perfectly flat in the range of ultrasonic wavelengths involved in the experiment, which confirms the absence of spatial correlation between the holes. The density of scatterers was n = 0.0105 mm -2 . Ideally, the slab should have had infinite dimensions along x and y. To approach this condition, we stuck a thick layer of dense plasticine on the lateral sides and edges of the duraluminum sample. It was aimed at creating absorbing boundary conditions and avoiding the generation of Rayleigh waves by mode conversion at the edges. The energy decay time τ abs was initially found to be 23,000 µs. With the plasticine it decreased to 120 µs (see next section for a detailed discussion on the absorption time). We therefore simulated an open slab in the x and y direction with a free surface at the top. Rigorously, the earth crust is a waveguide that partially leaks energy through the Moho to the underlying mantle. To perfectly match this feature we should have placed a infinite medium with a different impedance at the bottom side of the aluminum slab. Yet we think that our results and conclusions do not suffer too much from this omission: the central point of our set-up is to mimic an elastic and scattering medium, infinite in the horizontal directions with a free surface on the top. The source we employed to simulate earthquakes was a Q-switched Nd:YAG laser that shot 24 ns pulses at the bottom side of the slab (each pulse energy : 9 mJ). Two regimes of elastic wave generation are possible with a laser source [START_REF] Scruby | Laser ultrasonics -Techniques and Application[END_REF]. 
When the surfacic intensity of the pulse is weak (< 15 MW/cm 2 ), the surface is locally heated up and its dilation creates Rayleigh waves (thermoelastic regime). At higher intensity, the laser evaporates part of the metal. In this ablation regime, both Rayleigh and bulk waves are generated. Theoretical radiation patterns are displayed on fig. 3 for compressional and shear waves. Rigorously, such a source is not truly reproducible since the laser impact can damage the surface. To make the experiment as reproducible as possible while staying in the ablation regime, the shot intensity was no more than 280 MW/cm 2 . A 1 ms record was acquired 100 times without any observable change. In the following experiments, for a satisfactory signal to noise ratio, each impulse response was averaged over 100 consecutive shots. As to the detection of the free surface motion, it was achieved with a contactless and quasi-punctual device: a heterodyne optical interferometer developed by Royer et al [START_REF] Royer | [END_REF] which has the advantage of a very broadband response (20 kHz-45 MHz) and a sensitivity of 10 -4 Å/ √ Hz. It was mounted perpendicularly to the slab and then provided us with the absolute vertical component of the free surface displacement (top side), with a fine spatial resolution; the size of the laser spot was ∼ 100 µm whereas the typical elastic wavelengths here are ranging between 1 and 10 mm. This is similar to seismology, where sensors are nearly punctual compared to the wavelengths considered (several kilometers at 1 Hz). However seismic sensors usually provide time records of the three components of the displacement field. Here the interferometer only measured the vertical movements of the free surface. In an elastic body, three different kinds of wave polarization are possible. Compressional (or longitudinal) waves are analogous to acoustic waves in fluids (velocity v P = 6.32 mm/µs in duraluminum). Shear (transverse) waves have two possible polarizations (velocity v S = 3.13 mm/µs): one we call SV (Vertical) in the x-z plane (see fig. 1) and one SH (Horizontal) in the x-y plane. SH waves have no contribution in the z direction and therefore will not be detected by the interferometer. In addition to bulk waves, surface waves exist but here only Rayleigh waves (v R = 2.9 mm/µs) will be taken into account since the others cannot be detected (no vertical displacement). The shortest wavelength in the aluminium slab is 0.9 mm, which is much greater than the duraluminum alloy grain size. Since the orientation of the grains is random, we consider the alloy to be isotropic for elastic waves in the frequency band of interest. Scattering at the grain edge is presumably also negligible compared to scattering by the void cylinders. The overall translational symmetry along y of both the free surface and the cylindrical scatterers avoid any coupling between SH mode and the other SV and P modes. Therefore SH waves will not be considered in our article and SV waves will be referred to as S (shear) waves. The wave propagation in our experiment will be treated as 2-dimensional (and quasi 1-D for the surface Rayleigh waves). Waves initially propagating in the y direction are rapidly absorbed by the plasticine and lost. The laser source and the laser interferometer could be translated independently: 35 sources and 7 sensing positions were used during the experiment, providing us with a set of 35 × 7 impulse responses. A typical waveform is depicted on fig. 4. 
Around 1 MHz it lasts nearly 800 µs and shows a long diffusive decay comparable to the seismic coda. Due to the strong scattering off the cylindrical cavities, no top-bottom reflection was observed in the data. This confirms the highly diffusive nature of the propagation in the scattering slab. The relevant scattering properties are discussed and evaluated in the next section.

FIG. 4: … Hz, corresponding to the optimal level of optical reflection on the sensed surface. After averaging over 100 records we reached a precision of 10⁻² Å. Bottom: intensity averaged over several source/sensor positions.

III. WAVE SCATTERING AND TRANSPORT PROPERTIES

A. Scattering cross-section of an empty cylinder

In order to evaluate the amount of scattering and mode conversion, we calculated the differential cross-section ∂σ/∂θ and the total scattering cross-section σ of a cylindrical void in an elastic medium excited by a compressional or shear plane wave. A brief description of the calculation is given in the Appendix. For a detailed derivation we refer to [21,22,24]. The differential scattering cross-section gives the angular distribution of the scattered surfacic intensity, normalized by the incident surfacic intensity. The total elastic cross-section is σ = ∫ (∂σ/∂θ) dθ. In 2D it has the dimensions of a length. It corresponds to the scattering strength of an object at a given frequency. In an elastic medium, mode conversion can occur and different cross-sections must be considered. In the case of an incident compressional wave, they are noted σ_PP, σ_PS and σ_P = σ_PP + σ_PS, respectively for the P to P, P to S and total P elastic cross-sections. We also calculated the elastic cross-sections for an incident shear wave (S), σ_SP, σ_SS and σ_S = σ_SP + σ_SS. The differential cross-sections are plotted in fig. 5; the cross-sections were evaluated for frequencies ranging from 0.1 MHz to 200 MHz. On average, in the frequency band of interest (0.8–3.2 MHz), we obtained σ_P = 9.2 mm. This value is comparable to measurements by White [25]. The same calculations were conducted for an incident shear wave; we found an average of σ_S = 12 mm in the 0.8–3.2 MHz frequency band.

FIG. 6: Elastic (σ) scattering cross-sections calculated for shear (S) and compressional (P) plane waves impinging on a cylindrical void of diameter 4 mm. Between 0.8 and 3.2 MHz, scattering is stronger for shear waves. At high frequencies the elastic cross-sections tend to the limit of twice the geometrical diameter.

B. Transport properties

When the elastic wave propagates through the aluminum slab drilled with holes, it undergoes multiple scattering. Let ϕ(t) (resp. ϕ_0(t)) be the vertical displacements sensed at the free surface through the scattering (resp. homogeneous) medium. Classically, this field is split into two contributions: the coherent and the incoherent part. The coherent wave is the ensemble-averaged field ⟨ϕ(t)⟩ (averaged over disorder configurations, here the cylinder positions). We underline the difference between the coherent and the ballistic wave (i.e. the first arrival). For a detailed discussion of scattering effects on coherent and ballistic waves, see [26]. Away from resonances, the coherent wave can be roughly thought of as an attenuated version of the direct wavefront ϕ_0(t). When there is no intrinsic dissipation, the energy of the coherent wave decays with the slab thickness H as e^(−H/ℓ), where ℓ is the elastic mean free path.
Assuming a dilute set of scatterers, the elastic mean free path is simply related to the scatterer density n and the elastic cross-section σ:

ℓ = 1/(nσ)

From the theoretical scattering cross-section calculated above, we find ℓ_P = 10.3 mm. In order to measure the mean free path experimentally, we used two aluminum slabs of exactly the same dimensions. The first served as a reference and provided measurements of ϕ_0(t) for different source-sensor positions. The second one was drilled with holes. By translating the source-receiver device along the slab, we achieved something very similar to a configurational averaging and measured the energy of the coherent wave ⟨ϕ(t)⟩². Between 0.8 and 3.2 MHz, we obtained ℓ_P = 9 ± 2.5 mm from these experiments. The intensity of the incoherent part was also studied. The time evolution of the averaged incoherent intensity I(t) = ⟨ϕ(t)²⟩ is governed by another parameter: the transport mean free path ℓ*. In an elastic body, transport quantities have been theoretically defined by [27]:

ℓ*_P = (1/n) [σ_S − σ*_SS + σ*_PS] / [(σ_P − σ*_PP)(σ_S − σ*_SS) − σ*_PS σ*_SP]   (1)
ℓ*_S = (1/n) [σ_P − σ*_PP + σ*_SP] / [(σ_P − σ*_PP)(σ_S − σ*_SS) − σ*_PS σ*_SP]   (2)

with σ* = ∫ (∂σ/∂θ) cos(θ) dθ. It was evaluated numerically: ℓ*_P ≈ ℓ*_S = 5.5 mm. In an experiment, this parameter is very hard to measure with reasonable precision. The coherent backscattering effect [28,29,30,31] (also referred to as weak localization) does give a direct estimation of the transport mean free path, but our experimental configuration did not allow this particular measurement since we could not place a laser sensor in the vicinity of the laser source. Yet we checked that the experimental intensity decay I(t) gives an order of magnitude for ℓ* that is consistent with the theoretical value. For the sake of simplicity we propose a 2-D scalar wave model for I(t) [32], under the diffusion approximation. In an infinite slab of thickness H with perfect reflections on both sides, the averaged transmitted intensity reads:

I(X, t) = I_0 [ 1/(2H√(πDt)) + Σ_{n=1}^{∞} ((−1)^n/(H√(πDt))) exp(−n²π²Dt/H²) ] exp(−X²/(4Dt) − t/τ_abs)

with τ_abs the absorption time (taking into account the intrinsic absorption in the aluminum and the lateral leaking due to the plasticine) and D the diffusion constant. X is the lateral distance between source and receiver. This formula is obtained using a modal decomposition of the diffusion equation in the z direction. The intensity I(t) is a mix of compressional and shear waves, each mode traveling with its own parameters (velocity, ℓ*, diffusion constant) and interchanging energy through scattering events. In our experiment ℓ*_S and ℓ*_P are of the same order. The diffusion constant was approximated by

D = [v_P ℓ*_P + 2 v_S (v_P/v_S)² ℓ*_S] / [2(1 + 2v_P²/v_S²)] ≈ 40 mm²/µs.

This assumption is valid after a couple of mean free times, when the equipartition regime [33] is set. Equipartition means that the densities of compressional and transverse modes equilibrate. Considering the specific velocities of each mode [34], we infer that 80% of the energy is transported by S waves and only 19% by P waves (with an additional 1% for surface waves). Hence our best fit (fig. 7) of the intensity decay in the coda gives τ_abs = 120 µs ± 10% and ℓ* = 5-20 mm.
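As an illustration of how the diffusion model can be fitted to the measured coda intensity, the following Python sketch evaluates a truncated version of the modal sum above and adjusts I_0, D and τ_abs by least squares. The lateral distance X, the truncation order, the placeholder data and the parameter bounds are our own assumptions, not values from the original processing.

import numpy as np
from scipy.optimize import curve_fit

H = 24.0          # slab thickness (mm), as in fig. 7
X = 30.0          # assumed lateral source-receiver distance (mm)

def diffuse_intensity(t, I0, D, tau_abs, n_max=50):
    """2-D diffusion model with perfectly reflecting faces (truncated modal sum in z)."""
    t = np.asarray(t, dtype=float)
    s = 1.0 / (2.0 * H * np.sqrt(np.pi * D * t))
    for n in range(1, n_max + 1):
        s += (-1.0) ** n / (H * np.sqrt(np.pi * D * t)) * np.exp(-n**2 * np.pi**2 * D * t / H**2)
    return I0 * s * np.exp(-X**2 / (4.0 * D * t) - t / tau_abs)

# t_data (µs) and I_data would be the smoothed, source/sensor-averaged coda intensity;
# here they are synthetic placeholders generated from the model itself.
t_data = np.linspace(20.0, 600.0, 200)
I_data = diffuse_intensity(t_data, 1.0, 40.0, 120.0) * (1 + 0.05 * np.random.randn(t_data.size))

popt, _ = curve_fit(diffuse_intensity, t_data, I_data,
                    p0=(1.0, 30.0, 100.0),                       # guesses for I0, D, tau_abs
                    bounds=([0, 1, 10], [np.inf, 200, 1000]))
print("I0 = %.2f, D = %.1f mm^2/us, tau_abs = %.0f us" % tuple(popt))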
IV. TWO-POINT CORRELATION OF DIFFUSE FIELDS

In this section we focus on the experimental reconstruction of the direct Green's function from "passive" correlations. The main idea is to correlate diffuse fields sensed at two different locations on the top side when a source generates bulk waves at the bottom. Since we record the vertical component of the surface displacements, the two-point correlations should simulate a vertical source at the surface, which mainly generates surface waves. Indeed, the experimental correlations we obtained reveal a wave packet that travels at the speed of a Rayleigh wave. We insist that our sources do not generate surface waves at the top side of the slab. Moreover, if a surface wave happened to be generated anywhere, it would be completely absorbed by the plasticine. Under these conditions no Rayleigh wave should travel on the top surface, and no Rayleigh wave should be passively retrieved by correlations. Why then should passive imaging give rise to a Rayleigh wave train in our experiment? We propose first to examine the role of scatterers in the emergence of the direct Rayleigh wavefront in the correlations. To that end we separately correlated coda records obtained through two different aluminum slabs: the first drilled with holes, the second without. Each impulse response lasted ≈ 800 µs before reaching the noise level (see fig. 4). We underline that these record lengths are far from the Heisenberg time (break time) at which the modes of the aluminum block would be resolved (here T_H ≈ 10⁶ µs) and correlations would naturally converge to the Green's function; this modal approach is irrelevant to our experiment. The records were correlated and averaged over the 35 available sources. For the scattering slab, this reads:

C_ij(τ) = Σ_{S=1}^{35} ∫_{t=0 µs}^{t=600 µs} ϕ(S, X_i, t) ϕ(S, X_j, t + τ) dt

where X_i and X_j are the sensor positions (running from X_0 = 0 mm to X_6 = 60 mm along the array). And for the homogeneous slab:

C⁰_ij(τ) = Σ_{S=1}^{35} ∫_{t=0 µs}^{t=600 µs} ϕ_0(S, X_i, t) ϕ_0(S, X_j, t + τ) dt

To enhance the signal-to-noise ratio, each correlation is time symmetrized (= C(+τ) + C(−τ)) and normalized by its maximum. Results are displayed in figs. 8 and 9. A propagating wavefront (traveling at the Rayleigh wave velocity v_R = 2.9 mm/µs) is clearly visible in the presence of scatterers, whereas it does not appear in the homogeneous slab. We also summed the 6 normalized propagating peaks after having delayed each signal according to the Rayleigh wave travel time. The summation is displayed in the enclosed box on each figure. In the scattering slab, its amplitude nearly corresponds to the coherent addition of 6 pulses. In the homogeneous device, the amplitude of the summation is ≈ 2.5 (incoherent addition of 6 uncorrelated fields). We conclude on the necessity of mode coupling due to scattering for the Rayleigh wave to emerge from the passive correlations of diffuse fields generated by bulk wave sources. This is especially relevant for applications to seismology. Therefore, it appears once again [11] that the role of scattering is crucial in "passive imaging". Firstly, because of multiple scattering, at late times the equipartition regime can be attained whatever the source/receiver positions. Secondly, because of mode conversions due to the scatterers, a Rayleigh wave emerges from the passive correlations even though no Rayleigh wave was generated by the sources. Note that the bandwidth in the upper band record is a little wider than in the lower band; this was done to compensate for the coda shortening (< 600 µs) so that the product T∆f is kept constant. We can go a little further and catch a glimpse of the time symmetry properties.
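Before turning to the time-symmetry analysis, the correlation procedure above can be made concrete with a short Python sketch; the array shapes, sampling step and placeholder data are illustrative assumptions and not the processing chain actually used in the experiment.

import numpy as np
from scipy.signal import fftconvolve

# phi[s, i, :] : vertical displacement recorded at sensor X_i for source s
# (35 sources, 7 sensors, coda truncated to the first 600 µs).
n_sources, n_sensors, n_samples = 35, 7, 60000        # assumed 10 ns sampling step
phi = np.random.randn(n_sources, n_sensors, n_samples)  # placeholder data

def cross_correlation(a, b):
    """Full cross-correlation C(tau) = sum_t a(t) b(t + tau)."""
    return fftconvolve(b, a[::-1], mode="full")

def passive_correlation(phi, i, j):
    c = np.zeros(2 * n_samples - 1)
    for s in range(n_sources):                 # average over the 35 sources
        c += cross_correlation(phi[s, i], phi[s, j])
    half = n_samples - 1                       # index of zero lag
    sym = c[half:] + c[half::-1]               # time symmetrization C(+tau) + C(-tau)
    return sym / np.abs(sym).max()             # normalization by the maximum

C_01 = passive_correlation(phi, 0, 1)          # e.g. sensors X0 and X1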
We performed the correlations over two consecutive time windows, from 0 to 45 µs and from 45 µs to 600 µs, and did not time-symmetrize the correlations (figs. 10 and 11). 45 µs is twice the time after which the diffuse energy spreads homogeneously along the array of receivers (length L = 30 mm): L²/4D ≈ 22 µs in an open 2D scattering medium. The time series were filtered in two frequency bands: 0.8-1.6 MHz and 1.6-3.2 MHz. In the first time window, from 0 to 45 µs, correlations are asymmetric in time. This means that the causal part (τ > 0) and the acausal part (τ < 0) of the correlations are different (see left part of figs. 10 and 11). In the causal part, a Rayleigh wavefront is clearly visible, whereas noise dominates the acausal part. This is due to the preferential direction of Rayleigh wave propagation (waves traveling from X_0 to X_3 in our experiment). There is a net flux of energy from X_0 to X_1, X_2 and X_3 (distances 10, 20 and 30 mm in figs. 10 and 11). This flux is due to the uneven distribution of sources with respect to the receiver couples X_0-X_1, X_0-X_2 and X_0-X_3: most of the sources are on the X_0 side. At the early times of the coda (from 0 to 45 µs), the diffusion regime is not yet attained. Later in the coda, from 45 µs to 600 µs, the wave field in the bulk of the aluminum slab is very likely to be equipartitioned. In the low frequency band (0.8-1.6 MHz), the time symmetry of the correlation is indeed restored [6,[START_REF] Paul | [END_REF] (see right part of fig. 10): Rayleigh waves travel in all directions. Nevertheless, and surprisingly, the asymmetry persists in the high frequency band (from 1.6 MHz to 3.2 MHz, see right part of fig. 11). To interpret this observation, we have carefully studied the location of the scattering sources around the array. In the high frequency band, the Rayleigh wavelength is ∼ 1.5 mm. The generation of Rayleigh waves by scattering necessarily occurs in the first half-wavelength beneath the free surface [START_REF] Maeda | Proceedings of the Workshop on 'Probing Earth Media Having Small-Scale Heterogeneities[END_REF]. In our scattering slab, one hole nearly broke the surface (position X < 0), another one was 0.61 mm beneath it (position X > 60 mm), the others being located much deeper. Rayleigh wave trains are mainly generated by the hole nearest to the surface, then propagate along the array of receivers (from X_0 to X_6). These waves are hardly perturbed (attenuated) until they reach the edges of the slab and the absorbing plasticine. They contribute to a very clear propagating pulse in the positive part of the correlation. The weak coupling due to the deeper hole (z = -0.61 mm) on the X_6 side contributes to a smaller pulse propagating from X_6 to X_0 in the negative part of the correlations. This interpretation is in agreement with observations in the low frequency band (0.8-1.6 MHz), where the average Rayleigh wavelength is 3 mm (fig. 10). At least 5 holes are present in the first half wavelength and should cause significant scattering of Rayleigh waves and coupling between surface and bulk waves. This time, the holes are evenly distributed along the sensor array. In the late coda, correlations X_0 × X_1, X_0 × X_2 and X_0 × X_3 are nearly symmetric. The time symmetry is obtained thanks to scattering by a symmetric distribution of scatterers. Under these conditions, a global equipartition among bulk and surface waves is guaranteed.
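The band-pass filtering, time windowing and causal/acausal comparison described above can be sketched as follows; the corner frequencies, the sampling rate and the peak-ratio asymmetry measure are our own illustrative choices.

import numpy as np
from scipy.signal import butter, filtfilt, fftconvolve

fs = 100.0e6                     # assumed sampling frequency (Hz)

def bandpass(x, f_low, f_high, fs, order=4):
    b, a = butter(order, [f_low / (fs / 2), f_high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def windowed_correlation(xi, xj, t_start_us, t_stop_us, fs):
    """Correlate two coda records restricted to the [t_start, t_stop] window."""
    i0, i1 = int(t_start_us * 1e-6 * fs), int(t_stop_us * 1e-6 * fs)
    a, b = xi[i0:i1], xj[i0:i1]
    return fftconvolve(b, a[::-1], mode="full")    # C(tau) = sum_t a(t) b(t+tau)

def asymmetry(c):
    """Ratio of causal to acausal peak amplitude (1 means time-symmetric)."""
    half = len(c) // 2
    return np.abs(c[half:]).max() / np.abs(c[:half + 1]).max()

# Example use: compare the 0-45 µs and 45-600 µs windows in the 1.6-3.2 MHz band,
# where xi and xj would be the records at X0 and X3 after band-pass filtering.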
Furthermore, the reconstructed surface wave is strongly attenuated along its path because it senses many scatterers along the array. In addition, these scatterers contribute signals around τ = 0 in the correlations, which degrade the reconstruction. We think this interpretation explains why the symmetric wavefront is much noisier at low frequencies (fig. 8), where many cavities are encountered in the first half wavelength, than at higher frequencies where the Rayleigh wavefront can propagate freely (fig. 10). Finally, we comment on the possible misidentification of the waves that are reconstructed in our experiment. Indeed, if many scatterers are present within a wavelength, the overall wave velocity may be different (effective medium). In the heterogeneous plate, a reduced-speed shear wave might propagate with the same wavespeed as a Rayleigh wave in the bulk aluminium plate (i.e. without cavities). In our experiment, the hole interspacing is 10 mm on average, which is larger than the largest ultrasonic wavelength. Thus, it is reasonable to assume that the shear waves propagate in the aluminium plate with the same wavespeed as in the bulk. The measured wave velocity is 2.9 mm/µs at all frequencies (from 0.8 MHz to 3.2 MHz) and indeed corresponds to the velocity of the Rayleigh wave.

V. CONCLUSION

In this paper, we presented laboratory experiments of elastic wave propagation in heterogeneous media at ultrasonic frequencies. An aluminum slab was made quasi-infinite by the use of absorbing boundaries to mimic the Earth's crust; scattering was obtained by drilling cylindrical cavities. A relatively simple theoretical model for the wave scattering properties was proposed. Wave field generation and detection were achieved using contactless and quasi-punctual laser devices. The cross-correlation of diffuse fields was performed, allowing us to passively retrieve the Rayleigh wave between two sensors only when scattering was present. Without scatterers, and in the case of bulk wave generation and surface detection, no Rayleigh wave was reconstructed. This illustrates the role of scattering and mode conversion in the passive reconstruction of the Green's function. Analyses for different time windows and frequency bands were conducted. Previous works in acoustics [37] and seismology [17] observed that the Rayleigh wave reconstruction was harder with increasing frequency. Here we found that the Rayleigh wave reconstruction was more efficient in the high frequency band. In addition, we observed that even in the late coda, where waves are expected to be equipartitioned, asymmetry in the correlations may remain. Both observations are due to the very specific coupling between bulk and Rayleigh waves, which occurs if scatterers are present in the first wavelength beneath the free surface. We emphasize that equipartition of bulk waves does not always mean equipartition of surface waves. On the one hand, our experiments show the need for scattering to passively retrieve the impulse response between two sensors; on the other hand, they show that scattering occurring between the sensors degrades the reconstructed Rayleigh waves. The trade-off between the two effects should be further investigated. Though our experiment was designed for seismological applications, the results should be applicable to other fields of wave physics where both surface and bulk waves are present.

APPENDIX A: CALCULATION OF THE SCATTERING CROSS-SECTION OF A CYLINDRICAL CAVITY

Here follows a brief calculation of the wave field scattered by a cylindrical cavity insonified by a plane compressional wave.
Three displacement potentials are relevant: Φ_i is the displacement potential of the incident P wave, Φ_s is the scattered P wave potential and Ψ_s is the scattered S wave potential. They can be expanded as:

Φ_i(r, t) = Σ_{m=0}^{∞} ε_m i^m J_m(k_P r) cos(mθ) e^(−iωt)
Φ_s(r, t) = Σ_{m=0}^{∞} A_m H⁽¹⁾_m(k_P r) cos(mθ) e^(−iωt)
Ψ_s(r, t) = Σ_{m=1}^{∞} B_m H⁽¹⁾_m(k_S r) sin(mθ) e^(−iωt)

where J_m and H⁽¹⁾_m are respectively the Bessel and Hankel functions, both of first kind and of order m, and ε_m is the Neumann factor. Taking into account the null traction condition at the surface of the cylinder allows the calculation of the A_m and B_m coefficients. Those coefficients are given in [START_REF] Pao | Diffraction of Elastic Waves and Dynamic Stress Concentration[END_REF][START_REF] Faran | [END_REF]24]. The scattering cross-sections are

σ_P→P = (2/k_P) [ 2|A_0|² + Σ_{m=1}^{∞} |A_m|² ]
σ_P→S = (2/k_S) Σ_{m=1}^{∞} |B_m|²
σ_P = σ_P→P + σ_P→S

The corresponding differential scattering cross-sections are obtained from the same coefficients as angular partial-wave sums; the P-to-S term, for instance, is proportional to |Σ_{m=1}^{∞} B_m i^(−m) sin(mθ)|².
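As a rough numerical illustration, the partial-wave sums above reduce to a few lines of Python once the coefficients A_m and B_m have been obtained from the boundary conditions (following the references above); the sample coefficient values below are placeholders, not physical results.

import numpy as np

v_P, v_S = 6.32, 3.13            # wave speeds (mm/us)

def cross_sections(A, B, f_MHz):
    """Total P->P and P->S cross-sections (mm) from partial-wave coefficients.

    A : complex array, A[m] for m = 0..M (P->P coefficients)
    B : complex array, B[m] for m = 1..M, stored as B[0]..B[M-1] (P->S coefficients)
    """
    omega = 2.0 * np.pi * f_MHz          # rad/us
    k_P, k_S = omega / v_P, omega / v_S  # wavenumbers (1/mm)
    sigma_PP = (2.0 / k_P) * (2.0 * abs(A[0])**2 + np.sum(np.abs(A[1:])**2))
    sigma_PS = (2.0 / k_S) * np.sum(np.abs(B)**2)
    return sigma_PP, sigma_PS, sigma_PP + sigma_PS

# Placeholder coefficients (not physical values), just to show the call:
A = np.array([0.3 + 0.1j, 0.2 - 0.05j, 0.05 + 0.02j])
B = np.array([0.15 - 0.1j, 0.04 + 0.01j])
print(cross_sections(A, B, f_MHz=1.2))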
FIG. 3: Directivity pattern (linear scale) of the laser source. Rayleigh waves are not taken into account since they are absorbed at the edges. The experiments were carried out in the ablation regime, where bulk waves are much more energetic than in the thermoelastic regime. Compressional wave directivity: thick line; shear wave directivity: dotted line.

FIG. 4: Top: typical waveform obtained through the scattering slab around 1 MHz. The sensitivity of the heterodyne interferometer is 10⁻⁴ Å/√Hz at the optimal level of optical reflection on the sensed surface. After averaging over 100 records we reached a precision of 10⁻² Å. Bottom: intensity averaged over several source/sensor positions.

FIG. 5: Differential scattering cross-sections of a cylindrical cavity calculated for a compressional (P) or shear (S) incident plane wave around 1.2 MHz (top) and 2.4 MHz (bottom). The scattered wave is either compressional (P) or transverse (S). Each pattern is normalized by its maximum.

FIG. 7: Averaged intensity I(t) and theoretical fit for scalar wave diffusion in a 2-D semi-infinite slab with thickness H = 24 mm. ℓ* = 5.5 mm and τ_abs = 120 µs.

FIG. 8: Green's function reconstruction for different pairs of receivers. Correlations are averaged over the 35 sources and filtered in the 0.8-1.6 MHz frequency band. On the left (a) are displayed the 7 cross-correlations C_ij(τ) in the diffusive aluminum plate. On the right (b) the 7 cross-correlations C⁰_ij(τ) are obtained in an equivalent aluminum block without any hole, where the mode conversion possibly occurring at the edges was suppressed by the plasticine. The insets show the summation of the 6 wavefronts from X_0 = 10 mm to X_6 = 60 mm after a time-reduction based on the Rayleigh wave speed (dotted line, v_R = 2.9 mm/µs).

FIG. 9: Same figure as fig. 8 except correlations are filtered in the 1.6-3.2 MHz frequency band. The signal-to-noise ratio in (a) is increased compared to the results obtained at lower frequencies.

FIG. 10: Evolution of the asymmetry in the reconstructed Green's function in the low frequency regime (0.8-1.6 MHz), for early (a) and late (b) times. At early times, the waves are exciting scatterers on one side of the receiving array more than the other. The energy flux is clearly going from the source to the rest of the medium. After 45 µs the field is equipartitioned and the scattering halo fills the array, leading to a more symmetric correlation.

FIG. 11: Evolution of the asymmetry in the reconstructed Green's function in the high frequency regime (1.6-3.2 MHz), for early (a) and late (b) time windows. At early times, the waves are exciting scatterers preferably on one side of the receiving array. After 45 µs, bulk waves are expected to be equipartitioned, but not Rayleigh waves. Mode conversion to Rayleigh waves mainly occurs at one scattering cavity, leading to an anisotropic energy flux and an asymmetric correlation.

ACKNOWLEDGMENTS
The authors wish to thank Richard Weaver and Julien de Rosny for fruitful discussions, and Xavier Jacob and Samir Guerbaoui for experimental help. This work was supported by the Groupement de Recherche CNRS "Imagerie, Communication et Désordre" (GdR IMCODE 2253) and the CNRS program "DyETI".
01775032
en
[ "info", "info.info-ni" ]
2024/03/05 22:32:18
2015
https://inria.hal.science/hal-01775032/file/978-3-319-19129-4_1_Chapter.pdf
A C Resmi François Taiani email: [email protected] Fluidify: Decentralized Overlay Deployment in a Multi-Cloud World As overlays get deployed in large, heterogeneous systems-ofsystems with stringent performance constraints, their logical topology must exploit the locality present in the underlying physical network. In this paper, we propose a novel decentralized mechanism-Fluidify-for deploying an overlay network on top of a physical infrastructure while maximizing network locality. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying network. Simulation results show that in a network of 25,600 nodes, Fluidify is able to produce an overlay with links that are on average 94% shorter than that produced by a standard decentralized approach based on slicing, while demonstrating a sub-linear time complexity. Introduction Overlays are increasingly used as a fundamental building block of modern distributed systems, with numerous applications [START_REF] Frey | Heterogeneous gossip[END_REF][START_REF] Gupta | Meghdoot: content-based publish/subscribe over p2p networks[END_REF][START_REF] Kermarrec | Xl peer-to-peer pub/sub systems[END_REF][START_REF] Lakshman | Cassandra: a decentralized structured storage system[END_REF][START_REF] Li | Inside the new coolstreaming: Principles, measurements and performance implications[END_REF][START_REF] Ratnasamy | A scalable contentaddressable network[END_REF][START_REF] Stoica | Chord: A scalable peer-to-peer lookup service for internet applications[END_REF]. Unfortunately, many popular overlay construction protocols [START_REF] Bertier | The gossple anonymous social network[END_REF][START_REF] Jelasity | T-man: Gossip-based fast overlay topology construction[END_REF][START_REF] Voulgaris | Epidemic-style management of semantic overlays for content-based searching[END_REF] do not usually take into account the underlying network infrastructure on which an overlay is deployed, and those that do tend to be limited to a narrow family of applications or overlays [START_REF] Xu | Building topology-aware overlays using global softstate[END_REF][START_REF] Zhang | A construction of localityaware overlay network: moverlay and its performance[END_REF]. This is particularly true of systems running in multiple clouds, in which latency may vary greatly, and ignoring this heterogeneity can have stark implications in terms of performance and latency. In the past, several works have sought to take into account the topology of the underlying infrastructure to realise network-aware overlays [START_REF] Ratnasamy | Topologically-aware overlay construction and server selection[END_REF][START_REF] Waldvogel | Efficient topology-aware overlay network[END_REF][START_REF] Xu | Building topology-aware overlays using global softstate[END_REF][START_REF] Zhang | A construction of localityaware overlay network: moverlay and its performance[END_REF]. However, most of the proposed solutions are service-specific and they do not translate easily to other overlays. To address this lack, we propose a novel decentralized mechanism-called Fluidify-that seeks to maximize network locality when deploying an overlay network. Fluidify uses a dual strategy that exploits both the logical links of an overlay and the physical topology of its underlying infrastructure to progressively align one with the other. Our approach is fully decentralized and does not assume any global knowledge or central form of co-ordination. 
The resulting protocol is generic, efficient, scalable. Simulation results show that in a network of 25,600 nodes, Fluidify is able to produce an overlay with links that are on average 94% shorter than that produced by a standard decentralized approach based on slicing, while converging to a stable configuration in a time that is sub-linear (≈ O(n 0.6 )) in the size of the system. The remainder of the paper is organized as follows. We first present the problem we address and our intuition (Sec. 2). We then present our algorithm (Sec. 3), and its evaluation (Sec. 4). We finally discuss related work (Sec. 5), and conclude (Sec. 6). Background, Problem, and Intuition Overlay networks organize peers in logical topologies on top of an existing network to extend its capabilities, with application to storage [START_REF] Ratnasamy | A scalable contentaddressable network[END_REF][START_REF] Stoica | Chord: A scalable peer-to-peer lookup service for internet applications[END_REF], routing [START_REF] Gupta | Meghdoot: content-based publish/subscribe over p2p networks[END_REF][START_REF] Kermarrec | Xl peer-to-peer pub/sub systems[END_REF], recommendation [START_REF] Bertier | The gossple anonymous social network[END_REF][START_REF] Voulgaris | Epidemic-style management of semantic overlays for content-based searching[END_REF], and streaming [START_REF] Frey | Heterogeneous gossip[END_REF][START_REF] Li | Inside the new coolstreaming: Principles, measurements and performance implications[END_REF]. Although overlays were originally proposed in the context of peer-to-peer (P2P) systems, their application today encompasses wireless sensor networks [START_REF] Grace | Experiences with open overlays: A middleware approach to network heterogeneity[END_REF] and cloud computing [START_REF] Decandia | Dynamo: amazon's highly available key-value store[END_REF][START_REF] Lakshman | Cassandra: a decentralized structured storage system[END_REF]. The problem: building network-aware overlays One of the challenges when using overlays, in particular structured ones, is to maintain desirable properties within the topology, in spite of failures, churn, and request for horizontal scaling. This challenge can be addressed through decentralized topology construction protocols [START_REF] Jelasity | T-man: Gossip-based fast overlay topology construction[END_REF][START_REF] Leitao | Epidemic broadcast trees[END_REF][START_REF] Montresor | Chord on demand[END_REF][START_REF] Voulgaris | Epidemic-style management of semantic overlays for content-based searching[END_REF], which are scalable and highly flexible. Unfortunately, such topology construction solutions are not usually designed to take into account the infrastructure on which an overlay is deployed. This brings clear advantages in terms of fault-tolerance, but is problematic from a performance perspective, as overlay links may in fact connect hosts that are far away in the physical topology. This is particularly likely to happen in heterogeneous systems, such as multi-cloud deployment, in which latency values might vary greatly depending on the location of individual nodes. For instance, Fig. 1(a) depicts a randomly connected overlay deployed over two cloud providers (rounded rectangles). All overlay links cross the two providers, which is highly inefficient. By contrast, in Fig. 1(b), the same logical overlay only uses two distant links, and thus minimizes latency and network costs. 
This problem has been explored in the past [START_REF] Qiu | Towards location-aware topology in both unstructured and structured p2p systems[END_REF][START_REF] Ratnasamy | Topologically-aware overlay construction and server selection[END_REF][START_REF] Waldvogel | Efficient topology-aware overlay network[END_REF][START_REF] Xu | Building topology-aware overlays using global softstate[END_REF][START_REF] Zhang | A construction of localityaware overlay network: moverlay and its performance[END_REF], but most of the proposed solutions are either tied to a particular service or topology, or limited to unstructured overlays, and therefore cannot translate to the type of systems we have just mentioned, which is exactly where the work we present comes in.

Our intuition: a dual approach

Our proposal, Fluidify, uses a dual strategy that exploits both an overlay's logical links and its physical topology to incrementally optimize its deployment. We model a deployed overlay as follows: each node possesses a physical index, representing the physical machine on which it runs, and a logical index, representing its logical position in the overlay. Each node also has a physical and a logical neighbourhood: the physical neighbors of a node are its d closest neighbors in the physical infrastructure, according to some distance function d_net() that captures the cost of communication between nodes. The logical neighbors of a node are the node's neighbors in the overlay being deployed. For simplicity's sake, we model the physical topology as an explicit undirected graph between nodes, with a fixed degree. We take d to be the fixed degree of the graph, and the distance function to be the number of hops in this topology.

Fig. 2. Example of the basic Fluidify approach on a system with n=6 and d=2

Fig. 2(a) shows an initial configuration in which the overlay has been deployed without taking into account the underlying physical infrastructure. In this example, both the overlay (solid line) and the physical infrastructure (represented by the nodes' positions) are assumed to be rings. The two logical indices 0 and 1 are neighbors in the overlay, but are diametrically placed in the underlying infrastructure. By contrast, Fig. 2(c) shows an optimal deployment in which the logical and physical links overlap. Our intuition, in Fluidify, consists of exploiting both the logical and physical neighbors of individual nodes, in a manner inspired by epidemic protocols, to move from the configuration of Fig. 2(a) to that of Fig. 2(c). Our basic algorithm is organized in asynchronous rounds and implements a greedy approach as follows: in each round, each node n randomly selects one of its logical neighbors (noted p) and considers the physical neighbor of p (noted q) that is closest to itself. n evaluates the overall benefit of exchanging its logical index with that of q. If the benefit is positive, the exchange occurs (Fig. 2(b) and then Fig. 2(c)). Being a greedy algorithm, this basic strategy carries the risk of ending in a local minimum (Fig. 3). To mitigate such situations, we use simulated annealing (taking inspiration from recent works on epidemic slicing [START_REF] Pasquet | Autonomous multi-dimensional slicing for large-scale distributed systems[END_REF]), resulting in a decentralized protocol for the deployment of overlay networks that is generic, efficient and scalable.
3 The Fluidify algorithm

System model

We consider a set of nodes N = {n_1, n_2, .., n_N} in a message passing system. Each node n possesses a physical (n.net) and a logical index (n.data). n.net represents the machine on which a node is deployed. n.data represents the role n plays in the overlay, e.g. a starting key in a Chord ring [START_REF] Montresor | Chord on demand[END_REF][START_REF] Stoica | Chord: A scalable peer-to-peer lookup service for internet applications[END_REF]. Table 1 summarizes the notations we use. We model the physical infrastructure as an undirected graph G_net = (N, E_net), and capture the proximity of nodes in this physical infrastructure through the distance function d_net(). In a first approximation, we use the hop distance between two nodes in G_net for d_net(), but any other distance would work. Similarly, we model the overlay being deployed as an undirected graph G_data = (N, E_data) over the nodes N. Our algorithms use the k-NN neighborhood of a node n in a graph G_x, i.e. the k nodes closest to n in hop distance in G_x, which we note Γ_x^k(n). We assume that these k-NN neighborhoods are maintained with the help of a topology construction protocol [START_REF] Bertier | The gossple anonymous social network[END_REF][START_REF] Jelasity | T-man: Gossip-based fast overlay topology construction[END_REF][START_REF] Voulgaris | Epidemic-style management of semantic overlays for content-based searching[END_REF]. In the rest of the paper, we discuss and evaluate our approach independently of the topology construction used, to clearly isolate its workings and benefits. Under the above model, finding a good deployment of G_data onto G_net can be seen as a graph mapping problem, in which one seeks to minimize the cost function Σ_{(n,m)∈E_data} d_net(n, m).

Fluidify

The basic version of Fluidify (termed Fluidify (basic)) directly implements the ideas discussed in Sec. 2.2 (Fig. 4): each node n first chooses a random logical neighbor (noted p, line 2), and then searches for the physical neighbor of p (noted q) that offers the best reduction in cost (argmin operator at line 3)¹. The code shown slightly generalises the principles presented in Sec. 2, in that the nodes p and q are chosen beyond the 1-hop neighborhoods of n and p (lines 2 and 3), and may be up to k_data and k_net hops away, respectively.

1: In round(r) do
2:   p ← random node from Γ_data^k_data(n)
3:   q ← argmin_{u ∈ Γ_net^k_net(p)} ∆(n, u)
4:   conditional_swap(n, q, 0)
Fig. 4. Fluidify (basic)

The potential cost reduction is computed by the procedure ∆(n, u) (lines 5-8), which returns the cost variation if n and u were to exchange their roles in the overlay. The decision whether to swap is made in conditional_swap(n, q, δ_lim) (with δ_lim = 0 in Fluidify (basic)).

5: Procedure ∆(n, u)
6:   δ_n ← Σ_{(n,r)∈E_data} d_net(u, r) − Σ_{(n,r)∈E_data} d_net(n, r)
7:   δ_u ← Σ_{(u,r)∈E_data} d_net(n, r) − Σ_{(u,r)∈E_data} d_net(u, r)
8:   return δ_n + δ_u
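As an illustration, one round of Fluidify (basic) could be implemented along the following lines in Python, assuming the physical and logical graphs are available as networkx graphs, that role[x] gives the logical index hosted by machine x and host[l] is its inverse; these names and data structures are ours, not the authors' implementation.

import random
import networkx as nx

def hop(G, a, b):
    return nx.shortest_path_length(G, a, b)

def knn(G, n, k):
    """The k nodes closest to n in hop distance in graph G (excluding n itself)."""
    dist = nx.single_source_shortest_path_length(G, n)
    return [m for _, m in sorted((d, m) for m, d in dist.items() if m != n)[:k]]

def delta(G_net, G_data, role, host, x, y):
    """Cost variation if machines x and y exchange their logical indices."""
    cost = 0
    for b in G_data[role[x]]:                        # logical neighbours of x's role
        cost += hop(G_net, y, host[b]) - hop(G_net, x, host[b])
    for b in G_data[role[y]]:                        # logical neighbours of y's role
        cost += hop(G_net, x, host[b]) - hop(G_net, y, host[b])
    return cost

def basic_round(G_net, G_data, role, host, x, k_data=16, k_net=16):
    logical_nbrs = [host[b] for b in knn(G_data, role[x], k_data)]
    p = random.choice(logical_nbrs)                  # machine hosting a random logical neighbour
    q = min(knn(G_net, p, k_net), key=lambda u: delta(G_net, G_data, role, host, x, u))
    if delta(G_net, G_data, role, host, x, q) < 0:   # conditional swap with threshold 0
        role[x], role[q] = role[q], role[x]
        host[role[x]], host[role[q]] = x, q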
To mitigate the risk of local minima, we extend the basic protocol with simulated annealing [START_REF] Pasquet | Autonomous multi-dimensional slicing for large-scale distributed systems[END_REF], which allows two nodes to be swapped even if there is an increase in the cost function. We call the resulting protocol Fluidify (SA), shown in Fig. 5. In this version, we swap nodes if the change in the cost function is less than a limit, ∆_limit(r), that gradually decreases to zero as the rounds progress (line 4). ∆_limit(r) is controlled by two parameters: K_0, the initial threshold value, and r_max, the number of rounds over which the threshold is decreased to 0. In the remainder of this paper, we use Fluidify to mean Fluidify (SA).

1: In round(r) do
2:   p ← random node from Γ_data^k_data(n)
3:   q ← argmin_{u ∈ Γ_net^k_net(p)} ∆(n, u)
4:   conditional_swap(n, q, ∆_limit(r))
5: Procedure ∆_limit(r)
6:   return max(0, K_0 × (1 − r/r_max))
Fig. 5. Fluidify (SA)

4 Evaluation

Experimental Setting and Metrics

Unless otherwise indicated, we use rings for both the infrastructure graph G_net and the overlay graph G_data. We assume that the system has converged when it remains stable for 10 rounds. The default simulation scenario is one in which the system consists of 3200 nodes and uses 16-NN logical and physical neighborhoods (k_net = k_data = 16) when selecting p and q. The initial threshold value for simulated annealing (K_0) is taken as |N|. r_max is taken as |N|^0.6, where the exponent 0.6 was chosen based on the analysis of the number of rounds Fluidify (basic) takes to converge. We assess the protocols using two metrics:
- Proximity captures the quality of the overlay constructed by the topology construction algorithm; a lower value denotes a better quality.
- Convergence time measures the number of rounds taken by the system to converge.
Proximity is defined as the average network distance of logical links, normalized by the diameter of the network graph G_net:

proximity = E_{(n,m)∈E_data}[ d_net(n, m) ] / diameter(G_net)   (1)

where E represents the expectation operator, i.e. the mean of a value over a given domain, and diameter() returns the longest shortest path between pairs of vertices in a graph, i.e. its diameter. In a ring, it is equal to N/2.

Baselines

The performance of our approach is compared against three other approaches. The first is Randomized (SA) (Fig. 6), where each node considers a set of random nodes from N for a possible swap. The second is inspired by epidemic slicing [START_REF] Jelasity | Ordered slicing of very large-scale overlay networks[END_REF][START_REF] Pasquet | Autonomous multi-dimensional slicing for large-scale distributed systems[END_REF], and only considers the physical neighbors of a node n for a possible swap (Slicing (SA), Fig. 8). The third approach is similar to PROP-G [START_REF] Qiu | Towards location-aware topology in both unstructured and structured p2p systems[END_REF], and only considers the logical neighbours of a node n for a possible swap (PROP-G (SA), Fig. 7). In all these approaches simulated annealing is used, as indicated by (SA).

1: In round(r) do
2:   S ← k_net random nodes from N
3:   q ← argmin_{u∈S} ∆(n, u)
4:   conditional_swap(n, q, ∆_limit(r))
Fig. 6. Randomized (SA)

1: In round(r) do
2:   S ← Γ_data^k_data(n)
3:   q ← argmin_{u∈S} ∆(n, u)
4:   conditional_swap(n, q, 0)
Fig. 7. PROP-G

1: In round(r) do
2:   q ← argmin_{u ∈ Γ_net^k_net(n)} ∆(n, u)
3:   conditional_swap(n, q, ∆_limit(r))
Fig. 8. Slicing (SA)

1: In round(r) do
2:   p ← random node from Γ_data^k_data(n)
3:   S ← Γ_net^(k_net/2)(p) ∪ Γ_net^(k_net/2)(n)
4:   q ← argmin_{u∈S} ∆(n, u)
5:   conditional_swap(n, q, 0)
Fig. 9. Data-Net & Net

1: In round(r) do
2:   p ← random node from Γ_data^k_data(n)
3:   S ← Γ_net^(k_net/2)(p) ∪ (k_net/2 random nodes from N \ Γ_net^(k_net/2)(p))
4:   q ← argmin_{u∈S} ∆(n, u)
5:   conditional_swap(n, q, 0)
Fig. 10. Data-Net & R
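For completeness, the annealing threshold and the proximity metric defined above translate directly into code; the following Python fragment reuses the role/host convention of the earlier sketch and is only an illustrative reading of the definitions, not the PeerSim code used in the evaluation.

import networkx as nx

def delta_limit(r, K0, r_max):
    """Simulated-annealing swap threshold, decreasing linearly to 0 over r_max rounds."""
    return max(0.0, K0 * (1.0 - r / r_max))

def proximity(G_net, G_data, role, host):
    """Average physical length of overlay links, normalized by the network diameter."""
    lengths = [nx.shortest_path_length(G_net, host[a], host[b]) for a, b in G_data.edges()]
    diameter = nx.diameter(G_net)        # equal to N/2 for a ring of N nodes
    return (sum(lengths) / len(lengths)) / diameter

# With simulated annealing, the conditional swap of the earlier sketch becomes:
#   if delta(...) < delta_limit(r, K0=len(G_net), r_max=len(G_net) ** 0.6): swap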
The only difference between the above four approaches is the way in which the swap candidates are chosen. To provide further comparison points, we also experimented with some combinations of the above approaches. Fig. 9 (termed Data-Net & Net) is a combination of Fluidify (basic) with Slicing (SA). Fig. 10 (termed Data-Net & R) is a combination of Fluidify (basic) with Randomized (SA). We also tried a final variant, Combination-R, in which, once the system has converged using Fluidify (basic) (no more changes are detected for a pre-determined number of rounds), nodes look for random swap candidates as in Fig. 6.

Results

All the results (Figs. 11-18 and Tables 3-5) are computed with Peersim [START_REF] Montresor | Peersim: A scalable p2p simulator[END_REF] and are averaged over 30 experiments. The source code is available at http://armi.in/resmi/fluidify.zip. When shown, confidence intervals are computed at the 95% confidence level using a Student t-distribution.

Evaluation of Fluidify (SA)

The results obtained by Fluidify (SA) and the three baselines on a ring/ring topology are given in Table 3 and charted in Figs. 12 and 13. In addition, Fig. 11 illustrates some of the rounds that Fluidify (SA) and Slicing (SA) perform. Fig. 12 shows that Fluidify clearly outperforms the other three approaches in terms of proximity over a wide range of network sizes. Fig. 13 charts the convergence time against network size in log-log scale for Fluidify and its competitors. Interestingly, all approaches show a polynomial convergence time. This shows the scalability of Fluidify even for very large networks. If we turn to Table 3, it is evident that as the network size increases, the time taken for the system to converge also increases. Fluidify and Slicing (SA) converge in roughly the same number of rounds, with Slicing (SA) converging slightly faster than Fluidify. Randomized (SA) takes much longer (almost twice as many rounds). PROP-G (SA) converges faster than all other approaches. The better convergence of PROP-G (SA) and Slicing (SA) can be explained by the fact that both approaches run out of interesting swap candidates more rapidly than Fluidify. It is important to note that all approaches are calibrated to consider the same number of candidates per round. This suggests that PROP-G (SA) and Slicing (SA) run out of potential swap candidates because they consider candidates of lesser quality, rather than because they consider more candidates faster. Fig. 14 shows how the proximity varies with rounds for our default system settings. The initial average link distance was around N/4, where N is the network size; this is expected as the input graphs are randomly generated. So the initial proximity was approximately equal to 50%. Fluidify was able to bring the proximity down from 50% to 0.7%. A steep decrease in proximity was observed in the initial rounds; later it decreases at a slower pace and finally settles at a stable value. Randomized (SA) and PROP-G (SA) were able to perform well in the initial stages, but later on their gain in proximity decreases. Slicing (SA) is unable to gain much in proximity from the start and converges to a proximity value of 8.4%. The cumulative distribution of nodes based on the average link distance in a converged system is depicted in Fig. 15 for the different approaches. It is interesting to see that, in the case of Fluidify, nearly 83% of the nodes have an average link distance of less than 10 and 37% have an average link distance of 1.
For Slicing (SA), in contrast, even after convergence many nodes still have an average link distance greater than 200. Slicing (SA) clearly fails to improve the system beyond a limit. The maximum, minimum and mean gain obtained per swap in the default system setting using Fluidify are shown in Fig. 16(a). As the simulation progresses, the maximum, minimum and mean values of the cost function per swap in each round get closer and closer, and finally become equal on convergence. The maximum gain per swap (negative cost) is obtained in the initial rounds of the simulation. The maximum value of the cost function is expected to gradually decrease from a value less than or equal to 3200, which is the initial threshold value for simulated annealing, down to 0. The variation of the cost function for Randomized (SA) (Fig. 16(b)) and PROP-G (SA) shows a similar behaviour, where the system progresses with a very small gain for a long period of time. The most interesting behaviour is that of Slicing (SA) (Fig. 16(c)), which does not benefit much from the use of simulated annealing. The maximum gain that can be obtained per swap is 32 and the maximum negative gain is 2. This is because only the physically close nodes of a given node are considered for a swap, and the swap is done with the best possible candidate. The message cost per round per node is equal to the amount of data that a node exchanges with another node. In our approach the nodes exchange their logical index and their logical neighbourhood. We assume that each index value amounts to 1 unit of data, so the message cost is 1 + k_data, which is 17 in the default case. The communication overhead in the network per cycle is equal to the average number of swaps occurring per round times the amount of data exchanged per swap. A single message costs 17 units, so a swap costs 34 units. In the default setting, an average of 2819 swaps happen per round, which amounts to around 95846 units of data per round. All four approaches that we presented here are generic and can be used for any topology. Table 4 shows how the four approaches fare for various topologies in the default setting. Fluidify clearly outperforms the other approaches.

Effects of variants

Fig. 17 shows that, compared to its variants Fluidify (basic), Combination-R, Data-Net & Net (Fig. 9) and Data-Net & R (Fig. 10), Fluidify (SA) is far ahead in quality of convergence. Here also we consider a ring/ring topology with the default setting. The convergence time taken by Fluidify is slightly higher compared to its variants, as shown in Fig. 18. Table 5 shows how varying the initial threshold value for Fluidify affects its performance. From the table it is clear that as the initial threshold value increases, the proximity that we obtain becomes better and better. With a higher threshold value, more swaps occur, and therefore there is a higher chance of getting closer to the global minimum. The threshold value that gives the best performance is used for all our simulations.

5 Related Work

Fully decentralized systems are being extensively studied by many researchers. Many well-known and widely used P2P systems are unstructured. However, there are several overlay networks in which node locality is taken into account.
Structured P2P overlays, such as CAN [START_REF] Ratnasamy | A scalable contentaddressable network[END_REF], Chord [START_REF] Stoica | Chord: A scalable peer-to-peer lookup service for internet applications[END_REF], Pastry [START_REF] Rowstron | Pastry: Scalable, decentralized object location, and routing for large-scale peer-to-peer systems[END_REF], and Tapestry [START_REF] Zhao | Tapestry: An infrastructure for fault-tolerant wide-area location and routing[END_REF], are designed to enhance searching performance by giving some importance to node placement. But, as pointed out in [START_REF] Ratnasamy | Can heterogeneity make gnutella scalable?[END_REF], structured designs are likely to be less resilient, because it is hard to maintain the structure required for routing to function efficiently when hosts are joining and leaving at a high rate. Chord, in its original design, does not consider network proximity at all. Some modifications to CAN, Pastry, and Tapestry have been made to provide locality to some extent. However, these improvements come at the expense of a significantly more expensive overlay maintenance protocol. One of the general approaches used to bridge the gap between physical and overlay node proximity is landmark clustering. Ratnasamy et al. [START_REF] Ratnasamy | Topologically-aware overlay construction and server selection[END_REF] use landmark clustering in an approach to build a topology-aware CAN [START_REF] Ratnasamy | A scalable contentaddressable network[END_REF] overlay network. Although the efficiency can be improved, this solution needs extra deployment of landmarks and produces hotspots in the underlying network when the overlay is heterogeneous and large. Others [START_REF] Zhang | A construction of localityaware overlay network: moverlay and its performance[END_REF] [29] have proposed methods to fine-tune landmark clustering for overlay creation. The main disadvantage of landmark systems is that a reliable infrastructure is needed to offer these landmarks at high availability. Application-layer multicast algorithms construct a special overlay network that exploits network proximity. The protocols they use are often based on a tree or mesh structure. Although they are highly efficient for small overlays, they are not scalable and create hotspots in the network, as a node failure can make the system unstable and difficult to recover. Later, proximity neighbour selection [START_REF] Castro | Topology-aware routing in structured peer-to-peer overlay networks[END_REF] was used to organise and maintain the overlay network, which improved routing speed and load balancing. Waldvogel and Rinaldi [12] [28] propose an overlay network (Mithos) that focuses on reducing routing table sizes. It is somewhat expensive, only very small overlay networks were used in its simulations, and the impact of digression is not considered. Network-aware overlays are used to increase the efficiency of network services such as routing, resource allocation and data dissemination. Works like [START_REF] Matos | Lightweight, efficient, robust epidemic dissemination[END_REF] and [START_REF] Doerr | Epidemic algorithms and processes: From theory to applications[END_REF] combine the robustness of epidemics with the efficiency of structured approaches in order to improve the data dissemination capabilities of the system. Gossip protocols, which are scalable and robust to network dynamics, can achieve efficient data dissemination. Frey et al.
[START_REF] Frey | Heterogeneous gossip[END_REF] use gossip protocols to create a system where nodes dynamically adapt their contribution to the gossip dissemination according to network characteristics such as bandwidth and delay. Kermarrec et al. [START_REF] Giakkoupis | Gossip protocols for renaming and sorting[END_REF] use gossip protocols for renaming and sorting. Here, nodes are given id values and numerical input values; nodes exchange these input values so that in the end the input of rank k is located at the node with id k. The slicing method [START_REF] Pasquet | Autonomous multi-dimensional slicing for large-scale distributed systems[END_REF] [START_REF] Jelasity | Ordered slicing of very large-scale overlay networks[END_REF] has been used for resource allocation: specific attributes of the network (memory, bandwidth, computation power) are taken into account to partition the network into slices. Network-aware overlays can be used in cloud infrastructures [START_REF] Tudoran | Bridging Data in the Clouds: An Environment-Aware System for Geographically Distributed Data Transfers[END_REF] to provide efficient data dissemination. Most works on topology-aware overlays aim at improving a particular service such as routing, resource allocation or data dissemination. What we propose is a generalized approach to overlay creation that gives importance to data placement in the system. It has higher scalability and robustness and a lower maintenance cost compared to other approaches. Our simulated annealing and slicing approach is motivated mainly by the works [START_REF] Pasquet | Autonomous multi-dimensional slicing for large-scale distributed systems[END_REF], [START_REF] Giakkoupis | Gossip protocols for renaming and sorting[END_REF], [START_REF] Jelasity | Ordered slicing of very large-scale overlay networks[END_REF]. But these works concentrated mainly on improving a single network service, while we concentrate on a generalized solution that can significantly improve all network services.

6 Conclusion and Future Work

In this paper, we presented Fluidify, a novel decentralized mechanism for overlay deployment. Fluidify works by exploiting both the logical links of an overlay and the physical topology of its underlying network to progressively align one with the other, thereby maximizing network locality. The proposed approach can be used in combination with any topology construction algorithm. The resulting protocol is generic, efficient and scalable, and can substantially reduce network overheads and latency in overlay-based systems. Simulation results show that in a ring/ring network of 25,600 nodes, Fluidify is able to produce an overlay with links that are on average 94% shorter than those produced by a standard decentralized approach based on slicing. One aspect we would like to explore in the future is to deploy Fluidify in a real system and see how it fares. A thorough analytical study of the behaviour of our approach is also intended.

Fig. 1. Illustration of a randomly connected overlay and a network-aware overlay
Fig. 3. Example of a local minimum in a system with n=10 and d=2
Fig. 11. Illustrating the convergence of Fluidify (SA) & Slicing (SA) on a ring/ring topology. The converged state is on the right. (N = K_0 = 400, k_net = k_data = 16)
Fig. 12. Proximity. Lower is better. Fluidify (SA) clearly outperforms the baselines in terms of deployment quality.
Fig. 13. Convergence time.
All three approaches have a sublinear convergence (≈ 1.237 × |N | 0.589 for Fluidify). Fig. 14 . 14 Fig. 14. Proximity over time (N = K0 = 3200, knet = k data = 16). Fluidify (SA)'s optimization is more aggressive than those of the other baselines. Fig. 15 . 15 Fig. 15. Average link distances in converged state (N = K0 = 3200, knet = k data = 16). Fluidify (SA)'s links are both shorter and more homogeneous. Fig. 16 . 16 Fig.[START_REF] Matos | Lightweight, efficient, robust epidemic dissemination[END_REF]. Variation of the cost function per swap over time. Lower is better. (N = K0 = 3200, knet = k data = 16, note the different scales) Fluidify (SA) shows the highest amplitude of variations, and fully exploits simulated annealing, which is less the case for Randomized (SA), and not at all for slicing. Fig. 17 .Fig. 18 . 1718 Fig. 17. Comparison of different variants of Fluidify -Proximity Table 1 . 1 Notations and Entities n.net physical index of node n n.data logical index of node n dnet distance function to calculate the distance between two nodes in physical space Gnet the physical graph (N, Enet) Table 2 . 2 Parameters of Fluidify knet size of the physical neighborhood explored by Fluidify k data size of the logical neighborhood explored by Fluidify K0 initial threshold value for simulated annealing rmax fade-off period for simulated annealing (# rounds) 1: In round(r) do 2: Table 3 . 3 Performance of Fluidify against various baselines Nodes Proximity(%) Fluid(SA) Slicing(SA) Rand(SA) PROP-G(SA) Fluid(SA) Slicing(SA) Rand(SA) PROP-G(SA) Convergence (rounds) 100 4.06 10.46 7.70 13.88 18.10 17.16 23.80 17.03 200 2.70 10.12 6.27 12.99 28.50 26.33 43.43 25.13 400 1.71 9.76 5.35 12.65 42.50 39.20 85.36 38.06 800 1.26 9.34 4.83 12.14 64.13 58.93 136.76 57.16 1,600 0.86 8.80 4.41 11.57 96.80 90.56 198.03 85.13 3,200 0.69 8.47 3.82 11.31 144.40 138.20 274.80 128.14 6,400 0.51 8.13 3.07 11.27 216.10 203.40 382.10 198.24 12,800 0.46 7.66 2.28 11.01 324.00 292.10 533.67 263.32 25,600 0.43 6.99 1.79 10.02 485.00 418.60 762.13 392.81 Table 4 . 4 Performance on various topologies Approach Physical topology Logical topology Proximity(%) Convergence(#Rounds) Fluidify(SA) torus torus 2.4(±0.05) 162(±2.34) Fluidify(SA) torus ring 2.6(±0.03) 171(±3.6) Fluidify(SA) ring torus 1.8(±0.06) 156(±2.36) Slicing(SA) torus torus 4.5(±0.05) 130(±2.16) Slicing(SA) torus ring 5.2(±0.02) 128(±3.26) Slicing(SA) ring torus 9.5(±0.08) 143(±4.1) Randomized(SA) torus torus 3.82(±0.08) 423(±2.41) Randomized(SA) torus ring 4.05(±0.04) 464(±3.28) Randomized(SA) ring torus 2.7(±0.05) 442(±3.82) PROP-G(SA) torus torus 4.6(±0.05) 132(±2.34) PROP-G(SA) torus ring 5.6(±0.03) 130(±3.6) PROP-G(SA) ring torus 10.1(±0.06) 128(±2.36) 12 Fluidify (SA) Fluidify (basic) 10 Data-Net & Net Data-Net & R Combination-R Proximity ( in %) 4 6 8 2 0 100 200 400 800 1600 3200 6400 12800 25600 Network Size Table 5 . 5 Impact of K0 on Fluidify (SA) K0 Proximity (%) Convergence (rounds) 320 2.4 156 640 1.6 145 1600 1.1 146 3200 0.7 144 argmin x∈S f (x) returns one of the x in S that minimizes f (x). Acknowledgments This work was partially funded by the DeSceNt project granted by the Labex CominLabs excellence laboratory of the French Agence Nationale de la Recherche (ANR-10-LABX-07-01).
01775076
en
[ "shs.sport", "shs.sport.ps" ]
2024/03/05 22:32:18
2013
https://insep.hal.science//hal-01775076/file/Schaal%20et%20al.%20%282013%29%20-%20APNM.pdf
Karine Schaal, Yann Le Meur, François Bieuzen, Odile Petit, Philippe Hellard, Jean-François Toussaint, Christophe Hausswirth (email: [email protected])

Effect of recovery mode on postexercise vagal reactivation in elite synchronized swimmers

Keywords: cryotherapy, female athlete, heart rate variability, maximal exercise, water immersion, active recovery

Introduction

Synchronized swimming is an aesthetic and highly technical judged sport that has become increasingly physically demanding over the past decades. In addition to developing artistic, technical, and acrobatic skills, elite synchronized swimmers must follow a high-volume, high-intensity regimen to optimize strength, flexibility, and aerobic and anaerobic exercise capacity [START_REF] Liang | Ulnar and tibial bending stiffness as an index of bone strength in synchronized swimmers and gymnasts[END_REF][START_REF] Mountjoy | Injuries and medical issues in synchronized Olympic sports[END_REF], because these physiological parameters are directly correlated to performance scores in elite competition settings [START_REF] Yamamura | Physiological characteristics of well-trained synchronized swimmers in relation to performance scores[END_REF]. At the international level, synchronized swimmers usually perform 2 or more training sessions per day, with few days of rest [START_REF] Liang | Ulnar and tibial bending stiffness as an index of bone strength in synchronized swimmers and gymnasts[END_REF]. Hence, in the practical scheduling of so many training hours, the time spans allotted for physical recovery between training sessions can be fairly short. Likewise, during competitions, the various ballet performances are sometimes closely scheduled, and accelerating the postexercise return to resting physiological levels after each performance is particularly important to maintain a consistent level of execution throughout the competition. Heart rate variability (HRV) analysis is an established method used to gauge the extent of autonomic recovery from intense exercise sessions [START_REF] Pichot | Relation between heart rate variability and training load in middledistance runners[END_REF][START_REF] Pichot | Autonomic adaptations to intensive and overload training periods: a laboratory study[END_REF]. During exercise, the intensity-dependent withdrawal of parasympathetic tone and the increase in sympathetic activity result in an increase in heart rate and a decrease in vagal-related HRV indices [START_REF] Cottin | Heart rate variability during exercise performed below and above ventilatory threshold[END_REF]. After completing a given exercise bout, a rapid decrease in sympathetic activity and the return of parasympathetic heart activity to resting levels suggest a relative systemic recovery from the physiological stress imposed by the workload [START_REF] Seiler | Quantifying training intensity distribution in elite endurance athletes: is there evidence for an "optimal" distribution?[END_REF][START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF].
The amount of time necessary for full parasympathetic reactivation after exercise can be significantly influenced by several factors, including exercise intensity [START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF][START_REF] Stuckey | Autonomic recovery following sprint interval exercise[END_REF]) and cardiorespiratory fitness [START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF][START_REF] Sandercock | Effects of exercise on heart rate variability: inferences from meta-analysis[END_REF]. Acute high-intensity exercise and longer periods of intensified training have been shown to disrupt HRV indices at rest; in turn, these disruptions have been associated with increased fatigue, decreased sleep quality, and decreased performance [START_REF] Haddad | Nocturnal heart rate variability following supramaximal intermittent exercise[END_REF][START_REF] Hynynen | Heart rate variability during night sleep and after awakening in overtrained athletes[END_REF][START_REF] Hynynen | Effects of moderate and heavy endurance exercise on nocturnal HRV[END_REF][START_REF] Garet | Individual interdependence between nocturnal ANS activity and performance in swimmers[END_REF][START_REF] Plews | Heart rate variability in elite triathletes, is variation in variability the key to effective training? A case comparison[END_REF][START_REF] Uusitalo | Heart rate and blood pressure variability during heavy training and overtraining in the female athlete[END_REF]. A number of studies have demonstrated that different recovery techniques used in sports training, such as cold, contrasted, or thermoneutral water immersion, significantly aid parasympathetic reactivation after intense exercise in welltrained athletes (Buchheit et al. 2009a;Al Haddad et al. 2010a;[START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF]. Cold exposure and water immersion have both been found to suppress cardiac sympathetic activity and augment parasympathetic modulation as a result of arterial baroreflex activation [START_REF] Pump | Cardiovascular effects of static carotid baroreceptor stimulation during water immersion in humans[END_REF]. Cold stimulation triggers peripheral vasoconstriction, leading to a shift in blood volume toward the core [START_REF] Shibahara | The responses of skin blood flow, mean arterial pressure and R-R interval induced by cold stimulation with cold wind and ice water[END_REF]). This increased central volume leads to increased cardiac output, stroke volume, and arterial pressure. This activates the arterial high-pressure and cardiopulmonary low-pressure baroreceptors, which are responsible for reducing sympathetic nerve activity while shifting autonomic heart rate control toward a parasympathetic dominance [START_REF] Pump | Cardiovascular effects of static carotid baroreceptor stimulation during water immersion in humans[END_REF]. During water immersion, regardless of temperature, the hydrostatic pressure exerted on the body provides a mechanical stimulus, which contributes to the central shift of blood volume and, hence, results in baroreflex activation [START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF]Al Haddad et al. 2010a). 
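The vagal-related HRV indices discussed here (typically the RMSSD of successive R-R intervals and the spectral power in the high-frequency band) can be computed with a few lines of code. The following Python sketch is only an illustration of the standard definitions; the 4 Hz resampling rate, the Welch segment length and the synthetic example data are our own assumptions, not the analysis pipeline of this study.

import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def vagal_indices(rr_ms):
    """RMSSD (ms) and HF power (ms^2, 0.15-0.40 Hz) from a series of R-R intervals."""
    rr = np.asarray(rr_ms, dtype=float)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))            # time-domain vagal index

    # Resample the tachogram at 4 Hz before spectral analysis.
    t = np.cumsum(rr) / 1000.0                             # beat times (s)
    fs = 4.0
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = interp1d(t, rr, kind="cubic")(t_uniform)
    f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=min(256, len(rr_uniform)))
    hf_band = (f >= 0.15) & (f <= 0.40)
    hf_power = np.trapz(pxx[hf_band], f[hf_band])          # high-frequency spectral power
    return rmssd, hf_power

# Example with a synthetic 5-min recording around 60 beats/min:
rr_example = 1000 + 50 * np.random.randn(300)
print(vagal_indices(rr_example))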
Although the effect of cold exposure on postexercise parasympathetic reactivation is usually studied using cold-water immersion protocols, the effect of dry-air whole-body cryostimulation (WBC; -110 °C) on postexercise autonomic recovery has not been documented, even though this recovery method is increasingly used in high-level sports [START_REF] Hausswirth | Effects of whole-body cryotherapy vs. far-infrared vs. passive modalities on recovery from exercise-induced muscle damage in highly-trained runners[END_REF][START_REF] Pournot | Time-course of changes in inflammatory response after whole-body cryotherapy multi exposures following severe exercise[END_REF]. Further, the influence of WBC on parameters of metabolic recovery and subsequent maximal exercise capacity has not been evaluated. In light of the necessity to optimize short-term recovery in a sport as seldom studied as synchronized swimming, we aimed to describe the autonomic and metabolic responses of elite synchronized swimmers to 4 different recovery methods used in sports training between 2 closely scheduled competition ballets: dry-air WBC, contrast-water therapy (CWT), active recovery (ACT), and a passive (control) condition (PAS). We compared the effect of these 4 protocols on postexercise parasympathetic reactivation, on metabolic and subjective parameters of recovery, and on subsequent maximal aerobic and anaerobic work capacity. We hypothesized that WBC would yield a significantly larger increase in vagal-related HRV indices than all other recovery modes, and that WBC, ACT, and CWT would all be associated with favorable changes in metabolic parameters of recovery, compared with PAS. Materials and methods Subjects Eleven highly trained elite female athletes, composing the French synchronized swimming team, took part in the study 6 months before the 2011 World Swimming Championships in Shanghai, China. The mean age of the swimmers was 20.3 ± 1.8 years, mean height was 170.1 ± 4.8 cm, and mean body weight was 61.1 ± 4.6 kg. All participants were familiar with exercise testing and the site of the experiment, as all procedures took place at their usual training site. The study conformed to the ethical guidelines of the Declaration of Helsinki and was approved by a local human research ethics committee (Ile de France XI, France; Ref. 200978) before it began. Every athlete on the team volunteered to participate and provided written informed consent. Experimental design The testing period began after 10 consecutive days of recovery, following a 2-week training retreat, so that subjects were fit and well rested. After an initial 400-m freestyle swimming time trial to evaluate maximal oxygen consumption (V ˙ O 2max400 ) and its related swim velocity (vV ˙ O 2max400 ) [START_REF] Lavoie | Applied physiology of swimming[END_REF][START_REF] Carré | Use of oxygen uptake recovery curve to predict peak oxygen uptake in upper body exercise[END_REF], each subject completed 4 test sessions, spread evenly over 5 weeks, during the precompetition period (end of January to early March). Each test session exposed the swimmers to a different recovery method between 2 identical maximal exercise bouts (their technical competition ballet, lasting approximately 3 min), performed 70 min apart.
This recovery duration was chosen to mimic the time constraints that can occur during a competition for athletes taking part in multiple events (duo, solo, team, combination), and was the minimum duration that allowed us to perform all of the necessary tests and recovery interventions between the 2 ballets. All swimmers were tested on the same days each week, following a complete day of rest, and at the same time of day. The order in which each subject underwent the 4 different recovery protocols was randomized to reduce any possible effect of improved fitness or habituation to the protocol over these 5 weeks, and the number of subjects being tested with each recovery method was uniform throughout the experimentation period. Because all swimmers participating in the study were on a regular oral contraceptive treatment, we did not take menstrual cycle phases into consideration when planning the test sessions. Subjects were only allowed to consume water ad libitum during the test sessions. The volume of water consumed was calculated by weighing the water bottles with a food scale, and the subjects' weight fluctuations from the beginning to the end of each session were recorded. Exercise testing Figure 1 outlines the procedures followed during each test session. After their usual precompetition warm-up (45 min) and mental preparation (5 min), each subject performed their technical ballet routine individually (B1, 3 min in duration, with phases of apnea adding up to 64% of the total time). The second ballet (B2, identical routine) was preceded by a shorter warm-up period (15 min). At the end of each ballet, after performing the final underwater figure, the subjects remained under water as they swam to the edge of the pool, where they started breathing directly into the mask of the Cosmed K4b 2 portable telemetry system (Rome, Italy); this method has been previously validated [START_REF] Hausswirth | The Cosmed K4 telemetry system as an accurate device for oxygen uptake measurements during exercise[END_REF]. Ventilatory and gas-exchange variables were collected over 4 breathing cycles, and peak oxygen consumption (V ˙ O 2peak ) was calculated using backward extrapolation of the O 2 recovery curve [START_REF] Lavoie | Applied physiology of swimming[END_REF][START_REF] Carré | Use of oxygen uptake recovery curve to predict peak oxygen uptake in upper body exercise[END_REF][START_REF] Montpetit | V ˙ O 2 peak during free swimming using the backward extrapolation of the O 2 recovery curve[END_REF]. Immediately before and after each ballet, subjects provided their global rating of perceived exertion (RPE g ) and muscle aches (RPE m ) on a 6- to 20-point Borg scale [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]. To encourage maximal effort during each ballet, the coaches scored performance using 4 different 20-point scales commonly used in their practices to assess the swimmers' level of performance: precision, height, energy-displacement, and homogeneity. Fig. 1. Schema of the testing protocol followed during each session, with the order of events listed from top to bottom, then left to right. B1, B2, simulated competition ballets; ACT, active recovery; CWT, contrast-water therapy; HRV, heart rate variability; La -, lactate concentration; PAS, passive condition; RPE g , global rating of perceived exertion; RPE m , rating of perceived exertion for muscle aches; V ˙ O 2 , oxygen consumption; WBC, whole-body cryostimulation.
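The backward-extrapolation method cited above amounts to fitting the very first breaths of recovery and extrapolating the curve back to the instant exercise stops. Purely as an illustration of that idea, the sketch below fits a semi-logarithmic line to the early recovery V ˙ O 2 values and reads off the intercept at t = 0; the 20 s fitting window, the function name, and the example values are assumptions made here, not details taken from the Cosmed system or the cited references.

```python
# Hedged sketch of a backward-extrapolation estimate of VO2peak: fit ln(VO2)
# against time over the first seconds of recovery and extrapolate to t = 0
# (end of exercise). The window length and example breaths are illustrative.
import numpy as np

def vo2peak_backward_extrapolation(t_s, vo2, window_s=20.0):
    t = np.asarray(t_s, dtype=float)
    v = np.asarray(vo2, dtype=float)
    mask = t <= window_s                       # keep only the earliest breaths
    slope, intercept = np.polyfit(t[mask], np.log(v[mask]), deg=1)
    return float(np.exp(intercept))            # extrapolated VO2 at t = 0

# Example with made-up recovery breaths (mL.kg-1.min-1) at mid-breath times (s)
times = [2.5, 6.0, 9.8, 14.1, 18.7]
vo2 = [58.0, 55.5, 52.8, 50.1, 47.9]
print(round(vo2peak_backward_extrapolation(times, vo2), 1))
```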
Recovery protocols Subjects performed 1 of the following 4 recovery methods during each testing session. Passive condition For the PAS protocol, subjects were instructed to lie quietly in a lounge chair placed near the pool for 30 min. Contrast-water therapy For the CWT protocol, subjects walked to the hydrotherapy unit, located in the same building as the pool, where they performed alternating cycles of cold and warm water immersion: 1 min standing in a cold-water bath (9° C) and 1 min sitting in a warm-water bath (39° C), immersed to the midsternal level for both, and beginning and ending with cold immersion (8 cold immersions and 7 warm immersions). Active recovery For the ACT protocol, 15 min of freestyle swimming at a speed corresponding to 40% of V ˙ O 2max (feedback on speed was given every 50 m by a coach) was followed by 15 min of light-intensity technical synchronized swimming exercises in the water (RPE g remaining below 13 on the Borg scale [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF][START_REF] Toubekis | Swimming performance after passive and active recovery of various durations[END_REF]. Whole-body cryostimulation For the WBC protocol, sessions were administered onsite in a specially built WBC unit (Zimmer MedizinSysteme GmbH, Ulm, Germany), consisting of 3 consecutive temperature-controlled chambers (-10, -60, and -110 °C). Preliminary medical clearance was obtained to ensure that none of the subjects had any contraindications to performing the WBC protocol. All swimmers performed at least 1 session of WBC before the study began, and were therefore familiar with the procedure. After all PostB1 measurements were obtained, subjects thoroughly dried their body and hair and dressed warmly before being taken by car (�800 m) to the WBC unit; this was done to eliminate the walk from the pool in cold weather. They then changed into a dry swimsuit, were fitted with a mask covering the nose and mouth to protect the airways, and donned an ear band, gloves, socks, and slippers to protect the extremities. Subjects passed through the 2 warmer rooms to reach the therapy room, where they remained for exactly 3 min, walking and moving their arms slowly while being supervised by a technician. At the end of each recovery period, just before B2, subjects marked, on a 10-cm visual analog scale, their perception of the recovery period's effectiveness, relative to their readiness to perform the second ballet (0, completely ineffective; 10, extremely effective). HRV analysis Subjects were fitted with heart rate monitors (Suunto Oy, Vantaa, Finland), and heart rate was collected in R-R interval mode for 6 min while the subject sat quietly in an isolated room at 4 distinct time points for each protocol: upon arrival before beginning their initial warm-up (PreB1); 6 min after the end of B1 (PostB1); 55 min after the end of B1, following the recovery protocol (PreB2); and 6 min after the end of B2 (Post B2). Data from the last 256 s of each sampling period were used for analysis, to allow the heart rate to stabilize once subjects assumed the seated position. To avoid influencing heart rate recovery after exercise, no particular breathing frequency was imposed [START_REF] Buchheit | Parasympathetic reactivation after repeated sprint exercise[END_REF]. For all HRV samples, it was subsequently verified that the respiration rate was always in the high-frequency (HF) range (>0.15-0.50 Hz). 
Data were transferred to a computer, using the Suunto Training Manager software, and were analyzed using specialized HRV analysis software (Nevrokard aHRV, Izola, Slovenia). Data were visually inspected to identify artifacts and occasional ectopic beats, which were removed manually. All HRV data were processed by the same individual. The time-varying HRV indices kept for analysis were the root-mean-square difference of successive normal R-R intervals (rMSSD) and the short-term R-R interval variability index SD1, the standard deviation around the minor axis of the Poincaré scatterplot obtained by plotting the length of the nth R-R interval against the length of its preceding (n-1)th R-R interval. Mean heart rate (HR mean ) was also analyzed. Power spectral density analysis was then performed using a fast Fourier transform with a nonparametric algorithm. The power densities of the high-frequency (HF; >0.15-0.50 Hz, reflecting parasympathetic activity) and low-frequency (LF; 0.04-0.15 Hz, reflecting mixed sympathetic and parasympathetic activity) components of the spectrum were calculated by integrating the power density within their respective frequency bands. The ratio of LF to HF, an indicator of relative sympathovagal balance, was retained for analysis. Metabolite analysis Blood samples were obtained from the finger tip, using a portable device (Lactate Pro LT-1710, Arkray, Kyoto, Japan), to measure blood lactate concentration ([La -] b ) at 6 time points: before B1 (PreB1); 4 min after B1 (PostB1 4′); before B2 (PreB2); and 4, 10, and 15 min after B2 (PostB2 4′, PostB2 10′, and PostB2 15′, respectively). For the 400-m swim time trial, [La -] b was measured 4 min after completing the swim. Statistical analysis Because this study involved a relatively small number of subjects and the data obtained did not always meet the assumptions of normality, as assessed visually by normal probability plot and by the Shapiro-Wilk test, nonparametric statistical analyses were conducted. To evaluate the effect of each exercise bout and recovery, within-protocol time differences in metabolic variables and HRV indices were evaluated using Friedman's ANOVA. If a time effect was significant, Wilcoxon's test was applied in a post hoc analysis to determine which specific time points differed from the others. Between-protocol comparisons were performed, using Wilcoxon's test, to determine whether there were any differences during the recovery period between the ACT, CWT, WBC, and PAS protocols. To isolate the effect of the recovery intervention, this analysis was performed on the difference between PreB2 and PostB1, represented as Δvariable rec (i.e., ΔHF rec , Δ[La -] b rec , ΔRPE m rec , etc.). Results are reported as the means ± SD for the parameters with normal distribution (ΔV ˙ O 2peak , Δ[La -] b rec ); otherwise, results are expressed as the median and the values of the lower and upper quartiles. Spearman's coefficient of rank correlation was used to determine whether there were correlations between physiological, subjective, and autonomic variables of recovery and repeated maximal exercise. All analyses were performed using Statistica (version 7.1, StatSoft France). The level of significance was set at p < 0.05 for all analyses. Results Exercise description Mean completion time for the 400 m swimming time trial was 308 ± 8 s, with a mean V ˙ O 2max400 of 62.1 ± 3.0 mL•kg -1 •min -1 .
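For readers unfamiliar with these indices, the short sketch below shows one common way to derive rMSSD, SD1, HR mean and the LF and HF band powers from an artifact-free R-R series; it is only an illustration of the quantities defined above, not the Nevrokard implementation, and the 4 Hz resampling rate and Welch periodogram are assumptions.

```python
# Illustrative HRV computation from a clean series of R-R intervals (in ms).
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_indices(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    rmssd = np.sqrt(np.mean(d ** 2))           # root-mean-square of successive differences
    sd1 = np.std(d, ddof=1) / np.sqrt(2.0)     # SD around the Poincare minor axis
    hr_mean = 60000.0 / np.mean(rr)            # beats per minute

    # Resample the tachogram evenly (4 Hz assumed) before spectral analysis.
    t = np.cumsum(rr) / 1000.0
    fs = 4.0
    ti = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr, kind="cubic")(ti)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))

    def band_power(lo, hi):                    # integrate the spectrum over a band (ms^2)
        sel = (f >= lo) & (f < hi)
        return float(np.trapz(pxx[sel], f[sel]))

    lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.50)
    return {"rMSSD": rmssd, "SD1": sd1, "HRmean": hr_mean, "LF": lf, "HF": hf, "LF/HF": lf / hf}
```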
During the 5 weeks of experimentation, no changes in any baseline or exercise variables were detected according to the week of testing. There were no differences in the amount of water consumed (885 ± 388 mL) or in the subjects' weight fluctuations (0.3 ± 0.4 kg) over the course of the study. Because physiological parameters measured at PostB1 (V ˙ O 2peak , [La -] b ) were similar for all protocols, values were pooled for comparison with 400 m time-trial values. During the ballet (182 s), V ˙ O 2peak was slightly but not significantly lower than V ˙ O 2max400 (60.4 ± 2.0 mL•kg -1 •min -1 , p = 0.08), and [La -] b was significantly higher at PostB1 4′ than at the end of the 400 m time trial (11.0 ± 1.9 mmol•L -1 vs. 7.6 ± 2.0 mmol•L -1 , respectively, p = 0.004). Heart rate variability The impact of exercise on all HRV indices and HR mean was similar across all protocols, and there were no significant differences in these variables at PostB1 and PostB2. rMSSD, SD1, HF, and LF all decreased significantly with exercise, whereas HR mean increased and recovered to resting levels during the 70-min recovery period (SD1 and HR mean are shown in Fig. 2). LF/HF was the only index that did not show clear trends with exercise. WBC was the only protocol to yield a significant increase in vagal-related HRV indices and a decrease in HR mean at PreB2, compared with resting values at PreB1 (rMSSD, 178% ± 69% of the PreB1 value, p = 0.012; SD1, 240% ± 144%, p = 0.009; and HR mean , 91% ± 14%, p = 0.047). ACT was the only protocol to result in a higher HR mean at PreB2 than at PreB1 (p = 0.01). Although some HRV indices appeared not to fully return to PreB1 levels with ACT, this difference did not reach significance (SD1, p = 0.09; HF, p = 0.11). Despite these differences, all HRV indices and HR mean values were similar at PostB2 and PostB1, and there were no differences between protocols. When the evolution of HRV indices during the recovery period was analyzed, WBC was the only protocol to yield a significantly larger change in HR mean and all HRV indices (ΔHF rec , ΔLF rec , and ΔLF/HF rec are displayed in Fig. 3). CWT did not differ from PAS on any of the HRV indices analyzed. Metabolic parameters WBC and ACT were the only protocols to yield significantly higher V ˙ O 2peak values at B2 than at B1, with gains of 5.4% ± 3.2% and 3.4% ± 2.9%, respectively (Fig. 4). Every swimmer reached a higher V ˙ O 2peak during B2 after using WBC in recovery, and all except 1 swimmer did so with ACT. In the CWT protocol, V ˙ O 2peak did not differ significantly between B1 and B2. PAS was the only protocol to yield a significant decrease in V ˙ O 2peak (-3.6% ± 2.1%); this was observed in every subject. [La -] b reached similar levels at PostB1 4′ and PostB2 4′, and there was no significant protocol effect (all protocols combined, 11.2 ± 2.6 mmol•L -1 and 10.4 ± 2.1 mmol•L -1 , respectively; p = 0.14). Δ[La -] b rec was significantly larger, however, during ACT than during PAS (p = 0.04), and nearly so for WBC compared with PAS (p = 0.06), as shown in Fig. 5. Subjective ratings RPE m values peaked at PostB1 and PostB2, and returned to resting levels at the end of recovery (PreB2) for all protocols except WBC, during which RPE m remained slightly but significantly elevated compared with PreB1 (13.4 ± 2.0 vs. 11.9 ± 2.5). PAS was the only protocol associated with significantly higher RPE m at PostB2 than at PostB1 (17.4 ± 1.4 vs. 16.5 ± 1.3); all other protocols showed similar RPE m after each ballet.
The exercise and recovery trends for RPE g were similar to those for RPE m , except that RPE g values were significantly lower at PreB2 than at PreB1 after the shorter second warm-up for all protocols except WBC (p = 0.07). PAS was also associated with significantly lower ratings on the visual analog scale (lower perceived effectiveness of recovery) than all other protocols (PAS, 4.7 ± 2.1; CWT, 6.5 ± 1.6; ACT, 7.5 ± 1.1; WBC, 6.2 ± 1.3; p < 0.05). Subjective performance scores Mean scores obtained for B1 and B2 are shown in Table 1. Precision, energy-displacement, homogeneity, and overall scores were slightly but significantly lower at B2 than at B1, with a mean decrease of one-tenth of 1 point on the 20-point scale (-0.5% change). There was no significant effect of protocol on the difference in scores between B1 and B2. Table 1. Difference in the scores obtained between B1 and B2 for each protocol (scores: height, precision, energy-displacement, homogeneity, and overall score). Note: Overall score, mean of all 4 protocols. B1, B2, simulated competition ballets; ACT, active recovery; CWT, contrast-water therapy; PAS, passive condition; WBC, whole-body cryostimulation.
Discussion This study is the first to describe the physiological response of highly trained elite synchronized swimmers to 2 full-length competition ballets separated by a short recovery period. To our knowledge, this is the first time that WBC was investigated as a recovery technique between 2 closely scheduled maximal exercise bouts. The 2 most important findings of this study are that using WBC shortly after a full-length ballet resulted in a strong parasympathetic reactivation in elite swimmers, yielding 2- to 4-fold increases in vagal-related HRV indices, compared with pre-exercise values, within only 1 h; and that WBC exerted a significant influence on the metabolic parameters of recovery and subsequent exercise, with a larger clearance of plasma lactate and an increase in maximal aerobic work output during the second ballet. The latter was only matched by the effects of active recovery. Effect of repeated exercise and recovery on HRV Executing an elite-level synchronized swimming ballet demands a very large physical effort, requiring maximal aerobic work production with large anaerobic contributions. Rates of aerobic energy production at the end of each ballet were similar to those obtained after the 400-m swimming time trial; however, the anaerobic contribution was significantly greater at the end of each ballet than after the 400-m swimming time trial, a test considered to be an accurate evaluation of maximal aerobic capacity in swimmers [START_REF] Lavoie | Applied physiology of swimming[END_REF]. Furthermore, RPE g and RPE m values obtained at the end of B1 and B2 were very high, indicating that the swimmers perceived these performances to be very difficult [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]. Research on the metabolic toll of synchronized swimming ballets is scarce, especially at the elite level. Peak [La -] b values (mean, 11.0 ± 1.9 mmol•L -1 ) obtained at the end of each ballet surpassed those reported more than a decade ago by [START_REF] Yamamura | Physiological loads in the team technical and free routines of synchronized swimmers[END_REF], who found that [La -] b in Japan National finalists only reached 4.7 ± 1.1 mmol•L -1 , or 46% ± 11% of their previously measured peak [La -] b . In addition, we found that the swimmers' high V ˙ O 2max (62.1 ± 3.0 mL•kg -1 •min -1 ) supports their highly trained status and conveys that this sport has evolved over the years to become more physically demanding. Indeed, reported aerobic capacities of elite synchronized swimmers have evolved from 44 ± 4 mL•kg -1 •min -1 in 1980 [START_REF] Poole | Physiological characteristics of elite synchronized swimmers[END_REF] to 53 ± 5 mL•kg -1 •min -1 in 1999 [START_REF] Chatard | Performance and physiological responses to a 5-week synchronized swimming technical training programme in humans[END_REF].
In accordance with metabolic and subjective indicators of maximal work output during the ballets, the large decrease in all vagal-related HRV indices observed from 6 to 10 min after the end of each ballet reflected a significant reduction in the parasympathetic modulation of heart rate after the completion of intense exercise [START_REF] Cottin | Heart rate variability during exercise performed below and above ventilatory threshold[END_REF]. Even though we did not measure heart rate response during exercise, previous data obtained during competitions from the French elite synchronized swimming team revealed that the swimmers reached their maximal heart rate over the course of the routine [START_REF] Hausswirth | Proceedings of the 14th Annual Congress of the ECSS. 24-27[END_REF]. In spite of the large exercise-induced shift in autonomic heart rate control, complete recovery of all vagal-related HRV indices occurred within 1 hour of the end of the first ballet, attesting to the high fitness of these athletes [START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF]. Even though the time span required for full postexercise parasympathetic reactivation (to pre-exercise levels) is known to increase with the relative intensity of exercise [START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF] (Stuckey et al. 2012), greater cardiorespiratory fitness is associated with a significantly faster recovery of resting vagal tone [START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF][START_REF] Sandercock | Effects of exercise on heart rate variability: inferences from meta-analysis[END_REF]. In our swimmers, HRV indices obtained after the completion of B2 indicated that the sympathetic influence over heart rate control remaining 10 min after exercise was no greater than that 10 min after B1, suggesting that the autonomic response to repeating the same exercise in this short time span is similar. This finding, together with the similar [La -] b , RPE g , and RPE m , and the equal or greater V ˙ O 2peak attained for all protocols (except V ˙ O 2peak after PAS), demonstrates that with WBC, ACT, and CWT, the swimmers were able to repeat the same maximal workload with similar autonomic, metabolic, and subjective responses.
Effect of specific recovery techniques on HRV The augmentation of pre-exercise HRV values observed in synchronized swimmers after WBC reflects a strong parasympathetic reactivation at the cardiac level, with values largely surpassing those measured at rest as early as 60 min after maximal exercise. To our knowledge, this was the first study to describe such a large increase in vagal-related HRV indices from any cold exposure recovery technique used after exercise, with mean increases ranging from 78% for rMSSD and 140% for SD1 to 296% for HF. [START_REF] Westerlund | Heart rate variability in women exposed to very cold air (-110 °C) during whole-body cryotherapy[END_REF] investigated the HRV response to WBC in the resting state in nonathletic women, and reported that 2 min of WBC (-110 °C) augmented HRV indices of parasympathetic activity by 53% for rMSSD and 47% for SD1. In the case of highly trained swimmers, we showed that this significant effect of WBC occurred even when the treatment was performed shortly after maximal exercise, in a context of heightened cardiac sympathetic activity and suppressed vagal tone. Other conventional cryostimulation methods used in the context of recovery from physical training, such as shoulder-deep cold-water immersion (5 min in 11 to 14 °C water), have been found to significantly aid postexercise parasympathetic reactivation to pre-exercise levels (Al Haddad et al. 2010a; Buchheit et al. 2009a; [START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF]). The additional increase in vagal modulation observed with WBC, compared with typical cold-water immersion methods, could potentially be explained by the fact that the very low temperatures in WBC imposed a larger thermal stress than immersion in 14 °C water. However, studies quantifying relative thermal stress and changes in core temperature resulting from various modes of cryostimulation are lacking. Additionally, during WBC, the entire body is exposed to cold, including the face and neck. It has been shown that the direct effect of cold on the head alone, using face immersion in cold water (without breath-holding), aids parasympathetic reactivation significantly after exercise (Al Haddad et al. 2010b). This increase in vagal tone is thought to be principally mediated by trigeminal brain stem pathways, rather than by the arterial baroreflex [START_REF] Khurana | The cold face test: a non-baroreflex mediated test of cardiac vagal function[END_REF]. Further, [START_REF] Eckberg | Trigeminal-baroreceptor reflex interactions modulate human cardiac vagal efferent activity[END_REF] showed that stimulating trigeminal cutaneous receptors using cold-water face immersion augments the magnitude of the vagal response induced by arterial baroreflex activation alone. CWT did not stand out as being particularly beneficial to autonomic recovery in our subjects, compared with PAS; both protocols resulted in a return to resting HRV indices. The short intermittent exposure to cold during CWT might have provided insufficient thermal stress to significantly boost parasympathetic activity beyond pre-exercise levels. [START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF] reported that cold-water immersion exerts a larger effect on parasympathetic reactivation than contrast-water therapy, suggesting that the stronger cold stimulus augments the effectiveness of water immersion in this respect. Further, the swimmers' habituation to spending several hours each day immersed in a thermoneutral pool for their training could have reduced the impact of this recovery method in this specific population.
The workload performed during ACT, despite its low intensity, could be expected to slow autonomic recovery after maximal exercise, because even a light exercise load would maintain a relative intensity-dependent degree of parasympathetic withdrawal and sympathetic nerve activity. However, in the case of highly aerobically trained individuals, our results demonstrate that a subsequent low-intensity active recovery does not significantly hinder parasympathetic reactivation. Five minutes after the completion of active recovery, all HRV parameters (except LF) had returned to near-baseline levels; only HR mean remained slightly elevated. These findings remain specific to active recovery performed in water; the effect of immersion, together with the horizontal body position during swimming, probably aided parasympathetic reactivation (Buchheit et al. 2009a[START_REF] Buchheit | Effect of in-versus out-of-water recovery on repeated swimming sprint performance[END_REF]. In this respect, the effects of the active recovery used here may not be comparable to active recovery protocols performed on land. Finally, during PAS, complete recovery to pre-exercise HRV indices occurred at least twice as fast as that observed by [START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF], who showed that HRV indices returned to pre-exercise levels after 130 min of passive recovery. This discrepancy could be attributed to a higher level of cardiorespiratory fitness [START_REF] Seiler | Autonomic recovery after exercise in trained athletes: intensity and duration effects[END_REF]), but could also be attributed to different body positions. Our subjects adopted a supine position for 30 min, whereas the subjects of [START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF] remained in a seated position for 10 min. Buchheit et al. (2009b) demonstrated that lying supine led to a faster heart rate recovery after exercise than sitting. Parasympathetic reactivation and metabolic parameters of recovery and repeated exercise Despite the large systemic sympathetic response necessary to support maximal work production rates during exercise [START_REF] Brooks | Balance of carbohydrate and lipid utilization during exercise: the "crossover" concept[END_REF], we showed that in these highly trained athletes, autonomic function at the cardiac level recovered fully within 70 min, and that poor parasympathetic reactivation after maximal exercise was therefore not a limiting factor to recovery and subsequent exercise capacity. No link was found between the extent of parasympathetic reactivation and any metabolic or subjective parameters of recovery or subsequent exercise capacity. Specifically, no differences in parasympathetic reactivation were found between PAS, ACT, and CWT; however, aerobic work output decreased significantly from B1 to B2 after PAS (-3.6% ± 2.1%), was maintained with CWT (+1.0% ± 3.4%), and increased with ACT (+3.4% ± 2.9%). To our knowledge, only a few studies have investigated whether a larger postexercise parasympathetic reactivation, occurring as a result of cold-water immersion, is associated with improved performance in the short term (Buchheit et al. 2009a;[START_REF] Stanley | The effect of post-exercise hydrotherapy on subsequent exercise performance and heart rate variability[END_REF]). 
Even though in both studies cold-water immersion enhanced parasympathetic reactivation and yielded greater subjective ratings of recovery than passive conditions, no associations were found between HRV recovery and absolute performance measures, suggesting that, in the case of healthy highly trained individuals, this aspect of recovery may not affect subsequent exercise capacity. From a medical safety perspective, however, optimizing parasympathetic reactivation might help reduce the occurrence of dangerous arrhythmias, which tend to occur more readily in situations with a strong sympathetic background, such as immediately after intense exercise in the heat [START_REF] Billman | Aerobic exercise conditioning: a nonpharmacological antiarrhythmic intervention[END_REF]. To our knowledge, this was the first study to describe the effect of WBC as a short-term (i.e., �1 h) recovery aid. It showed that elite athletes respond favorably to this technique in the context of repeated maximal aerobic exercise. WBC yielded results nearly similar to ACT on the parameters of metabolic recovery, aiding blood lactate clearance, and enabling a greater aerobic work output during B2, compared with PAS. It is well established that, compared with passive conditions, active recovery increases the clearance of lactate and other metabolic byproducts, such as hydrogen ions, ammonium ions, and inorganic phosphate [START_REF] Banfi | Whole-body cryotherapy in athletes[END_REF][START_REF] Fairchild | Glycogen synthesis in muscle fibers during active recovery from intense exercise[END_REF]). An increased blood lactate clearance has also been reported after cold exposure, compared with passive recovery [START_REF] Heyman | Effects of four recovery methods on repeated maximal rock climbing performance[END_REF]. Because the removal of these ions from the bloodstream is associated with a greater ability to repeat maximal aerobic and anaerobic performance in the short term [START_REF] Neric | Comparison of swim recovery and muscle stimulation on lactate removal after sprint swimming[END_REF], our findings support the reduced effectiveness of passive recovery in this regard, confirming active recovery and cold exposure to be more appropriate options between 2 synchronized swimming performances. In spite of the desirable effects of WBC, ACT, and CWT on the metabolic aspects of recovery and subsequent exercise capacity, these benefits were not accompanied by differences in performance scores attributed by the coaches. Even though the technical and aesthetic merit of a given ballet performance does not solely depend on maximal exercise capacity (they also depend on several aesthetic and technical characteristics), an increase in aerobic work output during the second ballet could have favorably impacted the scores attributed by coaches (such as the increased height of compulsory figures, the amplitude of movement, and the overall energy level and homogeneity of the performance). In this respect, [START_REF] Yamamura | Physiological characteristics of well-trained synchronized swimmers in relation to performance scores[END_REF] found a significant correlation between performance scores and the physiological attributes of synchronized swimmers, including aerobic and anaerobic work capacity and the amplitude of leg movements. Thus, in spite of the sensitivity and reproducibility limitations inherent to subjective evaluations of performance in judged sports, [START_REF] Damisch | Olympic medals as fruits of comparison? 
Assimilation and contrast in sequential performance judgments[END_REF][START_REF] Ansorge | Bias in judging women's gymnastics induced by expectations of within-team order[END_REF][START_REF] Ste-Marie | Prior processing effects on gymnastic judging[END_REF], the physiological evidence pointing to an optimized recovery with WBC and ACT encourages further investigations of sports events in which a 3% to 5% increase in V ˙ O 2peak , as was seen here, is usually associated with improved absolute performance (such as during a cycling or running time trial [START_REF] Saunders | Physiological measures tracking seasonal changes in peak running speed[END_REF]). Conclusion This study described the autonomic and metabolic responses of elite synchronized swimmers to 2 full-length competition ballets, and their adaptations to 4 different protocols performed during the recovery period separating them. We brought forth 2 novel findings, supporting the effectiveness of WBC in the context of postexercise autonomic and metabolic recovery and repeated exercise performance. First, we demonstrated that a single session of WBC performed shortly after maximal exercise exerted a strong influence on parasympathetic reactivation in the context of a heightened sympathetic background. Second, our results showed that, similar to ACT, WBC is associated with improved metabolic recovery after maximal exercise. Future research should try to determine whether the regular use of WBC as a recovery technique confers additional benefits over longer periods of time, because fatigue accumulation during phases of intensified training has been associated with changes in the autonomic modulation of heart rate in athletes [START_REF] Pichot | Autonomic adaptations to intensive and overload training periods: a laboratory study[END_REF]. Fig. 2. Evolution of SD1 (A) and mean heart rate (HR mean ) (B) for each ballet and recovery period. Values are means ± SE. Box plots represent the median and interquartile range (IQR, Q25-Q75), and error bars are maximal and minimal observations within 1.5 × IQR. Squares represent maximum and minimum observations above or below 1.5 × IQR. *, Significantly different from PreB1 (p < 0.05); †, significantly different from PAS (p < 0.05). In all conditions, PostB1 and PostB2 values were different from PreB1 and PreB2 values, so symbols were omitted for clarity. From light grey to black: PAS, CWT, WBC, ACT. B1, B2, simulated competition ballets; ACT, active recovery; CWT, contrast-water therapy; PAS, passive condition; SD1, short-term variability of successive R-R intervals; WBC, whole-body cryostimulation. Fig. 3. Change in power spectral density variables from PostB1 to PreB2: ΔHF (A), ΔLF (B), and LF/HF (C). Box plots represent the median and interquartile range (IQR, Q25-Q75), and error bars are maximal and minimal observations within 1.5 × IQR. Squares represent maximum and minimum observations above or below 1.5 × IQR. *, Significantly different from PAS (p < 0.05). HF, high frequency; LF, low frequency; ACT, active recovery; CWT, contrast-water therapy; PAS, passive condition; WBC, whole-body cryostimulation. Fig. 4. ΔV ˙ O 2peak , change in peak oxygen consumption between B2 and B1. B1, B2, simulated competition ballets; ACT, active recovery; CWT, contrast-water therapy; PAS, passive condition; WBC, whole-body cryostimulation. *, Significantly different from baseline (p < 0.05); †, significantly different from PAS (p < 0.05). Fig. 5.
Mean difference in blood lactate concentration (Δ[La -] b rec ) for each protocol. ACT, active recovery; CWT, contrast-water therapy; PAS, passive condition; WBC, whole-body cryostimulation. †, Significantly different from PAS (p < 0.05). The difference between WBC and PAS narrowly missed significance (p = 0.059). Acknowledgements The authors of this study thank the French synchronized swimming athletes for participating in this study, as well as their respective coaches (Charlotte Massardier, Pascale Meyet, Anne Capron, and Julie Fabre) for their valuable assistance in scheduling this protocol within the athletes' training schedules. They would also like to thank Dr. Philippe Le Van, Jean-Robert Filliard, Marielle Volondat, and Philippe Van de Cauter for supervising the cryostimulation sessions and, finally, INSEP and the French National Swimming Federation for their support in this study.
01653893
en
[ "sdu.ocean" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01653893/file/Schmidt%20et%20al%202017%20HAL.pdf
Natascha Schmidt email: [email protected] Delphine Thibault email: [email protected] François Galgani email: [email protected] Andrea Paluselli email: [email protected] Richard Sempéré email: [email protected] Occurrence of microplastics in surface waters of the Gulf of Lion (NW Mediterranean Sea) Keywords: Marine litter, microplastic, Mediterranean Sea, Gulf of Lion, Marseille Bay Between 2014 and 2016 a total of 43 microplastic samples were collected at six sampling stations in the eastern section of the Gulf of Lion (located in the northwestern Mediterranean Sea), as well as upstream of the Rhône River. Microplastics were found in every sample with highly variable concentrations and masses. Concentrations ranged from 6 10 3 items km -2 to 1 10 6 items km -2 (with an average of 112 10 3 items km -2 ), and mass ranged from 0.30 g km -2 to 1018 g km -2 DW (mean 61.92 ± 178.03 g km -2 ). The samples with the highest and lowest microplastic count originate both from the Bay of Marseille. For the Bay of Marseille, it is estimated that the total microplastic load consist of 519 10 3 -101 10 6 items weighing 0.07 -118 kg. Estimations for daily microplastic transport by the Northern Current and the Rhône River, two important hydrologic features of the northwestern Mediterranean Sea, range from 0.18 to 86.46 t and from 0.20 to 21.32 kg, respectively. Particles < 1 mm 2 clearly dominated sampling stations in the Northern Current, the Rhône River and its plume (52, 53 and 61 %, respectively), suggesting a long exposure time in the environment. Items between 1 mm 2 and 5 mm 2 in size were the most abundant microplastics in Marseille Bay (55 %), which suggests coastal pollution sources or the removal of smaller particles from surface waters e.g. by ballasting owing to the presence of epibionts. Introduction Plastic and its chemical compounds have played an important role in the Anthropocene and might threaten human health [START_REF] Kobrosly | Prenatal Phthalate Exposures and Neurobehavioral Development Scores in Boys and Girls at 6-10 Years of Age[END_REF][START_REF] Tranfo | Urinary phthalate monoesters concentration in couples with infertility problems[END_REF][START_REF] Sathyanarayana | Phthalates and Children's Health[END_REF][START_REF] Heudorf | Phthalates: Toxicology and Exposure[END_REF] and both terrestrial [START_REF] Zhao | Microscopic anthropogenic litter in terrestrial birds in Shanghai, China: Not only plastics but also natural fibers[END_REF][START_REF] Lwanga | Microplastics in the Terrestrial Ecosystem: for Lumbricus terrestris (Oligochaeta, Lumbricidae)[END_REF][START_REF] Oehlmann | A critical analysis of the biological impacts of plasticizers on wildlife[END_REF] and marine environments [START_REF] Przybylinska | Environmental Contamination with Phthalates and its Impact on Living Organisms[END_REF]Van Franeker & Law, 2015;[START_REF] Sigler | The Effects of Plastic Pollution on Aquatic Wildlife: Current Situations and Future Solutions[END_REF]. In 2014, 311 million tons of plastic were produced worldwide, 15 % of which were consumed in Europe (PlasticsEurope, 2015). The degradation of large plastic items into microplastics (≤ 5 mm) in the ocean is a slow and heterogeneous process, varying with respect to the quality, shape and size of the plastic. This process is driven by mechanical forcing (e.g., waves), salt water, and UV radiation [START_REF] Ter Halle | Understanding the Fragmentation Pattern of Marine Plastic Debris[END_REF]. 
Because of its small size, micro debris can easily be ingested (e.g., [START_REF] Desforges | Ingestion of Microplastics by Zooplankton in the Northeast Pacific Ocean[END_REF][START_REF] Neves | Ingestion of microplastics by commercial fish off the Portuguese coast[END_REF]. Approximately 270 10 3 tons of plastic are suspected to float in the world's oceans [START_REF] Eriksen | Plastic Pollution in the World's Oceans: More than 5 Trillion Plastic Pieces Weighing over 250,000 Tons Afloat at Sea[END_REF]. Estimates for floating microplastic loads range from 7 10 3 to 35 10 3 tons for global open-ocean surface waters [START_REF] Cózar | Plastic debris in the open ocean[END_REF] or from 93 10 3 to 236 10 3 tons depending on the model used [START_REF] Van Sebille | A global inventory of small floating plastic debris[END_REF]. Plastic accounts for 60 to 80 % of all marine litter, followed in quantity by glass and metal [START_REF] Unep | Marine Litter: A Global Challenge[END_REF]. About 370 10 9 plastic particles or 1,455 tons have been estimated to be floating on the surface of the Mediterranean Sea [START_REF] Ruiz-Orejón | Floating plastic debris in the Central and Western Mediterranean Sea[END_REF]. Other estimates range from 756 to 2,969 tons [START_REF] Cózar | Plastic Accumulation in the Mediterranean Sea[END_REF] and from 874 to 2,576 tons [START_REF] Suaria | The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters[END_REF]. The Mediterranean Sea is a semi-enclosed basin subject to significant anthropogenic pressures (e.g., The MerMex group, 2011; [START_REF] Blanfuné | Response of rocky shore communities to anthropogenic pressures in Albania (Mediterranean Sea): Ecological status assessment through the CARLIT method[END_REF][START_REF] Hassoun | Acidification of the Mediterranean Sea from anthropogenic carbon penetration[END_REF][START_REF] Casale | Annual survival probabilities of juvenile loggerhead sea turtles indicate high anthropogenic impact on Mediterranean populations[END_REF]. Marine debris, including microplastics, are a particularly important concern in this region [START_REF] Deudero | Mediterranean marine biodiversity under threat: Reviewing influence of marine litter on species[END_REF][START_REF] Cózar | Plastic Accumulation in the Mediterranean Sea[END_REF][START_REF] Ioakeimidis | A comparative study of marine litter on the seafloor of coastal areas in the Eastern Mediterranean and Black Seas[END_REF][START_REF] Faure | An evaluation of surface micro-and mesoplastic pollution in pelagic ecosystems of the Western Mediterranean Sea[END_REF][START_REF] Pedrotti | Changes in the Floating Plastic Pollution of the Mediterranean Sea in Relation to the Distance to land[END_REF]. Concerns about marine litter in the Mediterranean Sea were first expressed in 1976 when the Barcelona Convention was signed with the goal of preventing and abating marine and coastal pollution (UNEP, 2009). In subsequent years, studies have been undertaken to better understand pollution sources and trajectories, through approaches as modeling the transport of floating marine debris [START_REF] Mansui | Modelling the transport and accumulation of floating marine debris in the Mediterranean basin[END_REF]. 
However, knowledge of the spatial and temporal microplastic distribution remains limited [START_REF] Ruiz-Orejón | Floating plastic debris in the Central and Western Mediterranean Sea[END_REF][START_REF] Suaria | The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters[END_REF][START_REF] Cózar | Plastic Accumulation in the Mediterranean Sea[END_REF]. Microplastic contents are highly variable, although the sea surface circulation seems to be the main driver of the distribution of floating marine litter, whatever its size. Currents affect time-dependent movements that remain difficult to predict, and cause several non-trivial Lagrangian mechanisms [START_REF] Zambianchi | Marine litter in the Mediterranean Sea, An Oceanographic perspective[END_REF]. In semi-enclosed seas, such as the Mediterranean Sea, aggregation patterns are not permanent and high variability is observed at a small scale [START_REF] Suaria | The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters[END_REF]. Wind-induced effects on floating material and Stokes drift velocities require further investigation, for example through the refinement of regional models. Nevertheless, some available scenarios could be hypothesized, with possible retention areas in the northwestern Mediterranean and the Tyrrhenian sub-basins [START_REF] Poullain | Surface Geostrophic Circulation of the Mediterranean Sea Derived from Drifter and Satellite Altimeter Data[END_REF][START_REF] Mansui | Modelling the transport and accumulation of floating marine debris in the Mediterranean basin[END_REF]. The Gulf of Lion is in the northwestern sector of the Mediterranean Sea, and its hydrodynamics are influenced by the shallow water depths of the shelf, wind regimes (Mistral and Marin), the Northern Current (NC), and freshwater inputs from the Rhône River [START_REF] Gatti | The Rhone river dilution zone present in the northeastern shelf of the Gulf of Lion in December 2003[END_REF][START_REF] Fraysse | Intrusion of Rhone River diluted water into the Bay of Marseille: Generation processes and impacts on ecosystem functioning[END_REF]. The NC has a high seasonal variability: while a decrease in intensity is observed in summer, it becomes faster, deeper and narrower in winter [START_REF] Millot | Mesoscale and seasonal variabilities of the circulation in the western Mediterranean[END_REF]. Intrusion of the NC onto the shelf of the Gulf of Lion has been observed [START_REF] Ross | Impact of an intrusion by the Northern Current on the biogeochemistry in the eastern Gulf of Lion, NW Mediterranean[END_REF]Barrier et al., 2016 and references therein). This productive shelf is also highly exploited for commercial fishing [START_REF] Bănaru | Trophic structure in the Gulf of Lions marine ecosystem (north-western Mediterranean Sea) and fishing impacts[END_REF] and the coastal area is strongly influenced by tourism activities. Given this area's great economic, touristic and environmental significance, monitoring threats, such as pollution sources, is essential. Therefore, the primary goal of this study was to provide insight into the temporal and spatial distribution of microplastics in the eastern sector of the Gulf of Lion. Furthermore, we wanted to examine relationships between microplastic size distributions and possible pollution sources and transportation routes.
Materials and Methods Following the framework of the Particule-MERMEX and PLASTOX projects, microplastic debris were collected at different times between February 2014 and April 2016 (Table S1) in three distinct areas with specific hydrodynamic characteristics (Figure 1) within the eastern sector of the Gulf of Lion (northwestern Mediterranean Sea). The first area is located 40 km offshore at the eastern part (station #1, also called 'Antares site') and is within the direct influence of the Northern Current, which runs east to west along the shelf break over 2,475 m of water [START_REF] Martini | Bacteria as part of bioluminescence emission at the deep ANTARES station (North-Western Mediterranean Sea) during a one-year survey[END_REF]. The second area includes the Bay of Marseille (stations #2, 3 and 4), which is significantly influenced by a population of approximately 1 million inhabitants and by the daily volume of about 250 10 3 m 3 of waste waters released from the Marseille-Cortiou wastewater treatment plant (WWTP) [START_REF] Savriama | Impact of sewage pollution on two species of sea urchins in the Mediterranean Sea (Cortiou, France): Radial asymmetry as a bioindicator of stress[END_REF][START_REF] Tedetti | Fluorescence properties of dissolved organic matter in coastal Mediterranean waters influenced by a municipal sewage effluent (Bay of Marseilles, France)[END_REF]. To the west, the third study area is the downstream part of the Rhône River (station #6, Arles, 48 km from the river mouth) and within the dilution plume (station #5, about 2.5 km from the mouth) [START_REF] Sempéré | Carbon inputs of the Rhône River to the Mediterranean Sea: Biogeochemical implications[END_REF]. Sampling dates, GPS coordinates, microplastic concentration, mean wind speed and wind direction are provided in the supplementary data (Table S1) along with information on precipitation. Surface current speeds and directions were extracted from the Mars 3D model (http://marc.ifremer.fr). Microplastic samples were collected using a Manta net (0.50 m x 0.15 m opening) mounted with a 780 µm mesh size and towed horizontally at the surface. Ten samples from March and April 2016 were collected (in Marseille at stations 3 and 4) with a 330 µm mesh size (Suppl. Table 1). Sampling was only conducted under low swell conditions (< 1 m). The net was towed for 20 minutes at an average speed of 2.5 knots approximately 50 m behind the research vessel. It was towed at a slight angle to avoid disturbances caused by the boat's wake. Samples from the Rhône River (station #6) were collected from a fixed location on the dock of the river. Sampled superficies at this station were calculated by comparing the flow rate during sampling with reference flow rates and river speeds. Lower-limit river speeds were used for estimates, since river speeds tend to be slower near the dock. The net was carefully rinsed and the content of the cod-end was poured into a 1 L glass bottle, preserved with a buffered seawater formalin solution (final concentration 5 %), and kept in cold and dark conditions until further analysis. Samples were then sieved (mesh size 125 µm), and rinsed with ultrapure water (ISO 3696). Plastic debris were picked out with tweezers under a dissecting microscope. Fibers were not taken into account due to the high risk of contamination. 
No Fourier Transform Infrared Spectroscopy (FTIR) analysis was performed to verify the nature of the items, so despite all efforts to maximize result reliability, it cannot be excluded that some non-plastic items were counted as microplastics. The number, size and shape of each item were determined using a ZooScan © (HYDROPTIC SARL). Each item was placed on the screen of the ZooScan without any water. Surface area measurements in pixels were obtained using the ImageJ software and then converted into mm 2 and the Equivalent Spherical Diameter (ESD). Plastic items ≤ 5 mm were considered. All microplastics from each sample were then weighed (Mettler AE 240, reliability ± 0.1 mg). Microplastic abundance (items km -2 ) and dry weight (g km -2 ) were calculated for each sample using the towing distance and the net opening surface. Analysis of variance (one-way and two-way ANOVA) with a 0.05 level of significance was performed to assess whether the microplastic abundance and size distribution varied with space (stations) and time. The Tukey test was used whenever significant differences were detected. All statistical analyses were performed using R version 3.3.2. Results and Discussion Microplastic abundance Microplastic abundance ranged from 6 10 3 to 1 10 6 (mean 96 10 3 ) items km -2 in the Marseille Bay area, from 33 10 3 to 400 10 3 (mean 113 10 3 ) items km -2 in the Rhône River plume, from 7 10 3 to 69 10 3 (mean 34 10 3 ) items km -2 in the river itself, and from 9 10 3 to 916 10 3 (mean 212 10 3 ) items km -2 offshore (Fig. 2, top). The highest microplastic concentration (1 10 6 items km -2 ) was observed at station #2 (Marseille Bay area). The day this sample was collected was characterized by calm conditions with no noteworthy surface currents near the station. In contrast, the other two stations on the coast of Marseille, stations #3 and #4, showed very low particle concentrations (averages 20 10 3 and 10 10 3 items km -2 , respectively). While a comparison between these two stations and station #2 is difficult because the samples were collected in different years (2016 vs. 2014), some hypotheses can still be considered. [START_REF] Goldstein | Scales of Spatial Heterogeneity of Plastic Marine Debris in the Northeast Pacific Ocean[END_REF] reported that a high spatial heterogeneity in microplastic concentrations could be found not only at a large scale but also at a smaller scale, for samples taken at distances of 10 km from one another. Heterogeneous spatial debris distribution can be the result of currents, wave- and wind-driven turbulence, river inputs or hydrodynamic features such as upwelling, downwelling, gyres or fronts (e.g., [START_REF] Kukulka | The effect of wind mixing on the vertical distribution of buoyant plastic debris[END_REF][START_REF] Suaria | Floating debris in the Mediterranean Sea[END_REF][START_REF] Collignon | Neustonic microplastic and zooplankton in the North Western Mediterranean Sea[END_REF]. More generally, high concentrations of microplastics, especially small fragments, are found in coastal waters because of the proximity of densely populated areas [START_REF] Pedrotti | Changes in the Floating Plastic Pollution of the Mediterranean Sea in Relation to the Distance to land[END_REF] and continental inputs from the atmosphere or rivers [START_REF] Collignon | Neustonic microplastic and zooplankton in the North Western Mediterranean Sea[END_REF].
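As an illustration of the conversions described above, the sketch below turns ImageJ pixel areas into mm 2 and ESD, and normalizes counts and masses by the surface area swept by the net (towing distance times the horizontal net opening). The 2400 dpi scan resolution, the 0.50 m opening used as the swept width, and the example values are assumptions made for illustration, not settings taken from this study.

```python
# Hedged sketch of the ZooScan/Manta-net normalizations described above.
import math

def pixel_area_to_mm2(area_px, dpi=2400):          # dpi is an assumed scan resolution
    mm_per_px = 25.4 / dpi
    return area_px * mm_per_px ** 2

def esd_mm(area_mm2):
    # Equivalent Spherical Diameter of the circle with the same projected area
    return 2.0 * math.sqrt(area_mm2 / math.pi)

def areal_concentration(n_items, mass_g, tow_length_m, net_width_m=0.50):
    swept_km2 = (tow_length_m * net_width_m) / 1e6  # m^2 -> km^2
    return n_items / swept_km2, mass_g / swept_km2

# A 20 min tow at ~2.5 knots covers roughly 1540 m
items_km2, g_km2 = areal_concentration(n_items=25, mass_g=0.012, tow_length_m=1540.0)
print(f"{items_km2:.0f} items km-2, {g_km2:.1f} g km-2")
```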
Point-source pollution could also play an important role in the Bay of Marseille, where the fierce northwestern Mistral wind can transport litter from city streets into coastal waters. Another possible source of microplastics in the Bay of Marseille is the local sewage facility (Cortiou), where treated wastewater enters the sea in the southeastern part of the city. On March 17, 2016, a slight surface current coming from Cortiou at a speed of approximately 0.5 m s -1 entered the area of stations #3 and 4. The microplastic concentrations observed that day were the highest ever found at station #4 (15 10 3 items km -2 ) and the second highest for station #3 (27 10 3 items km -2 ). Interestingly, microplastic abundance was always higher at station #3 than at station #4, in spite of their geographical proximity (p < 0.05). Our median concentration (31 10 3 items km -2 ) was about one third of the mean value, highlighting potential surges in microplastic presence, possibly linked to climatic and hydrodynamic events. Hydrodynamic processes influencing microplastic distribution include, for example, vertical mixing, eddies and anticyclonic gyres; the latter are unsteady formations in the Mediterranean Sea [START_REF] Pedrotti | Changes in the Floating Plastic Pollution of the Mediterranean Sea in Relation to the Distance to land[END_REF], but could lead to episodic increases in regional microplastic abundances. Additionally, in our study area, there is the Northern Current, which varies greatly in intensity, depth, and position [START_REF] Millot | Mesoscale and seasonal variabilities of the circulation in the western Mediterranean[END_REF]. Data collected at station #1 showed temporal variability, with concentrations of microplastics being significantly higher on March 10, 2014 (p < 0.05), when the Northern Current was fast and narrow, with maximum speeds of approximately 0.9 m s -1 . However, triplicated trawls exhibited a range of microplastic abundances from 103 10 3 to 916 10 3 items km -2 on this sampling date, implying that a ninefold difference in abundances can be observed in the same sampling area within two hours. This further highlights the strong temporal variability observed for microplastic concentrations. Overall, no seasonal differences were detected (p > 0.05), but the low number of observations limits the strength of any comparison. [START_REF] Goldstein | Scales of Spatial Heterogeneity of Plastic Marine Debris in the Northeast Pacific Ocean[END_REF] observed seasonal heterogeneity at a much larger scale in the northeastern Pacific Ocean between summer 2009 and fall 2010. Floating debris carried by the NC could be transported to the Balearic Islands, where models have calculated high beaching probabilities [START_REF] Mansui | Modelling the transport and accumulation of floating marine debris in the Mediterranean basin[END_REF], or to the seafloor, which is known to be a (micro-)plastic sink [START_REF] Claessens | Occurrence and distribution of microplastics in marine sediments along the Belgian coast[END_REF][START_REF] Ioakeimidis | A comparative study of marine litter on the seafloor of coastal areas in the Eastern Mediterranean and Black Seas[END_REF][START_REF] Woodall | The deep sea is a major sink for microplastic debris[END_REF].
Reasons for microplastic sedimentation include the nature of the plastic material (if its density is higher than that of seawater; [START_REF] Tekman | Marine litter on deep Arctic seafloor continues to increase and spreads to the North at the HAUSGARTEN observatory[END_REF]), the accumulation of biofouling on microplastic surfaces [START_REF] Woodall | The deep sea is a major sink for microplastic debris[END_REF], the incorporation of free microplastics into marine aggregates, or the incorporation of microplastics into fast-sinking faecal pellets after ingestion by zooplankton and fish [START_REF] Cole | Microplastic Ingestion by Zooplankton[END_REF]. The overall average microplastic abundance for our samples was 112 × 10³ items km⁻², which is in the same range as other areas of the northwestern basin, where mean densities have been estimated at 115 × 10³ items km⁻² [START_REF] Collignon | Neustonic microplastic and zooplankton in the North Western Mediterranean Sea[END_REF], 130 × 10³ items km⁻² [START_REF] Faure | An evaluation of surface micro-and mesoplastic pollution in pelagic ecosystems of the Western Mediterranean Sea[END_REF] and 150 × 10³ items km⁻² (De Lucia et al., 2014). Higher amounts were measured for the entire Mediterranean basin (243 × 10³ items km⁻²; [START_REF] Cózar | Plastic Accumulation in the Mediterranean Sea[END_REF]), due to high concentrations in some Mediterranean areas. Densely populated areas such as the semi-enclosed Adriatic Sea and the Levantine Basin were characterized by high densities of 1,050 × 10³ (max: 4,600 × 10³; [START_REF] Suaria | Neustonic microplastics in the Southern Adriatic Sea. Preliminary results[END_REF]) and 1,518 × 10³ (max: 65 × 10⁶; Van der Hal, 2017) items km⁻², respectively. Our results are consistent with previous studies and indicate that the northwestern Mediterranean Sea contains mean microplastic concentrations similar to those of the Atlantic and Pacific Oceans (means: 134 × 10³ and 124 × 10³ items km⁻², respectively; [START_REF] Eriksen | Plastic Pollution in the World's Oceans: More than 5 Trillion Plastic Pieces Weighing over 250,000 Tons Afloat at Sea[END_REF]). It should nevertheless be kept in mind that the Atlantic and Pacific Oceans are also highly heterogeneous, with microplastic accumulation and non-accumulation zones. An example of a heavily contaminated area is the East Asian Seas, where a mean microplastic abundance of 1,720 × 10³ items km⁻² was measured [START_REF] Isobe | East Asian seas: A hot spot of pelagic microplastics[END_REF]. Microplastic abundances in the Rhône River at Arles (station #6; 34 × 10³ ± 19 × 10³ items km⁻²; net size 0.50 m × 0.15 m, mesh size 780 µm) were relatively low, but similar to values reported by De Alencastro (2014) upstream at Chancy (~ 52 × 10³ items km⁻²; net size 0.60 m × 0.18 m, mesh size 300 µm). In comparison, a mean microplastic abundance of 893 × 10³ items km⁻² was found in the Rhine River, a watercourse flowing through highly industrialized areas such as North Rhine-Westphalia (Germany), where many plastic factories are located [START_REF] Mani | Microplastics profile along the Rhine River[END_REF]. Concentrations observed in the Rhône River plume (station #5, up to 400 × 10³ items km⁻²) were higher than in the river itself, suggesting that the Rhône River-sea interface may generate a temporary accumulation zone for debris.
In general, however, the area covered by our six sampling stations is not considered to be a retention area, but can better be described as a "transit area". The size of the Mediterranean basin reduces the potential for formation of permanent gyres as in the Atlantic, Pacific and Indian Oceans, where plastic often concentrates [START_REF] Cózar | Plastic Accumulation in the Mediterranean Sea[END_REF]. At the river mouth, microplastic concentrations were either significantly greater (p < 0.05) or similar to those observed upstream in the Rhône River. Concerning the river plume, we should highlight the similitude in zooplankton composition of two samples collected with the same Manta trawl, first, on the 10/03/14 at the station #1 (NC) and then (18/03/14), at the Rhône River Plume station (station #5). High abundances (> 1,000 individuals per sample) of Velella velella, a free-floating hydrozoan, were observed at both dates (Thibault D. pers. com.), implying a potential intrusion of water masses from the NC onto the shelf. Such intrusions have already been observed before [START_REF] Barrier | Strong intrusions of the Northern Mediterranean Current on the eastern Gulf of Lion: insights from in-situ observations and high resolution numerical modelling[END_REF]. Salinity data from the Mars 3D model support the hypothesis: while the Rhône River plume was extended in all directions on 10/03/14 and the following days, saltier surface waters pushed from the eastern direction into the area from 16/03/14 on and thus, reduced the extension area of the river plume. During the period examined, the velocity of the NC flowing through station #1 was about 0.4 m s -1 (Suppl. Table 1), but currents leaving the main branch in northwestern directions flowed at reduced speeds of about 0.2 m s -1 . At this speed range (0.2-0.4 m s -1 ), water masses could have travelled about 140-275 km in eight days, which is consistent with the straight line distance (120 km) between both stations. However, we would like to point out that these are only indications, since an accurate model would be needed to simulate the exact trajectory of the water masses and microplastics in question. Microplastic weight Microplastic dry weight showed a similar variability, ranging from 0.30 g DW km -2 to 1018 g DW km -2 , with the maximum observed in Marseille Bay (Fig. 2,bottom). An average of 61.92 g DW km -2 (± 178.03 g DW km -2 ) was found in the study area. This value is similar to averages of 60 and 63 g DW km -2 reported for the western part of the northwestern Mediterranean Sea [START_REF] Collignon | Neustonic microplastic and zooplankton in the North Western Mediterranean Sea[END_REF] and the upstream part of the Rhône River (De Alencastro, 2014), respectively. An estimated surface area of 87 km 2 of the Bay of Marseille would provide a total microplastic load of 0.07 to 118 kg (mean 9.94 kg), representing a range of concentrations from 0.5 10 6 to 101 10 6 (mean 8 10 6 ) microplastic pieces in surface waters. For the Rhône River, the flow rate used for calculations varied between 1,150 m 3 s -1 and 1,600 m 3 s -1 during the sampling period. Using minimum and maximum concentration and weight values, we calculated a daily microplastic spill of 0.20 -21.32 kg (dry weight), representing 10 10 6 -40 10 6 items discharged by the Rhône River into the Mediterranean Sea. 
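The discharge figures above, and the Northern Current figures that follow, combine a surface concentration with a water flux. A minimal sketch of that arithmetic is given below, assuming (as stated for the Northern Current estimate below) that the items counted per km² are concentrated in the top 15 cm of the water column. With the concentration and transport ranges quoted in the text, the recipe reproduces the order of magnitude of the Northern Current item fluxes; the river figures quoted above also involve the weight data and possibly other inputs, so the last two lines are purely illustrative.

```python
def daily_item_flux(conc_items_per_km2, transport_m3_per_s, mixing_depth_m=0.15):
    """Convert a surface concentration (items km^-2, assumed mixed over `mixing_depth_m`)
    and a volumetric transport (m^3 s^-1) into a daily item flux."""
    conc_items_per_m3 = conc_items_per_km2 / (mixing_depth_m * 1.0e6)  # 1 km^2 = 1e6 m^2
    return conc_items_per_m3 * transport_m3_per_s * 86400.0            # items per day

# Northern Current: 9e3 - 916e3 items km^-2 and 0.7 - 2 Sv (1 Sv = 1e6 m^3 s^-1)
print(f"{daily_item_flux(9e3, 0.7e6):.1e}")     # ~ 4e9 items/day (lower bound quoted in the text)
print(f"{daily_item_flux(916e3, 2.0e6):.1e}")   # ~ 1e12 items/day (upper bound quoted in the text)

# Rhone River: same recipe with the river concentrations and a 1150 - 1600 m^3 s^-1 discharge
print(f"{daily_item_flux(7e3, 1150.0):.1e}")
print(f"{daily_item_flux(69e3, 1600.0):.1e}")
```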
Similarly, microplastic loads for the Northern Current were calculated using volumetric transport rates of 0.7 Sv [START_REF] Conan | Variability of the Northern Current off Marseilles, western Mediterranean Sea, from February to June 1992[END_REF] and 2 Sv [START_REF] Petrenko | Variability of circulation features in the Gulf of Lion NW Mediterranean Sea. Importance of inertial currents[END_REF] and the minimum and maximum concentration and weight values. This method provided an estimate of daily transport ranging from 0.18 to 86.46 tons (dry weight) of microplastic, representing 4 10 9 to 1 10 12 items. These calculations give minimum ranges, since they are based on the assumption that microplastics concentrate within 15 cm under the surface. Turbulences, especially in rivers, may however transfer microplastics through several meters of the water column. As interesting as they are, these extrapolations should be considered with caution, since microplastic abundances show a high amount of variability and are difficult to predict. Microplastic size distribution The mean size of microplastic was 1.48 ± 0.88 mm, however significant differences (p < 0.01) were observed between samples from the Bay of Marseille (stations # 2-4) and all other sampling stations. For better visualization of the size distribution of our samples, we calculated the equivalent spherical diameter (ESD) of each microplastic particle (Figure 3). A general exponential distribution curve was observed with the smallest items being the most important, except in the Bay of Marseille, where microplastics were more evenly distributed over the size range. The overall size distribution observed in this study closely resembles those observed for the Mediterranean Sea [START_REF] Ruiz-Orejón | Floating plastic debris in the Central and Western Mediterranean Sea[END_REF][START_REF] Cózar | Plastic Accumulation in the Mediterranean Sea[END_REF], open ocean waters [START_REF] Cózar | Plastic debris in the open ocean[END_REF] and the Northeast Pacific Ocean [START_REF] Goldstein | Scales of Spatial Heterogeneity of Plastic Marine Debris in the Northeast Pacific Ocean[END_REF]. Manta nets are the most commonly used sampling device for microplastic sampling in aquatic ecosystems and were also used in this study. The mesh size of the net can influence the size distribution as well as the speed of the tow, as smaller particles avoiding the net can be forced aside from the net opening or large particles can squeeze out through the mesh. This study used mainly a 780 µm mesh sized net and a 330 µm mesh sized net only for ten sampling events at stations # 3 and #4 (Suppl. Table 1). We expected to collect more 0.0-0.4 mm items at both concerned stations (#3 and #4) by using the 330 µm mesh sized net, but microplastics of this size class were observed neither in samples from the 330 µm mesh size, nor in samples from the 780 µm mesh size. No influence on the microplastic size distribution caused by the use of these different mesh sizes was hence observed. This was statistically confirmed by removing all samples collected with the 330 µm net from the dataset and repeating the one-way ANOVA with the following posthoc test and obtaining the same significant results. Size distribution can be an indicator of the source of marine debris and of its distance to the shoreline. 
While [START_REF] Pedrotti | Changes in the Floating Plastic Pollution of the Mediterranean Sea in Relation to the Distance to land[END_REF] observed that small-sized microplastics were more abundant within the first kilometre adjacent to the coastline, [START_REF] Isobe | East Asian seas: A hot spot of pelagic microplastics[END_REF] found that the percentage of larger plastic particles is typically greater in areas close to the pollution source. The surface area distributions of stations #1 (NC) and #6 (Rhône River) clearly resemble each other. Both are dominated by small particles (< 1 mm²: 52 and 53 %, respectively, Figure 4). This size class represented only 27 % for stations in the Bay of Marseille, but 61 % of microplastic particles at the Rhône River plume. The second size class (1-5 mm²) was the most abundant in Marseille Bay (55 %). The largest pieces (> 10 mm²) were poorly represented (< 5 %) at all stations. The size class distributions are likely related to the distance of the collected particles from pollution sources. In the case of station #1 (NC), it is likely that microplastics were transported by the Northern Current and may have originated in regions farther east, such as the Italian coast. At station #6 (Rhône River), the size distribution suggests that the collected microplastics had been in the Rhône River watershed for some time and certainly originated from highly industrialized and/or populated regions higher upstream (e.g., Lyon, with ~ 500,000 inhabitants). The position of the Rhône River plume varies with wind and river flow; therefore, debris will be contributed in variable amounts both by the river itself and by the surrounding coastal areas. Since the smallest particles are most abundant here, it is probable that these microplastics had also been transported by water masses for some time before collection. In the Bay of Marseille (stations #2-4), the dominance of larger particles (1-5 mm²) suggests that the microplastics collected in this area were closer to their source and mainly originate from the urban area. A more efficient removal of the smallest floating particles in this region, via ballasting due to epiphytic growth, could also be a possible explanation [START_REF] Ryan | Does size and buoyancy affect the long-distance transport of floating debris[END_REF].
Conclusions
This study provides additional data on microplastic occurrence in the eastern Gulf of Lion. Our results revealed that surface water microplastic concentrations and size distributions in this area, which is affected by anthropogenic impacts, are consistent with those already published for the western Mediterranean Sea. Significant temporal and spatial heterogeneity was observed for microplastic abundances. Our results confirm that the Rhône River, large cities such as Marseille, and the Northern Current act as sources and/or transportation routes of the microplastics collected in the northwestern basin of the Mediterranean Sea. Since the sampled microplastics are floating, it proved pertinent to study the zooplankton composition of the samples in addition to the currentology data, in order to improve our knowledge of microplastic transport in the sea.
Acknowledgments
This study was conducted as part of the PARTICULE-region PACA, MERMEX/MISTRALS and JPI Oceans PLASTOX projects.
We acknowledge the technical support provided by the Service Atmosphere Mer (SAM), Microscopie et Imagerie (MIM) and Marine Environmental Chemistry (PACEM) MIO platforms. We sincerely thank the captain and crew of N.O. Antedon II and Tethys, as well as Sandrine Ruitton, who kindly allowed sampling time during her diving trips. We thank Maryvonne Henry and Anne Delmont for help with sample collection. Maria Luiza Pedrotti's work group from the LOV laboratory in Villefranche-sur-Mer kindly shared their ZooScan expertise. Thanks to Javier Castro-Jimenez and Vincent Fauvelle for revising an earlier version of the manuscript. The project leading to this publication has received funding from the European FEDER Fund under project 1166-39417. A PhD scholarship for N. Schmidt was provided by the Agence de l'Eau.
01536928
en
[ "phys.mphy", "math" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01536928/file/1706.02793.pdf
Robert Coquereaux email: [email protected] Jean-Bernard Zuber email: [email protected] From orbital measures to Littlewood-Richardson coefficients and hive polytopes Keywords: Horn problem. Honeycombs. Polytopes. SU(n) Littlewood-Richardson coefficients Mathematics Subject Classification 2010: 17B08, 17B10, 22E46, 43A75, 52Bxx The volume of the hive polytope (or polytope of honeycombs) associated with a Littlewood-Richardson coefficient of SU(n), or with a given admissible triple of highest weights, is expressed, in the generic case, in terms of the Fourier transform of a convolution product of orbital measures. Several properties of this function -a function of three non-necessarily integral weights or of three multiplets of real eigenvalues for the associated Horn problem-are already known. In the integral case it can be thought of as a semi-classical approximation of Littlewood-Richardson coefficients. We prove that it may be expressed as a local average of a finite number of such coefficients. We also relate this function to the Littlewood-Richardson polynomials (stretching polynomials) i.e., to the Ehrhart polynomials of the relevant hive polytopes. Several SU(n) examples, for n " 2, 3, . . . , 6, are explicitly worked out. Introduction In a previous paper [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF], the following classical Horn's problem was addressed. For two n by n Hermitian matrices A and B independently and uniformly distributed on their respective unitary coadjoint orbits O α and O β , labelled by their eigenvalues α and β, call ppγ|α, βq the probability distribution function (PDF) of the eigenvalues γ of their sum C " A`B. With no loss of generality, we assume throughout this paper that these eigenvalues are ordered, α 1 ě α 2 ě ¨¨¨ě α n (1) and likewise for β and γ. In plain (probabilistic) terms, p describes the conditional probability of γ, given α and β. The general expression of p was given in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF] in terms of orbital integrals and computed explicitly for low values of n. The aim of the present paper is to study the relations between this function p, and the tensor product multiplicities for irreducible representations (irreps) of the Lie groups Upnq or SUpnq, encoded by the Littlewood-Richardson (LR) coefficients. Our main results are the following. A central role is played by a function J n pα, β; γq proportional to p, times a ratio of Vandermonde determinants, see [START_REF] Coquereaux | On some properties of SU(3) Fusion Coefficients[END_REF]. This J n is identified with the volume of the hive polytope (also called polytope of honeycombs) associated with the triple pα, β; γq, see Proposition 4. It is thus known [START_REF] Heckman | Projections of Orbits and Asymptotic Behavior of Multiplicities for Compact Lie Groups[END_REF] to provide the asymptotic behavior of LR coefficients, for large weights. We find a relation between J n and a sum of LR coefficients over a local, finite, n-dependent, set of weights, which holds true irrespective of the asymptotic limit, see Theorem 1. In particular for SU(3), the sum is trivial and enables one to express the LR coefficient as a piecewise linear function of the weights, see Proposition 5 and Corollary 1. Implications on the stretching polynomial (sometimes called Littlewood-Richardson polynomial) and its coefficients are then investigated. 
The content of this paper is as follows. In sec. 1, we recall some basic facts on the geometric setting and on tensor and hive polytopes. We also collect formulae and results obtained in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF] on the function J n . Section 2 is devoted to the connection between Harish-Chandra's orbital integrals and SUpnq character formulae, to its implication on the relation between J n and LR coefficients (Theorem 1), and to consequences of the latter. In sec. 3, we reexamine the interpretation of J n as the volume of the hive polytope in the generic case (Proposition 4), through the analysis of the asymptotic regime. In the last section (examples), we take n " 2, 3 . . . , 6, consider for each case the expression obtained for J n , give the local relation existing between the latter and LR coefficients (this involves two polynomials, that we call R n and p R n , expressed as characters of SUpnq), and study the corresponding stretching polynomials. Some of the features studied in the main body of this article are finally illustrated in the last subsection where we consider a few specific hive polytopes. Convolution of orbital measures, density function and polytopes 1.Underlying geometrical picture We consider a particular Gelfand pair pUpnq˙H n , Upnqq associated with the group action of the Lie group Upnq on the vector space of n by n Hermitian matrices. This geometrical setup allows one to develop a kind of harmonic analysis where "points" are replaced by coadjoint orbits of Upnq : the Dirac measure (delta function at the point a) is replaced by an orbital measure whose definition will be recalled below, and its Fourier transform, here an orbital transform, is given by the so-called Harish-Chandra orbital function. This theory of integral transforms can also be considered as a generalization of the usual Radon spherical transform (also called Funk transform). Contrarily to Dirac measures, orbital measures are not discrete, since their supports are orbits of the chosen Lie group. Such a measure is described by a probability density function (PDF), which is its Radon-Nikodym derivative with respect to the Lebesgue measure. In Fourier theory one may consider the measure formally defined as a convolution product of Dirac masses: ă δ a ‹ δ b , f ą" ş δ a`b pxqf pxqdx. Here we shall consider, instead, the convolution product of two orbital measures described by the orbital analog of δ a`b pcq, a probability density function labelled by three Upnq orbits of H n . These orbits and that function p may be considered as functions of three hermitian matrices (we shall write it ppC|A, Bq), and this answers a natural question in the context of the classical Horn problem, as mentioned above in the Introduction, see also sec. 1.1.4 below. This was spelled out in paper [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]. Our main concern, here, is the study of the relations that exist between this function p, and the tensor product multiplicities for irreducible representations (irreps) of the Lie groups Upnq or SUpnq, encoded by the Littlewood-Richardson (LR) coefficients N ν λµ . For small values of n the function p can be explicitly calculated; for integral values of its arguments, the related function J n can be considered as a semi-classical approximation of the LR coefficients. 
Orbital measures. For F, a function on the space of orbits, and O_A, the orbit going through A ∈ H_n, one could formally consider the "delta function" ⟨δ_{O_A}, F⟩ = F(O_A), but we shall use test functions defined on H_n instead. The orbital measure m_A, which plays the role of δ_{O_A}, is therefore defined, for any continuous function f on H_n, by

⟨m_A, f⟩ = \int_{U(n)} f(u^* A u)\, du

where the integral is taken with respect to the Haar measure on U(n), i.e., by averaging the function f over a U(n) coadjoint orbit.
Fourier transform of orbital measures. Despite the appearance of the Haar measure on the group U(n) in the definition of m_A, one should notice that this is a measure on the vector space H_n, an abelian group. Being an analog of the Dirac measure, its orbital transform is a complex-valued function \widehat{m}_A(X) on H_n defined by evaluating m_A on the exponential function Y ∈ H_n ↦ exp(i tr(X Y)) ∈ C. Hence we obtain:

\widehat{m}_A(X) = \int_{U(n)} \exp(i\, \mathrm{tr}(X u^* A u))\, du

As this quantity only depends on the respective eigenvalues of X and A, i.e., on the diagonal matrices x = (x_1, x_2, ..., x_n) and α = (α_1, α_2, ..., α_n), it is then standard to rename the previous Fourier transform and consider the following two-variable function, called the Harish-Chandra orbital function:

H(α, i x) = \int_{U(n)} \exp(i\, \mathrm{tr}(x u^* α u))\, du    (2)

The HCIZ integral. The following explicit expression of H was found in [START_REF] Harish-Chandra | Differential Operators on a Semisimple Algebra[END_REF][START_REF] Itzykson | The planar approximation II[END_REF]:

H(α, i x) = sf(n-1)\, \frac{\big(\det e^{i x_i α_j}\big)_{1≤i,j≤n}}{\Delta(i x)\, \Delta(α)}    (3)

where Δ(x) = \prod_{i<j} (x_i − x_j) is the Vandermonde determinant of the x's. Here and in the following we make use of the superfactorial

sf(m) := \prod_{p=1}^{m} p! .    (4)

Convolution of orbital measures. The convolution product of two orbital measures is defined by ⟨m_A ⋆ m_B, f⟩ = ⟨m_A ⊗ m_B, f ∘ s⟩, where (f ∘ s)(a, b) := f(a + b). This orbital analog of δ_{a+b}(c) has a non-discrete support: for A, B ∈ H_n, the support of m_{A,B} = m_A ⋆ m_B is the set of the u A u^* + v B v^* for u, v ∈ U(n). The probability density function p of m_{A,B} is obtained by applying an inverse Fourier transformation to the product of the Fourier transforms (calculated using \widehat{m}_A(X)) of the two measures:

p(γ|α, β) = \frac{1}{(2π)^n} \left(\frac{\Delta(γ)}{sf(n)}\right)^{2} \int_{R^n} d^n x\; \Delta(x)^2\, H(α, i x)\, H(β, i x)\, H(γ, i x)^{*} .    (5)

Notice that p involves three copies of the HCIZ integral and that we wrote it as an integral on R^n, whence the prefactor coming from the Jacobian of the change of variables. We shall see below (formulae extracted from [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]) how to obtain quite explicit formulae for this expression.
On polytopes
In the present context of orbit sums and representation theory, one encounters two kinds of polytopes, not to be confused with one another. On the one hand, given two multiplets α and β, ordered as in (1), we have what may be called the Horn polytope H̃_{αβ}, which is the convex hull of all possible ordered γ's that appear in the sum of the two orbits O_α and O_β. As proved by Knutson and Tao [20], that Horn polytope is identical to the convex set of real solutions of Horn's inequalities, including the inequalities (1), applied to γ. For SU(n), this Horn polytope is (n−1)-dimensional. On the other hand, combinatorial models associate to such a triple (α, β; γ), with γ ∈ H̃_{αβ}, a family of graphical objects that we call generically pictographs.
This family depends on a number pn ´1qpn ´2q{2 of real parameters, subject to linear inequalities, thus defining a d-dimensional polytope r H γ α β , with d ď pn ´1qpn ´2q{2. These two types of polytopes are particularly useful in the discussion of highest weight representations of SUpnq and their tensor product decompositions. Given two highest weight representations V λ and V µ of SUpnq, we look at the decomposition into irreps of V λ b V µ , or of λ b µ, in short, see below sec. 2. Consider a particular space of intertwiners (equivariant morphisms) associated with a certain "branching", i.e., a particular term ν in that decomposition, that we call an admissible triple pλ, µ; νq, see below Definition 3. Such ν's lie in the tensor polytope H λµ inside the weight space. The multiplicity N ν λµ of ν in the tensor product λ b µ is the dimension of the space of intertwiners determined by the admissible triple pλ, µ; νq. As proved in [START_REF] Knutson | The honeycomb model of GL n pCq tensor products I: proof of the saturation conjecture[END_REF], is is also the number of pictographs with integral parameters. It is thus also the number of integral points in the second polytope that we now denote H ν λ µ . These integral points may be conveniently thought of as describing the different "couplings" of the three chosen irreducible representations. Pictographs are of several kinds. All of them have three "sides" but one may distinguish two families: first we have those pictographs with sides labelled by integer partitions (KT-honeycombs [START_REF] Knutson | The honeycomb model of GL n pCq tensor products I: proof of the saturation conjecture[END_REF], KT-hives [START_REF] Knutson | The honeycomb model of GL n pCq tensor products II: Puzzles determine facets of the Littlewood-Richardson cone[END_REF]), then we have those pictographs with sides labelled by highest weight components of the chosen irreps (BZ-triangles [START_REF] Berenstein | Triple multiplicities for sl(r `1) and the spectrum of the external algebra in the adjoint representation[END_REF], O-blades [25], isometric honeycombs3 ). For convenience, we refer to H ν λ µ as the "hive polytope", or also "the polytope of honeycombs". As mentioned above, for SUpnq, and for an admissible triple pλ, µ; νq, the dimension of the hive polytope is pn ´1qpn ´2q{2: this may be taken as a definition of a "generic triple", but see below Lemma 1 for a more precise characterization. The cartesian equations for the boundary hyperplanes have integral coefficients, the hive polytope is therefore a rational polytope. All the hive polytopes that we consider in this article are "integral hive polytopes" in the terminology of [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF], however the corners of all such polytopes (usually called "vertices") are not always integral points, therefore an "integral hive polytope" is not necessarily an integral polytope in the usual sense: the convex hull of its integral points is itself a polytope, but there are cases where the latter is strictly included in the former. We shall see an example of this situation in sec. 4.4.2. We shall return later to these polytopes and to the counting functions of their integral points, in relation with stretched Littlewood-Richardson coefficients, see sec. 3. 1.3 Some formulae and results from paper [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. 
Probability distribution functions[END_REF]
1.3.1 Determination of the density p and of the kernel function J_n
Some general expressions for the three-variable function p were obtained in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]. For the convenience of the reader, we repeat them here. The determinant entering the HCIZ integral is written as

\det e^{i x_i α_j} = e^{\frac{i}{n} \sum_{j=1}^{n} x_j \sum_{k=1}^{n} α_k}\, \det e^{i (x_i − \frac{1}{n}\sum x_k) α_j}    (6)
 = e^{\frac{i}{n} \sum_{j=1}^{n} x_j \sum_{k=1}^{n} α_k} \sum_{P ∈ S_n} ε_P \prod_{j=1}^{n−1} e^{i (x_j − x_{j+1}) \big(\sum_{k=1}^{j} α_{P(k)} − \frac{j}{n}\sum_{k=1}^{n} α_k\big)} ,    (7)

where ε_P is the signature of the permutation P. In the product of the three determinants entering (5), the prefactor e^{i \sum_{j=1}^{n} x_j \sum_{k=1}^{n}(α_k+β_k−γ_k)/n} yields, upon integration over (1/n)\sum x_j, 2π times a Dirac delta of \sum_k (α_k+β_k−γ_k), expressing the conservation of the trace in Horn's problem. One is left with an expression involving an integration over the (n−1) variables u_j := x_j − x_{j+1}:

p(γ|α, β) = \frac{sf(n−1)}{n!}\, δ\Big(\sum_k (α_k+β_k−γ_k)\Big)\, \frac{\Delta(γ)}{\Delta(α)\Delta(β)}\, J_n(α, β; γ)    (8)

J_n(α, β; γ) = \frac{i^{−n(n−1)/2}}{2^{n−1}\, n!\, π^{n−1}} \sum_{P, P', P'' ∈ S_n} ε_P\, ε_{P'}\, ε_{P''} \int_{R^{n−1}} \frac{d^{n−1}u}{\tilde\Delta(u)} \prod_{j=1}^{n−1} e^{i u_j A_j(P, P', P'')}    (9)

A_j(P, P', P'') = \sum_{k=1}^{j} \big(α_{P(k)} + β_{P'(k)} − γ_{P''(k)}\big) − \frac{j}{n} \sum_{k=1}^{n} (α_k + β_k − γ_k) ,    (10)

where the Vandermonde Δ(x) has been rewritten as

\tilde\Delta(u) := \prod_{1≤i≤j≤n−1} (u_i + u_{i+1} + ⋯ + u_j) .    (11)

Discussion. Several properties of p(γ|α, β) and of J_n are described in the paper [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]. We only summarize here the information that will be relevant for our discussion relating these functions to the Littlewood-Richardson multiplicity problem. Note that the above expression of A_j is invariant under simultaneous translations of all the γ's: ∀ i, γ_i → γ_i + c, c ∈ R. In the original Horn problem, this reflects the fact that the PDF p(γ|α, β) of the eigenvalues of C = A + B is the same as that of C + cI, with a shifted support. Therefore, in the computation of J_n(α, β; γ), one has a freedom in the choice of a "gauge":
(a) either γ_n = 0,
(b) or γ such that \sum_i γ_i = \sum_i (α_i + β_i) ,    (12)
(c) or any other choice, provided one takes into account the second term in the rhs of (10) (which vanishes in case (b)).
Note also that enforcing (12) starting from an arbitrary γ implies translating γ → γ̄ = γ + c, with c = (1/n)(\sum_i α_i + \sum_i β_i − \sum_i γ_i). If the original γ has integral components, this is generally not the case for the final γ̄. J_n(α, β; γ) has the following properties that will be used below:
-(i) As apparent on (9), it is an antisymmetric function of α, β or γ under the action of the Weyl group of SU(n) (the symmetric group S_n). As already said, we choose throughout this paper the ordering (1), and likewise for β and γ. For (α, β; γ) satisfying (12):
-(ii) J_n(α, β; γ) is piecewise polynomial, homogeneous of degree (n−1)(n−2)/2 in α, β, γ in the generic case;
-(iii) as a function of γ, it is of class C^{n−3}. This follows by the Riemann-Lebesgue theorem from the decay at large u of the integrand in (9), see [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals.
Probability distribution functions[END_REF]; -(iv) it is non negative inside the polytope r H αβ , cf sec. 1. By a slight abuse of language, when dealing with triples of highest weights pλ, µ; νq, we say that such an admissible triple is generic iff the associated triple pα, β; γq is, see below sec. 2.1. By another abuse of language, we also refer to a single highest weight λ as generic iff none of its Dynkin indices vanishes, i.e., iff λ does not lie on one of the walls of the dominant Weyl chamber, or if equivalently the associated α has no pair of equal components. From its interpretation as a probability density (up to positive factors), it is clear that J n could vanish at most on subsets of measure zero inside the Horn (or tensor) polytope. Actually it does not vanish besides the cases mentioned in points (v-vii) of the previous list. We want to construct the linear span of honeycombs r H γ αβ defined above in sect. 1.2. We first consider what may be called the "SUpnq case", where α n " β n " 0 and γ n is fixed by [START_REF] Haase | Lecture Notes on Lattice Polytopes, Fall School on Polyhedral Combinatorics[END_REF]. By relaxing the inequalities on the pn ´1qpn ´2q{2 parameters defining the usual honeycombs, one builds a vector space of dimension 1 2 pn ´1qpn `4q " 3pn ´1q `pn ´1qpn ´2q{2 whose elements are sometimes called real honeycombs. One may construct a basis of "fundamental honeycombs", see [START_REF] Fulton | Representation Theory, A First Course[END_REF], and consider arbitrary linear combinations, with real coefficients, of these basis vectors. The components of any admissible triplepα, β, γq, depend linearly of the components of the associated honeycombs along the chosen basis. In such a way, one obtains a surjective linear map, from the vector space of real honeycombs, to the vector space R 3pn´1q . One sees immediately that its fibers are affine spaces of dimension d max " pn ´1qpn ´2q{2, and for fixed α, β they are indexed by γ, i.e., by points of R pn´1q . By taking into account the inequalities defining usual honeycombs, but still working with real coefficients, the fibers of this map restrict to compact polytopes whose affine dimension d is at most equal to d max (the dimension can be smaller, because of the inequalities that define bounding hyperplanes). For given α and β, if γ belongs to the Horn polytope r H αβ Ă R pn´1q , the corresponding restricted fiber is nothing else than the associated hive polytope r H γ αβ . We therefore obtain a map π whose target set is the Horn polytope, a convex set, and whose fibers are compact polytopes. We then make use of the following result5 : the dimension of the fibers of π is constant on the interiors of the faces of its target set. In particular, it is constant on the interior of its face of codimension 0, which is the interior of the Horn polytope r H αβ . In the present situation this tells us that the dimension of π ´1pγq " r H γ αβ which is the fiber above γ, is constant when γ belongs to the interior of the Horn polytope r H αβ . In particular, its d-dimensional volume, where d has its maximal value d " pn ´1qpn ´2q{2 for SUpnq, cannot vanish there. We shall see later (in section 3) that this volume is given by J n pα, β; γq. In the case of GLpnq, (with α n , β n non fixed to 0), the argument is similar, so we have: Lemma 1. For α and β with distinct components, the function J n pα, β; γq does not vanish for γ inside the polytope r H αβ . 
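Both the HCIZ formula (3) and the probabilistic meaning of p(γ|α, β), whose support is the Horn polytope entering Lemma 1, can be checked by brute force. The sketch below (scipy's unitary_group is assumed for Haar sampling; all spectra are arbitrary test values) first compares the determinantal expression of (3) with a direct Monte-Carlo average of the orbital integral (2), and then samples the ordered eigenvalues of u A u* + v B v* for n = 3, which are distributed according to the density of eqs. (5)/(8).

```python
import math
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)

def vandermonde(v):
    """Delta(v) = prod_{i<j} (v_i - v_j)."""
    v = np.asarray(v, dtype=complex)
    return np.prod([v[i] - v[j] for i in range(len(v)) for j in range(i + 1, len(v))])

def hciz_exact(alpha, x):
    """Right-hand side of (3): sf(n-1) det(e^{i x_i alpha_j}) / (Delta(ix) Delta(alpha))."""
    n = len(alpha)
    sf = math.prod(math.factorial(p) for p in range(1, n))          # superfactorial sf(n-1)
    det = np.linalg.det(np.exp(1j * np.outer(x, alpha)))
    return sf * det / (vandermonde(1j * np.asarray(x)) * vandermonde(alpha))

def hciz_monte_carlo(alpha, x, samples=20000):
    """Left-hand side of (2): average of exp(i tr(X u* A u)) over Haar-random u in U(n)."""
    A, X = np.diag(np.asarray(alpha, float)), np.diag(np.asarray(x, float))
    acc = 0j
    for _ in range(samples):
        u = unitary_group.rvs(len(alpha), random_state=rng)
        acc += np.exp(1j * np.trace(X @ u.conj().T @ A @ u))
    return acc / samples

def sample_horn(alpha, beta, samples=5000):
    """Ordered spectra gamma of u A u* + v B v*, u, v Haar-random: samples p(gamma|alpha, beta)."""
    n = len(alpha)
    A, B = np.diag(alpha), np.diag(beta)
    out = np.empty((samples, n))
    for k in range(samples):
        u = unitary_group.rvs(n, random_state=rng)
        v = unitary_group.rvs(n, random_state=rng)
        out[k] = np.sort(np.linalg.eigvalsh(u @ A @ u.conj().T + v @ B @ v.conj().T))[::-1]
    return out

alpha, x = [2.0, 0.7, -1.3], [1.1, 0.4, -0.6]             # arbitrary test eigenvalues
print(hciz_exact(alpha, x), hciz_monte_carlo(alpha, x))    # agree within Monte-Carlo error (~1%)

gamma = sample_horn(np.array([2.0, 0.0, -2.0]), np.array([1.0, 0.0, -1.0]))
print(gamma.sum(axis=1)[:3])                  # trace conservation: each sampled spectrum sums to 0
print(gamma[:, 0].min(), gamma[:, 0].max())   # gamma_1 confined by Horn's inequalities (here [1, 3])
```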
2 From Horn to Littlewood-Richardson and from orbital transforms to characters Young partitions and highest weights An irreducible polynomial representation of GLpnq or an irrep of SUpnq, denoted V λ , is characterized by its highest weight λ. One may use alternative notations, describing this highest weight either by its Dynkin indices (components in a basis of fundamental weights) λ i , i " 1, ¨¨¨, n, and λ n " 0 in SUpnq ; or by its Young components, i.e., the lengths of rows of the corresponding Young diagram: α " pλq, i.e., i pλq " n ÿ j"i λ j i " 1, ¨¨¨, n . (14) Note that such an α " pλq satisfies the ordering condition (1). In the decomposition into irreps of the tensor product of two such irreps V λ and V µ of GLpnq, we denote by N ν λµ the Littlewood-Richardson (LR) multiplicity of V ν . As recalled above, N ν λµ equals the number of honeycombs with integral labels and boundary conditions α " pλq, β " pµq, γ " pνq, i.e., the number of integral points in the polytope H ν λ µ [START_REF] Knutson | The honeycomb model of GL n pCq tensor products I: proof of the saturation conjecture[END_REF]. Given three U(n) (resp. SUpnq) weights λ, µ, ν, for instance described by their n (resp. n ´1) components along the basis of fundamental weights, invariance under the U(1) center of U(n) (resp. the Z n center of SUpnq), tells us that a necessary condition for the non-vanishing of N ν λµ is ř n j"1 jpλ j `µj ´νj q " 0 (resp. ř n´1 j"1 jpλ j `µj ´νj q " 0 mod n). Given three SUpnq weights λ, µ, ν obeying the above SUpnq condition, one can build three U(n) weights (still denoted λ, µ, ν) obeying the U(n) condition by setting λ n " µ n " 0 and ν n " 1 n ř n´1 j"1 jpλ j `µj ´νj q; in terms of partitions, with α " pλq, β " pµq and γ " pνq, the obtained triple pα, β; γq automatically obeys eq. ( 12). More generally we shall refer to a U(n) triple such that the equivalent U(n) conditions eq. ( 12), or eq. ( 15) below, hold true, as a U(n)-compatible triple, or a compatible triple, for short. Definition 2. A triple pλ, µ; νq of Upnq weights is said to be compatible iff n ÿ k"1 kpλ k `µk ´νk q " 0 . (15) For triples of SUpnq weights, we could use the same terminology, weakening the above condition (15) since it is then only assumed to hold modulo n, but in the following we shall always extend such SUpnq-compatible triples to U(n)-compatible triples, as was explained previously. We also recall another more traditional definition Definition 3. A triple pλ, µ; νq of Upnq or SUpnq weights is said to be admissible iff N ν λµ ‰ 0 . The reader should remember (at least in the context of this article !) the difference between compatibility and admissibility, the former being obviously a necessary condition for the latter. For given λ and µ, or equivalently, given α and β, if N ν λµ ‰ 0 for some h.w. ν, the corresponding γ must lie inside or on the boundary of the Horn polytope r H αβ , by definition of the latter. Since for n ě 3 the function J n pα, β; γq is continuous and vanishes on the boundary of its support, evaluating it for α, β, γ does not provide a strong enough criterion to identify admissible triples pα, β; γq. 
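The bookkeeping of eq. (14) and of the compatibility condition (15) is elementary and can be made explicit with a small helper (hypothetical function names, written here for SU(n) triples given by their Dynkin labels): it builds the associated U(n) partitions and checks that the resulting triple satisfies the trace condition (12).

```python
from fractions import Fraction

def partition_from_dynkin(dynkin, last_component=0):
    """Young components ell_i = sum_{j>=i} lambda_j (eq. 14), with lambda_n = last_component."""
    lam = list(dynkin) + [last_component]
    return [sum(lam[i:]) for i in range(len(lam))]

def compatible_un_triple(lam, mu, nu):
    """Given SU(n) Dynkin labels, set nu_n = (1/n) sum_j j (lam_j + mu_j - nu_j), so that the
    resulting U(n) triple of partitions obeys (15), i.e. sum(gamma) = sum(alpha) + sum(beta)."""
    n = len(lam) + 1
    nu_n = Fraction(sum((j + 1) * (lam[j] + mu[j] - nu[j]) for j in range(n - 1)), n)
    alpha = partition_from_dynkin(lam)
    beta = partition_from_dynkin(mu)
    gamma = [g + nu_n for g in partition_from_dynkin(nu)]   # every row of gamma is shifted by nu_n
    assert sum(gamma) == sum(alpha) + sum(beta)             # the trace condition (12)
    return alpha, beta, gamma

# SU(3) example: 8 x 8 contains 8, i.e. lambda = mu = nu = (1, 1)
print(compatible_un_triple([1, 1], [1, 1], [1, 1]))  # alpha = beta = [2, 1, 0], gamma = [3, 2, 1]
```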
Relation between Weyl's character formula and the HCIZ integral There is an obvious similarity between the general form (5) of the PDF ppγ|α, βq and the expression of the LR multiplicity N ν λµ as the integral of the product of characters χ λ χ µ χ ν over the unitary group SU(n) or over its Cartan torus T n " Up1q n´1 N ν λµ " ż SUpnq du χ λ puqχ µ puqχ ν puq or N ν λµ " ż Tn dT χ λ pT qχ µ pT qχ ν pT q (16) with the normalized Haar measure on T n , dT " 1 p2πq n´1 n! |∆pe i t q| 2 n´1 ź i"1 dt i , (17) for T " diag pe i t j q j"1,¨¨¨,n with n ÿ j"1 t j " 0 . (18) This similarity finds its root in the Kirillov [START_REF] Kirillov | Lectures on the Orbit Method[END_REF] formula expressing χ λ as the orbital function H relative to O pλ`ρq , defined in [START_REF] Bégin | sup3q k fusion coefficients[END_REF], see below [START_REF] Knutson | The honeycomb model of GL n pCq tensor products II: Puzzles determine facets of the Littlewood-Richardson cone[END_REF][START_REF] Macdonald | The volume of a compact Lie group[END_REF]; note the shift of λ by the Weyl vector ρ, the half-sum of positive roots. Recall Weyl's formula for the dimension of the vector space V λ of h.w. λ dim V λ " ∆pα 1 q sf pn ´1q with α 1 " pλ `ρq , and as defined in ( 14) . From a geometrical point of view, this formula expresses dim V λ as the volume of a group orbit normalized by the volume of SUpnq, the latter being also equal to sf pn ´1q, once a natural Haar measure has been chosen, see [START_REF] Macdonald | The volume of a compact Lie group[END_REF]. From group characters to Harish-Chandra orbital functions Kirillov's formula [START_REF] Kirillov | Lectures on the Orbit Method[END_REF] relates Weyl's SUpnq character formula with the orbital function of O α 1 . Here and below, the prime on α 1 refers to the value of α, for the shifted highest weights λ `ρ α 1 " pλ `ρq , (20) and likewise for β 1 , γ 1 . Indeed evaluated on an element T of the SUpnq Cartan torus as in ( 18), Weyl's character formula reads χ λ pT q :" tr V λ pT q " det e i t i α 1 j ∆pe i t q with ∆pe i t q " ź 1ďiăjďn pe i t i ´ei t j q , (21) or in terms of the orbital function H defined in (2) and made explicit in (3) χ λ pT q " ∆pα 1 q sf pn ´1q ˜ź 1ďiăjďn i pt i ´tj q pe i t i ´ei t j q ¸Hpα 1 , i tq (22) or, owing to the Weyl dimension formula ( 19) χ λ pT q dim V λ " ∆pi tq ∆pe i t q Hpα 1 , i tq . (23) The polynomial R n pT q Consider the following (semi-convergent) integral J " ż R du e i uA u A P R a one-dimensional analogue of the integral encountered in [START_REF] Derksen | On the Littelwood-Richardson polynomials[END_REF]. If A is a half-integer, we may write according to a well-known identity. If A is an integer, the previous sum over n is understood as a principal value. Then A half-integer J " ż π ´π du e i uA 8 ÿ n"´8 p´1q n u `np2πq " ż π ´π du e i uA A integer J " ż π ´π du e i uA P.V. 8 ÿ n"´8 1 u `np2πq " ż π ´π du e i uA 1 2 tanpu{2q We now repeat this simple calculation for the pn ´1q-dimensional integral appearing in [START_REF] Derksen | On the Littelwood-Richardson polynomials[END_REF], evaluated either for unshifted α, β, γ or for shifted α 1 , β 1 , γ 1 , associated as above with a compatible triple of highest weights pλ, µ; νq. 
First we observe that the determinant det e i px i ´1 n ř x k qα 1 j that appears in the first line of ( 7) is nothing else than the numerator of Weyl's formula [START_REF] Knutson | Honeycombs and sums of Hermitian matrices[END_REF] for the SUpnq character χ λ pT q, evaluated for the unitary and unimodular matrix T " diag `ei px i ´1 n ř x k q ˘. ( 24 ) Henceforth we take t i " px i ´1 n ř x k q, ř t i " 0. Consider now the product of three such determinants as they appear in the computation of J n pα 1 , β 1 ; γ 1 q, see [START_REF] Derksen | On the Littelwood-Richardson polynomials[END_REF]. Each factor e i ř j u j A j , under 2π-shifts of the variables u j :" t j ´tj`1 , u j Ñ u j `pj p2πq, is not necessarily periodic, because of the second term of A j in ( 10): e i ř j u j A j Ñ e i ř j u j A j e ´2πi ř j jp j n ř k pα 1 k `β1 k ´γ1 k q . Indeed, for α 1 " pλ `ρq, etc, we have n ÿ k"1 pα 1 k `β1 k ´γ1 k q " n´1 ÿ k"1 kpλ k `µk ´νk q `npn ´1q 2 , the first term of which vanishes for a compatible triple pλ, µ; νq, see [START_REF] Ikenmeyer | Small Littlewood-Richardson coefficients[END_REF]. Thus we find that under the above shift, e i ř j u j A j Ñ e i ř j u j A j p´1q ř j jpn´1qp j . For n odd, like in SU(3), the numerator is 2πperiodic in each variable u j . For n even, however, we have a sign p´1q jp j . We may thus compactify the integration domain of the u-variables, bringing it from R n´1 back to p´π, πq n´1 by translations u j Ñ u j `p2πqp j , while taking the above sign into account. Thus for a compatible triple pλ, µ; νq and the A j 's standing for the expressions of (10) computed at shifted weights α 1 " pλ `ρq and likewise for β 1 and γ 1 , we have ż R n´1 ś n´1 j"1 du j e i u j A j r ∆puq " ż p´π,πq n´1 n´1 ź j"1 du j e i u j A j D n where D n " 8 ÿ p 1 ,¨¨¨,p n´1 "´8 p´1q ř j jp j pn´1q ź 1ďiăi 1 ďn 1 u i `ui`1 `¨¨¨`u i 1 ´1 `pp i `¨¨¨`p i 1 ´1qp2πq , (25) a sum that always converges. Now define n :" ź 1ďiăi 1 ďn 2 sinp 1 2 pu i `ui`1 `¨¨¨`u i 1 ´1qq " i ´npn´1q{2 ∆pe i t i q (26) R n pT q :" D n n . (27) R n , as defined by [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF], is a function of T with no singularity, since all the poles of the original expression r ∆puq ´1 have been embodied in the denominator ∆pe i t i q. It must be a polynomial in T and T ‹ , invariant under permutations and complex conjugation, hence a real symmetric polynomial of the e i t j . (Since det T " 1, T ‹ is itself a polynomial in T .) We conclude that R n pT q may be expanded on real characters χ κ pT q, κ P K, with K a finite n-dependent set of highest weights. Moreover R n pIq " 1, as may be seen by looking at the small t limit of [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF]. Thus Proposition 1. The integrals over R n´1 appearing in J n pα 1 , β 1 ; γ 1 q in (9), for α 1 " pλ `ρq, β 1 " pµ `ρq, γ 1 " pν `ρq, pλ, µ; νq a compatible triple, may be "compactified" in the form ż R n´1 ś n´1 j"1 du j e i u j A j r ∆puq " i npn´1q{2 ż p´π,πq n´1 n´1 ź j"1 du j e i u j A j R n pT q ∆pe i t i q (28) where the real polynomial R n pT q is defined through [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF]. There exists a finite, n-dependent set K of highest weights such that R n pT q may be written as a linear combination R n pT q " ř κPK r κ χ κ pT q of real characters. 
The coefficients r κ are rational and such that, when evaluated at the identity matrix, R n pIq " 1. Consider now the similar computation, again for a compatible triple pλ, µ; νq but with the A j 's standing for the expressions of (10) computed at unshifted weights, i.e., with α " pλq and likewise for β and γ. If the triple pα, β; γq is non generic, J n pα, β; γq " 0. If it is generic, and n is odd, pα, β; γq may be thought of as associated with the shift of the compatible triple pλ ´ρ, µ ´ρ ; ν ´ρq. Thus for n odd, this new calculation yields the same result as above. For n even, however, the latter triple is no longer compatible and a separate calculation has to be carried out. It is easy to see that the same line of reasoning leads to a modification of the formula [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF] and to a new family of real symmetric polynomials p R n pT q, according to ż R n´1 ś n´1 j"1 du j e i u j A j r ∆puq " ż p´π,πq n´1 n´1 ź j"1 du j e i u j A j p D n p D n :" 8 ÿ p 1 ,¨¨¨,p n´1 "´8 ź 1ďiăi 1 ďn 1 u i `ui`1 `¨¨¨`u i 1 ´1 `pp i `¨¨¨`p i 1 ´1qp2πq (29) p R n pT q :" p D n n , (30) with the same n as in [START_REF] Racah | Group theoretical concepts and methods in elementary particle physics[END_REF]. Note that the sum in ( 29) is convergent for n ą 2. The case n " 2 requires a special treatment, see below in sec. 4.2.1. Proposition 2. The integrals over R n´1 appearing in J n pα, β; γq in (9), for α " pλq, β " pµq, γ " pνq, pλ, µ; νq a compatible triple, may be compactified in the form ż R n´1 ś n´1 j"1 du j e i u j A j r ∆puq " i npn´1q{2 ż p´π,πq n´1 n´1 ź j"1 du j e i u j A j p R n pT q ∆pe i t i q (31) where the real polynomial p R n pT q is defined through [START_REF] Vergne | Poisson summation formula and box splines[END_REF]. There exists a finite n-dependent set p K of highest weights such that p R n pT q may be written as a linear combination p R n pT q " ř κP p K rκ χ κ pT q of real characters. The coefficients rκ are rational and such that, when evaluated at the identity matrix, p R n pIq " 1. For n odd, the following objects coincide with those of Proposition 1: p R n " R n , p K " K and r κ " rκ . A method of calculation and explicit expressions for low values of n of the polynomials R n , p R n and of the sets K, p K will be given in sections 2.4 and 4.2, establishing the rationality of the coefficients r κ , rκ . We shall see that the polynomial R n is equal to 1 for n " 2 and n " 3, but non-trivial when n ě 4. In contrast, already for n " 2, p R 2 pT q " 1 2 χ 1 pT q. These expressions of R n and p R n for low n suggest the following conjecture Conjecture 1. The coefficients r κ and rκ are non negative. As we shall see below in sec. 2.5 (v), this Conjecture 1 is related to Lemma 1. Relation between J n and LR coefficients We may now complete the computation of J n pα 1 , β 1 ; γ 1 q and J n pα, β; γq. We rewrite 1 ∆pe i t q " |∆pe i t q| 2 1 ∆pe i t q∆pe i t q∆pe i t q ˚, the first term |∆pe i t q| 2 is what is needed for writing the normalized Haar measure over the SUpnq Cartan torus T n , see [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF], while the three Vandermonde determinants in the denominator provide the desired denominators of Weyl's character formula. Putting everything together we find Theorem 1. 1. 
For a compatible triple pλ, µ; νq, the integral J n of (8-9), evaluated for the shifted weights λ `ρ etc, or for the corresponding α 1 " pλ `ρq, β 1 " pµ `ρq, γ 1 " pν `ρq, may be recast as J n pα 1 , β 1 ; γ 1 q " ż Tn dT χ λ pT qχ µ pT qχ ν pT q R n pT q (32) where the integration is carried out on the Cartan torus with its normalized Haar measure. Writing R n pT q " ř κPK r κ χ κ pT q as in Prop. 1, this may be rewritten as J n pα 1 , β 1 ; γ 1 q " ÿ κPK ν 1 r κ N ν 1 λµ N ν 1 κν " ÿ ν 1 c pνq ν 1 N ν 1 λµ ( 33 ) where the sum runs over the finite set of irreps ν 1 obtained in the decomposition of ' κPK pν b κq, with rational coefficients c pνq ν 1 " ř κPK N ν 1 κν r κ . 2. For a compatible triple pλ, µ; νq of weights not on the boundary of the Weyl chamber, the integral J n of (8-9), evaluated for the unshifted weights λ, µ, ν, or for the corresponding α " pλq, β " pµq, γ " pνq, may be recast as J n pα, β; γq " ż Tn dT χ λ´ρ pT qχ µ´ρ pT qχ ν´ρ pT q p R n pT q (34) where the integration is carried out on the Cartan torus with its normalized Haar measure. Writing p R n pT q " ř κP p K rκ χ κ pT q as in Prop. 2, this may be rewritten as J n pα, β; γq " ÿ κP p K ν 1 rκ N ν 1 λ´ρ µ´ρ N ν 1 κ ν´ρ (35) " ÿ ν 1 ĉpνq ν 1 N ν 1 λ´ρ µ´ρ ( 36 ) where the sum runs over the finite set of irreps ν 1 obtained in the decomposition of ' κP p K `pν ´ρqbκ ˘, with rational coefficients ĉpνq ν 1 " ř κP p K N ν 1 κ ν´ρ rκ . Proof. (32) and (34) result from the previous discussion. The product R n pT qχ ν pT q may then be decomposed on characters, R n pT qχ ν pT q " ÿ κPK r κ χ κ pT qχ ν pT q " ÿ κPK ν 1 N ν 1 κν r κ χ ν 1 pT q " ÿ ν 1 c pνq ν 1 χ ν 1 pT q , with c pνq ν 1 " ř κPK N ν 1 κν r κ , which yields (33). Similarly, p R n χ ν´ρ " ř ν 1 ĉpνq ν 1 χ ν 1 with ĉpνq ν 1 " ř κP p K N ν 1 κ ν´ρ rκ , which gives (36). Recall that if either of λ, µ or ν lies on the boundary of the Weyl chamber, α, β or γ has at least two equal components and J n pα, β; γq " 0. Thus, in words, J n pα 1 , β 1 ; γ 1 q and J n pα, β; γq may be expressed as linear combinations of LR coefficients over "neighboring" weights ν 1 of ν. If Conjecture 1 is right, the coefficients c pνq ν 1 , ĉpνq ν 1 are also non negative. Remark. Note that even though the function J n pα, β; γq is defined for any triple pα, β; γq, compatible or not, integral or not, equations (33),(36) hold only for triples pα 1 , β 1 ; γ 1 q or pα, β; γq associated with compatible triples pλ, µ; νq. Recall also from the previous discussion that for n even, the triple pα 1 , β 1 ; γ 1 q is not integral and compatible if the triple pα, β; γq (or pλ, µ; νq) is. Comment. It would be interesting to invert relations (33,36) and to express the LR coefficients N ν λµ as linear combinations of the functions J n and their derivatives. In view of the considerations of [START_REF] Vergne | Poisson summation formula and box splines[END_REF], this doesn't seem inconceivable6 . Expression of the R and p R polynomials Here is the essence of the method used to compute R n and p R n , as defined through ( 27), [START_REF] Vergne | Poisson summation formula and box splines[END_REF]. We first introduce two families of functions, defined recursively f pu, mq " ´1 m ´1 B Bv f pv, m ´1q| v" f pu, 1q " 2u 8 ÿ m"1 1 u 2 ´p2πqm 2 `1 u " 1 2 tanpu{2q and gpu, 1q " 2u 8 ÿ m"1 p´1q m u 2 ´p2πq 2 m 2 `1 u " 1 2 sinpu{2q . 
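The two series defining f(u, 1) and g(u, 1) above are the classical partial-fraction expansions of the half-angle cotangent and cosecant. As a sanity check of these building blocks, a truncated numerical evaluation (numpy, generic value of u) is sketched below.

```python
import numpy as np

u = 1.234                                       # any u that is not a multiple of 2*pi
m = np.arange(1, 200001, dtype=float)           # truncation of the sum over m
f1 = 1.0 / u + np.sum(2.0 * u / (u**2 - (2.0 * np.pi * m)**2))              # -> 1/(2 tan(u/2))
g1 = 1.0 / u + np.sum((-1.0)**m * 2.0 * u / (u**2 - (2.0 * np.pi * m)**2))  # -> 1/(2 sin(u/2))
print(f1, 1.0 / (2.0 * np.tan(u / 2)))          # agree to ~1e-6 with this truncation
print(g1, 1.0 / (2.0 * np.sin(u / 2)))
```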
R n and p R n , defined in [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF][START_REF] Vergne | Poisson summation formula and box splines[END_REF], are obtained explicitly by an iterative procedure. We start from 1{ r ∆puq " ź 1ďiďjďn´1 1 pu i `ui`1 `¨¨¨u j q First we pick a variable in p r ∆puqq ´1, say u 1 , shift it by p 1 p2πq, perform a partial fraction expansion of the rational function ś 2ďjďn 1 u 1 `¨¨¨`u j´1 `p1 p2πq with respect to the variable u 1 and make use of the previous identities in the summation over p 1 . This produces a sum of trigonometric functions of u 1 , ¨¨¨, u n´1 which are p2πq periodic or anti-periodic in each of these variables, times rational functions of u 2 , ¨¨¨, u n´1 . Then iterate with the variable u 2 , say, shifting it by p 2 p2πq etc. (Of course the order of the variables is immaterial.) As explained in sec. 2.2.2, the final result has the general form R n presp. p R n q ś 1ďiăi 1 ďn 2 sinp 1 2 pu i `ui`1 `¨¨¨`u i 1 ´1qq where R n , resp. p R, is a (complicated) trigonometric function of the u variables, or alternatively a symmetric trigonometric function of the t variables. The latter is then recast as a sum of real characters of the matrix T . This procedure will be illustrated in sec. 4.2 on the first cases, for 2 ď n ď 6. Remark. The reader may have noticed the parallel between this way of computing p R n and the computation of J n in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]: both rely on an iterative partial fraction expansion, the connection between the two being the Poisson formula. As a consequence of this simple correspondence, J n pα, β; γq evaluated for a compatible triple and p R n have rational coefficients with the same least common denominator δ n , see below Prop. 3. Proposition 3. For any integral compatible triple pα, β; γq, J n pα, β; γq is an integral multiple of some rational number δ ´1 n . Proof. Call δ n the least common denominator of the coefficients ĉν 1 in (36). Then we see that J n pα, β; γq is an integral multiple of 1{δ n . Unfortunately we have no general expression of δ n and rely on explicit calculations for low values of n: n 2 3 4 5 6 ¨¨δ n 1 1 6 360 9! (iv) Asymptotic behavior. The asymptotic regime is read off (32-36): heuristically, we expect that asymptotically, for rescaled weights, the t-integral in the computation of J n will be dominated by t « 0, hence T « I, for which R n " p R n " 1, whence the asymptotic equality, for λ, µ, ν large J n pα 1 , β 1 ; γ 1 q « J n pα, β; γq « N ν λµ . (43) More precisely, it is known [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF] that, as a function of ν 1 , N ν 1 λµ can be extended to a continuous piecewise polynomial function, thus for large ν, one approximates the rhs of (33) by N ν λµ ř ν 1 c ν 1 « N ν λµ since the coefficients sum up to 1, again as a consequence of R n pIq " 1: ÿ ν 1 c ν 1 " ÿ κPK r κ ÿ ν 1 N ν 1 κν large ν « ÿ κPK r κ dim V κ " 1 as observed above in (37). We shall see below in sec. 3 that (32,33) enable us to go (a bit) beyond this leading asymptotic behavior. (v) Compare Conjecture 1 and Lemma 1. We just observe here that Conjecture 1 is consistent with Lemma 1. 
Indeed, if we apply (33) to an admissible (hence compatible) triple pλ, µ; νq, with the assumption that the sum over ν 1 includes ν with a non vanishing coefficient c ν , and using the non negativity of the other c ν 1 (as stated in Conj. 1), one obtains J n pα 1 , β 1 ; γ 1 q ě N ν λµ ą 0, in agreement with Lemma 1. On polytopes and polynomials The polytopes r H αβ and H ν λµ considered in this section have been introduced in sec. 1.2. Ehrhart polynomials Given some rational polytope P, call sP the s-fold dilation of P, i.e., the polytope obtained by scaling by a factor s the vertex coordinates (corners) of P in a basis of the underlying lattice. The number of lattice points contained in the polytope sP is given by a quasi-polynomial called the Ehrhart quasi-polynomial of P, see for example [START_REF] Stanley | Enumerative Combinatorics[END_REF]. It is polynomial for integral polytopes but one can also find examples of rational non-integral polytopes, for which it is nevertheless a genuine polynomial. We remind the reader that the first two coefficients (of highest degree) of the Ehrhart polynomial of a polytope P of dimension d are given, up to simple normalizing constant factors, by the d-volume of P and by the pd ´1q-volume of the union of its facets; the coefficients of smaller degree are usually not simply related to the volumes of the faces of higher co-dimension. We finally mention the Ehrhart-Macdonald reciprocity theorem: the number of interior points of P, of dimension d, is given, up to the sign p´1q d , by the evaluation of the Ehrhart polynomial at the negative value s " ´1 of the scaling parameter. Littlewood-Richardson polynomials It is well known [START_REF] Heckman | Projections of Orbits and Asymptotic Behavior of Multiplicities for Compact Lie Groups[END_REF][START_REF] Guillemin | Symplectic Fibrations and Multiplicity Diagrams[END_REF] that multiplicities like the LR coefficients admit a semi-classical description for "large" representations. In the present context, there is an asymptotic equality of the LR multiplicity N ν λµ , when the weights λ, µ, ν are rescaled by a common large integer s, with the function J n . Here again we assume that the admissible triple pλ, µ; νq is generic, in the sense of Definition 1. Indeed, from (43), as s Ñ 8 N sν sλ sµ « J n p psλ `ρq, psµ `ρq; psν `ρqq « J n psα, sβ; sγq " s pn´1qpn´2q{2 J n pα, β; γq . (44) The last equality just expresses the homogeneity of the function J n . These scaled or "stretched" LR coefficients have been proved to be polynomial ("Littlewood-Richardson polynomials") in the stretching parameter s [START_REF] Derksen | On the Littelwood-Richardson polynomials[END_REF][START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF], N sν sλ sµ " P ν λµ psq (45) and it has been conjectured that the polynomial P ν λµ psq (of degree at most pn ´1qpn ´2q{2 by ( 44)), has non negative rational coefficients [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF]. More properties of P ν λµ psq, namely their possible factorization and bounds on their degree have been discussed in [START_REF] King | The hive model and the polynomial nature of stretched Littlewood-Richardson coefficients[END_REF]. For a generic triple, our study leads to an explicit value (eq. ( 44)) for the coefficient of highest degree, namely the kernel function J n pα, β; γq, see eq. ( 9). 
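As an illustration of eqs. (16)-(17) and of the stretching behaviour (44)-(45), the sketch below computes N^ν_λμ for SU(3) by quadrature of the character product over the Cartan torus (the integrand is a trigonometric polynomial, so an equispaced grid is exact once it is fine enough), and then prints the stretched values N^{sν}_{sλ sμ} for the triple λ = µ = ν = (1,1), whose stretching polynomial is s + 1; its leading coefficient is the value of J_3 for this triple, in line with (44). The grid size and the small generic offset (which keeps the Weyl denominator away from zero on grid points) are ad hoc choices, adequate only for small Dynkin labels.

```python
import math
import numpy as np

def character_su3(dynkin, t):
    """Weyl character (21) for SU(3): ratio of alternants, T = diag(e^{i t_j}), sum(t) = 0."""
    lam = list(dynkin) + [0]
    ell = [sum(lam[i:]) for i in range(3)]                  # partition ell(lambda), eq. (14)
    expo = np.array([ell[i] + (2 - i) for i in range(3)])   # shifted exponents ell(lambda + rho)
    num = np.linalg.det(np.exp(1j * np.outer(t, expo)))
    den = np.linalg.det(np.exp(1j * np.outer(t, [2, 1, 0])))
    return num / den

def lr_su3(lam, mu, nu, grid=96):
    """N^nu_{lam mu} = int_T chi_lam chi_mu conj(chi_nu) dT (eqs. 16-17), equispaced quadrature."""
    offset = np.array([0.2137, 0.5861])   # generic shift: no coinciding e^{i t_j} on the grid
    total = 0.0 + 0.0j
    for k in range(grid):
        for l in range(grid):
            t1 = 2 * np.pi * k / grid + offset[0]
            t2 = 2 * np.pi * l / grid + offset[1]
            t = np.array([t1, t2, -t1 - t2])
            z = np.exp(1j * t)
            delta2 = abs((z[0] - z[1]) * (z[0] - z[2]) * (z[1] - z[2]))**2   # |Delta(e^{it})|^2
            total += delta2 * character_su3(lam, t) * character_su3(mu, t) \
                            * np.conj(character_su3(nu, t))
    return round((total / (grid**2 * math.factorial(3))).real)

print(lr_su3((1, 0), (0, 1), (1, 1)))          # 3 x 3bar contains the octet once -> 1
for s in (1, 2, 3):                            # stretched coefficients N^{s(1,1)}_{s(1,1) s(1,1)}
    print(s, lr_su3((s, s), (s, s), (s, s)))   # expected s + 1, the LR polynomial of this triple
```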
From the very definition of the hive polytope H ν λ µ associated with an admissible triple (each integral point of which is a honeycomb contributing to the multiplicity), with Littlewood-Richardson, or stretching, polynomial P ν λµ psq, and from the general definition of the Ehrhart polynomial, it is clear that both polynomials are equal. Notice that P ν λµ psq, defined as the Littlewood-Richardson polynomial of the triple pλ, µ; νq or as the Ehrhart polynomial of the polytope H ν λ µ , is polynomial even if the hive polytope happens not to be an integral polytope; on the other hand the Ehrhart polynomial of the polytope defined as the convex hull of the integral points of H ν λ µ will differ from P ν λµ psq if H ν λ µ is not integral, see two examples in sec. 4.4.2 and 4.4.3. From the volume interpretation of the first Ehrhart coefficient, which was recalled in sec. 3.1, we find: Proposition 4. For SUpnq, the normalized d-volume V of the hive polytope H ν λµ equals d! J n pα, β; γq, with d " pn ´1qpn ´2q{2, for a generic and admissible triple pλ, µ; νq, with α " pλq, β " pµq, γ " pνq, and with J n pα, β; γq given by eq. ( 9). We use here the definition given by [START_REF] Haase | Lecture Notes on Lattice Polytopes, Fall School on Polyhedral Combinatorics[END_REF][START_REF] Bosma | The Magma algebra system. I. The user language[END_REF]: for a polytope of dimension d, the Euclidean volume v is related to the normalized volume V by v " V{d!. More generally the total normalized p-volume V p of the p-dimensional faces of a polytope is related to its total Euclidean p-volume v p by v p " V p {p!. This is consistent with the result [START_REF] Knutson | The honeycomb model of GL n pCq tensor products I: proof of the saturation conjecture[END_REF] that the LR coefficient is equal to the number of integral points in the hive polytope. In words, (44) says that the number of integral points of that polytope is asymptotically well approximated by its euclidean volume J n . The Blichfeldt inequality [START_REF] Blichfeldt | Notes on geometry of numbers, in the October meeting of the San Francisco section of the AMS[END_REF] valid for an integral polytope Q of dimension d, states that its number of integral points is smaller than V `d, where V is its normalized volume. This property, which a fortiori holds for a rational polytope H with integral part Q, together with Proposition 4, implies the following inequality for a generic hive polytope H ν λ µ of SUpnq: d! J n pα, β; γq ě N ν λµ ´d (46) with d " pn ´1qpn ´2q{2 and α " pλq, β " pµq, γ " pνq. Polytopes versus symplectic quotients Here is another argument relating the volume of the hive polytope with ppγ|α, βq, hence also with J n pα, β; γq, for α " pλq, β " pµq, γ " pνq, λ, µ, ν being dominant integral weights. It goes in two steps, as follows. Step 1. N ν λµ is the number of integral points of the hive polytope. For large s, the coefficient N sν sλ sµ is approximated by s d times the volume of the same polytope. Step 2. For large s, N sν sλ sµ is approximated 7 by the volume of a symplectic quotient of the product of three coadjoint orbits labelled by λ, µ, ν, where ν is the conjugate of ν. The same volume is given, up to known constants, by ppγ|α, βq, hence by J n pα, β; γq, see [START_REF] Knutson | Honeycombs and sums of Hermitian matrices[END_REF], Th4. Hence the result. 
As already commented in [START_REF] Knutson | Honeycombs and sums of Hermitian matrices[END_REF], the equality between the two volumes is quite indirect and it would be nice to construct a measure preserving map between the hive polytope and the above symplectic quotient, or a variant thereof. To our knowledge, this is still an open problem. The details of the first part of step 2 are worked out in [START_REF] Suzuki | Asymptotic dimension of invariant subspace in tensor product representation of compact Lie group[END_REF]. We should mention that this last reference also adresses the problem of calculating the function ppγ|α, βq, at least when the arguments are determined by dominant integral weights, and the authors present quite general formulae that are similar to ours. However, they do not use the explicit writing of the orbital measures using formula (3), which was a crucial ingredient of our approach and allowed us to obtain rather simple expressions for J n pα, β; γq. Subleading term From the asymptotic behavior (44), we have N sν sλ sµ " P ν λµ psq " s pn´1qpn´2q{2 J n p pλq, pµq; pνqqp1 `Ops ´1qq provided the leading coefficient J n p pλq, pµq; pνqq does not vanish. According to Lemma 1 the stretching polynomial P ν λµ psq is of degree pn ´1qpn ´2q{2 for ν inside the tensor polytope and for λ, µ R BC, but is of lower degree on the boundary of that polytope, or for λ or µ on BC. Write (33) for stretched weights J n p psλ `ρq, psµ `ρq; psν `ρqq " ÿ κPK r κ ÿ ν 1 N ν 1 sν κ N ν 1 sλ sµ . For s large enough, all the weights ν 1 " sν `k, where k runs over the multiset tκu of weights (i.e., counted with their multiplicity) of the irrep with highest weight κ, are dominant and thus contribute to the multiplicity N ν 1 sν κ [START_REF] Racah | Group theoretical concepts and methods in elementary particle physics[END_REF]. Thus J n p psλ `ρq, psµ `ρq; psν `ρqq " ÿ κPK r κ ÿ kPtκu N sν`k sλ sµ . ( 47 ) But as a function of λ, µ, ν, and in the case of SUpnq, the LR coefficient N ν λ µ is itself a piecewise polynomial [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF]: more precisely in the latter reference it is shown that, for the case of SUpnq, the quasi-polynomials giving the Littlewood-Richardson coefficients in the cones of the Kostant complex are indeed polynomials of total degree at most pn ´1qpn ´2q{2 in the three sets of variables defined as the components of the highest weights λ, µ, ν. 7 More precisely limsÑ8 1 s d N sν sλ sµ " ş ω d {d!, with d " pn ´1qpn ´2q{2, where ω is the symplectic 2-form on the symplectic and Kähler manifold of complex dimension d defined as pO λ ˆOµ ˆOν q{{SUpnq :" m ´1p0q{SUpnq, with m, the moment map m : pa1, a2, a3q P O λ ˆOµ ˆOν Þ Ñ a1 `a2 `a3 P LiepSUpnqq ˚. Remark. The well known Kostant-Steinberg method for the evaluation of the LR coefficients (a method where one performs a Weyl group average over the Kostant function) is not used in our paper, or it is only used as a check. However we should stress that, even in the case of SU [START_REF] Berenstein | Triple multiplicities for sl(r `1) and the spectrum of the external algebra in the adjoint representation[END_REF] where the LR coefficients can be deduced from our kernel function J 3 , see below sec. 4.1.2, the expressions obtained for N ν λµ using the Kostant-Steinberg method differ from ours. If we assume that N ν λ µ may be extended to a function of the same class as J n , namely C n´3 , see above sec. 
1.3.2, a Taylor expansion to second order of the rhs of (47) is possible for n ě 4. This leaves out the cases n " 2 and n " 3 which may be treated independently, see below sec. [START_REF] Betke | Lattice points in lattice polytopes[END_REF] A case by case study for low values of n We examine in turn the cases n " 2, ¨¨¨, 6. Expression and properties of the J n function The expressions of J 2 , J 3 and J 4 were already given in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]. We repeat them below for the reader's convenience. Those of J 5 and J 6 , which are fairly cumbersome, are available on the web site http://www.lpthe.jussieu.fr/ ~zuber/Z_Unpub.html The case of SU(2) In the case of n " 2, the function J 2 reads J 2 pα, β; γq " p1 I pγ 12 q ´1´I pγ 12 qq (49) where γ 12 :" γ 1 ´γ2 and 1 I is the characteristic function of the segment8 I " p|α 12 ´β12 |, α 12 `β12 q. Then, when evaluated for shifted weights, α 1 " α 12 `1 " λ 1 `1, β 1 " β 12 `1 " µ 1 `1, γ 1 " γ 12 `1 " ν 1 `1 ą 0, it takes the value 1 iff |α 12 ´β12 | ă γ 12 `1 ă α 12 `β12 `2, i.e., iff |α 12 ´β12 | ď γ 12 ď α 12 `β12 which is precisely the well known value of the LR coefficient, N ν λµ " $ ' & ' % 1 if |α 12 ´β12 | " |λ 1 ´µ1 | ď γ 12 " ν 1 ď α 12 `β12 " λ 1 `µ1 and ν 1 ´|λ 1 ´µ1 | even 0 otherwise . Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê Ê ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ ‡ Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ï Ú Ú Ú Ú Ú Ú Ú Ú Ú Ú Ú Ú Ú Ù Ù Ù Ù Ù Ù Ù Ù Á Á Á Figure 1: The Horn-tensor polygon r H αβ " H λµ for the two SU(3) weights λ " p9, 5q µ " p6, 5q, hence α " p14, 5, 0q, β " p11, 5, 0q. The multiplicity increases from 1 to 6 inside the polygon, giving a matriochka pattern to the successive contours. Proposition 6. 1. For an admissible triple, the function J 3 pα, β; γq of eq. ( 54) takes only values that are integral and non negative; as just discussed, these values vanish by continuity along the edges of the polygon; the vertices of the boundary polygon are integral and give admissible γ's; 2. for α " pλq, β " pµq, γ " pνq, J 3 pα, β; γq " N ν λµ ´1; in particular, if some λ i or µ i vanishes, hence α or β are non generic, N ν λµ " 1, a well-known property of SU(3); 3. the points ν of value J 3 p pλq, pµq; pνqq " m, for 0 ď m ă m max form a "matriochka" pattern, see Fig. 4. Now evaluate J 3 at shifted weights λ 1 " λ `ρ, µ 1 " µ `ρ, ρ the Weyl vector p1, 1q, hence α 1 i " i pλq `3 ´i, β 1 i " i pµq `3 ´i and still α 1 3 " β 1 3 " 0. Then J 3 pα 1 , β 1 ; γ 1 q " N ν λµ ( 56 ) with ν such that γ 1 i " i pνq `3 ´i, i " 1, 2, 3. The sum ř γP r H αβ XZ 2 ∆pγq ∆pαq∆pβq J 3 pα, β; γq equals 1 2 ; therefore replacing the sum by an integral over the domain γ 3 ď γ 2 ď γ 1 , see [START_REF] Harish-Chandra | Differential Operators on a Semisimple Algebra[END_REF], gives the same value (namely 1 2 ). Proof. Point 1 follows from Proposition 3, with δ 3 " 1. Integrality of the vertices of the polygon is seen by inspection of Horn's inequalities. Point 4 follows from (32) together with the fact that for n " 3, the polynomial R 3 " 1, see below sec. 4.2. Points 2 follows from (56) and the observation made in [START_REF] Coquereaux | Conjugation properties of tensor product multiplicities[END_REF] that, for SU(3), N ν`ρ λ`ρ µ`ρ " N ν λ µ `1 . 
(57) The matriochka pattern of point 3 matches the similar pattern of points of multiplicity m `1 in the tensor product decomposition λ b µ (cf [START_REF] Coquereaux | Conjugation properties of tensor product multiplicities[END_REF], eq [START_REF] Knutson | The honeycomb model of GL n pCq tensor products II: Puzzles determine facets of the Littlewood-Richardson cone[END_REF]]). Point 5 has already been derived in sec. 2.5 and is here a direct consequence of ř ν N ν λ µ dim V ν " dim V λ dim V µ . We want to stress a remarkable consequence of the above eq. (54,55,56) Corollary 1. The LR coefficients N ν λµ of SU(3) may be expressed as a piecewise linear function of the weights λ, µ, ν, sum of the four terms of (54). To the best of our knowledge, this expression was never given before. Note that the lines of non differentiability of the expression (54) split the plane into at most 9 domains. In each domain, the function J 3 is linear. This is to be contrasted with the known expressions that follow from Kostant-Steinberg formula (see for example [START_REF] Fulton | Representation Theory, A First Course[END_REF], Prop. [25][START_REF] Racah | Group theoretical concepts and methods in elementary particle physics[END_REF][START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF][START_REF] Stanley | Enumerative Combinatorics[END_REF][START_REF] Suzuki | Asymptotic dimension of invariant subspace in tensor product representation of compact Lie group[END_REF] and which involve a sum over two copies of the SUp3q Weyl group. We should also recall that there exist yet another formula for the multiplicity N ν λµ , stemming from its interpretation [START_REF] Knutson | The honeycomb model of GL n pCq tensor products I: proof of the saturation conjecture[END_REF] as the number of integral solutions to the inequalities on the honeycomb variable, N ν λ µ " J 3 pα 1 , β 1 ; γ 1 q " minpα 1 1 , ´β1 3 `γ1 2 , α 1 1 `α1 2 `β1 1 ´γ1 1 q (58) ´maxpα 1 2 , γ 1 3 ´β1 3 , γ 1 2 ´β1 2 , α 1 1 `α1 3 `β1 1 ´γ1 1 , α 1 1 `α1 2 `β1 2 ´γ1 1 , α 1 1 ´γ1 1 `γ1 2 q " 1 `minpλ 1 `λ2 , ν 2 `σ, ν 2 ´µ2 `2σq ´maxpλ 2 , σ, ν 2 ´µ2 `σ, ν 2 ´λ2 ´µ2 `2σ, ν 2 ´µ1 ´µ2 `2σ, λ 1 `λ2 ´ν1 q , where σ :" 1 3 pλ 1 `2λ 2 `µ1 `2µ 2 ´ν1 ´2ν 2 q. See also [START_REF] Bégin | sup3q k fusion coefficients[END_REF][START_REF] Coquereaux | Conjugation properties of tensor product multiplicities[END_REF] for alternative and more symmetric formulae and [START_REF] Coquereaux | On some properties of SU(3) Fusion Coefficients[END_REF] for an expression in terms of a semi-magic square. Remark. The lines or half-lines of non-differentiability of J 3 , as they appear on expression (54), (see also Figures in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF]), are a subset of the lines along which two arguments of the min or of the max functions of (58) coincide. The case of SU(4) The case of SU( 4) is more complicated. Some known features of SU(3) are no longer true. In particular, it is generically not true that multiplicities N ν λµ are equal to 1 on the boundary of the polytope; there is no matriochka pattern, with multiplicities growing as one goes deeper inside the tensor polytope; and relation ( 57) is wrong and meaningless, since pλ `ρ, µ `ρ; ν `ρq cannot be compatible if pλ, µ; νq is. We first recall the expression of J 4 pα, β; γq given in [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. 
Probability distribution functions[END_REF]. With A j standing for A j pP, P 1 , P 2 q in the notations of [START_REF] Fulton | Representation Theory, A First Course[END_REF], J 4 pα, β; γq " 1 2 3 4! ÿ P,P 1 ,P 2 PS 4 ε P ε P 1 ε P 2 pA 1 q ˆ1 3! pA 2 ´A1 qp|A 3 ´A1 | 3 ´|A 3 ´A2 `A1 | 3 ´|A 3 ´A2 | 3 `|A 3 | 3 q ´1 3 pA 2 qp|A 3 | 3 ´|A 3 ´A2 | 3 q ´1 2 p|A 2 ´A1 | ´|A 2 |qp|A 3 ´A2 |pA 3 ´A2 q `|A 3 |A 3 q ˙. (59) One can actually restrict the previous triple sum over the Weyl group to a double sum only while multiplying the obtained result by 4!, and this is quite useful for practical calculations. Then, we have, for an admissible triple pλ, µ; νq of h.w. of SUp4q (with λ, µ R BC, i.e., λ i , µ i ‰ 0), and α " pλq, β " pµq, γ " pνq, Proposition 7. 1. N ν λµ ě 4 inside the tensor polytope. 2. J 4 pα, β; γq vanishes when γ belongs to the faces of the polytope r H αβ ; conversely J 4 does not vanish inside the polytope. 3. At these interior points, 6J 4 pα, β; γq, which is the normalized 3-volume V of the hive polytope H ν λµ , is an integer. 4. That integer satisfies V " 6J 4 pα, β; γq ě N ν λµ ´3. The sum ř γP r H αβ XZ 3 ∆pγq ∆pαq∆pβq J 4 pα, β; γq equals 1 12 , which matches the normalization [START_REF] Harish-Chandra | Differential Operators on a Semisimple Algebra[END_REF]. Proof. Point 1 results from a general inequality in integral d-polytopes that asserts that their number of integral points is larger or equal to d `1, see [START_REF] Beck | Coefficients and Roots of Ehrhart Polynomials[END_REF], Theorem 3.5. Here for points ν inside the tensor polytope, the polytope H ν λ µ is integral and 3-dimensional, hence d " 3. The first part of point 2 has been already amply discussed, while the second one follows from Lemma 1. Points 3 and 5 have been established in sec. 2.5. Point 4 follows from Blichfeldt's inequality (46). The consequences of Theorem 1 on the values of J 4 at shifted weights will be discussed in the next subsection. 1. Based on the study of numerous examples, it seems that for weights ν interior to the tensor polytope, we have the lower bound N ν λµ ě 8. Note that the afore mentioned inequality of Theorem 3.5 of [START_REF] Beck | Coefficients and Roots of Ehrhart Polynomials[END_REF] (which would give the weaker N ν λµ ě 7) is no longer applicable, since the hive polytope is not generally integral for n " 5, see a counter-example in sec. [START_REF] Betke | Lattice points in lattice polytopes[END_REF] For n " 2 and n " 3, the polynomial R n is equal to 1. Indeed: 8 ÿ p 1 "´8 p´1q p 1 u 1 `2πp 1 " 1 2 sinpu 1 {2q " i ∆pe i t j q (60) 8 ÿ p 1 ,p 2 "´8 1 pu 1 `2πp 1 qpu 2 `2πp 2 qpu 1 `u2 `2πpp 1 `p2 qq " 1 2 3 sinpu 1 {2q sinpu 2 {2q sinppu 1 `u2 q{2q " i 3 ∆pe i t j q . ( 61 ) On the other hand, P.V. 8 ÿ p 1 "´8 1 u 1 `2πp 1 " 1 u 1 `8 ÿ p 1 "1 2u u 2 1 ´p2πp 1 q 2 " cospu 1 {2q 2 sinpu 1 {2q " 1 2 i tr T ∆pe i t j q , hence p R 2 pT q " 1 4.2.4 Case n " 6 We have found, after long and tedious calculations 2 7 9! R 6 " 31356 `cospx 1 `x2 `x3 ´x4 ´x5 ´x6 q `perm. : 10 terms in total cospx 1 `x2 `2x 3 ´x4 ´x5 ´2x 6 q `perm. : 90 terms in total 1923 `cospx 1 `x2 `x3 ´x4 ´2x 5 q `perm. : 120 terms in total 284238 `cospx 1 `x2 ´x3 ´x4 q `perm. : 45 terms in total 126 `cosp2x 1 `x2 ´2x 3 ´x4 q `perm. : 180 terms in total 18906 `cospx 1 `x2 ´2x 3 q `perm. : 60 terms in total 1362 `cosp2x 1 ´2x 2 q `perm. : 15 terms in total 1801128 `cospx 1 ´x2 q `perm. : 15 terms in total 4919130 . Alternatively 2 8 9! 
R 6 pT q " 1699488 `715852χ p1,0,0,0,0q pT qpc.c.q `860χ p2,0,0,0,0q pT qpc.c.q `12032 ´χp0,1,0,0,0q pT qχ p2,0,0,0,0q pT q `c.c. ¯`202683χ p0,1,0,0,0q pT qpc.c.q `124χ p1,1,0,0,0q pc.c.q ´5207 ´χp0,0,1,0,0q pT qχ p1,1,0,0,0q pT q `c.c. ¯`10414χ p0,0,1,0,0q pT qpc.c.q `χp1,0,1,0,0q pT qpc.c.q `6876 ´χp1,0,1,0,0q pT qχ p0,1,0,0,0q pT q `c.c. " 2629422χ p0,0,0,0,0q pT q `1670 ´χp0,0,1,1,1q pT q `c.c. ¯`24167χ p0,0,2,0,0q pT q `13826 ´χp0,1,0,0,2q pT q `c.c. ¯`216561χ p0,1,0,1,0q pT q `957461χ p1,0,0,0,1q pT q `χp1,0,2,0,1q pT q `125χ p1,1,0,1,1q pT q `985χ p2,0,0,0,2q pT q . where the last expression is a decomposition as a sum over real representations, with a total dimension 2 8 9!, as it should. We also found : 9! p R 6 pT q " 5422χ p0,0,1,0,0q pT q `χp0,1,1,1,0q pT q `13pχ p0,2,0,0,1q pT q `χp1,0,0,2,0q pT qq 186χ p1,0,1,0,1q pT q `982pχ p0,0,0,1,1q pT q `χp1,1,0,0,0q pT qq . When evaluated at T " 1 we check that the dimension count is correct: p5422, 1, 13, 186, 982q.p20, 1960, 560 ˆ2, 540, 70 ˆ2q " 9!. We leave it to the reader to write the relations involving N ν λµ that follow from (33) and (36), see an example below in sec. 4.4.3. Stretching polynomials The case n " 2 This is a trivial case. Since for any admissible triple, N ν λµ " 1, we have, according to a general result [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF], P ν λµ psq " 1. 4.3.2 The case n " 3 For n " 3, we have, from point 2. in sec. 4.1.2 N ν λµ ´1 " J 3 p pλq, pµq; pνqq and the latter is an homogeneous linear function of s, hence P ν λµ psq " N sν sλ sµ " J 3 p pλq, pµq; pνqq `1 " pN ν λµ ´1qs `1 . This expression is also valid for weights λ and/or µ on the boundary of the Weyl chamber C, in which case, as is well known ("Pieri's rule"), all LR multiplicities equal 1, and then again by the same general result [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF], P ν λµ psq " 1, while as noticed above, J 3 " 0. Likewise as noticed in sec. 2.2.2, if ν lies on the boundary of tensor polytope, (the outer matriochka), N ν λ µ " 1 and thus again, P ν λµ psq " 1. Remark. The property that P ν λµ psq " 1 `spN ν λµ ´1q had been proved in [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF], then recovered in [START_REF] Rassart | A Polynomiality Property for Littlewood-Richardson Coefficients[END_REF] using vector partition functions. The case n " 4 For n " 4, given weights λ, µ R BC, and weights ν interior to the polytope, J 4 p pλq pµq; pνqqq ‰ 0 (assuming that Lemma 1 holds true) and the stretching polynomial P ν λµ psq is of degree exactly 3. Now let us Taylor expand J 4 p psλ `ρq, psµ `ρq; psν `ρqq " s 3 J 4 p pλq, pµq; pνqq `1 2 s 2 a `Opsq , where the coefficient a, stemming here from the first order derivatives of J 4 , will receive shortly a geometric interpretation. The stretching polynomial P ν λµ psq must satisfy the three conditions 1. P ν λµ p1q " N ν λµ , by definition; 2. P ν λµ p0q " 1; 3. P ν λµ psq " J 4 p pλq, pµq; pνqqs 3 `1 2 s 2 a `Opsq, as discussed in (48). Recall now the discussion of sec. 3.1 and 3.2 : J 4 p pλq, pµq; pνqq is 1 6 times the normalized volume V of the hive polytope, and a is half the total normalized area A. There is a unique polynomial satisfying these conditions, namely P ν λµ psq " J 4 p pλq, pµq; pνqqs 3 `1 4 As 2 `´N ν λµ ´J4 p pλq, pµq; pνqq ´1 4 A ´1¯s `1 " 1 6 V s 3 `1 4 As 2 `pN ν λµ ´1 6 V ´1 4 A ´1qs `1 . 
(72) Then the alleged non-negativity of the s coefficient [START_REF] King | Stretched Littlewood-Richardson and Kostka coefficients[END_REF] amounts to The SUp4q hive polytope H ν λµ associated with the branching rule: pp21, 13, 5q, p7, 10, 12q; p20, 11, 9qq. Each integral point (367 of them) stands for a pictograph describing an allowed coupling of this triple, for example the one given in fig. 3. be thought of as a particular point of the tensor polytope and stands itself for a hive polytope of dimension 3 (d " pn ´1qpn ´2q{2 " 3 for SUp4q). It is displayed in fig. 2, right. It has 367 integral points: 160 are interior points, in blue in the figure, and 207 are boundary points. Among the latter, 17 are vertices, in red in the figure, the other boundary points are in brown. The polytope is integral since its vertices are integral -it is always so for SU(4) (see [START_REF] Buch | The saturation conjecture (after A. Knutson and T. Tao)[END_REF], example 2). Every single one of the 367 points of the polytope displayed in fig. 2, right, stands for a pictograph contributing by 1 to the multiplicity of the chosen tensor product branching rule. For illustration, we display one of them on fig. 3; actually we give several versions of this pictograph: first, the isometric honeycomb version and its dual, the O-blade version, and then, the KT-honeycomb version and its corresponding hive. Notice that for the first two kinds of pictographs the external vertices are labelled by Dynkin components of the highest weights, whereas for the last two, they are labelled by Young partitions. N ν λµ ? ě 1 6 V `1 4 A `1 , (73) The hive polytope has 12 facets (eight quadrilaterals, three pentagons and one heptagon), 27 edges, and 17 vertices (and Euler's identity is satisfied: 12 ´27 `17 " 2). Its normalized volume and area are V " 1484 and A " 410. The number of pictographs with prescribed edges gives the following sequence of multiplicities N sν sλ sµ " t367, 2422, 7650, 17535, 33561, 57212, 89972, 133325, 188755, 257746, . . .u, for s " 1, 2 . . . Only the first three terms of this sequence are used to determine the LR polynomial if we impose that its constant term be equal to 1: P ν λ,µ psq " p5936s 3 `2460s 2 `388s`24q{4! From our discussion in sec. 3.1, P ν λ,µ psq should be equal to the Ehrhart polynomial Epsq of the hive polytope; using the computer algebra package Magma [START_REF] Bosma | The Magma algebra system. I. The user language[END_REF] we checked that it is indeed so. The direct calculation of J 4 using (59) gives J 4 p pλq, pµq; pνqq " 742{3, and more generally J 4 p psλq, psµq; psνqq " 742 s 3 {3. Using the same eq. ( 59), we can also calculate J 4 for ρ-shifted arguments: J 4 p psλ `ρq, psµ `ρq; psν `ρqq " ment with our general discussion of sec. 3.4, the first two terms of P ν λµ psq and of J 4 p psλ ρq, psµ `ρq; psν `ρqq are identical, the leading term being also equal to J 4 p psλq, psµq; psνqq. One checks that the leading coefficient of Epsq, hence of P ν λ,µ psq, is equal to 1 3! of the normalized volume of the polytope and that the second coefficient is equal to 1 2 1 2! of the normalized 2-volume of its boundary. In accordance with Ehrhart-Macdonald reciprocity theorem, one also checks that ´P ν λµ p´1q " 160, the number of interior points in the polytope. 
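These numerical statements are easy to verify; the following minimal Python sketch (ours) checks that the quoted polynomial reproduces the first three stretched multiplicities, the 160 interior points via Ehrhart-Macdonald reciprocity, and the volume interpretation of its two leading coefficients, with V = 1484 and A = 410.

from fractions import Fraction as F

coeffs = [F(c, 24) for c in (5936, 2460, 388, 24)]   # P(s) = (5936 s^3 + 2460 s^2 + 388 s + 24)/4!

def P(s):
    return sum(c * F(s) ** k for c, k in zip(coeffs, (3, 2, 1, 0)))

V, A = 1484, 410
assert [P(s) for s in (1, 2, 3)] == [367, 2422, 7650]   # first stretched multiplicities
assert -P(-1) == 160                                    # Ehrhart-Macdonald: interior points
assert coeffs[0] == F(V, 6) == F(742, 3)                # leading coefficient = V/3! = J_4
assert coeffs[1] == F(A, 4)                             # next coefficient = (1/2) A/2!
print("all checks pass")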
Finally, on this example, one can test eq (66) which relates I 4 p pλ `ρq, pµ `ρq; pν `ρqq " 11592 to a sum of the Littlewood-Richardson coefficient N ν λµ and its twelve "neighbors" ν `α appearing in the tensor product ν b p1, 0, 1q. Likewise eq (67) relates 24J 4 p pλq, pµq; pνqq " 1484 to a sum over six weights ν 2 " p18, 10, 9q, p18, 11, 7q, p19, 9, 8q, p19, 11, 8q, p20, 9, 9q, p20, 10, 7q of the product N ν 2 λ´ρ µ´ρ N ν 2 ν´ρ p0,1,0q which takes the respective values 254, 235, 254, 243, 259, 239, the sum being indeed 1484. An example in SU(5) Consider the following tensor branching rule of SUp5q: pλ, µ; νq with λ " p1, 3, 2, 3q, µ " p2, 1, 4, 2q, ν " p3, 1, 4, 3q. The hive polytope H ν λµ has dimension d " 6. We shall see that it is not an integral polytope. We denote Q the convex hull of its integral points. H ν λµ has 66 vertices and 99 points, all of them being boundary points. Q has 64 vertices and 99 points (the latter being the same as for H ν λµ , by definition). Therefore we see that 2 vertices of H ν λµ are not (integral) points of H ν λµ . The normalized volume of H ν λµ is 2544 (it is 2538 for Q). The normalized volume of the boundary of H ν λµ is 3630 (it is 3618 for Q). The polynomial P ν λµ psq, i.e., the Ehrhart polynomial of H ν λµ , is 53s 6 {15 `121s 5 {8 `667s 4 {24 `679s 3 {24 `687s 2 {40 `73s{12 `1. In the case of H ν λµ , we check the first two coefficients related to the 6-volume of the polytope and to the 5-volume of the facets: 2544{6! " 53{15 and 1{2 ˆ3630{5! " 121{8. The Ehrhart polynomial of Q is 141s 6 {40 `603s 5 {40 `665s 4 {24 `679s 3 {24 `259s 2 {15 `92s{15 `1. In the case of Q, the same volume checks read: 2538{6! " 141{40 and 1{2 ˆ3618{5! " 603{40. An independent calculation using the function J 5 gives J 5 p pλq, pµq; pνqq " 53{15, the leading coefficient of the stretching polynomial. In the present example, where H ν λµ and Q differ, it is instructive to consider what happens under scaling. The two vertices of H ν λµ that are not integral points are actually half-integral points, so that they become integral by doubling. The polytope 2H ν λµ has again 66 vertices (by construction), it is integral, it has 1463 points, 18 being interior points and 1445 being boundary points. It could also be constructed as the hive polytope associated with the doubled branching rule p2λ, 2µ; 2νq, and its own Littlewood-Richardson (LR) polynomial, equal to its Ehrhart polynomial, can be obtained from the LR polynomial of H ν λµ by substituting s to 2s. The polytope 2Q has again 64 vertices (of course), it is integral, it has 1460 points, 18 being interior points ans 1442 being boundary points. Since Q Ă H ν λµ we have 2Q Ă 2H ν λµ , but now both polytopes are integral (and they are different). Q and H ν λµ have the same integral points, so, in a sense, they describe the same multiplicity for the chosen triple pλ, µ; νq, however, under stretching (here doubling) of the branching rule, we have to consider 2H ν λµ , not 2Q, otherwise we would miss three honeycombs (" 1463 ´1460) and find an erroneous multiplicity. These three honeycombs correspond to the two (integral) vertices of 2H ν λµ coming from the two (non integral) vertices of H ν λµ that became integral under doubling, plus one extra (integral) point, which is a convex combination of vertices. For illustration purposes we give below the three pictographs (in the O-blade version) that correspond to these three points. 
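Before turning to those pictographs, note that the Ehrhart data quoted above can be cross-checked directly; a minimal Python sketch (ours):

from fractions import Fraction as F

# Ehrhart polynomial of the hive polytope H (coefficients from highest to lowest degree)
H = [F(53, 15), F(121, 8), F(667, 24), F(679, 24), F(687, 40), F(73, 12), F(1)]
# Ehrhart polynomial of Q, the convex hull of the integral points of H
Q = [F(141, 40), F(603, 40), F(665, 24), F(679, 24), F(259, 15), F(92, 15), F(1)]

def ev(p, s):
    return sum(c * F(s) ** k for c, k in zip(p, range(len(p) - 1, -1, -1)))

assert ev(H, 1) == ev(Q, 1) == 99              # same 99 integral points at s = 1
assert ev(H, 2) == 1463 and ev(Q, 2) == 1460   # 2H contains three more points than 2Q
assert F(2544, 720) == F(53, 15)               # normalized 6-volume of H divided by 6!
assert F(3630, 240) == F(121, 8)               # (1/2) x (normalized 5-volume of its boundary)/5!
print("all checks pass")

In particular the two polynomials agree at s = 1 but already differ at s = 2, which is precisely the point made above about 2H versus 2Q.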
We consider the following tensor branching rule of SUp6q: pλ, µ; νq with λ " p1, 3, 1, 2, 1q, µ " p2, 1, 3, 2, 1q, ν " p4, 1, 6, 2, 1q. The multiplicity is 38. For SU [START_REF] Buch | The saturation conjecture (after A. Knutson and T. Tao)[END_REF], the number of fundamental pictographs is 5 ˆ3 `2 ˆ10 but there are 10 syzygies (one for each inner hexagon in the honeycomb picture) so that a basis has 25 elements, the set of 38 (integral) honeycombs is then described as a 25 ˆ38 matrix. The convex hull of these 38 points is then calculated, one finds that it is a 10 dimensional polytope Q (in R 25 ). The obtained polytope -which has no interior point and 38 integral points, 36 of them being vertices-happens not to coincide with the hive polytope H (we are in a situation analogous to the one examined in the previous SU(5) example). A quick study of Q reveals that this polytope, and so H itself, has dimension 10, and that the chosen triple is therefore generic. The fact that H differs from Q can be seen in (at least) three different ways: 1) The Ehrhart polynomial of Q fails to recover the multiplicity of ps λ, s µ; s νq, already for s " 2 where the multiplicity is 511. 2) The leading coefficient (30{9!) of this polynomial, hence the normalized volume of Q, differs from J 6 p pλq, pµq; pνqq " 32{9! determined directly or from Theorem 1 (part 2), we shall come back to this below. 3) A direct determination of the polytope H obtained as an intersection of 45 half-spaces -interpreted for instance as the number of (positive) edges in the oblade picture-will show that H is not an integral polytope (its vertices, aka corners, are rational but not all integral) and its integral part is indeed Q. We leave this as an exercise to the reader. The LR-polynomial associated with the chosen triple, equivalently the Ehrhart polynomial of H, is equal to The coefficient of s 10 , equal to 1{11340 " 32{9! and interpreted as the normalized volume of H, can be obtained from a direct evaluation of the expression of J 6 , but it can also be obtained easily from Theorem 1 (part 2). This double sum (35) involves the seven weights κ together with the seven associated coefficients p r κ that appear in (70) and turns out to involve only the following weights ν 1 : p1, 2, 2, 2, 0q, p1, 2, 3, 0, 1q, p2, 1, 2, 1, 1q, p2, 1, 3, 0, 0q. Most terms are actually zero (because of the vanishing of many Littlewood-Richardson coefficients), and the result is p1`2`2`1`13`13q{9! " 32{9!. s 4. 1 . 4 A 14 few facts about SU[START_REF] Blichfeldt | Notes on geometry of numbers, in the October meeting of the San Francisco section of the AMS[END_REF] 2 A 2 while the counting of interior points, through Ehrhart-Macdonald reciprocity theorem, gives us another lower bound on N ν λµ #pinterior pointsq " ´P ν λµ p´1q " N ν λµ ´p 1 `2q ě 0 . Figure 2 : 2 Figure2: Left: The SUp4q tensor polytope H λµ for λ " p21, 13, 5q, µ " p7, 10, 12q , and its 7092 integral points (distinct irreps). Each such point can itself be thought as a hive polytope, for example the one given on the right. Right:The SUp4q hive polytope H ν λµ associated with the branching rule: pp21, 13, 5q, p7, 10, 12q; p20, 11, 9qq. Each integral point (367 of them) stands for a pictograph describing an allowed coupling of this triple, for example the one given in fig.3. 742 3 s 3 7 O 37 `205 2 s 2 `12s `5 12 . In agree--blade version: edges are non-negative integers, opposite angles (sum of adjacent edges) around the inner points are equal. 
Isometric honeycomb version: opposite angles (sum of adjacent edges) of hexagons are equal. Figure 3 : 3 Figure3: One of the 367 pictographs associated with pp21, 13, 5q, p7, 10, 12q; p20, 11, 9qq. For completeness we also give below the corresponding KT-honeycomb and its dual hive. Figure 4 : 4 Figure 4: The three SUp5q pictographs (O-blade version) associated with p2λ, 2µ; 2νq, with λ "p1, 3, 2, 3q, µ " p2, 1, 4, 2q, ν " p3, 1, 4, 3q that belong to the hive polytope of this doubled branching rule but that do not belong to the double of the integral part of the hive polytope of pλ, µ; νq. 29 4. 4 . 3 2943 An example in SU[START_REF] Buch | The saturation conjecture (after A. Knutson and T. Tao)[END_REF] Take two orbits of the group Upnq acting on H n , labelled by Hermitian matrices A and B, and consider the corresponding orbital measures m A , m B . The convolution product of the latter is defined as usual: with f , a function on H n , one sets ) 1.1.4 Convolution product of orbital measures As mentioned above, it is natural to adopt the following definition Definition 1. A triple pα, β; γq is called generic if J n pα, β; γq is non vanishing. which equals 1, 1 2 , 1 12 , 1 288 , 1 34560 , ¨¨¨for n " 2, 3, 4, ¨¨¨. 2; -(v) it vanishes for ordered γ outside r H αβ ; -(vi) by continuity (for n ě 3) it vanishes for γ at the boundary of r H αβ ; -(vii) it also vanishes whenever at least two components of α or of β coincide 4 : this follows from the antisymmetry mentionned above; -(viii) its normalization follows from that of the probability density p, (normalized of course by ş R n d n γ ppγ|α, βq " 1), hence ż H αβ r d n´1 γ ∆pγq ∆pαq∆pβq J n pα, β; γq " 1 sf pn ´1q (13) Thus for generic points, the two polynomials J n p psλ `ρq, psµ `ρq; psν `ρqq and P ν λµ psq have the same two terms of highest degree d max " pn ´1qpn ´2q{2 and d max ´1. In the degenerate case where the term of degree d max vanishes and the next does not, the leading terms of degree d max ´1 are equal. If the degree is strictly lower than d max ´1, there is no obvious relation between the two polynomials, see examples at the end of sec. 4.3.3. .1.1 and 4.1.2. We thus Taylor expand for large s J n p psλ `ρq, psµ `ρq; psν `ρqq " ÿ r κ ÿ P λ µ ν`k{s psq κPK kPtκu " ÿ κPK r κ ¨dim V κ P ν λ µ psq `1 s kPtκu ÿ k∇ ν P ν λ µ psq `¨¨¨' " P ν λ µ psq ˆ1 `o´1 s ¯˙(48) since ř κPK r κ dim V κ " 1 as noticed above in sec. 2.2, and ř kPtκu k " 0 in any irrep. .4.2. 2. J 5 pα, β; γq vanishes outside (and on the boundary) of the polytope, as already discussed. 3. For a compatible triple pα, β; γq and γ inside the polytope r H αβ , 360J 5 pα, β; γq is a positive integer (see sec. 2.5), provided α and β have only distinct components. It is non vanishing according to Lemma 1. Moreover N ν λµ ď 6! J 5 pα, β; γq `6 according to (46). As in section 2.2 the notation χ λ denotes the character of the Lie group SUpnq associated with the irrep of highest weight λ. Also recall that for n odd, p R n " R n . 4. ř γP r H αβ XZ 4 J 5 pα, β; γq ∆pγq ∆pαq∆pβq " 1 288 , see (13) again. 4.2 The polynomials R n and p R n . Application of Theorem 1 4.2.1 Cases n " 2 and n " 3 In practice we use the normalized Haar measure that makes the volume of Upnq equal to 1. The context being specified, people often simply write "Fourier transform" or "Fourier orbital transform" rather than "spherical transform" or "orbital transform". 
The reader may look at[START_REF] Coquereaux | Conjugation properties of tensor product multiplicities[END_REF] for an explicit descriptions and a few examples of O-blades and isometric honeycombs in the framework of the Lie group SU[START_REF] Berenstein | Triple multiplicities for sl(r `1) and the spectrum of the external algebra in the adjoint representation[END_REF]. See also our SUp4q example in sec. 4.4.1. If α and β are Young partitions describing the highest weights λ, µ of two Upnq or SUpnq irreps, this occurs when some Dynkin label of λ or µ vanishes, i.e., when λ or µ belongs to a wall of the dominant Weyl chamber C. We thank Allen Knutson for pointing this out to us. Our thanks to Michèle Vergne for pointing to that possibility. This result should be connected with the fact that the support of the convolution product of measures on concentric 2-spheres is an annulus. χ 1 pT q, while p R " R 3 " 1. Acknowledgements We acknowledge stimulating discussions with Olivier Babelon, Paul Zinn-Justin and especially Allen Knutson and Michèle Vergne. Consequences of Theorem 1 (i) We start with a useful lemma Lemma 2. With the notations of Theorem 1, we have the relations ÿ κPK r κ dim V κ " 1 (37) Proof. From the relation R n pT q " ř κPK r κ χ κ pT q evaluated at T " I, with R n pIq " 1, it follows that ř κPK r κ dim V κ " 1. Then because of the reality of the irreps of h.w. κ, hence The two relations (38) and (40) are proved in the same way. (ii) Localization of the normalization integral of J n . For two given integral (non negative) α and β, consider the sum of J n pα, β; γq∆pγq over the integral γ's inside the connected part r H αβ of the support of J n . If either α or β is non generic, (i.e., has two equal components), all J n pα, β; γq vanish. Conversely if both α and β are generic, i.e., λ and µ are not on the boundary of the Weyl chamber, we make use of ( 19) and (36) by Lemma 2. (The ν's on the boundary of the Weyl chamber, for which ν ´ρ is not dominant, do not contribute because of the vanishing of J n pα, β; γq.) Comparing with [START_REF] Harish-Chandra | Differential Operators on a Semisimple Algebra[END_REF], we find that In others words, the normalization integral of J n over the sector γ n´1 ď ¨¨¨ď γ 1 localizes over the integral points of that sector. (iii) Quantization of J n . We conclude that in agreement with the general formula (33), provided we assume that the indicator function vanishes at the end points of the interval I. On the other hand, as we shall see below in sec. 4.2.1, p R 2 " 1 2 χ 1 pT q, so that (36) amounts to which is consistent with (49) if we assume now that the indicator function takes the value 1 2 at the end points of the interval I. This rather peculiar situation is a consequence of the irregular, discontinuous, structure of J 2 . The case of SU(3) For n " 3, J 3 takes a simple form within the tensor polytope (here a polygon). In [START_REF] Zuber | Horn's problem and Harish-Chandra's integrals. Probability distribution functions[END_REF], the following was established. The function with A 1 and A 2 as in [START_REF] Fulton | Representation Theory, A First Course[END_REF], may be recast in a more compact form: Proposition 5. Take α 1 ě α 2 ě α 3 , and likewise for β. 
For γ satisfying (12), Horn's inequalities and γ 1 ě γ 2 ě γ 3 , where pγ 2 ´α3 ´β1 q ´pγ 1 ´α1 ´β2 q if γ 2 ´α3 ´β1 ě 0 and γ 1 ´α1 ´β2 ă 0 pγ 3 ´α2 ´β3 q ´pγ 2 ´α3 ´β1 q if γ 3 ´α2 ´β3 ě 0 and γ 2 ´α3 ´β1 ă 0 pγ 1 ´α1 ´β2 q ´pγ 3 ´α2 ´β3 q if γ 1 ´α1 ´β2 ě 0 and γ 3 ´α2 ´β3 ă 0 . J 3 pα, β; γq takes non negative values inside the tensor polygon and vanishes by continuity along the edges of the polygon. It also vanishes whenever two components of α or β coincide (non generic orbits). The non-negativity follows from the interpretation of J 3 as proportional with a positive coefficient to the PDF p. Consider now an admissible triple pλ, µ; νq of highest weights of SUp3q. The associated triple pα, β; γq is defined as explained above, 3 pλ 1 `2λ 2 `µ1 `2µ 2 ´ν1 ´2ν 2 q, an integer, so that ř 3 i"1 pγ i ´αi ´βi q " 0. Then In contrast, for n ě 4, one finds non trivial polynomials R n pT q and p R n pT q. For instance for n " 4, with the notations D 4 , p D 4 and 4 introduced in ( 26) and likewise Now, in SU(4), we can write χ p1,0,1q pT qχ ν pT q " χ ν pT q `ÿ ν 1 χ ν 1 pT q χ p0,1,0q pT qχ ν´ρ pT q " ÿ ν 2 χ ν 2 pT q with a sum over the h.w. ν 1 , resp. ν 2 , appearing in the decomposition of ν b p1, 0, 1q, resp. of pν ´ρq b p0, 1, 0q. Notice that p1, 0, 1q is the highest weight of the adjoint representation, hence one may write ν 1 " ν `α where α runs over the 12 non zero roots α for ν "deep enough" in the Weyl chamber, i.e., provided all ν `α are dominant weights, and over three times the weight 0 . Thus we may write J 4 p pλ `ρq, pµ `ρq; pν `ρqq " where ∆N ν λ µ :" ´N ν λ µ q may be regarded as a second derivative term (a discretized Laplacian), while the "first derivative" term vanishes because of ř α " 0. Example: Take λ " p1, 2, 2q, µ " p2, 2, 1q, ν " p1, 4, 1q, the ν 1 and their multiplicities read pν 1 , N ν 1 ν p1,0,1q q " tp0, 3, 2q, 1q, pp0, 4, 0q, 1q, pp0, 5, 2q, 1q, pp0, 6, 0q, 1q, pp1, 3, 3q, 1q, pp2, 2, 2q, 1q, pp2, 3, 0q, 1q, pp2, 4, 2q, 1q, pp2, 5, 0q, 1q, pp3, 3, 1q, 1q, pp1, 4, 1q, 3qu , J 4 p pλ `ρq, pµ `ρq; pν `ρqq " 97{24 while N ν λµ " 5, ř ν 1 N ν 1 ν p1,0,1q N ν 1 λ µ " 52, the rhs of (66) equals 97{24, and matches the lhs. Note that in that example, only 10 out of the 12 α contribute. There is a second relation, which follows from (36) with the above expression of p R 4 J 4 p pλq, pµq; pνqq " For the previous example λ " p1, 2, 2q, µ " p2, 2, 1q, ν " p1, 4, 1q, three weights ν 2 contribute N ν 2 ν´ρ p0,1,0q " 1, namely p0, 2, 0q, p1, 2, 1q, p0, 4, 0q, but only the first two give N ν 2 λ´ρ µ´ρ " 1, the third has N ν 2 λ´ρ µ´ρ " 0, and the rhs equals 1 3 , which is the value of J 4 p pλq, pµq; pνqq. Case n " 5 For n " 5, likewise R 5 pT q " 1 180 " 45 `12 `cospx 1 ´x2 q `perm. : 10 terms in total cospx 1 `x2 ´x3 ´x4 q `perm. : 15 terms in total ˘ı " 7 72 `1 40 tr T tr T ‹ `1 1440 rptr T q 2 ´tr T 2 src.c.s " 7 72 `1 40 χ p1,0,0,0q pT qχ p1,0,0,0q pT ‹ q `1 360 χ p0,1,0,0q pT qχ p0,1,0,0q pT ‹ q " 1 360 `45 `10χ p1,0,0,1q pT q `χp0,1,1,0q pT q ˘. Comment: note that at T " I, 45 `10 ˆ24 `75 " 360, R 5 pIq " 1, as it should. Then denoting the h.w. appearing in p1, 0, 0, 1q b ν, resp. p0, 1, 1, 0q b ν, by ν 1 , resp. 
ν 2 , R 5 pT qχ ν pT q " 1 360 ´45χ ν pT q `10 ÿ ν 1 χ ν 1 pT q `ÿ ν 2 χ ν 2 pT q ānd 360J 5 p pλ `ρq, pµ `ρq; pν `ρqq " 45N ν λ µ `10 Here again, for ν "deep enough" in C, we can make the formula more precise: ν 1 ´ν runs over the 24 weights (=roots) of the adjoint representation p1, 0, 0, 1q, including 4 copies of 0 and 20 non zero roots α; likewise ν 2 ´ν runs over the 75 weights of the p0, 1, 1, 0q representation, including 5 copies of 0, twice the 20 α and the 30 weights β of the form ˘pα ij ˘α kl q with 1 ď i ă j ă k ă l ď 5 or ˘pα ij `α kl q with 1 ď i ă k ă j ă l ď 5. Here we are making use of the notations αi , 1 ď i ď 4 for the simple roots, and αij " αi `¨¨¨`α j´1 with 1 ď i ă j ď 5 for the positive roots . Thus "deep enough" actually means: all ν `α and ν `β P C. Then (68) reads J 5 p pλ `ρq, pµ `ρq; pν `ρqq " N ν λ µ `1 30 (with 20{30 `30{360 " 3{4). Example. λ " p2, 3, 3, 2q, µ " p3, 2, 3, 2q, ν " p5, 3, 2, 3q, N ν λµ " 211. We find in the lhs of (68) 360J 5 p pλ `ρq, pµ `ρq; pν `ρqq " 63213 while the three terms in the rhs equal respectively 9495, 42010, 11708 with a sum of 63213, qed. In [START_REF] Betke | Lattice points in lattice polytopes[END_REF][START_REF] Beck | Coefficients and Roots of Ehrhart Polynomials[END_REF] inequalities were obtained between coefficients of the Ehrhart polynomial of an integral polytope. Recall that for n " 4, all hive polytopes are integral [START_REF] Buch | The saturation conjecture (after A. Knutson and T. Tao)[END_REF], and we may apply on (72) these inequalities which read which is precisely the Blichfeldt inequality mentioned above at point 4 of sec. 4.1.3. In contrast, for non generic triples pλ, µ; νq, J 4 p pλq, pµq; pνqq " 0, the stretching polynomial is of degree strictly less than 3, and reads in general If the coefficient a is non vanishing, it has now to be interpreted as the normalized area of the 2-dimensional hive polytope (a polygon). If a " 0, either N ν λµ ě 2 and P ν λµ psq " pN ν λµ ´1qs `1, or N ν λµ " 1 and P ν λµ psq " 1, consistent with the result of sec. 3.4 and the two general results P " 1 if N ν λµ " 1 and P " s `1 if N ν λµ " 2. In the former case (dimension 2 polytope, degree 2 Ehrhart polynomial), Erhrart-Macdonald reciprocity theorem gives us an upper bound on N ν λµ ď a `2, while the alleged non-negativity of the s-coefficient gives a lower bound, N ν λµ ě 1 2 pa `2q. Thus one should have Also denoting c :" #internal points " P p´1q " a ´N ν λµ `2, b " # boundary points, b `c :" #total of points " N ν λµ , hence a `2 " b `2c which is Pick's formula for the Euclidean area a{2 " b{2 `c ´1. Examples: Here we denote for short J 1 4 " J 4 p psλ `ρq, psµ `ρq; psν `ρqq. Take λ " p2, 2, 1q, µ " p2, 1, 3q, for ν " p0, 1, 4q, N ν λµ " 3, P ν λµ psq " 1 2 ps `1qps `2q, J 1 4 " 1 12 p6s 2 `15s `7q while for ν " p2, 4, 0q, N ν λµ " 3, P ν λµ psq " 2s `1, J 1 4 " 1 2 p1 `4sq and for ν " p2, 0, 4q, N ν λµ " 4, P ν λµ psq " ps `1q 2 , J 1 4 " 1 4 p4s 2 `7s `2q. Take λ " p3, 0, 3q, µ " p2, 3, 1q, for ν " p3, 4, 0q, N ν λµ " 3, P ν λµ psq " 2s `1, J 1 4 " 1 8 p14s `5q, while for ν " p2, 3, 1q, N ν λµ " 6, P ν λµ psq " ps `1qp2s `1q, J 1 4 " 1 8 p16s 2 `18s `3q. Consider the irreps of highest weight λ " p21, 13, 5q and µ " p7, 10, 12q. Their tensor product contains 7092 distinct irreps ν with multiplicities ranging from 1 to 377. The tensor polytope H λµ is displayed in fig. 2, left. The total multiplicity (sum of multiplicities for the various ν's) is 537186. 
Let us now consider a particular term in the decomposition of the tensor product into irreps: the admissible triple pλ, µ; νq, with ν " p20, 11, 9q, whose multiplicity is equal to 367. This term can
hal-01775105 (2018, en, info.info-os, info.info-dc): https://inria.hal.science/hal-01775105/file/main.pdf
Docker Container Deployment in Fog Computing Infrastructures
Arif Ahmed and Guillaume Pierre
Keywords: Docker, Container, Edge Cloud, Fog Computing

The transition from virtual machine-based infrastructures to container-based ones brings the promise of swift and efficient software deployment in large-scale computing infrastructures. However, in fog computing environments which are often made of very small computers such as Raspberry PIs, deploying even a very simple Docker container may take multiple minutes. We demonstrate that Docker makes inefficient usage of the available hardware resources, essentially using different hardware subsystems (network bandwidth, CPU, disk I/O) sequentially rather than simultaneously. We therefore propose three optimizations which, once combined, reduce container deployment times by a factor up to 4. These optimizations also speed up deployment time by about 30% in datacenter-grade servers.

I. INTRODUCTION
Fog computing extends datacenter-based cloud platforms with additional resources located in the immediate vicinity of the end users. By bringing computation where the input data was produced and the resulting output data will be consumed, fog computing is expected to support new types of applications which either require very low network latency to their end users (e.g., augmented reality applications) or produce large volumes of data which are relevant only locally (e.g., IoT-based data analytics).
Fog computing architectures are fundamentally different from classical cloud platforms: to provide computing resources in the physical vicinity of any end user, fog computing platforms must necessarily rely on very large numbers of small Points-of-Presence connected to each other with commodity networks, whereas clouds are typically organized with a handful of extremely powerful data centers connected by dedicated ultra-high-speed networks. This geographical spread also implies that the machines used in any Point-of-Presence may not be datacenter-grade servers but much weaker commodity machines. As a matter of fact, one option which is being explored is to use single-board computers such as Raspberry PIs for this purpose. Despite their obvious hardware limitations, Raspberry PIs offer excellent performance/cost/energy ratios and are well-suited to scenarios where the device's physical size and energy consumption are important enablers for actual deployment [START_REF] Van Kempen | MEC-ConPaaS: An experimental single-board based mobile edge cloud[END_REF], [START_REF] Hajji | Understanding the performance of low power Raspberry Pi cloud for big data[END_REF]. However, building a high-performance fog platform based on tiny single-board computers is a difficult challenge: in particular these machines have very limited I/O performance.
In this paper, we focus on the issue of downloading and deploying Docker containers in single-board computers. We assume that server machines have limited storage capacity and therefore cannot be expected to keep in cache the container images of many applications that may be used simultaneously in a public fog computing infrastructure. Deploying container images can be painfully slow, in the order of multiple minutes depending on the container's image size and network condition. However, such delays are unacceptable in scenarios such as a fog-assisted augmented reality application where the end users are mobile and new containers must be dynamically created when a user enters a new geographical area.
Reducing deployment times as much as possible is therefore instrumental in providing a satisfactory user experience. We show that this poor performance is not only due to hardware limitations. In fact it results from the way Docker implements the container's image download operation: Docker exploits different hardware subsystems (network bandwidth, CPU, disk I/O) sequentially rather than simultaneously. We therefore propose three optimization techniques which aim to improve the level of parallelism of the deployment process. Each technique reduces deployment times by 10-50% depending on the content and structure of the container's image and the available network bandwidth. When combined together, the resulting "Docker-pi" implementation makes container deployment up to 4 times faster than the vanilla Docker implementation, while remaining totally compatible with unmodified Docker images. Interestingly, although we designed Docker-pi in the context of single-board computers, it also provides 23-36% performance improvements on high-end servers as well, depending on the image size and organization. This paper is organized as follows. Section II presents the background and related work. Section III analyzes the deployment process and points out its inefficiencies. Section IV proposes and evaluates three optimizations. Finally, Section V discusses practicalities, and Section VI concludes. II. BACKGROUND A. Docker background Docker is a popular framework to build, package, and run applications inside containers [START_REF] Inc | Docker: Build, ship, and run any app, anywhere[END_REF]. Applications are packaged in the form of images which contain a part of a file system with the required libraries, executables, configuration files, etc. Images are stored in centralized repositories where they are accessible from any compute server. To deploy a container, Docker therefore first downloads the image from the repository and locally installs it, unless the image is already cached in the compute node. Starting a container from a locally-installed image is as quick as starting the processes which constitute the container's application. The deployment time of any container is therefore dictated by the time it takes to download, decompress, verify, and locally install the image before starting the container itself. 1) Image structure: Docker images are composed of multiple layers stacked upon one another: every layer may add, remove, or overwrite files present in the layers below itself. This enables developers to build new images very easily by simply specializing pre-existing images. The same layering strategy is also used to store file system updates performed by the applications after a container has started: upon every container deployment, Docker creates an additional writable top-level layer which stores all updates following a Copy-on-Write (CoW) policy. The container's image layers themselves remain read-only. Table I shows the structures of the three images used in this paper. 2) Container deployment process: Docker images are identified with a name and a tag representing a specific version of the image. Docker users can start any container by simply giving its name and tag using the command: docker run IMAGE:TAG [parameters] Docker keeps a copy of the latest deployed images in a local cache. When a user starts a container, Docker checks its cache and pulls the missing layers from the docker registry before starting the container. 
This work aims to better understand the hardware resource usage of the Docker container deployment process, and to propose alternative techniques to speed up the download and installation of the required image layers. We assume that the image cache is empty at the time of the deployment request: fog computing servers will most likely have very limited storage capacity so in this context we expect that cache misses will be the norm rather than the exception. B. Related work Many research efforts have recognized the potential of single-board devices for building fog computing infrastructures and have evaluated their suitability for handling cloud-like types of workloads. For instance, Bellavista et al demonstrated that even extremely constrained devices such as Raspberry PIs may be successfully used to build IoT cloud gateways. [START_REF] Bellavista | Feasibility of fog computing deployment based on Docker containerization over RaspberryPi[END_REF]. With proper configuration, these devices can achieve scalable performance with minimal overhead. However, the study assumes that the Docker container images are already cached in the local nodes. In contrast, we focus on the download-and-install part of the container deployment, and show that simple modifications can significantly improve the performance of this operation. A number of approaches propose to improve the design of Docker registries [START_REF] Anwar | Improving Docker registry design based on production workload analysis[END_REF]. CoMICon is a distributed Docker registry which distributes layers of an image among multiple nodes to increase availability and reduce the container provisioning time [START_REF] Nathan | CoMICon: A co-operative management system for docker container images[END_REF]. Distribution allows one to pull an image from multiple registries simultaneously, which reduces the average layer's download times. Similar approaches rely on peer-to-peer protocols instead [START_REF] Babrou | Docker registry bay[END_REF], [START_REF] Ma | Dockyard -container and artifact repository[END_REF]. However, distributed downloading relies on the assumption that multiple powerful servers are interconnected with a high-speed local-area network, and therefore that the main performance bottleneck is the long-distance network to a remote centralized repository. In the case of fog computing platforms, servers will be geographically distributed to maximize proximity to the end users, and they will rarely be connected to one another using high-capacity networks. As we discuss in the next section, the main bottleneck in fog computing nodes is created by hardware limitations of every individual node. Another way to improve the container deployment time is to propose a new Docker storage driver. Slacker proposes to rely on a centralized NFS file system to share the images between all the nodes and registries [START_REF] Harter | Slacker: Fast distribution with lazy Docker containers[END_REF]. The lazy pulling of the container image in the proposed model significantly improves the overall container deployment time. However, Slacker expects that the container image is already present in the local multi-server cluster environment; in contrast, a fog computing environment is made of large numbers of nodes located far from each other, and the limited storage capacity of each node implies that few images can be stored locally for future use. Besides, Slacker requires flattening the Docker images in a single layer. 
This makes it easier to support snapshot and clone operations, but it deviates from the standard Docker philosophy which promotes the layering system as a way to simplify image creation and updates. Slacker therefore requires the use of a modified Docker storage driver (with de-duplication features) while our work keeps the image structure unmodified and does not constraint the choice of a storage driver. We discuss the topic of flattening Docker images in Section V-A. III. UNDERSTANDING THE DOCKER CONTAINER DEPLOYMENT PROCESS To understand the Docker container deployment process in full details we analyzed the hardware resource usage during the download, installation and deployment of a number of Docker images on a Raspberry PI-based infrastructure. A. Experimental setup We monitored the Docker deployment process on a testbed which consists of three Raspberry Pi 3 machines connected to each other and to the rest of the Internet with 10 Gbps Ethernet [START_REF]The mobile edge cloud testbed at IRISA Myriads team[END_REF]. The testbed was installed with Table I depicts the images we used for this study. The first one simply conveys a standard Ubuntu operating system: it is composed of one layer containing most of the content, and four small additional layers which contain various updates of the base layer. We created a so-called Mubuntu image by adding an extra 51 MB layer which represents a typical application's code and data which rely on the base Ubuntu image, following the incremental approach promoted by the Docker system. Finally, the BigLayers image is composed of four big layers which allow us to highlight the effect of the layering system on container deployment performance. We stored these images in the public Docker repository [START_REF] Inc | Docker hub[END_REF] so every experiment includes realistic download performance from a highly-utilized public repository. We instrumented the testbed nodes to monitor the overall deployment time as well as the utilization of important resources during the container deployment process: • Deployment time: We measured deployment times from the moment the deployment command is issued, to the time when Docker reports that the container is started. • Network activities: We monitored the traffic from/to the Docker daemon (excluding other unrelated processes) on the Ethernet interface using NetHogs tool at a 1second granularity. • Disk throughput: We monitored the disk activity with the iostat Linux command which monitors the number of bytes written to or read from disk at a 1-second granularity. • CPU usage: We monitored CPU utilization by watching the /proc/stat file at a 1-second granularity. Every container deployment experiment was issued on an otherwise idle node, and with an empty image cache. B. Monitoring the Docker container deployment process Figure 1 depicts the results when deploying the three images using regular Docker. Figure 1(a) shows the deployment time of our three images in different network conditions: deploying the Ubuntu, Mubuntu and Biglayers images with unlimited network bandwidth respectively takes 240, 333 and 615 seconds. Clearly, the overall container deployment time is roughly proportional to the size of the image. When throttling the network capacity, deployment times grow steadily as the network capacity is reduced to 1 Mbps, 512 kbps, and 256 kbps. For instance, deploying the Ubuntu container takes 6 minutes when the network capacity is reduced to 512 kbps. 
This is considerable with regard to the deployment efficiency one would expect from a container-based infrastructure. However, the interesting information for us is the reason why deployment takes so long, as we discuss next.

Figure 1(b) depicts the utilization of different hardware resources from the host machine during the deployment of the standard Ubuntu image. The red line shows incoming network bandwidth utilization, while the blue curve represents the number of bytes written to the disk and the black line shows the CPU utilization. The first phase after the container creation command is issued involves intensive network activities, which indicates that Docker is downloading the image layers from the remote image registry. By default Docker downloads up to three image layers in parallel. The duration of the downloads clearly depends on the image size and the available network capacity: between 55 s and 110 s for the Ubuntu and Mubuntu images. During this phase, we observe no significant disk activity in the host machine, which indicates that the downloaded file is kept in main memory.

After the download phase, Docker extracts the downloaded image layers to the disk before building the final image of the application. The extraction of a layer involves two operations: decompression (which is CPU-intensive) and writing to the disk (which is disk-intensive). We observe that the resource utilization alternates between periods during which the CPU is busy (∼40% utilization) while few disk activities are performed, and periods during which disk writes are the only notable activity of the system. We conclude that, after the image layers have been downloaded, Docker sequentially decompresses the image and writes the decompressed data to disk. When the image data is big, Docker alternates between partial decompressions and disk writes, while maintaining the same sequential behavior.

We see exactly the same phenomenon in Figure 1(c). However, here, the downloading of the first layer terminates before the other layers have finished downloading. The extraction of the first layer can therefore start before the end of the download phase, creating a small overlap between the downloading and extraction phases.

C. Critical observations

1) Pulling dominates the deployment time: Deploying a container involves pulling the image layers, creating the container and starting it. In our measurements the pull operation accounts for nearly all of the deployment time, while the other two take a negligible amount of time. In this paper, we therefore focus on optimizing the pull operation.

2) Pulling image layers in parallel: By default, Docker downloads image layers in parallel with a maximum parallelism level of three. These layers are then decompressed and extracted to disk sequentially starting from the first layer. However, when the available network bandwidth is limited, downloading multiple layers in parallel will delay the download completion of the first layer, and therefore will postpone the moment when the decompression and extraction process can start. Therefore, delaying the downloading of the first layer ultimately leads to slowing down the extraction phase.

3) Single-threaded decompression: Docker always ships the image layers in compressed form, usually implemented as a gzipped tar file. This reduces the transmission cost of the image layers but it increases the CPU demand on the client node to decompress the images before extracting the image to disk. Docker decompresses the images via a call to the standard gunzip.go function, which happens to be single-threaded. However, even very limited machines usually have several CPU cores available (4 cores in the case of a Raspberry Pi 3).
The whole process is therefore bottlenecked by the single-threaded decompression. As a result the CPU utilization never grows beyond ∼40% of the four cores of the machine, wasting precious computation resources which may be exploited to speed up image decompression.

4) Resource under-utilization: The standard Docker container deployment process under-utilizes the available hardware resources. Essentially, deploying a container begins with a network-intensive phase during which the CPU and disk are mostly idle. It then alternates between CPU-intensive decompression operations (during which the network and disk are mostly idle) and I/O-intensive image extraction operations (during which the network and CPU are mostly idle). The only case where these operations slightly overlap are images such as Mubuntu and BigLayers, where the decompress and extraction process of the first layer can start while the last layers are still being downloaded. This resource under-utilization is one of the main reasons for the poor performance of the overall container deployment process. The main contribution of this paper is to show how one may reorganize the Docker deployment process to maximize resource utilization during deployment.

IV. OPTIMIZING THE CONTAINER DEPLOYMENT PROCESS

To address the inefficiencies presented in the previous section we propose and evaluate three optimizations which deal with different issues in the deployment process. They can therefore be combined together, which brings significant performance improvements.

A. Sequential image layer downloading

Downloading multiple image layers simultaneously obviously aims to maximize the overall network throughput. However, it has negative effects because the decompress and extraction phases of each image layer must take place sequentially to preserve the Copy-on-Write policy of Docker storage drivers. The decompress & extract phase can start only after the first layer has been downloaded. Downloading multiple image layers in parallel will therefore delay the download completion of the first layer because this download must share network resources with other image layer downloads, and will therefore also delay the moment when the first layer can start its decompress & extract phase. We therefore propose to download image layers sequentially rather than concurrently.

Figure 2 illustrates the effect of downloading layers sequentially rather than in parallel. In both cases, three threads are created to handle the three image layers. However, in the first option the downloads take place in parallel, and the only required inter-thread synchronization is that the decompression and extraction of layer n can start only after the decompression and extraction of layer n-1 has completed. In sequential downloading, the second layer starts downloading only when the first download has completed, which means that it takes place while the first layer is being decompressed and extracted to disk. This allows the first-layer extraction to start sooner, and it also increases resource utilization because the download of the next layer overlaps with the decompression and extraction of the previous one.

Implementing sequential downloading can be done easily by setting the "max-concurrent-downloads" parameter to 1 in the /etc/docker/daemon.json configuration file.
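In practice, the change described above amounts to a one-line daemon configuration. A minimal sketch of /etc/docker/daemon.json enforcing sequential layer downloads could look as follows; any other options already present in the file on a given node would simply be kept alongside this entry:

```json
{
  "max-concurrent-downloads": 1
}
```

The Docker daemon must be restarted (for example with systemctl restart docker on a systemd-based system) before the new value takes effect.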
Figure 3(a) depicts the resource usage of the host machines when deploying the Mubuntu image with sequential downloading and a 1 Mbps network capacity. We observe that after the downloading of layer 1 has completed, the utilization of hardware resources is much greater, with in particular a clear overlap between periods of intensive network, CPU and I/O activity. We can also observe that the decompression of the first layer (visible as the first spike of CPU utilization) takes place sooner than in Figure 1(c).

Figure 3(b) compares the overall container deployment times with parallel and sequential downloads in various network conditions. When the network capacity is unlimited the performance gains in the deployment of the Ubuntu, Mubuntu and BigLayers images are 3%, 4.2% and 6% respectively. However, the performance gains grow steadily as the available network bandwidth gets reduced. With a bandwidth cap of 256 kbps, sequential downloading brings improvements of 6% for the Ubuntu image, 10% for Mubuntu and 12% for BigLayers. This is due to the fact that slower network capacities exacerbate the duration of the download phases and increase the delaying effect of parallel layer downloading.

B. Multi-threaded layer decompression

By default, Docker uses the gunzip.go library to decompress the downloaded image layers before extracting them to the disk. However, this function is single-threaded, which implies that the CPU utilization during decompression never exceeds 40% of the four available cores in the Raspberry Pi machine. We therefore propose to replace the single-threaded gunzip.go library with a multi-threaded implementation so that all the available CPU resources may be used to speed up this part of the container deployment process. We use pgzip, which is a multi-threaded implementation of the standard gzip/gunzip functions [START_REF] Post | klauspost/pgzip: Go parallel gzip (de)compression[END_REF]. Its functionalities are exactly the same as those of the standard gzip, however it splits the work between multiple independent threads. When applied to large files of at least 1 MB, this can significantly speed up decompression.

Figure 4(a) depicts the deployment time of a single-layered image while using various numbers of threads for decompression. When pgzip uses a single thread, the performance and CPU utilization during decompression are very similar to the standard gunzip implementation. However, when we increase the number of threads, the overall container deployment time decreases from 154 s to 136 s. At the same time, the CPU utilization during decompression steadily increases from 40% to 71% of the four available CPU cores. If we push beyond 12 threads, no additional gains are observed. We clearly see that the parallel decompression does not scale linearly, as it is not able to exploit the full capacity of the overall CPU: this is due to the fact that gzip decompression must process data blocks of variable size, so the decompression operation itself is inherently single-threaded [START_REF] Sitaridi | Massively-parallel lossless data decompression[END_REF]. The benefit of multi-threaded decompression is that other necessary operations during decompression, such as data buffering and CRC verification, can be delegated to other threads and moved out of the critical path.

Figure 4(b) shows the effect of using parallel decompression when deploying Mubuntu images with 12 threads. We observe that the CPU utilization is greater during the decompression phases than with standard Docker, in the order of 70% utilization instead of 40%. Also, the decompression phase is notably shorter.
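To illustrate the kind of substitution this subsection describes, the sketch below decompresses a gzipped layer archive with pgzip instead of the standard gzip reader. It is a simplified, stand-alone example rather than the actual Docker patch; the file names and the block-size/read-ahead parameters are illustrative assumptions.

```go
package main

import (
	"io"
	"os"

	"github.com/klauspost/pgzip"
)

// decompressLayer decompresses a gzipped layer archive (e.g. layer.tar.gz)
// using pgzip instead of the standard gzip reader. pgzip decodes the gzip
// stream itself in a single goroutine, but moves read-ahead buffering and
// CRC verification to worker goroutines, which is where the speedup
// reported above comes from.
func decompressLayer(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	// 1 MB blocks with 12 blocks of read-ahead: illustrative values only.
	gz, err := pgzip.NewReaderN(in, 1<<20, 12)
	if err != nil {
		return err
	}
	defer gz.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, gz)
	return err
}

func main() {
	// Hypothetical file names, used for illustration only.
	if err := decompressLayer("layer.tar.gz", "layer.tar"); err != nil {
		os.Exit(1)
	}
}
```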
Figure 4(c) compares the overall container deployment times with parallel decompression against those of standard Docker. The network performance does not influence the duration of the decompression phase, so we conducted the evaluation only with an unlimited network capacity. The performance gain from multi-threaded decompression is similar for all three images, in the order of 17% of the overall deployment time.

C. I/O pipelining

Despite the sequential downloading and the multi-threaded decompression techniques, the container deployment process still under-utilizes the hardware resources. This is due to the sequential nature of the workflow which is applied to each individual layer: each layer is first downloaded in its entirety, then it is decompressed entirely, then it is extracted to disk. This requires Docker to keep the entire decompressed layer in memory, which can be significant considering that a Raspberry Pi 3 has only 1 GB of main memory. Also, it means that the first significant disk activity can start only after the first layer has been fully downloaded and decompressed. Similarly, Docker necessarily decompresses and extracts the last layer to disk while the networking device is mostly inactive.

However, there is no strict requirement for the download, decompression and extraction of a single layer to take place sequentially. For example, decompression may start right after the first bytes of the compressed layer have been downloaded. Similarly, extracting the layer may start immediately after the beginning of the layer image has been decompressed. We therefore propose to reorganize the download, decompression and extraction of a single layer into three separate threads where each thread pipelines data to the next as soon as some data is available. In Unix shell syntax this essentially replaces the sequential "download; decompress; crc-check; extract" command with the concurrent "download | decompress | crc-check | extract" command. Since we stream the incoming downloaded data without buffering the entire layer, the extraction can start writing content to disk long before the download process has completed.

We implemented pipelining using the io.Pipe() Go API [START_REF]The Go Authors[END_REF], which creates a synchronized in-memory pipe between an io.Reader (pgzip/decompress) and an io.Writer (network/download) without internal buffering. However, we must be careful about synchronizing this process between multiple image layers: for example, if we created an independent pipeline for each layer separately, the result would violate the Docker policy that layers must be extracted to disk sequentially, as one layer may overwrite a file which is present in a lower layer. If we extracted multiple layers simultaneously we could end up with the wrong version of the file being given to the container. Rather than building complex synchronization mechanisms, we instead decided to rely on Docker's sequential downloading feature already discussed in Section IV-A. When a multi-layer image is deployed, this imposes that layers are downloaded and extracted one after the other, while using the I/O pipelining technique within each layer.
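The following sketch illustrates the general shape of such a pipeline with the standard io.Pipe() API: the download streams into one end of an in-memory pipe, decompression reads from the other end as data arrives, and the decompressed stream is extracted to disk as it is produced. The registry URL, the target directory and the use of an external tar command are illustrative assumptions, and the sketch omits Docker's digest verification and layer bookkeeping.

```go
package main

import (
	"io"
	"net/http"
	"os"
	"os/exec"

	"github.com/klauspost/pgzip"
)

// deployLayer downloads, decompresses and extracts one compressed layer,
// with the three steps overlapped through an in-memory pipe instead of
// being executed one after the other.
func deployLayer(url, destDir string) error {
	pr, pw := io.Pipe()

	// Stage 1: download the compressed layer and stream it into the pipe.
	go func() {
		resp, err := http.Get(url)
		if err != nil {
			pw.CloseWithError(err)
			return
		}
		defer resp.Body.Close()
		_, err = io.Copy(pw, resp.Body)
		pw.CloseWithError(err)
	}()

	// Stage 2: decompress the stream as it arrives from the pipe.
	gz, err := pgzip.NewReader(pr)
	if err != nil {
		return err
	}
	defer gz.Close()

	// Stage 3: extract the decompressed tar stream to disk as it is produced.
	// A real implementation would walk the archive with archive/tar; calling
	// the tar command keeps this illustration short.
	cmd := exec.Command("tar", "-x", "-C", destDir)
	cmd.Stdin = gz
	return cmd.Run()
}

func main() {
	// Hypothetical registry URL and target directory.
	if err := deployLayer("https://registry.example.com/layer.tar.gz", "/tmp/layer"); err != nil {
		os.Exit(1)
	}
}
```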
Figure 5 evaluates the I/O pipelining technique using a single-layer image. We can see that the pipelined version is roughly 50% faster than its standard counterpart: in the standard deployment, resources are used one after the other (first the network-intensive download, then the CPU-intensive decompression, and finally the disk-intensive image creation). In the pipelined version all operations take place simultaneously, which better utilizes the available hardware and significantly reduces the container deployment time.

D. Docker-pi

The three techniques presented previously address different issues: sequential downloading of the image layers speeds up the downloading of the first layer in slow network environments, multi-threaded decompression exploits the otherwise idle CPU cores, and I/O pipelining overlaps the download, decompression and extraction phases within each layer. We therefore propose Docker-pi, an optimized version of Docker which combines the three techniques to optimize container deployment on single-board machines.

Figure 6(a) shows the resource usage while deploying the Mubuntu image using Docker-pi. We can clearly see that the networking, CPU and disk resources are used simultaneously and have a much greater utilization than with the standard Docker implementation. In particular, the CPU and disk activities start very early after the first few blocks of data have been downloaded.

Figure 6(b) highlights significant speedups compared to vanilla Docker: with no network cap, Docker-pi is 73% faster than Docker for the Ubuntu image, 65% faster for Mubuntu and 58% faster for BigLayers. When we impose bandwidth caps the overall deployment time becomes constrained by the download times, while the decompression and extraction operations take place while the download is in progress. In such bandwidth-limited environments the deployment time therefore cannot be reduced any further other than by pre-fetching images before the container deployment command is issued.

The reason why the gains are slightly lower for the Mubuntu and BigLayers images is that the default Docker download concurrency degree of 3 already makes them benefit from some of the improvements we proposed in Docker-pi. If we increase the concurrency degree of Docker to 4, the BigLayers image deploys in 644 s whereas Docker-pi needs only 207 s, which represents a 68% improvement.

V. DISCUSSION

A. Should we flatten all Docker images?

Flattening Docker images may arguably provide performance improvements in the deployment process. Indeed, multiple image layers may contain successive versions of the same file whereas a flattened image contains only a single version of every file. A flattened image is therefore slightly smaller than its multi-layered counterpart. Systems such as Slacker actually rely on the fact that images have been flattened [START_REF] Harter | Slacker: Fast distribution with lazy Docker containers[END_REF]. On the other hand, Docker-pi supports both flattened images and unmodified multi-layer images. We however do not believe that flattening all images would bring significant benefits.

Docker does not provide any standard tool to flatten images. This operation must be done manually by exporting an image with all its layers, and re-importing the result as a single layer while re-introducing the startup commands from all the initial layers. The operation must be redone every time any update is made in any of the layers. Although this process could be integrated in an image build workflow, it contradicts the Docker philosophy which promotes incremental development based on image layer reusability. In a system where many applications execute concurrently, one may reasonably expect many images to share at least the same base layers (e.g., Ubuntu) which produce a standard execution environment.
If all images were flattened this would result in large amounts of redundancy between different images, creating the need for sophisticated deduplication techniques [START_REF] Harter | Slacker: Fast distribution with lazy Docker containers[END_REF]. On the other hand, we believe that the layering system can be seen as a domain-specific form of de-duplication which naturally integrates in a developer's devops workflow. We therefore prefer keeping Docker images unmodified, and demonstrated that container deployment can be made extremely efficient without the need for flattening images.

B. Does Docker-pi also work for powerful server machines?

Although we designed Docker-pi for single-board machines, the inefficiencies of vanilla Docker also exist in powerful server environments. We therefore evaluated the respective performance of Docker and Docker-pi in the Grid'5000 testbed which is commonly used for research on parallel and distributed computing including Cloud, HPC and Big Data [START_REF] Balouek | Adding virtualization capabilities to the Grid'5000 testbed[END_REF]. We specifically used a Dell PowerEdge C6220 server equipped with two 10-core Intel Xeon E5-2660v2 processors running at 2.2 GHz, 128 GB of main memory and two 10 Gbps network connections.

Figure 6(c) compares the deployment times of Docker and Docker-pi with our three standard images. Obviously container deployment is much faster in this environment than on Raspberry PIs. However, here as well Docker-pi provides a respectable performance improvement in the order of 23-36%. In this powerful server the network and CPU resources cannot be considered as bottlenecks, so the sequential layer downloading and multi-threaded decompression techniques bring little improvement compared to the standard Docker. On the other hand, the sequential nature of the download/decompress/extract process is still present regardless of the hardware architecture, so the I/O pipelining technique brings similar performance gains as with the Raspberry PI.

VI. CONCLUSION

The transition from virtual machine-based infrastructures to container-based ones brings the promise of swift and efficient software deployment in large-scale computing infrastructures. However, this promise is not yet fulfilled in fog computing platforms, which are often made of very small computers such as Raspberry PIs, and where deploying even a very simple Docker container may take multiple minutes. We identified three sources of inefficiency in the Docker deployment process and proposed three optimization techniques which, once combined together, speed up container deployment roughly by a factor of 4. Last but not least, we demonstrated that these optimizations also bring significant benefits in regular server environments.

This work eliminates the unnecessary delays that take place during container deployment. Depending on the hardware, deployment time is now dictated only by the slowest of the three main resources: network bandwidth, CPU, or disk I/O. As hardware evolves in the future the bottleneck may shift from one to the other. But, regardless of the specificities of any particular machine, Docker-pi will exploit the available hardware to its fullest extent.
Figure 1. Deployment times and resource usage using standard Docker
Figure 2. Standard and sequential layer pull operations
Figure 3. Resource usage and deployment time with sequential layer downloading
Figure 5. Evaluation of I/O pipelining
Figure 6. Evaluation of Docker-pi in RPI and powerful server machines
01775113
en
[ "shs.socio", "shs.scipo" ]
2024/03/05 22:32:18
2017
https://hal.science/hal-01775113/file/Towards%20Cities%20of%20Informers.pdf
Anaïk Purenne, "TOWARDS CITIES OF INFORMERS? COMMUNITY-BASED SURVEILLANCE IN FRANCE AND CANADA"

What are the effects of citizen-based surveillance? Examining contrasted programs in France and Canada, this article shows that citizen involvement in surveillance actions can have ambivalent, multifaceted effects. Participatory surveillance can help to strengthen the community's sense of belonging, while paradoxically contributing to instil fear. However, these initiatives do not inevitably lead to a culture of generalized suspicion. Depending on the ability of residents to open up controversial subjects for debate, such programs can also leave the way open to a democratization of public action.

Introduction

Since the end of the 1980s, the sociological reflection on the new political uses of "surveillance" (i.e., data collection and analysis) has emerged as an important theme in the academic literature under the label of Surveillance Studies. In both North America and Europe, research attention has focused on surveillance technologies as an instrument of subtle and diffuse social control, and on their effects in terms of privacy and social inequalities. This research agenda, however, has minimized forms of data collection that are not (or not necessarily) based on new technologies, in particular citizen-based surveillance. "Participatory surveillance" (Hier and Greenberg 2009) exercised by the average citizen has long been negatively perceived by the institutions in charge of social control. Indeed, citizen-based surveillance was often dedicated to identifying inadequate practices and abuses of force, or to compensating for the police's failure to act in deprived areas [START_REF] Marx | Commentary: Some Trends and Issues in Citizen Involvement in the Law Enforcement Process[END_REF][START_REF] Vindevogel | La mobilisation des énergies privées pour l'amélioration de la sécurité publique à New York[END_REF]. However, this kind of community-based surveillance is now perceived and used by public institutions as a means to extend their own capacity for surveillance, as is illustrated by the spread of public vigilance campaigns against terrorism [START_REF] Chan | The New Lateral Surveillance and a Culture of Suspicion, In: "Surveillance and Governance: Crime Control and Beyond[END_REF][START_REF] Larsen | Public Vigilance Campaigns and Participatory Surveillance after 11 September 2001[END_REF]. For the police, citizen involvement also offers the opportunity to improve their image and to give the impression of being close to local communities, the members of which are encouraged to take an active part in crime prevention (Garland 2001: 123s) or in national security programs. As Mark Andrejevic suggests (2005), such citizen involvement pushed by top-down programs can be analyzed as an extension of the government surveillance toolbox. In this sense, it deserves the same attention as CCTV, computerized databases, electronic monitoring, and other technological devices [START_REF] Larsen | Public Vigilance Campaigns and Participatory Surveillance after 11 September 2001[END_REF][START_REF] Parnaby | Natural Surveillance, Crime Prevention, and the Effects of Being Seen[END_REF]. The present article focuses on the formalization of community-based surveillance in crime-prevention citizens' groups.
Two prominent approaches characterize the literature on community-based initiatives such as Neighborhood Watch: on the one hand, quantitative evaluation research that addresses, for instance, the overall "effectiveness" of these programs to deter crime (see for example [START_REF] Bennett | The Effectiveness of Neighborhood Watch[END_REF][START_REF] Rosenbaum | The Theory and Research behind Neighborhood Watch: is it a sound Fear and Crime Reduction Strategy?[END_REF], and, on the other hand, in-depth ethnographic observation of anti-crime citizens' initiatives in a given city [START_REF] Bénit | Nous avons dû prendre la loi entre nos mains'. Pouvoirs publics, politique sécuritaire et mythes de la communauté à Johannesburg[END_REF][START_REF] Raoulx | Un centre ville 'safe and clean' (sûr et propre)? Marginalité, interventions urbanistiques et contrôle social en Amérique du Nord. Les exemples de Vancouver (Colombie-Britannique) et de Richmond (Virginie)[END_REF][START_REF] Schneider | Refocusing Crime Prevention. Collective Action and the Quest for Community[END_REF][START_REF] Vindevogel | La mobilisation des énergies privées pour l'amélioration de la sécurité publique à New York[END_REF]. While these studies have generated empirical knowledge on a wide range of forms and meanings associated with community-based surveillance [START_REF] Dupont | Police communautaire et de résolution des problèmes[END_REF], the results are hardly cumulative given the absence of any explicit theoretical framework and/or comparative dimension.

For his part, Gary Marx has proposed a research program for considering citizen involvement in anti-crime groups across time and place. His underlying hypothesis is that the coproduction of security has ambiguous and contradictory effects. On the one hand, these initiatives can facilitate cooperation among inhabitants of various backgrounds, as the perception of shared concerns cements local communities and prompts them to take responsibility for local issues. On the other hand, they can generate unanticipated effects that require careful examination. First, insofar as poor neighborhoods generally lack the social resources needed to mobilize around local problems, community-based surveillance programs can increase social and spatial inequalities by giving rise to a two-tier system (on this topic, see [START_REF] Schneider | Refocusing Crime Prevention. Collective Action and the Quest for Community[END_REF]. Second, and most importantly, there is a risk that, as [START_REF] Marx | Commentary: Some Trends and Issues in Citizen Involvement in the Law Enforcement Process[END_REF] observes, fear of crime and suspicion of others will be exacerbated, prompting the emergence of a "nation of informers."

This last idea echoes the related literature on "lateral surveillance." While civilian participation is often assumed to give a democratic face to surveillance, Andrejevic emphasizes that "the result has not so much been a democratization of politics […], but the injunction to embrace strategies of law enforcement. […] In an era in which everyone is to be considered potentially suspect, we are invited to become spies" (2005: 494). The culture of suspicion may indeed result in reinforcing racial stereotyping and racism [START_REF] Chan | The New Lateral Surveillance and a Culture of Suspicion, In: "Surveillance and Governance: Crime Control and Beyond[END_REF]. The purpose of this article is to discuss this hypothesis through empirical investigation in Canada and France.
The marked contrasts between these two contexts (see below) allow us to appreciate the degree of convergence or divergence regarding the effects of civilian crime prevention initiatives. Indeed, comparing contrasted rather than similar cases better reflects the plurality of social reality [START_REF] Giraud | Les défis de la comparaison à l'âge de la globalisation : pour une approche centrée sur les cas les plus différents inspirée de Clifford Geertz[END_REF]. This attention to plurality is especially important because "often, surveillance technologies and practices are seen as being undesirable, antithetical to democracy and individual autonomy" (Albrechtslund and Glud 2010: 235). Covering very different contexts may provide a more nuanced picture of surveillance. In Anglo-American countries, citizen involvement in self-defense and crime-prevention groups has a long and complex history [START_REF] Brown | The American Vigilante Tradition[END_REF][START_REF] Wilson | Broken Windows. The Police and Neighborhood Safety[END_REF]. As in other countries, "the informal watching of communities by their members [preceded] the institution of public police" (Chan 2008: 224). Yet the tradition of vigilantism remained vivid even after the creation of regular, professional police forces, and vigilante groups have sometimes taken the law into their own hands in an attempt to substitute for the justice system. Unlike these earlier forms of civilian participation, most contemporary initiatives are devoted to protecting local communities by reporting crime to the authorities [START_REF] Marx | Citizen Involvement in the Law Enforcement Process[END_REF]. Significantly, the formalization of citizen-based surveillance into neighborhood patrols and Neighborhood Watch groups in the 1970s and 1980s has often been supported by governmental agencies or by voluntary police associations such as the National Sheriffs' Association. While these surveillance programs have received most media and academic attention, they are only the tip of the iceberg. Community mobilization is not confined to police-run programs in which residents are trained to become the eyes and ears of the police and help them through information sharing. As Michele Elizabeth Cairns outlines, "for a number of reasons, citizens have become involved in crime prevention. Some of them include a desire to increase the livability in their neighborhoods, to educate themselves on ways to protect against crime and to avoid victimizations, and to address underlying reasons for criminality" (Cairns 1998: 17). In cities like New York and Vancouver, such bottom-up, grassroots initiatives were implemented by residents who had become aware of the police's inability to solve local problems on their own. The Community Policing Centers that we studied in Vancouver offer a good example of these grassroots initiatives. Community-based surveillance developed more recently in some European countries. This late development coincides with the decline of Neighborhood Watch programs in the Anglo-American world as a result of limited effectiveness and lack of participation [START_REF] Chan | The New Lateral Surveillance and a Culture of Suspicion, In: "Surveillance and Governance: Crime Control and Beyond[END_REF]. In France, Claude Guéant, Home Minister in the former right-wing government of Nicolas Sarkozy, promoted the so-called "Voisins vigilants" in the mid-2000s. This government policy mainly focuses on information sharing between the police and citizens. 
Explicit reference is made to the Anglo-Saxon model of Neighborhood Watch, which is seen as an appropriate model of action, especially for suburban areas. It is stated that, in these neighborhoods, "part of the local population is present throughout the day, and there is preexisting social cohesion." 1 Without being officially supported by the left-wing government elected in 2012, these initiatives have not been abolished despite strong media criticism. Instead, the Voisins vigilants programs have been renamed under the more neutral label of "civic participation," and continue to develop to this day. According to the latest official report published by the Gendarmerie Nationale, 2 the figures rose from eight programs in mid-2011 to 123 in mid-2012 and to 484 in mid-2013. The Home Ministry indicates that these numbers (which do not even give the full picture since they only concern rural and suburban areas 3) are likely to double in the next few years.

In addition to the different origins of the programs studied, another strong contrast lies in the diametrically opposed political cultures of France and Canada. In France, a strong emphasis is laid on the nation-state, which plays a central role in maintaining social order. In this country, the mere mention of strengthening the role of communities and intermediary bodies leads to heated political debate [START_REF] Donzelot | Faire société. La politique de la ville aux Etats-Unis et en France[END_REF]. Hence the vehement criticism voiced against the idea that local communities might develop a capacity for social control, and the widely held perception that public safety should remain a matter of professional state actors [START_REF] Robert | Les territoires du contrôle social, quels changements ?[END_REF]. By contrast, Anglo-American countries such as the United States and Canada are more open to community and lateral surveillance. For instance, the US has "more formal public and private programs for involving citizens in information gathering and less ambivalence toward (and suspicion of) such efforts. This attitude reflects Anglo-American traditions of government and police in principle being a part of the community. […] The English language has no equivalent for the French la délation, the activity of informers (called les corbeaux, for crows)" (Marx 2013: 59).

A further difference that needs to be highlighted concerns the degree of urbanization of the sites under study. In France, the sites we selected are located in a suburban zone of the Essonne department, which was one of the pilot areas used under the governmental Voisins vigilants policy. The city of Breuillet (approximately 8,000 inhabitants) and the adjacent village of Saint-Yon (approximately 1,000 inhabitants) signed an official agreement with representatives of state agencies in October 2012.

Notes:
1 http://asset.rue89.com/files/cir_33332.pdf [Accessed: January 20, 2013].
2 Direction Générale de la Gendarmerie Nationale, Participation citoyenne: bilan de la mise en oeuvre, 19 septembre 2013. The Gendarmerie Nationale is a military institution under the Home Ministry that is in charge of public safety and has jurisdiction over rural areas and small towns.
3 To our knowledge, no official estimate is publicly available regarding urban areas and larger towns placed under the Police Nationale jurisdiction, which is the main civil law enforcement agency in France.
While these two municipalities are fairly representative of the French context, their population density and social composition (with an overwhelming majority of white residents) are very different from those of the site selected in Canada. Vancouver, which is one of the most multicultural cities in Canada, is populated by approximately 600,000 inhabitants distributed across 24 districts. Almost half of these districts have developed a Community Policing Center since the mid-1990s. These differences in the social composition of the neighborhoods and urban/suburban features appear to play an important role in terms of perceptions of insecurity and suspicion of others. Indeed, it is generally believed that inner cities with a high population density and mixed communities foster the acceptance of otherness. On the contrary, low density suburban areas marked by greater social homogeneity are said to encourage rejection of other people and to foster identitarian closure. The contrasts between these two contexts allow us to consider the potential diversity of "lateral surveillance" and of its implications for citizens. Both case studies were based on document analysis (official speeches, policy guidelines, agency reports, newspapers) and a series of interviews (n=circa 30). Respondents were recruited through a snowball method. Half of them were volunteers or community organizers; the other half were police officers or elected officials in communication with the community-based crime-prevention groups we studied. Some of them were top officials (mayor, chief of police), and others were lower grades (front-line or neighborhood police officers). The questions asked concerned the following: organization of the group, means for recruiting participants, motivation to participate and issues at stake, type of operations initiated, relationship between the group and local authorities, theory of crime prevention, and potential added value of these initiatives. It must be outlined that most interviews in Vancouver were conducted in 2012 and 2013, at a time when the police were questioning the Community Policing Centers, whereas the study in France took place in 2013, that is, when the Voisins vigilants programs were becoming popular. These contextual differences, as well as the small number of participants actually interviewed, constitute obvious limitations of the present analysis. This article, however, is not intended to reach any firm conclusions or broad generalizations; instead, it is exploratory in nature. The purpose of this comparative analysis is to identify the conditions under which surveillance initiatives can result in positive or negative effects. Indeed, "questions concerning the potential of surveillance for contributing to individual autonomy and dignity, fairness and due process, community cooperation […] have been rare in the field" (Monahan et al. 2010: 106). These contrasting experiences confirm that anti-crime citizens' groups can strengthen ties within local communities, echoing the personal desire of a majority of participants to get involved in their own neighborhoods. This is especially true when the groups' objectives are not limited to safety concerns (1). While these efforts are intended to promote inclusion and civic participation among local populations, they can also lead to reinforcing the fear of others. 
Indeed, public authorities often wish to stay at the frontline to fight crime, and are prone to dramatize safety concerns in order to ensure sustained involvement in these programs (2). Can these trends be resisted and result in a democratization of politics (3)?

"It is not about Safety, it is about Quality of Life" 4

As pointed out a few decades ago by Gary Marx and Dane Archer (1971), citizen involvement in crime prevention initiatives and self-defense groups is often a controversial issue among local communities. The specter of vigilantism is still vivid, and often gives a negative aura to groups whose participants are depicted as obsessed by law and order. The advocates of these programs, for their part, lay emphasis on the fact that participatory surveillance is like any other form of civilian participation in collective action. Our case study suggests that the reality is not so binary. These two dimensions are often combined owing to the heterogeneous nature of these groups and the diversity of members' expectations. In addition, while certain participants clearly value law and order, most activities are dedicated to preventing the degradation of the neighborhood.

Suspicious or Civic-Minded Participants?

In fact, whether they are instigated by public authorities or not, participatory surveillance initiatives bring together people from diverse backgrounds. Both the Voisins vigilants programs in the Essonne department and the Community Policing Centers in Vancouver appear to have a relatively mixed membership. 5 One can therefore observe a variety of expectations regarding surveillance.

In Breuillet and Saint-Yon, as in many French small towns, the projects were initiated and designed by the mayor in partnership with the local police. After publishing a formal call for participation, the elected officials coopted a number of citizens. The latter can be divided into two categories. First, military staff of the nearby French Air Force base are well represented among the residents involved in the experience, with the mayor of Breuillet being himself a former Air Force colonel. These residents explicitly value an order-maintenance perspective and a close partnership with local public safety officers, and some even advocate for a conservative "get tough" philosophy of crime control. Second, there is a majority of retired people who seek involvement in civic life. In this case, participants' expectations focus on building a connection with their community in order to feel useful. Some of these pensioners are already involved in the activities of other associations, while others volunteer for local services such as the public library. For these people, participation in surveillance programs is all the more attractive since it does not involve a huge investment in time. Except for attending two or three meetings per year, no specific effort is required because domestic burglary is limited in the area. All that participants have to do is report, if necessary, evidence of wrongdoing to the police, which is evidently a duty for all citizens. In other words, these programs are often seen as a way of demonstrating one's goodwill and civic spirit without having to commit too strongly. Such flexible or "plug-in volunteering" [START_REF] Lichterman | Elusive Togetherness. Church groups trying to bridge America's Divisions[END_REF] is particularly convenient for pensioners who frequently travel to other countries.
This is illustrated by the case of a volunteer we met: a retired woman who decided to abandon her term as a municipal councilor in order to spend more time with her grandchildren living abroad. These observations are not confined to crime prevention programs. On the contrary, they follow widespread trends that have been well documented by Nina [START_REF] Eliasoph | Making Volunteers. Civic Life after Welfare's End[END_REF] and other scholars concerning collective participation at large. This literature emphasizes the fact that involvement in community-based activities is especially attractive because it is brief, irregular, and uncomplicated (on this topic, see for instance [START_REF] Talpin | Participating Is Not Enough. Civic Engagement and Personal Change[END_REF]).

The polarization of expectations and trends is also observable in Vancouver, even though the ten Community Policing Centers (CPCs) spread throughout the city engage in more substantial day-to-day activities than the French Voisins vigilants groups. Most of them are run by residents and operated by volunteers, with the support of professional organizers. 6 These volunteers are coopted by CPC members, without any interference from elected members of the City Council. Among other activities, volunteers are in charge of keeping a permanent office open during weekdays, with the aim of supporting victims and encouraging people to report crime. Those most involved are thus seniors, who have more time to participate on a regular basis. These citizens often put forward the idea that "it is important to take responsibility for your neighborhood." 7 Indeed, the main focus of the CPCs is on "community building." By contrast, participants in the Voisins vigilants programs in France are not expected to know each other since the mayor is at the core of the network. These contrasts between the two models of action reflect the differences between the French and the Anglo-American traditions of government.

Echoing the belief that local communities are legitimate actors of social control, a major concern in Vancouver is to build social relationships in neighborhoods through the organization of community events like barbecues, parties, and fairs that promote multiculturalism or intergenerational understanding. These community events also give people the opportunity to meet each other, and to reinforce social interaction both between neighbors and between neighbors and the police. Yet, students are also well represented among participants of the CPCs. The majority of young volunteers are applicants to the police force, insofar as volunteering is a mandatory requirement to enter the police academy. These students do not necessarily live in the area. In some centers, most young volunteers come from other neighborhoods or other cities. As a result, they do not necessarily care about community development or about building social relationships with the community. This creates a tension between the practices promoted by professional organizers and the expectations of these "police wannabees," who prefer patrolling to detect stolen cars or suspicious behaviors.

Fear of Crime or Neighborhood Preservation?

Despite the differences between the two countries and the contrasted expectations of participants, the daily issues at stake seem fairly convergent. Both in the Essonne department and in Vancouver, attention is focused not so much on crime as on disorder and quality of life.
In the French case, the specific traits and makeup of the neighborhood as well as the residential trajectories of the volunteers are important factors to consider for understanding this focus on disorder. Indeed, residential mobility often equates to social mobility. In Breuillet and Saint-Yon, many inhabitants belong to middle-income groups who have reached the end of an upward residential trajectory. Most of the volunteers we interviewed previously lived in an apartment in Paris or in a neighborhood near Paris. They then decided to move to a more remote location, where they bought an individual house for their family. As is the case with suburban populations in general, their primary concern is to maintain their current living conditions and a pleasant environment surrounded by fields and forests [START_REF] Charmes | La vie périurbaine face à la menace des gated communities[END_REF]. Their fear is to be caught up by lower income groups and to experience a loss of social status [START_REF] Palierse | Voisins Vigilants ou le 'Neighborhood Watch' à la française : une nouvelle forme de reproduction des pouvoirs locaux[END_REF][START_REF] Humez | La construction sociale de la nuisance. Un exemple dans un quartier industriel de la banlieue lyonnaise[END_REF]. Getting involved in collective projects of crime prevention is a way to address this concern, even though security issues are not necessarily central. The area is not directly affected by crime and most of the concerns raised by the volunteers relate to troubles such as abandoned waste or noisy behavior, which are deemed as important as protecting one's home from burglary. As noted by a volunteer who had been living in Breuillet for ten years, the implementation of a Voisins vigilants program also offered the opportunity to develop social relationships in a type of neighborhood where these are generally weak, even though this was not a stated objective of the mayor:

It allowed me to meet neighbors. I organized a cocktail reception. I invited everyone. Some have been here for ten years or more and have never met. They have been talking to each other ever since.

This is also true in Vancouver where participants describe themselves as "people caring more about their neighborhood than about safety." 8 In this case, however, the concern is not to maintain one's social status (since the social and ethnic makeup of the neighborhoods is more mixed), but to develop a voluntary approach to crime prevention. In this perspective, most CPCs organize citizen patrols to inspect the neighborhood as well as regular community cleanup campaigns to remove litter or graffiti. The aim is to reduce visible signs of disorder, which are believed to attract criminals and criminal activities. This approach, which focuses on environmental improvements, seems consistent with the well-known "broken windows" theory developed by James Wilson and George Kelling (1982). According to the latter, the physical image of the neighborhood must be enhanced to prevent further disorder, rising fear, and criminal victimization. As a CPC website indicates, "removing debris and unwanted graffiti not only makes the neighborhood cleaner, it also makes it safer. Research has shown that systematic removal of garbage and graffiti can greatly reduce crime, vandalism and mischief."
9 As a volunteer summarized it a few years ago in a local newspaper, the aim is to address "broader quality of life issues such as garbage-strewn alleys and streets, unsafe traffic conditions and just plain neighborliness." 10 These observations confirm other studies' findings that the fear of urban decay, which can be synonymous with a loss of social status for individuals, often takes precedence over the fear of victimization. 11 Thus, participatory surveillance resembles at first glance other forms of civic engagement, and can be viewed as one of many forms of community action to strengthen ties between neighbors and prevent urban decay [START_REF] Raoulx | Un centre ville 'safe and clean' (sûr et propre)? Marginalité, interventions urbanistiques et contrôle social en Amérique du Nord. Les exemples de Vancouver (Colombie-Britannique) et de Richmond (Virginie)[END_REF][START_REF] Vindevogel | La mobilisation des énergies privées pour l'amélioration de la sécurité publique à New York[END_REF]. Yet findings concerning the genuine motivation of participants do not capture the whole picture, as these actions may have unanticipated effects.

Institutional Re-Framing of Social Expectations and Rise of Suspicion

Whether these community actions are initiated by the mayor or by residents' associations, public institutions often try to use them to their advantage. These initiatives are perceived as an opportunity to improve police-community relations in a context of trust deficit. Yet the police agenda is less focused on using citizen-based surveillance as a source of knowledge and information about local issues than it is on rallying support for their "crime fighting" approach. 12 Their main concern is to increase the one-way flow of information from the police to citizens, and not vice versa, this being especially true when the police use sophisticated information systems. This asymmetric partnership appears as a double-edged sword insofar as it may lead to increased suspicion and discrimination among citizens.

The Police Agenda on Participatory Surveillance

In both Canada and France, widespread distrust of police has become a major political concern in the past few decades. In British Columbia, for instance, a Commission of Inquiry into Policing was established in 1992 by the Attorney General of the province after a series of police killings. This independent commission, conducted by local lawyer Wallace Oppal, published a report entitled Closing the Gap: Policing and the Community that advocated a new governance of public safety and a reform of police departments in order to "respond to society's changing social conditions." 13 The report further emphasized that the police needed to recognize the role of citizens and civil society-based initiatives. In the wake of this report, the then Chief Constable of Vancouver Police examined different ways of aligning the police department with local needs and concerns. The proposal was made to the Vancouver city council to support the creation of community-based associations run by residents and operated by volunteers, instead of expensive local police stations. This cost-effective means to improve relationships between police and the community was supported by the mayor, who encouraged the creation of "community policing centers" through public funding and training programs [START_REF] Cairns | Community-Police Partnerships: coproducing Crime Prevention Services[END_REF].
Nevertheless, most interviews with senior police officers indicate that the Vancouver police continued to envision their role as one of fighting crime, and that the philosophy of community policing is nothing but a slogan. Even though the police department officially stresses that it is important to "empower citizens to participate in community affairs," 14 success is measured in terms of arrest statistics and crime breakdowns. 15 As a result, the main focus remains on securing community support for police work.

Question: In your eyes, what is the most valuable function of the Community Policing Centers?
Answer: The crime analyst identifies a trend and can send information and crime alerts to people in the neighborhood. We also talk to the Community Policing Centers, giving them information about crime hotspots to do foot or bike patrol. It is in this sense that the Community Policing Centers are valuable for us, not so much as information sources. (Interview with the Chief Constable of Vancouver Police, December 20th, 2011)

Thus, every week, the CPCs receive fresh information and crime alerts about hotspots, which encourage them to target a certain time and place. As an interviewed sergeant clearly put it, citizen participation in safety issues amounts to "targeted patrols where the police needs are."

In France, relations between citizens and the police also appear to be largely negative. Tensions initially emerged in the banlieues, and more specifically among youth of immigrant origin, who are targeted by law and order policies [START_REF] Boucher | Casquettes contre képis. Enquête sur la police de rue et l'usage de la force dans les quartiers populaires[END_REF][START_REF] Marlière | La police et les "jeunes de cité[END_REF][START_REF] Marlière | La France nous a lâchés! Le sentiment d'injustice chez les jeunes des cités[END_REF][START_REF] Mohammed | La police dans les "quartiers sensibles[END_REF][START_REF] Mucchielli | Le rap de la jeunesse des quartiers relégués. Un univers de représentations structuré par des sentiments d'injustice et de victimation collective[END_REF]. In cities where riots broke out in response to police involvement in deaths, these tensions are so extreme that inhabitants are demanding "the right to life" before public institutions that are perceived as aggressive and even deadly [START_REF] Kokoreff | Refaire la cité[END_REF]. This negative perception was initially limited to groups subjected to disproportionate police surveillance. In the past few years, however, it has spread to the broader society, as law and order policies have come to affect the average man [START_REF] Purenne | L'introduction des technologies de surveillance dans le travail policier[END_REF]. And yet, this dissatisfaction appears to be an ambivalent one, insofar as criticism of improper policies coexists with strong expectations regarding the state's role as a supplier of resources and facilities [START_REF] Merklen | La politique dans les cités ou les quartiers comme cadre de la mobilisation[END_REF].

In this context, buzzwords such as community-based surveillance and citizen participation have been given more attention as a way of minimizing these tensions and rallying support for the police. Residents are encouraged to help the police solve crime. For instance, in Breuillet and Saint-Yon, participants in the Voisins vigilants program are given a cell phone number where they can reach constables at any time should they witness a burglary or hear an alarm sound in the neighborhood.
Meetings are organized to keep them updated. The dramatization of safety concerns, which is aimed at motivating participants, is also fuelled by the local media, with burglaries and violent thefts regularly hitting the headlines. These convergent discourses sustain the impression that crime is commonplace, even though the neighborhood is safe compared to other areas. Thus, while insecurity is not the number one preoccupation of inhabitants, efforts on the part of public authorities and the media to raise awareness contribute to a kind of fatalism and rampant fear. 16 Detecting suspicious behaviors then becomes a way of serving society and the community. This, in turn, raises the question of the targets of participatory surveillance.

Detecting Suspicious Behavior or Suspicious Individuals?

Suspicious behavior is difficult to grasp and define. As a result, even well-intentioned citizens are driven to guard against "suspicious individuals," the perception of risk being itself based on the appearance of the person (age, clothing, ethnicity, etc.) rather than on his/her effective behavior. 17 Most volunteers explain that they generally focus their attention on people who are not from the neighborhood, especially if they look "different." One of the participants in a Voisins vigilants program in France regretted that public transport allows "youths and thugs [meaning: from working-class districts]" to reach the residential area where he lives. Another volunteer, an old lady, felt suspicious of and even angry with nomads and Roma who took the regional train to collect items from the clothing containers in the neighborhood. According to another interviewee, "they come from everywhere: Roma, Gypsies and nomads." This "us and them" perception is reinforced whenever certain kinds of activities, generally attributed to teenagers, take place in the neighborhood: drinking alcohol, listening to music, etc. These activities seem suspicious in that young people who do not reside in the neighborhood are presumed, rightly or wrongly, to be disrespectful and to leave trash behind. Participants in the Voisins vigilants programs are thus prone to engage in the proactive surveillance of every act and gesture of these "strangers." As a volunteer puts it:

So when I walk along the banks [of the lake], I have to be careful, I look to see if there is anything […] And when you do this on a regular basis, it is often the same faces you see, people who live here. So when you see a new face, something a bit suspicious, you become more vigilant. I generally come around a second time to see if the person is still there.

Thus, everything happens as if nobody was allowed to disturb the peace of the community. Suspicion of others and racial discrimination, which are commonplace among the interviewees in France, 18 are probably reinforced by the fact that the suburban areas in which they live are fairly homogenous in terms of their ethnic and social make up. As explained earlier, they are mostly populated by white middle-class people whose first concern is to preserve their living environment and their social status.

Notes:
15 Just as in France, the so-called "traditional approach" to policing was renewed by the importation of Compstat (for "COMPared STATistics"), a computerized information tool developed in New York in the 1990s. The Vancouver Police Department (VPD) was the first Canadian police force to adopt the Compstat program, which is intended to improve police effectiveness through crime analysis and targeted enforcement.
In general, the specific structure of these neighborhoods is not considered favorable to developing an openness to otherness. It makes it more likely that participants will align themselves with police positions and professional "expertise," according to a French political culture that emphasizes the role of professional state actors in ensuring law and order. These probably mutually reinforcing factors can play an important role in explaining why the capacity to distance oneself from prejudices and negative stereotyping that we observed in some Vancouver CPCs does not characterize the French situation (see excerpts below), which is marked rather by conservatism and defensive attitudes. Excerpts of Interviews with a Couple of Volunteers in Breuillet and Saint-Yon Participants are typically French [meaning: white], I am not racist in saying that. There are a lot of seniors like me, there are a few young ones, but these are all people who are aware that thugs are going around. […] A lot of people bury their heads in the sand. We say: No there are offenders and we must protect ourselves. […] We don't want to become a ghetto or a poor suburb. We must try to perfect the appearance of the neighborhood, maintain the look of the place. Our dream was a house with a large garden for the kids, so we had to move further away from our workplace, since I work in Paris. But for this quality of life... It is hard to imagine leaving this place now, even though we have a problem with burglaries. So all possible efforts must be made to stop them. I wish there were more young people. They can bring a fresh perspective, you know, as new residents can see the situation through a different lens. I am willing to try any new idea, any change in the right direction. But we cannot ask too much of young people. The condo is becoming more democratic: Some of the houses aren't well kept; the population structure is changing. I'm not particularly racist, no more than anyone else. But you now see people with dark skin among the population. This "us and them" attitude also exists to a lesser extent in Vancouver, as liaison police officers in charge of community-based programs emphasize the "image of a society where criminal events come from outside and where the neighborhood is called for a united front against it" (Dupont 2007: 110). In particular, residents are encouraged to focus their attention on a given group that is supposed to pose a threat to citizens: drug addicts. Indeed, addiction is viewed by the police as a "criminogenic" factor and as the main reason behind high levels of property crime in Vancouver. 19 You have to pay particular attention to individuals with hoodies, caps, sunglasses, backpacks. I set up the typical profile of the offender who deserves surveillance: If they look like drug addicts, not healthy, a little dirty, badly dressed. […] Of course it could be an eccentric billionaire, but there is a 70 per cent chance that this is a person with bad intentions. We'd rather be safe than sorry. 20 Thus, restoring the trust between the police and local residents may inhibit confidence building between citizens at large, as the police are tempted to exploit participatory surveillance for their own purpose. Indeed, whether intentionally or not, the way in which the police use surveillance programs can reinforce prejudices regarding certain social groups, which proves to be counterproductive and fear instilling. 
This, in turn, may give rise to targeted surveillance and result in social/racial discrimination. Suspicion can even translate into a quasi-privatization of public spaces, as inhabitants become tempted to guard against "different" people. These results are convergent with the analysis of surveillance scholars who regard surveillance at large as a means of social sorting and categorical suspicion [START_REF] Ericson | Policing the Risk Society[END_REF][START_REF] Lyon | Surveillance as Social Sorting. Privacy, Risk and Digital Discrimination[END_REF]. Whether technology-dependent or citizen-based, surveillance can thus contribute to reinforcing social divisions. How Collective Reflection and Alternative Approaches can address the Fear of Crime In this context, "strangers" and non-residents may be potentially dangerous in the eyes of local residents, especially if they are young, from an ethnic background, etc. Should these exclusionary trends, which are similar to the social and racial profiling often used in police work, be seen as inevitable? As Elizabeth Comack and Jim Silver have argued, far from being intrinsically reactionary or tainted with populism, citizens can also mobilize to promote "a sophisticated understanding of the underlying causes of [safety and security] issues [...] [and can] have a very clear vision of what they believe the role of the police in the inner city should be: one in which the police are part of a wider effort of community mobilization" (2008: 818). Interviews that were conducted in three Vancouver CPCs placed under the supervision of senior organizers confirm this argument. Even though critical reflexivity is more the exception than the rule, local observations indicate that it can lead inhabitants to promote a comprehensive approach to deal with marginalized people present in the neighborhood. These CPCs are located in working-class residential areas with a large immigrant population from Asia. One of them is situated close to the Downtown Eastside, a neighborhood marked by persistent poverty, drug trafficking, mental illness, homelessness, etc. The presence of homeless, drug users and poor people appears to be a major concern for residents. The CPC volunteers repeatedly record complaints related to drug sale or aggressive panhandling. CPC business plans outline that many residents "fear that the street disorder which appears to characterize the Downtown Eastside will move east into [the] neighborhood. Addressing community disintegration and visible signs of disorder is of utmost importance for community members." At the same time, organizers and volunteers do not share the "us and them" vision that is increasingly popular among police forces and may lead "to demonize the criminal, to excite popular fears and hostilities, and to promote support for state punishment" (Garland 1996: 461). As a community organizer puts it: "We are much aware that sometimes the remedy can be more dangerous than the illness. One may aggravate safety problems when fear of crime spreads." Rather than viewing "outsiders" as potential offenders and "career criminals" that ought to be neutralized, he defends the idea that "there are no people who are criminal by nature. Some people steal because they are hungry. […] There is no such thing as the good guys on the one side and the bad guys on the other." 21 This vision appears to be convergent with what David Garland and others call a "criminology of the self." This "welfarist criminology that depicted the offender as disadvantaged or poorly socialized and made it the state's responsibility-in social as well as penal policy-to take positive steps of a remedial kind" (Garland 1996: 462).
Thus, even though CPC members face pressure from the city and the police to focus surveillance on crime hotspots and "at risk" groups, they stress the need for alternative, "creative" responses to crime issues and organize themselves to develop their own agenda. Different programs have therefore been implemented to reduce fear of crime and help vulnerable persons. First, CPC efforts aim to reduce underreporting. Volunteers provide a communication channel that is accessible to everyone, especially the most vulnerable or new residents. As a 2009 report indicates, "low rates of reporting crime, often accompanied by reports of fear, complacency or futility, have been major issues." Indeed, many residents do not have English as a first language, or else do not feel comfortable reporting to the police because of negative experiences in their country of origin. In view of this, organizers and volunteers have extended office hours to try and reduce fear of crime among vulnerable people who have been victimized. Second, volunteers also meet marginalized people in order to create platforms for dialogue about their needs and concerns. In their view, excluding people deemed socially undesirable because of their disturbing behavior (for instance, the homeless or the mentally ill, who have often been the target of zero tolerance policies) and pushing the problem outside the neighborhood are not appropriate solutions. They would rather listen to marginalized people and provide referral to dedicated state or community resources.The aim is to help them "negotiate the system," as they often "have a lack of knowledge of systems and resources of both the government and the police."22 Some CPCs promote this inclusive approach and view themselves as "the safety net for everything" and as "the middle man"23 between institutions and citizens, particularly the most vulnerable. One of the CPC coordinators even tries to enroll people with drug addiction or disabilities as part-time volunteers in order to help them develop personal and social skills. In her view, these actions also contribute to reducing fear of crime and street disorder in the neighborhood by providing resources and support to those who are deprived. Third, CPC organizers also promote peer mediation to reduce fear of crime. To that end, they sometimes use humorous and ludic methods that are somewhat similar to those endorsed by US radical movements. 24 Starting from the assumption that legal and traditional policing responses are unhelpful and even counterproductive to solve certain recurring problems, the objective is to develop a capacity for informal social control without dramatizing the issues at stake. The example of a "pyjama patrol" initiated by a CPC is emblematic. Residents living near a park were complaining about young people who drank alcohol at night. They reported the trouble over and over again to the police. Patrol cars were sent occasionally, with no effect on the problem. Alerted by neighbors, the CPC members decided to take on the problem.They organized citizen teams and trained them to patrol in the park while wearing a nightcap. After several weeks of this "pyjama patrol," the disturbance ceased. Conclusion: Favorable and Unfavorable Conditions for a Democratization of Public Action Based on the examination of two contrasting programs in France and Canada, this comparative study supports the hypothesis that anticrime citizens' initiatives have ambivalent and multifaceted effects. 
Participatory surveillance can help to strengthen social relationships and the sense of belonging in a community, while paradoxically contributing to instilling fear instead of reducing the sense of insecurity felt by citizens. Such widespread fear currently reinforces popular prejudices against "strangers." Although streets are public spaces, the mere presence of certain groups (typically youth, nomads, drug addicts, homeless people, etc.) is disapproved by residents and regarded as an intrusion into their private lives. This is not a new trend, but instead a long-term phenomenon. According to the French historian Gérard Noiriel, suspicion towards impoverished, marginalized and migrant populations dates back to the advent of modern society. This mistrust, coupled with the increased mobility of individuals and goods, prompted the development of remote identification protocols such as logbooks for workers and anthropometric identity booklets for nomads [START_REF] Noiriel | Profiling Minorities. A Study of Stop-and-Search Practices in Paris[END_REF]). These unanticipated impacts deserve attention. Indeed, "citizen involvement in law enforcement […] is unlike other forms of citizen participation. The stakes are higher; the risk of miscarriage is greater, and the consequences of abuse or error appear more serious" (Marx and Archer 1971: 71). In this regard, our comparative analysis suggests that these prejudices and risks are: 1) more prevalent in areas marked by greater social and ethnic homogeneity; and 2) fuelled by police forces who want to exploit civilian mobilization to serve their own organizational interests. These two conditions are not conducive to the empowerment of citizens. Be that as it may, is participatory surveillance necessarily associated with exclusion, stigmatization and racism? Or, on the contrary, is it empowering in the sense that it "allows a community to better understand itself and its environment" (Monahan et al. 2010: 109)? Citizens involved in surveillance are often portrayed as "denunciators," obsessed with watching one another and prone to reporting any suspicious behavior to law enforcement agencies [START_REF] Dupont | Police communautaire et de résolution des problèmes[END_REF]. This negative representation is especially present in France, where the specter of collaboration with Nazi Germany is still vivid. Yet our data suggest that community-based surveillance initiatives do not necessarily constitute a "reactionary form of political mobilization" (Comack and Silver 2008: 817), nor do they inevitably lead to a culture of generalized suspicion. General assertions about participants who uncritically buy into the law and order discourse fail to encapsulate the contrasting reality of participatory surveillance. Our comparative study offers a more balanced view, one that stresses an important mitigating factor, namely the ability of residents to open up controversial issues for debate and to reflect collectively on the complex causes of crime. The latter is the condition for lateral surveillance to open the way for a democratization of public action. Such critical ability is by no way innate: It can be reinforced or, conversely, weakened by two complementary factors that are present in the Vancouver context and absent in the French one. Firstly, the presence of professional organizers with a background in community development or criminology can foster the capacity for reflection. 
The role of professional organizers is crucial to broadening the debate and to resisting basic slogans and beliefs about crime and urban disorder. Secondly, the socialization of citizens in a multicultural context likely plays an important role insofar as the acceptance of otherness can create a more fertile ground for a tolerant approach to risk. The capacity for collective reflection is especially important because basic beliefs that provide legitimacy to "tough on crime" strategies are now proliferating and becoming a kind of new common sense in many advanced countries. According to Loïc Wacquant: These punitive policies are conveyed everywhere by an alarmist, even catastrophist discourse on "insecurity" animated with martial images and broadcast to saturation by the commercial media, the major political parties, and professionals of order. […] This discourse heedlessly revalorizes repression and stigmatizes the youths […], the jobless, homeless, beggars, drug addicts and prostitutes, and immigrants […], designated as the natural vectors of a pandemic of minor offenses that poison daily life. (2009: 2-3) As a counterpoint to this dominant discourse, citizen mobilization can shed additional light on urban disorder and crime, for instance by combining knowledge derived from usage and field expertise gained from being present every day in the neighborhood. This type of mobilization can even help public institutions "think outside the box," and broadens the scope of remedies to regulate social problems, as the experience of the vigies (watchdogs) of Villiers-le-Bel illustrates 25 [START_REF] Besnier | Approche genrée de la participation coproductive des habitants en matière de sécurité publique. L'expérience du collectif du 29 juin à Villiers-le-Bel[END_REF][START_REF] Evita | Les habitants, co-producteurs de sécurité. Une expérience de coproduction participative en matière de sécurité ou les ingrédients d'un système collaboratif[END_REF]. This informal group brings together committed women of different backgrounds and operates with the support of a community organizer. It has been set up to tackle often interlinked juvenile violence and institutional violence, its motto being "No to all forms of violence." Very active in the media, this group of residents has contributed to shifting the terms of the debate by emphasizing that deficiencies in basic public services (such as poor transport links) fuel the feeling of social and territorial exclusion and cause many young people to occupy public space. As a result, the promotion of youth mobility and improved provision of public transport have become some of their favorite themes. 26 As a founding member of the group explains: "We do not intend to take the place of the institutions. But when we feel that something can help to strengthen social cohesion, we put the emphasis on it" (quoted in Besnier 2012: 47-48). Interview with a community organizer, July 17th, 2013, Vancouver. In the case of France, population mix is defined according to gender and age. It is important to outline that, in contrast to the Vancouver groups whose members often come from different ethnic and social backgrounds, French participants in the Voisins vigilants programs are more homogeneous.
The social and cultural makeup of the Voisins vigilants groups reflects the composition of the neighborhood, which is populated by a majority of white, middle income citizens. A lot of offices were created in 1995 by residents' associations as part of their crime prevention programs. Their number rose from five to 17 at the time. Since then, their number has decreased, and there are now eight community-run offices, plus two police-run offices. Each center receives a core funding of one hundred thousand dollars from the City of Vancouver. The centers also develop fundraising activities to obtain money from shopkeepers, insurance companies, etc. Interview with a volunteer, December 19th, 2012. Interview with a community organizer, April 10th, 2012. http://www.hastingssunrisecpc.com/page/community-cleanup [Accessed: May 2013]. The Vancouver Sun, March 16th, 1996 (quoted in Cairns 1998). For similar results regarding gated communities, see for instance[START_REF] Charmes | Gated Communities: Ghettos for the Rich? La Vie des Idées[END_REF]. In the case of a criminal event, information is welcomed to assist in solving investigations. http://www.pssg.gov.bc.ca/policeservices/shareddocs/specialreport-opal-closingthegap.pdf [Accessed: May 2013]. VPD, Evaluation Report back of CPCs, Administrative report toVancouver City Council, Oct. 24, 2008, p. 10. In addition, one cannot exclude the possibility that the way in which the police use citizen participation for their own ends may result in unequal treatment between districts. Indeed, a two-tier system may develop between neighborhoods that demonstrate goodwill in helping the police (and hence deserve special police attention through intensive patrols and prompt care) and ones that are less organized and more disadvantaged (and hence do not deserve the same degree of attention and reaction). Nevertheless, the data we collected do not appear to document this trend. This kind of prejudice is also commonplace in police operations, as many researchers have shown. It is viewed as "the result of a habitual, and often subconscious, use of widely accepted negative stereotypes in making decisions about who appears suspicious or who is more prone to commit certain types of crimes. […] Ethnic profiling targets certain persons because of what they look like and not what they have done" (Open Society Justice Initiative 2009: 19-20). These trends need to be confirmed or refuted by further quantitative or qualitative research, as they are based on a small number of interviews and observations. http://vancouver.ca/police/assets/pdf/reports-policies/vpd-chronic-offenders-sentencing.pdf [Accessed: January 20, 2013] 20 Observation, December 2011, course of training given to volunteers by a police officer in charge of Neighborhood Watch. Interview with a community organizer, July 17th 2013, Vancouver. CPC Business Plan 2009, April 17th, 2009, p.13. Interview with a community organizer, April 10th, 2012. See for instance[START_REF] Alinsky | Rules for Radicals. A Pragmatic Primer for Realistic Radicals[END_REF]. Acknowledgments Our sincerest thanks to all the people who contributed to this empirical study by answering our questions, and especially to Clair MacGougan, Urvashi Singh, Eric Charmes and Emmanuel Martinais for their comments on earlier drafts. Thanks as well to Ariane Dorval and to the anonymous reviewers of Surveillance & Society for their helpful recommendations to improve the paper.
01775188
en
[ "info.info-se", "info.info-pl", "info.info-fl" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01775188/file/GenetHJ-FOSSACS18.pdf
Thomas Genet Timothée Haudebourg Thomas Jensen Verifying Higher-Order Functions with Tree Automata HAL is Introduction Higher-order functions are an integral feature of modern programming languages such as Java, Scala or JavaScript, not to mention Haskell and Caml. Higher-order functions are useful for program structuring but pose a challenge when it comes to reasoning about the correctness of programs that employ them. To this end, the correctness-minded software engineer can opt for proving properties interactively with the help of a proof assistant such as Coq [START_REF] Coq | The coq proof assistant reference manual: Version 8[END_REF] or Isabelle/HOL [START_REF] Paulson | The isabelle reference manual[END_REF], or write a specification in a formalism such as Liquid Types [START_REF] Rondon | Liquid types[END_REF] or Bounded Refinement Types [START_REF] Vazou | Abstract refinement types[END_REF][START_REF] Vazou | Bounded refinement types[END_REF] and ask an SMT solver whether it can prove the verification conditions generated from this specification. This approach requires expertise of the formal method used, and both the proof construction and the annotation phase can be time consuming. Another approach is based on fully automated verification tools, where the proof is carried out automatically without annotations or intermediate lemmas. This approach is accessible to a larger class of programmers but applies to a more restricted class of program properties. The flow analysis of higher-order functions was studied by Jones [START_REF] Jones | Flow analysis of lazy higher-order functional programs[END_REF] who proposed to model higher-order functions as term rewriting systems and use regular grammars to approximate the result. More recently, the breakthrough results of Ong et al. [START_REF] Ong | On model-checking trees generated by higher-order recursion schemes[END_REF] and Kobayashi [START_REF] Kobayashi | Types and higher-order recursion schemes for verification of higherorder programs[END_REF][START_REF] Kobayashi | Predicate abstraction and CEGAR for higherorder model checking[END_REF][START_REF] Matsumoto | Automata-based abstraction for automated verification of higher-order tree-processing programs[END_REF] show that combining abstraction with model checking techniques can be used with success to analyse higher-order functions automatically. Their approach relies on abstraction for computing over-approximations of the set of reachable states, on which safety properties can then be verified. In this paper, we pursue the goals of higher-order functional verification using an approach based on the original term rewriting models of Jones. We present a formal verification technique based on Tree Automata Completion (TAC) [START_REF] Genet | Equational approximations for tree automata completion[END_REF], capable of checking a class of properties, called regular properties, of higher-order programs in a fully automatic manner. In our approach, a program is represented as a term rewriting system R and the set of (possibly infinite) inputs to this program as a tree automaton A. The TAC algorithm computes a new automaton A * , by completing A with all terms reachable from A by R-rewriting. This automaton representation of the reachable terms contains all intermediate states as well as the final output of the program. Checking correctness properties of the program is then reduced to checking properties of the computed automaton. 
Moreover, our completion-based approach permits to certify automatically A * in Coq [START_REF] Boyer | Certifying a Tree Automata Completion Checker[END_REF], i.e. given A, R and A * , obtain the formal proof that A * recognizes all terms reachable from A by R-rewriting. Example 1. The following term rewriting system R defines the filter function along with the two predicates even and odd on Peano's natural numbers. @(@(filter , p), cons(x, l)) → if @(p, x) then cons(x, @(@(filter , p), l)) else @(@(filter , p), l) @(@(filter , p), nil ) → nil @(even, 0) → true @(even, s(x)) → @(odd , x) @(odd , 0) → false @(odd , s(x)) → @(even, x) This function returns the input list where all elements not satisfying the input boolean function p are filtered out. Variables are underlined and the special symbol @ denotes function application where @(f, x) means "x applied to f ". We want to check that for all lists l of natural numbers, @(@(filter , odd ), l) filters out all even numbers. One way to do this is to write a higher-order predicate, exists, and check that there exists no even number in the resulting list, i.e. that @(@(exists, even), @(@(filter , odd ), l)) always rewrites to false. Let A be the tree automaton recognising terms of form @(@(exists, even), @(@(filter , odd ), l)) where l is any list of natural numbers. The completion algorithm computes an automaton A * recognising every term reachable from L(A) (the set of terms recognised by A) using R with the definition of the exists function. Formally, L(A * ) = R * (L(A)) = {t | ∃s ∈ L(A), s → * R t} To prove the expected property, it suffices to check that true is not reachable, i.e. true does not belong to the regular set L(A * ). We denote by regular properties the family of properties characterised by a regular set. In particular, regular properties do not count symbols in terms, nor relate subterm heights (a property comparing the length of the list before and after filter is not regular) Termination of the tree automata completion algorithm is not ensured in general [START_REF] Genet | Termination criteria for tree automata completion[END_REF]. For instance, if R * (L(A)) is not regular, it cannot be represented as a tree automaton. In this case, the user can provide a set of equations that will force termination by introducing an approximation based on equational abstraction [START_REF] Meseguer | Equational abstractions[END_REF]: L(A * ) ⊇ R * (L(A)). Equations make TAC powerful enough to verify first-order functional programs [START_REF] Genet | Termination criteria for tree automata completion[END_REF]. However, state-of-the-art TAC has two short-comings. (i) Equations must be given by the user, which goes against full automation, and (ii) even with equations, termination is not guaranteed in the case of higher-order programs. In this paper we propose a solution to these shortcomings with the following contributions: -We state and prove a general termination theorem for the Tree Automata Completion algorithm (Section 3); -From the conditions of the theorem we characterise a class of higher-order functional programs for which the completion algorithm terminates (Section 4). This class covers common usage of higher-order features in functional programming languages. -We define an algorithm that is able to automatically generate equations for enforcing convergence, thus avoiding any user intervention (Section 5). 
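To fix intuitions before the formal development, the following OCaml sketch (our own rendering, with names chosen by us; it is not part of the paper's formal model) spells out the ordinary functional program that the TRS of Example 1 encodes, together with the regular property being checked.

```ocaml
(* A hedged OCaml rendering of the TRS of Example 1. The verification
   itself works on the rewriting system and tree automata, not on this
   code; Peano naturals make the correspondence with the constructors
   0, s, nil and cons explicit. *)
type nat = Z | S of nat

let rec filter p = function
  | [] -> []
  | x :: l -> if p x then x :: filter p l else filter p l

let rec even = function Z -> true | S n -> odd n
and odd = function Z -> false | S n -> even n

let rec exists p = function
  | [] -> false
  | x :: l -> p x || exists p l

(* The property of Example 1: for every input list l, the term
   @(@(exists, even), @(@(filter, odd), l)) should never rewrite to true,
   i.e. this call should always return false. *)
let check l = exists even (filter odd l)
```

Completion establishes this for all input lists at once, by computing an automaton that over-approximates every reachable term and checking that true is not recognised.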
All proofs missing in this paper can be found in the accompanying technical report [START_REF] Genet | Verifying higher-order functional programs with tree automata : Extended version[END_REF]. The paper is organised as follows: We describe the completion algorithm and how to use equations to ensure termination in Section 2. The technical contributions as described above are developed in Sections 3 to 5. In Section 6, we present a series of experiments validating our verification technique, and discuss the certification of results in Coq. We present related work in Section 7. Section 8 concludes the paper. Background This section introduces basic concepts used throughout the paper. We recall the usual definitions of term rewriting systems and tree automata, and present the completion algorithm which forms the basis of our verification technique. Term rewriting and tree automata Terms. An alphabet F is a finite set of symbols, with an arity function ar : F → N. Symbols represent constructors such as nil or cons, or functions such as filter , etc. For simplicity, we also write f ∈ F n when f ∈ F and ar(f ) = n. For instance, cons ∈ F 2 and nil ∈ F 0 . An alphabet F and a finite set of variables X induce a set of terms T (F, X ) such that: x ∈ T (F, X ) ⇐ x ∈ X f (t 1 , . . . , t n ) ∈ T (F, X ) ⇐ f ∈ F n and t 1 , . . . , t n ∈ T (F, X ) A language is a set of terms. A term t is linear if the multiplicity of each variable in t is at most 1, and closed if it contains no variables. The set of closed terms is written T (F). A position in a term t is a word over N pointing to a subterm of t. Pos(t) is the set of positions in t, one for each subterm of t. It is defined by: Pos(x) = {λ} Pos(f (t 1 , . . . , t n )) = {λ} ∪ {i.p | 1 ≤ i ≤ n ∧ p ∈ Pos(t i )} where λ is the empty word and "." in i.p is the concatenation operator. For p ∈ Pos(t), we write t| p for the subterm of t at position p, and t[s] p for the term t where the subterm at position p has been replaced by s. We write s ⊵ t if t is a subterm of s and s ▷ t if it is a subterm and s ≠ t. If L ⊆ T (F), we write L for the language L and all its subterms. A substitution σ is a function X → T (F, X ), mapping variables to terms. We tacitly extend it to the endomorphism σ : T (F, X ) → T (F, X ) where tσ is the result of applying the substitution σ to the term t. Term rewriting systems [START_REF] Baader | Term Rewriting and All That[END_REF] provide a flexible way of defining functional programs and their semantics. A rewriting system is a pair F, R , where F is an alphabet and R a set of rewriting rules of the form l → r, where l, r ∈ T (F, X ), l ∉ X and Var (r) ⊆ Var (l). A TRS can be seen as a set of rules, each of them defining one step of computation. We simply write R for a rewriting system F, R if there is no ambiguity on F. A rewriting rule l → r is said to be left-linear if the term l is linear. Example 1 shows a TRS representing a functional program, where each rule is left-linear. In that case we say that the TRS R is left-linear. A rewriting system R induces a rewriting relation → R where for all s, t ∈ T (F, X ), s → R t if there exists a rule l → r ∈ R, a position p ∈ Pos(s) and a substitution σ such that lσ = s| p and t = s[rσ] p . The reflexive-transitive closure of → R is written → * R .
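As a brief aside, these basic notions translate directly into code; the following OCaml sketch (ours, with arities deliberately left unchecked for brevity) shows one possible representation of terms, positions and substitutions.

```ocaml
(* A possible representation of the terms of this section (our sketch). *)
type term =
  | Var of string                 (* variables x ∈ X *)
  | App of string * term list     (* f(t1, ..., tn) with f ∈ F_n *)

(* Subterm t|_p, with a position given as a list of 1-based child indices. *)
let rec subterm t p =
  match t, p with
  | _, [] -> t
  | App (_, args), i :: rest -> subterm (List.nth args (i - 1)) rest
  | Var _, _ -> invalid_arg "subterm: invalid position"

(* Replacement t[s]_p. *)
let rec replace t p s =
  match t, p with
  | _, [] -> s
  | App (f, args), i :: rest ->
      App (f, List.mapi (fun j a -> if j = i - 1 then replace a rest s else a) args)
  | Var _, _ -> invalid_arg "replace: invalid position"

(* A substitution maps variable names to terms; tσ applies it homomorphically. *)
let rec apply_subst sigma = function
  | Var x -> (try List.assoc x sigma with Not_found -> Var x)
  | App (f, args) -> App (f, List.map (apply_subst sigma) args)
```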
The rewriting system introduced in the previous example also derives a rewriting relation → R where @(@(filter , odd ), cons(0, cons(s(0), nil))) → * R cons(s(0), nil) The term cons(s(0), nil) is irreducible (no rule applies to it) and hence the result of the function call. We write IRR(R) for the set of irreducible terms of R. Tree automata [START_REF] Comon | Tree automata techniques and applications[END_REF] are a convenient way to represent regular sets of terms. A tree automaton is a quadruple F, Q, Q f , ∆ where F is an alphabet, Q a finite set of states, Q f the set of final states, and ∆ a rewriting system on F ∪ Q. Rules in ∆, called transitions, are of the form l → q where q ∈ Q and l is either a state (∈ Q), or a configuration of the form f (q 1 , . . . , q n ) with f ∈ F, q 1 . . . q n ∈ Q. A term t is recognised by a state q ∈ Q if t → * ∆ q, which we also write t → * A q. We write L(A, q) for the language of all terms recognised by q. A term t is recognised by A if there exists q ∈ Q f s.t. t ∈ L(A, q). In that case we write t ∈ L(A). E.g., the tree automaton A = F, Q, Q f , ∆ with F = {0 : 0, s : 1, nil : 0, cons : 2}, Q f = {q pair } and ∆ = {0 → q pair , s(q odd ) → q pair , s(q pair ) → q odd , nil → q list , cons(q pair , q list ) → q list } recognises all lists of even natural numbers. An ε-transition is a transition q → q' where q, q' ∈ Q. A tree automaton A is ε-free if it contains no ε-transitions. A is deterministic if for all terms t there is at most one state q such that t → * ∆ q. A is reduced if for all q there is at least one term t such that t → * ∆ q. Tree Automata Completion algorithm The verification algorithm is based on tree automata completion. Given a program represented as a rewriting system R, and its input represented as a tree automaton A 0 , the tree automata completion algorithm computes a new tree automaton A * recognising the set of all reachable terms starting from a term in L(A). For a given R, we write this set R * (L(A)) = {t | ∃s ∈ L(A), s → * R t}. It includes all intermediate computations and, in particular, the output of the functional program. The algorithm proceeds by computing iteratively A 1 , A 2 , . . . such that A i+1 = C R (A i ) until it reaches a fix-point, A * . Here, C R (A i ) represents one step of completion and is performed by searching and completing the critical pairs of A i . Definition 1 (Critical pair). A critical pair is a triple l → r, σ, q where l → r ∈ R, σ is a substitution, and q ∈ Q such that lσ → * A i q and rσ does not (yet) rewrite to q in A i . Completing a critical pair consists in adding the necessary transitions in A i+1 to have rσ → * A i+1 q, and hence rσ ∈ L(A i+1 , q). Example 2. Let A 0 be the previously defined tree automaton recognising all lists of even natural numbers. Let R = {s(s(x)) → s(x)}. A 0 has a critical pair s(s(x)) → s(x), σ, q pair with σ(x) = q pair . To complete the automaton, we need to add transitions such that s(q pair ) → * A 1 q pair . Since we already have the state q odd recognising s(q pair ), we only add the transition q odd → q pair . The formal definition of the completion step, including the procedure of choosing which new transition to introduce, can be found in [START_REF] Genet | Verifying higher-order functional programs with tree automata : Extended version[END_REF]. Every completion step has the following property: L(A i ) ⊆ L(A i+1 ), and if s ∈ L(A i ) and s → R t then t ∈ L(A i+1 ). It implies that, if a fix-point A * is reached, then it recognises every term of R * (L(A)).
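To make the automata-theoretic side equally concrete, here is a small, deliberately naive OCaml sketch (ours; Timbuk's actual data structures are more elaborate) of a tree automaton over the term type of the previous sketch, and of the bottom-up recognition relation t → * ∆ q, including ε-transitions.

```ocaml
(* Tree automata over the term type of the previous sketch (our code). *)
type state = string

type transition =
  | Sym of string * state list * state   (* f(q1, ..., qn) -> q *)
  | Eps of state * state                 (* epsilon transition q -> q' *)

type automaton = { finals : state list; delta : transition list }

(* Close a set of states under epsilon transitions. *)
let rec eps_close delta qs =
  let step =
    List.fold_left
      (fun acc tr ->
        match tr with
        | Eps (q, q') when List.mem q acc && not (List.mem q' acc) -> q' :: acc
        | _ -> acc)
      qs delta
  in
  if List.length step = List.length qs then qs else eps_close delta step

(* All states q such that t ->*_delta q, computed bottom-up. *)
let rec states_of a = function
  | Var _ -> []                           (* only closed terms are recognised *)
  | App (f, args) ->
      let arg_states = List.map (states_of a) args in
      let direct =
        List.filter_map
          (fun tr ->
            match tr with
            | Sym (g, qs, q)
              when g = f
                   && List.length qs = List.length arg_states
                   && List.for_all2 (fun q_i choices -> List.mem q_i choices) qs arg_states ->
                Some q
            | _ -> None)
          a.delta
      in
      eps_close a.delta direct

let recognises a t = List.exists (fun q -> List.mem q a.finals) (states_of a t)
```

On the example automaton above, the term s(0) would be recognised in q odd and cons(0, nil) in q list, matching the intended language of lists of even naturals.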
However it is in general impossible to compute a tree automaton recognising R * (L(A)) exactly, and this may cause the completion algorithm to diverge. Instead we shall over-approximate it by an automaton A * such that L(A * ) ⊇ R * (L(A)). The approximation is performed by introducing a set E of equations of the form l = r where l, r ∈ T (F, X ). From E we derive the relation = E , the smallest congruence such that for every equation l = r and substitution σ we have lσ = E rσ. In this paper we also write E for the TRS {l → r | l = r ∈ E}. At each completion step, the algorithm simplifies the automaton by merging states together according to E. Definition 2 (Simplification Relation). Let A = F, Q, Q f , ∆ be a tree automaton and E be a set of equations. If s = t ∈ E, σ : X → Q, q, q' ∈ Q such that sσ → * A q, tσ → * A q' and q ≠ q', then A can be simplified into A' = A{q' → q} (where q' has been substituted by q), denoted by A ⇝ E A'. We write S E (A) for the unique automaton (up to renaming) A' such that A ⇝ * E A' and A' is irreducible by ⇝ E . One completion step is now defined by A i+1 = S E (C R (A i )). Example 3. This example shows how using equations can lead to approximations in tree automata. Let A be the tree automaton defined by the set of transitions ∆ = {0 → q 0 , s(q 0 ) → q 1 }. This automaton recognises the two terms 0 in q 0 and s(0) (also known as 1) in q 1 . Let E = {s(x) = x}, containing the equation that equates a number and its successor. For σ = {x → 0} we have s(x)σ → A q 1 , xσ → A q 0 and s(x)σ = E xσ. Then in S E (A), q 0 and q 1 are merged. The resulting automaton has transitions {0 → q 0 , s(q 0 ) → q 0 }, which recognises N in q 0 . The idea behind the simplification is to overapproximate R * (L(A)) when it is not regular. It has been shown in [START_REF] Genet | Termination criteria for tree automata completion[END_REF] that it is possible to tune the precision of the approximation. For a given TRS R, initial automaton A and set of equations E, the termination of the completion algorithm is undecidable in general, even with the use of equations. Our contribution in this paper consists in finding a class of TRS/programs and equations E for which the completion algorithm with equations terminates. Termination of Tree Automata Completion In this section, we show that termination of the completion algorithm with a set of equations E is ensured under the following conditions: (i) A k is reduced, ε-free and deterministic (written REFD in the rest of the paper) for all k; (ii) every term of A k can be rewritten into a term of a given language L ⊆ T (F) using R (for instance if R is terminating); (iii) L has a finite number of equivalence classes w.r.t. E. Completion is known to preserve ε-reduceness and ε-determinism if E ⊇ E r ∪ E R [START_REF] Genet | Termination criteria for tree automata completion[END_REF] where E R = {s = t | s → t ∈ R} and E r = {f (x 1 , . . . , x n ) = f (x 1 , . . . , x n ) | f ∈ F n }. Condition (i) is ensured by showing that, in our verification setting, completion preserves REFD. The last condition is ensured by having E ⊇ E c L where E c L is a set of contracting equations. Definition 3 (Contracting Equations). Let L ⊆ T (F). A set of equations is contracting for L, denoted by E c L , if all equations of E c L are of the form u = u| p with u a linear term of T (F, X ), p ≠ λ, and if the set of normal forms of L w.r.t. the TRS E c L = {u → u| p | u = u| p ∈ E c L } is finite. Example 4. Assume that F = {0 : 0, s : 1}. The set E c L = {s(x) = x} is contracting for L = T (F) because the set of normal forms of T (F) with respect to E c L = {s(x) → x} is the (finite) set {0}. The contracting equations ensure that the completion algorithm will merge enough states during the simplification steps to terminate. Note that E c L cannot be empty, unless L is finite. To prove termination of completion, we first prove that it is possible to bound the number of states needed in A * to recognise a language L by the number of normal forms of L with respect to E c L . In our case L will be the set of output terms of the program. Since A * does not only recognise the output terms, we need additional states to recognise intermediate computation terms. In the proof of Theorem 1 we show that with E R , the simplification steps will merge the states recognising the intermediate computations with the states recognising the outputs. If the latter set of states is finite then we can show that A * is finite. Theorem 1. Let A be an REFD tree automaton, R a left-linear TRS, E a set of equations and L a language closed by subterms such that for all k ∈ N and for all s ∈ L (A k ), there exists t ∈ L s.t. s → * R t. If E ⊇ E r ∪ E c L ∪ E R then the completion of A by R and E terminates with a REFD A * .
A Class of Analysable Programs The next step is to identify a class of functional programs and a language L for which Theorem 1 applies. By choosing L = T (F) and providing a set of contracting equations E c T (F) , the termination theorem above proves that the completion algorithm terminates on any functional program R. While this works in theory, in practice we want to avoid introducing equations over the application symbol (such as @(x, y) = y). Contracting equations on applications make sense in certain cases, e.g., with idempotent functions (@(sort, @(sort, x)) = @(sort, x)), but in most cases, such equations dramatically lower the precision of the completion algorithm. Hence, we want to identify a language L with no contracting equations over @ in E c L . Since such a language L still has to have a finite number of normal forms w.r.t. E c L (Theorem 1), it cannot include terms containing an unbounded stack of applications. For instance, L cannot contain all the terms of the form @(f, x), @(f, @(f, x)), @(f, @(f, @(f, x))), etc. The @ stack must be bounded, even if the application symbols are interleaved with other symbols (e.g. @(f, s(@(f, s(@(f, s(x))))))). To do that we (i) define a set B d of all terms where such stack size is bounded by d ∈ N; (ii) define a set K n and a class of TRS called K-TRS such that for any TRS R in this class, K n is closed by R and K n ∩ IRR(R) ⊆ B φ(n) . This is done by first introducing a type system over the terms; (iii) finally define L = B φ(n) ∩ IRR(R) that can be used to instantiate Theorem 1. Definition 4. For a given alphabet F = C ∪ {@}, B d is the set of terms where every application depth is bounded by d. It is the smallest set defined by: f ∈ B 0 ⇐ f ∈ C 0 f (t 1 , . . . , t n ) ∈ B i ⇐ f ∈ C n ∧ t 1 . . . t n ∈ B i @(t 1 , t 2 ) ∈ B i+1 ⇐ t 1 , t 2 ∈ B i t ∈ B i+1 ⇐ t ∈ B i In Section 5, we show how to produce E c such that B d ∩ IRR(R) has a finite number of normal forms w.r.t. E c with no equations on @. However, in general it is not the case that for all k and all terms t ∈ L (A k ) there exists a term s ∈ B d ∩ IRR(R) s.t. t → * R s. Hence Theorem 1 cannot be instantiated with L = B d ∩ IRR(R). Instead we define (i) a set K n ⊆ T (F) and φ such that K n ∩ IRR(R) ⊆ B φ(d) and (ii) a class of TRS, called K-TRS, for which L (A k ) ⊆ K n . In K-TRS, the right hand sides of TRS rules are contained in a set K whose purpose is to forbid the construction of unbounded partial applications during rewriting. If the initial automaton satisfies L (A) ⊆ K n then we can instantiate Theorem 1 with L = K n ∩ IRR(R) and prove termination. Types In order to define K and K n we require the TRS to be well-typed. Our definition of types is inspired by [START_REF] Baader | Term Rewriting and All That[END_REF]. Let A be a non-empty set of algebraic types.
The set of types T is inductively defined as the least set containing A and all function types, i.e. A → B ∈ T ⇐ A, B ∈ T . The function type constructor → is assumed to be right-associative. The arity of a type A is inductively defined on the structure of A by: ar(A) = 0 ⇐ A ∈ A ar(A → B) = 1 + ar(B) ⇐ A → B ∈ T Instead of using alphabets, in a typed terms environment we use signatures F = C ∪ {@} where C is a set of constructor symbols associated to a unique type and @ the application symbol (with no type). We also assign a type to every variable. We write f : A if the symbol f has type A and t : A a term t ∈ T (F, X ) of type A. We write W(F, X ) for the set of all well typed terms using the usual definition. We extend the definition of term rewriting systems to typed TRS. A TRS is well typed if all rules are of the form l : A → r : A (type is preserved). In the same way, an equation s = t is well typed if both s and t have the same type. In the rest of this paper we only consider well typed equations and TRSs. Definition 5 (Functional TRS). A higher-order functional TRS is composed of rules of the form @(. . . @(f, t 1 : A 1 ) . . . , t n : A n ) : A → r : A where f : A 1 → . . . → A n → A ∈ C n , t 1 . . . t n ∈ W(C, X ) and r ∈ W(F, X ). A functional TRS is complete if for all term t = @(t 1 , t 2 ) : A such that ar(A) = 0, it is possible to rewrite t using R. In other words, all defined functions are total. Types provides information about how a term can be rewritten. For instance we expect the term @(f : A → B, x : A) : B to be rewritten by every complete (no partial function) TRS R if ar(A → B) = 1. Furthermore, for certain types, we can guarantee the absence of partial applications in the result of a computation using the type's order. For a given signature F, the order of a type A, written ord(A), is inductively defined on the structure of A by: ord(A) = max{ord(f ) | f : • • • → A ∈ C n } ord(A → B) = max{ord(A) + 1, ord(B)} where ord(f : A 1 → . . . → A n → A) = max{ord(A 1 ), . . . , ord(A n )} (with, for A i = A, ord(A i ) = 0). For instance ord(int) = 0 and ord(int → int) = 1. Example 5. Define two different types of lists list and list . The first defines lists of int with the constructor consA : int → list → list ∈ C, while the second defines lists of functions with the constructor consB : (int → int) → list → list ∈ C. The importance of order becomes manifest here: in the first case a fully reduced term of type list cannot contain any @ whereas in the second case it can. ord(list) = 0 and ord(list ) = 1. Lemma 1. If R is a complete functional TRS and A a type such that ord(A) = 0, then all closed terms t of type A are rewritten into an irreducible term with no partial application: ∀s ∈ IRR(R), t → * R s ⇒ s ∈ B 0 . The class K-TRS Recall that we want to define (i) a set K n ⊆ T (F) and φ such that K n ∩IRR(R) ⊆ B φ(n) and (ii) a class of TRS K-TRS for which L (A k ) ⊆ K n . Assuming that L (A) ⊆ K n we instantiate Theorem 1 with L = K n ∩ IRR(R) and prove termination. Definition 6 (K-TRS). A TRS R is part of K-TRS if for all rules l → r ∈ R, r ∈ K where K is inductively defined by: x : A ∈ K ⇐ x : A ∈ X f (t 1 , . . . , t n ) : A ∈ K ⇐ f ∈ C n ∧ t 1 , . . . 
, t n ∈ K @(t 1 : A → B, t 2 : A) : B ∈ K ⇐ t 1 ∈ Z, t 2 ∈ K ∧ B ∈ A (1) @(t 1 : A → B, t 2 : A) : B ∈ K ⇐ t 1 , t 2 ∈ K ∧ ord(A) = 0 (2) with Z defined by: t ∈ Z ⇐ t ∈ K @(t 1 , t 2 ) ∈ Z ⇐ t 1 ∈ Z, t 2 ∈ K By constraining the form of the right hand side of each rule of R, K defines a set of TRS that cannot construct unbounded partial applications during rewriting. The definition of K takes advantage of the type structure and Lemma 1. The rules (1) and (2) ensure that an application @(t 1 , t 2 ) is either: (1) a total application, and the whole term can be rewritten; or (2) a partial application where t 2 can be rewritten into a term of B 0 (Lemma 1). In (1), Z allows partial applications inside the total application of a multi-parameter function. Example 6. Consider the classical map function. A typical call to this function is @(@(map, f ), l) of type list, where f is a mapping function, and l a list. The whole term belongs to K because of rule (1): list is an algebraic type and its subterm @(map, f ) : list → list belongs to Z. This subterm is a partial application, but there is no risk of stacking partial applications as it is part of a complete call (to the map function). Example 7. Consider the function stack defined by: @(@(stack, x), 0) → x @(@(stack, x), S(n)) → @(@(stack, @(g, x)), n) Here g is a function of type (A → A) → A → A. The stack function returns a stack of partial applications whose height is equal to the input parameter: @(@(stack, f ), S(S(S . . . S k (0) . . . ))) → * R @(g, @(g, @(g, . . . @(g k , f ) . . . ))) The depth of partial applications stacks in the output language is not bounded. With no equations on the @ symbol, the completion algorithm may not terminate. Notice that x is a function and @(g, x) a partial application. Hence the term @(@(stack, @(g, x)), n) is not in K, so the TRS does not belong to the K-TRS class. We define K n as {tσ | t ∈ K, σ : X → B n ∩ IRR(R)} and claim that if for all rule l → r of the functional TRS R, r ∈ K and if L(A) ⊆ K n then with Theorem 1 we can prove that the completion of A with R terminates. The idea is the following: -Prove that if A recognises terms of K n , then it is preserved by completion using the notion of K n -coherence of A. -Prove that K n ∩ IRR(R) ⊆ B n+2B ∩ IRR(R) where B ∈ N is a fixed upper bound of the arity of all the types of the program. -Prove that there is a finite number of normal form of B n+2B ∩ IRR(R) w.r.t E c L . -Finally, we use those three properties combined, and instantiate Theorem 1 with L = B n+2B ∩ IRR(R) to prove Theorem 2, defined as follows. Theorem 2. Let A be a K n -coherent REFD tree automaton, R a terminating functional TRS such that for all rule l → r ∈ R, r ∈ K and E a set of equations. Let L = B n+2B ∩ IRR(R). If E = E r ∪ E c L ∪ E R then the completion of A by R and E terminates. To prove that after each step of completion, the recognised language stays in K n , we require the considered automaton to be K n -coherent. Definition 7 (K n -coherence). Let L ⊆ W(F) and n ∈ N. L is K n -coherent if L ⊆ K n ∨ L ⊆ Z n \ K n By extension we say that a tree-automaton A = F, Q, Q f , ∆ is K n -coherent if the language recognised by all states q ∈ Q is K n -coherent. If K n -coherence is not preserved during completion, then some states in the completed automaton may recognise terms outside of K n . Our goal is to show that it is preserved by C R (•) (Lemma 2) then by S E (•) (Lemma 3). Lemma 2 (C R (A) preserves K n -coherence). Let A be a REFD tree automa- ton. 
If A is K n -coherent, then C R (A) is K n -coherent. Lemma 3 (S E (A) preserves K n -coherence). Let A be a REFD tree automaton, R a functional TRS and E a set of equations such that E = E r ∪ E c L ∪ E R with L = B n+2B ∩ IRR(R). If A is K n -coherent then S E (A) is K n -coherent. By using Lemma 2 and Lemma 3, we can prove that the completion algorithm, which is a composition of C R (A) and S E (A), preserves K n -coherence. The proofs of these two lemmas are based on a detailed analysis of the completion algorithm itself. The complete proofs are provided in [START_REF] Genet | Verifying higher-order functional programs with tree automata : Extended version[END_REF]. Lemma 4 (Completion preserves K n -coherence). Let A = F, Q, Q f , ∆ be a tree automaton, R a functional TRS and E a set of equations. If E = E r ∪ E c L ∪ E R with L = B n+2B ∩ IRR(R) and A is K n -coherent then for all k ∈ N, A k is K n -coherent. In particular, A * is K n -coherent. By construction we can prove that the depth of irreducible K n terms is bounded, which correspond to the following lemma. Lemma 5. For all t : T ∈ K n , t : T ∈ IRR(R) ⇒ t : T ∈ B n+2B-arity(T ) . Proof of Theorem 2 Proof. According to Lemma 4, for all k ∈ N, the completed automaton A k is K n -coherent. By definition this implies that L (A k ) ⊆ K n . Moreover, we know that IRR(R) ∩ K n ⊆ B n+2B (Lemma 5). Let L = B n+2B ∩ IRR(R). R is terminating, so for every term s ∈ L (A k ) there exists t ∈ L such that s → * R t. Since the number of normal form of L is finite w.r.t E, Theorem 1 implies that the completion of A by R and E terminates. Theorem 2 states a number of hypotheses that must be satisfied in order to guarantee termination of the completion algorithm: -The initial automaton A must be K n -coherent and REFD. -R must be terminating. -All left-hand sides of rules of R are in the set of terms K. This is a straightforward syntactic check. If it is not verified, we can reject the TRS before starting the completion. -The set of equations E must be of the form E r ∪ E c L ∪ E R . The equation sets E r and E R are determined directly from the syntactic structure of R. However, there is no unique suitable set of contracting equations E c L . This set must be generated carefully, because a bad choice of contracting equations (i.e., equations that equate too many terms) will have a severe negative impact on the precision of the analysis result. In this section, we describe a method for generating all possible sets of contracting equations E c L . To simplify the presentation, we only present the case where L = W(C) and IRR(R) ⊆ W(C) (i.e., all results are first-order terms). Our approach looks for contracting equations for the set of closed terms W(C) instead of the set B n+2B mentioned in Theorem 2. More precisely, we generate the set of equations iteratively, as a series of equation sets E k c where the equations only equate terms of depth at most k. Recall that a contracting equation is of the form u = u| p with p = λ, i.e., it equates a term with a strict subterm of the same type. A set of contracting equations over the set W(C) is then generated as follows: (i) generate the set of left-hand side of equations as a covering set of terms [START_REF] Kounalis | Testing for the ground (co-)reducibility property in term-rewriting systems[END_REF], so that for each term t ∈ W(C) there exists a left-hand side u of an equation and a substitution σ such that t = uσ. 
(ii) for each left-hand side, generate all possible equations of the form u = u| p , satisfying that both sides have the same type. (iii) from all those equations, we build all possible E c L (with L = W(C)) such that the set of normal forms of W(C) w.r.t. E c L is finite. Since E c L is left-linear and L = W(C), this can be decided efficiently [START_REF] Comon | Sequentiality, Monadic Second-Order Logic and Tree Automata[END_REF]. Example 8. Assume that C = {0 : 0, s : 1}.. For k = 1, the covering set is {s(x), 0} and E 1 c = {{s(x) = x}}. For depth 2, the covering set is {s(s(x)), s(0), 0} and E 2 c = E 1 c ∪ {{s(s(x)) = x}, {s(s(x)) = s(x)}, {s(0) = 0}, {s(0) = 0, s(s(x)) = x}, {s(0) = 0, s(s(x)) = s(x)}}. All equation sets of E 1 c and E 2 c satisfy Definition 3 and lead to different approximations. To verify a property ϕ on a program, we use completion and equation generation as follows. The program is represented by a TRS R and function calls are represented by an initial tree automaton A. Both have to respect the hypothesis of Theorem 2. The algorithm searches for a set of contracting equations E c such that verification succeeds, i.e. L(A * ) satisfy ϕ. Starting from k = 1, we apply the following algorithm: 1. We first complete the tree automaton A k recognising the finite subset of L(A) of terms of maximum depth k. Since L(A k ) is finite and R is terminating, the set of reachable terms is finite, completion terminates without equations and computes an automaton A * k recognising exactly the set R * (L(A k )) [START_REF] Genet | Equational approximations for tree automata completion[END_REF]. If there exists a set of equations E c able to verify the program, this algorithm will find it eventually, or find a counter example. However if there is no set of equations that can verify the program, this algorithm does not terminate. Experiments The verification technique described above has been integrated in the Timbuk library [START_REF] Genet | Reachability Analysis and Tree Automata Calculations[END_REF]. We implemented the naive equation generation where all possible equation sets E c are enumerated. Despite the evident scalability issues of this simple version of the verification algorithm, we have been able to verify a series of properties of several classical higher-order functions: map, filter , exists, forall , foldRight, foldLeft as well as higher-order sorting functions parameterised by an ordering function. Most examples are taken from or inspired by [START_REF] Ong | Verifying higher-order functional programs with patternmatching algebraic data types[END_REF][START_REF] Matsumoto | Automata-based abstraction for automated verification of higher-order tree-processing programs[END_REF] and have corresponding TRSs in the K set defined above. The property ϕ consists in checking that a finite set of forbidden terms is not reachable (Patterns section of Timbuk specifications). Given A, R and A * , the correctness of the verification, i.e. the fact that L(A * ) ⊇ R * (L(A)), can be checked in a proof assistant embedding a formalisation of rewriting and tree automata. It is enough to prove that (a) L(A * ) ⊇ L(A) and that (b) for all critical pairs l → r, σ, q of A * we have rσ → * A * q. Property (a) can be checked using standard algorithms on tree automata. Property (b) can be checked by enumerating all critical pairs of A * (there are finitely many) and by proving that all of them satisfy rσ → * A * q. 
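As an illustration only, the check behind property (b) can be phrased over the automaton sketch given in Section 2; the OCaml code below (ours, not the formally verified checker of the cited works) assumes the critical pairs of A * have already been enumerated elsewhere, and represents each instance rσ as a configuration whose leaves may already be states.

```ocaml
(* Configurations are terms over F ∪ Q: leaves may already be states.
   Reuses the state, transition, automaton types and eps_close from the
   earlier automaton sketch. *)
type config =
  | CState of state
  | CApp of string * config list

(* All states reached by a configuration in the automaton. *)
let rec states_of_config a = function
  | CState q -> eps_close a.delta [ q ]
  | CApp (f, args) ->
      let arg_states = List.map (states_of_config a) args in
      let direct =
        List.filter_map
          (fun tr ->
            match tr with
            | Sym (g, qs, q)
              when g = f
                   && List.length qs = List.length arg_states
                   && List.for_all2 (fun q_i choices -> List.mem q_i choices) qs arg_states ->
                Some q
            | _ -> None)
          a.delta
      in
      eps_close a.delta direct

(* Property (b): for every critical pair <l -> r, sigma, q> of the candidate
   fixpoint, the configuration r sigma must reach q. Enumerating the critical
   pairs themselves is left out of this sketch. *)
let satisfies_property_b a critical_pairs =
  List.for_all
    (fun (r_sigma, q) -> List.mem q (states_of_config a r_sigma))
    critical_pairs
```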
Since there exists algorithms for checking properties (a) and (b), the complete proof of correctness can automatically be built in the proof assistant. For instance, the automaton A * can be used as a certificate to build the correctness proof in Coq [START_REF] Boyer | Certifying a Tree Automata Completion Checker[END_REF] and in Isabelle/HOL [START_REF] Felgenhauer | Reachability, confluence, and termination analysis with state-compatible automata[END_REF]. It is also used to build unreachability proofs in Isabelle/HOL [START_REF] Felgenhauer | Reachability, confluence, and termination analysis with state-compatible automata[END_REF]. Besides, since verifying (a) and (b) is automatic, the correctness proof may be run outside of the proof assistant (in a more efficient way) using a formally verified external checker extracted from the formalisation. All our (successful) completion attempts output a comp.res file, containing A, R and A * , which has been certified automatically using the external certified checker of [START_REF] Boyer | Certifying a Tree Automata Completion Checker[END_REF]. Timbuk's site http://people.irisa.fr/Thomas.Genet/ timbuk/funExperiments/ lists those verification experiments. Nine of them are automatically proven. Two other examples show that correct counter-examples are generated when the property is not provable. On one example equation generation times out due to our naïve enumeration of equations. For this last case, by providing the right set of equations in mapTree2NoGen the verification of the function succeeds. Related Work When it comes to verifying first-order imperative programs, there exist several successful tools based on abstract interpretation such as ASTREE [START_REF] Blanchet | A static analyzer for large safety-critical software[END_REF] and SLAM [START_REF] Ball | The SLAM project: debugging system software via static analysis[END_REF]. The use of abstract interpretation for verifying higher-order functional programs has comparatively received less attention. The tree automaton completion technique is one analysis technique able to verify first-order Java programs [START_REF] Boichut | Rewriting Approximations for Fast Prototyping of Static Analyzers[END_REF]. Until now, the completion algorithm was guaranteed to terminate only in the case of first-order functional programs [START_REF] Genet | Termination criteria for tree automata completion[END_REF]. Liquid Types [START_REF] Rondon | Liquid types[END_REF], followed by Bounded Refinement Types [START_REF] Vazou | Abstract refinement types[END_REF][START_REF] Vazou | Bounded refinement types[END_REF], and also Set-Theoretic Types [START_REF] Castagna | Polymorphic functions with set-theoretic types: part 1: syntax, semantics, and evaluation[END_REF][START_REF] Castagna | Polymorphic functions with settheoretic types: part 2: local type inference and type reconstruction[END_REF], are all attempts to enrich the type system of functional languages to prove non-trivial properties on higher-order programs. However, these methods are not automatic. The user has to express the property he wants to prove using the type system, which can be tedious and/or difficult. In some cases, the user even has to specify straightforward intermediate lemmas to help the type checker. 
The first attempt in verifying regular properties came with Jones [START_REF] Jones | Flow analysis of lazy higher-order functional programs[END_REF] and Jones and Andersen [START_REF] Jones | Flow analysis of lazy higher-order functional programs[END_REF]. Their technique computes a grammar over-approximating the set of states reachable by a rewriting systems. However, their approximation is fixed and too rough to prove programs like Example 1 (filter odd). Our program and property models are close to those of Jones and Andersen. However, the approximation in our analysis is not fixed and can be automatically adapted to the verification objective. Ong et al. proposes one way of addressing the precision issue of Jones and Andersen's approach using a model checking technique on Pattern Matching Recursion Schemes [START_REF] Ong | Verifying higher-order functional programs with patternmatching algebraic data types[END_REF] (PMRS). This technique improves the precision but is still not able to verify functions such as Example 1 (see [START_REF] Salmon | Analyse d'atteignabilité pour les programmes fonctionnels avec stratégie d'évaluation en profondeur[END_REF] page 85). As shown in our experiments, our technique handles this example. Kobayashi et al. developed a tree automata-based technique [START_REF] Matsumoto | Automata-based abstraction for automated verification of higher-order tree-processing programs[END_REF] (but not relying on TRS and completion), able to verify regular properties (including safety properties on Example 1). We have verified a selection of examples coming from [START_REF] Matsumoto | Automata-based abstraction for automated verification of higher-order tree-processing programs[END_REF] and observed that we can verify the same regular properties as they can. Our prototype implementation is inferior in terms of execution time, due to the slow generation of equations. A strength of our approach is that our verification results are certifiable and that they can be used as certificates to build unreachability proofs in proof assistants (see Section 6). Our verification framework is based on regular abstractions and uses a simple abstraction mechanism based on equations. Regular abstractions are less expressive than Higher-Order Recursion Schemes [START_REF] Ong | On model-checking trees generated by higher-order recursion schemes[END_REF][START_REF] Kobayashi | Types and higher-order recursion schemes for verification of higherorder programs[END_REF] or Collapsible Pushdown Au-tomata [START_REF] Broadbent | C-shore: a collapsible approach to higher-order verification[END_REF], and equation-based abstractions are a particular case of predicate abstraction [START_REF] Kobayashi | Predicate abstraction and CEGAR for higherorder model checking[END_REF]. However, the two restrictions imposed in this particular framework result in two strong benefits. First, the precision of the approximation is formally defined and precisely controlled using equations: L(A * ) ⊆ (R/E) * (L(A)) [START_REF] Genet | Equational approximations for tree automata completion[END_REF]. This precision property permits us to prove intricate properties with simple (regular) abstractions. Second, using tree automata-based models facilitates the certification of the verification results in a proof assistant. This significantly increases the confidence in the verification result compared e.g., to verdicts obtained by complex CEGAR-based model-checkers. 
Conclusion & Future Work

This paper shows that tree automata completion is a simple yet powerful, fully automatic verification technique for higher-order functional programs, expressed as term rewriting systems. We have proved that the completion algorithm terminates on a subset of TRSs encompassing common functional programs, and provided experimental evidence of the viability of the approach by verifying properties on fundamental higher-order functions including filtering and sorting. One remaining question is whether this approach is complete: if there exists a regular approximation of the reachable terms of a functional program, can we build it using equations? We have already answered this question in the positive when L = W(C), i.e., when all results are first-order terms [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF]. Extending this result to all kinds of results, including higher-order ones, is a promising research topic. The generation of the approximating equations is automatic but simpleminded, and too simple to turn the prototype into a full verification tool. Further work will look into how sets of contracting equations can be generated in a more efficient manner, notably by taking the structure of the TRS into account and using a CEGAR approach. The present verification technique is agnostic to the evaluation strategy. An interesting research track would be to experiment with completion-based verification techniques under different term rewriting semantics of functional programs, such as the one outlined by Clemente et al. [START_REF] Clemente | Ordered tree-pushdown systems[END_REF]. This would permit us to take a particular evaluation strategy into account and, in certain cases, improve the precision of the verification. We already experimented with this in [START_REF] Genet | Reachability Analysis of Innermost Rewriting -extended version[END_REF]. This is in line with our long-term research goal of providing a light-weight verification tool to assist the working OCaml programmer. Our work focuses on verifying regular properties represented by tree automata. Dealing with non-regular over-approximations of reachable terms would allow us to verify relational properties like comparing the length of the list before and after filter. This is one of the objectives of techniques like [START_REF] Kobayashi | Predicate abstraction and CEGAR for higherorder model checking[END_REF]. Building non-regular over-approximations of reachable terms for TRSs, using a form of completion, is possible [START_REF] Boichut | Towards more precise rewriting approximations[END_REF]. However, up to now, adapting automatically the precision of such approximations to a given verification goal is not possible. Extending their approach with equations may provide a powerful verification tool worth pursuing.

is ensured by showing that, in our verification setting, completion preserves REFD. The last condition is ensured by having E ⊇ E c L, where E c L is a set of contracting equations.

Definition 3 (Contracting Equations). Let L ⊆ T (F). A set of equations is contracting for L, denoted by E c L, if all equations of E c L are of the form u = u|p with u a linear term of T (F, X ), p ≠ λ, and if the set of normal forms of L w.r.t. the TRS E c L = {u → u|p | u = u|p ∈ E c L } is finite.

Example 4. Assume that F = {0 : 0, s : 1}. The set E c L = {s(x) = x} is contracting for L = T (F) because the set of normal forms of T (F) with respect to E c L = {s(x) → x} is the (finite) set {0}.
The set E c L = {s(s(x)) = x} is contracting because the normal forms of {s(s(x)) → x} are {0, s(0)}.

2. If L(A * k) does not satisfy ϕ, then verification fails: a counterexample is found. 3. Otherwise, we search for a suitable set E c. All E c of E k c that introduce a counterexample in the completion of A k with R and E c are filtered out [START_REF] Boichut | Rewriting Approximations for Fast Prototyping of Static Analyzers[END_REF]. 4. Then, for all remaining E c, we try to complete A with R and E = E r ∪ E R ∪ E c and check ϕ on the completed automaton. If ϕ is true on A *, then verification succeeds. Otherwise, we try the next E c. 5. If there remain no E c, we start again with k = k + 1.
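To make the generation and the contracting check concrete, here is a small self-contained OCaml sketch for the signature C = {0 : 0, s : 1} of Examples 4 and 8 (our own illustration with assumed function names, not Timbuk's implementation). It enumerates the depth-k covering set, derives the candidate equations u = u|p from it (the sets E c of Example 8 are then obtained by combining these candidates and keeping the combinations with finitely many normal forms), and normalises ground terms with the oriented contracting rule s(s(x)) → x, whose normal forms are {0, s(0)} as stated above.

(* Terms over 0, s, and one variable x at the leaf. *)
type term = Zero | Var | S of term

let rec show = function Zero -> "0" | Var -> "x" | S t -> "s(" ^ show t ^ ")"

(* Depth-k covering set: s^k(x) together with s^i(0) for i < k. *)
let covering k =
  let rec s_n n t = if n = 0 then t else s_n (n - 1) (S t) in
  s_n k Var :: List.init k (fun i -> s_n i Zero)

(* Candidate equations u = u|p, where u|p is a strict subterm of u. *)
let rec strict_subterms = function Zero | Var -> [] | S t -> t :: strict_subterms t
let candidates k =
  List.concat_map (fun u -> List.map (fun v -> (u, v)) (strict_subterms u)) (covering k)

(* Normal form w.r.t. the contracting rule s(s(x)) -> x. *)
let rec nf = function S (S t) -> nf t | t -> t

let () =
  (* prints s(s(x)) = s(x), s(s(x)) = x and s(0) = 0, as in Example 8 for depth 2 *)
  List.iter (fun (u, v) -> Printf.printf "%s = %s\n" (show u) (show v)) (candidates 2);
  Printf.printf "nf(s(s(s(0)))) = %s\n" (show (nf (S (S (S Zero)))))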
01775190
en
[ "info.info-se", "info.info-fl", "info.info-pl" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01775190/file/main.pdf
Thomas Genet Tristan Gillard Timothée Haudebourg Sébastien Lê Cong

Extending Timbuk to Verify Functional Programs

Timbuk implements the Tree Automata Completion algorithm whose purpose is to over-approximate sets of terms reachable by a term rewriting system. Completion is parameterized by a set of equations defining which terms are equated in the approximation. In this paper we present two extensions of Timbuk which permit us to automatically verify safety properties on functional programs. The first extension is a language, based on regular tree expressions, which eases the specification of the property to prove on the program. The second extension automatically generates a set of equations adapted to the property to prove on the program.

Motivations. In this paper we focus on static analysis of safety properties on functional programs. Let us illustrate this on a simple example. Assume that we want to analyze the following delete OCaml function:

let rec delete x l = match l with
  | [] -> []
  | y::z -> if x = y then delete x z else y::(delete x z)

In Timbuk [START_REF] Genet | Timbuk 3.2 -a Tree Automata Library[END_REF], this program will be translated into the following TRS, where ite encodes the if-then-else construction and eq encodes a simple equality on two arbitrary constant symbols A and B. The Ops section defines the symbols with their arity, the Const section defines the constructor symbols (symbols that are not associated with a function), the Vars section defines variables and the TRS section associates the name of the TRS with its rules. In the following, we denote by F the set of symbols defined in the Ops section and T (F) the set of ground terms built on F. We denote by C the set of constructor symbols defined by Const, and T (C) the set of ground terms defined on C.

Ops delete:2 cons:2 nil:0 A:0 B:0 ite:3 true:0 false:0 eq:2
Const A B nil cons true false
Vars X Y Z
TRS R
delete(X,nil)->nil
delete(X,cons(Y,Z))->ite(eq(X,Y),delete(X,Z),cons(Y,delete(X,Z)))
ite(true,X,Y)->X
ite(false,X,Y)->Y
eq(A,A)->true
eq(A,B)->false
eq(B,A)->false
eq(B,B)->true

Let us denote by L the set of all possible lists of A's and B's. On this program, we are interested in proving that for all l ∈ L, delete(A,l) can only result in a list in which A does not occur. This is equivalent to proving that for all l ∈ L, delete(A,l) never rewrites to a list containing an A. This can be done using reachability analysis on rewriting with the above TRS R. We denote by I the set of all initial terms, i.e., I = {delete(A, l) | l ∈ L}, and let Bad be the set of lists containing at least one A. We denote by R * (I) the set of terms reachable by rewriting terms of I with R, i.e., R * (I) = {t | s ∈ I and s → R * t}, where → R * is the reflexive and transitive closure of → R . If R * (I) ∩ Bad = ∅ then there is no way to rewrite a term of the form delete(A,l) with l ∈ L into a list containing an A, and the property is also true on the functional program. Note that the property proved on the TRS is stronger than the property proved on the functional program. In particular, it is independent of the evaluation strategy: it can be call-by-value as well as call-by-name. Thus, the property is true for OCaml as well as for Haskell programs. This paper presents two extensions of Timbuk making the above analysis possible and automatic.

- The first extension is simplified regular tree expressions, which let the user easily and intuitively define the set of initial terms I.
- The second extension automatically generates abstraction equations, using algorithms described in [START_REF] Genet | Verifying higher-order functions with tree automata[END_REF] and [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF]. This makes it possible to automatically build a regular over-approximation App of R * (I) such that App ∩ Bad = ∅, if it exists.

In Section 2, we define simplified regular tree expressions. In Section 3, we explain why abstraction equations are necessary, and we show how to generate them in Section 4. In Section 5, we show how to interact with Timbuk in order to carry out a complete analysis, such as the one shown above.

Simplified regular tree expressions

We defined the TRS but we still need to define the set of initial terms I in Timbuk. Until now, it could only be defined using a tree automaton [START_REF] Comon | Tree Automata Techniques and Applications[END_REF]. Defining I with this formalism is possible but it is error-prone and lacks readability. As in the case of word languages, there exists an alternative representation for regular tree languages: regular tree expressions [START_REF] Comon | Tree Automata Techniques and Applications[END_REF]. However, unlike classical regular expressions for words, regular tree expressions are difficult to read and to write. For instance, the regular tree expression defining terms of the form f(g^n(a), h^m(b)) with n, m ∈ N is f(g(□1)*,□1 .□1 a, h(□2)*,□2 .□2 b), where □1 and □2 are new constants. In this expression, the sub-expression g(□1)*,□1 .□1 a represents the language g^n(a). The effect of *,□1 is to iteratively replace □1 by g(□1), and the effect of .□1 a is to replace □1 by a. Regular tree expressions are expressive enough to define any regular tree language. To be complete w.r.t. regular tree languages, this formalism needs named placeholders (like □1 and □2 above) because the effect of the iteration symbol * depends on the position where it occurs. However, named placeholders make regular tree expressions difficult to read and to write, even if they define simple languages. For instance, the set I = {delete(A, l) | l ∈ L} defined above can be written delete(A, cons((A|B), □1)*,□1 .□1 nil), where □1 is a new constant. In this paper, we propose a new formalism for defining regular tree languages: simplified regular tree expressions (SRegexp for short). Those expressions are not complete w.r.t. regular languages but are easier to read and to write. For instance, the set I is defined by the SRegexp delete(A,[cons((A|B),*|nil)]). Those regular expressions are defined using only 3 operators: '|' to build the union of two languages, '*' to iterate a pattern and the optional brackets '[ ... ]' to define the scope of the embedded *. The SRegexp cons((A|B),*|nil) repeats the pattern cons(A,_) or cons(B,_) as long as possible and terminates by nil. Thus, it defines the language {nil, cons(A,nil), cons(B,nil), cons(A,cons(A,nil)), cons(A,cons(B,nil)), . . .}. The brackets define the scope of the pattern to repeat with *. In the SRegexp delete(A,[cons((A|B),*|nil)]), the iteration applies on cons(A,_) or cons(B,_) but not on delete(A,_). Thus, this expression represents the language {delete(A, nil), delete(A,cons(A,nil)), delete(A,cons(B,nil)), . . .}. We implemented SRegexp in Timbuk together with a translation to standard regular tree expressions.
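Internally, such expressions can be captured by a very small abstract syntax. The following OCaml sketch only illustrates the three operators on the running example delete(A,[cons((A|B),*|nil)]); it is a hypothetical representation, not Timbuk's actual datatype.

(* Assumed abstract syntax for SRegexp, for illustration only. *)
type sregexp =
  | Sym of string * sregexp list   (* f(e1,...,en)                                 *)
  | Alt of sregexp * sregexp       (* e1 | e2                                      *)
  | Star                           (* '*' : repeat the innermost enclosing [ ... ] *)
  | Scope of sregexp               (* [ e ]                                        *)

(* delete(A,[cons((A|B),*|nil)]) *)
let example_i : sregexp =
  Sym ("delete",
       [ Sym ("A", []);
         Scope
           (Sym ("cons",
                 [ Alt (Sym ("A", []), Sym ("B", []));
                   Alt (Star, Sym ("nil", [])) ])) ])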
We also implemented the translation from regular tree expressions to tree automata defined in [START_REF] Kuske | Construction of tree automata from regular expressions[END_REF]. Thus, from a SRegexp I, Timbuk can automatically generate a tree automaton A whose recognized language L(A) is equal to I. We also implemented the converse operations: tree automata to regular expression using the algorithm [START_REF] Guellouma | Construction of rational expression from tree automata using a generalization of Arden's lemma[END_REF] and regular tree expressions to SRegexp. Note that, since SRegexp are incomplete w.r.t. regular tree languages, conversion from regular tree expression to SRegexp may fail. Thus, the over-approximation of reachable terms computed by Timbuk is presented as a SRegexp if it is possible, or as a tree automaton otherwise. The need for abstraction equations Starting from R and I = L(A), computing R * (I) is not possible in general [START_REF] Gilleron | Regular tree languages and rewrite systems[END_REF]. Nevertheless, if R is a left-linear TRS then R * (I) can be over-approximated with tree automata completion [START_REF] Genet | Decidable Approximations of Sets of Descendants and Sets of Normal Forms[END_REF]. From A and R, completion builds a tree automaton A * such that L(A * ) ⊇ R * (I). If Bad is regular, to prove R * (I) ∩ Bad = ∅, it is enough to check that L(A * ) ∩ Bad = ∅, which can be done efficiently [START_REF] Comon | Tree Automata Techniques and Applications[END_REF]. For this technique to succeed, the precision of the approximation A * is crucial. For instance, L(A * ) = T (F) is a valid regular over-approximation but it cannot be used to prove any safety property since it also contains Bad. In Timbuk, approximations are defined using sets of abstraction equations, following [START_REF] Meseguer | Equational abstractions[END_REF] and [START_REF] Genet | Equational tree automata completion[END_REF]. Example 1. Let L be the set of terms defined with the symbol s of arity 1 and the constant symbol 0. Let X be a variable. The effect of the equation s(s(X)) = s(X) is to merge in the same equivalence class terms s(s(0)) and s(0), s(s(s(0))) and s(s(0)), etc. Thus, with this single equation, L/ = E consists of only two equivalence classes: a class containing only 0 and the class containing all the other natural numbers {s(0), s(s(0)), . . .}. An equation s(X) = X would define a single equivalence class containing all natural numbers. It would thus define a rougher abstraction. An equation s(s(X)) = X defines two equivalence classes: the class of even numbers {0, s(s(0)), . . .} and the class of odd numbers {s(0), s(s(s(0))), . . .}. For completion to terminate, the set T (F)/ = E (E-equivalence classes of T (F)) has to be finite [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF]. When dealing with functional programs, this restriction can be relaxed as follows. Functional programs manipulate sorted terms and the associated TRSs preserve sorts. Provided that equations also preserve sorts, having a finite set T (F) S / = E , where T (F) S is the set of well-sorted terms, is enough. Besides, since well-sorted terms define a regular language, this information can be provided to Timbuk using tree automata, regular expressions or SRegexp. Going back to the delete example that we want to analyze, with set E = {cons(X, cons(Y, Z)) = cons(Y, Z)}, L/ = E is finite but T (F) S / = E may not be. 
For instance, terms delete(A, nil), delete(A, delete(A, nil)), etc. are all in separate equivalence classes. Again, we can take advantage of the fact that delete is a functional program and relax the termination condition of completion by focusing it on the data manipulated by the program. Instead of asking for finiteness of T (F) S / = E , we only require finiteness of T (C) S / = E , where T (C) S is the set of well-sorted constructor terms. Let us denote by E c the above set of equations {cons(X, cons(Y, Z)) = cons(Y, Z)}. As shown in Example 2, E c defines a finite set of equivalence classes on T (C) S , i.e., lists of A's and B's. Provided that delete is a terminating and complete functional program, it is possible to extend E c so that completion terminates. This has been shown for first-order functional programs [START_REF] Genet | Termination Criteria for Tree Automata Completion[END_REF] and for higher-order functional programs [START_REF] Genet | Verifying higher-order functions with tree automata[END_REF]. The extension of E c consists in adding two sets of equations E R = {l = r | l → r ∈ R} and E r = {f (X 1 , . . . , X n ) = f (X 1 , . . . , X n ) | f ∈ F , arity of f is n, and X 1 , . . . , X n are variables}. Since E R and E r are fixed by the program, the precision of the approximation only depends on the equivalence classes defined by E c . Thus, to explore approximations, it is enough to explore all possible E c .

Generating abstraction equations E c

In addition to the fact that (1) T (C) S / =Ec has to be finite, the termination theorems of [START_REF] Genet | Termination Criteria for Tree Automata Completion[END_REF][START_REF] Genet | Verifying higher-order functions with tree automata[END_REF] impose additional constraints on E c . Equations in E c have to be contracting, i.e., they are of the form u = u|p where (2) u|p is a strict subterm of u and (3) u|p has the same sort as u. Conditions (2) and (3) make it possible to prune the search space of equations in E c . For instance, the following equations do not need to be considered: cons(X, Y) = Z because of condition (2), cons(X, cons(Y, Z)) = cons(X, Z) because of condition (2), and cons(X, Y) = X because of condition (3). Timbuk implements two different algorithms to explore the space of possible E c . Those algorithms are parameterized by a natural number k ∈ N and, for a given k, they generate a set EC(k) of possible E c . By increasing k, we increase the precision of the equation sets E c in EC(k). The first algorithm is based on covering sets [START_REF] Kounalis | Testing for the Ground (Co-)Reducibility Property in Term-Rewriting Systems[END_REF] and generates contracting equations with variables [START_REF] Genet | Verifying higher-order functions with tree automata[END_REF]. In this algorithm, k defines the depth of the covering set used to generate the equations. From a covering set S, we generate all equation sets E c = {u = u|p | u ∈ S} satisfying conditions (1) to (3).

Example 3. Let X be a variable and T (C) S be the set of well-sorted constructor terms defined with the symbol s of arity 1 and the constant symbol 0. For k = 1, the covering set is {s(X), 0} and EC(1) = {{s(X) = X}}. For k = 2, the covering set is {s(s(X)), s(0), 0} and EC(2) = {{s(s(X)) = X}, {s(s(X)) = s(X)}, {s(0) = 0}, {s(0) = 0, s(s(X)) = X}, {s(0) = 0, s(s(X)) = s(X)}}.
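As an illustration of this pruning (a sketch with assumed types and sort annotations, not Timbuk's code), the right-hand sides u|p admissible under conditions (2) and (3) can be computed syntactically:

(* Sorted terms for the delete example: list and element positions. *)
type sort = SList | SElt
type term = Var of string * sort | App of string * sort * term list

let sort_of = function Var (_, s) -> s | App (_, s, _) -> s

let rec strict_subterms = function
  | Var _ -> []
  | App (_, _, args) -> List.concat_map (fun t -> t :: strict_subterms t) args

(* right-hand sides u|p allowed by conditions (2) and (3) *)
let candidate_rhs u =
  List.filter (fun v -> sort_of v = sort_of u) (strict_subterms u)

(* For u = cons(X, cons(Y, Z)) this keeps exactly cons(Y, Z) and Z;
   X and Y are rejected by the sort condition (3). *)
let _ =
  candidate_rhs
    (App ("cons", SList,
          [ Var ("X", SElt);
            App ("cons", SList, [ Var ("Y", SElt); Var ("Z", SList) ]) ]))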
The second algorithm generates ground contracting equations [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF]. In this algorithm k represents the number of equivalence classes expected in T (C) S / =Ec . Since equation sets have to be ground and meet conditions (2) and (3), we can finitely enumerate all the possible equations sets E c for a given k. Example 4. Let T (C) S be the set of well-sorted constructor terms defined with symbol s of arity 1 and the constant symbol 0. For k = 1 the set EC(1) = {{s(0) = 0}}. For k = 2, the set EC(2) = {{s(s(0)) = 0}, {s(s(0)) = s(0)}. A systematic way to build ground EC(k), based on tree automata enumeration, is given in [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF]. Using the first or second algorithm to generate EC(k), to prove that there exists a tree automaton A * over-approximating R * (L(A)) and such that L(A * ) ∩ Bad = ∅, we run the following algorithm: It has been shown in [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF] that the ground enumeration of EC(k) is complete w.r.t. tree automata that are closed by R-rewriting. Thus, if there exists such a A * , the above iterative algorithm will find it. However, on properties that cannot be shown using a regular approximation, such as [START_REF] Boichut | A theoretical limit for safety verification techniques with regular fix-point computations[END_REF], this algorithm may diverge. 1. Start with k = 1 2. Build EC(k) 3. Pick one E c in EC(k) 4. Complete A into A * using R and E c ∪ E R ∪ E r 5. If L(A * ) ∩ Bad = ∅ then Interacting with Timbuk Download http://people.irisa.fr/Thomas.Genet/timbuk/timbuk3.2.tar.gz and compile and install Timbuk 3.2. The online version of Timbuk does not integrate all the features presented here. In Timbuk's archive, the full specification of the delete example can be found in the file FunExperiments/deleteBasic.txt. This file contains the TRS, the SRegexp presented above and a tree automaton named TC which defines well-sorted constructor terms as explained in Example 2. This automaton is used to prune equation generation. Note that this automaton could be inferred from the typing information of the functional program. Here, the automaton TC states that lists are built with cons and nil, that elements of the list are either A or B, and that true and false are of the same type but cannot appear in a list. Thus, ill-typed terms of the form cons(nil, true) are not considered for equation generation. Finally, the Patterns section defines the set Bad of terms that should not be reachable. Currently, the pattern section is limited to terms or patterns (terms with holes '_') and cannot handle SRegexp or automata. In the present example, we only consider a subset of bad terms: terms of the form cons(A,_), i.e., lists starting by A. Assuming that your working directory is FunExperiments, you can run Timbuk on this example by typing: timbuk --fung 30 deleteBasic.txt. Where --fung is the option triggering ground equation generation (the second algorithm for generating EC(k)) and 30 is a maximal number of completion steps. 
We get the following output: Generated equations: ------------------- cons(A,cons(A,nil)) = cons(A,nil) cons(B,cons(A,nil)) = cons(A,nil) cons(B,nil) = nil B = B nil = nil delete(X,Y) = delete(X,Y) A = A true = true cons(X,Y) = cons(X,Y) false = false ite(X,Y,Z) = ite(X,Y,Z) eq(X,Y) = eq(X,Y) eq(A,A) = true eq(A,B) = false eq(B,A) = false eq(B,B) = true delete(X,nil) = nil delete(X,cons(Y,Z)) = ite(eq(X,Y),delete(X,Z),cons(Y,delete(X,Z))) ite(true,X,Y) = X ite(false,X,Y) = Y Regular expression: ------------------- [cons(B, *|nil)] Proof done! -----------Completion time: 0.006595 seconds The three first generated equations belong to E c , reflexive equations of the form B = B, nil = nil, . . . belong to E r and the last eight equations belong to E R . The set T (C) S / =Ec has two equivalence classes: the class containing nil and all lists containing only B's and the class of lists containing at least one A. Thus, the effect of E c is to forget any B and preserve any A that appears in a list. Using the --fun option instead of --fung while running Timbuk, triggers the first algorithm for generating EC(k), i.e., E c with variables. On this example, the generated E c part has two equations instead of three: cons(X,cons(A,Y)) = cons(A,Y) and cons(B,X) = X. The effect of this set E c is the same as the ground E c above. Indeed, this E c splits lists into two equivalence classes: the class of lists without A's and the class of lists with at least one A. Finally, in Timbuk's output, Proof done! means that Timbuk manages to build a regular approximation of R * (I) that contains no term of the Patterns section. Timbuk outputs the resulting simplified regular expression [cons(B, *|nil)]. This proves that results are lists without any occurrence of A's. Here, one can read the outputted SRegexp to check that the property is true. How-ever, this can be difficult when the outputted SRegexp is more complex. Thus, on most examples, we use additional predicates to check properties like it is commonly done with proof assistants. On our previous example, given a predicate member (testing membership on lists), we can check that terms of the language member(A,delete(A,cons((A|B),*|nil))) never rewrite to true. We can also check the dual property expected on delete: deleting A's should not delete all B's. We hope to check this property using initial terms member(B,delete(A, [cons((A|B),*|nil)])) and a patterns section set to false. However, the property is not true and, during completion, Timbuk finds a counterexample: Found a counterexample: ---------------------- Using this initial set of terms, Timbuk succeeds to do the proof and produces a slightly different E c : cons(A,cons(B,nil)) = cons(B,nil), cons(B,cons(B,nil)) = cons(B,nil), cons(A,nil) = nil. This time, E c forgets about A's and preserves B's. More than 20 other examples (with ground/non-ground equations generation) can be found on the Timbuk page http://people.irisa.fr/Thomas.Genet/ timbuk/funExperiments/, including functions on lists, trees, sorting functions, higher-order functions, etc. Conclusion and further Research We know that completion is terminating on higher-order functional programs thanks to the recent result of [START_REF] Genet | Verifying higher-order functions with tree automata[END_REF]. Besides, we also know that ground equation generation of E c is complete w.r.t. tree automata that are closed by R [START_REF] Genet | Automata Completion and Regularity Preservation[END_REF]. 
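To see concretely which abstraction the three generated ground equations above define, the following OCaml sketch (our own illustration, not part of Timbuk) orients them from left to right and normalises lists of A's and B's: every list containing only B's collapses to nil and every list containing at least one A collapses to cons(A,nil), i.e. exactly the two equivalence classes of T (C) S / =Ec described above.

(* Innermost normalisation with the generated ground equations,
   oriented left to right:
     cons(A,cons(A,nil)) -> cons(A,nil)
     cons(B,cons(A,nil)) -> cons(A,nil)
     cons(B,nil)         -> nil                                      *)
type elt = A | B
type lst = Nil | Cons of elt * lst

let rec norm = function
  | Nil -> Nil
  | Cons (x, tl) ->
      (match (x, norm tl) with
       | A, Cons (A, Nil) -> Cons (A, Nil)
       | B, Cons (A, Nil) -> Cons (A, Nil)
       | B, Nil -> Nil
       | x, tl' -> Cons (x, tl'))

(* cons(A,cons(B,nil)) and cons(B,cons(A,cons(B,nil))) both normalise
   to cons(A,nil); cons(B,cons(B,nil)) normalises to nil. *)
let _ = List.map norm
    [ Cons (A, Cons (B, Nil));
      Cons (B, Cons (A, Cons (B, Nil)));
      Cons (B, Cons (B, Nil)) ]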
In other words, if there exists a tree automaton A * , closed by R and overapproximating the set of reachable terms, then it will eventually be found by generating ground equations. With the first algorithm where equations of E c may contain variables, we do not have a similar completeness result, yet. However, generating equations with variables remains an interesting option because the set E c can be smaller. This is the case in the previous example where E c with variables defines the same set of equivalence classes but with fewer equations. From a theoretical perspective, Tree Automata Completion can be seen as an alternative to well-established higher-order model-checking techniques like PMRS [START_REF] Ong | Verifying higher-order functional programs with patternmatching algebraic data types[END_REF] or HORS [START_REF] Matsumoto | Automata-Based Abstraction for Automated Verification of Higher-Order Tree-Processing Programs[END_REF] to verify higher-order functional programs. Timbuk implements Tree Automata Completion but was missing several features for those theoretical results to be usable in practice. First, stating the property to prove using a tree automaton was error-prone and lacked readability. Using simplified regular expressions significantly improves this step and makes property definition closed to what is generally used in a proof assistant. Second, equations which are necessary to define the approximation, had to be given by the user [START_REF] Genet | Termination Criteria for Tree Automata Completion[END_REF]. Now, Timbuk can automatically generate a set of equations adapted to a given verification objective. Combining those two extensions makes Timbuk a competitive alternative to higher-order model checking tools like [START_REF] Ong | Verifying higher-order functional programs with patternmatching algebraic data types[END_REF] and [START_REF] Matsumoto | Automata-Based Abstraction for Automated Verification of Higher-Order Tree-Processing Programs[END_REF]. In those model-checking tools and in Timbuk, the properties under concern are "regular properties", i.e. properties proven on regular languages. Those regular properties are stronger than what offers tests (they prove a property on an infinite set of values) but weaker than what can be proven using induction in a proof assistant. However, unlike proof assistants, Timbuk does not require to write lemmas or proof scripts to prove a regular property. An interesting research direction is to explore how to lift those regular properties to general properties. In other words, how to build a proof that ∀ x l. not(member(x,delete(x,l))) from the fact that all terms from member(A,delete(A,cons((A|B),*|nil))) rewrite to false. We believe that this is possible by taking advantage of parametricity such as in [START_REF] Wadler | Theorems for free![END_REF]. This is ongoing work. In this paper, the verification is performed on a TRS representing the functional program. To directly perform the verification on real functional programs rather than on TRSs, we need a transformation. We could reuse the HOCA transformation of [START_REF] Avanzini | Analysing the complexity of functional programs: higher-order meets first-order[END_REF]. However, it does not take the priorities of the pattern matching rules of the functional program into account when producing the TRS. 
Furthermore, this translation needs to be certified, i.e., we need a formal proof that the behavior of the outputted TRS R covers all the possible behaviors of the functional program. With such a proof on R, if Timbuk can prove that no term of member(A,delete(A,cons((A|B),*|nil))) can be rewritten to true with R, then we have a similar property on the functional program. The equation generation process does not cover all TRSs but only TRSs encoding terminating, complete, higher-order, functional programs. We currently investigate how to generate equations without the termination and completeness restrictions on the program. Another research direction is to extend this verification principle to more general theorems. For the moment, theorems that can be proved using Timbuk need to have a regular model. For instance, Timbuk is able to prove the theorem member(A,delete(A,l)) →R * true for all lists l=cons((A|B),*|nil) because the language of terms reachable from the initial language member(A,delete(A,cons((A|B),*|nil))) is, itself, regular. Assume that we have a predicate eq encoding equality on lists. To prove a theorem of the form eq(delete(A,l),l) →R * false for all list l=cons(B,*|nil), the language of reachable terms is no longer regular. However, recent advances in completionbased techniques for non-regular languages [START_REF] Boichut | Towards more precise rewriting approximations[END_REF] should make such verification goals reachable. Example 2 . 2 Let us consider the set L of well-sorted lists of A and B. The set L is the regular language associated with the SRegexp cons((A|B), * |nil). Let X, Y, Z be variables. The set E = {cons(X, cons(Y, Z)) = cons(Y, Z)} defines a set of E-equivalence classes L/ = E with three classes: one class only contains nil, one class contains all lists ending with an A and the last class contains all lists ending with a B. Ops delete:2 cons:2 nil:0 A:0 B:0 ite:3 true:0 false:0 eq:2 Const A B nil cons true false Vars X Y Z TRS R delete(X,nil)->nil delete(X,cons(Y,Z))->ite(eq(X,Y),delete(X,Z),cons(Y,delete(X,Z))) ite(true,X,Y)->X ite(false,X,Y)->Y eq(A,A)->true eq(A,B)->false eq(B,A)->false eq(B,B)->true SRegexp A0 delete(A,[cons((A|B),*|nil)]) Automaton TC States qe ql qb Final States qe ql qb Transitions A->qe B->qe nil->ql cons(qe,ql)->ql true->qb false->qb Patterns cons(A,_) -Term member(B,delete(A,nil)) rewrites to a forbidden pattern For the property to hold, lists in initial terms should contain at least one B: member(B,delete(A,[cons((A|B),*|[cons(B,*|[cons((A|B),*|nil)])])])) verification is successful Otherwise, if EC(k) not empty, pick a new E c in EC(k) and go to 4. 6. When EC(k) is empty, increment k and go to 2. When the analysis depends on the evaluation strategy, completion can be extended to take it into account[START_REF] Genet | Reachability Analysis of Innermost Rewriting[END_REF]. See the page http://people.irisa.fr/Thomas.Genet/timbuk/funExperiments/ simplifiedRegexp.html for more examples. In fact, in T (C) S there are also terms true and false but they cannot be embedded in lists. Thus, each of them defines its own equivalence class. In the end, in T (C) S /=E c there are 5 equivalence classes. Note that the sort information can be inferred from the tree automaton recognizing well-sorted terms. 
For instance, the automaton associated with the SRegexp of Example 2 recognizes A and B in the same state, thus A and B will have the same sort (see automaton TC in Section 5).

Acknowledgements

Many thanks to the anonymous referees for their valuable comments.
01688181
en
[ "sdv.mhep.hem", "sdv.bbm.gtp" ]
2024/03/05 22:32:18
2017
https://univ-rennes.hal.science/hal-01688181/file/Garnier%20et%20al.%20-%202017%20-%20VLITL%20is%20a%20major%20cross-%CE%B2-sheet%20signal%20for%20fibrinog.pdf
Cyrille Garnier Fatma Briki Brigitte Nedelec Patrick Le Pogamp Ahmet Dogan Nathalie Rioux-Leclercq Renan Goude Caroline Beugnet Laurent Martin Marc Delpech Frank Bridoux Gilles Grateau Jean Doucet Philippe Derreumaux Sophie Valleix VLITL is a major cross-beta-sheet signal for fibrinogen A alpha-chain frameshift variants établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Fibrinogen is a 340-kDa glycoprotein, composed of two identical heterotrimers, each consisting of one Aα, one Bβ, and one γ-chain. [START_REF] Weisel | Mechanisms of fibrin polymerization and clinical implications[END_REF] Mutations altering any of these three chains are commonly associated with autosomal or recessive bleeding/thrombotic disorders (http://site.geht.org/base-fibrinogene/ ) without any clinical evidence of amyloidosis. However, a small fraction of Aα-chain variants are amyloidogenic and lead to massive Aα-chain deposition as amyloid fibrils in AFib-patient's kidneys. The first case of fibrinogen Aα-chain-derived amyloidosis (AFib) has been described by Benson in 1993, [START_REF] Benson | Hereditary renal amyloidosis associated with a mutant fibrinogen alpha-chain[END_REF] and until now, only the Aα-chain has been linked to AFib (http://amyloidosismutations.com/mut-afib.php). AFib is a rare, late-onset, autosomal dominant condition characterized by massive amyloid deposition in the glomerular compartment of the kidney. [START_REF] Benson | Hereditary renal amyloidosis associated with a mutant fibrinogen alpha-chain[END_REF] Heterozygous AFib-patients typically display a chronic kidney disease in the fourth/fifth decade of life leading to progressive end-stage renal failure. [START_REF] Gillmore | Diagnosis, pathogenesis, treatment, and prognosis of hereditary fibrinogen A alpha-chain amyloidosis[END_REF] Though amyloid mechanisms involved in AFib are still unknown, it was shown that AFib-fibrils were exclusively composed of the mutant Aα-chain, [START_REF] Benson | Hereditary renal amyloidosis associated with a mutant fibrinogen alpha-chain[END_REF][START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] suggesting that the wild-type Aα-chain does not contribute to amyloid deposition, analogous to what it was previously observed in patients with familial lysozyme, β2-microglobulin, and apoC-III amyloidosis. [START_REF] Valleix | D25V apolipoprotein C-III variant causes dominant hereditary systemic amyloidosis and confers cardiovascular protective lipoprotein profile[END_REF][START_REF] Valleix | Hereditary systemic amyloidosis due to Asp76Asn variant β2-microglobulin[END_REF][START_REF] Pepys | Human lysozyme gene mutations cause hereditary systemic amyloidosis[END_REF] The therapeutic approach of AFib consists only in supportive treatment by dialysis and by renal or combined hepatorenal transplantation. Recently, Benson's group suggests that preemptive hepatic transplantation may avert the progression of renal damage and may be a promising treatment for AFib prior to the need for renal dialysis or kidney transplantation. [START_REF] Fix | Liver transplant alone without kidney transplant for fibrinogen Aα-chain (AFib) renal amyloidosis[END_REF] Although the first case of AFib was recognized more than 20 years ago, we still do not know which specific part of mutant Aα-chain sequences directly participates in the β-aggregation process. 
To date, a total of 15 Aα-chain variants are known to be amyloid-prone in humans, and remarkably these amyloidogenic variants are all clustered in a small portion of Aα-chain from residues 517 to 555. [START_REF] Benson | Hereditary renal amyloidosis associated with a mutant fibrinogen alpha-chain[END_REF][START_REF] Gillmore | Diagnosis, pathogenesis, treatment, and prognosis of hereditary fibrinogen A alpha-chain amyloidosis[END_REF][START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF][START_REF] Rowczenio | Online registry for mutations in hereditary amyloidosis including nomenclature recommendations[END_REF][START_REF] Kang | Hereditary amyloidosis in early childhood associated with a novel insertion-deletion (indel) in the fibrinogen Aalpha chain gene[END_REF][START_REF] Uemichi | Hereditary renal amyloidosis with a novel variant fibrinogen[END_REF][START_REF] Uemichi | A frame shift mutation in the fibrinogen A alpha chain gene in a kindred with renal amyloidosis[END_REF][START_REF] Yazaki | The first pure form of Ostertag-type amyloidosis in Japan: a sporadic case of hereditary fibrinogen Aα-chain amyloidosis associated with a novel frameshift variant[END_REF] Missense Aα-chain variants have been reported in several AFibfamilies worldwide, with Glu526Val being the most common amyloidogenic Aα-chain variant. [START_REF] Rowczenio | Online registry for mutations in hereditary amyloidosis including nomenclature recommendations[END_REF] In contrast, amyloidogenic Aα-chain frameshift variants are "private" -ie that each of them has been reported in only a single family, and therefore available clinical information associated with these variants is very limited. [START_REF] Gillmore | Diagnosis, pathogenesis, treatment, and prognosis of hereditary fibrinogen A alpha-chain amyloidosis[END_REF][START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF][START_REF] Kang | Hereditary amyloidosis in early childhood associated with a novel insertion-deletion (indel) in the fibrinogen Aalpha chain gene[END_REF][START_REF] Uemichi | A frame shift mutation in the fibrinogen A alpha chain gene in a kindred with renal amyloidosis[END_REF][START_REF] Yazaki | The first pure form of Ostertag-type amyloidosis in Japan: a sporadic case of hereditary fibrinogen Aα-chain amyloidosis associated with a novel frameshift variant[END_REF] Two of them have exceptionally been associated with pediatric AFib-cases, [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF][START_REF] Kang | Hereditary amyloidosis in early childhood associated with a novel insertion-deletion (indel) in the fibrinogen Aalpha chain gene[END_REF] suggesting that frameshifts may be particularly aggressive and likely highly prone to self-aggregation; therefore, it is of particular interest to investigate the precise mechanisms responsible for their amyloidogenicity. We report a novel "private" amyloidogenic Aα-chain frameshift variant (c.1620delT/Phe521Leufs) and show that renal fibrils of AFib patients are composed of a short polypeptide derived from the C-terminal part of the Phe521Leufs Aα-chain without evidence of a wild-type counterpart. 
This additional confirmation of what part of the mutant Aα-chain is deposited in the disease tissue of AFib-patients prompted us to explore how the mutant Phe521Leufs-chain contributes to Aα-chain amyloid formation. Materials and methods Genetic analysis Blood samples from the family members were obtained after their written informed consents. This study had the approval of the Ethics Committee of the Hospital of Rennes and was performed according to the Declaration of Helsinki. The entire coding region and flanking splice sites of exon 5 of FGA were sequenced as previously described. [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] The nomenclature of the fibrinogen Aα-chain deletion is based on the FGA transcript reference (NM_000508.3). For more clarity with the historically conventional nomenclature, the FGA variant is described as Phe521Leufs according to the mature protein without the signal peptide. According to the recommended Human Genome Variation Society (HGVS), which starts the amino acid numbering at the initiator methionine, Phe521Leufs corresponds to Phe540Leufs. In Table 1, all amyloidogenic Aα-chain frameshift variants are listed with the two nomenclatures. To convert the conventional mature protein amino acid numbering to the HGVS nomenclature, add 19 nucleotides for Aα-chain changes. Histology and transmission electron microscopy of renal biopsies Renal biopsies were processed according to standard techniques, as previously described. [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] Immunostaining for the specific antibody (rabbit polyclonal antibody from Dr. Merrill Benson, 1:100) corresponding to the abnormal fibrinogen Aα-chain (GAQNLASSQIQRN) was performed, as previously described. [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] Laser microdissection and tandem mass spectrometry (LMD/MS) analysis The LMD and LC-MS/MS methods have been previously summarized in full text. [START_REF] Sethi | Laser microdissection and mass spectrometry?based proteomics aids the diagnosis and typing of renal amyloidosis[END_REF] In silico tools AMYLPRED2 is a consensus algorithm for prediction of amyloidogenic determinants combining 11 different methods, available at http://biophysics.biol.uoa.gr/AMYLPRED2/). [START_REF] Tsolis | A consensus method for the prediction of "aggregation-prone" peptides in globular proteins[END_REF] The cross-β TANGO score results from a statistical mechanics model based on simple physico-chemical principles of secondary structure formation. [START_REF] Fernandez-Escamilla | Prediction of sequencedependent and mutational effects on the aggregation of peptides and proteins[END_REF] The PASTA energy is indicative of the aggregation propensity and predicts which portions of the sequence are more likely to stabilize the cross-ß core of fibrillar aggregates. [START_REF] Walsh | PASTA 2.0: an improved server for protein aggregation prediction[END_REF] Fibril formation All five synthetic peptides were purchased from genepep prestation, France, and stock solutions were prepared at the final concentration of 2 to 6 mM and stored at -20°C. Fibril formation was induced in 10 mM MOPS buffer, 150mM NaCl, pH 7.2 at a final peptide concentration ranging from 0.125 to 1mM. 
Peptide solutions were incubated at room temperature without agitation or adjuvants from one day to two months. Fluorescence experiments 100µL of the five peptides were added to 900 µL of fibrillation buffer containing Thioflavin T (5 µM final concentration). Fluorescence was measured with a Perkin-Elmer Luminescence Spectrometer LS55 with slit widths of 10 nm. The excitation was at 450 nm and emission spectra of ThT were recorded from 470 to 600 nm. Transmission electron microscopy of aggregates formed in vitro Aliquot of the five peptides (10 µl with a concentration of ~10mg/ml) were placed on carboncoated copper grids (300 mesh), washed and negatively stained for 1min with 1% (wt/vol) uranyl acetate, and wicked dry prior to analysis using a Philips CM12 transmission electron microscope operating at accelerating voltages of 120 kV. X-ray microdiffraction experiments performed on in vitro aggregate samples and on ex vivo renal fibrils All samples were pelleted by centrifugation at 5000g for 10 min. Small concentrated drops of samples were deposited on cylindical fibres of about 100µm diameter. X-ray microdiffraction was performed at the ESRF-European Synchrotron Radiation Facility (Grenoble, France) on the microfocus beam line ID13, as previously described. [START_REF] Briki | Synchrotron x-ray microdiffraction reveals intrinsic structural features of amyloid deposits in situ[END_REF] Results Phe521Leufs is a novel amyloidogenic « private » frameshift variant of fibrinogen Aα-chain associated with a severe form of renal amyloidosis Amyloidosis was initially diagnosed from a 27-year-old woman (patient I.1) who presented proteinuria (1.30 g/L) during routine medical screening at her first pregnancy (Figure 1A). Progressively, she developed nephrotic syndrome without hypertension (120/80 mmHg). On physical examination, she had no signs of peripheral or autonomic neuropathy, and all cardiac investigations were normal. A renal biopsy was performed and amyloidosis was diagnosed, but the etiology of her renal amyloidosis remained undetermined. Three years later, this patient was diagnosed with malignant hypertension (210/130 mmHg), and was treated with transfusion of fresh frozen plasma and association of several anti-hypertensive drugs, resulting in adequate control of blood pressure. However, she continued to develop uremia leading to acute renal failure requiring hemodialysis. One year later, she underwent her first renal transplantation, but amyloidosis recurred on the renal graft five years later, and she received her second renal graft. The etiology of this renal amyloidosis was reevaluated when one of her daughter (II.2) began to manifest proteinuria at 22-years old associated with glomerular amyloid deposits. This proband's daughter also developed an acute episode of malignant hypertension leading to acute renal failure and hemodialysis. In both of these affected AFib-probands, there was no history of bleeding or thrombotic disorders (even during surgical procedures and during the nephrotic phase), and all routine coagulation investigations were normal including Clauss fibrinogen activity level, fibrinogen antigen level, activated partial thromboplastin time, prothrombin time, thrombin time, and reptilase time, indicating absence of significant quantitative or qualitative fibrin clot abnormalities. 
Finally, hereditary AFib was confirmed on the basis of history of renal disease in two family members, documented evidence of renal dysfunction secondary to Congo red amyloid deposits in glomeruli, histological evidence of amyloid fibrils at electron microscopy, immunohistochemistry of renal biopsies, and detection of FGA mutation (Figures 1A-E). The two AFib-probands (I.1 and II.2) were heterozygous for a novel single base pair deletion (c.1620delT), expected to alter the reading frame of the Aα-chain mRNA at codon 521: Phe521Leufs (numbering according to the mature protein) or Phe540Leufs (according to the recommended HGVS nomenclature, including the signal peptide). This mutation was not detected in family members II.1 and II.3 who had no proteinuria, confirming that this novel Aα-chain variant segregated with renal disease in this kindred (Figures 1A and1E). This thymine deletion is expected to cause loss of the last C-terminal 62 amino acids of the wild-type Aα-chain and, instead, the incorporation of 27 new residues (521-LSVRLSLGAQNLASSQIQRNPVLITLG-547) before premature termination of the translation at codon 548 (Figure 1D and Figure 2B). Therefore, the genetic data were concordant with immunohistochemistry analysis showing that the Congo red glomeruli deposits positively stained with a specific antihuman Aα-chain monoclonal antibody that recognized the mutant C-terminal portion of all amyloidogenic frameshifts (Figure 1D). [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] Sequence alignment of Phe521Leufs with amyloidogenic Aα-chain frameshift variants reported thus far showed that all invariably truncate at codon 548, producing highly similar mutant C-terminal sequences with a common portion of 15 amino acids, ASSQIQRNPVLITLG, residues 533-547 (Table 1). The C-terminus end of Phe521Leufs chain constitutes amyloid fibrils in vivo To determine which part of the Aα-chain contributes to the formation of amyloid fibrils, renal deposits from proband II.2 were extracted by laser-microdissection/liquid chromatography and tandem mass spectrometry (LMD/MS), [START_REF] Sethi | Laser microdissection and mass spectrometry?based proteomics aids the diagnosis and typing of renal amyloidosis[END_REF] and the Phe521Leufs amyloid proteome was compared to the proteome obtained from AFib patients carrying the Glu526Val missense variant, the most common amyloidogenic Aα-chain variant. Several independent samples (replicates) were analyzed for Phe521Leufs and Glu526Val variants. For both variants, the amyloid proteome profile indicated that Aα-chain showed the highest probability score in the deposits and the signature proteins (SAP and apoE) were present, giving confidence in the identification of amyloid (Figure 2A). Peptides corresponding to the new C-terminal sequence of Phe521Leufs were detected only in the case carrying this mutation and not in Glu526Val cases (Figure 2A). Detailed examination of the protein coverage of Phe521Leufs showed that the amyloid peptide contained all of the modified C-terminal sequence encoded by the Phe521Leufs allele (100% coverage) but not the wild-type C-terminal Aα-chain sequence after residue 521 (Figure 2B). 
In conclusion, the amyloid peptide characterized in Phe521Leufs deposits was a hybrid 49-mer fragment with the first 22 residues identical to the wild-type Aα-chain residues 499-520 (AFFDTASTGKTFPGFFSPMLGE), and the Cterminal 27 residues corresponding to the sequence encoded by Phe521Leufs (LSVRLSLGAQNLASSQIQRNPVLITLG) (Figure 2B). Therefore, our amyloid proteome analysis indicated that wild-type Aα-chain does not contribute to amyloid formation, and that only the C-terminal sequence generated by the Phe521Leufs allele is amyloid in vivo, a finding consistent with previous publications. [START_REF] Benson | Hereditary renal amyloidosis associated with a mutant fibrinogen alpha-chain[END_REF][START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF][START_REF] Yazaki | The first pure form of Ostertag-type amyloidosis in Japan: a sporadic case of hereditary fibrinogen Aα-chain amyloidosis associated with a novel frameshift variant[END_REF] VLITL is predicted to be a major cross-β-sheet signal of Phe521Leufs chain To explore why the mutant Phe521Leufs sequence contributes to amyloid formation, we first performed in silico analyses on the full-length sequences of Phe521Leufs and all frameshifts to search for motifs with a propensity to form amyloid (Table 1). [START_REF] Ventura | Short amino acid stretches can mediate amyloid formation in globular proteins: the Src homology 3 (SH3) case[END_REF][START_REF] Yoon | Detecting hidden sequence propensity for amyloid fibril formation[END_REF] We used AMYLPRED2, [START_REF] Tsolis | A consensus method for the prediction of "aggregation-prone" peptides in globular proteins[END_REF] which combines 11 individual algorithms, and PASTA2, [START_REF] Walsh | PASTA 2.0: an improved server for protein aggregation prediction[END_REF] which predicts which amino acid and β-sheet orientation is energetically favored. Combined algorithms indicated that, in all cases, the amino acid sequence encoded by each amyloidogenic frameshift variant created a common short stretch motif with high amyloidogenic propensity that involved a five-residue fragment (VLITL) (Figures 3A and3B). TANGO identified VLITL as the unique hot-spot with a high intrinsic propensity for β-aggregation regardless of pH and ionic strength (Figure 3A), and PASTA2 predicted that VLITL forms parallel in-register intermolecular β-sheets in amyloid (favorable pairing energy of -5.57, corresponding to 10.7 kcal/mol) (Figure 3B). Consistent with these predictions, VLITL is consistently present in Aα-chain frameshift variants associated with renal amyloidosis (Table 1), while VLITL is absent from the amino acid sequences of those that are not clinically amyloidogenic (http://site.geht.org/basefibrinogene/). 21,22 This genotype-phenotype correlationship suggests that nucleotide indel mutations producing mutant Aα-chains containing VLITL at their C-termini will be likely amyloidogenic (Table 1). More importantly, we show here that VLITL is part of renal fibrils of AFib-patient II.2 (residues 44-48 of the 49-mer amyloid peptide found ex vivo) (Figure 2B), reinforcing the view that VLITL indeed might confer amyloidogenic properties. 
VLITL is responsible for the amyloidogenic property of Phe521Leufs-derived peptides

To experimentally verify that VLITL is the major amyloid sequence determinant of Aα-chain frameshifts, we generated synthetic Aα-chain-derived peptides containing the predicted amyloid-prone VLITL motif, and compared their capacity for fibril formation with that of their VLITL-deleted counterparts. To this end, we designed AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNPVLITLG, the 49-mer full-length Phe521Leufs-peptide identified in AFib-patient's deposits, and its corresponding VLITL-deleted control (AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNP_G). We also investigated the amyloid propensity of the ASSQIQRNPVLITLG-peptide, common to all amyloidogenic Aα-chain variants, and its VLITL-deleted counterpart ASSQIQRNP_G. Fibrillogenesis experiments were performed under the same physiological conditions and amyloid formation was evaluated using ThT, a dye binding to β-sheet aggregates with a characteristic maximum emission fluorescence intensity at 485 nm, TEM to determine the morphological features of aggregates, and X-ray microdiffraction (XRD) to ensure that aggregates possess the typical cross-β architecture. [START_REF] Makin | Diffraction to study protein and peptide assemblies[END_REF][START_REF] Nilsson | Techniques to study amyloid fibril formation in vitro[END_REF] As shown in Figures 4E-G, the ASSQIQRNPVLITLG-peptide formed amyloid aggregates under these conditions. In contrast, ASSQIQRNP_G did not exhibit any ThT spectral change (Figure 4E), and did not form aggregates (amorphous or fibrillar in nature) (data not shown), demonstrating that ASSQIQRNP_G does not form any kind of amyloid species under the same physiological conditions as the ASSQIQRNPVLITLG-peptide. Therefore, our experiments show that the 49-mer and 15-mer Phe521Leufs-derived peptides are readily amyloidogenic in vitro but lose their fibril-forming ability when VLITL is absent, supporting that their amyloidogenic behavior depends on the amyloidogenic properties of VLITL. Next, we experimentally verified whether VLITL itself forms amyloid in vitro. A ThT assay of VLITL showed typical amyloid enhancement of fluorescence emission at 485 nm (Figure 4I). In addition, TEM revealed twisted mature fibrils with a ribbon-like appearance (Figures 4J and 4K) and XRD confirmed that VLITL-fibrils are amyloid with a highly ordered and well-defined cross-β structure (Figure 4L).

Structural similarities between in vitro and ex vivo Phe521Leufs-derived fibrils

The structures of Phe521Leufs-derived fibrils generated in vitro were compared to those formed in AFib-patient kidneys (individual II.2). To this end, we performed ex situ XRD studies of Phe521Leufs-fibrils using kidney cuts obtained without denaturing extraction procedures to preserve the natural state of aggregates formed in vivo, offering the opportunity to gain structural information on fibrils in their natural cellular context (Figure 3H). [START_REF] Briki | Synchrotron x-ray microdiffraction reveals intrinsic structural features of amyloid deposits in situ[END_REF] XRD showed that the signal of the equatorial reflection from natural fibrils was very close to that of fibrils generated by ASSQIQRNPVLITLG, recapitulating the structural features of renal AFib-fibrils in physiological conditions (Figures 4C, 4G, 4H, and 4L).
[START_REF] Riek | The activities of amyloids from a structural perspective[END_REF] Therefore, this mutant 15-mer Aα-chain-derived sequence likely governs the detailed structure of Phe521Leufs-fibrils formed in patient kidneys, and might constitute a valuable "minimalist" in vitro amyloid model.

Discussion
Here, in addition to reporting a new "private" Aα-chain amyloidogenic frameshift variant, the aim of this study was to focus on the molecular mechanisms by which fibrinogen Aα-chain frameshift variants become amyloidogenic in humans. Our results show that VLITL is a key amyloid-prone motif located at the C-terminally mutant end of all Aα-chain frameshift sequences, conferring amyloidogenic properties to Aα-chain frameshift variants. In this study, we first characterized which part of the Aα-chain was deposited in the Phe521Leufs-patient's tissues, and we demonstrated that a short 49-mer peptide deriving from the Phe521Leufs-allele formed the renal fibrils. These clinico-anatomopathological findings overlap those obtained from our French AFib-kindred carrying the Val522Alafs variant, for which we previously documented that a 49-amino acid peptide encoded by the Val522Alafs-allele was similarly deposited as fibrils in kidneys. [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] More recently, Yazaki et al. also confirmed that the carboxyl-terminal region of the amyloidogenic Ser523Argfs variant, containing VLITL, constituted the AFib-patient's renal deposits. [START_REF] Yazaki | The first pure form of Ostertag-type amyloidosis in Japan: a sporadic case of hereditary fibrinogen Aα-chain amyloidosis associated with a novel frameshift variant[END_REF] Therefore, three ex vivo biochemical analyses of amyloid deposits from AFib-patients carrying three distinct frameshift variants of the Aα-chain concordantly established that only the C-terminal fragments of Aα-chain frameshift variants contribute to amyloid formation, and not the wild-type Aα-chain. We then tested the hypothesis that these highly similar Aα-chain C-terminal mutant sequences, specific to amyloidogenic variants, might contain segments with a high propensity to self-aggregate into β-sheets, rendering mutant Aα-chains amyloidogenic. Our in silico analysis predicted that the C-terminal end of all mutant Aα-chains contained a major amyloid hot spot, VLITL. This prediction was in good concordance with the fact that VLITL was found within renal amyloid deposits of Phe521Leufs-, Val522Alafs- and Ser523Argfs-AFib patients. [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF][START_REF] Yazaki | The first pure form of Ostertag-type amyloidosis in Japan: a sporadic case of hereditary fibrinogen Aα-chain amyloidosis associated with a novel frameshift variant[END_REF] To experimentally verify that VLITL is a major β-sheet signal of Aα-chain frameshift variants, we evaluated the ability of synthetic Phe521Leufs-derived peptides containing the VLITL motif to polymerize into β-sheets in vitro. We also compared their capacity for fibril formation with that of their VLITL-deleted counterparts. We provide in vitro evidence that VLITL is a fibril-forming motif, necessary for the β-sheet arrangement of the full-length Aα-chain peptide deposited as extracellular fibrils in the Phe521Leufs-patient's kidneys, while its absence abrogates fibril formation of Phe521Leufs-derived peptides.
Therefore, combined in vitro and in vivo experiments support that the amyloidogenic behavior of the Phe521Leufs variant depends on the amyloidogenic properties of VLITL. The location of VLITL at the extreme end of renal fibrils probably renders this motif easily accessible for intermolecular interactions in vivo. VLITL satisfies two major criteria that are recognized as essential for triggering amyloid formation: high β-sheet propensity and an appropriate position within the mutant Aα-chains for nucleating fibril formation. [START_REF] Esteras-Chopo | The amyloid stretch hypothesis: recruiting proteins toward the dark side[END_REF][START_REF] Pastor | Hacking the code of amyloid formation: the amyloid stretch hypothesis[END_REF] It is important to note that Phe521Leufs-derived peptides form amyloid fibrils under "physiological" experimental conditions. On this basis, it was therefore particularly relevant to compare the structural characteristics of Phe521Leufs-derived fibrils formed in vitro with those formed in the Phe521fs-patient's kidneys. To this end, X-ray analysis of Aα-chain-fibrils from Phe521fs-patients was performed "in situ", directly on the pathological renal tissue without denaturing extraction procedures, to preserve the natural state of Aα-chain aggregates. [START_REF] Briki | Synchrotron x-ray microdiffraction reveals intrinsic structural features of amyloid deposits in situ[END_REF] This structural analysis revealed that ASSQIQRNPVLITLG-fibrils closely reproduced the β-sheet structural organization of Aα-chain-fibrils assembled in their natural cellular context. Therefore, ASSQIQRNPVLITLG, the mutant portion common to all amyloidogenic frameshifts, is a "minimalist" in vitro Aα-chain model suitable for all amyloidogenic Aα-chain frameshift variants. This in vitro Aα-chain model might be useful for testing anti-amyloid agents targeting the VLITL motif, because we showed that deleting VLITL disrupts the fibril aggregation propensity of synthetic Phe521Leufs-derived peptides. Also, this in vitro model can be useful for high-resolution structural investigations to shed light on the precise atomistic structure of Aα-chain frameshift-derived fibrils. The reasons why fragments of the mutant Aα-chain C-terminus accumulate and form amyloid in kidneys, and why the mutant Aα-chain could not be detected in the plasma of AFib-patients, are not understood, [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] but it has been proposed that this may be the consequence of an accelerated metabolism of the mutant Aα-chain. [START_REF] Hamidi Asl | Renal amyloidosis with a frame shift mutation in fibrinogen aalpha-chain gene producing a novel amyloid protein[END_REF] In the case of Aα-chain dysfibrinogenic variants, experiments carried out on the His494fs and Ala499fs frameshift variants, each introducing in its mutant C-terminal part a novel unpaired cysteine residue, revealed that these mutant Aα-chains circulated as disulphide-linked complexes with albumin. More importantly, it has been shown that these abnormal Aα-chain-albumin complexes directly altered the fibrin clot structure, conferring on the patients a dysfibrinogenic phenotype, clinically apparent as recurrent episodes of thromboembolism.
[START_REF] Homer | Novel Aalpha chain truncation (fibrinogen Perth) resulting in low expression and impaired fibrinogen polymerization[END_REF][START_REF] Margaglione | A frameshift mutation in the human fibrinogen Aalpha-chain gene (Aalpha(499)Ala frameshift stop) leading to dysfibrinogen San Giovanni Rotondo[END_REF][START_REF] Dempfle | Demonstration of heterodimeric fibrinogen molecules partially conjugated with albumin in a novel dysfibrinogen: fibrinogen Mannheim V[END_REF] Unlike Aα-chain dysfibrinogenic variants, the amino acid composition of the mutant C-terminus of amyloidogenic frameshift variants does not include a novel cysteine residue; thus, these variants are unlikely to be able to form abnormal disulphide conjugates with albumin. This could explain why, despite lacking Lys556, Lys580, and Lys601 in the αC-domain, which are important for factor XIII cross-linking, AFib-patients do not show clinical and biological evidence of clotting disorders. Further, we can speculate that these "albumin-free" amyloidogenic Aα-chains might be more susceptible to aberrant proteolytic cleavage, yielding high concentrations of a catabolic intermediate containing the amyloid-prone VLITL motif that serves as an amyloid core to initiate fibrillogenesis in the cellular environment of the kidney. Consistent with data from the literature, it is unlikely to be a coincidence that no mutations involving a cysteine residue in the Aα-chain have been associated with AFib, while such mutations explain a large number of congenital dysfibrinogenemias associated with circulating fibrinogen-albumin conjugates (http://site.geht.org/base-fibrinogene/). An instructive example is given by Fibrinogen Dusart, caused by the replacement of arginine 554 by cysteine (Arg554Cys) in the Aα-chain, which is associated with Aα-chain-554C-albumin complexes and recurrent episodes of thrombosis, [START_REF] Koopman | Molecular basis for fibrinogen Dusart (A alpha 554 Arg-->Cys) and its association with abnormal fibrin polymerization and thrombophilia[END_REF][START_REF] Mosesson | The relationship between the fibrinogen D domain self-association/cross-linking site (gammaXL) and the fibrinogen Dusart abnormality (Aalpha R554C-albumin): clues to thrombophilia in the "Dusart syndrome[END_REF] while replacement of Arg554 by leucine (Arg554Leu) causes renal AFib-amyloidosis. [START_REF] Benson | Hereditary renal amyloidosis associated with a mutant fibrinogen alpha-chain[END_REF] The six polypeptide chains of the normal fibrinogen molecule are covalently linked by numerous interchain disulphide bonds, with no free sulphydryl groups, [START_REF] Weisel | Mechanisms of fibrin polymerization and clinical implications[END_REF] supporting that an unpaired cysteine residue in the fibrinogen molecule might be critical for overall clot structure. In summary, we provide compelling evidence that VLITL is part of Phe521Leufs-fibrils formed in vivo, and is a major fibril-forming motif necessary for the β-sheet arrangement of the full-length Aα-chain Phe521Leufs-peptide identified in the AFib-patient's kidneys. This VLITL amyloid motif, exclusively present at the C-terminal end of amyloidogenic Aα-chain frameshift sequences and previously identified in amyloid deposits of Val522Alafs and Ser523Argfs AFib-patients, supports the view that VLITL is predictive of a high risk of Aα-chain amyloid formation. This finding sheds light on the as yet unresolved issue of the mechanisms underlying Aα-chain amyloidogenesis, yielding an uncommon example among hereditary amyloidoses.
performed and analyzed the in vitro fibrillogenesis studies. F.B and J.D performed and analyzed the X-ray diffraction studies. N.R-L and L.M performed and analyzed the microscopy and immunohistochemical analysis. A.D, J.T, and J.A performed the LMD/MS analysis. C.B performed the molecular screening. F.B, M.D contributed to the discussion of the data. G.G contributed to the discussion, provided critical review of the manuscript, edited and approved the final version of the manuscript. B.N contributed to the preparation of the manuscript and prepared the figures. P.D, S.V analyzed in silico calculations and designed in vitro experiments. S.V supervised all aspects of this work, developed the whole idea, designed the experiments, collected and interpreted the data and wrote the manuscript.

Conflict-of-interest disclosure: The authors declare no competing interests.

Correspondence: Pr. Sophie Valleix, Laboratoire de Génétique Moléculaire, Hôpital Necker-Enfants Malades, 75015 Paris, Université Paris Descartes, Sorbonne Paris Cité, Faculté de Médecine Paris; AP-HP, Paris, France. E-mail: [email protected]

Figure 1. Novel amyloidogenic "private" Aα-chain frameshift variant in a French family. (A) shows the family pedigree of the AFib-kindred with familial segregation of the Phe521Leufs variant due to a single thymine deletion at Phe521. Squares denote male family members, circles female family members, and solid symbols affected family members. (B) shows partial sequence of FGA exon 5 from II.2, indicating heterozygosity for deletion of a single thymine leading to superimposed sequences after codon 521.
(C) shows Congo red deposits in the glomeruli of renal specimens from II.2 (x400). (D) illustrates the amyloid fibrils found in the mesangium and under the glomerular basement membrane, appearing as straight unbranched fibrils with 10 nm diameter by EM (x1400). (E) shows positive staining with the specific Aα-chain antibody raised against the C-terminal Aα-chain mutant sequence (black arrow).

Figure 2. The C-terminal mutant region predicted by Phe521Leufs is detected in renal amyloid deposits, but not its wild-type counterpart. (A) shows the results of LMD/MS-based proteomics analysis of amyloid plaques from seven cases of AFib. Cases 1-6 carry the Aα-chain Glu526Val (E526V) variant and case 7 the Phe521Leufs (F521fs) variant. The identified proteins are listed by relative probability score for identity, and the top 20 of 103 proteins are shown. The columns show the protein name, the UniProt identifier (protein accession number in the UniProt database, http://www.uniprot.org/), the molecular weight of the protein, and one column per microdissection from the 7 patient specimens involved by AFib. The numbers indicate the number of total peptide spectra identified for each protein. Fibrinogen Aα-chain is the most abundant amyloidogenic protein in this sample set, consistent with AFib amyloidosis in each case. To show the presence of mutated proteins in the amyloid plaques, the raw mass spectrometry data files were searched using the human SwissProt database supplemented with the Aα-chain variants Glu526Val and Phe521Leufs. Consistent with the genetic analysis, cases 1-6 contain the tryptic peptide carrying the Glu526Val variant (Row 7, green rectangle), whereas this variant is not present in the case with the Phe521Leufs variant. In contrast, the novel tryptic peptides generated by the frameshift in the Phe521Leufs variant are only present in case 7 (Row 17, red rectangle). (B) shows the Aα-chain protein coverage in seven cases of AFib amyloidosis. Cases 1-6 carry the Glu526Val variant and case 7 the Phe521Leufs variant. The top line represents the C-terminal sequence of native Aα-chain. The amino acid residues of Phe521Leufs (F521 in red) and Glu526Val (E526 in green) are indicated. The first line of rectangles (blue) labeled "WT" represents the coverage of the wild-type Aα-chain by the mass spectrometry-based proteomic method used. Two samples (S1 and S2) from four patients are shown. In cases 1-6 (Glu526Val variant), most of the coverage is identical to the wild-type Aα-chain except for the tryptic peptide carrying the point mutation (green rectangle) instead of the amino acid present in the wild-type peptide. In case 7 (Phe521Leufs), the frameshift leads to a novel sequence indicated by the red rectangle and

Figure 3. In silico studies of human Aα-chain amyloidogenic frameshift variants. (A) The TANGO aggregation scores of the Phe521Leufs variant exhibited a very strong signal for β-sheet aggregation for VLITL at pH 2 (full line) and pH 8 (dotted line) at its C-terminus. (B) PASTA2 predicted that VLITL is likely to stabilize the cross-β core of fibrillar aggregates and predicts parallel in-register intermolecular β-sheets for VLITL.

Figure 4. In vitro fibrillogenesis and structural analysis of Aα-chain frameshift-derived polypeptides identified in amyloid deposits in vivo. (A) shows ThT fluorescence assays performed on AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNPVLITLG (red curve) and AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNP_G (blue curve).
Characteristic enhanced ThT fluorescence at 485 nm was only observed for AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNPVLITLG, suggesting that this peptide formed amyloid β-sheet structures. In contrast, a spectral red shift at 510 nm was recorded for AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNP_G, suggesting that it does not form typical amyloid β-sheet structures. (B) Transmission electron micrographs showed that AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNPVLITLG forms fibrillar aggregates and AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNP_G spherical aggregates. (C) Fibrils formed by AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNPVLITLG exhibited a meridional peak at 4.72 Å, indicating the spacing between β-strands within fibrils, associated with an equatorial reflection at 11.50 Å. This microdiffraction pattern confirms the presence of the characteristic amyloid cross-β architecture constituted by intermolecular β-sheets with β-strands oriented perpendicular to the fibril axis. (D) presents an extracted radial profile from the 2D pattern shown in (C). The noisy background is due to the small angle used for the profile extraction to avoid the intense peaks from salts in the sample. (E) shows data from ThT fluorescence assays performed on ASSQIQRNPVLITLG (red curve) and ASSQIQRNP_G (blue curve), revealing that only ASSQIQRNPVLITLG induces enhanced fluorescence at 485 nm and that only the peptide containing VLITL forms aggregates with β-sheet conformation. (F) Transmission electron micrographs showed that ASSQIQRNPVLITLG forms aggregates of fibrillar morphology. (G) shows XRD profiles of ASSQIQRNPVLITLG fibrils displaying the typical "cross-β" microdiffraction pattern of amyloid fibrils with a spacing of 4.75 Å along the meridional direction and a periodicity of 9.90 Å in the equatorial direction. (H) illustrates the in situ X-ray microdiffraction pattern from a cut of the pathological kidney specimen of patient II.2 with in vivo amyloid fibrils, showing meridional (4.71 Å) and equatorial (10.0 Å) reflections. (I) shows a ThT fluorescence assay of VLITL with increased fluorescence intensity at 485 nm, demonstrating that the dye bound to β-sheet aggregates formed by VLITL.

Figure 5. Structural characteristics of AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNP_G assemblies. (A) shows the morphological features of aggregates. Note that these aggregates are spherical and do not form fibrillar structures, in contrast to amyloid fibrils formed by AFFDTASTGKTFPGFFSPMLGELSVRLSLGAQNLASSQIQRNPVLITLG (Fig. 4B). (B) X-ray microdiffraction did not reveal a cross-β microdiffraction pattern, demonstrating that the aggregates are not amyloid.
Table 1
Aα-chain frameshift variant associated with renal amyloidosis (reference) | FGA nucleotide variation | Predicted C-terminal mutant Aα-chain encoded by the respective shifted reading frame, with premature stop at codon 548 | Renal Aα-chain fibril composition
Met517_Phe521delinsGlnSerfs*28 (p.Met536_Phe540delinsGlnSerfs*28) (Kang et al., Kidney Int 2005) | c.1606_1620delATGTTAGGAGAGTTTinsCA | …QSVRLSLGAQNLASSQIQRNPVLITLG | No
Phe521Leufs*28 (p.Phe540Leufs*28) (this report; new variant reported here) | c.1620delT | …LSVRLSLGAQNLASSQIQRNPVLITLG | Yes
Phe521Serfs*27 (p.Phe540Ser*27) (The XVth ISA) | c.1619_1622delTTGT | …SVRLSLGAQNLASSQIQRNPVLITLG | Yes
Val522Alafs*27 (p.Val541Alafs*27) (Hamidi et al., Blood 1997) | c.1622delT | …AVRLSLGAQNLASSQIQRNPVLITLG | Yes (our previous AFib-kindred)
Ser523Argfs*26 (p.Ser543Argfs*26) (Yazaki et al., Amyloid 2015) | c.1624_1627delAGTG | …RLSLGAQNLASSQIQRNPVLITLG | Yes
Glu524Glufs*25 (p.Glu543Glufs*25) (Uemichi et al., Blood 1996) | c.1629delG | …LSLGAQNLASSQIQRNPVLITLG | No
Thr525Thrfs*24 (p.Thr544Thrfs*24) (Gillmore et al., J Am Soc Nephrol 2009) | c.1632delT | …SLGAQNLASSQIQRNPVLITLG | Yes
Ser532Serfs*16 (p.Ser551Serfs*16) (The XIIIth ISA) | c.1653delT | …ASSQIQRNPVLITLG | No

Table 1. Summary of FGA variants associated with renal amyloidosis. Irrespective of the nucleotide position of the FGA deletion/insertion anomalies, all frameshifts truncate at codon 548 and generate similar mutant C-terminal sequences. The third column of the Table shows the predicted C-terminal portions of the frameshifts: in black is the portion that differs by four or five residues between frameshifts, and in blue is the common 15-amino-acid sequence shared by all frameshifts, ASSQIQRNPVLITLG, which contains the VLITL motif (underlined). All amyloidogenic Aα-chain frameshift variants are listed according to the two nomenclatures. To convert the conventional mature protein amino acid numbering to the HGVS nomenclature, add 19 residues for Aα-chain changes.

Acknowledgments
We thank all of the family members for their participation in this study, and l'Association Française contre l'Amylose. We thank Fabrice Senger and Marie-Paule Ramée for technical assistance with light microscopy and transmission electron microscopy, respectively. We thank Céline Leroux for her assistance with FGA sequencing, and J D Theis for his assistance in proteomics analysis at the Mayo Clinic. This work was supported in part by grants from l'Association Française contre l'Amylose.
01775199
en
[ "info.info-db", "info.info-wb" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01775199/file/main.pdf
Iovka Boneva Jose Lozano Sławek Staworko Relational to RDF Data Exchange in Presence of a Shape Expression Schema We study the relational to RDF data exchange problem, where the target constraints are specified using Shape Expression schema (ShEx). We investigate two fundamental problems: 1) consistency which is checking for a given data exchange setting whether there always exists a solution for any source instance, and 2) constructing a universal solution which is a solution that represents the space of all solutions. We propose to use typed IRI constructors in source-totarget tuple generating dependencies to create the IRIs of the RDF graph from the values in the relational instance, and we translate ShEx into a set of target dependencies. We also identify data exchange settings that are key covered, a property that is decidable and guarantees consistency. Furthermore, we show that this property is a sufficient and necessary condition for the existence of universal solutions for a practical subclass of weakly-recursive ShEx. Introduction Data exchange can be seen as a process of transforming an instance of one schema, called the source schema, to an instance of another schema, called the target schema, according to a set of rules, called source-to-target tuple generating dependencies (sttgds). But more generally, for a given source schema, any instance of the target schema that satisfies the dependencies is a solution to the data exchange problem. Naturally, there might be no solution, and then we say that the setting is inconsistent. Conversely, there might be a possibly infinite number of solutions, and a considerable amount of work has been focused on finding a universal solution, which is an instance (potentially with incomplete information) that represents the entire space of solutions. Another fundamental and well-studied problem is checking consistency of a data exchange setting i.e., given the source and target schemas and the st-tgds, does a solution exist for any source instance. For relational databases the consistency problem is in general known to be undecidable [START_REF] Beeri | The implication problem for data dependencies[END_REF][START_REF] Kolaitis | The complexity of data exchange[END_REF] but a number of decidable and even tractable cases has been identified, for instance when a set of weakly-acyclic dependencies is used [START_REF] Fagin | Data exchange: semantics and query answering[END_REF]. Resource Description Framework (RDF) [2] is a well-established format for publishing linked data on the Web, where triples of the form (subject, predicate, object) allow to represent an edge-labeled graph. While originally RDF was introduced schemafree to promote its adoption and wide-spread use, the use of RDF for storing and exchanging data among web applications has prompted the development of schema languages for RDF [START_REF]Shapes Constraint Language (SHACL)[END_REF][START_REF] Ryman | Oslc resource shape: A language for defining constraints on linked data[END_REF][START_REF] Sirin | Data Validation with OWL Integrity Constraints[END_REF]. One such schema language, under continuous development, is Shape Expressions Schemas (ShEx) [START_REF] Boneva | Semantics and Validation of Shapes Schemas for RDF[END_REF][START_REF] Staworko | Complexity and Expressiveness of ShEx for RDF[END_REF], which allows to define structural constraints on nodes and their immediate neighborhoods in a declarative fashion. 
In the present paper, we study the problem of data exchange where the source is a relational database and the target is an RDF graph constrained with a ShEx schema. Although an RDF graph can be seen as a relational database with a single ternary relation Triple, RDF graphs require using Internationalized Resource Identifiers (IRIs) as global identifiers for entities. Consequently, the framework for data exchange for relational databases cannot be directly applied as is and we adapt it with the help of IRI constructors, functions that assign IRIs to identifiers from a relational database instance. Their precise implementation is out of the scope of this paper and belongs to the vast domain of entity matching [START_REF] Köpcke | Frameworks for Entity Matching: A Comparison[END_REF]. Example 1. Consider the relational database of bug reports in Figure 1, where the relation Bug stores a list of bugs with their description and ID of the user who reported the bug, the name of each user is stored in the relation User and her email in the relation Email. Additionally, the relation Rel identifies related bug reports for any bug report. Now, suppose that we wish to share the above data with a partner that has an already existing infrastructure for consuming bug reports in the form of RDF whose structure is described with the following ShEx schema (where : is some default prefix): TBug → {:descr :: Lit 1 ,:rep :: TUser 1 ,:related :: TBug * } TUser → {:name :: Lit 1 ,:email :: Lit 1 ,:phone :: Lit ? } The above schema defines two types of (non-literal) nodes: TBug for describing bugs and TUser for describing users. Every bug has a description, a user who reported it, and a number of related bugs. Every user has a name, an email, and an optional phone number. The reserved symbol Lit indicates that the corresponding value is a literal. The mapping of the contents of the relational database to RDF is defined with the following logical rules (the free variables are implicitly universally quantified). Bug(b, d, u) ⇒ Triple(bug2iri(b),:descr, d) ∧ TBug(bug2iri(b)) ∧ Triple(bug2iri(b),:rep, pers2iri(u)) Rel(b 1 , b 2 ) ⇒ Triple(bug2iri(b 1 ),:related, bug2iri(b 2 )) User(u, n) ⇒ Triple(pers2iri(u),:name, n) ∧ TUser(pers2iri(u)) User(u, n) ∧ Email(u, e) ⇒ Triple(pers2iri(u),:email, e) ∧ Lit(e) On the left-hand-side of each rule we employ queries over the source relational database, while on the right-hand-side we make corresponding assertions about the triples in the target RDF graph and the types of the nodes connected by the triples. The atomic values used in relational tables need to be carefully converted to IRIs with the help of IRI constructors pers2iri and bug2iri. The constructors can be typed i.e., the IRI they introduce are assigned a unique type in the same st-tgd. We point out that in general, IRI constructors may use external data sources to properly assign to the identifiers from the relational database unique IRIs that identify the object in the RDF domain. For instance, the user Jose is our employee and is assigned the corresponding IRI emp:jose, the user Edith is not an employee but a registered user of our bug reporting tool and consequently is assigned the IRI user:edith, and finally, the user Steve89 is an anonymous user and is assigned a special IRI indicating it anon:3. Figure 2 presents an RDF instance that is a solution to the problem at hand. 
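To make the mapping concrete, the sketch below (ours, not part of the original setting) applies the source-to-target dependencies of Example 1 procedurally to a small relational instance. The sample rows and the string-concatenation bodies of bug2iri and pers2iri are hypothetical stand-ins: the paper only requires the IRI constructors to be functions into Iri with pairwise disjoint ranges, and notes that real constructors may consult external sources for entity matching.

```python
# Minimal, illustrative application of the st-tgds of Example 1.
# The rows below are hypothetical, not the exact instance of Figure 1.

bug   = [(1, "Boom!", "jose"), (2, "Kaboom!", "edith")]   # Bug(bid, descr, uid)
rel   = [(1, 2)]                                          # Rel(bid1, bid2)
user  = [("jose", "Jose"), ("edith", "Edith")]            # User(uid, name)
email = [("jose", "[email protected]")]                        # Email(uid, email)

def bug2iri(b):  return f"bug:{b}"       # stand-in IRI constructor
def pers2iri(u): return f"user:{u}"      # stand-in IRI constructor

triples, typing = set(), {}              # target graph and its node typing

def assert_type(node, shape):
    typing.setdefault(node, set()).add(shape)

# Bug(b, d, u) => Triple(bug2iri(b), :descr, d) /\ TBug(bug2iri(b))
#               /\ Triple(bug2iri(b), :rep, pers2iri(u))
for b, d, u in bug:
    triples.add((bug2iri(b), ":descr", d))
    triples.add((bug2iri(b), ":rep", pers2iri(u)))
    assert_type(bug2iri(b), "TBug")

# Rel(b1, b2) => Triple(bug2iri(b1), :related, bug2iri(b2))
for b1, b2 in rel:
    triples.add((bug2iri(b1), ":related", bug2iri(b2)))

# User(u, n) => Triple(pers2iri(u), :name, n) /\ TUser(pers2iri(u))
for u, n in user:
    triples.add((pers2iri(u), ":name", n))
    assert_type(pers2iri(u), "TUser")

# User(u, n) /\ Email(u, e) => Triple(pers2iri(u), :email, e) /\ Lit(e)
emails = dict(email)
for u, n in user:
    if u in emails:
        triples.add((pers2iri(u), ":email", emails[u]))
        assert_type(emails[u], "Lit")

print(sorted(triples))
# A full solution must additionally satisfy the target ShEx schema: every TUser node
# still needs an :email triple, which is why missing values give rise to null literals.
```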
We point out that the instance uses a (labeled) null literal ⊥1 for the email of Steve89, which is required by the ShEx schema but is missing in our database.
The presence of a target schema raises the question of consistency. On the one hand, we can prove that for any instance of the relational database in Example 1 there exists a target solution that satisfies the schema and the set of source-to-target tuple generating dependencies. On the other hand, suppose we allow a user to have multiple email addresses (by changing the key of Email to both uid and email). Then, the setting would not be consistent, as one could construct an instance of the relational database, with multiple email addresses for a single user, for which there would be no solution. Our investigation provides a preliminary analysis of the consistency problem for relational to RDF data exchange with target ShEx schema. Our contribution can be summarized as follows:
- a formalization of relational to RDF data exchange with target ShEx schema and typed IRI constructors;
- a decidable characterization of a fully-typed key-covered data exchange setting that is a sufficient and necessary condition for consistency;
- an additional restriction of weak-recursion on ShEx schemas that ensures the existence of a universal solution.
Related Work. Relational Data Exchange, Consistency. The theoretical foundations of data exchange for relational databases are laid in [START_REF] Arenas | Relational and XML Data Exchange[END_REF][START_REF] Fagin | Data exchange: semantics and query answering[END_REF]. Source-to-target dependencies with Skolem functions were introduced by nested dependencies [START_REF] Fuxman | Nested Mappings: Schema Mapping Reloaded[END_REF] in order to improve the quality of the data exchange solution. General existentially quantified functions are possible in second order tgds [START_REF] Arenas | The Language of Plain SO-tgds: Composition, Inversion and Structural Properties[END_REF]. Consistency in the case of relational data exchange is undecidable, and decidable classes usually rely on chase termination ensured by restrictions such as acyclicity, guarded dependencies, or restrictions on the structure of source instances. The consistency criterion that we identify in this paper is orthogonal and is particular to the kind of target constraints imposed by ShEx schemas. In [START_REF] Marnette | Static Analysis of Schema-mappings Ensuring Oblivious Termination[END_REF], static analysis is used to test whether a target dependency is implied by a data exchange setting; these tests, however, rely on chase termination. Consistency is an important problem in XML data exchange [START_REF] Arenas | Relational and XML Data Exchange[END_REF] but the techniques developed for XML do not apply here. Value Invention, Relational to RDF Data Exchange. Value invention is used in the purely relational setting for generating null values. Tools such as Clio [START_REF] Fagin | Clio: Schema Mapping Creation and Data Exchange[END_REF] and ++Spicy [START_REF] Marnette | ++Spicy: an Open-Source Tool for Second-Generation Schema Mapping and Data Exchange[END_REF] implement Skolem functions as concatenation of their arguments. IRI value invention is considered by R2RML [START_REF] R2rml | RDB to RDF Mapping Language[END_REF], a W3C standard for writing customizable relational to RDF mappings.
The principle is similar to what we propose here. A R2RML mapping allows to specify logical tables (i.e. very similar to left-hand-sides of source-to-target dependencies), and then how each row of a logical table is used to produce one or several triples of the resulting RDF graph. Generating IRI values in the resulting graph is done using templates that specify how a fixed IRI part is to be concatenated with the values of some of the columns of the logical table. R2RML does not allow to specify structural constraints on the resulting graph, therefore the problem of consistency is irrelevant there. In [START_REF] Sequeda | On Directly Mapping Relational Databases to RDF and OWL[END_REF], a direct mapping that is a default automatic way for translating a relational database to RDF is presented. The main difference with our proposal and with R2RML is that the structure of the resulting RDF graph is not customizable. In [START_REF] Boneva | Graph Data Exchange with Target Constraints[END_REF] we studied relational to graph data exchange in which the target instance is an edge labelled graph and source-to-target and target dependencies are conjunctions of nested regular expressions. Such a framework raises a different kind of issues, among which is the materialization of a solution, as a universal solution is not necessarily a graph itself, but a graph pattern in which some edges carry regular expressions. On the other hand, IRI value invention is not relevant in such framework. Organization. In Section 2 we present basic notions. In Section 3 we show how ShEx schemas can be encoded using target dependencies. In Section 4 we formalize relational to RDF data exchange. In Section 5 we study the problem of consistency. And finally, in Section 6 we investigate the existence of universal solutions. Conclusions and directions of future work are in Section 7. The missing proofs can be found in the full version [?]. Preliminaries First-order logic. A relational signature R (resp. functional signature F) is a finite set of relational symbols (resp. functional symbols), each with fixed arity. A type symbol is a relational symbol with arity one. A signature is a set of functional and relational symbols. In the sequel we use R, resp. F, resp. T for sets of relational, resp. functional, resp. type symbols. We fix an infinite and enumerable domain Dom partitioned into three infinite subsets Dom = Iri ∪ Lit ∪ Blank of IRIs, literals, and blank nodes respectively. Also, we assume an infinite subset NullLit ⊆ Lit of null literals. In general, by null values we understand both null literals and blank nodes and we denote them by Null = NullLit ∪ Blank. Given a signature W = R ∪ F, a model (or a structure) of W is a mapping M that with any symbol S in W associates its interpretation S M s.t.: -R M ⊆ Dom n for any relational symbol R ∈ R of arity n; f M : Dom n → Dom, which is a total function for any function symbol f ∈ F of arity n. We fix a countable set V of variables and reserve the symbols x, y, z for variables, and the symbols x, y, z for vectors of variables. We assume that the reader is familiar with the syntax of first-order logic with equality and here only recall some basic notions. A term over F is either a variable in V , or a constant in Dom, or is of the form f (x) where f ∈ F and the length of x is equal to the arity of f ; we remark that we do not allow nesting of function symbols in terms. 
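Purely as an illustration (not part of the paper), the definitions above can be made concrete as follows: a model interprets every relation symbol as a set of tuples over Dom and every function symbol as a total function, and the partition of Dom into IRIs, literals and blank nodes can be mimicked with tagged values. The symbols User and pers2iri are reused from Example 1 only for familiarity.

```python
# Illustrative rendering of a structure over the signature {User/2} and {pers2iri/1}.

def iri(s):   return ("iri", s)      # element of Iri
def lit(s):   return ("lit", s)      # element of Lit
def blank(s): return ("blank", s)    # element of Blank

M_relations = {
    "User": {(lit("jose"), lit("Jose")), (lit("edith"), lit("Edith"))},
}
M_functions = {
    # total function on Dom; every argument is mapped to an IRI
    "pers2iri": lambda v: iri("user:" + str(v[1])),
}

# Interpreting the term pers2iri(x) under the valuation x -> lit("jose"):
valuation = {"x": lit("jose")}
print(M_functions["pers2iri"](valuation["x"]))   # ('iri', 'user:jose')
```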
A dependency is a formula of the form ∀x.ϕ ⇒ ∃y.ψ and in the sequel, we often drop the universal quantifier, write simply ϕ ⇒ ∃y.ψ, and assume that implicitly all free variables are universally quantified. The semantics of first-order logic formulas is captured with the entailment relation M, ν |= φ defined in the standard fashion for a model M , a first-order logic formula φ with free variables x and a valuation ν : x → Dom. The entailment relation is extended to sets of formulas in the canonical fashion: M |= {ϕ 1 , . . . , ϕ n } iff M |= ϕ i for every i ∈ {1, . . . , k}. Relational Databases. We model relational databases using relational structures in the standard fashion. For our purposes we are only concerned with functional dependencies, which include key constraints. Other types of constraints, such as inclusion dependencies and foreign key constraints, are omitted in our abstraction. A relational schema is a pair R = (R, Σ fd ) where R is a relational signature and Σ fd is a set of functional dependencies (fds) of the form R : X → Y , where R ∈ R is a relational symbol of arity n, and X, Y ⊆ {1, . . . , k}. An fd R : X → Y is a short for the following formula ∀x, y. R(x) ∧ R(y) ∧ i∈X (x i = y i ) ⇒ j∈Y (x j = y j ). An instance of R is a model I of R and we say that I is valid if I |= Σ fd . The active domain dom(I) of the instance I is the set of values from Dom that appear in R I for some relational symbol R in R. Unless we state otherwise, in the sequel we consider only instances that use only constants from Lit \ NullLit. RDF Graphs and Shape Expressions Schemas. Recall that an RDF graph, or graph for short, is a set of triples in (Iri ∪ Blank) × Iri × (Iri ∪ Blank ∪ Lit). The set of nodes of the graph G is the set of elements of Iri ∪ Blank ∪ Lit that appear on first or third position of a triple in G. We next define the fragment of shape expression schemas that we consider, and that was called RBE 0 in []. Essentially, a ShEx is a collection of shape names, and each comes with a definition consisting of a set of triple constraints. A triple constraint indicates a label of an outgoing edge, the shape of the nodes reachable with this label, and a multiplicity indicating how many instances of this kind of edge are allowed. We remark that the constraints expressible with this fragment of ShEx, if non-recursive, can also be captured by a simple fragment of SHACL with AND operator only. Formally, a multiplicity is an element of {1, ?, *, +} with the natural interpretation: 1 is exactly one occurrence, ? stands for none or one occurrence, * stands for an arbitrary number of occurrences, and + stands for a positive number of occurrences. A triple constraint over a finite set of shape names T is an element of Iri × (T ∪ {Lit}) × {1, ?, *, +}, where Lit is an additional symbol used to indicate that a node is to be a literal. Typically, we shall write a triple constraint (p, T, µ) as p :: T µ . Now, a shape expressions schema, or ShEx schema for short, is a couple S = (T , δ) where T is a finite set of shape names, and δ is shape definition function that maps every symbol T ∈ T to a finite set of triple constraints over T such that for every shape name T and for every IRI p, δ(T ) contains at most one triple constraint using p. For a finite set T of shape names, a T -typed graph is a couple (G, typing) where G is a graph and typing is a mapping from the nodes of G into 2 T ∪{Lit} that with every node of G associates a (possibly empty) set of types. Let S = (T , δ) be a ShEx schema. 
The T -typed graph (G, typing) is correctly typed w.r.t. S if it satisfies the constraints defined by δ i.e., for any node n of G: -if Lit ∈ typing(n), then n ∈ Lit; -if T ∈ typing(n) then n ∈ Iri i.e., |K| = 1 if µ = 1, |K| ≤ 1 if µ = ?, and |K| ≥ 1 if µ = + (there is no constraint if µ = *). For instance, a correct typing for the graph in Figure 2 assigns the type TBug to the nodes bug:1, bug:2, bug:3, and bug:4; the type TUser to the nodes emp:jose, user:edith, and anon:3; and Lit to every literal node. ShEx Schemas as Sets of Dependencies In this section we show how to express a ShEx schema S = (T , δ) using dependencies. First, we observe that any T -typed graph can be easily converted to a relational structure over the relational signature G T = {Triple} ∪ T ∪ {Lit}, where Triple is a ternary relation symbol for encoding triples, and T ∪ {Lit} are monadic relation symbols indicating node types (details in Appendix A). Consequently, in the sequel, we may view a T -typed graph as the corresponding relational structure (or even a relational database over the schema (G T , ∅)). Next, we define auxiliary dependencies for any two T, S ∈ T and any p ∈ Iri We point out that in terms of the classical relational data exchange, tc and mult ≥1 are tuple generating dependencies (tgds), and mult ≤1 is an equality generating dependency (egd). We capture the ShEx schema S with the following set of dependencies: Σ S = Relational to RDF Data Exchange In this section, we present the main definitions for data exchange. Definition 1 (Data exchange setting). A relational to RDF data exchange setting is a tuple E = (R, S, Σ st , F, F int ) where R = (R, Σ fd ) is a source relational schema, S = (T , δ) is a target ShEx schema, F is a function signature, F int as an interpretation for F that with every function symbol f in F of arity n associates a function from Dom n to Iri, and Σ st is a set of source-to-target tuple generating dependencies, clauses of the form ∀x.ϕ ⇒ ψ, where ϕ is a conjunction of atomic formulas over the source signature R and ψ is a conjunction of atomic formulas over the target signature G T ∪F. Furthermore, we assume that all functions in F int have disjoint ranges i.e., for f 1 , f 2 ∈ F int if f 1 = f 2 , then ran(f 1 ) ∩ ran(f 2 ) = ∅. Definition 2 (Solution). Take a data exchange setting E = (R, S, Σ st , F, F int ), and let I be a valid instance of R. Then, a solution for I w.r.t. E is any T -typed graph J such that I ∪ J ∪ F int |= Σ st and J |= Σ S . A homomorphism h : I 1 → I 2 between two relational structures I 1 , I 2 of the same relational signature R is a mapping from dom(I 1 ) to dom(I 2 ) that 1) preserves the values of non-null elements i.e., h(a) = a whenever a ∈ dom(I 1 ) \ Null, and 2) for every R ∈ R and every a ∈ R I1 we have h(a) ∈ R I2 , where h(a) = (h(a 1 ), . . . , h(a n )) and n is the arity of R. Definition 3 (Universal Solution) . Given a data exchange setting E and a valid source instance I, a solution J for I w.r.t. E is universal, if for any solution J for I w.r.t. E there exists a homomorphism h : J → J . As usual, a solution is computed using the chase. We use a slight extension of the standard chase (explained in [?]) in order to handle function terms, which in our case is simple (compared to e.g. [START_REF] Arenas | The Language of Plain SO-tgds: Composition, Inversion and Structural Properties[END_REF]) as the interpretation of function symbols is given. Consistency Definition 4 (Consistency). 
A data exchange setting E is consistent if every valid source instance admits a solution. We fix a relational to RDF data exchange setting E = (R, S, Σ st , F, F int ) and let S = (T , δ). We normalize source-to-target tuple generating dependencies so that their right-hand-sides use exactly one Triple atom and at most two type assertions on the subject and the object of the triple; such normalization is possible as our st-tgds do not use existential quantification. In this paper, we restrict our investigation to completely typed st-tgds having both type assertions, and therefore being of the following form ∀x. ϕ ⇒ Triple(s, p, o) ∧ T s (s) ∧ T o (o), where s is the subject term, T s is the subject type, p ∈ Iri is the predicate, o is the object term, and T o is the object type. Because the subject of a triple cannot be a literal, we assume that s = f (y) for f ∈ F and for y ⊆ x, and T s ∈ T . As for the object, we have two cases: 1) the object is an IRI and then o = g(z) for g ∈ F and for z ⊆ x, and T o ∈ T , or 2) the object is literal o = z for z ∈ x and T o = Lit. Moreover, we assume consistency with the target ShEx schema S i.e., for any st-tgd in Σ st with source type T s , predicate p, and object type T o we have p :: T µ o ∈ δ(T s ) for some multiplicity µ. Finally, we assume that every IRI constructor in F is used with a unique type in T . When all these assumptions are satisfied, we say that the source-to-target tuple generating dependencies are fully-typed. While the st-tgds in Example 1 are not fully-typed, an equivalent set of fully-typed dependencies can be easily produced if additionally appropriate foreign keys are given. For instance, assuming the foreign key constraint Bug[uid ] ⊆ User[uid ], the first rule with Bug on the left-hand-side is equivalent to Now, two st-tgds are contentious if both use the same IRI constructor f for their subjects and have the same predicate, hence the same subject type T s and object type T o , and p :: T µ o ∈ δ(T s ) with µ = 1 or µ = ?. We do not want two contentious sttgds to produce two triples with the same subject and different objects. Formally, take two contentious st-tgds σ 1 and σ 2 and assume they have the form (for i ∈ {1, 2}, and assuming x 1 , x 2 , y 1 , y 2 are pairwise disjoint) σ i = ∀x i , y i . ϕ i (x i , y i ) ⇒ Triple(f (x i ), p, o i ) ∧ T s (f (x i )) ∧ T o (o i ). The st-tgds σ 1 and σ 2 are functionally overlapping if for every valid instance I of R I ∪ F int |= ∀x 1 , y 1 , x 2 , y 2 . ϕ 1 (x 1 , y 1 ) ∧ ϕ 2 (x 2 , y 2 ) ∧ x 1 = x 2 ⇒ o 1 = o 2 . Finally, a data-exchange setting is key-covered if every pair of its contentious st-tgds is functionally overlapping. Note that any single st-tgd may be contentious with itself. Theorem 1. A fully-typed data exchange setting is consistent if and only if it is keycovered. The sole reason for the non-existence of a solution for a source instance I is a violation of some egd in Σ S . The key-covered property ensures that such egd would never be applicable. Intuitively, two egd-conflicting objects o 1 and o 2 are necessarily generated by TBug TUser TEmp TTest Fig. 3: Dependency graph with dashed weak edges and plain strong edges two contentious st-tgds. The functional-overlapping criterion guarantees that the terms o 1 and o 2 are "guarded" by a primary key in the source schema, thus cannot be different. Theorem 2. It is decidable whether a fully-typed data exchange setting is key-covered. 
The proof uses a reduction to the problem of functional dependency propagation [START_REF] Klug | Determining View Dependencies Using Tableaux[END_REF]. Universal Solution In this section, we identify conditions that guarantee the existence of a universal solution. Our results rely on the existence of a universal solution for sets of weakly-acyclic sets of dependencies for relational data exchange [START_REF] Fagin | Data exchange: semantics and query answering[END_REF]. As the tgds and egds that we generate are driven by the schema (cf. Section 3), we introduce a restriction on the ShEx schema that yields weakly-acyclic sets of dependencies, and consequently, guarantees the existence of universal solution. The dependency graph of a ShEx schema S = (T , δ) is the directed graph whose set of nodes is T and has an edge (T, T ) if T appears in some triple constraint p :: T µ of δ(T ). There are two kinds of edges: strong edge, when the multiplicity µ ∈ {1, +}, and weak edge, when µ ∈ {*, ?}. The schema S is strongly-recursive if its dependency graph contains a cycle of strong edges only, and is weakly-recursive otherwise. Take for instance the following extension of the ShEx schema from The dependency graph of this schema, presented in Figure 3. contains two cycles but neither of them is strong. Consequently, the schema is weakly-recursive (and naturally so is the ShEx schema in Example 1). As stated above, a weakly-recursive ShEx schema guarantees a weakly-acyclic set of dependencies and using results from [START_REF] Fagin | Data exchange: semantics and query answering[END_REF] we get Proposition 1. Let E = (R, S, Σ st , F, F int ) be a data exchange setting and I be a valid instance of R. If S is weakly recursive, then every chase sequence of I with Σ st ∪ Σ S is finite, and either every chase sequence of I with Σ st fails, or every such chase sequence computes a universal solution of I for E. Conclusion and Future Work We presented a preliminary study of the consistency problem for relational to RDF data exchange in which the target schema is ShEx. Consistency is achieved by fully-typed and key-covered syntactic restriction of st-tgds. An open problem that we plan to investigate is consistency when the fully-typed restriction is relaxed; we believe that it is achievable if we extend the definition of contentious st-tgds. Another direction of research is to consider a larger subset of ShEx. Finally, we plan to extend our framework to typed literals which are not expected to bring fundamental difficulties but are essential for practical applications. A ShEx Schemas as Sets of Dependencies Lemma 2. For any T -typed graph (G, typing), let rdf -to-inst(G, typing) be defined as below. For any I instance of (G T , ∅) satisfying Triple I ⊆ (Iri ∪ Blank) × Iri × (Iri ∪ Blank ∪ Lit) and Lit I ⊆ Lit and T I ⊆ Iri for all T ∈ T , let inst-to-rdf (I) be defined as below. rdf -to-inst(G, typing) ={Triple(s, p, o) | (s, p, o) ∈ G} ∪ {T (n) | n node of G, T ∈ typing(n)} inst-to-rdf (I) =(G, typing) with G = {(s, p, o) | Triple(s, p, o) ∈ I} and typing(n) = {T ∈ T ∪ {Lit} | T (n) ∈ I} for any n node of G Then for any T -typed graph (G, typing) and any instance I of (G T , ∅) in the domain of inst-to-rdf , the following hold: 1. rdf -to-inst(G, typing) is an instance of (G T , ∅); 2. inst-to-rdf (I) is a T -typed graph; 3. inst-to-rdf (rdf -to-inst(G, typing) ) is defined and is equal to (G, typing). Proof. 1. Immediately follows from the definition rdf -to-inst(G, typing). 2. 
Immediately follows from the definition of inst-to-rdf (I). Let I = rdf -to-inst(G, typing). By definition, inst-to-rdf (I) is defined if (a) Triple I ⊆ (Iri ∪ Blank) × Iri × (Iri ∪ Blank ∪ Lit) and (b) Lit I ⊆ Lit and (c) T I ⊆ Iri for all T ∈ T . Note that (a) follows from the definition of rdf -to-inst and the fact that G is an RDF graph. Also, (b) and (c) follow from the definition of rdf -to-inst and the fact that typing is a typing. Then it immediately follows from the definitions that inst-to-rdf (rdf -to-inst(G, typing)) = (G, typing). A.1 Proof of Lemma 1 Take a typed graph (G, typing) and ShEx schema S = (T , δ). For the ⇒ direction, we will prove by contrapositive. Assume that (G, typing) |= Σ S . Our goal is to prove (G, typing) is not correctly typed w.r.t. S. By definition of entailment, there is one dependency σ ∈ Σ S that is not satisfied. The dependency σ can be of the following forms: mult ≥1 (T s , p). By construction of Σ S , the dependency σ occurs when a triple constraint is of the form p :: T µ o where µ ∈ {1, +} and p some property. Since σ is not satisfied, T s ∈ typing(n) for some node n of G. Because the cardinalty of the set of triples with node n and propery p is 0, the definition of correctly typed in the typed graph (G, typing) w.r.t. S is violated. mult ≤1 (T s , p). By construction of Σ S , the dependency σ occurs when a triple constraint is of the form p :: T µ o where µ ∈ {1, ?}. Since σ is not satisfied, we have that (s, p, o 1 ) ∈ G and (s, p, o 2 ) ∈ G and T s ∈ typing(s), which violates the definition of correctly typed in the typed graph (G, typing) w.r.t. S. tc(T s , T o , p). By construction of Σ S , the dependency σ occurs when a triple constraint is of the form p :: T µ o where µ ∈ {1, ?, *, +}. Since σ is not satisfied, (s, p, o) ∈ G and T s ∈ typing(s). Because the node o ∈ G, it must hold T o ∈ typing(o). But this fact is not, then the typed graph (G, typing) w.r.t. S is not correctly typed. For the ⇐ direction, assume that (G, typing) |= Σ S . Our goal is to prove (G, typing) is correctly typed w.r.t. S. We will prove by contradiction. Suppose that (G, typing) is not correctly typed w.r.t. S. Then we have two cases when there is a node n ∈ G: B The chase Let E = (R, S, Σ st , F, F int ) be a data exchange setting with R = (R, Σ fd ) and S = (T , δ), and let I be an instance of R ∪ G T . For a tgd or std σ = ∀x.φ → ψ and a homomorphism h : φ → I, we say that σ is applicable to I with h if (1) either ψ is without existential quantifier and I ∪ F int , h |= ψ, or (2) ψ = ∃y.ψ and for all h extension of h on y, I ∪ F int , h |= ψ . Then applying σ to I with h yields the instance I defined as follows. In the case (1), I = h Fint (ψ). In the case (2), I = h Fint (ψ ) where h is an extension of h and for y ∈ y, h (y) is a fresh null value that depends on S. If δ(T ) contains a triple constraint p :: Lit µ , then h (y) ∈ NullLit \ dom(I). If δ(T ) contains p :: T µ for some T ∈ T , then h (y) ∈ Blank \ dom(I). For an egd σ = ∀x.φ → x = x , if there exists a homomorphism h : φ → I s.t. h(x) = h(x ), we say that σ is applicable to I with h and the result is (1) the instance I obtained by replacing h(x) by h(x ) (resp. h(x ) by h(x)) in all facts of I if h(x) (resp. h(x )) is a null value, and (2) the failure denoted ⊥ if both h(x) and h(x ) are non nulls. We write I σ,h --→U if σ is applicable to I with h yielding U , where U is either another instance or ⊥, and I σ,h --→U is called a chase step. Let Σ be a set of dependencies and I be an instance. 
A chase sequence of I with Σ is a finite or infinite sequence of chase steps I i σi,hi ---→I i+1 for i = 0, 1, . . ., with I 0 = I and σ i a dependency in Σ. The well-known result from [START_REF] Fagin | Data exchange: semantics and query answering[END_REF] still holds in our setting: if there exists a finite chase sequence then it constructs a universal solution. C Proofs of Theorems 1 and 2 Before proving the theorems, we define a mapping h F that will be used to define the notion of homomorphism from a formula into an instance. Let F be a function signature and F be an interpretation of F. For a term t over F and a mapping h : V → Dom, we define h F (t) as: h F (t) =      h(x) if t = x ∈ V a if t = a ∈ Dom f (h F (t )) if t = f (t ) is a function term. The mapping h F is extended on atoms and conjunctions of atoms as expected: h F (R(t)) = R(h F (t)) and h F ( i∈1..k R i (t i )) = i∈1..k h F (R i (t i )) . Note that if the argument of h F does not contain function terms, the interpretation F is irrelevant so we allow to omit the F superscript and write e.g. h(t) instead of h F (t). A homomorphism h : φ → M between the conjunction of atoms φ over signature W = R ∪ F and the model M = I ∪ F of W is a mapping from fvars(φ) to Dom s.t. for every atom R(t) in φ it holds that R(h F (t)) is a fact in I, where I, resp. F , is the restriction of M to R, resp. to F. Remark that if φ does not contain function terms, then F in the above definition is irrelevant and we write h : φ → I instead of h : φ → M and h(t) instead of h F (t). C.1 Proof of Theorem 1 Take a data exchange setting E = (R, S, Σ st , F, F int ) with S = (T , δ). Assume first that E is consistent, and let I be a valid instance of R and J be a solution for I by E. That is, I ∪ J |= Σ st ∪ Σ S . Let T s , T o , p and µ ∈ {1, ?} be such that p :: T µ o ∈ δ(T s ). Suppose by contradiction that, for i = 1, 2, σ i = ∀x.φ i (x i , y i ) ⇒ Triple(f (x i ), p, o i ) ∧ T s (f (x i )) ∧ T o (o i ) are two contentious stds in Σ st and they are not functionally overlapped that is I |= ∀x 1 , x 2 , y 1 , y 2 .φ 1 (x 1 , y 1 )∧φ 2 (x 2 , y 2 )∧x 1 = x 2 ⇒ o 1 = o 2 . That is, there is a homomorphism h : φ 1 ∧ φ 2 → I s.t. I, h |= φ 1 ∧ φ 2 but h Fint (o 1 ) = h Fint (o 2 ). Because J is a solution of E, we know that I ∪ J ∪ F int |= σ i for i = 1, 2 and deduce that J contains the facts (1) Triple(h Fint (f (x 1 )), p, h Fint (o 1 )), Triple(h Fint (f (x 1 )), p, h Fint (o 2 )) and T s (h Fint (f (x 1 ))). On the other hand, by definition mult ≤1 (T s , p) = ∀x, y, z. T s (x) ∧ Triple(x, p, y) ∧ Triple(x, p, z) ⇒ y = z is in Σ S and J |= mult ≤1 (T s , p). But mult ≤1 (T s , p) applies on the facts (1) with homomorphism h s. t. h (x) = h Fint (f (x 1 )), h (y) = h Fint (o 1 ) and h (z) = h Fint (o 2 ), therefore h Fint (o 1 ) = h Fint (o 2 ). Contradiction. Assume now that E is key-covered, and let I be a valid instance of R. We construct a solution for I by E. We first chase I with Σ st until no more rules are applicable, yielding an instance J. Because Σ st contains only stds (that is tgds on different source and target signatures), we know that J exists. We now show that no egd from Σ S is applicable to J. By contradiction, let mult ≤1 (T s , p) = ∀x, y 1 , y 2 . T s (x) ∧ Triple(x, p, y 1 ) ∧ Triple(x, p, y 2 ) ⇒ y 1 = y 2 be an egd that is applicable to J. That is, there is a homomorphism h : T s (x) ∧ Triple(x, p, y 1 ) ∧ Triple(x, p, y 2 ) → I s.t. 
Triple(h(x), p, h(y_1)), Triple(h(x), p, h(y_2)) and T_s(h(x)) are facts in J and h(y_1) ≠ h(y_2). By construction of J as the result of chasing I with Σ_st, and by the fact that Σ_st is fully-typed, it follows that there are two (not necessarily distinct) stds σ_i = ∀x, y_i. φ_i(x, y_i) ⇒ Triple(f(x), p, o_i) ∧ T_s(f(x)) ∧ T_o(o_i) and homomorphisms h_i : φ_i → I satisfying the following: (2) f^{F_int}(h_i(x)) = h(x); h_i(z_i) = h(y_i) if o_i = z_i is a variable; and g^{F_int}(h_i(z_i)) = h(y_i) if o_i = g(z_i) for some vector of variables z_i and function symbol g, for i = 1, 2. Then h_1 ∪ h_2 : φ_1 ∧ φ_2 → I is a homomorphism, and because E is key-covered we know that h_1(o_1) = h_2(o_2). This contradicts h(y_1) ≠ h(y_2), using (2) and the fact that the functions f^{F_int} and g^{F_int} are injective, and implies that no egd from Σ_S is applicable to J.

Finally, we add a set of facts J' to J so that J ∪ J' satisfies the tgds and the egds in Σ_S. Note that J might not satisfy Σ_S because some of the mult≥1(T_s, p) might not be satisfied. For each mult≥1(T_s, p) in Σ_S, let b_{T_s,p} ∈ Blank be a blank node distinct from the other such blank nodes, that is, b_{T_s,p} ≠ b_{T_o,p'} if T_s ≠ T_o or p ≠ p'. Now, let J_1 and J_2 be the sets of facts defined by:

J_1 = { T_1(b_{T_s,p}) | p :: T_1^µ ∈ δ(T_s) for µ ∈ {1, +} }
J_2 = { Triple(b_{T_s,p}, q, b_{T_1,q}) | T_1(b_{T_s,p}) ∈ J_1 and q :: T_2^µ ∈ δ(T_1) for µ ∈ {1, +} }

Intuitively, J_1 adds to the graph a node b_{T_s,p} whenever the property p is required by the type T_s in S. A property is required if it appears in a triple constraint with multiplicity 1 or +. Such a node has type T_1, as required by the corresponding triple constraint p :: T_1^µ in δ(T_s). Then, J_2 adds to the graph triples for the properties q that are required by the nodes added by J_1. Remark that J_1 ∪ J_2 is a correctly typed graph. We finally connect J_1 ∪ J_2 to J. Let

J_3 = { Triple(a, p, b_{T_s,p}) | T_s(a) ∈ J, there is no Triple(a, p, b') in J, and p :: T_o^µ ∈ δ(T_s) with µ ∈ {1, +} }

Then G = J ∪ J_1 ∪ J_2 ∪ J_3 satisfies the tgds in Σ_S. It remains to show that G also satisfies the egds in Σ_S. This is ensured by construction, as J satisfies the egds and J_2 and J_3 add a unique triple Triple(b, p, b') only for unsatisfied typing requirements T_s(b); that is, for every Triple(b, p, b') added by J_2 or J_3 there is no different Triple(b, p, b'') in J ∪ J_2 ∪ J_3. This concludes the proof of Theorem 1.

Fig. 1: Relational database (source). Fig. 2: Target RDF graph (solution).

tc(T, S, p) := T(x) ∧ Triple(x, p, y) ⇒ S(y)
mult≥1(T, p) := T(x) ⇒ ∃y. Triple(x, p, y)
mult≤1(T, p) := T(x) ∧ Triple(x, p, y) ∧ Triple(x, p, z) ⇒ y = z

Σ_S = {tc(T, S, p) | T ∈ T, p :: S^µ ∈ δ(T)} ∪ {mult≥1(T, p) | T ∈ T, p :: S^µ ∈ δ(T), µ ∈ {1, +}} ∪ {mult≤1(T, p) | T ∈ T, p :: S^µ ∈ δ(T), µ ∈ {1, ?}}

Lemma 1. For every ShEx schema S = (T, δ) and every T-typed RDF graph (G, typing), (G, typing) is correctly typed w.r.t. S iff (G, typing) |= Σ_S.

Bug(b, d, u) ⇒ Triple(bug2iri(b), :descr, d) ∧ TBug(bug2iri(b)) ∧ Lit(d)
Bug(b, d, u) ⇒ Triple(bug2iri(b), :rep, pers2iri(u)) ∧ TBug(bug2iri(b)) ∧ TUser(pers2iri(u))

Example 1:
TUser → { :name :: Lit^1, :email :: Lit^1, :phone :: Lit^? }
TBug → { :rep :: TUser^1, :descr :: Lit^1, :related :: TBug^*, :repro :: TEmp^? }
TEmp → { :name :: Lit^1, :prepare :: TTest^+ }
TTest → { :covers :: TBug^+ }
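Since the translation from δ to Σ_S is purely mechanical, it can be spelled out in a few lines; the sketch below does so for the shape definitions of Example 1. The dictionary encoding of δ and the rendering of dependencies as plain strings are our own illustrative choices.

```python
# Shape definitions of Example 1, encoded (our choice) as a dict from type name
# to a list of (property, object type, multiplicity) triple constraints.
delta = {
    "TUser": [(":name", "Lit", "1"), (":email", "Lit", "1"), (":phone", "Lit", "?")],
    "TBug":  [(":rep", "TUser", "1"), (":descr", "Lit", "1"),
              (":related", "TBug", "*"), (":repro", "TEmp", "?")],
    "TEmp":  [(":name", "Lit", "1"), (":prepare", "TTest", "+")],
    "TTest": [(":covers", "TBug", "+")],
}

def sigma_S(delta):
    """Generate the dependencies tc, mult>=1 and mult<=1 of Sigma_S as strings."""
    deps = []
    for T, constraints in delta.items():
        for p, S, mu in constraints:
            deps.append(f"tc({T},{S},{p}): {T}(x) & Triple(x,{p},y) -> {S}(y)")
            if mu in ("1", "+"):
                deps.append(f"mult>=1({T},{p}): {T}(x) -> exists y. Triple(x,{p},y)")
            if mu in ("1", "?"):
                deps.append(f"mult<=1({T},{p}): {T}(x) & Triple(x,{p},y) & Triple(x,{p},z) -> y = z")
    return deps

for dep in sigma_S(delta):
    print(dep)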
C.2 Proof of Theorem 2

Let E = (R, S, Σ_st, F, F_int) with S = (T, δ) and R = (R, Σ_fd) be a fully-typed data exchange setting. The proof goes by reduction to the problem of functional dependency propagation. We start by fixing some vocabulary and notions that are standard in databases. A view over a relational signature R is a set of queries over R. Recall that an n-ary query is a logical formula with n free variables. If V = {V_1, ..., V_n} is a view, we see V as a relational signature, where the arity of the symbol V_i is the arity of the query V_i, for 1 ≤ i ≤ n. Given a relational schema R = (R, Σ_fd), a view V, and an instance I of R, by V(I) we denote the result of applying the queries of V to I; the latter is an instance over the signature V. Now, the problem of functional dependency propagation FDPROP(R, V, Σ^V_fd) is defined as follows: given a relational schema R = (R, Σ_fd), a view V over R, and a set of functional dependencies Σ^V_fd over V, FDPROP(R, V, Σ^V_fd) holds iff for every valid instance I of R, V(I) |= Σ^V_fd. It is known from [START_REF] Klug | Determining View Dependencies Using Tableaux[END_REF] that the problem FDPROP(R, V, Σ^V_fd) is decidable.

We will construct a view V and a set Σ^V_fd of functional dependencies over V s.t. FDPROP(R, V, Σ^V_fd) holds iff E is key-covered. Let σ_1, σ_2 be two contentious stds from Σ_st that are functionally overlapping, as in the premise of the key-coveredness condition. That is, for some T_s, T_o, p, f and for i = 1, 2, we have σ_i = ∀x_i, y_i. φ_i(x_i, y_i) ⇒ Triple(f(x_i), p, o_i) ∧ T_s(f(x_i)) ∧ T_o(o_i). Recall that o_1 and o_2 are either both variables or both functional terms with the same function symbol. Let z, resp. z', be the vector of variables in o_1, resp. o_2; that is, if e.g. o_1 is a variable, then z is the vector of length one containing this variable, and if o_1 = g(z_1, ..., z_n) for some function symbol g, then z = z_1, ..., z_n. Remark that z ⊆ x_1 ∪ y_1, and similarly for z'. Now, for any such couple σ_1, σ_2 of two (not necessarily distinct) stds, we define the query V_{σ1,σ2} as the union of the two queries q_{σ1}(x_1, z) = ∃(y_1 − z). φ_1(x_1, y_1) and q_{σ2}(x_2, z') = ∃(y_2 − z'). φ_2(x_2, y_2), and the functional dependency fd_{σ1,σ2} over V_{σ1,σ2} stating that its first m attributes functionally determine its last n attributes, where for any two vectors of variables y and z, y − z designates the set of variables y \ z, m is the length of x_1 and x_2, and n is the length of z and z'.
Then let

V = { V_{σ1,σ2} | σ_1, σ_2 as in the premise of the condition for key-covered }    (5)
Σ^V_fd = { fd_{σ1,σ2} | σ_1, σ_2 as in the premise of the condition for key-covered }

The sequel is the proof that FDPROP(R, V, Σ^V_fd) holds iff E is key-covered, which by [START_REF] Klug | Determining View Dependencies Using Tableaux[END_REF] implies that key-coveredness is decidable.

For the ⇒ direction, suppose that FDPROP(R, V, Σ^V_fd) holds. Let I be a valid instance of R and let J = V(I). We show that, for any two contentious stds σ_1, σ_2 ∈ Σ_st that are functionally overlapping as in the premise of the condition for key-covered, it holds that I |= ∀x_1, x_2, y_1, y_2. φ_1(x_1, y_1) ∧ φ_2(x_2, y_2) ∧ x_1 = x_2 ⇒ o_1 = o_2. Let ν be a valuation of the variables x_1 ∪ y_1 ∪ y_2 s.t. I ∪ F_int, ν |= φ_1 ∧ φ_2 and ν(x_1) = ν(x_2). By definition of q_{σ1} and q_{σ2} and x_1 = x_2, it is easy to see that V_{σ1,σ2}(ν(x_1), ν(z)) and V_{σ1,σ2}(ν(x_2), ν(z')) are facts in J. Because J satisfies fd_{σ1,σ2}, we deduce that ν(z) = ν(z'), therefore ν^{F_int}(o_1) = ν^{F_int}(o_2), which concludes the proof of the ⇒ direction.

For the ⇐ direction, suppose that E is key-covered. Let I be a valid instance of R and let J = V(I). Let σ_i, for i = 1, 2, be two stds in Σ_st that satisfy the premise of the condition for key-covered. Let V_{σ1,σ2}(a, b) and V_{σ1,σ2}(a, b') be two facts in J. That is, by definition and because x_1 = x_2, there exist valuations ν of the variables y_1 − z and ν' of the variables y_2 − z' s.t. I, ν[x_1/a, z/b] |= φ_1(x_1, y_1) and I, ν'[x_2/a, z'/b'] |= φ_2(x_2, y_2). We now distinguish two cases, depending on whether the two facts were generated by the same query q_{σi} (for some i ∈ {1, 2}), or one was generated by q_{σ1} and the other by q_{σ2}; in both cases, key-coveredness yields b = b', so J satisfies fd_{σ1,σ2}. This concludes the proof of Theorem 2.
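The displayed definitions of V_{σ1,σ2} and fd_{σ1,σ2} are only paraphrased above, so the following sketch should be read as an illustration of that reading rather than as the paper's exact construction: each std is reduced to its body, its frontier variables x and the variables z of its object term, V_{σ1,σ2} is the union of the two corresponding projections, and fd_{σ1,σ2} lets the first m attributes determine the last n. The Std record and the Datalog-style strings are our own; the example reuses the Bug std of the running example (taking σ_1 = σ_2, which the definition allows).

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative encoding (ours, not the paper's): an std is kept as the text of its
# body, its frontier variables x (the arguments of the IRI constructor f), and the
# variables z occurring in its object term o.
@dataclass
class Std:
    body: str             # e.g. "Bug(b, d, u)"
    x: Tuple[str, ...]    # e.g. ("b",)
    z: Tuple[str, ...]    # e.g. ("d",) if o is the variable d, or the arguments of g(...)

def reduction(s1: Std, s2: Std):
    """Build the view query V_{s1,s2} (union of the two projections q_{s1}, q_{s2})
    and the functional dependency fd_{s1,s2}, under the reading given above."""
    m, n = len(s1.x), len(s1.z)
    q1 = f"q1({', '.join(s1.x + s1.z)}) :- {s1.body}"
    q2 = f"q2({', '.join(s2.x + s2.z)}) :- {s2.body}"
    view = f"V = q1 UNION q2   with   {q1}   and   {q2}"
    fd = f"fd: attributes 1..{m} -> attributes {m + 1}..{m + n}"
    return view, fd

# The Bug std of the running example, used for both sigma_1 and sigma_2.
view, fd = reduction(Std("Bug(b, d, u)", ("b",), ("d",)),
                     Std("Bug(b, d, u)", ("b",), ("d",)))
print(view)
print(fd)
```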
01775201
en
[ "qfin" ]
2024/03/05 22:32:18
2018
https://uca.hal.science/hal-01775201/file/Leger_Bosch_Farmland_Tenure_and_Transaction_Costs_WP_Territoires_1.pdf
Christine Léger-Bosch Document de travail de l'UMR Territoires n°1 Farmland Tenure and Transaction Costs: Public and Collectively Owned Land vs Conventional Coordination Mechanisms in France Keywords: Q15, D23, L3, Q1 JEL codes: Land tenure, Transaction costs, Farmland, Agriculture umr- Introduction For approximately twenty years, land access has been an issue for farmers in developed countries. Different factors have contributed to this phenomenon. Farms have often had to grow due to competitive constraints [START_REF] Eastwood | Chapter 65 Farm Size[END_REF], while urbanization has reduced available farmland (see [START_REF] Prokop | Overview on best practices for limiting soil sealing and mitigating its effects in EU-27[END_REF] for the EU case). At the same time, the potential for income related to land development has increased private owners' tendency to make unsecured tenancy arrangements [START_REF] Myyra | Land Improvements under Land Tenure Insecurity: The Case of pH and Phosphate in Finland[END_REF][START_REF] Ciaian | Institutional Factors Affecting Agricultural Land Markets[END_REF]. Land policies and public authorities have been progressively fitted with tools, not so much to mitigate these side effects upon farmland access but rather to curb urban sprawl. A variety of tools are now available for farmland preservation, including urban planning and zoning, economic incentives such as taxes, and market interventions by Rights Acquisitions 1 (RAs) [START_REF] Alterman | The challenge of farmland preservation: lessons from a six-nation comparison[END_REF][START_REF] Dissart | Protection des espaces agricoles et naturels: une analyse des outils américains et français[END_REF]. In France, as in the US, RAs have increased beyond the traditional conservatory logic linked to natural spaces and thus concern agricultural areas [START_REF] Dissart | Protection des espaces agricoles et naturels: une analyse des outils américains et français[END_REF]. In France in particular, what we will henceforth call Long-term and Full Rights Acquisitions (LFRAs) of farmland by public and collective legal persons is currently increasing. These initiatives allow farmers to access farmland through lease arrangements from owners involved in agricultural activity through political or ideological interests, i.e., whose economic preferences are based on the permanence of any farming use rather than on urbanization or on the establishment of a specific agricultural activity on the land in question. Do LFRAs succeed in preserving farmland? For the moment, scholars are focused on other types of RAs rather than LFRAs. RAs differ depending on their temporality and perimeter. First, they can concern either the whole rights of the bundle of property rights, i.e., full acquisitions, as in all types of RA realized in France by NGOs or local authorities 2 , or only a part of these rights, such as land preservation programs settled in the United States (e.g., Purchase of Development Rights, PDR). Second, long-term acquisitions appear when a public or collective legal person consider RAs to be under permanent protection. That is the case for 1 In reality, RAs sometimes rest on farmland already owned by one of the legal persons involved in the project, e.g., when the farmland mobilized had been purchased for another intended use or project. Even if there is not really an "acquisition" in legal terms, we conserve the term Rights Acquisitions because there is regardless a new appropriation. 
2 One exception in public urban project management is the possible use of the transfer of right to build. acquisitions by NGOs such as "Terre de Liens"3 in France or certain land trust4 acquisitions in North America. In contrast, short-term acquisitions constitute an intermediary step along a project or a public intervention in the market (e.g., SAFER5 action in France) but also include temporary easements such as PDR, which finally place them close to zoning. Studies that assess agricultural effects of RAs to preserve farmland focus on land trusts without analyzing the effects on the agricultural economy [START_REF] Parker | Land trusts and the choice to conserve land with full ownership or conservation easements[END_REF][START_REF] Dissart | Protection des espaces agricoles et naturels: une analyse des outils américains et français[END_REF]) and on PDR; PDR programs are neither full nor long-term RAs to preserve farmland [START_REF] Towe | An Empirical Examination of the Timing of Land Conversions in the Presence of Farmland Preservation Programs[END_REF][START_REF] Liu | Do Agricultural Land Preservation Programs Reduce Farmland Loss? Evidence from a Propensity Score Matching Estimator[END_REF][START_REF] Schilling | Measuring the effect of farmland preservation on farm profitability[END_REF][START_REF] Gottlieb | Are preserved farms actively engaged in agriculture and conservation?[END_REF]. Farmland preservation and land access are expected benefits of LFRAs. However, as [START_REF] Dissart | Protection des espaces agricoles et naturels: une analyse des outils américains et français[END_REF] notes, beyond the numerous preservation tools, the best way to preserve agricultural land may be to maintain the profitability of farms. Verifying this virtuous effect intuition from the perspective of farms is thus necessary. Indeed, there is agreement that land tenure and notably secure rights affect farm profitability, as they bring investment and access to credit, facilitate reallocation of production factors to maximize allocative efficiency in resource use, and allow for economic diversification and growth [START_REF] Deininger | Tenure security and land-related investment: Evidence from Ethiopia[END_REF][START_REF] Deininger | Land registration, governance, and development: Evidence and implications for policy[END_REF] 6 . Land insecurity exists despite the existence of transferable property titles. Indeed, access to land can largely rest upon leases, due to business agriculture and large farms. Certain arrangements are considered more secure than others. Their variability is due to the law that created different contracts and to various implementations by contractors. [START_REF] Myyra | Land Improvements under Land Tenure Insecurity: The Case of pH and Phosphate in Finland[END_REF] empirically verified this by showing that Finnish land tenure insecurity on leased land decreases land improvements with a long pay-back period. Moreover, operator access to land through personal ownership leads to better soil and enhanced productivity. These results confirm (here through opportunity costs) that the transaction having as an object access to land use contains variability in efficiency among different coordination mechanisms. Therefore, verifying virtuous effects of access to farmland from LFRAs requires comparing this mechanism to traditional ones, i.e., operator ownership and lease arrangements with an individual private owner. 
Transaction costs (TCs), including comparative planning, adapting, and monitoring costs of task completion incurred by agents in alternative governance structures, allow the exploration of this efficiency variability by comparison [START_REF] Coase | The Nature of the Firm[END_REF][START_REF] Williamson | The economic institutions of capitalism : firms, markets, relational contracting[END_REF]. By showing that TCs explain emerging (if costs are lower) and declining (if costs are higher) coordination mechanisms for a given transaction, TC economics yields evidence that TC negatively affects transaction efficiency. A few studies have explored the relative transactional efficiency of land use transactions resting upon lease arrangements through contract choice models. They show that TCs affect farmer choice between cash and share leasing7 [START_REF] Datta | Choice of Agricultural Tenancy in the Presence of Transaction Costs[END_REF][START_REF] Allen | Contract Choice in Modern Agriculture: Cash Rent versus Cropshare[END_REF][START_REF] Moss | A transaction cost economics and property rights theory approach to farmland lease preferences[END_REF][START_REF] Fukunaga | The role of risk and transaction costs in contract design: evidence from farmland lease contracts in US agriculture[END_REF] and between gray and regular lease contracts [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF]. In the specific case of farmland lease transactions, Murrel (1987) identifies certain contract properties that could generate TCs, and [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF] discuss determinants of TCs (uncertainty, frequency, asset specificity). Both discuss the owner's behavior as an influencing factor. Finally, [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF] proposed a comparative analysis grid of alternative forms of land tenure through both determinants and components of TCs. The objective was to evaluate the capacity to maintain a new land trust governance in the planning stage (CBLTs for Community-Based Land Trusts). However, this prospective exercise applied the grid to anecdotal data. Thus, a lack of direct identification and evaluation of real incurred costs remains. Moreover, except for [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF], scholars have focused on lease arrangements; the land purchase option for farmers is less studied despite its central role in agricultural economics [START_REF] Allen | A transaction cost primer on farm organization[END_REF]. Our study compares the relative transactional efficiency of access-to-land coordination mechanisms, including lease arrangements with owners involved in LFRAs, lease arrangements with individual owners and operator ownership, using original data. We postulate that a public or collective moral person interested in agriculture behaves differently as an owner from an individual private owner, changing the completion of the transaction. A first research step is to identify and characterize the different costs incurred by agricultural operators for access to land use. A second is to empirically evaluate them in order to objectify the comparison, through an original farmer survey in the French region of Auvergne-Rhône-Alpes. 
The remainder of this paper proceeds as follows: Section 1 describes the theoretical framework and the methodology for TC evaluation. Section 2 identifies the main channels by which each can generate transaction costs. Section 3 describes methods and data. The results are analyzed and discussed in Section 4. Analytical framework Transaction cost theory applied to land use transactions In transaction cost economics (TCE), a transaction is the transfer of rights to use goods and services between technologically separable units (Ménard 2004, p.21). Each transaction induces both production and transaction costs related to the economic organization within which it occurs and to the latter's ability to economize them (Williamson 1985, p.61). Production costs are "the costs of executing the contract," while transaction costs "consist of the costs of arranging a contract ex ante and monitoring and enforcing it ex post" 8 (Matthews 1986, p.906). These costs are also defined as "the comparative cost of planning, adapting, and monitoring task completion under alternative governance structures" (Williamson 1985, p.2). Given that transaction costs influence market effectiveness, coordination mechanisms that minimize such costs are gradually selected. Williamson characterizes transactions according to three attributes that are critical dimensions influencing the transaction cost level: (1) uncertainty, (2) the frequency with which transactions recur, and (3) the degree to which 8 "To a large extent transaction costs are costs of relations between people and people, and production costs are costs of relations between people and things, but that is a consequence of their nature rather than a definition (it would not do as a definition -for example, the cost of personal services are production costs, but they do not necessarily involve things)" [START_REF] Matthews | The economics of institutions and the sources of growth[END_REF]). durable, transaction-specific investments are required to realize the lowest supply costs (Williamson 1981, p. 555). [START_REF] Murrell | The Economics of Sharing: A Transactions Cost Analysis of Contractual Choice in Farming[END_REF] and [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF] applied TCE to farmland use access to characterize land transactions using the three TCE attributes. Farmers face physical uncertainty, first because of complex land use specifications and variable land quality and second, because of asymmetric information favoring the landlord or seller regarding soil quality (Murrell 1983, p.285). They also face behavioral uncertainty due to possibly opportunistic owner behavior in the context of contract incompleteness [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF]Polman and Slangen 2009, p.278-279). The lessor has authority and potentially promotes insecure land tenure [START_REF] Murrell | The Economics of Sharing: A Transactions Cost Analysis of Contractual Choice in Farming[END_REF]. Furthermore, "the tenant perception of security of tenure is crucial for efficient land use" (Murrell 1983, p.284). Therefore, trust and expectations concerning the reputation and trustworthiness of the land owner are directly linked with transaction costs [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF]. 
Transaction costs may also be driven by a relatively low frequency of transactions. Based on the time horizon of a farm, land use transactions are rarer 9 than purchases of materials, cattle feed or fertilizers (Polman and Slangen 2009, p.279). Finally, asset specificity is summarized by Murrell as "tenant immobility" (Murrell 1983, p.285), which generates an important site specificity. The farmer must find land close to the farm in the interest of profitability, while the owner encounters few potential buyers or tenants with farms close to his available land. This site specificity is linked with human asset specificity, as necessary knowledge might be different relative to other transactions on the market regarding climate, prime soil quality, water congestion, etc. Finally, specific investments such as irrigation or special materials represent a third dimension of asset specificity, as reported by [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF]. Our study aims to compare three coordination mechanisms of farmer access to land use: i) farming operator ownership, ii) lease arrangement from an individual owner, iii) lease arrangement resting upon an LFRA. Our position is to assess the relative transactional efficiencies only from the farmers' point of view. Indeed, our study aims to evaluate the influence of LFRAs on farm profitability, which owner exchange cost assessment would not highlight. We hypothesize that lease arrangements through LFRAs should more effectively 9 Even if a contract must be renewed in the case of leasing. minimize producer transaction costs by reducing behavioral and physical uncertainty on the part of farmers. In fact, positive intentions toward agriculture on the part of public or collective owners may be assumed to prevent the potential for opportunistic behavior. Some owner interests may be consistent with those of land users, including continued farming use and, accordingly, the profitability of the agricultural holding. These common interests might reduce information asymmetry and help farmers more fully understand the quality of their land. Furthermore, joint concerns and the (public) reputation of the owner may improve the likelihood of secure land access for the tenant. Assessing transaction costs and production costs relative to total costs Empirical studies that attempt comparative quantitative analysis of alternative governance structures according to the TCE project mostly rest upon a TC evaluation that can be qualified as indirect for two reasons. First, they assess TC determinants (uncertainty, frequency, asset specificity) and do not directly evaluate TCs and their components. Second, they use with this aim proxies of transaction attributes that affect these TC determinants [START_REF] Wang | Measuring transaction costs: an incomplete survey[END_REF]. This strategy permits a lack of empirical data and avoids difficulties posed by measurement of TCs [START_REF] Mccann | Transaction cost measurement for evaluating environmental policies[END_REF]. However, proxies used as explanatory variables can bring endogeneity and measure the underlying concepts with error. This problem is made particularly salient concerning TCE by the detail that theory requires [START_REF] Masten | Empirical research in transaction cost economics: challenges, progress, directions[END_REF]. 
Other studies develop empirical comparative analysis resting upon a direct quantitative assessment of TC and econometric regressions. These belong to different strands of the literature, focusing on integration decisions in organizations [START_REF] Masten | The Costs of Organization[END_REF] or on implementation of environmental public policies [START_REF] Kuperan | Measuring transaction costs in fisheries co-management[END_REF][START_REF] Falconer | Farm-level constraints on agri-environmental scheme participation: a transactional perspective[END_REF][START_REF] Mccann | Transaction Costs of Policies to Reduce Agricultural Phosphorous Pollution in the Minnesota River[END_REF][START_REF] Mccann | Transaction cost measurement for evaluating environmental policies[END_REF][START_REF] Mettepenningen | Measuring private transaction costs of European agri-environmental schemes[END_REF][START_REF] Widmark | Measuring transaction costs incurred by landowners in multiple land-use situations[END_REF][START_REF] Mccann | Farmer Transaction Costs of Participating in Federal Conservation Programs: Magnitudes and Determinants[END_REF]. However, how such approaches address different problems is yet to be clarified. The first question regards intertwining of production and transaction costs (p. 4;[START_REF] Royer | Transaction costs in milk marketing: a comparison between Canada and Great Britain[END_REF]. Indeed, for an economic organization, "the object is not to economize on transaction costs but to economize in both transaction and neoclassical production costs respects" (Williamson, 1985, p.61). The second question is how to treat the selection problem highlighted by [START_REF] Masten | The Costs of Organization[END_REF], given that most studies rest upon statistical inference through econometric regression: costs cannot be directly observed for organizational forms not chosen, even though these high costs precisely represent the reason why the transaction did not occur [START_REF] Benham | Measuring the costs of exchange[END_REF]. The way that [START_REF] Masten | The Costs of Organization[END_REF] prevent this selection bias 10 is relevant for firm integration decisions, where the transaction always occurs. Concerning costs of marketed transactions, or voluntary agreements, that do not necessarily occur, however, the question remains unanswered. [START_REF] Benham | Measuring the costs of exchange[END_REF] designed a complementary approach that does not depend on econometry to overcome these problems. First, the comparative analysis takes into account production and transaction costs in an undifferentiated manner as exchange costs, given their intertwining. For instance, Benham and Benham studied the cost of transferring ownership of an apartment, including taxes and lawyer fees. Second, production and transaction costs are compared relative to the total cost of the transaction. The resulting comparison of relative cost magnitude and structure, rather than cost amount, facilitates overcoming the econometric bias problem explained below. The resulting standardized methodology aims to estimate the sum of transaction and production costs, corresponding to a subset of the total cost of the transaction that they designated the cost of exchange (COE). 
"The cost of exchange C ijkm is defined as the opportunity cost in total resourcesmoney, time and goodsfor an individual with characteristics i to use a given form of exchange j to obtain a good k in an institutional setting m" (Benham and Benham, 2005, p.370). Given that comparisons based on relative production and transaction costs allow for the examination of the cost-effectiveness of a coordination mechanism, we choose this methodology to carry out our study. Not evaluating TCs through the attributes of the transaction affecting their determinants means evaluating them by assessing each of their components. When the transaction occurs, farmers may incur time and monetary costs at different steps of the transaction. Costs may arise ex ante during information gathering, contract making and implementation. Costs may also occur ex post during monitoring and enforcement. Activities resulting in exchange costs include 1) the search for information about price distribution as well as potential partners and 10 "Even though the costs associated with unchosen institutions cannot be observed for a particular transaction, the full structure of organization costs can be estimated if we know the selection process and if we can obtain data or proxies for the costs of organizational forms that are chosen" [START_REF] Masten | The Costs of Organization[END_REF]. relevant information about them, 2) negotiating and writing contracts, 3) monitoring partners, and 4) contract enforcement, as well as protection of property rights if necessary [START_REF] Eggertsson | The role of transaction costs and property rights in economic analysis[END_REF][START_REF] Furubotn | Institutions and economic theory: The contribution of the new institutional economics[END_REF]. Organizational forms of the land use exchange/transaction Agricultural producers commonly access land through two main different exchange mechanisms in developed countries. One mechanism involves the entire property rights bundle, including the use right, when the agricultural operator purchases and owns the land he or she farms. The other involves the lease of land through a tenancy arrangement [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF]. A third organizational form has emerged in France with lease arrangements resting upon LFRAs. These three organizational forms of land use exchange occur within identical institutional settings and market structures across the country. They are regulated by the same price controls, contract standards, public interest market interventions, and courts 11 . Table 1 presents some of the principal farm structure characteristics of agricultural holdings in France, in some neighboring European countries, and in the USA. These characteristics show that France has the lowest share of operator-owned land. In the following subsections, we describe each coordination mechanism; adopting the point of view of farmers, we identify the main potential channels of exchange costs and give figures regarding their importance to the sum total of land transactions in the French land use market. 
Access to land use as a portion of the full property rights bundle Access to land use as a portion of the property rights bundle (exchanged when one purchases land) is not a highly constraining organizational form for the user because, except for expropriations for public utilities, which are very rare, the farming operator obtains free access to the land use for an indefinite time. Thus, the land purchaser is exempted from a relationship with any other decision maker, such as the lessor in the case of a lease 11 Namely, the land tenure law, SAFER and the Farmland Leasehold Courts. arrangement (within legal limits, e.g., on environmental practices, as with any other user). This important incentive to purchase land is counterbalanced, however, by the constraint of freezing a non-negligible amount of capital per acre. This organizational form of access to land use represented almost one-quarter of French utilized agricultural land in 2010, and 37.5% if we consider that farming operator landownership includes land owned by associates involved in group holdings [START_REF] Courleux | Augmentation de la part des terres agricoles en location : échec ou réussite de la politique foncière ?[END_REF] 12 . In France, nearly 1.2% of utilized agricultural land is purchased each year (FNSAFER/Agreste 2016). This method of land acquisition does not concern the majority of the market given that the bulk of landownership is passed down through inheritances. The work of [START_REF] Courleux | Augmentation de la part des terres agricoles en location : échec ou réussite de la politique foncière ?[END_REF] offers some precision to the 2000-2007 data. Nearly 41% of farm operators who purchase farmland acquire land that they previously leased. They tend to use this organizational form of access to land not by choice but because they are constrained. Actually letting this land be sold to another purchaser means i) losing the use of land that they are not sure to recover and ii) losing the benefits of work habits and eventual investments in the land. Another considerable portion of farming operators' purchases (18%) have SAFER as the seller. That is, most farming operator land purchases occur in a legal context favorable to the farmer, whether through tenant priority rights or through the SAFER regulation frame. Land purchases in which the purchaser is neither the former tenant, related to the seller, nor favored by SAFER arbitration represent less than one-third of total purchases. Access to land use through a lease arrangement with an individual owner The other major organizational form of exchanging land use rights is a conventional lease arrangement from an individual owner. In its classic version, this transaction involves a definite lease period, a tenant, who is the producer, and a lessor, who is a natural person or a strictly private legal entity. In France, most of these arrangements are cash leases as opposed to share leases (only 1.5% of total leased utilized agricultural area (UAA) see Table 1). The land use rights are exchanged against a monetary rent. Nearly 61.7% of the UAA was farmed under cash leases in France in 2010 13 . This type of arrangement involves 69% of total farm holdings, but 87% of middle and large farm holdings, which cover 93% of the French UAA. 
On average, each of these farms contracted lease holdings with twelve different farmland lessors in 2010 [START_REF] Fnprr | Les chiffres de la propriété privée rurale[END_REF][START_REF] Agreste | Les principaux chiffres du recensement agricole[END_REF]). This coordination mechanism, which is not a one-time arrangement as in the case of purchase described above, is framed in French law by a highly regulated agricultural lease status. Legal protection of farmland use rights counterbalances the weight of property rights, written in the constitution as a primary human right. Thus, a lease contract lasts for at least nine years. The only possibility for termination is the owner's right to recover the land use rights, which is possible only after six years if the owner (or descendant) is a farming operator. As we have seen above, the tenant has a pre-emption right in case of sale to retain the land use rights. Accordingly, the leasehold is not broken in case of sale but must be completed with the new owner. Finally, the rent is bounded by prefect decree. All these specific legal obligations are valid by default, even without a written contract (oral contract) if farming use can be tangibly demonstrated [START_REF] Melot | Droits de propriété et d'usage sur la terre. Une étude statistique des recours contentieux en matière de fermage[END_REF]) and if no lawful annual leasehold has been contracted. As a result, according to law, a farming operator cannot easily be deprived of the usage rights. This law theoretically secures the land investment. However, difficulties occur when owner preferences change in anticipation of urban land conversion. In this situation, an increasing number of owners attempt to escape the legal status of the lease (Jarrige, Jouve, and Napoleone 2003; Geniaux and Napoléone 2005) using explicitly precarious lease contracts (with the annual nature of the contract established by convention) or using legal loopholes in land use leases. Finally, certain owners simply avoid leasing land [START_REF] Ciaian | Institutional Factors Affecting Agricultural Land Markets[END_REF]) despite the law on uncultivated land, which requires farmland owners to undertake real farming land use either through their own activity or that of a tenant. Access to land use via a lease arrangement through LFRAs Access to land use via a lease arrangement through LFRAs falls under the status of an agricultural lease, as with all lease arrangements in France. This coordination mechanism is of interest given that the main difference from a classic lease is the nature of the owner. We have seen above the influence of owner preferences on the conditions for land use rights exchanges. These specific (public or collective) owners choose to hold the property to preserve long-term agricultural use. They use ownership as a means of collective action. Their incentives and behaviors are consequently very different from those of individual private owners, whose strategies may be based on contrasting motives, such as preservation of a heritage-related family identity, speculation for land conversion, absentee ownership, etc. First, public/collective owners hold these properties for the long term, which is important for continuity of the land exchange relationship. Second, they consider the economic aspects of agriculture, given that farming is the vocation of the owner role that they assume. In general, these projects require a long implementation period (technical information, legal procedures). 
Therefore, the process of selecting a farming operator may range from a personal relationship to a call for proposals, which may require a learning process. The contract linking the farmer to the owner is a somewhat formal partnership, in which the land lease contract is only a part. This contract can include specific prescriptions concerning farming products and environmental practices. This organizational format is unusual and affects a small portion of the UAA that we could not evaluate given the phenomenon's recent development. Without aiming at an exhaustive inventory, we counted 258 hectares of land FRA in the French Auvergne-Rhône-Alpes Region in 2011. This lever for farmland preservation and development receives an increasing interest from local stakeholders. Whether from the local authorities searching for concrete projects to implement their policy, or from NGOs acting on the market to implement their citizen expectations, this observation suggests a future increase in the phenomenon. First, public authorities are confronted with decreasing public means and try to find a solution to avoid strict RAs that are expensive. Thus, we observed above that LFRAs sometimes mobilize already existing farmland reserves formerly constituted for future urban projects that have been abandoned (e.g., roads, housing programs). Many cities have created these reserves in the past by overestimating future urban development and, as a consequence, land requirements. These public land reserves thus represent a non-negligible portion of farmland in certain regions. Second, the lever that NGO LFRAs use, crowdfunding, currently relays the voice of the citizenship in France. Appendix A provides other background on these initiatives with descriptions of the six studied cases. Methodology and data collection An analysis grid of costs in land use exchange mechanisms We identify and analyze access to land exchange costs, i.e., transaction and production costs, through their characterization among ex ante and ex post costs, and their translation in concrete terms for the three compared coordination mechanisms. Table 2 presents this analysis grid. In transactions exchanging access to land use, ex ante costs include three cost types: information costs, negotiation costs, and implementation costs. Farmers (lessor or tenant) may incur information costs when gathering information on land markets, potential sellers/lessors and their intent, potential rivals, parcel features, and prices, selling and leasing conditions, and finally when encountering sellers or lessors. Negotiation costs are related to negotiating with the lessor or the seller regarding purchase price or rent, allowable farming uses, and contract duration and break conditions. This process includes eventual selection processes that one partner demands (e.g., applicant's file), negotiations between partners concerning price and other contractual terms, contract redaction, administrative contract registrations, expert services such as a negotiation mediator, and eventual registration fees (e.g., notary fees). Implementation costs result from additional effort made by farmers to access land use. For example, it might be impossible to farm the secured land if it has not been used in years. A reconditioning of land, for example, by vegetation clearing, thus becomes necessary and incurs costs. Ex post costs in access to land use transactions consist of two types of costs: monitoring costs and enforcement costs. 
Monitoring costs refer to cases when farmers must watch for owner compliance with the contract terms. For example, farmers particularly must pay attention in the case of a lease to anticipate an owner's eventual contract break strategy when facing an urban real estate opportunity. Farmers may also incur enforcement costs related to renegotiations and conflicts during the contract as well as contract termination costs, such as when the owner fails to meet obligations and the farmer is subjected to costly damages. In the example of an early break of a lease contract, a farmer may have to spend additional time and money to access other land and to obtain compensation for production in progress on leased land. Survey and data We carried out our empirical analysis in the French Auvergne-Rhône-Alpes Region in 2012 and 2013. We identified fifteen LFRAs in progress, and six of them had led to effective lease arrangements (see their locations in Appendix B). Those six initiatives occur in specific areas with varying characteristics in terms of agricultural production and urban pressure, etc. LFRAs have different characteristics for different criteria. In addition, they involved different stakeholder types (local authorities, agricultural professional organizations, associations, SAFER, etc.). They may include one farmer, a few farmers or more than fifteen farmers. The initiatives may lead to the extension of existing farms or to building new farms. They have different origins of funds, their implementations may be quick or may take a long time, and the duration of access to land use is variable (Appendix A). As a first step, we identify transaction costs and production costs from personal observations and fifty semi-structured interviews with stakeholders involved in LFRAs, farmers, and private owners. A second stage consisted of data collection concerning the resources used during the transactional process determined above. As proposed by [START_REF] Benham | Measuring the costs of exchange[END_REF], we surveyed farmers directly involved in the considered exchanges. The six studied LFRAs totaled 25 lease arrangements with farmers. As parties in the transaction, agricultural operators behave according to incentives that depend on their farmholding's characteristics. Moreover, one can imagine that a specificity of farms involved in LFRAs exists in comparison to other farms. Constructing comparable samples of conventional lease arrangements and purchases thus requires a relative homogeneity regarding the characteristics of the farms in question. We used the quota method [START_REF] Denscombe | The good research guide: for small-scale social research projects[END_REF]) to obtain subsamples of transactions with a similar distribution of farm characteristics in the six areas. Selected farmholdings have substantially the same socioeconomic characteristics and are situated in the same or neighboring communes as farmers involved in lease arrangements through the six studied LFRAs. Those sampling constraints result in certain agricultural operators being interviewed about more than one coordination mechanism. Table 3 shows how labor force, market gardening share, breeding share, and farmer age present a degree of homogeneity for farms concerned in the three subsamples of transactions (see Table 3). We thus undertook a survey of 50 farmers, enabling us to analyze 74 transactions, including 21 land purchases, 28 lease arrangements to a private owner and 25 lease arrangements through LFRAs. 
All studied transactions occurred in major urban centers or on their fringes (see Appendix B). The survey was conducted through a questionnaire designed to determine the costs incurred by farmers (Appendix C). For a transaction price, the farmers provided the monetary amount. federation established a method to assess financial costs and profits of land transactions, whether by leasing or purchasing, to resolve farmer buy-or-lease decisions [START_REF] Johnson | Analysis of the Lease-Or-Buy Decision[END_REF] for the case of farmland. This calculation considered the loan duration necessary if the farmer were to purchase the land asset and the loan rate, the interest rate and the current inflation rate. We thus obtained a financial cost in euros per hectare and added it to exchange costs to obtain the total cost per hectare. Finally, we estimated the financial benefits of all transaction types using the same method (De Sousa 2008). Appendix D displays the principles of calculation for exchange costs, financial costs and financial benefits. Appendix E provides a statistical summary of the variables built and analyzed. Results Measurement of exchange costs Table 4 shows the ex-ante costs, which include information costs, negotiation costs and implementation costs calculated from our survey. Empty boxes indicate that no costs were associated with the specified transaction element. Information costs The information step of a purchase transaction costs €5.60 per hectare. The effort is shared between gathering passive information, for instance, in SAFER resale announcements or real estate auctions, as well as information obtained from a third person by word of mouth. Concerning a conventional lease arrangement from an individual, information costs incurred by the tenant represent €16.96 per hectare. These costs are much less due to passive information but are rather due to direct interactions with the owner (€5.48/ha) or a third person (€9.49/ha). A third person is sometimes sent by the owner as a messenger, for example, to ask the former tenant who ceased activity to propose the tenancy to another trusted farmer. The information step toward access to land use through a lease arrangement through LFRAs costs a farmer €28.79 per hectare. In this case, word of mouth or third-person information is not very important, as costs are higher due to gathering information from local newspapers, local authority websites and the various media used by associations and citizen networks (€3.28/ha). However, most costs are related to gathering information from the public or collective owner. These exchanges may occur in a collective meeting when the project concerns several farmers. The information gathering creates costs for several reasons: the complexity of the land support setup and the amount of information to be transmitted; the number of parties, given the multi-stakeholder nature of the initiative; and the fact that the farmer applicant is often solicited upstream from the farmland provision and from the entire process of collective action. The expectations and complexity are costly. Negotiation costs The negotiation step in access to land use as a part of the entire bundle of property rights, i.e., by purchasing land, is very costly for the farmer from this point of view (Table 4). Notary fees represent €2,381.74 of a total of €2,445.10 per hectare. The remainder of the negotiation costs (€63.36/ha) are principally due to individual negotiating with the owner, which itself represents €43.78 per hectare. 
As seen above, the seller is often a SAFER. In this case, the farmer will have to submit an application to be selected from among all applicants by a professional committee. Finally, given the importance of the transaction price, the parties will be more likely to solicit expert services (€13.21/ha), notably lawyers or real estate experts, for input on such issues. The costs incurred by farmers seeking access to land use through a conventional lease arrangement from an individual are only €34.08 per hectare. These costs are mainly due to individual negotiations with the owner (€21.67/ha). Lease contracts are often oral, and the money involved is reduced due to the weakness of rents and because the contractual terms are greatly dictated by law. Therefore, negotiations are brief. Expert services are required much less frequently (€2.11/ha). Nevertheless, non-negligible costs are incurred during the lease contract registration process, either with organizations directing land structure control policies (CDOA), from which tenants theoretically16 must request a farming use authorization (€8.49/ha), or with an agricultural social security mutual fund (€1.69/ha). Farmers accessing land use through LFRAs incur costs of €211.95 per hectare during the negotiation step or €172.55 per hectare if notary fees (€39.40/ha) are excluded. These notary fees are on average substantial, even if they may be linked with the motivation of collective or public owners to secure tenant use rights and to respect the law. One also finds this paradoxical negative effect of owner support of the tenant in the case of the selection process of applicant farmers. Indeed, their number often exceeds the availability of public/collectively owned farmland. To place applicants into a fair competition, LFRA leaders build a long and complex selection process based on applicant files and auditions. This process involves numerous stakeholders in a collegial final decision and seeks to evaluate candidates on agricultural technical and economic grounds, which remains difficult. This process is costly for farmers, who incur €44.93 per hectare (in addition to €4.49 on average because a SAFER is often involved in the selection process). The negotiation occurring after selection may be collective and is also costly (€71.81/ha). First, farmland sometimes has to be shared between the selected farmers, which induces disagreements. Second, tenant demands may be discussed and debated collectively, for instance on contractual terms or concerning farmland collective equipment (e.g., irrigation, buildings). Finally, individual negotiation with the owner is also costly (€51.32/ha). One reason for this cost is the complexity of these contracts, which are more than a simple agreement about access to land use against a rent. These contracts often include additional contractual terms such as use specifications (e.g., organic farming, marketing in short and local food chains, specific environmental practices, etc.). Another reason is the long duration of project setup, which as with information costs, contributes to increased costs by lengthening the time needed for each step. Implementation costs Implementation costs due to reconditioning land are shown in Table 4. In purchase transactions, interviewed farmers incurred high implementation costs (€532.4/ha). These costs are moderate in the case of conventional lease arrangements, with an average of €45.31 per hectare, and with LFRA lease arrangements, where they represent €33.90 per hectare. 
Several different elements illuminate these results. First, not all of the land sold is free of use rights. A major portion of them are under lease arrangement. In case of sale, the tenant has priority as a buyer. Consequently, few farmers venture to purchase occupied farmland. The lease-free lands that are being sold have therefore exited the farming use market for different reasons (e.g., owners taking back land but with no real farming use, farmland awaiting urban conversion, etc.). The older this exit, the more the land requires reconditioning work. One must also note the difference between conventional lease arrangements and those through LFRAs. Lands delivered by the LFRA can be in better condition because of the good maintenance of the owner who is interested in agricultural use. Another reason might be that the farmers receive in-kind assistance for this work, which reduces their costs. Comparative analysis As suggested by [START_REF] Benham | Measuring the costs of exchange[END_REF], comparative analysis of estimated exchange costs is possible when keeping in mind that non-realized transactions, with very likely high exchange costs, cannot be studied. One may compare the structure of exchange costs and the way they are counterbalanced or not by other costs and benefits. That is, the total exchange costs may be compared to total costs, i.e., the sum of exchange costs and the transaction price, or to the resulting gains. One may also compare the share of different transaction cost components across coordination mechanisms [START_REF] Royer | Transaction costs in milk marketing: a comparison between Canada and Great Britain[END_REF]. In this section, we present the results of these two comparison methods in Table 5. Do LFRAs facilitate access to land use for farmers? We first present the results for the two other coordination mechanisms being examined for comparison purposes. Exchange costs represent a major portion of total costs of purchase transactions (70%). The purchase amount to be delivered at one time for unlimited use is ultimately not the most prohibitive cost of the transaction. The exchange costs fully constrain the transaction result of access to land use, amounting to only 69€ per hectare, whereas the two other coordination mechanisms result in more than 400€ per hectare. By comparison, accessing land use through a conventional lease arrangement with an individual induces far fewer exchange costs for the farmer. When broken down from total costs, exchange costs represent only 7%, ten times less than in purchase transactions. Exchange costs do not substantially affect the financial benefits (2,805€), which remain solid despite the total costs (1,478€). Finally, the share of total costs that exchange costs represent in the case of farmers leasing land through LFRAs are intermediate to the two situations described above. Exchange costs (13%) represent almost twice those of conventional lease arrangements, which shows how costly these collective processes are for applicant farmers, mostly because of negotiation (10%). Although the lessor's intentions converge in part with the farmer's economic interests, these transactions are costly. This can prove to be prohibitive for farmers seeking access to new farmland. Moreover, the financial costs (1,922€ per ha), i.e., the rent, are on average higher than in the case of conventional leasing (1,382€ per ha). 
As a result, compared to a conventional lease, access to land use is made more costly not only indirectly through exchange costs but also directly via rent. This non-intuitive result should be kept in perspective, however, since exchange costs remain compensated for by the financial benefits, which allow for substantial transaction results (€487) even though they are less than half those of conventional leasing arrangements (€1,327). These benefits make the transaction attractive, at least compared to a purchase. What transaction cost components underlie these results? These differences in exchange costs across the three coordination mechanisms may be understood by looking at cost components. Broken down in accordance with total costs and financial benefits, information costs remain reasonable, fluctuating from zero to one point. Word of mouth, watching the local press and web searches are not very costly compared to the overall costs and gains from accessing farmland use. Even encounters with owners resulting from LFRAs turn out to be relatively simple. Negotiation costs are a far greater determinant, at least in purchase transactions and leases through LFRAs. They include notary fees and costs of negotiating with other contractors. Notary fees dramatically increase exchange costs in the case of purchase transactions, as seen above. These fees represent 56% of the total costs of the land transaction. In France, notary fees include important state taxes, amounting to nearly 38% of the fees, for instance, in a land sale for €10,000 17 . However, 62% remains dedicated to fees for registration work provided by notaries. Therefore, accessing land use as a part of the entire bundle of property rights is very costly for farmers exactly because the exchange concerns not only use rights but also and mainly alienation rights. Indeed, this alienation rights exchange requires registrations that are not necessary for exchanges concerning only use rights. We have shown above how adhering to land structure policy control and the mutuality social fund weighs on negotiation rights in conventional lease arrangements but in a way that cannot be compared. Negotiating with other contractors also dramatically explains the important difference in the magnitude of exchange costs across the two coordination mechanisms of access to land use through lease arrangements. As seen above, these costs are those that make lease arrangements through LFRAs costlier to access for farmers than those on privately owned land. The reason is the longer setup process due to the often many involved stakeholders and farmers as well as the selection process from among applicant files. Finally, the implementation costs, such as the land reconditioning costs, including vegetation clearing, are more important for purchase transactions (13% of total costs) and for conventional lease arrangements (3% of total costs) than for land lease arrangements through LFRAs (2% of total costs). Discussion Our results confirm some of the points made by [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF] about CBLTs and contradict others. Gray's results do not allow for classifying the costs between cash lease and CBLTs. Gray advances only the hypothesis that CBLTs, as new institutions requiring legal work to establish, will induce large costs at least in the first versions. In some ways, our results thus confirm this contention. 
However, in terms of the ex-ante exchange costs that Gray calls "cost of negotiating a contract", our results belie his assumption that the purchase transaction cost is 17 Source: French Superior Notaries Council, 2017. null in contrast to lease-based transactions (cash lease and CBLT). According to Gray, the only criterion that could make purchase transactions costly is the case where the owneroperator borrows the capital. Our results strongly contradict this assertion, which suggests that this distinction is proving to be minor. Moreover, we took into account borrowing costs through financial cost calculations. Thus, we have demonstrated how analyzing TCs directly by identifying their components rather than discussing them indirectly via their determinants permits a more understanding of land use arrangements by exploring contract characteristics that induce exchange costs. Nevertheless, the latter approach allows for assessing even indirectly ex post costs beyond characterizing them as done by [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF]. Indeed, although our second methodological choice of measuring exchange costs rather than qualitatively discussing them has permitted the comparison of tangible figures, it has led us to a problem of availability of data. Consequently, one possible shortcoming of this study is that we assume a comparison of these three coordination mechanisms based on exchange costs incurred "until access". Therefore, the results could be misleading since the discriminant alignment hypothesis suggests that an apprenticeship effect exists. Schematically, a coordination mechanism is excluded if one of its previous transactions shows higher costs than another mechanism. In that case, the entire transaction matters. In that view, further work is thus required over several years. Some of the lease arrangements through LFRAs would have ended so that an ex post evaluation of monitoring costs (supervision of contractual terms execution by the lessor) and enforcement costs (renegotiation, conflicts and contract termination with the lessor) would be possible. We could thus test on LFRAs the strong assumption that [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF] has issued about the ex post cost of CBLTs, according to which community control ensures that the monitoring costs are low. Moreover, ex post costs show a strong disparity that is difficult to analyze with transaction cost measurements. As Royer (2011) noted for milk marketing contracts, contract litigation, which may generate very high ex post exchange costs, involves only a few farmers. An evaluation of farmer ex post costs in accessing land would allow two questions to be answered. First, do these ex post costs compensate the relative superiority of ex ante costs incurred by farmers in leasing through LFRAs in comparison to conventional leasing? Some hypotheses already exist on this subject. 
Indeed, among other things, TCs are determined by uncertainty, and as noted by [START_REF] Murrell | The Economics of Sharing: A Transactions Cost Analysis of Contractual Choice in Farming[END_REF], the "tenant's perception of security of tenure is crucial for efficient land use", for example by encouraging him or her "to invest in the optimal stock of machinery required to operate the land" [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF]. LFRAs may place farmers in a less uncertain context than conventional leasing. [START_REF] Polman | An Institutional Economics Analysis of Land Use Contracting: The Case of the Netherlands[END_REF] found that lease arrangement contracts where public organizations are involved are more complete and expose farmers to less opportunism. [START_REF] Gray | Transactions costs and new institutions: will CBLTs have a role in the Saskatchewan land market?[END_REF] predicts that CBLTs, by giving long-term perspectives to tenants with lifetime leases, increase their security of tenure. That statement is consistent with the survey data we gathered. Indeed, during the interviews, we assessed how farmers perceived their likelihood of continued access to land use over the short and medium term. It was apparent that the evaluated confidence was almost as high for lease arrangements through LFRAs as for purchase, while conventional lease arrangements showed far lower results. We could thus hypothesize that ex post exchange costs are higher in conventional lease arrangements than in LFRAs, which would better explain the interest of farmers in accessing land by lease arrangements through LFRAs. Second, do these ex post costs partially explain farmer preferences towards purchase? Indeed, such an evaluation would surely result in more or less null values for ex post costs for farmers who accessed land through purchase, as Gray hypothesizes regarding CBLTs (1994), and in non-null values for leasing through LFRAs and conventional leasing. If the latter values are dramatically high, that would counterbalance the very high negotiation costs revealed for purchase transactions. Obviously, other incentives linked with the abusus right may also lead to purchase transactions, such as the motivation to invest or changes in land use, identity, culture, or patrimonial interests. However, the ex-ante costs incurred during this first transaction step and that we measure in this study are those that may reveal prohibitive for transactions that did not occur [START_REF] Masten | The Costs of Organization[END_REF]. In that case, there is no possible apprenticeship, and the land market may remain inaccessible for certain contractors, for example those who do not have family connections with farming, which disadvantages them [START_REF] Ingram | Matching new entrants and retiring farmers through farm joint ventures: Insights from the Fresh Start Initiative in Cornwall, UK[END_REF]. They may encounter more difficulties than others in obtaining information, meeting owners and gaining their confidence. As access-to-farmland demand currently experiences an increasing of such a profile among applicants, it would be interesting to further study the effect of this familial relationship character on the level of TCs incurred. Finally, [START_REF] Benham | Measuring the costs of exchange[END_REF] methodology used in our study requires measurement of the total costs of the concerned transaction to allow a relative comparison. 
In the case of transactions exchanging access to land, the total cost includes the price of the transaction. Among previous studies measuring farmer transaction costs, [START_REF] Royer | Transaction costs in milk marketing: a comparison between Canada and Great Britain[END_REF] does not state the transaction price in the case of milk marketing, probably because of its negligible value. The transaction price is also null for farmers engaged in voluntary agreements in the context of environmental conservation policy [START_REF] Falconer | Farm-level constraints on agri-environmental scheme participation: a transactional perspective[END_REF]Rorstad, Vatn, and Kvakkestad 2007;[START_REF] Mettepenningen | Measuring private transaction costs of European agri-environmental schemes[END_REF]. In that context, farmers instead receive compensation payments. In our study, we evaluated a transaction price comparable between lease arrangements and purchase transactions by applying the buy-or-lease problem (Johnson and Lewellen, 1972) to access to farmland. To this end, we used a methodology designed by the French Farm Management Federation, whose vocation is to advise farmers on their management decisions [START_REF] Sousa | Est-il préférable d'acheter plutôt que de louer la terre ? In Info Agricole[END_REF].
Conclusion
Based on data from a survey of farmers within a French region, this study shows that, relative to total costs, leasing through LFRAs carries fewer ex ante exchange costs than purchasing land but higher ex ante exchange costs than leasing from an individual owner. This difference is due to negotiation costs, which are nearly twice as high as in conventional lease arrangements. The fact that land reconditioning costs are lower for land accessed through LFRAs than for conventionally leased land is not sufficient to counterbalance the higher negotiation costs of the former. Moreover, the higher exchange costs compared to conventional leasing are slightly accentuated by other costs, specifically the rent. These results must be interpreted with caution, given that they relate only to ex ante costs incurred by farmers until effective land use begins. LFRAs aim to provide secure access to land use for farmers and, notably, for new entrants with agricultural projects. Nevertheless, this study shows that these initiatives impose important exchange costs and unexpected delays on farmers, which may result in economic difficulties, especially for incipient farm holdings. LFRAs would benefit from simplifying and shortening farmer involvement in the process. These initiatives would then better reach their own goal of maintaining and developing farming. However, the ability of these initiatives to facilitate land access for farmers could also be examined more widely. Beyond the costs, the advantages could be considered. To a certain degree, these initiatives may match the willingness of some farmers to pay to engage in processes about which they are personally sensitive (organic farming, local farming). In addition, these initiatives might help overcome locked-in situations in which farmers are unable to access land. An example is the great difficulty faced by farmers who have no family connection with farming, which is the most common way to find land and is often an indispensable prerequisite.
For exchange costs, they quantitatively assessed transaction costs in terms of time, money and kilometers for the different transaction stages, represented by information costs, negotiation costs and implementation costs. Ex post costs, namely enforcement and monitoring costs, were not estimated since some transactions had not yet ended, given the recent emergence of the studied lease arrangements through LFRAs. Finally, they provided quantitative information regarding registration costs, such as notary fees, which were included in negotiation costs, and reconditioning costs, which were part of implementation costs. All farmer exchange costs collected in kilometers or hours were translated into monetary values according to various standards, such as the official kilometer index (14) or the average revenue per hour (15). Then, we calculated the total costs of each transaction type by adding exchange costs and financial costs. Financial costs were calculated in a manner consistent with the example of the French Farm Management Federation (De Sousa 2008).

(14) Tax authority price scale for a 5-horsepower vehicle in 2012, 0.536€ per km.
(15) Average net revenue in the Auvergne-Rhône-Alpes Region in 2012, 13.8€ per hour (source: INSEE).

1. What is your UAA? (in ha)
2. How large is your labor force? (in units of human labor)
3. What are your crops? Please indicate your crop rotation, in hectares: annual crops for sale; vineyard; forage annual crops (corn…); orchards; grass (permanent and temporary); market gardening; other, specify.
4. Do you breed livestock? Please indicate your herd, in number of reproductive females: bovine dairy cattle; bovine meat cattle; meat sheep; dairy goats; other, specify.
5. Can you, for each of your farmland parcels, describe its surface, the owner and the conditions of access to the land? For each parcel: area (in ha); owner type (yourselves; member of your immediate family; member of your extended family; owner from and living on this territory; owner living in the territory; a public and collective carrier); agreement (notary contract; private writing; oral); rent paid (yes/no).
6. How many cadastral parcels do you farm?
7. Gender and age of the farm-holding operator and eventual associates (gender and birth date of the farming operator and of each partner).
8. When did you meet the parcel owner for the first time? (month/year)
9. When did you definitely know you would have this plot? (month/year)
10. Area of the land plots exchanged (surface in ha of each lot).
11. What is the amount of the annual rent / what was the purchase amount (in €)?
12. How did you know that this parcel would be available? The following items aim to measure your personal engagement and the costs incurred in order to obtain information on the parcel to purchase / to lease from an individual owner / to lease through an LFRA. For each item, indicate the mobilized total time (hours), the travel involved (location of eventual meetings, registrations…) translated into km and hours, and the purchase of newspapers or other fees (€): dissemination of your land search; discussion with or visit to third party/ies with knowledge of the land plot availability; information obtained by chance during a conversation with third party/ies; reading of advertisements in the local press; information or call for applications from TdL, municipalities, land structure control policies (CDOA/DAPE), SAFER, etc.; participation in meetings organizing LFRAs; spotting of and visit to the land plot; getting information about the soil vocation in official urban planning documents; meeting with a stakeholder of the LFRA; visit to the owner; visit to the current user (farmer); proposal of the current user; proposal of the owner; other, specify.
13. How did negotiations take place with the owner / the stakeholders involved in the LFRA? The following items aim to measure your personal engagement and the costs incurred during the negotiation of the transaction. For each item, indicate the mobilized total time (hours, days), the location of meetings and registrations, and other specific costs (notary and registration fees, etc.): direct negotiations and requests to the owner / the LFRA group / the current farmer; answering the call (writing an application); call to a third party and lobbying to convince the individual owner to rent or sell his or her parcel (e.g., union, other farmers, family); passage through the legal bodies of the farming profession (CDOA, SAFER) for the plot arbitration; legal registrations (notary agreement and registration, land structure control policies, agricultural social-security mutual fund); preparation of the reciprocal commitments of the LFRA (work on a charter, a convention, a specification, etc.); expert support (real estate expert, lawyer, etc.); other, specify.
14. Was use of the land immediately possible after the purchase/lease agreement? The following items aim to measure your personal engagement and the fees incurred for land reconditioning allowing for "normal" farming use. For each item, indicate the mobilized total time (hours, days), the location of meetings and registrations, and other specific costs (equipment purchase or leasing): vegetation clearing; grubbing up trees; tillage and specific amendments; other, specify.
15. Have you continued to make efforts in order to maintain your rights during farmland use? The following items aim to measure your personal engagement and the fees incurred during the land use. For each item, indicate the mobilized total time (hours, days), the location of meetings and registrations, and other specific costs: satisfying eventual owner requirements in terms of farming use (maintenance of hedges, limitation of weed overgrowth, etc.); renegotiation with the owner(s) (rental amount, contract terms, farming practices, etc.); information and follow-up about land decision-making (urban projects or LFRA stakeholder projects that may affect the land plot; owner intentions to sell or to recover usage rights, etc.); lobbying to reverse a threat to land plots (urban planning organizations, farming professional networks, other farmers involved in LFRAs through leases, etc.); other, specify.
16. What difficulties have you encountered during land use cessation? The following items aim to measure your personal engagement and the costs incurred during the cessation of your access to land. For each item, indicate the mobilized total time (hours, days), the location of meetings and registrations, and other specific costs (lawyer fees…): efforts to reverse the cessation or reach compensation (negotiation with the owner, rural leasehold court, discussion with a lawyer, with experts, lobbying, etc.); farm production losses (loss of the culture in place, purchase of fodder for compensation…); other, specify.

Table 1 Characteristics of French farmland structure in comparison to other European countries
(columns: France; Germany; United Kingdom; Netherlands; Belgium; Italy; Spain; USA)
Average UAA (a) per farm, in ha (b): 58.7; 58.6; 93.6; 27.4; 34.6; 12.0; 24.1; 216
Owned UAA, % of total UAA (c): 23.6%; 38.7%; 69.4%; 58.8%; 32.9%; 64.9%; 61.0%; 60%
Leased UAA (18), % of total UAA (c): 76.5%; 61.4%; 30.6%; 41.2%; 67.1%; 35.1%; 39%; 38%
Share-cropping, % of leased UAA (d): 1.5%; 2.6%; -; 34.2%; 1.6%; 16.0%; 18.5%; 34.8%
(a) Utilized Agricultural Area. (b) European data from Eurostat for 2013 and US data from USDA NASS for 2012. (c) European data from Eurostat for 2010 and US data from USDA NASS for 2012. (d) European data from Eurostat for 2010 and US data from the US Agriculture Census for 1999 [START_REF] Sherrick | Farmland markets: historical perspectives and contemporary issues[END_REF].
LFRAs are Long-term and Full Rights Acquisitions of farmland by public and collective legal persons, who are involved in agricultural activity through political or ideological interests and use ownership as a lever.

Appendix E - Statistical summary (variable; description; source)
C_inf1: information gathering. C_inf2: contact with a third person. C_inf3: individual discussion with the seller/lessor. C_inf4: collective meeting with the seller/lessor. C_neg1: individual negotiating with the owner. C_neg2: collective negotiating. C_neg3: applicant's file toward an owner collective/public organization. C_neg4: applicant's file toward SAFER. C_neg5: land structure control policies (CDOA/DAPE). C_neg6: agricultural social-security mutual fund (Mutualité sociale agricole). C_neg7: real estate expert, lawyer. C_neg8: notary fees. C_imp1: land reconditioning. Each of these costs is calculated according to Appendix D from t, f, d and A (farmers survey), T (Insee, 2012) and I (Tax office, 2012). C_inf: total information costs, calculated as the sum of C_inf1, C_inf2, C_inf3 and C_inf4. C_neg: total negotiation costs, calculated as the sum of C_neg1 to C_neg8. C_imp: total implementation costs, equal to C_imp1. FC: financial costs incurred by farmers in land use exchanges; FB: financial benefits incurred by farmers in land use exchanges; both calculated according to Appendix D with data from the farmers survey and Agreste (2016), Banque de France (2006), Crédit agricole (2006) and Caisse des Dépôts (2008).

Table 2 Costs incurred by farmers in access to land use transactions (nature of each cost component for purchasing transactions, conventional lease arrangements and lease arrangements through LFRAs*)
Ex ante costs.
Information costs. Information search: word of mouth, newspapers and SAFER announcements for purchases and conventional lease arrangements; calls for projects, agricultural, NGO and rural development networks, newspapers, word of mouth and SAFER announcements for lease arrangements through LFRAs. Contact with the seller/lessor: phone, third person, mail, individual or collective meetings.
Negotiation costs. Negotiations: through discussions in meetings; individual or collective meetings with the owner or with the collective/public organization. Applicant's file: in case of SAFER retrocession (purchases); in case of SAFER retrocessions and calls for projects (leases through LFRAs). Registrations: as a user, to land structure control policies (CDOA/DAPE) and to the agricultural social-security mutual fund (Mutualité sociale agricole). Expert support: real estate expert, lawyer. Registration fees: compulsory notary fees (purchases); notary fees in case of a notarized lease, at the parties' choice (leases).
Implementation costs. Land reconditioning: vegetation clearing.
Ex post costs.
Monitoring costs. Supervision of contractual terms: monitoring of the lessor. Enforcement costs. Renegotiation, conflicts and contract termination: all methods used to conduct periodic renegotiations, manage any conflicts, terminate the contract and recover any inherent losses.
* LFRAs are Long-term and Full Rights Acquisitions of farmland by public and collective legal persons, who are involved in agricultural activity through political or ideological interests and use ownership as a lever.

Table 3 Socioeconomic characteristics of the transaction sample
(columns: total farms (=50); total transactions (=74); transactions of access to land use as a part of the entire bundle of property rights; through a conventional lease arrangement from an individual; through a lease arrangement through LFRAs (3))
Average UAA (1) per farm (ha): 52.4; 53.8; 62.2; 53.4; 47.0
Average labor force: 2.1; 2.2; 2.3; 2.2; 2.0
Breeding share: 52%; 54%; 57%; 54%; 52%
Market gardening share: 30%; 27%; 19%; 29%; 32%
Average farmer age (2): 43; 44; 45; 44; 41
Average studied transaction surface (ha): -; -; 1.0; 3.7; 4.1
Average transaction date: -; -; 2007; 2007; 2011
Average duration of the transaction ex ante step (in months): -; -; 8.2; 5.0; 8.6
(1) Utilized Agricultural Area. (2) In 2012; age of the interviewed farmer in the case of agricultural group holdings involving other associates. (3) LFRAs are Long-term and Full Rights Acquisitions of farmland by public and collective legal persons, who are involved in agricultural activity through political or ideological interests and use ownership as a lever.

Table 4 Ex ante exchange costs faced by farmers in access to land use transactions in France, per hectare (source: our 2012 survey and our calculations; *LFRAs are Long-term and Full Rights Acquisitions of farmland by public and collective legal persons, who are involved in agricultural activity through political or ideological interests and use ownership as a lever)
(columns: access to land use as a part of the entire bundle of property rights; through a conventional lease arrangement from an individual; through a lease arrangement through LFRAs*)
Information costs. Information search, information gathering (C_inf1): €2.19; €0.19; €3.28. Contact with a third person (C_inf2): €2.32; €9.49; €4.30. Contact with the seller/lessor, individual discussion (C_inf3): €1.10; €5.68; €14.33. Collective meeting (C_inf4): -; €1.61; €6.88. Total information costs: €5.60; €16.96; €28.79.
Negotiation costs. Individual negotiating with the owner (C_neg1): €43.78; €21.67; €51.32. Collective negotiating (C_neg2): -; -; €71.81. Applicant's file toward an owner collective/public organization (C_neg3): -; -; €44.93. Applicant's file toward SAFER (C_neg4): €6.16; €0.11; €4.49. Registrations with land structure control policies (CDOA/DAPE) (C_neg5): €0.22; €8.49; -. Agricultural social-security mutual fund (Mutualité sociale agricole) (C_neg6): -; €1.69; -. Expert support, real estate expert or lawyer (C_neg7): €13.21; €2.11; -. Registration fees, notary fees (C_neg8): €2,381.74; -; €39.40. Total negotiation costs: €2,445.10; €34.08; €211.95. Total without notary fees: €63.36; €34.08; €172.55.
Implementation costs. Land reconditioning, vegetation clearing (C_imp): €532.54; €45.31; €33.90.
Source: our 2012 survey and our calculations.

Table 5 Ex ante exchange costs incurred by farmers in the three coordination mechanisms for access to farmland use in France, per hectare
(columns, with shares of total costs in parentheses: access to land use as a part of the entire bundle of property rights; through a conventional lease arrangement from an individual; through a lease arrangement through LFRAs*)
Information (C_inf): €5.60 (0%); €16.96 (1%); €28.79 (1%)
Negotiation (C_neg): €2,445.10 (58%); €34.08 (2%); €211.95 (10%)
of which notary fees: €2,381.74 (56%); €0.00 (0%); €39.40 (2%)
Implementation (C_imp): €532.54 (13%); €45.31 (3%); €33.90 (2%)
Total ex ante exchange costs: €2,983.25 (70%); €96.35 (7%); €274.64 (13%)
Financial costs (FC): €1,254.13 (30%); €1,381.80 (93%); €1,921.53 (87%)
Total costs: €4,237.38 (100%); €1,478.16 (100%); €2,196.17 (100%)
Financial benefits (FB): €4,305.95; €2,805.30; €2,683.40
Transaction result (financial benefits minus total costs): €68.58; €1,327.14; €487.23
Source: our calculations. *LFRAs are Long-term and Full Rights Acquisitions of farmland by public and collective legal persons, who are involved in agricultural activity through political or ideological interests and use ownership as a lever.

Created in 2003, the French association Terre de Liens (land of connections) aims at contributing to the creation of environmentally responsible rural activities through the collective acquisition of agricultural land and buildings. It also aims to restore land management concerns to the minds of civil society and politicians. Terre de Liens is actually a federation of 15 regional associations of the same name. To implement its action plan, the Terre de Liens movement has created two tools: one for solidarity investment, the Terre de Liens Landholding Trust, which is a private savings fund used to acquire agricultural land that is then rented out to farmers, and the Terre de Liens Foundation. Recognized as being of public interest, the latter may accept donations of money and farms, notably from public authorities.
Land trusts are "nonprofit organizations that conserve environmental amenities on private land" [START_REF] Parker | Land trusts and the choice to conserve land with full ownership or conservation easements[END_REF].
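The comparative indicators reported in Table 5 follow directly from the components of Table 4. The short Python sketch below is ours and is only a numerical check of that arithmetic, not part of the original methodology: it recomputes the exchange-cost totals, their share of total costs and the transaction result from the per-hectare figures given above.

```python
# Reproducing the comparative figures of Table 5 from the cost components of Table 4
# (values in euros per hectare; mechanism order: purchase, conventional lease, LFRA lease).
information        = [5.60, 16.96, 28.79]
negotiation        = [2445.10, 34.08, 211.95]
implementation     = [532.54, 45.31, 33.90]
financial_costs    = [1254.13, 1381.80, 1921.53]
financial_benefits = [4305.95, 2805.30, 2683.40]

for k, name in enumerate(["purchase", "conventional lease", "LFRA lease"]):
    exchange = information[k] + negotiation[k] + implementation[k]
    total = exchange + financial_costs[k]
    result = financial_benefits[k] - total
    print(f"{name}: exchange costs {exchange:.2f} euros/ha "
          f"({exchange / total:.0%} of total costs), transaction result {result:.2f}")
```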
SAFER ("Société d'aménagement foncier et d'établissement rural," or Farming Land Ownership Regulation Societies) are non-profit organizations under the supervision of the Agriculture and Finance Ministries. The organization regulates farmland ownership, notably using preemption rights and farm transfers, and supports local authorities in planning policies. Breaking with traditional customary rights frameworks, its political implications (i.e., land registration and property entitlement) are debated[START_REF] Bromley | Formalising property relations in the developing world: The wrong prescription for the wrong malady[END_REF]. In these transactions, the lessor is paid according to annual agricultural profits and thus shares risks and profits of the tenant's agricultural holding. These transactions represent a minority of lease arrangements in France. European data regarding France and French data differ slightly for 2010 because of the harmonization of calculation methods across European countries. This excludes group holding lease arrangements and contracts with personal associates (15% of UAA in 2010). Not all tenants follow this rule; registration costs evaluated here are mainly due to particular registration difficulties. "Leased UAA" includes conventional lease arrangements and leases of public/collectively owned farmland. However, the latter arrangements represent an infinitesimal portion of total land leased, which makes this figure most representative of conventional lease arrangements. The 2010 zoning of urban areas also distinguishes: -The "average areas", a group of municipalities without pockets of clear land, constituted by a center of 5,000 to 10,000 jobs, and by rural districts or urban units among which at least 40 % of the employed resident population works in the center or in the municipalities attracted by this center. --The "small areas", a group of municipalities without pockets of clear land, constituted by a center of 1,500 to 5,000 jobs, and by rural districts or urban units among which at least 40 % of the employed resident population works in the center or in the municipalities attracted by this center. Let the exchange cost be of a given type , the spent time in hours, the expense in euros, the travelled distance in kilometers, the area of the land whose use is exchanged, the cost of farmer labor in theory in euros per hour, and the kilometer expense in euros per kilometer. Then, the exchange cost in euro per hectare is: As follows, the principle of arbitration for financial costs and benefits is (Sousa 2008, p.6): "The farmer has two choices: buy or rent the land, reasoning the worth for a hectare of land. The point of view of the owner: • Buy a land parcel at _% self-financing, and he borrows the difference, at a rate of _% for _years • The repayment annuities of the loan are calculated according to the principle of constant annuities • Its valuation is x% / year -depending on the department and the soil considered -throughout the duration of borrowing The tenant's point of view: • He or she pays the rent (rent) • Rent is valued at x% / year -depending on the department and the nature of culture consideredthroughout the duration of placement • It places its savings (equal to the amount of the contributing staff) at the 10-year OAT risk free rate • It also places the differential that exists between the refund of the loan (the owner's case) and the rent he or she pays." 
Let A be the area of land exchanged, P the real purchase price in the case of a purchase, V_P the average annual valuation of land capital (from our calculations, based on data from the Ministry of Agriculture [START_REF] Fnsafer/Agreste | L'essentiel des marchés fonciers ruraux en[END_REF]: "Average price of cropland and grassland, for Departments and Small Agricultural Areas, evolution from 2000 to 2015"), R the annual rent of the lease arrangement, V_R the average annual rate of rent evolution (from our calculations, based on a compilation of departmental prefectoral decrees fixing each year, per department, an index for rents linked with rural leaseholds), T the risk-free investment rate "OAT 10 years", 3.81% (Banque de France, December 2006), i the borrowing rate for land acquisition by farmers, 4.14% (Crédit agricole, 2006), j the farmer's average borrowed share of the purchase price, 0.40 (Caisse des Dépôts, 2008), n the duration of maturities, or average loan term, 15 years [START_REF] Sousa | Est-il préférable d'acheter plutôt que de louer la terre ? In Info Agricole[END_REF], and P_T the theoretical purchase price per hectare in the case of a lease (from our calculations, based on data from the Ministry of Agriculture [START_REF] Fnsafer/Agreste | L'essentiel des marchés fonciers ruraux en[END_REF]: "Average price of cropland and grassland, per Small Agricultural Areas from 2000 to 2015", and the area of the land exchanged). Then, financial costs and benefits are such that:
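The exact expressions for financial costs and benefits are not reproduced above. The following Python sketch only illustrates the buy-or-lease arbitration principle quoted from Sousa (2008) under simplifying assumptions of ours (constant-annuity loan over n years, land value and rent growing at constant rates, savings and the annuity-rent differential placed at the risk-free rate); the function names and the example figures are hypothetical, not the authors' formulas.

```python
def annuity(principal, rate, years):
    """Constant annual payment repaying `principal` at interest `rate` over `years`."""
    return principal * rate / (1.0 - (1.0 + rate) ** (-years))

def buy_or_lease(P, R, V_P, V_R, T=0.0381, i=0.0414, j=0.40, n=15):
    """Rough per-hectare comparison of buying versus leasing over n years (illustrative only).

    P : purchase price per hectare, R : first-year rent per hectare,
    V_P : annual valuation rate of land capital, V_R : annual rate of rent evolution,
    T : risk-free rate (10-year OAT), i : borrowing rate,
    j : borrowed share of the purchase price, n : loan term in years.
    """
    pay = annuity(j * P, i, n)                      # owner's constant annuity
    owner_outflow = (1 - j) * P + pay * n           # down payment plus repayments
    owner_benefit = P * (1 + V_P) ** n              # land value at the horizon
    tenant_outflow = sum(R * (1 + V_R) ** k for k in range(n))      # rents paid
    savings = (1 - j) * P * (1 + T) ** n            # down payment invested instead
    diff_invested = sum(max(pay - R * (1 + V_R) ** k, 0.0) * (1 + T) ** (n - k)
                        for k in range(n))          # annuity-rent differential placed at T
    tenant_benefit = savings + diff_invested
    return {"owner": owner_benefit - owner_outflow,
            "tenant": tenant_benefit - tenant_outflow}

print(buy_or_lease(P=6000, R=150, V_P=0.03, V_R=0.015))
```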
01775239
en
[ "qfin.cp", "math.math-st" ]
2024/03/05 22:32:18
2017
https://pastel.hal.science/tel-01775239/file/65238_ACHAB_2017_archivage.pdf
Agathe Guilloux, Iacopo Mastromatteo
Keywords: Convex Optimization, Stochastic Gradient Descent, Monte Carlo Markov Chain, Survival Analysis, Conditional Random Fields
CHAPTER IV Constrained optimization approach: Hawkes processes, convex relaxation, ADMM, compressed sensing
Hawkes Process, Non-parametric estimation, GMM method, Order books, Market Microstructure
I would first like to express my deepest gratitude to my thesis advisors, Stéphane Gaïffas and Emmanuel Bacry: the former for his enthusiasm and his broad knowledge of statistics, the latter for his flair and his perspective in the study of temporal signals. Their support and their trust during these three years allowed me to carry this work through. Thank you for everything you introduced me to and for everything I learned under your supervision! I thank Manuel Gomez Rodriguez and Niels Richard Hansen for the interest they showed in my work by agreeing to review this thesis, and I apologize once more for having stolen part of their summer. I am also very honored by the presence of Nicolas Vayatis and Vincent Rivoirard on my thesis committee. I am very grateful to my co-authors
Résumé
The goal of this thesis is to show how certain recent optimization methods make it possible to solve difficult estimation problems raised by the study of random events in time. Whereas the classical supervised learning framework treats observations as a collection of independent pairs of covariates and labels, event models focus on the continuous-valued arrival times of these events and seek to extract information about the data source. These timestamped events are linked by chronology and can therefore not be considered independent. This simple observation justifies the use of a particular mathematical tool, called a point process, to learn a structure from these events. Two examples of point processes are studied in this thesis. The first is the point process underlying the Cox proportional hazards model: its conditional intensity defines the hazard ratio, a fundamental quantity in the survival analysis literature. The Cox regression model relates the duration before the occurrence of an event, called a failure, to the covariates of an individual. This model can be reformulated within the framework of point processes. The second is the Hawkes process, which models the impact of past events on the probability of occurrence of future events. The multivariate case makes it possible to encode a notion of causality between the different dimensions considered. This thesis is divided into three parts. The first is devoted to a new optimization algorithm that we developed. It allows the parameter vector of the Cox regression to be estimated when the number of observations is very large. Our algorithm is based on the SVRG algorithm and uses an MCMC method to approximate the descent direction. We proved convergence rates for our algorithm and demonstrated its numerical performance on simulated and real-world data sets. The second part shows that Hawkes causality can be estimated non-parametrically from the integrated cumulants of the multivariate point process.
We developed two methods for estimating the integrals of the kernels of the Hawkes process, without making any assumption on the shape of these kernels. Our methods are faster and more robust with respect to the shape of the kernels than the state of the art. We proved the statistical consistency of the first method, and showed that the second one can be reduced to a convex optimization problem. The last part sheds light on order book dynamics thanks to the first non-parametric estimation method introduced in the previous part. We used data from the EUREX futures market, defined new order book models (based on previous work by Bacry et al.) and applied the estimation method to these point processes. The results obtained are very satisfactory and consistent with an econometric analysis. Such work proves that the method we developed makes it possible to extract a structure from data as complex as those arising from high-frequency finance.
Abstract
The aim of this thesis is to show how recent optimization methods help to solve challenging estimation problems posed by event models. While the classical framework of supervised learning treats the observations as a collection of independent covariate-label pairs, event models focus on the arrival dates of these events and seek to extract information about the data source. These timestamped events are ordered chronologically and therefore cannot be considered independent. This simple fact justifies the use of a particular mathematical tool called a point process to learn some structure from these events. Two examples of point processes are studied in this thesis. The first is the point process underlying the Cox proportional hazards model: its conditional intensity defines the hazard ratio, a fundamental quantity in the survival analysis literature. The Cox regression model links the duration before the occurrence of an event, called a failure, to an individual's covariates. This model can be reformulated using the framework of point processes. The second is the Hawkes process, which models the impact of past events on the probability of future events. The multivariate case makes it possible to encode a notion of causality between the different dimensions considered. This thesis is divided into three parts. The first focuses on a new optimization algorithm we have developed. It allows the parameter vector of the Cox regression to be estimated when the number of observations is very large. Our algorithm is based on the Stochastic Variance Reduced Gradient (SVRG) algorithm and uses a Monte Carlo Markov Chain (MCMC) method to approximate the descent direction. We have proved convergence rates for our algorithm and have shown its numerical performance on simulated and real-world data sets. The second part shows that Hawkes causality can be estimated in a non-parametric way from the integrated cumulants of the multivariate point process. We have developed two methods for estimating the integrals of the kernels of the Hawkes process, without making any hypothesis about the shape of these kernels. Our methods are faster and more robust, with respect to the shape of the kernels, than the state of the art. We have demonstrated the statistical consistency of the first method, and have shown that the second method can be reduced to a convex optimization problem.
The last part highlights the dynamics of the order book thanks to the first non-parametric estimation method introduced in the previous part. We used EUREX futures data, defined new order book models (based on previous work by Bacry et al.) and applied the estimation method to these point processes. The results obtained are very satisfactory and consistent with an econometric analysis. This work proves that the method we have developed makes it possible to extract a structure from data as complex as those resulting from high-frequency finance.
Introduction
The guiding principle of this thesis is to show how the arsenal of recent optimization methods can help solve challenging new estimation problems on event models. While the classical framework of supervised learning [START_REF] Hastie | Overview of supervised learning[END_REF] treats the observations as a collection of independent couples of features and labels, event models focus on arrival timestamps to extract information from the source of data. These timestamped events are chronologically ordered and cannot be regarded as independent. This mere statement motivates the use of a particular mathematical object called a point process [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF] to learn some patterns from events. Let us begin by presenting and motivating the questions on which we want to shed some light in this thesis.
Motivations
The amount of data being digitally collected and stored is vast and expanding rapidly. The use of predictive analytics to extract value from these data, often referred to as the data revolution, has been successfully applied in astronomy [START_REF] Feigelson | Big data in astronomy[END_REF], retail sales [MB + 12] and search engines [START_REF] Chen | Business intelligence and analytics: From big data to big impact[END_REF], among others. Healthcare institutions are now also relying on data to build customized and personalized treatment models using tools from survival analysis [START_REF] Murdoch | The inevitable application of big data to health care[END_REF]. Medical research often aims at uncovering relationships between a patient's covariates and the duration until a failure event (death or other adverse effects) happens. The information that some patients did not die during the study is obviously relevant, but it cannot be cast in a regression problem where one would need to observe the lifetime of all patients. This has been circumvented in [START_REF] David | Regression models and life tables (with discussion)[END_REF], one of the most cited scientific papers of all time [START_REF] Van Noorden | The top 100 papers[END_REF], with its proportional hazards model, which is regarded as a regression that can also extract information from censored data, i.e. patients whose failure time is not observed. An estimation procedure for the parameter vector of the regression, without any assumption on the baseline hazard (sometimes regarded as a nuisance parameter), was introduced in [START_REF] Cox | Partial likelihood[END_REF] and is carried out via the maximization of the partial likelihood of the model. Such a procedure can efficiently handle high-dimensional covariates, which arise with biostatistics data, by adding penalization terms to the criterion to minimize [START_REF] Goeman | L1 penalized estimation in the cox proportional hazards model[END_REF][START_REF] Tibshirani | Regression shrinkage and selection via the lasso[END_REF].
However, algorithms that maximize the Cox partial likelihood do not scale well when the number of patients is large, contrary to most of the algorithms that enabled the data revolution. We might thus ask ourselves the following question:
Question 1. How can the estimation algorithm for the regression parameter of the Cox proportional hazards model be adapted to the large-scale setting?
A few years before the twentieth century, the French sociologist Durkheim already argued that human societies were like biological systems in that they were made up of interrelated components [START_REF] Durkheim | Le suicide: étude de sociologie[END_REF]. Now that technology enables us to be remotely connected, plenty of fields involve networks: social networks, information systems, marketing, epidemiology, national security, and others. A better understanding of these large real-world networks and of the processes that take place over them would have paramount applications in the mentioned domains [START_REF] Rodriguez | Structure and Dynamics of Diffusion Networks[END_REF]. The observation of networks often reduces to noting when nodes of the network send a message, buy a product or get infected by a virus. We often observe where and when, but not how and why, messages are sent over a social network. Event data from multiple providers can however help uncover the joint dynamics and reveal the underlying structure of a system. One way to recover the influence structure between different sources is to use a kind of point process named the Hawkes process [START_REF] Hawkes | Spectra of some self-exciting and mutually exciting point processes[END_REF][START_REF] Hawkes | Point spectra of some mutually exciting point processes[END_REF], whose arrival rate of events depends on past events. Hawkes processes have been successfully applied to model the mutual influence between earthquakes with different times and magnitudes [START_REF] Ogata | Statistical models for earthquake occurrences and residual analysis for point processes[END_REF]. Namely, the model encodes how an earthquake increases the occurrence probability of new earthquakes in the form of aftershocks, via the use of Hawkes kernels. Hawkes processes also enable measuring what we call Hawkes causality, i.e. the average number of events of type i that are triggered by events of type j. Hawkes processes have been successfully applied in a broad range of domains; the two main applications model interactions within social networks [BBH12, [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF][START_REF] Iwata | Discovering latent influence in online social activities via shared cascade poisson processes[END_REF] and financial transactions [START_REF] Bacry | Hawkes processes in finance[END_REF]. However, the usual estimation of Hawkes causality is done by making strong assumptions on the shape of the Hawkes kernels in order to simplify the inference algorithm [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF]. A common assumption is a monotonically decreasing shape of the kernels (exponential or power law), meaning that an event's impact is always instantly maximal, which is unrealistic since in practice there may be a delay before the maximal impact. This leads to the following question:
Question 2. Can we retrieve Hawkes causality without parametrizing the kernel functions?
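To make the role of the kernels concrete before turning to their estimation, here is a minimal simulation sketch of a univariate Hawkes process with an exponential kernel, using Ogata's thinning method. It is an illustration of ours, not the estimation procedure developed in this thesis; the kernel integral alpha plays the role of the (here scalar) Hawkes causality, and the empirical event rate can be checked against the stationary value mu / (1 - alpha).

```python
import numpy as np

def simulate_hawkes_exp(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a univariate Hawkes process with exponential kernel
    phi(t) = alpha * beta * exp(-beta * t); the kernel integral is alpha (< 1)."""
    rng = np.random.default_rng(seed)
    events = []
    t = 0.0
    excitation = 0.0                         # sum of decayed kernel contributions at time t
    while True:
        lam_bar = mu + excitation            # valid upper bound: intensity only decays until next event
        w = rng.exponential(1.0 / lam_bar)   # candidate waiting time under the bound
        excitation *= np.exp(-beta * w)      # decay the excitation to the candidate time
        t += w
        if t > T:
            break
        if rng.uniform() <= (mu + excitation) / lam_bar:   # accept with prob lambda(t) / lam_bar
            events.append(t)
            excitation += alpha * beta       # intensity jump at an event, phi(0) = alpha * beta
    return np.array(events)

ts = simulate_hawkes_exp(mu=0.5, alpha=0.6, beta=2.0, T=5000.0)
print(len(ts) / 5000.0, 0.5 / (1 - 0.6))     # empirical rate vs stationary rate mu / (1 - alpha)
```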
To answer the second question positively, we developed two new nonparametric estimation methods for Hawkes causality, which are faster and scale better with a large number of nodes. In this part, we only focus on the first one, for which we have proved a consistency result. Since Bowsher's pioneering work [START_REF] Bowsher | Modelling security market events in continuous time: Intensity based, multivariate point process models[END_REF], who recognized the flexibility and the simplicity of using Hawkes processes to model the joint dynamics of trades and mid-price changes on the NYSE, Hawkes processes have steadily gained in popularity in the domain of high-frequency finance, see [START_REF] Bacry | Hawkes processes in finance[END_REF] for a review. Indeed, taking into account the irregular occurrences of transaction data requires considering them as a point process. Besides, in the financial area, plenty of features that summarize empirical findings are already known. For instance, the flow of trades is known to be autocorrelated and cross-correlated with price moves. Such features are called stylized facts, a term from the economist Nicholas Kaldor [START_REF] Kaldor | A model of economic growth[END_REF], who referred to statistical trends that need to be taken into account despite a possible lack of microscopic understanding. These stylized facts can advantageously be captured using the notion of Hawkes causality. Understanding order book dynamics is one of the core questions in financial statistics, and previous nonparametric representations of order books with multivariate Hawkes processes were low-dimensional because of the complexity of their estimation methods. The nonparametric estimation of Hawkes causality introduced in the second part of this thesis is fast and robust to the shape of the kernel functions, and it is natural to wonder what kind of stylized facts it can uncover from order book timestamped data.
Question 3. Can we draw a more precise picture of order book flow dynamics using the nonparametric estimation of Hawkes causality introduced in the second part?
Outline
Each question presented above corresponds to a part of the thesis. In Part I, we answer Question 1 by introducing a new stochastic gradient descent algorithm applied to the maximization of the regularized Cox partial likelihood, see details below. Indeed, the regularized Cox partial log-likelihood writes as a sum of subfunctions which depend on sequences of observations of varying length, contrary to the usual empirical risk minimization framework where subfunctions depend on one observation. Classical stochastic gradient descent algorithms are less effective in our case. We adapt the SVRG algorithm [START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF] [XZ14] by adding another sampling step: each subfunction's gradient is estimated using a Monte Carlo Markov Chain (MCMC). Our algorithm achieves linear convergence once the number of MCMC iterations is larger than an explicit lower bound. We illustrate the outperformance of our algorithm on survival datasets. Answers to Question 2 lie in Part II, where we study two nonparametric estimation procedures for Hawkes causality. Both methods are based on the computation of the integrated cumulants of the Hawkes process and take advantage of relations between the integrated cumulants and the Hawkes causality matrix.
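For a stationary multivariate Hawkes process with baseline intensities mu and matrix of kernel integrals G, the first two integrated cumulants admit the standard closed forms Lambda = (I - G)^{-1} mu and C = R diag(Lambda) R^T with R = (I - G)^{-1}. The Python sketch below is ours and only evaluates these theoretical quantities on a toy example, to show what the moment-matching methods of Part II fit against their empirical counterparts; the actual estimators, which also involve the third cumulant, are not reproduced here.

```python
import numpy as np

def mean_intensity(mu, G):
    """Stationary mean intensities Lambda solving Lambda = mu + G @ Lambda,
    i.e. Lambda = (I - G)^{-1} mu (requires the spectral radius of G to be < 1)."""
    d = len(mu)
    return np.linalg.solve(np.eye(d) - G, mu)

def integrated_covariance(mu, G):
    """Theoretical integrated covariance C = R diag(Lambda) R^T with R = (I - G)^{-1}."""
    d = len(mu)
    R = np.linalg.inv(np.eye(d) - G)
    return R @ np.diag(mean_intensity(mu, G)) @ R.T

# Toy 2-node system (hypothetical numbers): node 0 excites node 1 but not conversely.
mu = np.array([0.3, 0.1])
G = np.array([[0.2, 0.0],
              [0.5, 0.1]])
print(mean_intensity(mu, G))
print(integrated_covariance(mu, G))
```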
The first approach relies on matching the second- and third-order empirical integrated cumulants with their theoretical counterparts. This is done via the minimization of the squared norm of the difference between the two terms, which can be viewed as a Generalized Method of Moments [START_REF] Hall | Generalized method of moments[END_REF]. However, the optimization problem to solve is non-convex, thus providing an approximate solution to the exact initial problem. The second approach is based on the completion of the Hawkes causality matrix using the first and second integrated cumulants. The relaxation of the exact problem writes as a convex optimization problem, which enables us to provide the exact solution of this approximate problem. Approaches such as [START_REF] Reynaud-Bouret | Goodness-of-fit tests and nonparametric adaptive estimation for spike train analysis[END_REF] focus on the estimation of the kernel functions, and prevent the order book model's dimension from being too large and/or the dataset from being too heavy. Our nonparametric method only estimates the kernels' integrals, involves a lighter computation and thus scales better with a large number of nodes or a large number of events. We also show that the Hawkes causality matrix provides a very rich summary of the system interactions. It can thus be a valuable tool for understanding the underlying structure of a system with many types of events. Let us now rapidly review the main results of this thesis.
Part I: Large-scale Cox model
Many supervised machine learning problems can be cast into the minimization of an expected loss over a data distribution. Following the empirical risk minimization principle, the expected loss is approximated by an average of losses over training data, and a major success has been achieved by exploiting the sum structure to design efficient stochastic algorithms [START_REF] Bottou | Large-scale machine learning with stochastic gradient descent[END_REF]. Such stochastic algorithms enable a very efficient extraction of value from massive data. Applying this to large-scale survival data, from biostatistics or economics, is of course of great importance. In Chapter I, we review the recent advances in convex optimization with Stochastic Gradient Descent (SGD) algorithms, from the pioneering work of [START_REF] Robbins | A stochastic approximation method[END_REF] to the recent variants with variance reduction [START_REF] Defazio | Saga: A fast incremental gradient method with support for non-strongly convex composite objectives[END_REF] [XZ14] [SSZ13] [START_REF] Roux | A stochastic gradient method with an exponential convergence rate for finite training sets[END_REF]. We then introduce the notion of point process [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF], which provides key tools for modeling events, i.e. timestamp and/or location data. We finally introduce the Cox proportional hazards model [START_REF] David | Regression models and life tables (with discussion)[END_REF], which relates the time that passes before some event occurs to one or more covariates via the notion of hazard rate. In Chapter II, we introduce our new optimization algorithm to help fit large-scale Cox models.
Background on SGD algorithms, Point Processes and Cox proportional hazards model
In this chapter, we review the classic results behind Stochastic Gradient Descent algorithms and their variance-reduced adaptations. We then introduce the Cox proportional hazards model.
Stochastic Gradient Descent algorithms
SGD algorithms from a general distribution
A variety of statistical and machine learning optimization problems can be written as
$\min_{\theta \in \mathbb{R}^d} F(\theta) = f(\theta) + h(\theta)$ with $f(\theta) = \mathbb{E}_{\xi}[\ell(\theta, \xi)]$,
where $f$ is a goodness-of-fit measure depending implicitly on some observed data, $h$ is a regularization term that imposes structure on the solution, and $\xi$ is a random variable. Typically, $f$ is a differentiable function with a Lipschitz gradient, whereas $h$ might be non-smooth. First-order optimization algorithms are all variants of Gradient Descent (GD), which can be traced back to Cauchy [START_REF] Cauchy | Méthode générale pour la résolution des systemes d'équations simultanées[END_REF]. Starting at some initial point $\theta_0$, this algorithm minimizes a differentiable function $f$ by iterating the following equation:
$\theta_{t+1} = \theta_t - \eta_t \nabla f(\theta_t)$,   (1)
where $\nabla f(\theta)$ stands for the gradient of $f$ evaluated at $\theta$ and $(\eta_t)$ is a sequence of step sizes. Stochastic Gradient Descent (SGD) algorithms focus on the case where $\nabla f$ is intractable or at least time-consuming to compute. Noticing that $\nabla f(\theta)$ writes as an expectation, one idea is to approximate the gradient in the update step (1) with a Monte Carlo Markov Chain [START_REF] Atchadé | On perturbed proximal gradient algorithms[END_REF]. For instance, replacing the exact gradient $\nabla f(\theta)$ with its MCMC estimate has enabled a significant step forward in training undirected graphical models [START_REF] Hinton | Training products of experts by minimizing contrastive divergence[END_REF] and Restricted Boltzmann Machines [START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF]. This first form of Stochastic Gradient Descent is called Contrastive Divergence in the mentioned contexts.
SGD algorithms from the uniform distribution
Most machine learning optimization problems involve a data-fitting loss function $f$ averaged over sample points, because of the empirical risk minimization principle [START_REF] Vapnik | The nature of statistical learning theory[END_REF]. Namely, the objective function writes
$\min_{\theta \in \mathbb{R}^d} F(\theta) = f(\theta) + h(\theta)$ with $f(\theta) = \frac{1}{n} \sum_{i=1}^{n} f_i(\theta)$,
where $n$ is the number of observations and $f_i$ is the loss associated with the $i$-th observation. In that case, instead of running MCMC to approximate $\nabla f$, one uniformly samples a random integer $i$ between $1$ and $n$ and replaces $\nabla f(\theta)$ with $\nabla f_i(\theta)$ in the update step (1). In the large-scale setting, computing $\nabla f(\theta)$ at each update step represents the bottleneck of the minimization algorithm, and SGD helps decrease the computation time. Assuming that the computation of each $\nabla f_i(\theta)$ costs 1, the computation of the full gradient $\nabla f(\theta)$ costs $n$, meaning that SGD's update step is $n$ times faster than GD's. The comparison of the convergence rates is however different. Consider $f$ twice differentiable on $\mathbb{R}^d$, $\mu$-strongly convex, meaning that the eigenvalues of the Hessian matrix $\nabla^2 f(\theta)$ are greater than $\mu > 0$ for any $\theta \in \mathbb{R}^d$, and $L$-smooth, meaning that the same eigenvalues are smaller than $L > 0$. Convergence rates under other assumptions on the function $f$ can be found in [B + 15]. We denote $\theta^*$ its minimizer and define the condition number as $\kappa = L/\mu$. The convergence rate is defined, for iterative methods, as a tight upper bound on a pre-defined error, and is regarded as the speed at which the algorithm converges.
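As an illustration of update (1) and of its stochastic counterpart, the following sketch (ours) compares Gradient Descent and uniform-sampling SGD on a least-squares problem, which is L-smooth and strongly convex whenever the empirical covariance of the features is non-degenerate. The step-size choices and problem sizes are arbitrary demonstration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.1 * rng.normal(size=n)

def f_i_grad(theta, i):
    # gradient of f_i(theta) = 0.5 * (x_i^T theta - y_i)^2
    return (X[i] @ theta - y[i]) * X[i]

def full_grad(theta):
    return X.T @ (X @ theta - y) / n

L = np.linalg.eigvalsh(X.T @ X / n).max()      # smoothness constant of f

# Gradient Descent: exact gradient, constant step 1/L, cf. update (1)
theta_gd = np.zeros(d)
for t in range(200):
    theta_gd -= (1.0 / L) * full_grad(theta_gd)

# Stochastic Gradient Descent: one uniformly sampled observation per step,
# with decreasing (Robbins-Monro type) step sizes
theta_sgd = np.zeros(d)
for t in range(1, 50001):
    i = rng.integers(n)
    theta_sgd -= (1.0 / (L * (1.0 + 0.01 * t))) * f_i_grad(theta_sgd, i)

print(np.linalg.norm(theta_gd - theta_star), np.linalg.norm(theta_sgd - theta_star))
```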
Denoting $\theta_t$ the iterate after $t$ steps of an iterative algorithm and considering the difference $\mathbb{E} f(\theta_t) - f(\theta^\star)$ as the error, Gradient Descent's convergence rate is $O(e^{-t/\kappa})$, while Stochastic Gradient Descent's one is $O(\kappa/t)$. A convergence rate of the form $O(e^{-\alpha t})$ with $\alpha > 0$ is called a linear convergence rate since the error decrease after one iteration is at worst linear. Equivalently, convergence rates can be phrased as the total complexity to reach a fixed accuracy, i.e. the number of iterations after which the error falls below a fixed threshold. Recently, different works improved Stochastic Gradient Descent using variance reduction techniques from Monte Carlo methods. The idea is to add a control variate term to the descent direction to improve the bias-variance tradeoff in the approximation of the real gradient $\nabla f(\theta)$. Those variants also enjoy linear convergence rates, and thus smaller complexities.

Point processes

A point process is a useful mathematical tool to describe phenomena occurring at random locations and/or times. A point process is a random element whose values are point patterns on a set $S$. We present here the useful results when the set $S$ is the interval $[0, T)$ and the points are timestamps of events; this special case is sometimes called a temporal point process. The book [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF] is regarded as the main reference on point processes' theory. Every realization of a point process $\xi$ can be written as $\xi = \sum_{i=1}^{n} \delta_{t_i}$, where $\delta$ is the Dirac measure, $n$ is an integer-valued random variable and the $t_i$'s are random elements of $[0, T)$. It can be equivalently represented by a counting process $N_t = \int_0^t \xi(s)\,ds = \sum_{i=1}^{n} \mathbf{1}_{\{t_i \le t\}}$. The usual characterization of a temporal point process is done via the conditional intensity function, which is defined as the infinitesimal rate at which events are expected to occur after $t$, given the history of $N_s$ prior to $t$:

$\lambda(t \,|\, \mathcal{F}_t) = \lim_{h \to 0} \frac{\mathbb{P}(N_{t+h} - N_t = 1 \,|\, \mathcal{F}_t)}{h},$

where $\mathcal{F}_t$ is the filtration of the process that encodes the information available up to (but not including) time $t$. The most simple temporal point process is the Poisson process, which assumes that the events arrive at a constant rate, corresponding to a constant intensity function $\lambda_t = \lambda > 0$. Note that temporal point processes can also be characterized by the distribution of interevent times, i.e. the duration between two consecutive events. We remind that the distribution of interevent times of a Poisson process with intensity $\lambda$ is an exponential distribution of parameter $\lambda$. See page 41 of [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF] for four equivalent ways of defining a temporal point process. Two examples of temporal point processes are treated in this thesis. The first is the point process behind the Cox proportional hazards model: its conditional intensity function allows to define the hazard ratio, a fundamental quantity in the survival analysis literature, see [START_REF] Andersen | Statistical models based on counting processes[END_REF]. The Cox regression model relates the duration before an event called failure to some covariates. This model can be reformulated in the framework of point processes [START_REF] Andersen | Statistical models based on counting processes[END_REF]. The second is the Hawkes process, which models how past events increase the probability of future events. Its multivariate version enables encoding a notion of causality between the different nodes. We introduce below the Cox proportional hazards model, and the Hawkes processes in Part II.
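As an illustration of the last remark, here is a minimal sketch (in Python, with names chosen only for this example) of how a homogeneous Poisson process can be simulated by accumulating i.i.d. exponential interevent times; the empirical event rate then matches the constant intensity $\lambda$.

```python
import numpy as np

def simulate_poisson_process(lam, T, seed=0):
    """Simulate a homogeneous Poisson process of intensity lam on [0, T)
    by accumulating i.i.d. exponential interevent times."""
    rng = np.random.default_rng(seed)
    times = []
    t = rng.exponential(1.0 / lam)
    while t < T:
        times.append(t)
        t += rng.exponential(1.0 / lam)
    return np.array(times)

timestamps = simulate_poisson_process(lam=2.0, T=1000.0)
# The empirical rate N_T / T should be close to the constant intensity lam,
# and the interevent times should have mean 1 / lam.
print(len(timestamps) / 1000.0, np.diff(timestamps).mean())
```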
Cox proportional hazards model

Survival analysis focuses on time-to-event data, such as the death of biological organisms and failure in mechanical systems, and is now widespread in a variety of domains like biometrics, econometrics and insurance. The variable we study is the waiting time until a well-defined event occurs, and the main goal of survival analysis is to link the covariates, or features, of a patient to its survival time $T$. Following the theory of point processes, we define the intensity as the conditional probability that a patient dies immediately after $t$, given that he was alive before $t$:

$\lambda(t) = \lim_{h \to 0} \frac{\mathbb{P}(t \le T \le t + h \,|\, t \le T)}{h}.$

The most popular approach, for some reasons explained below, is the Cox proportional hazards model [START_REF] David | Regression models and life tables (with discussion)[END_REF]. The Cox model assumes a semi-parametric form for the hazard ratio at time $t$ for patient $i$, whose features are encoded in the vector $x_i \in \mathbb{R}^d$:

$\lambda_i(t) = \lambda_0(t) \exp(x_i^\top \theta),$

where $\lambda_0(t)$ is a baseline hazard ratio, which can be regarded as the hazard ratio of a patient whose covariates are $x = 0$. One estimation approach considers $\lambda_0$ as a nuisance and only estimates $\theta$ via maximizing a partial likelihood [START_REF] David | Regression models and life tables (with discussion)[END_REF]. This way of estimating suits clinical studies where physicians are only interested in the effects of the covariates encoded in $x$ on the hazard ratio. This can be done by computing the ratio of the hazard ratios of two different patients:

$\frac{\lambda_i(t)}{\lambda_j(t)} = \exp\big((x_i - x_j)^\top \theta\big).$

For that reason, the Cox model is said to be a proportional hazards model. However, maximizing this partial likelihood is a hard problem when we deal with large-scale (meaning a large number of observations $n$) and high-dimensional (meaning large $d$) data. To tackle the high-dimensionality, sparse penalized approaches have been considered in the literature [Tib96] [T + 97] [Goe10]. The problem is now to minimize the negative of the partial log-likelihood $f(\theta) = -\ell(\theta)$ with a penalization $h(\theta)$ that makes the predictor $\theta$ sparse and thus selects variables. We will discuss this approach and the different models in Chapter II. On the contrary, approaches to tackle the large-scale side of the problem do not yet exist.

SVRG beyond Empirical Risk Minimization

Survival data $(y_i, x_i, \delta_i)_{i=1}^{n_{\mathrm{pat}}}$ contains, for each individual $i = 1, \dots, n_{\mathrm{pat}}$, a features vector $x_i \in \mathbb{R}^d$ and an observed time $y_i \in \mathbb{R}_+$, which is a failure time if $\delta_i = 1$ or a right-censoring time if $\delta_i = 0$. If $D = \{i : \delta_i = 1\}$ is the set of patients for which a failure time is observed, if $n = |D|$ is the total number of failure times, and if $R_i = \{j : y_j \ge y_i\}$ is the index set of individuals still at risk at time $y_i$, the negative Cox partial log-likelihood writes

$-\ell(\theta) = \frac{1}{n} \sum_{i \in D} \Big[ -x_i^\top \theta + \log\Big( \sum_{j \in R_i} \exp(x_j^\top \theta) \Big) \Big] \qquad (2)$

for parameters $\theta \in \mathbb{R}^d$. Each gradient of the negative log-likelihood then writes as two nested expectations: one from a uniform distribution over $D$, the other over a Gibbs distribution, see Chapter II for details.
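To make formula (2) concrete, here is a minimal NumPy sketch of the negative Cox partial log-likelihood; the function name and the data layout (features X, observed times y, censoring indicators delta) are illustrative assumptions, not the implementation used in Chapter II.

```python
import numpy as np

def neg_cox_partial_loglik(theta, X, y, delta):
    """Negative Cox partial log-likelihood of equation (2).
    X: (n_pat, d) features, y: (n_pat,) observed times,
    delta: (n_pat,) equal to 1 if a failure is observed, 0 if right-censored."""
    scores = X @ theta                       # x_i^T theta for every patient
    failures = np.flatnonzero(delta == 1)    # the set D
    total = 0.0
    for i in failures:
        at_risk = y >= y[i]                  # risk set R_i = {j : y_j >= y_i}
        # log-sum-exp over the risk set, computed in a numerically stable way
        m = scores[at_risk].max()
        log_sum = m + np.log(np.exp(scores[at_risk] - m).sum())
        total += -scores[i] + log_sum
    return total / len(failures)             # division by n = |D|
```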
Our minimization algorithm is doubly stochastic in the sense that gradient steps are done using stochastic gradient descent (SGD) with variance reduction, and the inner expectations are approximated by a Monte Carlo Markov Chain (MCMC) algorithm. We derive conditions on the number of MCMC iterations guaranteeing convergence, and obtain a linear rate of convergence under strong convexity and a sublinear rate without this assumption.

Part II: Uncover Hawkes causality without parametrization

In Chapters III and IV, we study two methods to uncover causal relationships from a multivariate point process. We focus on one approach per chapter.

Hawkes processes

In order to model the joint dynamics of several point processes (for example timestamps of messages sent by different users of a social network), we will consider the multidimensional Hawkes model, introduced in 1971 in [START_REF] Hawkes | Point spectra of some mutually exciting point processes[END_REF] and [START_REF] Hawkes | Spectra of some self-exciting and mutually exciting point processes[END_REF], with cross-influences between the different processes. By definition, a family of $d$ point processes is a multidimensional Hawkes process if the intensities of all of its components write as linear regressions over the past of the $d$ processes:

$\lambda^i_t = \mu^i + \sum_{j=1}^{d} \int_0^t \phi^{ij}(t - s)\, dN^j_s.$

Another way to construct Hawkes processes is to consider the following population representation, see [START_REF] Hawkes | A cluster process representation of a selfexciting process[END_REF]: individuals of type $i$, $1 \le i \le d$, arrive as a Poisson process of intensity $\mu^i$. Every individual can have children of all types, and the law of the children of type $i$ of an individual of type $j$ who was born or migrated at $t$ is an inhomogeneous Poisson process of intensity $\phi^{ij}(\cdot - t)$. This construction is nice because it yields a natural way to define and measure the causality between events in the Hawkes model, where the integrals

$g^{ij} = \int_0^{+\infty} \phi^{ij}(u)\, du \ge 0 \quad \text{for } 1 \le i, j \le d$

weight the directed relationships between individuals. Namely, introducing the counting function $N^{i \leftarrow j}_t$ that counts the number of events of $i$ whose direct ancestor is an event of $j$, we know from [START_REF] Bacry | Hawkes processes in finance[END_REF] that

$\mathbb{E}[dN^{i \leftarrow j}_t] = g^{ij}\, \mathbb{E}[dN^j_t] = g^{ij} \Lambda^j\, dt, \qquad (3)$

where we introduced $\Lambda^i$ as the intensity expectation, satisfying $\mathbb{E}[dN^i_t] = \Lambda^i\, dt$. However, in practice, the Hawkes kernels are not directly measurable from the data and these measures of causality between the different kinds of events are thus inaccessible. In the literature, there are two main classes of estimation procedures for Hawkes kernels: the parametric one and the nonparametric one. The first one assumes a parametrization of the Hawkes kernels, the most usual being decaying exponentials, and estimates the parameters via the maximization of the Hawkes log-likelihood, see for example [START_REF] Bacry | A generalization error bound for sparse and low-rank multivariate hawkes processes[END_REF] or [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF].
The second one is based either on the numerical resolution of Wiener-Hopf equations, which link the Hawkes kernels to the correlation structure of the process [START_REF] Bacry | Second order statistics characterization of hawkes processes and non-parametric estimation[END_REF] (or equivalently on the approximation of the Hawkes process as an Autoregressive model and the resolution of Yule-Walker equations [START_REF] Eichler | Graphical modeling for multivariate hawkes processes with nonparametric link functions[END_REF]), or on a method of moments via the minimization of the contrast function defined in [START_REF] Reynaud-Bouret | Goodness-of-fit tests and nonparametric adaptive estimation for spike train analysis[END_REF]. In Chapters III and IV, we propose two new nonparametric estimation methods to infer the integrals of the kernels using only the integrated moments of the multivariate Hawkes process. For all estimation procedures mentioned above, including ours, we need the following stability condition so that the process admits a version with a stationary intensity:

Assumption 1. The spectral norm of $G = [g^{ij}]$ satisfies $\|G\| < 1$.

Generalized Method of Moments approach

A recent work [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF] proved that the integrated cumulants of Hawkes processes can be expressed as functions of $G = [g^{ij}]$, and provided the constructive method to obtain these expressions. The first approach we developed in this part is a moment matching method that fits the second-order and third-order integrated cumulants of the process. To that end, we have designed consistent estimators of the integrated first, second and third cumulants of the Hawkes process. Their theoretical counterparts are polynomials of $R = (I - G)^{-1}$, as shown in [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF]:

$\Lambda^i = \sum_{m=1}^{d} R^{im} \mu^m$

$C^{ij} = \sum_{m=1}^{d} \Lambda^m R^{im} R^{jm}$

$K^{ijk} = \sum_{m=1}^{d} \big( R^{im} R^{jm} C^{km} + R^{im} C^{jm} R^{km} + C^{im} R^{jm} R^{km} - 2 \Lambda^m R^{im} R^{jm} R^{km} \big).$

Once we observe the process $N_t$ for $t \in [0, T]$, we compute the empirical integrated cumulants on windows $[-H_T, H_T]$ and minimize the squared difference $\mathcal{L}_T$ between the theoretical cumulants and the empirical ones. We have proven the consistency of our estimator in the limit $T \to \infty$, once the sequence $(H_T)$ satisfies some conditions. Our problem can be seen as a Generalized Method of Moments [START_REF] Hall | Generalized method of moments[END_REF]. To prove the consistency of the empirical integrated cumulants, we need the following assumption:

Assumption 2. The sequence of the integration domain's half-lengths satisfies $H_T \to \infty$ and $H_T^2 / T \to 0$.

We prove in Chapter III a consistency theorem, stating that the resulting estimator converges in probability to $G$ as $T \to \infty$. The numerical part, on both simulated and real-world datasets, gives very satisfying results. We first simulated event data, using the thinning algorithm of [START_REF] Ogata | On lewis' simulation method for point processes[END_REF], with very different kernel shapes (exponential, power law and rectangular), and recover the true value of $G$ for each kind of kernel. Our method is, to the best of our knowledge, the most robust with respect to the shape of the kernels. We then ran our method on the 100 most cited websites of the MemeTracker database, and on financial order book data: we outperformed state-of-the-art methods on MemeTracker and extracted nice and interpretable features from the financial data.
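The theoretical cumulants above are simple polynomial functions of $R = (I - G)^{-1}$; as an illustration, the following NumPy sketch (with hypothetical names, not the code of Chapter III) computes $\Lambda$, $C$ and $K$ from given baselines $\mu$ and a causality matrix $G$ satisfying Assumption 1.

```python
import numpy as np

def integrated_cumulants(mu, G):
    """Theoretical integrated cumulants (Lambda, C, K) of a stationary Hawkes
    process with baselines mu (d,) and kernel integrals G (d, d), computed
    through R = (I - G)^{-1} as in the formulas recalled above."""
    d = len(mu)
    R = np.linalg.inv(np.eye(d) - G)
    Lam = R @ mu                                    # first cumulant Lambda^i
    C = np.einsum('m,im,jm->ij', Lam, R, R)         # second cumulant C^{ij}
    K = (np.einsum('im,jm,km->ijk', R, R, C)
         + np.einsum('im,jm,km->ijk', R, C, R)
         + np.einsum('im,jm,km->ijk', C, R, R)
         - 2.0 * np.einsum('m,im,jm,km->ijk', Lam, R, R, R))
    return Lam, C, K

# Toy 2-dimensional example whose causality matrix satisfies ||G|| < 1.
mu = np.array([0.1, 0.2])
G = np.array([[0.3, 0.0], [0.2, 0.4]])
Lam, C, K = integrated_cumulants(mu, G)
```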
Let us also mention that our method is significantly faster (roughly 50 times faster), since previous methods aim at estimating functions while we only focus on their integrals. The simplicity of the method, which maps a list of lists of timestamps to a causality map between the nodes, and its statistical consistency, incited us to design new point process models of the order book and capture its dynamics. The features extracted using our method have a very insightful economic interpretation. This is the main purpose of Part III.

Constrained optimization approach

The previous approach based on the Generalized Method of Moments needs the first three cumulants to obtain enough information from the data to recover the $d^2$ entries of $G$. Assuming that the matrix $G$ has a certain structure, we can get rid of the third order cumulant and design another estimation method using only the first two integrated cumulants. Moreover, the resulting optimization problem is convex, contrary to the minimization of $\mathcal{L}_T$ above, which enables convergence to the global minimum. The matrix we want to estimate minimizes a simple convex criterion $f$, typically a norm, while being consistent with the first two empirical integrated cumulants. We formulate our problem as the following constrained optimization problem:

$\min_G f(G) \quad \text{s.t.} \quad C = (I - G)^{-1} L (I - G^\top)^{-1}, \qquad \|G\| < 1, \qquad g^{ij} \ge 0,$

where $f(G)$ is the convex criterion mentioned above. Contrary to the optimization problem of the previous chapter, the problem just stated is convex. We test this procedure on numerical simulations of various Hawkes kernels and real order book data, and we show how the criterion $f$ impacts the matrices we retrieve.

A single asset 12-dimensional Hawkes order book model

As a first application of the procedure described in Chapter III, we consider the following 12-dimensional point process, a natural extension of the 8-dimensional point process introduced in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF]:

$N_t = (T^+_t, T^-_t, L^+_t, L^-_t, C^+_t, C^-_t, T^a_t, T^b_t, L^a_t, L^b_t, C^a_t, C^b_t)$

where each dimension counts the number of events before $t$:

• $T^+$ ($T^-$): upward (downward) mid-price move triggered by a market order.
• $L^+$ ($L^-$): upward (downward) mid-price move triggered by a limit order.
• $C^+$ ($C^-$): upward (downward) mid-price move triggered by a cancel order.
• $T^a$ ($T^b$): market order at the ask (bid) that does not move the price.
• $L^a$ ($L^b$): limit order at the ask (bid) that does not move the price.
• $C^a$ ($C^b$): cancel order at the ask (bid) that does not move the price.

We then use the causal interpretation of Hawkes processes to interpret our solution as a measure of the causality between events. This application of the method to this new model revealed the different interactions that lead to the high-frequency price mean reversion, and those between liquidity takers and liquidity makers. For instance, one observes the effects of $T^+$ events on the other events in Figure A.1 (in the first column on the left). The most relevant interactions are $T^+ \to L^+$ and $T^+ \to L^-$: the latter is more intense and related to the mean-reversion of the price. Indeed, when a market order consumes the liquidity available at the best ask, two main scenarios can occur for the mid-price to change again: either the consumed liquidity is replaced, reverting back the price (mean-reverting scenario, highly probable), or the price moves up again and a new best bid is created.
A multi-asset 16-dimensional Hawkes order book model

The nonparametric estimation method introduced in Chapter III allows a fast estimation for a nonparametric methodology. We then scale up the model so as to account for events on two assets simultaneously and unveil a precise structure of the high-frequency cross-asset dynamics. We consider a 16-dimensional model, made of two 8-dimensional models of the form

$N_t = (P^+_t, P^-_t, T^a_t, T^b_t, L^a_t, L^b_t, C^a_t, C^b_t)$

where the dimension $P^+$ ($P^-$) counts upward (downward) mid-price moves triggered by any order. We compared two couples of assets that share exposure to the same risk factors. The main empirical result of this study concerned the couple (DAX, EURO STOXX), for which price

Background on SGD algorithms, Point Processes and Cox proportional hazards model

1 SGD algorithms

Objectives that are decomposable as a sum of a number of terms come up often in applied mathematics and scientific computing. They are particularly prevalent in machine learning applications, where one wants to minimize the average loss function over all observations. In the last two decades, research on optimisation problems with a summation structure has focused more on the stochastic approximation setting, where the summation is assumed to be over an infinite set of terms [NJLS09, DS09, BCN16, Bot98]. The finite sum case has seen a resurgence in recent years after the discovery that there exist fast stochastic incremental gradient methods whose convergence rates are better than those of deterministic first-order methods. We provide a survey of fast stochastic gradient methods in the later parts of this section.

Definitions

In this work, we particularly focus on problems that have convex objectives. This is a major restriction, and one at the core of much of modern optimization theory. The primary reasons for targeting convex problems are their widespread use in applications and the relative ease of solving them. For convex problems, we can almost always establish theoretical results giving a practical bound on the amount of computation time required to solve a given convex problem [START_REF] Nesterov | Interior-point polynomial algorithms in convex programming[END_REF]. Convex optimisation is still of interest when addressing non-convex problems though: many algorithms that were developed for convex problems, motivated by their provably fast convergence, have later been applied to non-convex problems with good empirical results [START_REF] Goodfellow | Deep Learning[END_REF]. We denote $\nabla f$ the gradient of $f$, $\nabla^2 f$ its Hessian matrix and $\|\cdot\|$ the Euclidean norm. Let us now define some useful notions.

Definition 1. A function $f$ is $L$-smooth with $L > 0$ if $f$ is differentiable and its gradient is Lipschitz continuous, that is $\forall \theta, \theta' \in \mathbb{R}^d$, $\|\nabla f(\theta) - \nabla f(\theta')\| \le L \|\theta - \theta'\|$.

If the function $f$ is twice differentiable, the definition can be equivalently written: $\forall \theta \in \mathbb{R}^d$, $|\mathrm{eigenvalues}[\nabla^2 f(\theta)]| \le L$.

The other assumption we will sometimes make is that of strong convexity.

Definition 2. A function $f$ is $\mu$-strongly convex if: $\forall \theta, \theta' \in \mathbb{R}^d$, $\forall t \in [0, 1]$, $f(t\theta + (1-t)\theta') \le t f(\theta) + (1-t) f(\theta') - t(1-t)\frac{\mu}{2}\|\theta - \theta'\|^2$.

If $f$ is differentiable, the definition can be equivalently written: $\forall \theta, \theta' \in \mathbb{R}^d$, $f(\theta') \ge f(\theta) + \nabla f(\theta)^\top (\theta' - \theta) + \frac{\mu}{2}\|\theta' - \theta\|^2$.
If the function $f$ is twice differentiable, the definition can be equivalently written: $\forall \theta \in \mathbb{R}^d$, $|\mathrm{eigenvalues}[\nabla^2 f(\theta)]| \ge \mu$.

Gradient descent based algorithms can be easily extended to non-differentiable objectives $F$ if they write $F(\theta) = f(\theta) + h(\theta)$ with $f$ convex and differentiable, and $h$ convex and non-differentiable with a proximal operator that is easy to compute.

Definition 3. Given a convex function $h$, we define its proximal operator as $\mathrm{prox}_h(x) = \mathrm{argmin}_y \big[ h(y) + \frac{1}{2}\|x - y\|^2 \big]$, which is well-defined because of the strict convexity of the $\ell_2$-norm.

The proximal operator can be seen as a generalization of the projection. Indeed, if $h = 0$ on $C$ and $h = +\infty$ outside $C$, then $\mathrm{prox}_h$ is exactly the projection onto $C$. The computation of the proximal operator is also an optimization problem, but when the function $h$ is simple enough, the proximal operator has a closed form solution. Using these proximal operators, most algorithms enjoy the same theoretical convergence rates as if the objective were differentiable (i.e. $F(\theta) = f(\theta)$).

SGD algorithms from a general distribution

A variety of statistical and machine learning optimization problems writes $\min_{\theta \in \mathbb{R}^d} F(\theta) = f(\theta) + h(\theta)$ with $f(\theta) = \mathbb{E}_\xi[\ell(\theta, \xi)]$, where $f$ is a goodness-of-fit measure depending implicitly on some observed data, $h$ is a regularization term that imposes structure on the solution and $\xi$ is a random variable. Typically, $f$ is a differentiable function with a Lipschitz gradient, whereas $h$ might be non-smooth (typical examples include sparsity inducing penalties). First-order optimization algorithms are all variants of Gradient Descent (GD), which can be traced back to Cauchy [START_REF] Cauchy | Méthode générale pour la résolution des systemes d'équations simultanées[END_REF]. Starting at some initial point $\theta_0$, this algorithm minimizes a differentiable function by iterating steps proportional to the negative of the gradient, as explained in Algorithm 1.

Algorithm 1 Gradient Descent (GD)
  initialize $\theta$
  while not converged do
    $\theta \leftarrow \theta - \eta \nabla f(\theta)$
  end while
  return $\theta$

Stochastic Gradient Descent (SGD) algorithms focus on the case where $\nabla f$ is intractable or at least time-consuming to compute. Noticing that $\nabla f(\theta)$ writes as an expectation like $f$, one idea is to approximate the gradient in the update step in Algorithm 1 with a Monte Carlo Markov Chain [START_REF] Atchadé | On perturbed proximal gradient algorithms[END_REF]. Replacing the exact gradient $\nabla f(\theta)$ with its MCMC estimate is a general approach that enabled a significant step forward in training Undirected Graphical Models [START_REF] Hinton | Training products of experts by minimizing contrastive divergence[END_REF] and Restricted Boltzmann Machines [START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF]. This form of Stochastic Gradient Descent is called Contrastive Divergence in the mentioned context. Approximating the gradient of an expectation, sometimes named the score function [START_REF] Cox | Theoretical statistics[END_REF], is a recurrent task for many other problems.
Among them, we can cite posterior computation in variational inference [START_REF] Rezende | Stochastic backpropagation and approximate inference in deep generative models[END_REF], value function and policy learning in reinforcement learning [START_REF] Peters | Policy gradient methods[END_REF], derivative pricing [START_REF] Broadie | Estimating security price derivatives using simulation[END_REF], inventory control in operations research [START_REF] Fu | Gradient estimation[END_REF] and optimal transport theory [START_REF] Gelman | Simulating normalizing constants: From importance sampling to bridge sampling to path sampling[END_REF].

SGD algorithms from a uniform distribution

Most machine learning optimization problems involve a data fitting loss function $f$ averaged over the uniform distribution, for instance when $f$ is the average loss function over each observation of the data set. Namely, the optimization problem to solve writes $\min_{\theta \in \mathbb{R}^d} F(\theta) = f(\theta) + h(\theta)$ with $f(\theta) = \frac{1}{n}\sum_{i=1}^n f_i(\theta)$, where $n$ is the number of observations, and $f_i$ is the loss associated to the $i$-th observation. In that case, instead of running MCMC to approximate $\nabla f$, one uniformly samples a random integer $i$ between $1$ and $n$ and replaces $\nabla f(\theta)$ with $\nabla f_i(\theta)$ in the update step, as shown in Algorithm 2. In the literature, Stochastic Gradient Descent implicitly refers to the uniform distribution case. In the large-scale setting, computing $\nabla f(\theta)$ at each update step represents the bottleneck of the minimization algorithm, and SGD helps decreasing the computation time.

Algorithm 2 Stochastic Gradient Descent (SGD)
  initialize $\theta$ as the zero vector
  while not converged do
    pick $i \sim \mathcal{U}[n]$
    $\theta \leftarrow \theta - \eta \nabla f_i(\theta)$
  end while
  return $\theta$

Assuming the computation of each $\nabla f_i(\theta)$ costs 1, the computation of the full gradient $\nabla f(\theta)$ costs $n$, meaning that SGD's update step is $n$ times faster than GD's one. The comparison of the convergence rates is however different. Consider $f$ $L$-smooth and convex and denote $\theta^\star$ its minimizer. We define the condition number $\kappa = L/\mu$. The convergence rate is measured via the difference $f(\theta_t) - f(\theta^\star)$. Using the Gradient Descent algorithm with $\eta = 1/L$, the convergence rates are:

$f(\theta_t) - f(\theta^\star) \le O\Big(\frac{1}{t}\Big),$

$f(\theta_t) - f(\theta^\star) \le O\big(e^{-t/\kappa}\big)$ if $f$ is $\mu$-strongly convex.

The latter convergence rate, which geometrically decreases the error, is called a linear convergence rate since the error decrease after one iteration is at worst linear. The convergence (in expectation) of the sequence $(\theta_t)$ produced by the Stochastic Gradient Descent algorithm needs the step sizes to decrease to zero in a specific way, see [START_REF] Robbins | A stochastic approximation method[END_REF] for a general characterization. The convergence rate of stochastic algorithms is measured via the difference $\mathbb{E} f(\theta_t) - f(\theta^\star)$. Assuming each function $f_i$ is $L$-Lipschitz (and not $L$-smooth) and $f$ is convex, and denoting $\bar\theta_t = \frac{1}{t}\sum_{u=1}^t \theta_u$, the convergence rates of Stochastic Gradient Descent are:

$\mathbb{E} f(\bar\theta_t) - f(\theta^\star) \le O\Big(\frac{1}{\sqrt{t}}\Big)$ with $\eta_t = \frac{1}{L\sqrt{t}}$,

$\mathbb{E} f(\bar\theta_t) - f(\theta^\star) \le O\Big(\frac{\kappa}{t}\Big)$ with $\eta_t = \frac{1}{\mu t}$ if $f$ is $\mu$-strongly convex.

Convergence rates with other assumptions on the function $f$ can be found in [B + 15]. Recently, different works improved Stochastic Gradient Descent using variance reduction techniques from Monte Carlo methods. The idea is to add a control variate term to the descent direction to improve the bias-variance tradeoff in the approximation of the real gradient $\nabla f(\theta)$. Those variants also enjoy linear convergence rates with constant step sizes.
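For concreteness, here is a minimal sketch of Algorithm 2 on a toy least-squares objective, with the decaying step size $\eta_t = 1/(\mu t)$ and the iterate averaging $\bar\theta_t$ used in the rates above; the variance-reduced variants discussed next replace the decaying step with a constant one. All names are illustrative.

```python
import numpy as np

def sgd_least_squares(A, b, n_iter=10_000, mu=1.0, seed=0):
    """Plain SGD (Algorithm 2) on f(theta) = (1/2n) ||A theta - b||^2 with
    the step size eta_t = 1 / (mu * t) and Polyak averaging of the iterates."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)                       # pick i ~ U[n]
        grad_i = (A[i] @ theta - b[i]) * A[i]     # gradient of f_i
        theta -= grad_i / (mu * t)                # eta_t = 1 / (mu * t)
        theta_bar += (theta - theta_bar) / t      # running average of iterates
    return theta_bar
```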
SGD with Variance Reduction

The control variable is a variance reduction technique used in Monte Carlo methods [START_REF] Glasserman | Monte Carlo methods in financial engineering[END_REF]. Its principle consists in estimating the population mean $\mathbb{E}(X)$ while reducing the variance of a sample of $X$ by using a sample from another variable $Y$ with known expectation. We define a family of estimators

$Z_\alpha = \alpha(X - Y) + \mathbb{E}(Y), \qquad \alpha \in [0, 1],$

whose expectation and variance equal

$\mathbb{E}(Z_\alpha) = \alpha\,\mathbb{E}(X) + (1 - \alpha)\,\mathbb{E}(Y), \qquad \mathbb{V}(Z_\alpha) = \alpha^2\big[\mathbb{V}(X) + \mathbb{V}(Y) - 2\,\mathrm{cov}(X, Y)\big].$

The case $\alpha = 1$ provides an unbiased estimator, while $0 < \alpha < 1$ implies that $Z_\alpha$ is biased with reduced variance. This control variate is particularly useful when $Y$ is positively correlated with $X$. The authors of [START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF] observed that the variance induced by SGD's descent direction can only decrease to zero if decreasing step sizes are used, which prevents a linear convergence rate. In their work, they propose a variance reduction approach on the descent direction so as to use constant step sizes and obtain a linear convergence rate. The algorithms SAG [START_REF] Roux | A stochastic gradient method with an exponential convergence rate for finite training sets[END_REF][START_REF] Schmidt | Minimizing finite sums with the stochastic average gradient[END_REF], SVRG [START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF][START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF], SAGA [START_REF] Defazio | Saga: A fast incremental gradient method with support for non-strongly convex composite objectives[END_REF] and SDCA [START_REF] Shalev-Shwartz | Stochastic dual coordinate ascent methods for regularized loss minimization[END_REF] can be phrased with the variance reduction approach described above. The update steps of SAG, SAGA and SVRG with $i \sim \mathcal{U}[n]$ respectively write:

(SAG) $\theta \leftarrow \theta - \eta\Big(\frac{\nabla f_i(\theta) - y_i}{n} + \frac{1}{n}\sum_{j=1}^n y_j\Big),$

(SAGA) $\theta \leftarrow \theta - \eta\Big(\nabla f_i(\theta) - y_i + \frac{1}{n}\sum_{j=1}^n y_j\Big),$

(SVRG) $\theta \leftarrow \theta - \eta\Big(\nabla f_i(\theta) - \nabla f_i(\bar\theta) + \frac{1}{n}\sum_{j=1}^n \nabla f_j(\bar\theta)\Big).$

From the control variate interpretation, we observe that SAG's descent direction is a biased estimate ($\alpha = 1/n$) of the gradient $\nabla f(\theta)$, while SAGA's and SVRG's ones are unbiased ($\alpha = 1$).

Stochastic Average Gradient (SAG)

At each iteration, the algorithm SAG [START_REF] Roux | A stochastic gradient method with an exponential convergence rate for finite training sets[END_REF] computes one gradient $\nabla f_i$ with the up-to-date value of $\theta$, like SGD, and then descends in the direction of the average of the most recently computed gradients $\nabla f_j$ with equal weights, see Algorithm 3. Even though some gradients in the summation have not been updated recently, the algorithm enjoys a linear convergence rate in the strongly convex case. SAG can be regarded as a stochastic version of Incremental Average Gradient [START_REF] Blatt | A convergent incremental gradient method with a constant step size[END_REF], which has the same update with a different constant factor, and with cyclic computation of the gradient instead of randomized.
The convergence rates in the convex and strongly convex cases with $\eta = 1/(16L)$ respectively involve the averaged iterate $\bar\theta_t$ and the iterate $\theta_t$:

$\mathbb{E} f(\bar\theta_t) - f(\theta^\star) \le O\Big(\frac{1}{t}\Big)$

$\mathbb{E} f(\theta_t) - f(\theta^\star) \le O\Big(e^{-t\big(\frac{1}{8n} \wedge \frac{1}{16\kappa}\big)}\Big)$ if $f$ is $\mu$-strongly convex.

The algorithm SAG is adaptive to the level of convexity of the problem, as it may be used with the same step size on both convex and strongly convex problems.

Algorithm 3 Stochastic Average Gradient (SAG)
  initialize $\theta$ as the zero vector, $y_i = \nabla f_i(\theta)$ for each $i$
  while not converged do
    $\theta \leftarrow \theta - \frac{\eta}{n}\sum_{j=1}^n y_j$
    pick $i \sim \mathcal{U}[n]$
    $y_i \leftarrow \nabla f_i(\theta)$
  end while
  return $\theta$

Stochastic Variance Reduced Gradient (SVRG)

The SVRG algorithm [XZ14, JZ13] is a recent stochastic gradient algorithm with variance reduction and a linear convergence rate, given in Algorithm 4. Unlike SAG and SAGA, there is another parameter $m$ to tune, which controls the update frequency of the control variate $\bar\theta$. The algorithm S2GD [START_REF] Konecn | Semi-stochastic gradient descent methods[END_REF] was developed at the same time, and has the same update as SVRG. The difference lies in the update of the control variate $\bar\theta$:

• Option I: $\bar\theta$ is the average of the $\theta$ values from the last $m$ iterations, used in [START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF].
• Option II: $\bar\theta$ is a randomly sampled $\theta$ from the last $m$ iterations, used for S2GD [START_REF] Konecn | Semi-stochastic gradient descent methods[END_REF].

Consider $f$ $\mu$-strongly convex, a step size $\eta < 1/(2L)$, and assume $m$ is sufficiently large so that

$\rho = \frac{1}{\mu\eta(1 - 2L\eta)m} + \frac{2L\eta}{1 - 2L\eta} < 1,$

then the SVRG algorithm has a linear convergence rate if $t$ is a multiple of $m$: $\mathbb{E} f(\bar\theta_t) - f(\theta^\star) \le O(\rho^{t/m})$. Let us mention that SVRG does not require the storage of the $n$ individual gradients, contrary to SDCA, SAG and SAGA. The algorithm just stores the full gradient $\nabla f(\bar\theta)$ and re-evaluates the gradient $\nabla f_i(\bar\theta)$ at each iteration.

Algorithm 4 Stochastic Variance Reduced Gradient (SVRG)
  initialize $\theta$ and $\bar\theta$ as zero vectors, $t$ as zero
  while not converged do
    pick $i \sim \mathcal{U}[n]$
    $\theta \leftarrow \theta - \eta\big(\nabla f_i(\theta) - \nabla f_i(\bar\theta) + \nabla f(\bar\theta)\big)$
    $t \leftarrow t + 1$
    if $t$ is a multiple of $m$ then
      update $\bar\theta$ with option I or II
    end if
  end while
  return $\theta$

SAGA

The algorithm SAGA [START_REF] Defazio | Saga: A fast incremental gradient method with support for non-strongly convex composite objectives[END_REF], described in Algorithm 5, enjoys a linear convergence rate in the strongly convex case, like SAG and SVRG, but it has the advantage with respect to SAG that it allows non-smooth penalty terms such as $\ell_1$ regularization. The proof of the convergence rate is easier as well, especially because SAG's descent direction is a biased estimate of the gradient, while SAGA's one is unbiased. As SAG, the algorithm SAGA maintains the current iterate $\theta$ and a table of historical gradients. The convergence rates of the algorithm SAGA write:

$\mathbb{E} f(\bar\theta_t) - f(\theta^\star) \le O\Big(\frac{n}{t}\Big)$ with $\eta = \frac{1}{3L}$,

$\mathbb{E}\|\theta_t - \theta^\star\|^2 \le O\Big(e^{-\frac{t}{2(n + \kappa)}}\Big)$ with $\eta = \frac{1}{2(\mu n + L)}$ if $f$ is $\mu$-strongly convex.

Algorithm 5 SAGA
  initialize $\theta$ as the zero vector, $y_i = \nabla f_i(\theta)$ for each $i$
  while not converged do
    pick $i \sim \mathcal{U}[n]$
    $\theta \leftarrow \theta - \eta\big(\nabla f_i(\theta) - y_i + \frac{1}{n}\sum_{j=1}^n y_j\big)$
    $y_i \leftarrow \nabla f_i(\theta)$
  end while
  return $\theta$

Composite case

In the paragraphs above, we gave the convergence rates of the algorithms in the smooth case, i.e. when the objective function to minimize is a smooth function. When the objective function is not smooth, one writes it as the sum of its smooth part $f(\theta)$ and its non-smooth part $h(\theta)$. One can easily adapt the previous algorithms by computing the gradient of the smooth part $f$ and then projecting the iterate using the proximal operator of the non-smooth part $h$. This adds a projection step $\theta \leftarrow \mathrm{prox}_h(\theta)$ at the end of each iteration.
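The following sketch combines the SVRG update of Algorithm 4 (with the pivot simply reset to the current iterate at the start of each phase, a common simplification of options I and II) with the proximal step of the composite case, using soft-thresholding, the proximal operator of an $\ell_1$ penalty. It is an illustration under assumed callables grad_i and full_grad, not the reference implementation of the algorithms above.

```python
import numpy as np

def soft_threshold(x, s):
    """Proximal operator of s * ||.||_1 (the projection-like step of the
    composite case)."""
    return np.sign(x) * np.maximum(np.abs(x) - s, 0.0)

def prox_svrg(grad_i, full_grad, n, d, eta=0.01, lam=0.1, m=1000,
              n_epochs=20, seed=0):
    """Sketch of Prox-SVRG: every m inner steps the pivot theta_bar and its
    full gradient are refreshed; inner steps use the variance-reduced
    direction followed by the proximal step of an l1 penalty."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    for _ in range(n_epochs):
        theta_bar = theta.copy()
        mu_bar = full_grad(theta_bar)        # (1/n) sum_j grad f_j(theta_bar)
        for _ in range(m):
            i = rng.integers(n)
            direction = grad_i(i, theta) - grad_i(i, theta_bar) + mu_bar
            theta = soft_threshold(theta - eta * direction, eta * lam)
    return theta
```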
Point Processes

Point processes are useful to describe phenomena occurring at random locations and/or times. A point process is a random element whose values are point patterns on a set $S$. We present here the definitions and the useful results from point processes' theory. For further details, the book [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF] is regarded as the main reference in the area of point processes.

Definitions

Let $S$ be a locally compact metric space equipped with its Borel $\sigma$-algebra $\mathcal{B}$. Let $X_S$ be the set of locally finite counting measures on $S$, and $\mathcal{N}_S$ the smallest $\sigma$-algebra on $X_S$ such that all point counts $f_B : X_S \to \mathbb{N}$, $\omega \mapsto \#(\omega \cap B)$, are measurable for $B$ relatively compact in $\mathcal{B}$. A point process on $S$ is a measurable map $\xi$ from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ to the measurable space $(X_S, \mathcal{N}_S)$. Every realization of a point process $\xi$ can be written as $\xi = \sum_{i=1}^n \delta_{X_i}$, where $\delta$ is the Dirac measure, $n$ is an integer-valued random variable and the $X_i$'s are random elements of $S$. A point process can be equivalently represented by a counting process: $N(B) := \int_B \xi(x)\,dx$.

Temporal Point Processes

A particularly interesting case of point processes is given when $S$ is the time interval $[0, T)$, which we will call a temporal point process. Here, a realization is simply a set of time points: $\xi = \sum_{i=1}^n \delta_{t_i}$. With a slight notation abuse we will write $\xi = \{t_1, \dots, t_n\}$, where each $t_i$ is a random time before $T$, and we define $N_t = \sum_{\tau \in \xi} \mathbf{1}_{\tau \le t}$ the associated counting process. The conditional intensity function is the usual way to characterize temporal point processes where the present depends on the past. It is defined as the infinitesimal rate at which events are expected to occur after $t$, given the history of the counting process $N_t$ prior to $t$. Namely,

$\lambda(t \,|\, \mathcal{F}_t) = \lim_{dt \to 0} \frac{\mathbb{P}(N_{t+dt} - N_t = 1 \,|\, \mathcal{F}_t)}{dt},$

where $\mathcal{F}_t$ is the natural filtration of the process; it represents the information available up to (but not including) time $t$. The conditional intensity function is sometimes denoted $\lambda^*(t)$. The most simple temporal point process is the homogeneous Poisson process, which assumes that the events arrive at a constant rate, corresponding to a constant intensity function $\lambda(t \,|\, \mathcal{F}_t) = \lambda^*(t) = \lambda > 0$. More generally, we define the inhomogeneous Poisson process, for which the conditional intensity function depends on $t$ but not on the history, i.e. $\lambda(t \,|\, \mathcal{F}_t) = \lambda^*(t) = \lambda(t)$. The conditional intensity turns out to be interesting for multiple reasons. First, it is a convenient characterization of a temporal point process since it describes what is locally happening at $t$ and is easy to interpret as an instantaneous probability. Secondly, the conditional intensity can be used for simulating a temporal point process: the basic idea is to simulate a Poisson process and use the cumulative conditional intensity to time-scale the interevent times [START_REF] Ogata | On lewis' simulation method for point processes[END_REF].
Thirdly, the likelihood function can be expressed in closed form using the conditional intensity: if the point process is defined on $[0, T)$, then the likelihood and the log-likelihood functions are given by

$L(\xi) = \Big( \prod_{i=1}^n \lambda^*(t_i) \Big) \exp\Big( -\int_0^T \lambda^*(s)\,ds \Big), \qquad \log L(\xi) = \sum_{i=1}^n \log \lambda^*(t_i) - \int_0^T \lambda^*(s)\,ds.$

Finally, the conditional intensity function is useful for many other purposes, like the goodness-of-fit test known as residual analysis for point processes [START_REF] Ogata | Statistical models for earthquake occurrences and residual analysis for point processes[END_REF], or the conditional distribution of interevent times between events [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF]. We can also define the compensator $\Lambda(t)$ of the point process, with respect to $\mathcal{F}_t$, as the integral of the conditional intensity function: $\Lambda(t) = \int_0^t \lambda^*(s)\,ds$. We remind that $N_t - \Lambda(t)$ is then an $\mathcal{F}_t$-martingale. We remind that the distribution of interevent times of a Poisson process with intensity $\lambda$ is an exponential distribution of parameter $\lambda$. More generally, we denote $f^*(t)$ the conditional probability density function of the interevent time, $t_n$ the last event that occurred and $T$ the random next one, $F^*(t) = \mathbb{P}(t_n \le T \le t \,|\, \mathcal{F}_t)$ the conditional cumulative distribution function, and $S^*(t) = 1 - F^*(t) = \mathbb{P}(T \ge t \,|\, \mathcal{F}_t)$ the survival function. Now,

$\lambda^*(t) = \lim_{h \to 0} \frac{\mathbb{P}(t \le T \le t + h \,|\, T \ge t)}{h} = \lim_{h \to 0} \frac{1}{h}\,\frac{\mathbb{P}(t \le T \le t + h)}{\mathbb{P}(T \ge t)} = \lim_{h \to 0} \Big( \frac{1}{h}\,\frac{f^*(t)\,h}{S^*(t)} + o(1) \Big) = \frac{f^*(t)}{1 - F^*(t)}.$

Conversely, we can write the likelihood function of the next event using the conditional intensity function:

$f^*(t) = \lambda^*(t) \exp\Big( -\int_{t_n}^t \lambda^*(s)\,ds \Big).$

This last formula enables writing the likelihood of a point process's realization, already introduced above.

3 Cox proportional hazards model

Survival analysis

Survival analysis focuses on time-to-event data, such as the death of biological organisms and failure in mechanical systems, and is now widespread in a variety of domains like biometrics, econometrics and insurance [START_REF] Andersen | Statistical models based on counting processes[END_REF]. The variable we study is the waiting time until a well-defined event occurs, and the main goal of survival analysis is to link the covariates, or features, of a patient to its survival time. We denote $T$ the random variable of the time of death and we define the survival function as $S(t) = \mathbb{P}(t \le T)$. However, and fortunately, not all affected patients die during a medical study, and some patients can also leave the study before its end: we say that these observations are right-censored, in the sense that for some units the event of interest has not occurred at the time the data are analyzed. The information about a censored individual is incomplete, but it is still information, because one knows that the individual survived at least until the date he left the study. We will only study this kind of censoring in this part. Let us now consider the probabilistic formulation of our framework: let $T$ be a non-negative random variable representing the waiting time until the occurrence of an event (we will refer to this event as failure and to this waiting time as failure time). However, we do not always observe the random variable $T$, since the patient can leave the study, before its death, at a time $C$ called the censoring time. Actually, we observe $T \wedge C$ and we know whether the patient died or left the study, i.e. we know $\delta = \mathbf{1}_{\{T \le C\}}$. We also assume that $T$ and $C$ are independent.
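As a tiny illustration of right censoring, the sketch below generates latent failure and censoring times (arbitrarily taken exponential, an assumption made only for the example) and keeps what is actually observed, namely $y_i = T_i \wedge C_i$ and $\delta_i = \mathbf{1}_{\{T_i \le C_i\}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pat = 5
T = rng.exponential(10.0, size=n_pat)   # latent failure times
C = rng.exponential(8.0, size=n_pat)    # independent censoring times
y = np.minimum(T, C)                    # what is actually observed: T ^ C
delta = (T <= C).astype(int)            # 1 if the failure was observed
```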
We can now describe the model using counting processes. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(\mathcal{F}_t)_{t \ge 0}$ a filtration satisfying the usual conditions. Let $N$ be a point process with compensator $\Lambda$ with respect to $(\mathcal{F}_t)_{t \ge 0}$, so that $N - \Lambda$ is an $(\mathcal{F}_t)_{t \ge 0}$-martingale. We denote $(T_1, \dots, T_n)$ i.i.d. copies of the random variable of interest $T$, corresponding to $n$ different patients of a medical study for instance, $(C_1, \dots, C_n)$ i.i.d. copies of the censoring variable $C$, and we define for each patient $i$: $\delta_i = \mathbf{1}_{\{T_i \le C_i\}}$, the counting process $N_i(t) = \delta_i\,\mathbf{1}_{\{T_i \wedge C_i \le t\}}$, and $Y_i(t) = \mathbf{1}_{\{T_i \wedge C_i \ge t\}}$, which is a predictable process. To understand the behavior of the counting process $N_i(t)$, we introduce its intensity $\alpha_i(t)$, defined as the conditional probability that patient $i$ dies immediately after $t$, given that he was alive before $t$:

$\alpha(t) = \lim_{h \to 0} \frac{\mathbb{P}(t \le T \le t + h \,|\, t \le T)}{h} = \frac{-S'(t)}{S(t)}.$

Since the process can jump only once, the intensity of $N_i(t)$ takes the form $\alpha_i(t) = \lambda_i(t) Y_i(t)$, where $\lambda_i(t)$ is called the hazard ratio. We also introduce the cumulative hazard $\Lambda_i(t) = \int_0^t \lambda_i(s)\,ds$, which can be seen as the sum of the risks faced from $0$ to $t$. Survival analysis generally aims at estimating either $S(t)$ or $\lambda(t)$ (or $\Lambda(t)$) given the observations of $n$ individuals. Many approaches exist: the parametric one, which assumes that the functions can be described with a finite and small number of parameters, the nonparametric one, which assumes that the function of interest belongs to a certain class of smooth functions, and the semi-parametric one, which has parametric and non-parametric components. The most popular approach, for some reasons explained below, is the Cox proportional hazards model. The Cox model [START_REF] David | Regression models and life tables (with discussion)[END_REF] assumes a semi-parametric form for the hazard ratio at time $t$ for patient $i$, whose features are encoded in the vector $x_i \in \mathbb{R}^d$:

$\lambda_i(t) = \lambda_0(t) \exp(x_i^\top \theta)$

where $\lambda_0(t)$ is a baseline hazard ratio, which can be regarded as the hazard ratio of a patient whose covariates are $x = 0$. Two estimation approaches exist: either estimating $\lambda_0$ and $\theta$, which can be done via maximizing the full likelihood of the model [START_REF] Ren | Full likelihood inferences in the cox model: an empirical likelihood approach[END_REF] [She15], or considering $\lambda_0$ as a nuisance and only estimating $\theta$ via maximizing a partial likelihood $L(\theta)$ [START_REF] David | Regression models and life tables (with discussion)[END_REF]. This way of estimating suits clinical studies where physicians are only interested in the effects of the covariates encoded in $x$ on the hazard ratio. This can be done by computing the ratio of the hazard ratios of two different patients:

$\frac{\lambda_i(t)}{\lambda_j(t)} = \exp\big((x_i - x_j)^\top \theta\big).$

For that reason, the Cox model is said to be a proportional hazards model. However, maximizing such functions is a hard problem when we deal with large-scale (meaning large $n$) and high-dimensional (meaning large $d$) data. To tackle the high-dimensionality, sparse penalized approaches have been considered in the literature [START_REF] Tibshirani | Regression shrinkage and selection via the lasso[END_REF] [T + 97] [Goe10].
The problem is now to minimize the negative of the partial log-likelihood $-\ell(\theta)$ with a penalization that makes the predictor $\theta$ sparse and thus selects variables. We will discuss this approach and the different models further. On the contrary, approaches to tackle the large-scale side of the problem do not yet exist. We give an answer to this question in the following chapter.

Existing methods

The maximization of the partial likelihood $L_P(\theta)$ introduced in [START_REF] David | Regression models and life tables (with discussion)[END_REF] enables the estimation of $\theta$ without the estimation of $\lambda_0$. The partial likelihood writes:

$L_P(\theta) = \prod_{i=1}^n \Bigg( \frac{\exp(x_i^\top \theta)}{\sum_{j \in R_i} \exp(x_j^\top \theta)} \Bigg)^{\delta_i} \qquad (1)$

We prove in the appendix that the negative of the partial log-likelihood is convex, so the issue of finding the $\theta$ that matches our data can be expressed as a classical convex optimization problem. We will consider the problem of maximizing the partial likelihood of the Cox model in the rest of this chapter. However, in the case of large-scale (meaning large $n$) and high-dimensional (meaning large $d$) data, this function becomes hard to maximize. To tackle the high-dimensionality, sparse penalized approaches have been considered in the literature. The problem is now to minimize the penalized negative partial log-likelihood $-\ell(\theta) + \mathrm{pen}(\theta)$, i.e.

$\frac{1}{n} \sum_{i=1}^n \delta_i \Big[ -x_i^\top \theta + \log\Big( \sum_{j \in R_i} \exp(x_j^\top \theta) \Big) \Big] + \mathrm{pen}(\theta)$

where $\mathrm{pen}(\theta)$ is a penalization term that makes the predictor $\theta$ sparse and thus selects variables. Examples of sparse penalties include the Lasso [START_REF] Tibshirani | Regression shrinkage and selection via the lasso[END_REF] [T + 97], Elastic-Net [SFHT11] [YZ12], SCAD [START_REF] Fan | Variable selection via nonconcave penalized likelihood and its oracle properties[END_REF], Adaptive Lasso [START_REF] Zou | The adaptive lasso and its oracle properties[END_REF], Graphical Lasso [START_REF] Friedman | Sparse inverse covariance estimation with the graphical lasso[END_REF], SLOPE [BvdBS + 15] and others. Indeed, the Lasso penalty [START_REF] Tibshirani | Regression shrinkage and selection via the lasso[END_REF]

$\mathrm{pen}_{\mathrm{lasso}}(\theta) = \lambda \|\theta\|_1$

can be used to obtain a penalized partial likelihood estimator $\hat\theta$ [START_REF] Goeman | L1 penalized estimation in the cox proportional hazards model[END_REF]. The Lasso penalty tends to select only a few nonzero coefficients and does not handle very correlated predictors well: it will pick one and ignore the other. Another well-known penalty, called the Ridge penalty,

$\mathrm{pen}_{\mathrm{ridge}}(\theta) = \frac{\lambda}{2} \|\theta\|_2^2$

tends to shrink all coefficients towards zero and gives equal weights to very correlated predictors. Zou and Hastie combined the strengths of the two approaches with the Elastic-Net penalty [START_REF] Simon | Regularization paths for cox's proportional hazards model via coordinate descent[END_REF], where $\alpha \in [0, 1]$ controls the behavior of the penalty:

$\mathrm{pen}_{\mathrm{e\text{-}net}}(\theta) = \lambda \Big( \alpha \|\theta\|_1 + \frac{1}{2}(1 - \alpha) \|\theta\|_2^2 \Big)$

The authors of [START_REF] Gopakumar | Stabilizing sparse cox model using clinical structures in electronic medical records[END_REF] studied electronic medical records and used a sparse penalty which encodes the a priori relationship between predictors $i$ and $j$: $A_{ij} = 1$ if predictors $i$ and $j$ share a temporal or well-known relation, $A_{ij} = 0$ otherwise:

$\mathrm{pen}(\theta) = \lambda_1 \|\theta\|_1 + \frac{1}{2} \lambda_2 \sum_{i,j} A_{ij} (\theta_i - \theta_j)^2$
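For the elastic-net penalty above, the proximal operator used by the composite algorithms of Chapter I has a simple closed form: soft-thresholding followed by a scaling. A minimal sketch (illustrative, not the solver used later in this chapter):

```python
import numpy as np

def prox_elastic_net(x, step, lam, alpha):
    """Proximal operator of step * lam * (alpha * ||.||_1 + (1 - alpha)/2 * ||.||_2^2):
    soft-thresholding by step * lam * alpha, then shrinkage by
    1 + step * lam * (1 - alpha)."""
    thresh = step * lam * alpha
    z = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    return z / (1.0 + step * lam * (1.0 - alpha))
```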
These methods handle the high-dimensional side of the dataset, but do not look relevant when the number of patients $n$ is large. Indeed, the higher the number of examples $n$, the higher the time to compute the sum of loss functions (here, the negative of the penalized log-likelihood), and time can be the limiting factor when one envisions very large datasets. In the next chapter, we introduce a new stochastic algorithm with variance reduction that enables a faster minimization of the negative partial likelihood of the Cox model.

CHAPTER II

Large-scale Cox model

Abstract

We introduce a doubly stochastic proximal gradient algorithm for optimizing a finite average of smooth convex functions, whose gradients depend on numerically expensive expectations. Indeed, the effectiveness of SGD-like algorithms relies on the assumption that the computation of a subfunction's gradient is cheap compared to the computation of the total function's gradient. This is true in the Empirical Risk Minimization (ERM) setting, but can be false when each subfunction depends on a sequence of examples. Our main motivation is the acceleration of the optimization of the regularized Cox partial likelihood (the core model in survival analysis), but other settings can be considered as well. The proposed algorithm is doubly stochastic in the sense that gradient steps are done using stochastic gradient descent (SGD) with variance reduction, and the inner expectations are approximated by a Monte-Carlo Markov-Chain (MCMC) algorithm. We derive conditions on the number of MCMC iterations guaranteeing convergence, and obtain a linear rate of convergence under strong convexity and a sublinear rate without this assumption. We illustrate the fact that our algorithm improves on the state-of-the-art solver for the regularized Cox partial likelihood on several datasets from survival analysis.

Introduction

During the past decade, advances in biomedical technology have brought high dimensional data to biostatistics and to survival analysis in particular. Today's challenge for survival analysis lies in the analysis of massively high dimensional (numerous covariates) and large-scale (large number of observations) data, see in particular [START_REF] Murdoch | The inevitable application of big data to health care[END_REF]. Areas of application outside of biostatistics, such as economics (see [START_REF] Einav | Economics in the age of big data[END_REF]) or actuarial sciences (see [START_REF] Richards | A handbook of parametric survival models for actuarial use[END_REF]), are also concerned. One of the core models of survival analysis is the Cox model (see [START_REF] David | Regression models and life tables (with discussion)[END_REF]), for which we propose, in the present paper, a novel scalable optimization algorithm tuned to handle massively high dimensional and large-scale data. Survival data $(y_i, x_i, \delta_i)_{i=1}^{n_{\mathrm{pat}}}$ contains, for each individual $i = 1, \dots, n_{\mathrm{pat}}$, a features vector $x_i \in \mathbb{R}^d$ and an observed time $y_i \in \mathbb{R}_+$, which is a failure time if $\delta_i = 1$ or a right-censoring time if $\delta_i = 0$. If $D = \{i : \delta_i = 1\}$ is the set of patients for which a failure time is observed, if $n = |D|$ is the total number of failure times, and if $R_i = \{j : y_j \ge y_i\}$ is the index set of individuals still at risk at time $y_i$, the negative Cox partial log-likelihood writes

$-\ell(\theta) = \frac{1}{n} \sum_{i \in D} \Big[ -x_i^\top \theta + \log\Big( \sum_{j \in R_i} \exp(x_j^\top \theta) \Big) \Big] \qquad (1)$

for parameters $\theta \in \mathbb{R}^d$.
This model can be regarded as a regression of the $n$ failure times, using information from the $n_{\mathrm{pat}}$ patients that took part in the study. With high-dimensional data, a regularization term is added to the partial likelihood to automatically favor sparsity in the estimates, see [T + 97] and [START_REF] Simon | Regularization paths for cox's proportional hazards model via coordinate descent[END_REF] for a presentation of Lasso and elastic-net penalizations, see also the review paper by [START_REF] Witten | Survival analysis with high-dimensional covariates[END_REF] for an exhaustive presentation. Several algorithms for the Cox model have been proposed to solve the regularized optimization problem at hand, see [PH07, SKJP09, Goe10] among others. These implementations use Newton-Raphson iterations, i.e. inversions of large matrices, and can therefore not handle large-scale data. Cyclical coordinate descent algorithms have since been proposed and successfully implemented in the R packages coxnet and fastcox, see [START_REF] Simon | Regularization paths for cox's proportional hazards model via coordinate descent[END_REF][START_REF] Yang | A cocktail algorithm for solving the elastic net penalized cox's regression in high dimensions[END_REF]. More recently, [START_REF] Mittal | Large-scale parametric survival analysis[END_REF] adapted the column relaxation with logistic loss algorithm of [START_REF] Zhang | The value of unlabeled data for classification problems[END_REF] to the Cox model. The fact that all these algorithms are of cyclic coordinate descent type solves the problem of large matrix inversions encountered by Newton-Raphson type algorithms. Yet another computationally costly problem, specific to the Cox model, has not been fully addressed: the presence of cumulative sums (over indices $j \in R_i$) in the Cox partial likelihood. This problem was noticed in [START_REF] Mittal | Large-scale parametric survival analysis[END_REF], where a numerical workaround exploiting sparsity is proposed to reduce the computational cost. The cumulative sum prevents from successfully applying stochastic gradient algorithms, which are however known for their efficiency in handling large scale generalized linear models: see for instance SAG by [START_REF] Schmidt | Minimizing finite sums with the stochastic average gradient[END_REF], SAGA by [START_REF] Defazio | Saga: A fast incremental gradient method with support for non-strongly convex composite objectives[END_REF], Prox-SVRG by [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF] and SDCA by [START_REF] Shalev-Shwartz | Proximal stochastic dual coordinate ascent[END_REF], which propose very efficient stochastic gradient algorithms with constant step-size (hence achieving linear rates), see also Catalyst by [START_REF] Lin | A universal catalyst for first-order optimization[END_REF], which introduces a generic scheme to accelerate and analyze the convergence of those algorithms. Such recent stochastic gradient algorithms have shown that it is possible to improve upon proximal full gradient algorithms for the minimization of convex problems of the form

$\min_{\theta \in \mathbb{R}^d} F(\theta) = f(\theta) + h(\theta) \quad \text{with} \quad f(\theta) = \frac{1}{n} \sum_{i=1}^n f_i(\theta), \qquad (2)$

where the functions $f_i$ are gradient-Lipschitz and $h$ is prox-capable. These algorithms take advantage of the finite sum structure of $f$, by using some form of variance-reduced stochastic gradient descent.
It leads to algorithms with a much smaller iteration complexity, as compared to the proximal full gradient approach (FG), while preserving (or even improving) the linear convergence rate of FG in the strongly convex case. However, such algorithms are relevant when the gradients $\nabla f_i$ have a numerical complexity much smaller than $\nabla f$, such as for linear classification or regression problems, where $\nabla f_i$ depends on a single inner product $x_i^\top \theta$ between the features $x_i$ and the parameters $\theta$. In this paper, motivated by the important example of the Cox partial likelihood (1), we consider the case where the gradients $\nabla f_i$ can have a complexity comparable to the one of $\nabla f$. More precisely, we assume that they can be expressed as expectations, under a probability measure $\pi^i_\theta$, of random variables $G_i(\theta)$, i.e.,

$\nabla f_i(\theta) = \mathbb{E}_{G_i(\theta) \sim \pi^i_\theta}[G_i(\theta)]. \qquad (3)$

This paper proposes a new doubly stochastic proximal gradient descent algorithm (2SVRG), that leads to a low iteration complexity, while preserving linear convergence under suitable conditions for problems of the form (2) + (3). Our main motivation for considering this problem is to accelerate the training time of the penalized Cox partial likelihood. The function $-\ell(\theta)$ is convex (as a sum of linear and log-sum-exp functions, see Chapter 3 of [START_REF] Boyd | Convex optimization[END_REF]), and fits in the setting (2) + (3). Indeed, fix $i \in D$ and introduce

$f_i(\theta) = -x_i^\top \theta + \log\Big( \sum_{j \in R_i} \exp(x_j^\top \theta) \Big),$

so that

$\nabla f_i(\theta) = -x_i + \sum_{j \in R_i} x_j\, \pi^i_\theta(j) \quad \text{where} \quad \pi^i_\theta(j) = \frac{\exp(x_j^\top \theta)}{\sum_{j' \in R_i} \exp(x_{j'}^\top \theta)}, \quad \forall j \in R_i.$

This entails that $\nabla f_i(\theta)$ satisfies (3) with $G_i(\theta)$ a random variable valued in $\{-x_i + x_j : j \in R_i\}$ and such that $\mathbb{P}(G_i(\theta) = -x_i + x_j) = \pi^i_\theta(j)$ for $j \in R_i$. Note that the numerical complexity of $\nabla f_i$ can be comparable to the one of $\nabla f$ when $y_i$ is close to $\min_i y_i$ (recalling that $R_i = \{j : y_j \ge y_i\}$). Note also that a computational trick allows to compute $\nabla f(\theta)$ with a complexity $O(nd)$. Indeed, once all data points are sorted, the sum can be computed recursively. This makes this setting quite different from the usual case of empirical risk minimization (linear regression, logistic regression, etc.), where all the gradients $\nabla f_i$ share the same low numerical cost.

Comparison with previous work

SGD techniques. Recent proximal stochastic gradient descent algorithms by [DBLJ14], [XZ14], [SSZ12] and [START_REF] Schmidt | Minimizing finite sums with the stochastic average gradient[END_REF] build on the idea of [START_REF] Robbins | A stochastic approximation method[END_REF] and [KW + 52]. Such algorithms are designed to tackle large-scale optimization problems ($n$ is large), where it is assumed implicitly that the $\nabla f_i$ (smooth gradients) have a low computational cost compared to $\nabla f$, and where $h$ is possibly non-differentiable and is dealt with using a backward or projection step based on its proximal operator. The principle of SGD is, at each iteration $t$, to sample uniformly at random an index $i \sim \mathcal{U}[n]$, and to apply an update step of the form $\theta_{t+1} \leftarrow \theta_t - \gamma_t \nabla f_i(\theta_t)$. This step is based on an unbiased but very noisy estimate of the full gradient $\nabla f$, so the choice of the step size $\gamma_t$ is crucial since it has to be decaying to curb the variance introduced by random sampling (except for averaged SGD in some particular cases, see [START_REF] Bach | Non-strongly-convex smooth stochastic approximation with convergence rate o (1/n)[END_REF]).
Comparison with previous work

SGD techniques. Recent proximal stochastic gradient descent algorithms by [DBLJ14], [XZ14], [SSZ12] and [START_REF] Schmidt | Minimizing finite sums with the stochastic average gradient[END_REF] build on the idea of [START_REF] Robbins | A stochastic approximation method[END_REF] and [KW+52]. Such algorithms are designed to tackle large-scale optimization problems (n is large), where it is implicitly assumed that the ∇f_i (smooth gradients) have a low computational cost compared to ∇f, and where h is possibly non-differentiable and is handled through a backward or projection step using its proximal operator. The principle of SGD is, at each iteration t, to sample uniformly at random an index i ∼ U[n] and to apply an update step of the form θ_{t+1} ← θ_t − γ_t ∇f_i(θ_t). This step is based on an unbiased but very noisy estimate of the full gradient ∇f, so the choice of the step size γ_t is crucial: it has to decay in order to curb the variance introduced by random sampling (except for averaged SGD in some particular cases, see [START_REF] Bach | Non-strongly-convex smooth stochastic approximation with convergence rate o (1/n)[END_REF]). This tends to slow down convergence to a minimum θ⋆ ∈ argmin_{θ∈R^d} F(θ). Gradually reducing the variance of ∇f_i, for i ∼ U[n], as an approximation of ∇f allows one to use larger (even constant) step sizes and to obtain faster convergence rates. This is the underlying idea of two recent methods, SAGA and SVRG, respectively introduced in [DBLJ14] and [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF], that use updates of the form
\[
w_{t+1} \leftarrow \theta_t - \gamma \Big( \nabla f_i(\theta_t) - \nabla f_i(\bar\theta) + \frac{1}{n} \sum_{j=1}^n \nabla f_j(\bar\theta) \Big), \qquad \theta_{t+1} \leftarrow \mathrm{prox}_{\gamma h}(w_{t+1}).
\]
In [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF], θ̄ is fully updated after a certain number of iterations, called phases, whereas in [START_REF] Defazio | Saga: A fast incremental gradient method with support for non-strongly convex composite objectives[END_REF], θ̄ is partially updated after each iteration. Both methods use stochastic gradient descent steps, with variance reduction obtained via the centered control variate −∇f_i(θ̄) + (1/n) Σ_{j=1}^n ∇f_j(θ̄), and achieve linear convergence when F is strongly convex, namely E F(θ_k) − min_{x∈R^d} F(x) = O(ρ^k) with ρ < 1, which makes these algorithms state-of-the-art for many convex optimization problems. Some variants of SVRG [XZ14] also approximate the full gradient (1/n) Σ_{j=1}^n ∇f_j(θ̄) using mini-batches to decrease the computing time of each phase, see [LJ17, HAV+15].
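As an illustration, here is a minimal sketch (Python/NumPy, illustrative names only, not the authors' implementation) of one Prox-SVRG phase; in the doubly stochastic algorithm described below, the exact call grad_fi(theta, i) inside the inner loop is what gets replaced by an MCMC approximation.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding), as one example of prox."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_svrg_phase(theta_bar, grad_fi, prox, n, m, gamma, rng):
    """One phase of Prox-SVRG around the anchor theta_bar.

    grad_fi(theta, i) returns the gradient of f_i at theta;
    prox(w, gamma) applies the proximal operator of gamma * h.
    """
    grads_bar = [grad_fi(theta_bar, i) for i in range(n)]   # gradients at the anchor
    full_grad = np.mean(grads_bar, axis=0)                  # (1/n) sum_j grad f_j(theta_bar)
    theta = theta_bar.copy()
    iterates = []
    for _ in range(m):
        i = rng.integers(n)
        d = grad_fi(theta, i) - grads_bar[i] + full_grad    # variance-reduced direction
        theta = prox(theta - gamma * d, gamma)
        iterates.append(theta.copy())
    return np.mean(iterates, axis=0)                        # new anchor for the next phase

# hypothetical usage with an l1 penalty of strength lam:
#   prox = lambda w, g: soft_threshold(w, g * lam)
#   theta_bar = prox_svrg_phase(theta_bar, grad_fi, prox, n, m, gamma, np.random.default_rng(0))
```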
Numerically hard gradients. A very different, nevertheless classical, "trick" to reduce the complexity of the gradient computation is to express it, whenever the statistical problem allows it, as the expectation, with respect to a non-uniform distribution π_θ, of a random variable G(θ), i.e., ∇f(θ) = E_{G(θ)∼π_θ}[G(θ)]. Optimization problems with such a gradient have generated an extensive literature since the first works by [START_REF] Robbins | A stochastic approximation method[END_REF] and [KW+52]. Some algorithms are designed to construct stochastic approximations of the sub-gradient of f + h, see [NJLS09, JN+11, Lan12, DHS11]. Others are based on proximal operators, so as to better exploit the smoothness of f and the properties of h, see [START_REF] Hu | Accelerated gradient methods for stochastic optimization and online learning[END_REF], [START_REF] Xiao | Dual averaging methods for regularized stochastic learning and online optimization[END_REF], [START_REF] Atchadé | On perturbed proximal gradient algorithms[END_REF]. In this paper, we focus on the second kind of algorithms. Indeed, our approach is closest to the one developed in [START_REF] Atchadé | On perturbed proximal gradient algorithms[END_REF], although, as opposed to ours, the algorithm developed in this latter work is based on proximal full gradient steps (it is not doubly stochastic as ours is) and does not guarantee linear convergence.

Contrastive divergence. The idea of approximating the gradient using MCMC already appeared in the literature on undirected graphical models under the name of contrastive divergence, see [START_REF] Murphy | Machine learning: a probabilistic perspective[END_REF], [START_REF] Hinton | Training products of experts by minimizing contrastive divergence[END_REF], [START_REF] Carreira-Perpinan | On contrastive divergence learning[END_REF]. Indeed, for this class of models, the gradient of the log-likelihood ∇f(θ) can be written as the difference of two expectations: one, which is tractable, taken with respect to the discrete distribution of the data X, and the other, which is intractable, taken with respect to the model-dependent distribution p(·, θ). The idea of contrastive divergence relies on approximating the intractable expectation using MCMC, with few iterations of the chain. However, in the framework of the Cox model, and also of Conditional Random Fields (see Section 6 below), it is the gradient ∇f_i(θ) that writes as a time-consuming expectation, see Equation (3).

Our setting. The setting of our paper is original in the sense that it combines both previous settings, namely stochastic gradient descent and MCMC. As in the stochastic gradient setting, the gradient can be expressed as the sum of n components, where n can be very large. However, since these components are time-consuming to compute directly, following the expectation-based gradient computation setting, they are expressed as averaged values of some random variables. More precisely, the gradient ∇f_i(θ) is replaced by an approximation ∇̂f_i(θ) obtained by an MCMC algorithm. Our algorithm is, to the best of our knowledge, the first one to combine two stochastic approximations in this way, hence the name doubly stochastic, which allows it to deal both with possibly large values of n and with the inner complexity of each gradient ∇f_i computation. The idea of mixing SGD and MCMC has also been raised recently in the very different setting of implicit stochastic gradient descent, see [START_REF] Toulis | Implicit stochastic gradient descent for principled estimation with large datasets[END_REF]. Note also that in our approach the two stochastic approximations to the gradient both use random training points, while the doubly stochastic approach from [DXH+14] performs two stochastic approximations to the gradient using random training points and random features for kernel methods.

A doubly stochastic proximal gradient descent algorithm

Our algorithm 2SVRG is built upon the algorithm SVRG via an approximation function ApproxMCMC. We first present the meta-algorithm without specifying the approximation function, and then provide two examples for ApproxMCMC.

2SVRG: a meta-algorithm

Following the ideas presented in the previous section, we design a doubly stochastic proximal gradient descent algorithm (2SVRG) by combining a variance reduction technique for SGD given by Prox-SVRG [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF] and a Monte-Carlo Markov-Chain algorithm used to obtain an approximation of the gradient ∇f_j(θ) at each step. Thus, in the considered setting the full gradient writes
\[
\nabla f(\theta) = \mathbb{E}_{i \sim U}\big[\nabla f_i(\theta)\big] = \mathbb{E}_{i \sim U}\, \mathbb{E}_{G_i(\theta) \sim \pi^i_\theta}\big[G_i(\theta)\big],
\]
where U is the uniform distribution on {1, ..., n}, so our algorithm contains two levels of stochastic approximation: uniform sampling of i (the variance-reduced SGD part) for the first expectation, and an approximation of the second expectation, with respect to π^i_θ, by means of Monte-Carlo simulation. The 2SVRG algorithm is described in Algorithm 6.
Algorithm 6: Doubly stochastic proximal gradient descent (2SVRG)
1: Require: number of phases K ≥ 1, phase length m ≥ 1, step size γ > 0, MCMC numbers of iterations per phase (N_k)_{k=1}^K, starting point θ_0 ∈ R^d
2: Initialize: θ̄ ← θ_0 and compute ∇f_i(θ̄) for i = 1, ..., n
3: for k = 1 to K do
4:   for t = 0 to m − 1 do
5:     Pick i ∼ U[n]
6:     ∇̂f_i(θ_t) ← ApproxMCMC(i, θ_t, N_k)
7:     d_t ← ∇̂f_i(θ_t) − ∇f_i(θ̄) + (1/n) Σ_{j=1}^n ∇f_j(θ̄)
8:     ω_{t+1} ← θ_t − γ d_t
9:     θ_{t+1} ← prox_{γh}(ω_{t+1})
10:   end for
11:   Update θ̄ ← (1/m) Σ_{t=1}^m θ_t, θ_0 ← θ̄, θ̄_k ← θ̄
12:   Compute ∇f_i(θ̄) for i = 1, ..., n
13: end for
14: Return: θ̄_K

Following Prox-SVRG by [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF], this algorithm decomposes into phases: iterations within a phase apply variance-reduced stochastic gradient steps, with a backward proximal step (lines 8 and 9 in Algorithm 6). At the end of a phase, a full gradient is computed (lines 11 and 12) and used in the next phase for variance reduction. Within a phase, each inner iteration samples uniformly at random an index i (line 5) and obtains an approximation of the gradient ∇f_i at the previous iterate θ_t by applying N_k iterations of a Monte-Carlo Markov-Chain (MCMC) algorithm. Intuitively, the sequence N_k should be increasing with the phase number k, as more and more precision is needed as the iterations go on (this is confirmed in Section 4). The important point of our algorithm resides precisely in this aspect: very noisy estimates can be used in the early phases of the algorithm, hence allowing an overall low complexity compared to a full gradient approach.

Choice of ApproxMCMC

We now focus on two implementations of the function ApproxMCMC, based on two well-known MCMC algorithms: Metropolis-Hastings and Importance Sampling.

Independent Metropolis-Hastings

When the π^i_θ are Gibbs probability measures, as for the previously described Cox partial log-likelihood (but for other models as well, such as Conditional Random Fields, see [LMP+01]), one can apply Independent Metropolis-Hastings (IMH), see Algorithm 7 below, to obtain approximations ∇̂f_i of the gradients. In this case the produced chain is geometrically uniformly ergodic, see [START_REF] Robert | Monte carlo methods[END_REF], and therefore meets the general assumptions required in our results (see Proposition 1 below). The IMH algorithm uses a proposal distribution Q which is independent of the current state j_l of the Markov chain. In the case of the Cox partial log-likelihood, at iteration t of phase k of Algorithm 6, we set π = π^i_{θ_t} and Q to be the uniform distribution over the set R_i.

Algorithm 7: Independent Metropolis-Hastings (IMH) estimator (for the Cox model)
Require: proposal distribution Q = U{R_i}, starting point j_0 ∈ R_i, stationary distribution π = π^i_{θ_t}
for l = 0, ..., N_k − 1 do
  1. Generate: j' ∼ Q.
  2. Update: α = min( π(j') Q(j_l) / (π(j_l) Q(j')), 1 ) = min( exp((x_{j'} − x_{j_l})^⊤ θ_t), 1 ).
  3. Take: j_{l+1} = j' with probability α, and j_{l+1} = j_l otherwise.
end for
Return: −x_i + (1/N_k) Σ_{l=1}^{N_k} x_{j_l}
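A minimal sketch of Algorithm 7 with the uniform proposal (Python/NumPy, illustrative names, not the authors' implementation) could read as follows. Because the proposal is uniform over R_i, the acceptance ratio reduces to exp((x_{j'} − x_{j_l})^⊤θ), so each MCMC iteration costs a single inner product.

```python
import numpy as np

def imh_cox_gradient(theta, X, risk_set, i, n_iters, rng):
    """Independent Metropolis-Hastings estimate of grad f_i(theta) for the Cox model.

    risk_set is the array of indices of R_i; the proposal Q is uniform over R_i and
    the target is pi_theta^i(j) proportional to exp(x_j^T theta) for j in R_i.
    """
    j_curr = rng.choice(risk_set)                 # starting state j_0
    acc = np.zeros_like(theta)
    for _ in range(n_iters):
        j_prop = rng.choice(risk_set)             # uniform, state-independent proposal
        alpha = min(np.exp((X[j_prop] - X[j_curr]) @ theta), 1.0)
        if rng.random() < alpha:
            j_curr = j_prop                       # accept the proposed index
        acc += X[j_curr]                          # accumulate x_{j_l}
    return -X[i] + acc / n_iters
```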
We implemented two versions of Algorithm 6 with IMH: one with a uniform proposal Q, the other one with an adaptive proposal Q̃. When we want to approximate ∇f_i(θ), we can consider the adaptive proposal Q̃ = π^i_θ̄, where θ̄ is the iterate computed at the end of the previous phase, see line 11 of Algorithm 6. Since we compute the full gradient only once per phase, the probabilities π^i_θ̄(j) are computed at the same time, which means that the use of an adaptive proposal adds no computational effort. Moreover, the theoretical guarantees given in Section 4 make no difference between the two aforementioned versions, but a strong difference is observed in practice.

Importance Sampling

The choice of the adaptive proposal above reduces the variance of the estimator given by ApproxMCMC. The idea of sampling with Q̃ = π^i_θ̄ can be used in an Importance Sampling estimator as well:
\[
\nabla f_i(\theta) = \mathbb{E}_{G_i(\theta) \sim \pi^i_\theta}\big[G_i(\theta)\big]
= \mathbb{E}_{G_i(\theta) \sim \widetilde Q}\Big[ G_i(\theta)\, \frac{\pi^i_\theta(G_i(\theta))}{\widetilde Q(G_i(\theta))} \Big].
\]
Since the ratio π^i_θ(G_i(θ))/Q̃(G_i(θ)) still contains an expensive term to compute, we can divide the expression above by E_{Q̃}[π^i_θ(G_i(θ))/Q̃(G_i(θ))] = 1 and approximate the resulting quantity. This trick provides an estimator called the Normalized Importance Sampling estimator, which writes as follows in the case of the Cox partial likelihood:
\[
\widehat J_N
= \frac{\sum_{k=1}^N (x_{j_k} - x_i)\, \pi^i_\theta(j_k)/\widetilde Q(j_k)}{\sum_{k=1}^N \pi^i_\theta(j_k)/\widetilde Q(j_k)}
= -x_i + \sum_{k=1}^N \frac{\exp((\theta - \bar\theta)^\top x_{j_k})}{\sum_{l=1}^N \exp((\theta - \bar\theta)^\top x_{j_l})}\, x_{j_k},
\qquad \text{with } j_k \sim \widetilde Q.
\]

Algorithm 8: Normalized Importance Sampling (NIS) estimator of ∇f_i(θ) (for the Cox model)
Require: proposal distribution Q̃ = π^i_θ̄, stationary distribution π^i_θ, V = 0 ∈ R^d, S = 0 ∈ R
for l = 1, ..., N_k do
  1. Generate: j_l ∼ Q̃(·).
  2. Update: V ← V + exp((θ − θ̄)^⊤ x_{j_l}) x_{j_l}.
  3. Update: S ← S + exp((θ − θ̄)^⊤ x_{j_l}).
end for
Return: −x_i + V/S

Section 4 below gives theoretical guarantees for Algorithm 6: linear convergence under strong convexity of F is given in Theorem 1, and convergence without strong convexity is given in Theorem 2. This improves on the proximal stochastic gradient method of [AFM17], where the best-case rate is O(1/k²) using the FISTA acceleration scheme (see [START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF]). Numerical illustrations are given in Section 5, where a fair comparison between several state-of-the-art algorithms is proposed.

Theoretical guarantees

Definitions. All the functions f_i and h are proper, convex and lower-semicontinuous on R^d. The norm ‖·‖ stands for the Euclidean norm on R^d. A function f : R^d → R is L-smooth if it is differentiable and its gradient is L-Lipschitz, namely if ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y ∈ R^d. A function f : R^d → R is μ-strongly convex if f(x + y) ≥ f(x) + ∇f(x)^⊤ y + (μ/2)‖y‖² for all x, y ∈ R^d, i.e. if f − (μ/2)‖·‖² is convex. The proximal operator of h : R^d → R is uniquely defined by prox_h(x) = argmin_{y∈R^d} { h(y) + ½‖x − y‖² }.

Notations. We denote by i_t the index randomly picked at the t-th iteration, see line 5 in Algorithm 6. We introduce the error of the MCMC approximation η_t = ∇̂f_{i_t}(θ_{t−1}) − ∇f_{i_t}(θ_{t−1}) and the filtration F_t = σ(θ_0, i_1, θ_1, ..., i_t, θ_t). In order to analyze the descent steps, we need two different expectations: E_t, the expectation with respect to the distribution of the pair (i_t, ∇̂f_{i_t}(θ_{t−1})) conditioned on F_{t−1}, and E, the expectation with respect to all the random iterates (i_t, θ_t) of the algorithm. We also denote θ⋆ = argmin_{θ∈R^d} F(θ).

Assumptions.

Assumption 1. We consider F = f + h where f = (1/n) Σ_{i=1}^n f_i, with each f_i being convex and L_i-smooth, L_i > 0, and h a lower semi-continuous and closed convex function. We denote L = max_{1≤i≤n} L_i.
We assume that there exists B > 0 such that the iterates θ_t satisfy sup_{t≥0} ‖θ_t − θ⋆‖ ≤ B.

Assumption 2. We assume that the bias and the expected squared error of the Monte Carlo estimation can be bounded in the following way:
\[
\| \mathbb{E}_t\, \eta_t \| \le \frac{C_1}{N_k} \quad \text{and} \quad \mathbb{E}_t \|\eta_t\|^2 \le \frac{C_2}{N_k}. \tag{4}
\]

Theorems. The theorems below provide upper bounds on the distance to the minimum, in the strongly convex case (Theorem 1) and in the convex case (Theorem 2).

Theorem 1. Suppose that F = f + h is μ-strongly convex. Consider Algorithm 6, with a phase length m and a step size γ ∈ (0, 1/(16L)) satisfying
\[
\rho = \frac{1}{m \gamma \mu (1 - 8L\gamma)} + \frac{8L\gamma(1 + 1/m)}{1 - 8L\gamma} < 1. \tag{5}
\]
Then, under Assumption 1 and Assumption 2, we have
\[
\mathbb{E}[F(\bar\theta_K)] - F(\theta^\star) \le \rho^K \Big( F(\theta_0) - F(\theta^\star) + \sum_{l=1}^K \frac{D\,\rho^{-l}}{N_l} \Big), \tag{6}
\]
where D = (3γC_2 + BC_1)/(1 − 8Lγ).

In Theorem 1, the choice N_k = k^α ρ^{−k} with α > 1 gives E[F(θ̄_K)] − F(θ⋆) ≤ D' ρ^K, where D' = F(θ_0) − F(θ⋆) + D Σ_{k≥1} k^{−α} and D > 0 is a numerical constant. This entails that 2SVRG achieves a linear rate under strong convexity.

Remark 1 (An important remark). The number N_k of MCMC iterations grows quickly with the phase number k. So, in practice we use a hybrid version of 2SVRG called HSVRG: 2SVRG is used during the first phases (usually 4 or 5 phases in our experiments), and as soon as N_k exceeds n we switch to a mini-batch version of Prox-SVRG (SVRG-MB), see [START_REF] Nitanda | Stochastic proximal gradient descent with acceleration techniques[END_REF]. A precise description of HSVRG is given in Algorithm 9 in Section 5 below. Note that overall linear convergence of HSVRG is still guaranteed, since both 2SVRG and SVRG-MB decrease the objective linearly from one phase to the next.

Theorem 2. Consider Algorithm 6, with a phase length m and a step size γ ∈ (0, 1/(8L(2m+1))). Then, under Assumption 1 and Assumption 2, we have
\[
\mathbb{E}[F(\bar\theta_K)] - F(\theta^\star) \le \frac{D_1}{K} + \frac{D_2}{K} \sum_{k=1}^{K+1} \frac{1}{N_k}, \tag{7}
\]
where D_1 and D_2 depend on the constants of the problem, and where θ̄_K here denotes the average of the iterates θ̄_k over the phases up to K.

In Theorem 2, the choice N_k = k^α with α > 1 gives E[F(θ̄_K)] − F(θ⋆) ≤ D_3/K for a constant D_3 > 0. This result improves on the stochastic proximal gradient algorithm from [START_REF] Atchadé | On perturbed proximal gradient algorithms[END_REF], since it is not necessary to design a weighted average: a simple average reaches the same convergence rate. It also provides a convergence guarantee in the non-strongly convex case, which is not covered in [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF]. Theorems 1 and 2 exhibit a trade-off between the linear convergence of the variance-reduced stochastic gradient algorithm and the MCMC approximation error. The next proposition proves that Algorithm 7 satisfies Assumption 2 under a general assumption on the proposal and the stationary distribution.

Proposition 1. Suppose that there exists M > 0 such that the proposal Q and the stationary distribution π satisfy π(x) ≤ M Q(x) for all x in the support of π. Then the error η_t obtained by Algorithm 7 satisfies Assumption 2.

Remark 2 (Specifics for the Cox partial likelihood).
Note that the assumptions required in Proposition 1 are met for the Cox partial likelihood: in this case, a simple choice is M = n max x2supp(º) º(x), and the Monte Carlo error ¥ t induced by computing the gradient of f i at phase k using Algorithm 7 satisAEes (4) with C 1 = 2 |R i | max j 2R i º i µ t °1 ( j ) C 2 = 36C 2 C 2 1 (1 +C 1 ) max j 2R i kx j k 2 2 , where C 2 is the Rosenthal constant of order 2, see Proposition 12 in [FM + 03]. Numerical experiments We compare several solvers for the minimization of the objective given by an elastic-net penalization of the Cox partial likelihood F (µ) = °`(µ) + ∏ ≥ AEkµk 1 + 1 °AE 2 kµk 2 2 ¥ , where we recall that the partial likelihood `is deAEned in Equation (1) and where ∏ > 0 and AE 2 [0, 1] are tuning parameters. A fair comparison of algorithms. The doubly stochastic nature of the considered algorithms makes it hard to compare them to batch algorithms in terms of iteration number or epoch number (number of full passes over the data), as this is usually done for SGD-based algorithm. Hence, we proceed by plotting the evolution of F ( μ) °F (µ § ) (where µ § 2 argmin u2R d F (u) and μ is the current iterate of a solver) as a function of the number of inner products between a feature vector x i and µ, e ectively computed by each algorithm, to obtain the current iterate μ. This gives a fair way of comparing the e ective complexity of all algorithms. These algorithms however need a good starting point (near the actual minimizer) to achieve convergence (this fact is due to a diagonal approximation of the Hessian matrix, see [START_REF] Hastie | Generalized additive models[END_REF], Chapter 8.). They are therefore tuned to provide good path of solutions while varying by small steps the penalization parameter ∏. Indeed in this case, this starting point is naturally set at the minimizer at the previous value of ∏, when minimizing along a path but cannot be guessed outside of a path. We illustrate this fact on Hybrid SVRG algorithm Since N k exponentially increases, the 2SVRG's complexity is higher than SVRG's original complexity. However, the algorithm 2SVRG is very e cient during the AErst phases: we introduce an hybrid solver that begins with 2SVRG and switches to SVRG with mini-batchs (denoted SVRG-MB). Mini-batching simply consists in replacing single stochastic gradients r f i by an average over a subset B of size n mb uniformly selected at random. This is useful in our case, since we can use a computational trick (recurrence formula) to compute mini-batched gradients. In our experiments, we used n mb = 0.1n or n mb = 0.01n, a constant step-size ∞ designed for each dataset, and switched from 2SVRG to SVRG-MB after K S = 5 phases. We set N k = n k/(K S +2) so that N k never exceeds n. Baselines. We describe in this paragraph the algorithm that we put in competition in our experiments. Algorithm 9 Hybrid SVRG (HSVRG) 1: Require: Number of phases before switching K S ∏ 1, total number of phases K ∏ K S , phase-length m ∏ 1, step-size ∞ > 0, MCMC number of iterations per phase (N k ) K k=1 , starting point µ 0 2 R d 2: Initialize: μ √ µ 0 and compute r f i ( μ) for i = 1, . . . , n 3: for k = 1 to K S do 4: for t = 0 to m °1 do 5: Pick i ª U [n] 6: b r f i (µ t ) √ ApproxMCMC(i , µ t , N k ) 7: d t = b r f i (µ t ) °rf i ( μ) + 1 n P n j =1 r f j ( μ) 8: ! t +1 √ µ t °∞d t 9: µ t +1 √ prox ∞h (! t +1 ) 10: end for 11: Update μ √ 1 m P m t =1 µ t , µ 0 √ μ, μk √ μ 12: Compute r f i ( μ) for i = 1, . . . 
, for t = 0 to m mb °1 = b(m °1)/n mb c do 16: Pick a set of random indices B ª (U [n]) n mb 17: d t = rf B (µ t ) °rf B ( μ) + 1 n P n j =1 r f j ( μ) 18: ! t +1 √ µ t °∞d t 19: µ t +1 √ prox ∞h (! t +1 ) 20: end for 21: Update μ √ 1 m mb P m mb t =1 µ t , µ 0 √ μ, μk √ μ 22: end for 23: Return: μK FISTA This is accelerated proximal gradient from [START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF] with backtracking linesearch. Inner products necessary inside the backtracking are counted as well. L-BFGS-B A state-of-the-art quasi-Newton solver which provides a usually strong baseline for many batch optimization algorithms, see [START_REF] Liu | On the limited memory bfgs method for large scale optimization[END_REF]. We use the original implementation of the algorithm proposed in python's scipy.optimize module. Nondi erentiability of the `1-norm in the elastic-net penalization is dealt with the standard trick of reformulating the problem, using the fact that |a| = a + + a °for a 2 R. HSVRG-UNIF-IMH Q = º • μ. HSVRG-AIS This is Algorithm 9 where ApproxMCMC is done via Algorithm 8, that is Adaptative Importance Sampling. SVRG-MB Mini-Batch Prox-SVRG described in [START_REF] Nitanda | Stochastic proximal gradient descent with acceleration techniques[END_REF], which can be seen as Algorithm 9 (see below) with K S = 0. This is a simply stochastic algorithm, since there is no MCMC 6. Conclusion approximation of the gradients r f i . The question of mini-batch sizing is critical and is adressed in Section 10. We used n mb = 0.1n or n mb = 0.01n in our experiments. The "simply stochastic" counterpart SVRG-MB is way slower than the corresponding doubly stochatic versions, since they rely on many computations of stochastic gradients r f i , which are numerically costly, as explained above. The same settings are used throughout all experiments, some of them being tuned by hand: steps size for the variants of HSVRG are taken as ∞ t = ∞ 0 2 {10 °2, 10 °3, 10 °4} where ∞ 0 depends on the dataset, the phase length m is equal to the number n of failures of each datasets as suggested in [START_REF] Konečn | Mini-batch semi-stochastic gradient descent in the proximal setting[END_REF]. As mentionned above, the doubly stochastic algorithms use di erent verions of ApproxMCMC. Datasets We compare algorithms on the following datasets. The AErst three are standard benchmarks in survival analysis, the fourth one is a large simulated dataset where the number of observations n exceeds the number of features d . This di ers from supervised gene expression data: such a large-scale setting happens for longitudinal clinical trials, medical adverse event monitoring and business data minings tasks. • We generated a Gaussian features matrix X with n = 10, 000 observations and d = 500 predictors, with a Toeplitz covariance and correlation equal to 0.5. The failure times follow a Weibull distribution. See Section 9 for details on simulation in this model. We compare in Figures II.2 and II.3 all algorithms for ridge penalization, namely AE = 0 and ∏ = 1/ p n. Experiences with other values of AE and ∏ are given in Section (including the Lasso penalization for instance). Conclusions. The experiments AErst show that the solvers HSVRG-ADAP-IMH and HSVRG-AIS give better results than HSVRG-UNIF-IMF. 
However, the HSVRG solvers behave particularly well during the AErst phases where the gradients can be noisy -due to a small number of iterations of the MCMC -and still point a decent descent direction. Conclusion We have proposed a doubly stochastic gradient algorithm to extend SGD-like algorithms beyond the empirical risk minimization setting. The algorithm we proposed is the result of two di erent ideas: sampling from uniform distribution to avoid the computation of a large sum, and sampling using MCMC methods to avoid the computation of a more complicated expectation. We have also provided theoretical guarantees of convergence for both the convex and the strongly-convex setting. This doubly stochastic gradient algorithm is very e cient during the early phases. The hybrid version of our algorithm, at the crossing of simply and doubly stochastic gradient algorithms, signiAEcantly outperforms state-of-the-art methods. In a future work, we intend to extend our algorithm to Conditional Random Fields (CRF), where each subfunction's gradient takes the form r f i (µ) = r(°log(p(y i |x i , µ)) = X Y 2Y i e H (X i ,Y ) > µ P Y 0 2Y i e H (X i ,Y 0 ) > µ (H (X i , Y ) °H (X i , Y i )), for a certain function H (see Page 2 in [SBA + 15]). Notice that the Cox negative partial likelihood can be seen as a particular case of CRF by setting We AErst prove Proposition 1 that ensures that Algorithm 7 provides the bounds of Assumption 2. X i = [x j ] j 2R i 2 R d £|R i | , Y i = [ j 2R i ] j 2R i 2 {0, 1} |R i | , H (X , Y ) = X Y and Y i = {[ j =k ] j 2R i : k 2 R i }. Proof. Since there exists M > 0 such that the proposal Q and the stationary distribution º satisfy º(x) ∑ MQ(x), for all x in the support of º, the Theorem 7.8 in [START_REF] Robert | Monte carlo methods[END_REF] states that the Algorithm 7 produces a geometrically ergodic Markov kernel P with ergodicity constants uniformly controlled: kP k (x, •) °ºk T V ∑ 2 µ 1 °1 M ∂ k , (8) where P k is the kernel of the k th iteration of the algorithm and k • k T V is the total variation norm. Since b r f i t (µ t °1) is computed as the mean of the iterates of the Markov chain, a simple computation enables us to bound the bias of the error and Proposition 12 from [FM + 03] gives the upper bound for the expected squared error: kE t ¥ t k ∑ C 1 N k and E t k¥ t k 2 ∑ C 2 N k (9) where C 1 and C 2 are some AEnite constants, and N k the number of iterations of the Markov chain. It can be shown that C 1 = 2M and that C 2 is related to a constant from the Rosenthal's inequality. Á Preliminaries to the proofs of Theorems 1 and 2 In what follows, the key lemmas for the proofs of Theorems 1 and 2 are stated and proved when not directly borrowed from previous articles. Lemma 1. For ¢ t := b r f i t (µ t °1) °rf i t ( μ) + rf ( μ) °rf (µ t °1 ), we have: E t k¢ t k 2 ∑ 8L[F (µ t °1) °F (µ § ) + F ( μ) °F (µ § )] + 3E t k¥ t k 2 . The proof of Lemma 1 uses Lemma 1 in [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF]. Lemma 2. [START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF][START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF] Consider F satisfying Assumption 1. Then, 1 n n X i =1 kr f i (µ) °rf i (µ § )k 2 ∑ 2L[F (µ) °F (µ § )] Proof of Lemma 1. For the sake of simplicity, we now denote d t i = rf i (µ t °1) °rf i ( μ) and d t = r f (µ t °1) °rf ( μ), so that one gets ¢ t = d t i t °d t + ¥ t . 
Then, using the expectation introduced in Section 4, we repeatedly use the identity E t kªk 2 = E t kª °Et ªk 2 + kE t ªk 2 . First with ª = ¢ t (since E t d t i t = d t , one gets E t ª = E t ¥ t ) : E t k¢ t k 2 = E t kd t i t + ¥ t °°d t + E t ¥ t ¢ k 2 + kE t ¥ t k 2 then, successively with ª = d t i t + ¥ t , ª = d t + ¥ t and AEnally ª = ¥ t : E t k¢ t k 2 = E t kd t i t + ¥ t k 2 + kE t ¥ t k 2 °kd t + E t ¥ t k 2 = E t kd t i t + ¥ t k 2 + kE t ¥ t k 2 °°E t kd t + ¥ t k 2 °Et k¥ t °Et ¥ t k 2 ¢ = E t kd t i t + ¥ t k 2 + E t k¥ t k 2 °Et kd t + ¥ t k 2 . Now we remark that E t kd t + ¥ t k 2 ∏ 0, and the identity ka + bk 2 ∑ 2kak 2 + 2kbk 2 gives the majoration E t k¢ t k 2 ∑ 2E t kd t i t k 2 + 3E t k¥ t k 2 . Now rewriting d t i t = rf i t (µ t °1) °rf i t (µ § ) + rf i t (µ § ) °rf i t ( μ) , the same identity leads to E t k¢ t k 2 ∑ 4E t kr f i t (µ t °1) °rf i t (µ § )k 2 + 4E t kr f i t ( μ) °rf i t (µ § )k 2 + 3E t k¥ t k 2 . The desired result follows applying twice Lemma 2. Á When F is µ-strongly convex, the next Lemma (Lemma 3 in [START_REF] Xiao | A proximal stochastic gradient method with progressive variance reduction[END_REF]) provides a key lower bound. Lemma 3. [XZ14] Consider F = f + h satifying Assumption 1, where f is L f -smooth, L f > 0, f is µ f -strongly convex, µ f ∏ 0, h is µ h -strongly convex, µ h ∏ 0. For any x, v 2 R d , we deAEne x + = prox ∞h (x °∞v), g = 1 ∞ (x °x+ ), where ∞ 2 (0, 1 L f ]. Then, for any y 2 R d : F (y) ∏ F (x + ) + g > (y °x) + ∞ 2 kg k 2 + µ f 2 ky °xk 2 + µ h 2 ky °x+ k 2 + (v °rf (x)) > (x + °y). (10) Remark 3. Note that in Lemma 3, one can freely choose µ f and µ h (in particular one can take µ f = 0 or µ h = 0), as long as µ f + µ h = µ. The following Lemma comes from [AFM17] (Lemma 14): Lemma 4. [AFM17] Consider F = f + h satifying Assumption 1, where f is L f -smooth, and T ∞ : x 7 ! prox ∞h [x °∞r f (x)] with ∞ 2 (0, 2/L f ]. Let x, y 2 R d , we have: kT ∞ (x) °T∞ (y)k ∑ kx °yk 7. Proofs Proof of Theorem 1 Proof. The proof begins with the study of the distance kµ t °µ § k 2 between the phases k °1 and k. To ease the reading, when staying between these two phases, we write μ instead of μk°1 . Introducing g t = 1 ∞ (µ t °1 °µt ), we may write: kµ t °µ § k 2 = kµ t °1 °∞g t °µ § k 2 = kµ t °1 °µ § k 2 °2∞(g t ) > (µ t °1 °µ § ) + ∞ 2 kg t k 2 . To upper bound the term °2∞(g t ) > (µ t °1 °µ § )+∞ 2 kg t k 2 , we apply the Lemma 3 with x = µ t °1, x + = µ t and y = µ § . With again ¢ t = b r f i t (µ t °1) °rf i t ( μ) + rf ( μ) °rf (µ t °1), we obtain °(g t ) > (µ t °1 °µ § ) + ∞ 2 kg t k 2 ∑ F (µ § ) °F (µ t ) °µf 2 kµ t °1 °µ § k 2 °µh 2 kx t °µ § k 2 °(¢ t ) > (µ t °µ § ), and kµ t °µ § k 2 ∑ kµ t °1 °µ § k 2 + 2∞[F (µ § ) °F (µ t )] °2∞(¢ t ) > (µ t °µ § ). (11) We now concentrate on the quantity °2∞(¢ t ) > (µ t °µ § ). Introducing ∫ t = prox ∞h [µ t °t 1 °∞r f (µ t °1)] 2 F t °1 i.e. the vector obtained from µ t °1 after an exact proximal gradient descent step, we get °2∞(¢ t ) > (µ t °µ § ) = °2∞(¢ t ) > (µ t °∫t ) °2∞(¢ t ) > (∫ t °µ § ) ∑ 2∞k¢ t k • kµ t °∫t k °2∞(¢ t ) > (∫ t °µ § ) where the inequality follows from the Cauchy-Schwartz inequality. Now the non-expansiveness property of proximal operators kprox ∞h (x) °prox ∞h (y)k ∑ kx °yk leads to °2∞(¢ t ) > (µ t °µ § ) ∑ 2∞k¢ t k • k{µ t °1 °∞(¢ t + rf (µ t °1))} °{µ t °1 °∞r f (µ t °1)}k °2∞(¢ t ) > (∫ t °µ § ) ∑ 2∞ 2 k¢ t k 2 °2∞(¢ t ) > (∫ t °µ § ). 
Reminding that ∫ t 2 F t °1, we derive: °2∞E t (¢ t ) > (µ t °µ § ) ∑ 2∞ 2 E t k¢ t k 2 °2∞(E t ¢ t ) > (∫ t °µ § ) ∑ 2∞ 2 E t k¢ t k 2 + 2∞kE t ¢ t k • k∫ t °µ § k, the last inequality comes from the Cauchy-Schwartz inequality. Since µ § is the minimum of F = f + h, it satisAEes µ § = prox ∞h [µ § °∞r f (µ § )] . Thus, the Lemma 4 and the Assumption 1 on the sequence (µ t ) give us k∫ t °µ § k ∑ kµ t °1 °µ § k ∑ B . We also remark that E t ¢ t = E t ¥ t . For all t between phases k °1 and k, we AEnally apply Lemma 1 to obtain: °2∞E t (¢ t ) > (µ t °µ § ) ∑ 16∞ 2 L[F (µ t °1) °F (µ § ) + F ( μ) °F (µ § )] + 6∞ 2 E t k¥ t k 2 + 2∞B kE t ¥ t k. ( 12 ) Taking the expectation E t on inequation (11) and combining with previous inequality leads to E t kµ t °µ § k 2 ∑ kµ t °1 °µ § k 2 + 2∞[F (µ § ) °F (µ t )] + 16∞ 2 L[F (µ t °1) °F (µ § ) + F ( μ) °F (µ § )] + 6∞ 2 E t k¥ t k 2 + 2∞B kE t ¥ t k. With the notation of Algorithm 6, μ = μk°1 = µ 0 . Now, applying iteratively the previous inequality over t = 1, 2, . . . , m and taking the expectation E over i 1 , µ 1 , i 2 , µ 2 ,...,i m , µ m , we obtain: Ekµ m °µ § k 2 + 2∞[EF (µ m ) °F (µ § )] +2∞(1 °8L∞) m°1 X t =1 [EF (µ t ) °F (µ § )] ∑ kµ 0 °µ § k 2 + 16L∞ 2 [F (µ 0 ) °F (µ § ) + m(F ( μ) °F (µ § ))] + 6∞ 2 m X t =1 Ek¥ t k 2 + 2∞B m X t =1 EkE t ¥ t k. Now, by convexity of F and the deAEnition μk = 1 m P m t =1 µ t , we may write F ( μk ) ∑ 1 m P m t =1 F (µ t ). Noticing that 2∞(1 °8L∞) < 2∞ leads to 2∞(1 °8L∞)m[EF ( μk ) °F (µ § )] ∑ k μ °µ § k 2 + 16L∞ 2 (m + 1)[F ( μ) °F (µ § )] + 6∞ 2 m X t =1 Ek¥ t k 2 + 2∞B m X t =1 kE¥ t k. Under the Assumption 2, we have 6∞ 2 m X t =1 Ek¥ t k 2 + 2∞B m X t =1 kE¥ t k ∑ (6∞ 2 C 2 + 2∞BC 1 ) m N k whereas the µ-strong convexity of F implies k μk°1 °µ § k 2 ∑ 2 µ [F ( μk°1 ) °F (µ § )]. This leads to EF ( μk ) °F (µ § ) ∑ Ω ≥ EF ( μk°1 ) °F (µ § ) ¥ + D N k for D and Ω as deAEned in the theorem. Applying the last inequality recursively leads to the result. Á 7. Proofs Proof of Theorem 2 Proof. As at the begining of the proof of Theorem 1, we consider that we stand between phase k °1 and phase k of Algorithm 6 and consequently µ 0 = μk°1 . We use the same arguments until (11), with the di erence that, in this non-strongly convex case, we have µ f = µ h = 0. We obtain for all t between phases 1 and m F (µ t ) °F (µ § ) ∑ 1 2∞ (kµ t °1 °µ § k 2 °kµ t °µ § k 2 ) °(µ t °µ § ) > ¢ t . Summing over t = 1, . . . , ø (for ø ∑ m) leads to ø X t =1 [F (µ t ) °F (µ § )] ∑ 1 2∞ ( ø°1 X t =0 kµ t °µ § k 2 °ø X t =1 kµ t °µ § k 2 ) °ø X t =1 (µ t °µ § ) > ¢ t . (13) We now use Equation ( 13) (with ø = m) and the convexity of k•k 2 with μk = 1 m P m t =1 µ t to write m X t =1 [F (µ t ) °F (µ § )] ∑ 1 2∞ √ m°1 X t =0 kµ t °µ § k 2 °mk μk °µ § k 2 ! °m X t =1 (µ t °µ § ) > ¢ t . (14) Starting from Equation (13) again but now summing over l = 1, . . . , t , we get 1 2∞ (kµ 0 °µ § k 2 °kµ t °µ § k 2 ) °t X l =1 (µ l °µ § ) > ¢ l ∏ t X l =1 [F (µ l ) °F (µ § )] ∏ 0, (15) where the last inequality follows from the deAEnition of µ § . In [START_REF]DXH +[END_REF], we now substitute kµ t °µ § k 2 by the upper bound derived from (15) to write (noticing that µ 0 = μk°1 ): m X t =1 [F (µ t ) °F (µ § )] ∑ m 2∞ (k μk°1 °µ § k 2 °k μk °µ § k 2 ) °m°1 X t =1 t X l =1 (µ l °µ § ) > ¢ l °m X t =1 (µ t °µ § ) > ¢ t ∑ m 2∞ (k μk°1 °µ § k 2 °k μk °µ § k 2 ) °m X t =1 (m + 1 °t )(µ t °µ § ) > ¢ t . 
As in the proof of Theorem 1 (see Equation ( 12)), each term °Et (µ t °µ § ) > ¢ t is upper bounded by 8∞L[F (µ t °1) °F (µ § ) + F ( μk°1 ) °F (µ § )] + 3∞E t ||¥ t || 2 + B ||E t ¥ t ||. Now with m + 1 °t ∑ m and Assumption 2, we obtain: 1 m m X t =1 E[F (µ t ) °F (µ § )] ∑ 1 2∞ (k μk°1 °µ § k 2 °Ek μk °µ § k 2 ) + 8L∞ © m X t =1 [EF (µ t °1) °F (µ § )] + F (µ § ) °E[F (µ m )] + (m + 1)[E[F ( μk°1 )] °F (µ § )] ™ + m 3∞C 2 + BC 1 N k . By deAEnition of ∞, we have 8Lm∞ < 1, and we can use the convexity of F to lower bound the left hand side. With the inequality E[F (µ m )] °F (µ § ) ∏ 0, one has: (1 °8L∞m) h E[F ( μk )] °F (µ § ) i ∑ 1 2∞ ≥ k μk°1 °µ § k 2 °Ek μk °µ § k 2 ¥ + 8L∞(m + 1) h E[F ( μk°1 )] °F (µ § ) i + m 3∞C 2 + BC 1 N k We now take the expectation E on all iterates of the algorithm i.e. on the iterates i 1 , µ 1 , i 2 , µ 2 ,...,i m , µ m from the AErst phase. Introduce the notations A k = E[F ( μk )] °F (µ § ) and a = (8L∞(m + 1))/(1 °50 8. Supplementary experiments 8Lm∞) < 1 , last inequality leads to: A k °a A k°1 ∑ 1 2∞(1 °8Lm∞) ≥ Ek μk°1 °µ § k 2 °Ek μk °µ § k 2 ¥ + D N k , where D is deAEned in the theorem. Summing over the phases k = 1, 2, . . . , K + 1 and lower bounding A K +1 with 0, we obtain: (1 °a) K X k=1 A k ∑ a A 0 + 1 2∞(1 °8Lm∞) k μ0 °µ § k 2 + K +1 X k=1 D N k The last argument is the use of the convexity of F . Remark the explicit forms of the constants in the theorem: D 1 = a 1 °a A 0 + 1 1 °a k μ0 °µ § k 2 2∞(1 °8Lm∞) and D 2 = D 1°a . Á Supplementary experiments We have tested all algorithms with other settings for the penalization. Namely, we considered: High lasso. We take AE = 1 and ∏ = 1/ Simulation of data With Cox model, the hazard ratio for the failure time T i of the i th patient takes the form: ∏ i (t ) = ∏ 0 (t ) exp(x > i µ), where ∏ 0 (t ) is a baseline hazard ratio, and x i 2 R d the covariates of the i th patient. We AErst simulate the feature matrix X 2 R n£d as a Gaussian vector with a Toepliz covariance, where the correlation between features j and j 0 is equal to Ω | j °j 0 | , for some Ω 2 (0, 1). We want now to simulate the observed time y i that corresponds to x i . We denote the cumulative hazard function §(t ) = R t 0 ∏(s)d s. Using the deAEnition ∏(t ) = f (t ) 1°F (t ) , we know that §(t ) = °log(1 °F (t )) , where f is the p.d.f. and F is the c.d.f. of T . It is easily seen that §(T ) has distribution Exp(1) (Exponential with intensity equal to 1): since § is an increasing function, we have P( §(T ) ∏ t ) = P(T ∏ § °1(t )) = Z 1 § °1(t ) f (s)d s = 1 °F ( § °1(t )) = exp(° §( § °1(t ))) = exp(°t ), so that simulating failure times is simply achieved by using T i = § °1(E i ) where E i ª Exp(1). To compute §, we should have a parametric form for ∏ 0 . We assume that T follows the Weibull distribution W (1, ∫) (when x i = 0). This choice is motivated by the following facts: • Its cumulative hazard function is easy to invert. Indeed the hazard ratio is given by ∏ 0 (t ) = ∫t ∫°1 e °t ∫ 1°(1°e °t ∫ ) = ∫t ∫°1 , so that § °1(y ) = ≥ y exp(x > i µ) ¥ 1/∫ . • It enables two di erent trends -increasing or decreasing -for the baseline hazard ratio that correspond to two typical behaviours in the medical AEeld. 
- decreasing: after taking a treatment, the time before a side effect's appearance
\[
\ldots\, u_i \le k \Big) = \prod_{i=1}^{n_{mb}} P(u_i \le k) = \Big( \frac{\lfloor k \rfloor}{n} \Big)^{n_{mb}}, \qquad
P\Big( \max_{i \in B} |R_i| \le ck \Big) = \Big( \frac{\lfloor k \rfloor}{n} \Big)^{n_{mb}}, \qquad
P\Big( \max_{i \in B} |R_i| \ge a \Big) = 1 - \Big( \frac{\lfloor a/c \rfloor}{n} \Big)^{n_{mb}}, \quad \text{for } a < n_{pat}.
\]
The third equation leads us to consider 1 ≪ n_mb ≪ n to prevent both max

Uncover Hawkes causality without parametrization

1 Introduction

In many applications, one needs to deal with data containing a very large number of irregular, timestamped events that are recorded in continuous time. These events can reflect, for instance, the activity of users on a social network, see [SAD+16], the high-frequency variations of signals in finance, see [START_REF] Bacry | Hawkes processes in finance[END_REF], the earthquakes and aftershocks in geophysics, see [START_REF] Ogata | Space-time point-process models for earthquake occurrences[END_REF], the crime activity, see [MSB+11], or the position of genes in genomics, see [START_REF] Reynaud-Bouret | Adaptive estimation for hawkes processes; application to genome analysis[END_REF]. The succession of the precise timestamps carries a great deal of information about the dynamics of the underlying systems. In this context, models based on multidimensional counting processes play a paramount role. Within this framework, an important task is to recover the mutual influence of the nodes (i.e., the different components of the counting process) by leveraging their timestamp patterns, see, for instance, [BM16, LV14, LM11, ZZS13, GRLS13, FWR+15, XFZ16]. Consider a set of nodes I = {1, ..., d}. For each i ∈ I, we observe a set Z^i of events, where each τ ∈ Z^i labels the occurrence time of an event related to the activity of i. The events of all nodes can be represented as a vector of counting processes N_t = [N^1_t ⋯ N^d_t]^⊤, where N^i_t counts the number of events of node i up to time t ∈ R_+, namely N^i_t = Σ_{τ∈Z^i} 1_{τ≤t}. The vector of stochastic intensities λ_t = [λ^1_t ⋯ λ^d_t]^⊤ associated with the multivariate counting process N_t is defined as
\[
\lambda^i_t = \lim_{dt \to 0} \frac{\mathbb{P}(N^i_{t+dt} - N^i_t = 1 \,|\, \mathcal{F}_t)}{dt}
\]
for i ∈ I, where the filtration F_t encodes the information available up to time t. The coordinate λ^i_t gives the expected instantaneous rate of event occurrence at time t for node i. The vector λ_t characterizes the distribution of N_t, see [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF], and patterns in the event time-series can be captured by structuring these intensities. The Hawkes process, introduced in [START_REF] Hawkes | Point spectra of some mutually exciting point processes[END_REF], corresponds to an autoregressive structure of the intensities, designed to capture self-excitation and cross-excitation of nodes, a phenomenon typically observed, for instance, in social networks, see [START_REF] Crane | Robust dynamic classes revealed by measuring the response function of a social system[END_REF]. Namely, N_t is called a Hawkes point process if the stochastic intensities can be written as
\[
\lambda^i_t = \mu^i + \sum_{j=1}^d \int_0^t \phi^{ij}(t - t')\, dN^j_{t'},
\]
where μ^i ∈ R_+ is an exogenous intensity and the φ^{ij} are positive, integrable and causal (with support in R_+) functions, called kernels, encoding the impact of an action by node j on the activity of node i. Note that when all kernels are zero, the process is a simple homogeneous multivariate Poisson process.
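For concreteness, here is a minimal sketch (Python/NumPy, illustrative names) that evaluates the intensity above when the kernels are taken to be exponential, φ^{ij}(s) = α_{ij} β e^{−βs}. This parametric choice is used purely for illustration; the method presented below precisely avoids committing to such a shape.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Evaluate lambda^i_t for all nodes i of a Hawkes process with
    exponential kernels phi_ij(s) = alpha[i, j] * beta * exp(-beta * s).

    events[j] is the sorted array of event times of node j and mu the vector
    of exogenous baselines; all names here are illustrative.
    """
    d = len(mu)
    lam = np.array(mu, dtype=float)
    for j in range(d):
        past = events[j][events[j] < t]                  # events of node j before t
        if past.size:
            excitation = beta * np.exp(-beta * (t - past)).sum()
            lam += alpha[:, j] * excitation              # contribution of node j to every node i
    return lam

# toy usage with hypothetical numbers:
# events = [np.array([0.5, 1.2]), np.array([0.8])]
# lam = hawkes_intensity(2.0, events, mu=[0.1, 0.2],
#                        alpha=np.array([[0.1, 0.3], [0.0, 0.2]]), beta=1.0)
```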
Most of the literature uses a parametric approach for estimating the kernels. Without a doubt, the most popular parametrization is the exponential kernel φ^{ij}(t) = α_{ij} β_{ij} e^{−β_{ij} t}, because it considerably simplifies the inference algorithm (e.g., the complexity needed for computing the likelihood is much smaller). When d is large, in order to reduce the number of parameters, some authors choose to arbitrarily share the kernel shapes across the different nodes. Thus, for instance, in [YZ13, ZZS13, FWR+15], one chooses φ^{ij}(t) = α_{ij} h(t), where α_{ij} ∈ R_+ quantifies the intensity of the influence of j on i and h(t) is a (normalized) function that characterizes the time profile of this influence and that is shared by all pairs of nodes (i, j) (most often, it is chosen to be either exponential, h(t) = βe^{−βt}, or power law, h(t) = βt^{−(β+1)}). Both approaches are, most of the time, highly unrealistic. On the one hand, there is a priori no reason for assuming that the time profile of the influence of a node j on a node i does not depend on the pair (i, j). On the other hand, assuming an exponential or a power-law shape for a kernel arbitrarily imposes an event impact that is always instantly maximal and can only decrease with time, while in practice there may exist a latency between an event and its maximal impact. In order to have more flexibility on the shape of the kernels, nonparametric estimation can be considered. Expectation-Maximization algorithms can be found in [START_REF] Lewis | A nonparametric em algorithm for multiscale hawkes processes[END_REF] (for d = 1) or in [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF] (for d > 1). An alternative method is proposed in [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF], where the nonparametric estimation is formulated as the numerical solving of a Wiener-Hopf equation. Another nonparametric strategy considers a decomposition of the kernels on a dictionary of functions h_1, ..., h_K, namely φ^{ij}(t) = Σ_{k=1}^K a^{ij}_k h_k(t), where the coefficients a^{ij}_k are estimated, see [HRBR+15, LV14] and [START_REF] Xu | Learning granger causality for hawkes processes[END_REF], where a group-lasso penalty is used to induce a sparsity pattern on the coefficients a^{ij}_k that is shared across k = 1, ..., K. Such methods are heavy when d is large, since they rely on likelihood maximization or least-squares minimization within an over-parametrized space in order to gain flexibility on the shape of the kernels. This is problematic, since the original motivation for the use of Hawkes processes is to estimate the influence and causality of nodes, the knowledge of the full parametrization of the model being of little interest for causality purposes. Our paper solves this problem with a different and more direct approach. Instead of trying to estimate the kernels φ^{ij}, we focus on the direct estimation of their integrals. Namely, we want to estimate the matrix G = [g^{ij}], where
\[
g^{ij} = \int_0^{+\infty} \phi^{ij}(u)\, du \;\ge\; 0 \qquad \text{for } 1 \le i, j \le d. \tag{1}
\]
As can be seen from the cluster representation of Hawkes processes ([HO74]), this integral represents the mean total number of events of type i directly triggered by an event of type j, and therefore encodes a notion of causality.
Actually, as detailed below (see Section 2.1), this integral can be related to Granger causality ([START_REF] Granger | Investigating causal relations by econometric models and cross-spectral methods[END_REF]). The main idea of the method developed in this paper is to estimate the matrix G directly, using a cumulant (or moment) matching method. Apart from the mean, we shall use second and third-order cumulants, which correspond respectively to centered second and third-order moments. We first compute an estimation M̂ of these centered moments M(G) (they are uniquely defined by G). Then, we look for a matrix Ĝ that minimizes the L² error ‖M(Ĝ) − M̂‖². Thus the integral matrix Ĝ is estimated directly, while making hardly any assumption on the shape of the involved kernels. As will be shown, this approach turns out to be particularly robust to the kernel shapes, which is not the case of previous Hawkes-based approaches that aim at causality recovery. We call this method NPHC (Non Parametric Hawkes Cumulant), since our approach is of nonparametric nature. We provide a theoretical analysis that proves the consistency of the NPHC estimator. Our proof is based on ideas from the theory of the Generalized Method of Moments (GMM), but requires an original technical trick, since our setting strongly departs from standard parametric statistics with i.i.d. observations. Note that moment and cumulant matching techniques proved particularly powerful for latent topic models, in particular Latent Dirichlet Allocation, see [START_REF] Podosinnikova | Rethinking lda: moment matching for discrete ica[END_REF]. A small set of previous works, namely [START_REF] Fonseca | Hawkes process: Fast calibration, application to trade clustering, and diffusive limit[END_REF] and [START_REF] Aït-Sahalia | Modeling financial contagion using mutually exciting jump processes[END_REF], already used the method of moments with Hawkes processes, but only in a parametric setting. Our work is the first to consider such an approach in a nonparametric counting-process framework. The paper is organized as follows: in Section 2, we provide the background on the integrated kernels and the integrated cumulants of the Hawkes process. We then introduce the method, investigate its complexity and explain the consistency result we prove. In Section 3, we estimate the matrix of Hawkes kernels' integrals on various simulated datasets and on real datasets, namely the MemeTracker database and financial order book data. We then provide in Section 4 the technical details skipped in the previous parts and the proof of our consistency result. Section 5 contains concluding remarks.

NPHC: The Non Parametric Hawkes Cumulant method

In this Section, we provide the background on integrals of Hawkes kernels and integrals of Hawkes cumulants. We then explain how the NPHC method enables estimating G.

Branching structure and Granger causality

From the definition of the Hawkes process as a Poisson cluster process, see [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF] or [START_REF] Hawkes | A cluster process representation of a selfexciting process[END_REF], g^{ij} can be simply interpreted as the average total number of events of node i whose direct ancestor is a given event of node j (by direct we mean that interactions mediated by any other intermediate event are not counted). In that respect, G not only describes the mutual influences between nodes, but also quantifies their direct causal relationships.
Namely, introducing the counting function N^{i←j}_t that counts the number of events of i whose direct ancestor is an event of j, we know from [START_REF] Bacry | Hawkes processes in finance[END_REF] that
\[
\mathbb{E}[dN^{i \leftarrow j}_t] = g^{ij}\, \mathbb{E}[dN^j_t] = g^{ij}\, \Lambda^j\, dt, \tag{2}
\]
where we introduced Λ^i as the intensity expectation, namely satisfying E[dN^i_t] = Λ^i dt. Note that Λ^i does not depend on time, by stationarity of N_t, which is known to hold under the stability condition ‖G‖ < 1, where ‖G‖ stands for the spectral norm of G. In particular, this condition implies the non-singularity of I_d − G. Since the question of real causality is too complex in general, most econometricians agreed on the simpler definition of Granger causality [START_REF] Granger | Investigating causal relations by econometric models and cross-spectral methods[END_REF]. Its mathematical formulation is a statistical hypothesis test: X causes Y in the sense of Granger if forecasting Y is improved by taking the past of X into account. For a Hawkes process, node j does not Granger-cause node i whenever the kernel vanishes, φ^{ij}(u) = 0 for u ∈ R_+. Since the kernels take positive values, the latter condition is equivalent to ∫_0^∞ φ^{ij}(u) du = 0. In the following, we refer to learning the kernels' integrals as uncovering causality, since each integral encodes the notion of Granger causality and is also linked to the number of events directly caused from one node to another, as described above in Eq. (2).

Integrated cumulants of the Hawkes process

A general formula for the integrals of the cumulants of a multivariate Hawkes process is provided in [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF]. As explained below, for the purpose of our method, we only need to consider cumulants up to the third order. Given 1 ≤ i, j, k ≤ d, the first three integrated cumulants of the Hawkes process can be defined as follows, thanks to stationarity:
\[
\Lambda^i\, dt = \mathbb{E}(dN^i_t) \tag{3}
\]
\[
C^{ij}\, dt = \int_{\tau \in \mathbb{R}} \Big( \mathbb{E}(dN^i_t\, dN^j_{t+\tau}) - \mathbb{E}(dN^i_t)\, \mathbb{E}(dN^j_{t+\tau}) \Big) \tag{4}
\]
\[
K^{ijk}\, dt = \iint_{\tau, \tau' \in \mathbb{R}^2} \Big( \mathbb{E}(dN^i_t\, dN^j_{t+\tau}\, dN^k_{t+\tau'}) + 2\, \mathbb{E}(dN^i_t)\, \mathbb{E}(dN^j_{t+\tau})\, \mathbb{E}(dN^k_{t+\tau'}) - \mathbb{E}(dN^i_t\, dN^j_{t+\tau})\, \mathbb{E}(dN^k_{t+\tau'}) - \mathbb{E}(dN^i_t\, dN^k_{t+\tau'})\, \mathbb{E}(dN^j_{t+\tau}) - \mathbb{E}(dN^j_{t+\tau}\, dN^k_{t+\tau'})\, \mathbb{E}(dN^i_t) \Big), \tag{5}
\]
where Eq. (3) is the mean intensity of the Hawkes process, the second-order cumulant (4) refers to the integrated covariance density matrix, and the third-order cumulant (5) measures the skewness of N_t. Using the martingale representation from [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF] or the Poisson cluster process representation from [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF], one can obtain an explicit relationship between these integrated cumulants and the matrix G. If one sets
\[
R = (I_d - G)^{-1}, \tag{6}
\]
straightforward computations (see Section 4) lead to the following identities:
\[
\Lambda^i = \sum_{m=1}^d R^{im} \mu^m \tag{7}
\]
\[
C^{ij} = \sum_{m=1}^d \Lambda^m R^{im} R^{jm} \tag{8}
\]
\[
K^{ijk} = \sum_{m=1}^d \big( R^{im} R^{jm} C^{km} + R^{im} C^{jm} R^{km} + C^{im} R^{jm} R^{km} - 2 \Lambda^m R^{im} R^{jm} R^{km} \big). \tag{9}
\]
Equations (8) and (9) are proved in Section 4. Our strategy is to use a convenient subset of Eqs. (3), (4) and (5) to define M, while we use Eqs. (7), (8) and (9) in order to construct the operator that maps a candidate matrix R to the corresponding cumulants M(R). By looking for R̂ that minimizes R ↦ ‖M(R) − M̂‖², we obtain, as illustrated below, good recovery of the ground truth matrix G using Equation (6).
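As a complement, here is a minimal sketch (Python/NumPy, illustrative names, not the authors' implementation) of the map from a candidate matrix R and the mean intensities to the model cumulants of Eqs. (8)-(9), restricted to the contraction K^{iij} that the method actually uses.

```python
import numpy as np

def cumulants_from_R(R, Lam):
    """Map a candidate matrix R and mean intensities Lambda to the integrated
    cumulants of Eqs. (8)-(9), contracted to the d^2 components K^{iij}.

    By Eq. (7), Lambda itself equals R @ mu for the baseline vector mu.
    """
    C = (R * Lam) @ R.T                      # Eq. (8): C_ij = sum_m Lambda_m R_im R_jm
    # Contracted Eq. (9):
    # K_iij = sum_m (R_im^2 C_jm + 2 R_im C_im R_jm - 2 Lambda_m R_im^2 R_jm)
    K_c = (R ** 2) @ C.T + 2.0 * (R * C) @ R.T - 2.0 * ((R ** 2) * Lam) @ R.T
    return C, K_c
```

These are exactly the model cumulants entering the loss defined below; comparing them with their empirical counterparts gives the objective minimized by NPHC.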
The simplest case d = 1 has been considered in [START_REF] Hardiman | Branching-ratio approximation for the self-exciting hawkes process[END_REF], where it is shown that one can choose M = {C^{11}} in order to compute the kernel integral. Eq. (8) then reduces to a simple second-order equation that has a unique solution in R (and consequently a unique G) accounting for the stability condition (‖G‖ < 1). Unfortunately, for d > 1, the choice M = {C^{ij}}_{1≤i≤j≤d} is not sufficient to uniquely determine the kernel integrals. In fact, the integrated covariance matrix provides d(d+1)/2 independent coefficients, while d² parameters are needed. It is straightforward to show that the remaining d(d−1)/2 conditions can be encoded in an orthogonal matrix O, reflecting the fact that Eq. (8) is invariant under the change R → OR, so that the system is under-determined. Our approach relies on using the third-order cumulant tensor K = [K^{ijk}], which contains (d³ + 3d² + 2d)/6 > d² independent coefficients that are sufficient to uniquely fix the matrix G. This can be justified intuitively as follows: while the integrated covariance only contains symmetric information, and is thus unable to provide causal information, the skewness given by the third-order cumulant in the estimation procedure can break the symmetry between past and future so as to uniquely fix G. Thus, our algorithm consists in selecting d² third-order cumulant components, namely M = {K^{iij}}_{1≤i,j≤d}. In particular, we define the estimator of R as R̂ ∈ argmin_R L(R), where
\[
\mathcal{L}(R) = (1 - \kappa)\, \| K^c(R) - \widehat K^c \|_2^2 + \kappa\, \| C(R) - \widehat C \|_2^2, \tag{10}
\]
where ‖·‖_2 stands for the Frobenius norm, K^c = {K^{iij}}_{1≤i,j≤d} is the matrix obtained by the contraction of the tensor K to d² indices, C is the covariance matrix, while K̂^c and Ĉ are their respective estimators, see Equations (12), (13) below. It is noteworthy that the above mean-square-error approach can be seen as a particular Generalized Method of Moments (GMM), see [START_REF] Hall | Generalized method of moments[END_REF]. This framework allows one to determine the optimal weighting matrix involved in the loss function. However, this approach is unusable in practice, since the associated complexity is too high. Indeed, since we have d² parameters, this matrix has d⁴ coefficients, and GMM calls for computing its inverse, leading to a O(d⁶) complexity. In this work, we use the coefficient κ to scale the two terms, as
\[
\kappa = \frac{\| \widehat K^c \|_2^2}{\| \widehat K^c \|_2^2 + \| \widehat C \|_2^2},
\]
see Section 4.4 for an explanation of the link between κ and the weighting matrix. Finally, the estimator of G is straightforwardly obtained as Ĝ = I_d − R̂^{−1}, from the inversion of Eq. (6). Let us mention an important point: the matrix inversion in the previous formula is not the bottleneck of the algorithm. Indeed, it has a complexity O(d³), which is cheap compared to the computation of the cumulants when n = max_i |Z^i| ≫ d, which is the typical scaling satisfied in applications.
Solving the considered problem on a larger scale, say d ≫ 10³, is an open question, even with state-of-the-art parametric and nonparametric approaches, see for instance [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF], [START_REF] Xu | Learning granger causality for hawkes processes[END_REF], [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF] and [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF], where the number of components d in experiments is always around 100 or smaller. Note that, actually, our approach leads to a much faster algorithm than the considered state-of-the-art baselines, see Tables 1-4 in Section 3 below.

Estimation of the integrated cumulants

In this section we present explicit formulas to estimate the three moment-based quantities listed in the previous section, namely Λ, C and K. We first assume there exists H > 0 such that the truncation from (−∞, +∞) to [−H, H] of the domain of integration of the quantities appearing in Eqs. (4) and (5) introduces only a small error. In practice, this amounts to neglecting border effects in the covariance density and in the skewness density, which is a good approximation if the support of the kernel φ^{ij}(t) is smaller than H and the spectral norm ‖G‖ satisfies ‖G‖ < 1. In this case, given a realization of a stationary Hawkes process {N_t : t ∈ [0, T]}, as shown in Section 4, we can write the estimators of the first three cumulants (3), (4) and (5) as
\[
\widehat\Lambda^i = \frac{1}{T} \sum_{\tau \in Z^i} 1 = \frac{N^i_T}{T} \tag{11}
\]
\[
\widehat C^{ij} = \frac{1}{T} \sum_{\tau \in Z^i} \Big( N^j_{\tau+H} - N^j_{\tau-H} - 2H \widehat\Lambda^j \Big) \tag{12}
\]
\[
\widehat K^{ijk} = \frac{1}{T} \sum_{\tau \in Z^i} \Big( N^j_{\tau+H} - N^j_{\tau-H} - 2H \widehat\Lambda^j \Big) \cdot \Big( N^k_{\tau+H} - N^k_{\tau-H} - 2H \widehat\Lambda^k \Big)
- \frac{\widehat\Lambda^i}{T} \sum_{\tau \in Z^j} \sum_{\tau' \in Z^k} \big(2H - |\tau' - \tau|\big)_+ + 4H^2\, \widehat\Lambda^i \widehat\Lambda^j \widehat\Lambda^k. \tag{13}
\]
Let us mention the following facts.
Bias. While the first cumulant Λ̂^i is an unbiased estimator of Λ^i, the other estimators Ĉ^{ij} and K̂^{ijk} introduce a bias. However, as we will show, in practice this bias is small and hardly affects numerical estimations (see Section 3). This is confirmed by our theoretical analysis, which proves that if H does not grow too fast compared to T, then these estimated cumulants are consistent estimators of the theoretical cumulants (see Section 2.6).
Complexity. The computations of the estimators of the first, second and third-order cumulants have complexity respectively O(nd), O(nd²) and O(nd³), where n = max_i |Z^i|. However, our algorithm requires much less than that: it computes only d² third-order terms, of the form K̂^{iij}, leaving only O(nd²) operations to perform.
Symmetry. While the values of Λ^i, C^{ij} and K^{ijk} are symmetric under permutation of the indices, their estimators are generally not. We have thus chosen to symmetrize the estimators by averaging their values over permutations of the indices. The worst case is for the estimator of K^c, which involves only an extra factor of 2 in the complexity.

The NPHC algorithm

The objective to minimize in Equation (10) is non-convex. More precisely, the loss function is a polynomial in R of degree 6. However, the expectations of cumulants Λ and C defined in Eqs. (3) and (4), which appear in the definition of L(R), are unknown and should be replaced with Λ̂ and Ĉ.
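For concreteness, the estimators (11) and (12) can be computed from raw timestamps along the following lines (a Python/NumPy sketch with illustrative names, not the authors' implementation); the contracted third-order terms K̂^{iij} of Eq. (13) are built from the same bracketed counts.

```python
import numpy as np

def estimate_first_cumulants(events, T, H):
    """Empirical estimators (11)-(12) of the mean intensities and of the
    integrated covariance of a d-dimensional point process.

    events[i] is the sorted array of timestamps of node i observed on [0, T],
    and H is the truncation parameter of Eqs. (4)-(5).
    """
    d = len(events)
    Lam = np.array([len(z) / T for z in events])          # Eq. (11)
    C = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            # N^j_{tau+H} - N^j_{tau-H}, counted with binary search on sorted times
            upper = np.searchsorted(events[j], events[i] + H, side="right")
            lower = np.searchsorted(events[j], events[i] - H, side="left")
            C[i, j] = np.sum(upper - lower - 2.0 * H * Lam[j]) / T   # Eq. (12)
    C = 0.5 * (C + C.T)        # symmetrize, as discussed above
    return Lam, C
```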
We denote by $\widetilde{\mathcal{L}}(R)$ the objective function in which the cumulants $\Lambda^i$ and $C^{ij}$ appearing in the right-hand sides of Eqs. (8) and (9) have been replaced by their estimators:

$$\widetilde{\mathcal{L}}(R) = (1-\kappa)\,\big\| R^{\odot 2} \hat{C}^\top + 2\,[R \odot (\hat{C} - R\hat{L})]\,R^\top - \hat{K}^c \big\|_2^2 + \kappa\,\big\| R \hat{L} R^\top - \hat{C} \big\|_2^2 \qquad (14)$$

As explained in [CHM + 15], the loss function of a typical multilayer neural network with simple nonlinearities can be expressed as a polynomial function of the weights of the network, whose degree is the number of layers. Since the loss function of NPHC is a polynomial of degree 6, we expect good results using optimization methods designed to train deep multilayer neural networks. We used AdaGrad from [START_REF] Duchi | Adaptive subgradient methods for online learning and stochastic optimization[END_REF], a variant of Stochastic Gradient Descent with adaptive learning rates. AdaGrad scales the learning rates coordinate-wise using the online variance of the previous gradients, in order to incorporate second-order information during training. The NPHC method is summarized schematically in Algorithm 10. Our problem being non-convex, the choice of the starting point has a major effect on convergence. Here, the key is to notice that the matrices $R$ that match Equation (8) are known up to the orthogonal degree of freedom discussed above, which provides a natural choice of starting point.

Complexity of the algorithm

Compared with existing state-of-the-art methods to estimate the kernel functions, e.g., the ordinary differential equations-based (ODE) algorithm in [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF], the Granger Causality-based algorithm in [START_REF] Xu | Learning granger causality for hawkes processes[END_REF], the ADM4 algorithm in [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF], and the Wiener-Hopf-based algorithm in [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF], our method has a very competitive complexity. This can be understood from the fact that those methods estimate the kernel functions themselves, while NPHC only estimates their integrals. The ODE-based algorithm is an EM algorithm that parametrizes the kernel functions with $M$ basis functions, each discretized on $L$ points; the basis functions are updated after solving $M$ Euler-Lagrange equations. If $n$ denotes the maximum number of events per component (i.e. $n = \max_{1 \le i \le d} |Z^i|$), the complexity of one iteration of this algorithm is $O(Mn^3d^2 + ML(nd + n^2))$. The Granger Causality-based algorithm is similar to the previous one, without the update of the basis functions, which are Gaussian kernels; the complexity per iteration is $O(Mn^3d^2)$. The ADM4 algorithm, an EM algorithm as well, is similar to the two above with a single exponential kernel as basis function; the complexity per iteration is then $O(n^3d^2)$. The Wiener-Hopf-based algorithm is not iterative, contrary to the previous ones: it first computes the empirical conditional laws on many points and then inverts the Wiener-Hopf system, leading to an $O(nd^2L + d^4L^3)$ computation. Similarly, our method first computes the integrated cumulants, then minimizes the objective function over $N_{\mathrm{iter}}$ iterations, and inverts the resulting matrix $\hat{R}$ to obtain $\hat{G}$. In the end, the complexity of the NPHC method is $O(nd^2 + N_{\mathrm{iter}} d^3)$.
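The descent step described above can be sketched with an automatic-differentiation library. The snippet below uses PyTorch and its Adagrad optimizer purely as an illustration (it is not the implementation used in the experiments) and assumes that the empirical cumulants and a starting point R0 have already been computed, e.g. with the helpers sketched earlier:

```python
import numpy as np
import torch

def fit_R(L_hat, C_hat, Kc_hat, R0, n_iter=3000, lr=0.1):
    """Minimise the empirical loss of Eq. (14) with AdaGrad and return G_hat."""
    L = torch.diag(torch.as_tensor(L_hat, dtype=torch.float64))
    C = torch.as_tensor(C_hat, dtype=torch.float64)
    Kc = torch.as_tensor(Kc_hat, dtype=torch.float64)
    kappa = (Kc ** 2).sum() / ((Kc ** 2).sum() + (C ** 2).sum())

    R = torch.tensor(R0, dtype=torch.float64, requires_grad=True)
    opt = torch.optim.Adagrad([R], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        C_R = R @ L @ R.T                                          # R L R^T
        Kc_R = (R ** 2) @ C.T + 2.0 * (R * (C - R @ L)) @ R.T      # Eq. (14)
        loss = ((1 - kappa) * ((Kc_R - Kc) ** 2).sum()
                + kappa * ((C_R - C) ** 2).sum())
        loss.backward()
        opt.step()

    R_hat = R.detach().numpy()
    return np.eye(len(L_hat)) - np.linalg.inv(R_hat)               # G = I - R^{-1}
```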
According to this analysis, summarized in Table III.1 below, one can see that in the regime n ¿ d , the NPHC method outperforms all the other ones. O(N iter M (n 3 d 2 + L(nd + n 2 ))) GC [XFZ16] O(N iter Mn 3 d 2 ) ADM4 [ZZS13] O(N iter n 3 d 2 ) WH [BM16] O(nd 2 L + d 4 L 3 ) NPHC O(nd 2 + N iter d 3 ) Theoretical guarantee: consistency The NPHC method can be phrased using the framework of the Generalized Method of Moments (GMM). GMM is a generic method for estimating parameters in statistical models. In order to apply GMM, we have to AEnd a vector-valued function g (X , µ) of the data, where X is distributed with respect to a distribution P µ 0 , which satisAEes the moment condition: E[g (X , µ)] = 0 if and only if µ = µ 0 , where µ 0 is the "ground truth" value of the parameter. Based on i.i.d. observed copies x 1 ,..., x n of X , the GMM method minimizes the norm of the empirical mean over n samples, k 1 n P n i =1 g (x i , µ)k, as a function of µ, to obtain an estimate of µ 0 . In the theoretical analysis of NPHC, we use ideas from the consistency proof of the GMM, but the proof actually relies on very di erent arguments. Indeed, the integrated cumulants estimators used in NPHC are not unbiased, as the theory of GMM requires, but asymptotically unbiased. Moreover, the setting considered here, where data consists of a single realization {N t } of a Hawkes process strongly departs from the standard i.i.d setting. Our approach is therefore based on the GMM idea but the proof is actually not using the theory of GMM. In the following, we use the subscript T to refer to quantities that only depend on the process (N t ) in the interval [0, T ] (e.g., the truncation term H T , the estimated integrated covariance b C T or the estimated kernel norm matrix b G T ). In the next equation, Ø stands for the Hadamard product and Ø2 stands for the entrywise square of a matrix. We denote G 0 = I d °R°1 0 the true value of G, and the R 2d £d valued vector functions Theorem 1 (Consistency of NPHC). Suppose that (N t ) is observed on R + and assume that g 0 (R) = ∑ C °RLR > K c °RØ2 C > °2[R Ø (C °RL)]R > ∏ b g T (R) = " b C T °R b L T R > c K c T °RØ2 b C > T °2[R Ø ( b C T °R b L T )]R > . # Using these notations, f L T (R) 1. g 0 (R) = 0 if and only if R = R 0 ; 2. R 2 £, where £ is a compact set; the spectral radius of the kernel norm matrix satisAEes kG 0 k < 1; 4. H T ! 1 and H 2 T /T ! 0. Then b G T = I d °µarg min R2£ f L T (R) ∂ °1 P ! G 0 . The proof of the Theorem is given in Section 4.5 below. Assumption 3 is mandatory for stability of the Hawkes process, and Assumptions 3 and 4 are su cient to prove that the estimators of the integrated cumulants deAEned in Equations ( 11), ( 12) and (13) are asymptotically consistent. Assumption 2 is a very mild standard technical assumption allowing to prove consistency for estimators based on moments. Assumption 1 is a standard asymptotic moment condition, that allows to identify parameters from the integrated cumulants. Numerical Experiments In this Section, we provide a comparison of NPHC with the state-of-the art, on simulated datasets with di erent kernel shapes, the MemeTracker dataset (social networks) and the order book dynamics dataset (AEnance). Simulated datasets. 
We simulated several datasets with Ogata's thinning algorithm [START_REF] Ogata | On lewis' simulation method for point processes[END_REF] using the open-source library tick, each corresponding to one kernel shape: rectangular, exponential or power law, see Figure III.1 below. The integral of each kernel on its support equals $\alpha$, $1/\beta$ can be regarded as a characteristic time-scale, and $\gamma$ is the scaling exponent for the power law kernel and a delay parameter for the rectangular one.

[Figure III.1: the three kernel shapes used in the simulations: (a) rectangular kernel $\phi(t) = \alpha\beta\,\mathbf{1}_{[0,1/\beta]}(t-\gamma)$; (b) power law kernel $\phi(t) = \alpha\beta\gamma(1+\beta t)^{-(1+\gamma)}$, shown on a log-log scale with slope $\approx -(1+\gamma)$; (c) exponential kernel $\phi(t) = \alpha\beta\exp(-\beta t)$.]

We consider a non-symmetric block matrix $G$ to show that our method can effectively uncover causality between the nodes, see Figure III.2. The matrix $G$ has constant entries $\alpha$ on the three blocks ($\alpha = g_{ij} = 1/6$ for dimension 10 and $\alpha = g_{ij} = 1/10$ for dimension 100) and zero outside. The two other parameters take the same values for dimensions 10 and 100. The parameter $\gamma$ is set to $1/2$ on the three blocks as well, but we set three very different values $\beta_0$, $\beta_1$ and $\beta_2$ from one block to the other, with ratio $\beta_{i+1}/\beta_i = 10$ and $\beta_0 = 0.1$. The number of events is roughly equal to $10^5$ on average over the nodes. We ran the algorithm on three simulated datasets: a 10-dimensional process with rectangular kernels named Rect10, a 10-dimensional process with power law kernels named PLaw10, and a 100-dimensional process with exponential kernels named Exp100.

MemeTracker dataset. We use events of the most active sites from the MemeTracker dataset. This dataset contains the publication times of articles in many websites/blogs from August 2008 to April 2009, together with the hyperlinks between posts. We extract the top 100 media sites with the largest number of documents, with about 7 million events in total. We use the links to trace the flow of information and to establish an estimated ground truth for the matrix $G$: when a hyperlink to site $j$ appears in a post on website $i$, the linked post on $j$ can be regarded as a direct ancestor of the event on $i$. Eq. (2) then shows that $g_{ij}$ can be estimated by $N^{i \leftarrow j}_T / N^j_T = \#\{\text{links } j \to i\} / N^j_T$.

Order book dynamics. We apply our method to financial data, in order to understand the self- and cross-influencing dynamics of all event types in an order book. An order book is a list of buy and sell orders for a specific financial instrument, updated in real time throughout the day. This model was first introduced in [BJM16], and describes the order book via the following 8-dimensional point process: $N_t = (P^{(a)}_t, P^{(b)}_t, T^{(a)}_t, T^{(b)}_t, L^{(a)}_t, L^{(b)}_t, C^{(a)}_t, C^{(b)}_t)$, where $P^{(a)}$ (resp. $P^{(b)}$) counts the number of upward (resp. downward) price moves, $T^{(a)}$ (resp. $T^{(b)}$) counts the number of market orders at the ask (resp. at the bid) that do not move the price, $L^{(a)}$ (resp. $L^{(b)}$) counts the number of limit orders at the ask (resp. at the bid) that do not move the price, and $C^{(a)}$ (resp. $C^{(b)}$) counts the number of cancel orders at the ask (resp. at the bid) that do not move the price. The financial data has been provided by QuantHouse EUROPE/ASIA, and consists of DAX future contracts between 01/01/2014 and 03/01/2014.

Baselines.
We compare NPHC to state-of-the-art baselines: the ODE-based algorithm (ODE) by [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF], the Granger Causality-based algorithm (GC) by [START_REF] Xu | Learning granger causality for hawkes processes[END_REF], the ADM4 algorithm (ADM4) by [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF], and the Wiener-Hopf-based algorithm (WH) by [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF].

Metrics. We evaluate the performance of the methods using the computing time, the Relative Error

$$\mathrm{RelErr}(A, B) = \frac{1}{d^2} \sum_{i,j} \left( \frac{|a_{ij} - b_{ij}|}{|a_{ij}|}\,\mathbf{1}_{\{a_{ij} \neq 0\}} + |b_{ij}|\,\mathbf{1}_{\{a_{ij} = 0\}} \right),$$

and the Mean Kendall Rank Correlation

$$\mathrm{MRankCorr}(A, B) = \frac{1}{d} \sum_{i=1}^{d} \mathrm{RankCorr}\big([a_{i\bullet}], [b_{i\bullet}]\big),$$

where $\mathrm{RankCorr}(x, y) = \frac{2}{d(d-1)} \big( N_{\mathrm{concordant}}(x, y) - N_{\mathrm{discordant}}(x, y) \big)$, with $N_{\mathrm{concordant}}(x, y)$ the number of pairs $(i, j)$ satisfying $x_i > x_j$ and $y_i > y_j$, or $x_i < x_j$ and $y_i < y_j$, and $N_{\mathrm{discordant}}(x, y)$ the number of pairs $(i, j)$ for which this condition is not satisfied. Note that the RankCorr score takes values between $-1$ and $1$, representing rank matching, but can take smaller values (in absolute value) if the entries of the vectors are not distinct.

Discussion. We perform the ADM4 estimation, with exponential kernel, by giving it the exact value $\beta = \beta_0$ of one block. Let us stress that this strongly helps this baseline, in comparison to NPHC, where nothing is specified about the shape of the kernel functions. We used $M = 10$ basis functions for both the ODE and GC algorithms, and $L = 50$ quadrature points for WH. We did not run WH on the 100-dimensional dataset, for computing time reasons, because its complexity scales with $d^4$. We ran multi-processed versions of the baseline methods on 56 cores to decrease the computing time. Our method consistently performs better than all baselines, on the three synthetic datasets, on MemeTracker and on the financial dataset, both in terms of Kendall rank correlation and estimation error. Moreover, we observe that our algorithm is roughly 50 times faster than all the considered baselines. On Rect10, PLaw10 and Exp100 our method gives very good results, despite the fact that it does not use any prior on the shape of the kernel functions, while for instance the ADM4 baseline does. In Figure III.2, we observe that the matrix $\hat{G}$ estimated with ADM4 recovers well the block for which $\beta = \beta_0$, i.e. the value we gave to the method, but does not perform well on the two other blocks, while the matrix $\hat{G}$ estimated with NPHC approximately reaches the true value on each of the three blocks. On these simulated datasets, NPHC obtains a comparable or slightly better Kendall rank correlation, but improves the relative error by a large margin. On MemeTracker, the baseline methods obtain a high relative error, between 9% and 19%, while our method achieves a relative error of 7%, which is a strong improvement. Moreover, NPHC reaches a much better Kendall rank correlation, which shows that it leads to a much better recovery of the relative order of the estimated influences than all the baselines. Indeed, it has been shown in [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF] that the kernels of the MemeTracker data are neither exponential nor power law.
This partly explains why our approach behaves better. On the AEnancial data, the estimated kernel norm matrix obtained via NPHC, see Figure III.3, gave some interpretable results (see also [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF]): 1. Any 2 £ 2 sub-matrix with same kind of inputs (i.e. Prices changes, Trades, Limits or Cancels) is symmetric. This shows empirically that ask and bid have symmetric roles. 2. The prices are mostly cross-excited, which means that a price increase is very likely to be followed by a price decrease, and conversely. This is consistent with the wavy prices we observe on AEnancial markets. 3. The market, limit and cancel orders are strongly self-excited. This can be explained by the persistence of order Øows, and by the splitting of meta-orders into sequences of 4. Technical details smaller orders. Moreover, we observe that orders impact the price without changing it. For example, the increase of cancel orders at the bid causes downward price moves. Technical details We show in this section how to obtain the equations stated above, the estimators of the integrated cumulants and the scaling coe cient ∑ that appears in the objective function. We then prove the theorem of the paper. Proof of Equation (8) We denote ∫(z) the matrix ∫ i j (z) = L z ≥ t ! E(d N i u d N j u+t ) dud t ° §i § j ¥ , where L z ( f ) is the Laplace transform of f , and √ t = P n∏1 ¡ (?n) t , where ¡ (?n) t refers to the n th auto-convolution of ¡ t . Then we use the characterization of second-order statistics, AErst formulated in [START_REF] Hawkes | Point spectra of some mutually exciting point processes[END_REF] and fully generalized in [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF], ∫(z) = (I d + L °z (™))L(I d + L z (™)) > , where L i j = § i ± i j with ± i j the Kronecker symbol. Since I d + L z (™) = (I d °Lz (©)) °1, taking z = 0 in the previous equation gives > , which gives us the result since the entry (i , j ) of the last equation gives ∫(0) = (I d °G) °1L(I d °G> ) °1, C = RLR C i j = P m § m R i m R j m . Proof of Equation (9) We start from [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF], cf. Eqs. (48) to (51), and group some terms: K i jk = X m § m R i m R j m R km + X m R i m R j m X n § n R kn L 0 (√ mn ) + X m R i m R km X n § n R j n L 0 (√ mn ) + X m R j m R km X n § n R i n L 0 (√ mn ). Using the relations L 0 (√ mn ) = R mn °±mn and C i j = P m § m R i m R j m , proves Equation (9). Integrated cumulants estimators For H > 0 let us denote ¢ H N i t = N i t +H °N i t °H . Let us AErst remark that, if one restricts the integration domain to (°H , H ) in Eqs. (4) and (5), one gets by permuting integrals and expectations: § i d t = E(d N i t ) C i j d t = E ≥ d N i t (¢ H N j t °2H § j ) ¥ K i jk d t = E ≥ d N i t (¢ H N j t °2H § j )(¢ H N k t °2H § k ) ¥ °d t § i E ≥ (¢ H N j t °2H § j )(¢ H N k t °2H § k ) ¥ . The estimators ( 11) and ( 12) are then naturally obtained by replacing the expectations by their empirical counterparts, notably E(d N i t f (t )) d t ! 1 T X ø2Z i f (ø). For the estimator (13), we shall also notice that E((¢ H N j t °2H § j )(¢ H N k t °2H § k )) = ZZ [°H ,H ] (t ) [°H ,H ] (t 0 )C j k t °t 0 d td t 0 = Z (2H °|t |) + C j k t d t. We estimate the last integral with the remark above. 
Choice of the scaling coe cient ∑ Following the theory of GMM, we denote m(X , µ) a function of the data, where X is distributed with respect to a distribution P µ 0 , which satisAEes the moment conditions g (µ) = E[m(X , µ)] = 0 if and only if µ = µ 0 , the parameter µ 0 being the ground truth. For x 1 ,..., x N observed copies of X , we denote b g i (µ) = m(x i , µ), the usual choice of weighting matrix is c W N (µ) = 1 N P N i =1 b g i (µ) b g i (µ) > , and the objective to minimize is then √ 1 N N X i =1 b g i (µ) ! °c W N (µ 1 ) ¢ °1 √ 1 N N X i =1 b g i (µ) ! , (15) where µ 1 is a constant vector. Instead of computing the inverse weighting matrix, we rather use its projection on {AEI d : AE 2 R}. It can be shown that the projection choses AE as the mean eigenvalue of c W N (µ 1 ). We can easily compute the sum of its eigenvalues: Tr( c W N (µ 1 )) = 1 N N X i =1 Tr( b g i (µ 1 ) b g i (µ 1 ) > ) = 1 N N X i =1 Tr( b g i (µ 1 ) > b g i (µ 1 )) = 1 N N X i =1 || b g i (µ 1 )|| 2 2 . Technical details In our case, b 2 for the second. We compute the previous terms with R 1 = 0. All together, the objective function to minimize is g (R) = h vec[ c K c °K c (R)], vec[ b C °C (R)] i > 2 R 2d 2 . 1 k c K c k 2 2 kK c (R) °c K c k 2 2 + 1 k b C k 2 2 kC (R) °b C k 2 2 . (16) Dividing this function by ≥ 1/k c K c k 2 2 + 1/k b C k 2 2 ¥ °1, and setting ∑ = k c K c k 2 2 /(k c K c k 2 2 + k b C k 2 2 ), we obtaind the loss function given in Equation (10). Proof of the Theorem The main di erence with the usual Generalized Method of Moments, see [START_REF] Hansen | Large sample properties of generalized method of moments estimators[END_REF], relies in the relaxation of the moment conditions, since we have E[ b g T (µ 0 )] = m T 6 = 0. We adapt the proof of consistency given in [START_REF] Newey | Large sample estimation and hypothesis testing[END_REF]. We can relate the integral of the Hawkes process's kernels to the integrals of the cumulant densities, from [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF]. Our cumulant matching method would fall into the usual GMM framework if we could estimate -without bias -the integral of the covariance on R, and the integral of the skewness on R 2 . Unfortunately, we can't do that easily. We can however estimate without bias R f T t C i j t d t and R f T t K i jk t d t with f T a compact supported function on [°H T , H T ] that weakly converges to 1, with H T °! 1. In most cases we will take f T t = [°H T ,H T ] (t ). Denoting b C i j,(T ) the estimator of R f T t C i j t d t, the term | E[ b C i j,(T ) ]°C i j | = | R f T t C i j t d t °C i j | can be considered a proxy to the distance to the classical GMM. This distance has to go to zero to make the rest of GMM's proof work: the estimator b C i j,(T ) is then asymptotically unbiased towards C i j when T goes to inAEnity. Notations We observe the multivariate point process (N t ) on R + , with Z i the events of the i th component. We will often write covariance / skewness instead of integrated covariance / skewness. In the rest of the document, we use the following notations. 
Hawkes kernels' integrals G true = R © t d t = ( R ¡ i j t d t) i j = I d °(R true ) °1 Theoretical mean matrix L = diag( § 1 ,..., § d ) Theoretical covariance C = R true L(R true ) > Theoretical skewness K c = (K i i j ) i j = (R true ) Ø 2 C > + 2[R true Ø (C °Rtrue L)](R true ) > Filtering function f T ∏ 0 supp( f T ) Ω [°H T , H T ] F T = R f T s d s e f T t = f T °t Events sets Z i ,T,1 = Z i \ [H T , T + H T ] Z j ,T,2 = Z j \ [0, T + 2H T ] Estimators of the mean b § i = N i T +H T °N i H T T e § j = N j T +2H T T +2H T Estimator of the covariance b C i j,(T ) = 1 T P ø2Z i ,T,1 °Pø 0 2Z j ,T,2 f ø 0 °ø °e § j F T ¢ Estimator of the skewness 6 b K i jk,(T ) = 1 T X ø2Z i ,T,1 √ X ø 0 2Z j ,T,2 f ø 0 °ø °e § j F T !√ X ø 00 2Z k,T,2 f ø 0 °ø °e § k F T ! °b § i T + 2H T X ø 0 2Z j ,T,2 √ X ø 00 2Z k,T,2 ( f T ? e f T ) ø 0 °ø00 °e § k (F T ) 2 ! GMM related notations µ = R and µ 0 = R true g 0 (µ) = vec ∑ C °RLR > K c °RØ 2 C > °2[R Ø (C °RL)]R > ∏ 2 R 2d 2 b g T (µ) = vec " b C (T ) °R b LR > c K c (T ) °RØ 2 ( b C (T ) ) > °2[R Ø ( b C (T ) °R b L)]R > # 2 R 2d 2 Q 0 (µ) = g 0 (µ) > W g 0 (µ) b Q T (µ) = b g T (µ) > c W T b g T (µ) Consistency First, let's remind a useful theorem for consistency in GMM from [START_REF] Newey | Large sample estimation and hypothesis testing[END_REF]. Theorem 2. If there is a function Q 0 (µ) such that (i ) Q 0 (µ) is uniquely maximized at µ 0 ; (i i ) £ is compact; (i i i ) Q 0 (µ) is continuous; (i v) b Q T (µ) converges uniformly in probability to Q 0 (µ), then b µ T = arg max b Q T (µ) P °! µ 0 . We can now prove the consistency of our estimator. µ = µ 0 , 2. µ 2 £, which is compact, 6 When f T t = [°H T ,H T ] (t ), we remind that ( f T ? e f T ) t = (2H T °|t |) + . This leads to the estimator we showed in the article. Technical details the spectral radius of the kernel norm matrix satisAEes ||©|| § < 1, 4. 8i , j , k 2 [d ], R f T u C i j u du ! R C i j u du and R f T u f T v K i jk u,v dud v ! R K i jk u,v dud v, (µ) = [W 1/2 g 0 (µ)] > [W 1/2 g 0 (µ)] > 0 = Q 0 (µ 0 ) . Indeed, there exists a neighborhood N of µ 0 such that µ 2 N \{µ 0 } and g 0 (µ) 6 = 0 since g 0 (µ) is a polynom. Condition 2.1(ii) follows by (i i ). Condition 2.1(iii) is satisAEed since Q 0 (µ) is a polynom. Con- dition 2.1(i v) is harder to prove. First, since b g T (µ) is a polynom of µ, we prove easily that E[sup µ2£ | b g T (µ)|] < 1. Then, by £ compact, g 0 (µ) is bounded on £, and by the triangle and Cauchy-Schwarz inequalities, The estimator of L is unbiased so let's focus on the variance of b | b Q T (µ) °Q0 (µ)| ∑ |( b g T (µ) °g0 (µ)) > c W T ( b g T (µ) °g0 (µ))| + |g 0 (µ) > ( c W T + c W > T )( b g T (µ) °g0 (µ))| + |g 0 (µ) > ( c W T °W )g 0 (µ)| ∑ k b g T (µ) °g0 (µ)k 2 k c W T k + 2kg 0 (µ)kk b g T (µ) °g0 (µ)kk c W T k + kg 0 (µ)k 2 k c W T °W k. To prove sup µ2£ | b Q T (µ) °Q0 ( L. E[( b § i ° §i ) 2 ] = E " µ 1 T Z T +H T H T (d N i t ° §i d t) ∂ 2 # = 1 T 2 Z T +H T H T Z T +H T H T E[(d N i t ° §i d t)(d N i t 0 ° §i d t 0 )] = 1 T 2 Z T +H T H T Z T +H T H T C i i t 0 °t d td t 0 ∑ 1 T 2 Z T +H T H T C i i d t = C i i T °! 0 By Markov inequality, we have just proved that k b L °Lk P °! 0. Proof that k b C (T ) °C k P °! 0 First, let's remind that E( b C (T ) ) 6 = C . 
Indeed, E ≥ b C i j,(T ) ¥ = E µ 1 T Z T +H T H T d N i t Z T +2H T 0 d N j t 0 f t 0 °t °b § i e § j F T ∂ = E µ 1 T Z T +H T H T d N i t Z T +2H T °t °t d N j t +s f s ° §i § j F T ∂ + ≤ i j,T,H T F T = 1 T Z T +H T H T Z H T °HT f s E ≥ d N i t d N j t +s ° §i § j d s ¥ + ≤ i j,T,H T F T = Z f s C i j s d s + ≤ i j,T,H T F T Now, ≤ i j,T,H T = E ≥ § i § j °b § i e § j ¥ = °1 T 2 Z T +H T H T Z T +2H T 0 E ≥ d N i t d N j t 0 ° §i § j d td t 0 ¥ = °1 T 2 Z T +H T H T Z T +2H T 0 C i j t °t 0 d td t 0 = °1 T Z µ 1 + µ H T °|t | T ∂ °∂+ C i j t d t Since f satisAEes F T = o(T ), we have E( b C (T ) ) °! C . It remains now to prove that k b C (T ) °E( b C (T ) )k P °! 0. Let's now focus on the variance of b C i j,(T ) : V( b C i j,(T ) ) = E °( b C i j,(T ) ) 2 ¢ °E( b C i j,(T ) ) 2 . Now, E ≥ ( b C i j,(T ) ) 2 ¥ = E √ 1 T 2 X (ø,¥,ø 0 ,¥ 0 )2(Z i ,T,1 ) 2 £(Z j ,T,2 ) 2 ( f ø 0 °ø °F T /(T + 2H T ))( f ¥ 0 °¥ °F T /(T + 2H T )) ! = E µ 1 T 2 Z t ,s2[H T ,T +H T ] Z t 0 ,s 0 d N i t d N j t 0 d N i s d N j s 0 ( f t 0 °t °F T /(T + 2H T ))( f s 0 °s °F T /(T + 2H T )) ∂ = 1 T 2 Z t ,s2[H T ,T +H T ] Z t 0 ,s 0 2[0,T +2H T ] E ≥ d N i t d N j t 0 d N i s d N j s 0 ¥ • ( f t 0 °t °F T /(T + 2H T ))( f s 0 °s °F T /(T + 2H T )) And, E( b C i j,(T ) ) 2 = 1 T 2 Z t ,s2[H T ,T +H T ] Z t 0 ,s 0 2[0,T +2H T ] E ≥ d N i t d N j t 0 ¥ E ≥ d N i s d N j s 0 ¥ • ( f t 0 °t °F T /(T + 2H T ))( f s 0 °s °F T /(T + 2H T )) Then, the variance involves the integration towards the di erence of moments µ r,s,t ,u °µr,s µ t ,u . Let's write it as a sum of cumulants, since cumulants density are integrable. µ r,s,t ,u °µr,s µ t ,u = ∑ r,s,t ,u + ∑ r,s,t ∑ u [4] + ∑ r,s ∑ t ,u [3] + ∑ r,s ∑ t ∑ u [6] + ∑ r ∑ s ∑ t ∑ u °(∑ r,s + ∑ r ∑ s )(∑ t ,u + ∑ t ∑ u ) = ∑ r,s,t ,u + ∑ r,s,t ∑ u + ∑ u,r,s ∑ t + ∑ t ,u,r ∑ s + ∑ s,t ,u ∑ r + ∑ r,t ∑ s,u + ∑ r,u ∑ s,t + ∑ r,t ∑ s ∑ u + ∑ r,u ∑ s ∑ t + ∑ s,t ∑ r ∑ u + ∑ s,t ∑ r ∑ u In the rest of the proof, we denote a t = t 2[H T ,T +H T ] , b t = t 2[0,T +2H T ] , c t = t 2[°H T ,H T ] , g t = f t °1 T +2H T F T Before starting the integration of each term, let's remark that: 1. ™ t = P n∏1 © (?n) t ∏ 0 since © t ∏ 0. f t 0 °t d td t 0 = T F T b) R a t b t 0 g t 0 °t d td t 0 = 0 c) R a t b t 0 |g t 0 °t |d td t 0 ∑ 2T F T 4. 8t 2 R, a t (b ? e g ) t = 0 , where e g s = g °s . Fourth cumulant We want here to compute R ∑ i , j ,i , j t , t 0 °t g s 0 °s | ∑ (|| f || 1 (1 + 2H T /T )) 2 ∑ 4|| f || 2 1 . Ø Ø Ø 1 T 2 Z ∑ i , j ,i , j t ,t 0 ,s,s 0 a t b t 0 a s b s 0 g t 0 °t g s 0 °s d td t 0 d sd s 0 Ø Ø Ø ∑ µ 2|| f || 1 T ∂ 2 Z d t a t Z d t 0 b t 0 Z d sa s Z d s 0 b s 0 M i ji j t 0 °t ,s°t ,s 0 °t ∑ µ 2|| f || 1 T ∂ 2 Z d t a t Z d t 0 b t 0 Z d sa s Z d w M i ji j t 0 °t ,s°t ,w ∑ µ 2|| f || 1 T ∂ 2 Z d t a t Z M i ji j u,v,w dud vd w ∑ 4|| f || 2 1 T M i ji j °! T !1 0 Third £ First We have four terms, but only two di erent forms since the roles of (s, s 0 ) and (t , t 0 ) are symmetric. First form Z ∑ i , j ,i t ,t 0 ,s § j G t d t = § j T 2 Z ∑ i , j ,i t ,t 0 ,s a t b t 0 a s b s 0 g t 0 °t g s 0 °s d td t 0 d sd s 0 = § j T 2 Z ∑ i , j ,i t ,t 0 ,s a t b t 0 a s (b ? e g ) s g t 0 °t d td t 0 d s = 0 since a s (b ? e g ) s = 0 Second form Ø Ø Ø Z ∑ i , j , j t ,t 0 ,s 0 § i G t d t Ø Ø Ø = Ø Ø Ø § i T 2 Z ∑ i , j , j t ,t 0 ,s 0 a t b t 0 a s b s 0 g t 0 °t g s 0 °s d td t 0 d sd s 0 Ø Ø Ø = Ø Ø Ø § i T 2 Z ∑ i , j , j t ,t 0 ,s 0 a t b t 0 g t 0 °t b s 0 (a ? g ) s 0 d td t 0 d s 0 Ø Ø Ø ∑ § i T 2 2|| f || 1 Z d s 0 b s 0 (a ? 
|g |) s 0 Z d t a t Z d t 0 b t 0 K i j j t 0 °s0 ,t °s0 ∑ 4|| f || 1 K i j j § i F T T °! T !1 0 Second £ Second First form Ø Ø Ø Z ∑ i ,i t ,s ∑ j , j t 0 ,s 0 G t d t Ø Ø Ø ∑ 2|| f || 1 T 2 Z C i i t °sC j j t 0 °s0 a t b t 0 |g t 0 °t |a s b s 0 d td t 0 d sd s 0 ∑ 2|| f || 1 T 2 C i i C j j Z a t b t 0 |g t 0 °t |d td t 0 ∑ 4|| f || 1 C i i C j j F T T °! T !1 0 Second form Ø Ø Ø Z ∑ i , j t ,s 0 ∑ i , j t 0 ,s G t d t Ø Ø Ø ∑ 4|| f || 1 (C i j ) 2 F T T °! T !1 0 Second £ First £ First First form Z ∑ i , j t ,t 0 § i § j G t d t = § i § j T 2 Z ∑ i , j t ,t 0 a t b t 0 g t 0 °t d td t 0 Z a s b s 0 g s 0 °s d sd s 0 = 0 Second form Z ∑ i ,i t ,s § j § j G t d t = µ § j T ∂2 Z ∑ i ,i t ,s a t b t 0 g t 0 °t a s (b ? e g ) s d td t 0 d s = 0 We have just proved that V( b C (T ) ) P °! 0. By Markov inequality, it ensures us that k b C (T ) °E( b C (T ) )k P °! 0, and AEnally that k b C (T ) °C k P °! 0. Á Proof that k c K c (T ) °K c k P °! 0 The scheme of the proof is similar to the previous one. The upper bounds of the integrals involve the same kind of terms, plus the new term (F T ) 2 /T that goes to zero thanks to the assumption 5 of the theorem. Conclusion In this paper, we introduce a simple nonparametric method (the NPHC algorithm) that leads to a fast and robust estimation of the matrix G of the kernel integrals of a Multivariate Hawkes process that encodes Granger causality between nodes. This method relies on the matching of the integrated order 2 and order 3 empirical cumulants, which represent the simplest set of global observables containing su cient information to recover the matrix G. Since this matrix fully accounts for the self-and cross-inØuences of the process nodes (that can represent agents or users in applications), our approach can naturally be used to quantify the degree of endogeneity of a system and to uncover the causality structure of a network. By performing numerical experiments involving very di erent kernel shapes, we show that the baselines, involving either parametric or non-parametric approaches are very sensible to model misspeciAEcation, do not lead to accurate estimation, and are numerically expensive, while NPHC provides fast, robust and reliable results. This is conAErmed on the MemeTracker database, where we show that NPHC outperforms classical approaches based on EM algorithms or the Wiener-Hopf equations. Finally, the NPHC algorithm provided very satisfying results on AEnancial data, that are consistent with well-known stylized facts in AEnance. Introduction The previous approach based on the Generalized Method of Moments need the AErst three cumulants to obtain enough information from the data to recover the d 2 entries of G. Indeed, we want to recover d 2 independent coe cients -the entries of G -and the AErst two integrated cumulants give d + d (d + 1)/2 independent terms since the integrated covariance C is a symmetric matrix. Assuming the matrix G has a certain structure, we can get rid of the third order cumulant and design another estimation procedure using only the AErst two integrated cumulants. The advantage of such approach lies in the convexity of the related optimization problem, on the contrary to the minimization of L T from the previous chapter. The matrix we want to estimate minimize a simple criterion f convex, typically a norm, while being consistent with the AErst two empirical integrated cumulants. 
Problem setting We start from the relation between the integrated covariance C and the matrix R introduced in the previous chapter, from [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF] and many other references: C = RLR > . Our purpose is still to approximate G = I °R°1 from the information encoded in the integrated cumulants. The previous equation in R admits a set of roots of that is isomorphocic to orthogonal group O n (R), and then: G = I °L1/2 MC °1/2 with M 2 O n (R) i.e. L °1/2 (I °G)C 1/2 2 O n (R). (1) The previous expression only comes from the relation on the covariance. However, two classic assuptions on the Hawkes kernel norm matrix are not yet encoded. The AErst one concerns the positivity of the kernels, and then the positivity of their integrals: g i j ∏ 0 for i , j 2 [d ]. Some variants of Hawkes processes allow the possibility of modeling inhibition through negative valued kernels [START_REF] Pernice | How structure determines correlations in neuronal networks[END_REF], with nonlinear Hawkes processes for instance [START_REF] Brémaud | Stability of nonlinear hawkes processes[END_REF], but the closed formulas of the cumulants [START_REF] Jovanović | Cumulants of hawkes point processes[END_REF] no longer stand with those variants. The other well-known assumption is linked to the stationarity of the process. The counting process N t has asymptotically stationary increments if the spectral norm of the kernel norm matrix is smaller than one: ||G|| < 1. We AEnally encode the structure of the matrix G via the minimization of a criterion f subject to some constraints. For the problem to be easy to solve, the criterion f will be a convex function whose proximal operator is explicit. The AEt to the data encoded in Equation (1), and the two assumptions above will be regarded as constraints of our optimization problem. All together, we formulate our problem as the following constrained optimization problem Constrained optimization problem: min G f (G) s.t. L °1/2 (I °G)C 1/2 2 O n (R) ||G|| < 1 g i j ∏ 0 The problem above involves easy constraints on two di erent matrices: G and M = L °1/2 (I °G)C 1/2 . Our AErst idea is to relax the previous problem to turn it into a convex optimization problem. The objective f is convex, and the constraints ||G|| < 1 and g i j ∏ 0 correspond to convex sets. The constraint that involves the orthogonal group is trickier and is not classic. We prove in Section 6 that the convex hull of the orthogonal group O n (R) is the closed unit ball w.r.t. the `2 norm. In the rest of the chapter, we denote B (resp. B 2 ) the open (resp. closed) unit ball w.r.t the spectral norm (resp. the `2 norm). Instead of the previous problem, we split the variables G and M, meaning that we focus on the minimization problem both on G and M. Such minimization problem on two variables x and z linked via an equation of the form Ax + B z = c can be e ciently solved with the Alternating Direction Method of Multipliers algorithm [START_REF] Glowinski | Sur l'approximation, par éléments AEnis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de dirichlet non linéaires[END_REF][START_REF] Gabay | A dual algorithm for the solution of nonlinear variational problems via AEnite element approximation[END_REF] detailed in Section 3. ADMM The minimization problem we AEnally aim at solving writes: min G,M f (G) + B 2 (M) + B (G) + R d £d + (G) (2) s.t. 
L °1/2 G + M C °1/2 = L °1/2 , On the contrary to the optimization problem of the previous chapter, the problem just stated is convex. We test this procedure on numerical simulations of various Hawkes kernels and real order book data, and we show how the criterion f impact the matrices we retrieve. ADMM The ADMM algorithm The Alternating Direction Methods of Multipliers (ADMM) is a widely-used minimization method to solve constrained problems of the form min x,z f (x) + g (z) (3) s.t. Ax + B z = c. The objective function is separable in (x, z) with g and h two convex functions. The constraint involves two matrices A and B , and a constant vector c. The algorithm ADMM was originally introduced in [GM76] and [START_REF] Glowinski | Sur l'approximation, par éléments AEnis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de dirichlet non linéaires[END_REF], and focuses on the augmented Lagrangian [START_REF] Hestenes | Multiplier and gradient methods[END_REF][START_REF] Powell | A method for non-linear constraints in minimization problems[END_REF] associated to problem (3), that is: L Ω (x, z, y) := g (x) + h(z) + y > (Ax + B z °c) + Ω 2 ||Ax + B z °c|| 2 2 , (4) with Ω > 0 and solves the problem min x,z max y L Ω (x, z, y) (5) instead of the initial one. The method of multipliers [START_REF] Hestenes | Multiplier and gradient methods[END_REF][START_REF] Powell | A method for non-linear constraints in minimization problems[END_REF] (analysis in [START_REF] Bertsekas | Constrained optimization and Lagrange multiplier methods[END_REF]) applied to this problem would alternate an exact minimization step on the primal variable (x, z) and a gradient ascent step on the dual variable y. Instead of the exact minimization step on the couple (x, z), we do one pass of a Gauss-Seidel method [START_REF] Golub | Matrix computations[END_REF] and split the joint minimization into two partial minimization steps: one over x with z AExed, the other over z with x AExed. These two minimization steps can be done simultaneously, from the same initial points, or in the case of ADMM, one after the other, with an update between. Namely, ADMM algorithm iterates the following update steps: x t +1 = argmin x L Ω (x, z t , y t ), z t +1 = argmin z L Ω (x t +1 , z, y t ), y t +1 = y t + Ω(Ax t +1 + B z t +1 °c). Convergence results The convergence results of ADMM hold under the following two assumptions: • The functions f and g are convex, proper1 and closed2 . • The (unaugmented) Lagrangian L 0 has a saddle point i.e. there exist (x § , z § , y § ) for which L 0 (x § , z § , y) ∑ L 0 (x § , z § , y § ) ∑ L 0 (x, z, y § ) for all x, y, z. Under these two assumptions, the ADMM iterates satisfy the following convergences (a proof is given in [BPC + 11]): • Residual convergence: r t = Ax t + B z t °c ! 0 as t ! 1 i.e. the iterates approaches feasibility. • Objective convergence: f (x t ) + g (z t ) ! min x,z { f (x) + g (z)} as t ! 1 i.e. the objective function approaches its optimal value. • Dual variable convergence: y t ! y § as t ! 1, where y § is a dual optimal point. Examples The ADMM method is quite general and plenty of optimization problems can be solved with it. We show here two usual tricks to turn an optimization problem into a relevant ADMM form. The AErst is to introduce indicator functions and concerns for instance optimization problem constrained on a set C : min x f (x) s.t. x 2 C . This problem can be equivalenty written: min x,z f (x) + g (z) s.t. 
x °z = 0 with g to be indicator of C i.e. to equal zero on C and 1 outside. The other trick is to introduce a variable z being equal to a linear transformation of x. We consider the problem called total variation denoising [START_REF] Rudin | Nonlinear total variation based noise removal algorithms[END_REF]: min x ||x °b|| 2 2 + ∏ d °1 X i =1 |x i +1 °xi |. Numerical results Denoting F = (F i j ) with F i j = 1 j =i +1 °1j=i , the previous problem can be written as: min x,z ||x °b|| 2 2 + ∏||z|| 1 s.t. F x °z = 0. Such problem can be e ciently solved using the ADMM algorithm, since each update step of the algorithm has a closed form using proximal operators of the `2 and `1 norms. Numerical results In the previous sections, we only assumed f was a convex criterion whose proximal operator can be easily computed. Now, we exhibit three di erent choices for f and present the results obtained with these choices for both simulated and real-world dataset. The criteria we consider are the `1-norm f = || • || 1 , Problem II when f = || • || 2 2 and Problem III for the case f = || • || § . We solve those minimization problem using the ADMM algorithm whose update steps are written above. The explicit update steps are provided in Section 6. Simulated data We simulated multivariate Hawkes point processes with the procedure already explained in the previous chapter, and implemented in the open-source library tick. As previously, we simulated three datasets generated from three di erent Hawkes kernels: the exponential kernel, the power-law kernel and the rectangular one. The mean vector and the integrated covariance matrix are computed using estimators provided in the previous chapter. We then used ADMM algorithm to solve the problems I , I I and I I I , and observed the same patterns for the three kernels. To ease the reading, we only show the results for the exponential kernel in dimension 100. The results from the Figure IV.1 and the Table IV.1 are consistent and shows that the solution to Problem I is the closest to the ground-truth matrix G. Moreover, according the Figure IV.1 one observes that the solutions to the two other problems are symmetric matrices, while the ground-truth matrix is not. Order book data The numerical experiments on simulated data incites us to focus on Problem I if the matrix G we want to uncover is not symmetric. Such non-symmetric relationships are for instance highlighted in [START_REF] Rambaldi | The role of volume in order book dynamics: a multivariate hawkes process analysis[END_REF], where the authors studied the interplay between orders of di erent sizes. Indeed, a large trade is more likely to be followed by smaller trades than the opposite. We use the same data as the authors of [START_REF] Rambaldi | The role of volume in order book dynamics: a multivariate hawkes process analysis[END_REF] i.e. high-frequency order book data of futures traded at EUREX, that are also used in the numerical part of the previous chapter, see this section for details about the dataset. Here, we use the trades' timestamps of Bund futures. We consider here unsigned trades i.e. we do not distinguish between buyer initiated trades and seller initiated ones. The di erent dimensions of the multivariate point process correspond to di erent intervals of volumes: each transaction falls into only one component. 
We denote N a t the number of transactions whose volume equals a that occured before t , N a:b t the number of transactions whose volume is between a and b (included), and N a: t the number of transactions whose volume is greater or equal than a. We then consider the following multivariate point processes, and solve the Problem I for these timestamped events: t . This solution is consistent with estimtates in lower dimension. A t = (N 1 t , The solutions we found share the same patterns. We observe that self-excitation is preponderant, followed in importance by the excitation from large volumes. The excitation from large volumes is however lighter when we increase the dimension, this may be a consequence of the `1 norm minimization which aims at AEnding sparse solutions. Our observations are consistent with the results obtained in [START_REF] Rambaldi | The role of volume in order book dynamics: a multivariate hawkes process analysis[END_REF], see this reference for AEnancial interpretations of the results. Note that kernel norm matrix of 21-dimensional model who have been very long to estimate with the Wieher-Hopf based method used in [RBL17], while our method has way lower complexity (comparable to the NPHC's one, see the previous chapter for a full comparison). The approach consisting of minimizing a criterion instead of extracting the information from the third integrated cumulant seems promising. The convexity of the optimization problem is a real advantage compared to the non-convex problem one has to solve to estimate the Hawkes kernel norm matrix using NPHC, see Chapter III. The method developed in this chapter however lacks theory, especially for the choice of the criterion to minimize. As shown in the numerical part, the solution to the Problem I seems to provide better solutions, compared to the two other problems. Such statement could be explained by the similarity between Problem I and the `1 minimization of the compressing sensing problem min x ||x|| 1 s.t. Ax = b. One can indeed prove the exact recovery of the vector x under some assumptions [START_REF] Donoho | Compressed sensing[END_REF]. Technical details Convex hull of the orthogonal group The convex hull of the orthogonal group is the unit ball for the `2 norm. This is a nice exercise that can be solved using simple tools of linear algebra. A proof can be found in [START_REF] Giorgi | Mathematics of optimization: smooth and nonsmooth case[END_REF] for instance. Updates of ADMM steps 6.2.1 Notations We AErst denote the functions used in 2: f 1 (X ) = f (X ), f 2 (X ) = R d £d + (X ), f 3 (X ) = B (X ) and f 4 (X ) = B 2 (X ). We also denote A = C = L °1/2 and B = C °1/2 . After splitting to the right number of variables (so that the update steps of ADMM algorithm for problem 2 write with closed formula), the problem 2 becomes: min X 1 ,X 2 ,X 3 ,X 4 ,Y 1 ,Y 2 f 1 (X 1 ) + f 2 (X 2 ) + f 3 (X 3 ) + f 4 (X 4 ) s.t. Y 1 + Y 2 = C X 1 °X2 = 0 X 3 °X2 = 0 A °1Y 1 °X1 = 0 Y 2 B °1 °X4 = 0 90 6. 
Technical details Update steps Now, the update steps of ADMM algorithm (using the scaled dual form, see [BPC + 11]) write: X t +1 1 = argmin X 1 f 1 (X 1 ) + (Ω/2)||X 1 °X t 2 +U t 2 || 2 F + (Ω/2)||A °1Y t 1 °X1 +U t 4 || 2 F X t +1 2 = argmin X 2 f 2 (X 2 ) + (Ω/2)||X t +1 1 °X2 +U t 2 || 2 F + (Ω/2)||X t 3 °X2 +U t 3 || 2 F X t +1 3 = argmin X 3 f 3 (X 3 ) + (Ω/2)||X 3 °X t +1 2 +U t 3 || 2 F X t +1 4 = argmin X 4 f 4 (X 4 ) + (Ω/2)||Y t 2 B °1 °X4 +U t 5 || 2 F Y t +1 1 = argmin Y 1 ||Y 1 + Y t 2 °C +U t 1 || 2 F + ||A °1Y 1 °X t +1 1 +U t 4 || 2 F Y t +1 2 = argmin Y 2 ||Y t +1 1 + Y 2 °C +U t 1 || 2 F + ||Y 2 B °1 °X t +1 4 +U t 5 || 2 F U t +1 1 = U t 1 + (Y t +1 1 + Y t +1 2 °C ) U t +1 2 = U t 2 + (X t +1 1 °X t +1 2 ) U t +1 3 = U t 3 + (X t +1 3 °X t +1 2 ) U t +1 4 = U t 4 + (A °1Y t +1 1 °X t +1 1 ) U t +1 5 = U t 5 + (Y t +1 2 B °1 °X t +1 4 ) Proximal operators The previous update steps can be written using proximal operators of the functions f 1 , f 2 , f 3 and f 4 . Proximal operator of f 2 This one is straightforward: prox f 2 (X ) = (X ) + = (max(x i j , 0)) i j ) Proximal operator of f 3 Using techniques given in [START_REF] Boyd | Convex optimization[END_REF], one easily shows that First, it is easier to compute the proximal operator of f AE 3 = ae 1 (•)∑AE for AE < 1. Let's SVD some matrix X 2 R d £d : X = U SV > where U ,V 2 O d (R) and S = diag(ae 1 ,...,ae d ) is diagonal. Using techniques given in [BV04], one easily shows that prox f AE 3 (X ) = d X i =1 (ae i °(ae i °AE) + ) u i v > i Proximal operator of f 4 This one is a well-known projection too: prox f 4 (X ) = X 1 {||X || 2 ∑1} + X ||X || 2 1 {||X || 2 >1} . Final algorithm X t +1 1 = prox f 1 /(2Ω) °(X t 2 °U t 2 + A °1Y t 1 +U t 4 )/2 ¢ X t +1 2 = (1/2) °X t +1 1 +U t 2 + X t 3 +U t 3 ¢ + X t +1 3 = prox f AE 3 °X t +1 2 °U t 3 ¢ X t +1 4 = prox f 4 °Y t 2 B °1 +U t 5 ¢ Y t +1 1 = (I d + A °2) °1( A °1(X t +1 1 °U t 4 ) °Y t 2 +C °U t 1 ) Y t +1 2 = ((X t +1 4 °U t 5 )B °1 °Y t +1 1 +C °U t 1 )(I d + B °2) °1 U t +1 1 = U t 1 + (Y t +1 1 + Y t +1 2 °C ) U t +1 2 = U t 2 + (X t +1 1 °X t +1 2 ) U t +1 3 = U t 3 + (X t +1 3 °X t +1 2 ) U t +1 4 = U t 4 + (A °1Y t +1 1 °X t +1 1 ) U t +1 5 = U t 5 + (Y t +1 2 B °1 °X t +1 4 ) Introduction With the large number of empirical studies devoted to high frequency AEnance, relying on datasets of increasing size and quality, many progresses have been made during the last decade in the modelling and understanding the microstructure of AEnancial markets. Within this context, as evidenced by this special issue, Hawkes processes have become a very popular class of models. The main reason is that they allow one to account for the mutual inØuence of various types of events in a simple and parsimonious way through a conditional intensity vector. Hawkes processes have been involved in many di erent problems of high frequency AEnance ranging from the simple description of the temporal occurrence of market orders or price changes ([Bow07, HB14, FS12]), to the complex modelling of the arrival rates of various kinds of events in a full order book model ([Lar07, Tok11, JA13]). We refer to [START_REF] Bacry | Hawkes processes in AEnance[END_REF] for a recent review. A multivariate Hawkes model of dimension d is characterized by a d£d matrix of kernels, whose elements ¡ i j (t ) account for the inØuence, after a lag t , of events of type j on the arrival rate of events of type i . 
The challenging issue of the statistical estimation of the shape of these excitation kernels has been addressed by many authors and various solutions have been proposed whose performances (accuracy and computational complexity) strongly depend on the empirical situation one considers. Indeed, if non-parametric methods like e.g. the EM method ([LM11]), the Wiener-Hopf method ([BM14a, BM16, BJM16]) or the contrast function method ([RBRGTM14]) can be applied in low dimensional situations with a large number of events, one has to consider parametric penalized alternatives (like e.g., in [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF][START_REF] Yang | Mixture of mutually exciting processes for viral di usion[END_REF]) when one has to handle a system of very large dimension with a relative low number of observed events (as, e.g., when studying events associated with the node activities of some social networks). As far as (ultra) high frequency AEnance is concerned, the overall number of events can be very large. These events occur in a very correlated manner (with long-range correlations) and the system dimensionality can vary from low to moderately high. In a series of recent papers, Bacry et al. have shown that the non parametric Wiener-Hopf method provides reliable estimations in order to describe, within a multivariate Hawkes model, various aspects of level-I order book Øuctuations: the coupled dynamics of mid-price changes, market and limit order arrivals ([BM14a, BJM16]), the impact of market orders ( [START_REF] Bacry | Market impacts and the life cycle of investors orders[END_REF]) or the interplay between book orders of di erent sizes ([RBL17]). However, if one wants to account for systems of larger dimensionality by considering for instance a wider class of event types or the book events associated with a basket (e.g. a couple) of assets, then the Wiener-Hopf method (or any other similar non-parametric method) may reach its limits as respect to both computational cost and estimation accuracy. On the other hand, a parametric approach can lead to strong bias in the estimated inØuences between components. For this reason, in the present paper, we propose to estimate Hawkes models of order book data using the faster and simpler non-parametric approach introduced in [ABG + 17]. This method focuses only on the global properties of the Hawkes process. More precisely, it aims at estimating directly the matrix of the kernel norms (also called the branching ratio matrix) without paying attention to the precise shape of these kernels. As recalled in the next section, this matrix does not bring all the information about the process dynamics, but is su cient to disentangle the complex interactions between various type of events and estimate the magnitude of their self-and cross-excitations. Moreover, it allows one to estimate the amplitude of Øuctuations of endogenous origin as compared to those of exogenous sources. The method we propose can be considered as the multivariate extension of the approach pioneered by [START_REF] Hardiman | Branching-ratio approximation for the self-exciting hawkes process[END_REF] that proposed to estimate the kernel norm of a one-dimensional Hawkes model directly from the integral of the empirical correlation function. 
Unfortunately their approach cannot be immediately extended to a multivariate framework because it does not bring a su cient number of constraints as compared to the number of unknown parameters. The method of [ABG + 17] circumvents this di culty by taking into account the AErst three integrated cumulant tensors of Hawkes process. The paper is organized as follows: in Section 2 we provide the main deAEnitions and properties of multivariate Hawkes processes and we introduce the main notations we use all along the paper. The cumulant method of Achab et al. is described and illustrated in Section 3. In Section 4 we estimate the matrix of kernel of Hawkes models for level-I book 2. Hawkes processes: deAEnitions and properties events associated with 4 di erents very liquid assets, namely DAX, Euro-Stoxx, Bund and Bobl future contracts. We AErst consider the 8-dimensional model proposed in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF] in order to compare our method to the former results obtained with a computationally more complex Wiener-Hopf method. We then show that the cumulant approach can easily be extended to a 12-dimensional model where all types of level-I book events are considered. Within this model, we uncover all the relationships between these types of events and we study the daily amplitude variations of exogenous intensities. In Section 5 we investigate the correlation between two assets by considering the events of their order book within a 16-dimensional model. This allows us to discuss the inØuence of both their tick size and their degree of reactivity with respect to the impact of their book events on each other. Section 6 contains concluding remarks while some technical details are provided in Appendix. Hawkes processes: deAEnitions and properties In this section we provide the main deAEnitions and properties of multivariate Hawkes processes and set the notations we need all along the paper. Multivariate Hawkes processes and the branching ratio matrix G A multivariate Hawkes process of dimension d is a d -dimensional counting processes N t with a conditional intensity vector ∏ t that is a linear function of past events. More precisely, ∏ i t = µ i + d X j =1 Z t °1 ¡ i j (t °s) d N j s (1) where µ i represents the baseline intensity while the kernel ¡ i j (t ) quantiAEes the excitation rate of an event of type j on the arrival rate of events of type i after a time lag t . In general it is assumed that each kernel is causal and positive, meaning that Hawkes processes can only account for mutual excitation e ects since the occurrence of some event can only increase the future arrival intensity of other events. In order to consider the possibility of inhibition e ects, one can allow kernels to take negative values. In that case, we have to consider expression (1) only when it provides a positive result while the conditional intensity is assumed to be zero otherwise. Rigorously speaking, such non-linear variant of Eq. (1) cannot be handled as simply as the original Hawkes process ([BM96]) but, as empirically shown in e.g. 
[START_REF] Reynaud-Bouret | Goodness-of-AEt tests and nonparametric adaptive estimation for spike train analysis[END_REF] or [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF], if the probability that ∏ i t < 0 is small enough, one can safely consider the model as linear so that all standard expressions provide accurate results. In the following we will suppose that we are in this case and we don't necessarily impose that the kernels ¡ i j (t ) are positive functions. Let us deAEne the matrix G as the matrix whose coe cients are the integrals of the kernels ¡ i j (t ) (that are supported by R + ): G i j = Z +1 0 ¡ i j (t )d t . (2) Let us remark that, as it can directly be seen from the cluster representation of Hawkes processes ([HO74]), G i j represents the mean total number of events of type i directly triggered by an event of type j . For that reason, in the literature, the matrix G is also referred to as the branching ratio matrix ([HB14]). Notice that since the kernels ¡ i j (t ) are not necessarily non negative functions, G i j does not in general correspond to the L 1 norm of ¡ i j . For the sake of simplicity, though this is not technically correct, we shall often refer to the matrix G as the "matrix of kernel norms" or more simply the "norm matrix". If kGk stands for the largest eigenvalue of G, it is well known that a su cient condition for the intensity process ∏ t to be stationary is that kGk < 1. In the following we will always consider this condition satisAEed. One can then deAEne the matrix R as: R = (I d °G) °1, (3) where I d denotes the identity matrix of dimension d . Let § denote the mean intensity vector: § = E(∏ t ) , (4) so that the ratio µ i § i represents the fraction of events of type i that are of exogenous origin. One can easily prove that § and µ are related as: § = R µ (5) If one deAEnes the matrix ™ as: ™ = GR = R °Id , (6) then ™ i j represents the average number of events of type i triggered (directly or indirectly) by an exogenous event of type j . When one analyzes empirical data within the framework of Hawkes processes, the previous remarks allow one to quantify causal relationships between events in the sense of Granger, i.e., within a well deAEned mathematical model. In that respect, the coe cients of the matrices G or ™ can be read as (Granger-)causality relationships between various types of events and used as a tool to disentangle the complexity of the observed Øow of events occurring in some experimental situations [START_REF] Eichler | Graphical modeling for multivariate hawkes processes with nonparametric link functions[END_REF]). Let us emphasize that such causal implications are just a matter of interpretation of data within a speciAEc model (namely a Hawkes model) and should simply be considered as a convenient and parsimonious way to represent that data. They should not, in any way, be understood as a "physical" causality reØecting their "real nature". Integrated Cumulants of Hawkes Process The NPHC algorithm developed in [ABG + 17] and described in Sec. 3 below, enables the direct estimation of the matrix G from a single or several realizations of the process. It relies on the computation of low order cumulant functions whose expressions are recalled below. 
Given 1 ≤ i, j, k ≤ d, the first three integrated cumulants of the Hawkes process can, thanks to stationarity, be defined as follows:

    \Lambda^i \, dt = E(dN^i_t)     (7)

    C^{ij} \, dt = \int_{\tau \in R} \Big( E(dN^i_t \, dN^j_{t+\tau}) - E(dN^i_t) E(dN^j_{t+\tau}) \Big)     (8)

    K^{ijk} \, dt = \iint_{\tau, \tau' \in R^2} \Big( E(dN^i_t \, dN^j_{t+\tau} \, dN^k_{t+\tau'}) + 2 E(dN^i_t) E(dN^j_{t+\tau}) E(dN^k_{t+\tau'})
                    - E(dN^i_t \, dN^j_{t+\tau}) E(dN^k_{t+\tau'}) - E(dN^i_t \, dN^k_{t+\tau'}) E(dN^j_{t+\tau}) - E(dN^j_{t+\tau} \, dN^k_{t+\tau'}) E(dN^i_t) \Big) ,     (9)

where Eq. (7) is the mean intensity of the Hawkes process, the second-order cumulant (8) is the integrated covariance density matrix, and the third-order cumulant (9) measures the skewness of N_t. Using the martingale representation ([BM16]) or the Poisson cluster process representation ([JHR15]), one can obtain an explicit relationship between these integrated cumulants and the matrix R (and therefore the matrix G, thanks to Eq. (3)). Some straightforward computations (see [ABG+17]) lead to the following identities:

    \Lambda^i = \sum_{m=1}^{d} R^{im} \mu^m     (10)

    C^{ij} = \sum_{m=1}^{d} \Lambda^m R^{im} R^{jm}     (11)

    K^{ijk} = \sum_{m=1}^{d} \big( R^{im} R^{jm} C^{km} + R^{im} C^{jm} R^{km} + C^{im} R^{jm} R^{km} - 2 \Lambda^m R^{im} R^{jm} R^{km} \big) .     (12)

3 The NPHC method

In this section we briefly recall the main lines of the recent non-parametric method proposed in [ABG+17], which leads to a fast and robust direct estimation of the branching ratio matrix G without estimating the shape of the kernel functions. The method is based on the remark that, as shown in [JHR15] and as can be seen in Eqs. (10), (11) and (12), the integrated cumulants of a Hawkes process can be written explicitly as functions of R. The NPHC method is a moment method that directly exploits these equations to recover R, and thus G.

Estimation of the integrated cumulants

Let us first introduce explicit formulas to estimate the three moment-based quantities listed in the previous section, namely \Lambda, C and K. In what follows, we assume there exists H > 0 such that truncating the domain of integration of the quantities appearing in Eqs. (8) and (9) from (-\infty, +\infty) to [-H, H] introduces only a small error. This amounts to neglecting tail effects in the covariance density and in the skewness density, and it is a good approximation if (i) each kernel \phi^{ij}(t) is essentially supported by [0, H] and (ii) the spectral norm ||G|| is less than 1. In this case, given a realization of a stationary Hawkes process {N_t : t \in [0, T]}, as shown in [ABG+17], we can write the estimators of the first three cumulants (7), (8) and (9) as

    \hat{\Lambda}^i = \frac{1}{T} \sum_{\tau \in Z^i} 1 = \frac{N^i_T}{T}     (13)

    \hat{C}^{ij} = \frac{1}{T} \sum_{\tau \in Z^i} \Big( N^j_{\tau+H} - N^j_{\tau-H} - 2 H \hat{\Lambda}^j \Big)     (14)

    \hat{K}^{ijk} = \frac{1}{T} \sum_{\tau \in Z^i} \Big( N^j_{\tau+H} - N^j_{\tau-H} - 2 H \hat{\Lambda}^j \Big) \cdot \Big( N^k_{\tau+H} - N^k_{\tau-H} - 2 H \hat{\Lambda}^k \Big)
                     - \frac{\hat{\Lambda}^i}{T} \sum_{\tau \in Z^j} \sum_{\tau' \in Z^k} (2H - |\tau' - \tau|)^+ + 4 H^2 \hat{\Lambda}^i \hat{\Lambda}^j \hat{\Lambda}^k ,     (15)

where Z^i denotes the set of event times of type i. In practice, the filtering parameter H is selected by (i) computing estimates of the covariance density at several lags t (the pointwise covariance density at lag t can be estimated by \frac{1}{hT} \sum_{\tau \in Z^i} (N^j_{\tau+t+h} - N^j_{\tau+t-h} - 2h \hat{\Lambda}^j) for a small h), (ii) assessing the characteristic time \tau_c after which the covariance density becomes negligible, and (iii) setting H to a multiple of \tau_c, for instance H = 5 \tau_c.

The NPHC algorithm

The covariance C only provides d(d+1)/2 independent coefficients and is therefore not sufficient to uniquely identify the d^2 coefficients of the matrix G.
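Before turning to the matching step, note that the estimators (13)-(14) translate directly into code. The sketch below is ours and assumes that event times are available as sorted NumPy arrays; the third-order estimator (15) follows the same counting pattern and is omitted for brevity.

```python
import numpy as np

def estimate_cumulants_12(events, T, H):
    """Estimators (13)-(14) of the integrated cumulants Lambda and C from one
    realization of a d-dimensional Hawkes process observed on [0, T].

    events : list of d sorted arrays of event times (the sets Z^i)
    H      : truncation parameter of the integration domain
    """
    d = len(events)
    Lam = np.array([len(ev) / T for ev in events])        # Eq. (13)
    C = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            # N^j(tau + H) - N^j(tau - H), counted around every type-i event tau
            counts = (np.searchsorted(events[j], events[i] + H)
                      - np.searchsorted(events[j], events[i] - H))
            C[i, j] = np.sum(counts - 2.0 * H * Lam[j]) / T   # Eq. (14)
    return Lam, C
```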
In order to obtain a sufficient number of constraints, the NPHC approach relies on using the whole covariance C together with a restricted subset of the (d^3 + 3d^2 + 2d)/6 independent third-order cumulant components, namely the d^2 coefficients K^c = {K^{iij}}_{1 \le i, j \le d}. The estimator of R is then defined as \hat{R} \in \arg\min_R L(R), where

    L(R) = (1 - \kappa) ||K^c(R) - \hat{K}^c||_2^2 + \kappa ||C(R) - \hat{C}||_2^2 ,     (16)

where || \cdot ||_2 stands for the Frobenius norm, while \hat{K}^c and \hat{C} are the respective estimators of K^c and C defined in Eqs. (15) and (14) above. It is noteworthy that this mean-square-error approach can be seen as a particular instance of the Generalized Method of Moments (GMM), see [Hal05], [Han82]. Although the GMM framework allows one to determine the optimal weighting matrix involved in the loss function, this approach is unusable in practice because the associated complexity is too high. Indeed, since we have d^2 parameters, this matrix has d^4 coefficients, and GMM calls for computing its inverse, leading to a O(d^6) complexity. Instead, we use the loss function (16) in which, so that the two terms are of the same order, they are rescaled using \kappa = ||\hat{K}^c||_2^2 / (||\hat{K}^c||_2^2 + ||\hat{C}||_2^2). We refer to Appendix 1 for an explanation of how \kappa is related to the weighting matrix. Finally, the estimator of G is straightforwardly obtained as \hat{G} = I_d - \hat{R}^{-1}, by inverting Eq. (3). The authors of [ABG+17] proved the consistency of the so-obtained estimator \hat{G}, i.e. its convergence in probability to the true value when the observation time T goes to infinity.

Let us mention that, when the method is applied to financial time series, the number of events is generally large compared with d (i.e., n = max_i |Z^i| \gg d), so the matrix inversion in the previous formula is not the bottleneck of the algorithm. Indeed, it has complexity O(d^3), which is cheap compared with the computation of the cumulants, which is O(n d^2). Thus, assuming the loss function (16) is minimized after N_iter iterations, the overall complexity of the algorithm is O(n d^2 + N_iter d^3). The authors of [ABG+17] compared the complexity of their algorithm with those of other state-of-the-art methods, namely the ordinary differential equation based (ODE) algorithm of [ZZS13], the Sum of Gaussians based algorithm of [XFZ16], the ADM4 algorithm of [ZZS13], and the Wiener-Hopf based algorithm of [BM16]. The complexity of NPHC is smaller because NPHC directly estimates the kernels' integrals, while the other methods go through the estimation of the kernel functions themselves.

[Figure V.1: The two different kernels used to simulate the datasets: (a) the rectangular kernel; (b) the power-law kernel on a log-log scale, with slope approximately -(1+\gamma).]
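To fix ideas, here is a minimal sketch of the estimator defined by the loss (16), assuming estimated cumulants \hat{\Lambda}, \hat{C} and \hat{K}^c are available (for instance from the estimators above). The use of scipy's L-BFGS-B, the helper names, and the starting point R_0 = \hat{C}^{1/2} diag(\hat{\Lambda})^{-1/2} are our illustrative choices; the reference implementation released with [ABG+17] relies on TensorFlow and may differ in these details.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import minimize

def nphc_estimate(Lam_hat, C_hat, Kc_hat):
    """Sketch of NPHC: recover R by matching the model cumulants (11)-(12)
    to their estimates through the loss (16), then return G = I - R^{-1}."""
    Lam_hat = np.asarray(Lam_hat, dtype=float)
    C_hat, Kc_hat = np.asarray(C_hat, dtype=float), np.asarray(Kc_hat, dtype=float)
    d = len(Lam_hat)
    kappa = (np.linalg.norm(Kc_hat) ** 2
             / (np.linalg.norm(Kc_hat) ** 2 + np.linalg.norm(C_hat) ** 2))

    def model_cumulants(R):
        C = (R * Lam_hat) @ R.T                     # Eq. (11): C_ij = sum_m Lam_m R_im R_jm
        Kc = ((R ** 2) @ C.T + 2 * (R * C) @ R.T
              - 2 * (R ** 2 * Lam_hat) @ R.T)       # Eq. (12) restricted to K_{iij}
        return C, Kc

    def loss(r_flat):
        C, Kc = model_cumulants(r_flat.reshape(d, d))
        return ((1 - kappa) * np.linalg.norm(Kc - Kc_hat) ** 2
                + kappa * np.linalg.norm(C - C_hat) ** 2)

    R0 = np.real(sqrtm(C_hat)) @ np.diag(1.0 / np.sqrt(Lam_hat))   # starting point
    R = minimize(loss, R0.ravel(), method="L-BFGS-B").x.reshape(d, d)
    return np.eye(d) - np.linalg.inv(R)             # G = I - R^{-1}
```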
Numerical experiments

As mentioned above, the NPHC algorithm is non-parametric and provides an estimation of the integrals of the kernels regardless of their shapes. In order to illustrate the stability of our method with respect to the kernel shape, we simulated two datasets with Ogata's thinning algorithm, introduced in [Oga81], using the open-source library tick. Each dataset corresponds to a different kernel shape (but with the same norm), a rectangular kernel and a power-law kernel, both represented in Figure V.1:

    rectangular kernel:   \phi(t) = \alpha \beta \, 1_{[0, 1/\beta]}(t - \gamma)     (17)

    power-law kernel:     \phi(t) = \alpha \beta \gamma (1 + \beta t)^{-(1+\gamma)}     (18)

In both cases, \alpha corresponds to the integral of the kernel, 1/\beta can be regarded as a characteristic time scale, and \gamma corresponds to the scaling exponent for the power-law kernel and to a delay parameter for the rectangular one. We consider a non-symmetric 10-dimensional block matrix G with 3 non-zero blocks, where the parameters \alpha = 1/6 and \gamma = 1/2 take the [...]

[...] order and a trade is generated. It is therefore possible to obtain a list of the orders that were submitted, complete with their time, type (limit, cancel or market order), volume and price. The timestamp precision is one microsecond and the timestamps are set directly by the exchange.

In this work we are interested in disentangling the interactions of the different types of events occurring at the first level of the order book. To this end, we distinguish the following event types:

• T+ (T−): upwards (downwards) mid-price movement triggered by a market order;
• L+ (L−): upwards (downwards) mid-price movement triggered by a limit order;
• C+ (C−): upwards (downwards) mid-price movement triggered by a cancel order;
• Ta (Tb): market order at the ask (bid) that does not move the mid price;
• La (Lb): limit order at the ask (bid) that does not move the mid price;
• Ca (Cb): cancellation order at the ask (bid) that does not move the mid price.

Additionally, we introduce the symbols P+ (P−) to denote an upwards (downwards) mid-price movement irrespective of its origin. In Table V.1 we report the average number of events per day (from 08:00 am to 10:00 pm) for each asset and each type.

Table V.1: Average number of events in thousands per type in a trading day (from open at 08:00 to closing at 22:00 Frankfurt time) for the four assets considered.

          T+     T-     L+     L-     C+     C-     Ta     Tb      La      Lb      Ca      Cb
  DAX    11.9   11.9   21.8   21.9   10.1   10.1   11.6   11.7    80.0    79.5    97.3    96.1
  ESXX    2.6    2.6    3.5    3.6    0.9    0.9   16.4   16.5   176.0   174.7   172.4   170.8
  Bund    3.2    3.2    4.0    4.0    0.8    0.8   14.5   14.7   125.4   125.0   111.5   110.7
  Bobl    1.1    1.1    1.5    1.5    0.5    0.5    6.1    6.1    86.5    86.8    81.6    81.4

We remark that all four assets are extremely active securities, with an average of more than 300,000 events per day. One characteristic that strongly influences the order book dynamics at short time scales is the ratio of the tick size to the average spread. When this ratio is close to one (resp. much smaller than one), the asset is said to be a "large tick asset" (resp. a "small tick asset") (see, e.g., [DR15]). In our dataset, all assets are large-tick assets (the spread is equal to one tick more than 95% of the time) except for the DAX future, which is a small-tick one. As evidenced by Table V.1, the price changes much less frequently on large tick assets.
One can also remark that the quantity available at the best quotes tends to be proportionally much larger on large tick assets. These microstructural characteristics will be reflected in our analysis.

[...] method when the focus is solely on the kernel interaction matrix. Indeed, in order to estimate the kernel norm matrix with the Wiener-Hopf method, the full kernel functions have to be estimated first and then numerically integrated. The NPHC method thus represents a much faster alternative, as it does not require the estimation of d^2 functions but directly estimates their integrals. Besides the speed gain, the gain in complexity allows NPHC to scale much better when the dimension increases, i.e., when more detailed models are used.

A 12-dimensional mono-asset model

By directly estimating the norms of the kernels rather than the whole kernel functions, the NPHC method can be used to investigate systems of greater dimension. In this section we extend the model of Section 4.2 to 12 dimensions by separating the types of events that lead to a price move. The 12 event types we consider are thus T+ (T−), L+ (L−), C+ (C−), Ta (Tb), La (Lb), Ca (Cb). We then apply the NPHC algorithm to estimate the branching ratio matrix. When not otherwise specified, we set H = 500 s. To further assess the validity of our methodology and the impact of time-of-day effects, we first estimate the model using different time slots within the trading day. In Section 4.3.2 we also check the robustness of our results with respect to the choice of the parameter H.

Kernel stability during the trading day

We ran our method for the DAX future on the 12-dimensional point process detailed above on different subintervals of the trading day. More precisely, we divided each trading day into 7 slots with edges at 08:00, 10:00, 12:00, 14:00, 16:00, 18:00 and 22:00. We then estimated the 12-dimensional model described above on each slot separately, averaging over all 338 trading days available in our dataset. The results are remarkable in that the kernel norm matrix appears to be very stable during the trading day, as illustrated in Figure V.4 (we checked that this also holds if we set H = 1 s).

The NPHC method outputs the estimated matrix \hat{R} (and then \hat{G}), from which one can obtain an estimate of \mu using the relation (5) that links R and the mean intensity \Lambda, namely \hat{\mu} = \hat{R}^{-1} \hat{\Lambda}. In the right panel of Figure V.5 we plot the values of \hat{\mu} obtained with this relation for the Ta/b, La/b and Ca/b components. We consider the kernel norm matrix as constant within each two-hour slot and estimate the average intensity on 15-minute non-overlapping windows. Moreover, for each type of event we show the average of the bid/ask components. For comparison, in the left panel of Figure V.5 we show the empirical intraday pattern obtained for each component. We remark that the values of \mu obtained with our procedure vary during the day and roughly follow the intraday curve of the respective components. Let us notice that \mu^i / \Lambda^i, the fraction of exogenous events, is of the order of a few percent. This is fully consistent with what was found in [BJM16] and means, within the Hawkes framework, that most of the observed order book dynamics is strongly endogenous. For the price-moving components the values of \Lambda are of the order of 1 s^{-1}, while the results for \mu are noisier, similarly to those of Ta/b.
This analysis confirms the result previously observed in [BM14] that the kernels are stable during the day, and that time-of-day effects are well captured by the baseline intensity, at least as long as we are mainly concerned with the high-frequency dynamics of a very liquid asset, as is the case here.

Analysis of the G matrix: unveiling mutual interactions between book events

Having established that the estimated kernel matrix is stable with respect to time-of-day effects, we now examine its structure in more depth. Figure V.6 shows the result of the estimation of the matrix G over the whole trading day for the DAX future. The branching ratio matrix in the left panel is estimated with H = 1 s while the right panel corresponds to H = 500 s. Let us recall that both horizons are several orders of magnitude larger than the typical inter-event time. Concerning the differences between the two matrices, we note that certain inhibitory effects that are visible for H = 1 s are less intense or disappear when H = 500 s is used. This most notably happens for the elements T+ → T+ and T+ → T−, and similarly for L+ → L+/− and C+ → C+/−, which suggests that, when we look at longer-scale correlations, the self-exciting behavior (i.e. trades are followed by more trades) tends to prevail over the high-frequency mean-reverting effect.

Apart from these differences, we can make some observations that are valid in both cases. In particular, two main interaction blocks stand out. The first is the upper left corner, which concerns interactions between price-moving events, where two anti-diagonal bands are prominent. The second is the bottom right corner, which has a strong diagonal structure. The blocks involving interactions between price-moving and non-price-moving events present much smaller values. In what follows, we first discuss in more depth the effects of price-moving events on other events, then those of non-price-moving ones. We also remark that the spectral norm of the estimated matrices G is close to, while remaining below, 1 (e.g. 0.98 for the DAX with H = 500 s). This is in line with what was found in [BJM16] and with the criticality of financial markets highlighted in [HBB13]. Before entering into more detail, let us remark that in both cases the expected up/down (+/−) and bid/ask (b/a) symmetries are well recovered in our results. Therefore, to lighten notation and facilitate the exposition, we will comment on one side only. More precisely, when discussing the effects of price moves we will refer only to the upward ones (T+/L+/C+), and when discussing the effects of liquidity changes we will focus on ask-side events (Ta/La/Ca).

Effect of price-moving events

As noted above, the most relevant interactions involving T+ are T+ → L+ and T+ → L−, the mean-reverting one (T+ → L−) being more intense. When a market order consumes the liquidity available at the best ask, two main scenarios can occur for the mid price to change again: either the consumed liquidity is replaced, reverting the price back (mean-reverting scenario, highly probable), or the price moves up again and a new best bid is created.
Market orders that move the mid price also have an inhibitory effect at short time scales on subsequent price-moving trades (T+ → T+ is negative for H = 1 s). Indeed, once a market order consumes the liquidity available at the best quote, it is unlikely that the price will be moved in the same direction by other market orders, as the price has become less favorable. We also note a generally inhibitory effect of T+ on price-moving cancel orders, which can be linked to a mechanical effect: liquidity that has been consumed by the market order cannot be canceled anymore. The same kind of dynamics is at play in the interactions L+ → T+ and L+ → T−, with the roles inverted. Again, the mean-reverting effect L+ → T− appears to be much more probable. A strong mean-reverting effect is found in the block L+ → C−. This is possibly the signature of high-frequency strategies whereby agents place limit orders in the spread and cancel them shortly thereafter. Concerning C+ events, the main feature lies in the block C+ → L−, where we notice the same anti-diagonal dominance found for the block L+ → C−. Again, we can suppose that when a limit order in the spread is removed it is often quickly replaced by market participants.

Finally, the effect of price-moving events on non-price-moving ones can be summarized by two main effects. The first is a trend-following/order-splitting effect, by which e.g. trades at the ask are likely to be followed by more trades in the same direction (T+ → Ta), and similarly for limit (L+ → Lb) and cancel (C+ → Ca) orders. The second is the shift in liquidity triggered by a price change. A trade at the ask that moves the mid price upward triggers limit orders on the opposite side (T+ → Lb). This can be understood using a latent price argument ([RR10]): it is well known that there are more limit orders far from the latent price. Right after the mid price goes up, the latent price is expected to be closer to the new best ask than to the best bid, so the limit order flow is expected to be higher at the best bid than at the best ask.

Effect of non-price-moving events

For all events Ta, La and Ca the most visible feature is the strong self-exciting interaction. This has been confirmed in several works ([BJM16], [RBL17]) and can be traced back to order-splitting strategies and herding behaviors. Signatures of typical trading patterns can also be seen in the kernels La → Ca and La → Cb, where the positive value of the kernel arises from agents canceling and replacing their limit orders, with or without switching sides. We also note the positive effects Ta → T+, La → T− and Ca → T+. All these effects, as well as the analogous ones on C+/− and L+/−, reflect the fact that changes in the imbalance influence the probability of a subsequent price move. Thus, when the queue at the best ask decreases, an upward price move becomes more likely, and vice versa. These effects are much more relevant on a small tick asset (DAX) than on a large tick asset (Bund), where, the queues being larger, their influence is marginal. We performed the same analysis on the Bund (see Figure V.7).
The main differences compared with the DAX are that the effects between price-moving events are much more intense, while the effects of non-price-moving events on price-moving ones (and vice-versa) are much less pronounced; indeed they are barely visible in Figure V.7. This can basically be seen as a simple consequence of the Bund future being a large tick asset, while the DAX is a small tick one. Price movements on the former are therefore much less frequent, but when they happen their effects are more marked.

Analysis of the \Psi matrix: the fingerprint of meta-orders

As discussed in Section 2, the elements of the matrix \Psi quantify the total effect, direct and indirect, of an event of type j on events of type i. More precisely, thanks to the branching process structure, we can interpret \psi^{ij} as the mean number of events of type i generated by a single exogenous ancestor of type j. We plot the estimated matrices \Psi for the DAX and the Bund in Figure V.8. We note that an exogenous limit or cancel event generates a large number of limit and cancel events and, to a lesser extent, trade events. This can be read as the signature of meta-orders. Indeed, if an agent wants to sell a large number of contracts [6], he will place a meta-order, i.e., he will optimize the overall cost by dividing this large order into several smaller orders. The overall optimization will result in many sell limit/cancel orders La, Ca and, as few as possible, sell market orders Tb (the cost of a market order is on average higher than that of a limit order). The same description explains why an exogenous sell market order Tb generates mainly sell limit and cancel orders La, Ca as well as other sell market orders Tb. Due to the much lower values of the exogenous intensities for price-moving events, the left part of the \Psi matrix is noisier. Nevertheless, at least in the DAX case, we also note for the price-moving components the prevalence of the L+ → L+ and L+ → C− elements, which are the price-moving counterparts of the effect described for La. Finally, we also remark that, although we noted several inhibition effects in the matrices G, the elements of \Psi are non-negative. This suggests that most inhibition effects are short-lived and that the overall effect of an event arrival is an increase of the total intensity. This is in line with what was found in [BJM16] and [RBL17], where the inhibition effects were shown to be mostly concentrated around the typical market reaction time.

[6] Let us recall that, in our discussion, we only address half of the matrix coefficients, since the discussion of the other half can be obtained using the ask/bid, buy/sell and price up/price down symmetries. Following these lines, we only consider here the case of a selling meta-order.

Within the branching ratio representation of Hawkes processes, \mu^j \psi^{ij} / \Lambda^i represents the fraction of events of type i that have a type-j event as primary ancestor. Along the same lines, we can estimate the fraction of aggressive orders (i.e. all T), as opposed to passive orders (L or C), that is ultimately generated by another aggressive order, as

    \frac{1}{\sum_{i \in \{T^{+/-},\, T^{a/b}\}} \Lambda^i} \sum_{j \in \{T^{+/-},\, T^{a/b}\}} \sum_{i \in \{T^{+/-},\, T^{a/b}\}} \psi^{ij} \mu^j .     (19)
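In practice, this ratio is evaluated directly from the estimated \Psi, \mu and \Lambda. The following minimal sketch (the helper name and the example index positions are ours, purely illustrative) shows one way to compute it.

```python
import numpy as np

def fraction_from_ancestor_group(Psi, mu, Lam, target_idx, ancestor_idx):
    """Eq. (19)-style ratio: among events whose types are in target_idx, the
    fraction whose exogenous (oldest) ancestor has a type in ancestor_idx.
    Relies on mu_j * Psi_ij / Lam_i being the fraction of type-i events with a
    type-j primary ancestor."""
    num = sum(Psi[i, j] * mu[j] for i in target_idx for j in ancestor_idx)
    den = sum(Lam[i] for i in target_idx)
    return num / den

# Hypothetical usage in a 12-type model where the trade types (T+, T-, Ta, Tb)
# occupy, say, positions 0, 1, 6 and 7 of the estimated arrays:
# trades = [0, 1, 6, 7]
# frac = fraction_from_ancestor_group(Psi_hat, mu_hat, Lam_hat, trades, trades)
```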
We find that for both assets this fraction is about 10%, which means that the large majority of market orders have a "passive order" (L or C) as oldest ancestor. We compute the analogous fraction for passive orders and find that, for both assets, more than 96% of the passive orders (L or C) have an oldest ancestor that is itself a passive (L or C) order. This fact is in line with the idea that meta-orders are at the origin of most of the trading activity within the order book.

Multi-asset model

Studying and quantifying the interactions and comovements within a basket of assets is an important topic in finance. Most of these studies focus on return correlation properties in relation with portfolio theory. At very high frequency, the discrete nature of price variations and the asynchronous occurrence of price change events make the correlation analysis trickier and, in order to avoid well-known biases (like the Epps effect), one has to use specific techniques such as the estimator proposed by [HY+05]. Hawkes processes, being naturally defined in continuous time, can represent a complementary tool for the investigation of high-frequency cross-asset dynamics. The idea of capturing the joint dynamics of multiple assets via Hawkes processes has only been considered in a few recent papers. Let us mention the work of [BCT+15], which models the simultaneous cojumps of different assets using a one-dimensional Hawkes process, and a more recent work ([DFZ17]) which focuses on the correlation and lead-lag relationships between the price changes of two assets, in the spirit of [BDHM13].

In this section, we aim at unveiling a more precise structure of the high-frequency cross-asset dynamics by pushing the dimensionality of the model further, so as to include events on two assets simultaneously. We first consider the pair DAX-EURO STOXX and then the pair Bobl-Bund. The pairs of assets considered here are tightly related, as they share exposure to the same risk factors and, in the case of DAX-EURO STOXX, also because the underlying indices actually share a significant part of their components. This is also confirmed by Table V.2, where we report 5-minute return correlations among the considered assets. In this section we consider the same kind of events as in Section 4.2 and therefore have a 16-dimensional model (2 × 8), corresponding to 256 possible interactions. Let us point out that this is quite a large dimension for a non-parametric methodology.

The DAX - EURO STOXX model

In the following, we will denote the events of the DAX order book with the subscript D, while we will use the subscript X for the events of the EURO STOXX order book. The obtained branching ratio matrix is displayed in Figure V.9. We observe that the mono-asset submatrices (the two 8 × 8 block matrices along the diagonal), which present the most relevant effects, have the same structure as the ones already commented on in detail in Section 4.2. Consequently, in this section, we shall focus our discussion on the off-diagonal 8 × 8 submatrices that correspond to the interactions between the two assets. These two submatrices are shown in Figure V.10. Note that colors have been rescaled to highlight their structure.
To keep the notation lighter, we will comment only on the effects of upward price moves and ask-side events, as in the previous section, since we find the +/− and a/b symmetries to be well respected. The most striking feature emerging from Figure V.10 is the very intense relation between same-sign price movements on the two assets. Albeit present in both directions, the norms P+_X → P+_D attain larger values. Another notable aspect is the different effect of price moves and liquidity changes of one asset on the events of the other asset. Price moves on the DAX also have an effect on the flow of limit orders on the EURO STOXX (P+_D → Lb_X and P+_D → Ca_X), whereas EURO STOXX price moves mainly trigger DAX price moves in the same direction (P+_X → P+_D).

An important aspect for understanding this result is the different perceived tick size of the two assets. In the following, whenever convenient, we shall place the discussion within the framework of latent price models (e.g., [RR10]). Within this framework, the latent price refers to an underlying efficient price representing, at any time, some average opinion of market participants about the value of the asset. As noted in Section 4.1, the DAX future is a small-tick asset, while the EURO STOXX future is a large-tick one ([EBK12]). As a consequence, an upward move in the DAX price (P+_D), while signaling that the market latent price has moved slightly upwards, is not sufficient to move the EURO STOXX price by a full tick. However, this move can be perceived in the EURO STOXX through the Lb_X and Ca_X flows, which increase. Indeed, as already mentioned in Section 4.2, it is well known ([RR10]) that there are more limit orders far from the latent price. The latent price went up, so it is now closer to the best ask, and hence the flow of limit (resp. cancel) orders at the best bid (resp. ask) increases. In the opposite direction, a change in the EURO STOXX price is perceived as "large" and triggers price changes in the same direction on the DAX.

Interestingly, we can also note that changes in the latent price of the EURO STOXX trigger price movements on the DAX. For instance, a shift of liquidity at the bid, namely an increase in the arrival flow of limit orders at the bid, which signals that the latent price has moved upwards, has a direct effect on upward price moves on the DAX. This can be seen from the interactions Ta_X → P+_D, Lb_X → P+_D and Ca_X → P+_D. We can summarize our results by saying that price changes and liquidity changes on the DAX mainly influence liquidity (the latent price) on the EURO STOXX, while price changes and liquidity changes on the EURO STOXX tend to trigger price moves on the DAX.

Finally, let us note that the above effects are even more pronounced when we estimate the interaction matrices with a smaller H. In particular, the effects of DAX price movements on T, L, C events on the EURO STOXX become more relevant compared with those on prices. At the same time, while the effect of EURO STOXX price moves on DAX price moves remains strong, the effect of liquidity movements on DAX price movements is comparatively stronger with smaller H. This suggests that these effects are mainly localized at short time scales, while the P+ → P+ ones decay much more slowly in time.
Bobl - Bund

We perform the same analysis on the asset pair of Bobl-Bund futures. Here both assets are large tick assets; however, the Bund is much more actively traded than the Bobl, in the sense that all its order flows are of higher intensity. The cross-asset submatrices are depicted in Figure V.11. As in the previous case, we remark that the elements P+_L → P+_M and P+_M → P+_L reflect the strong correlation observed between the two assets. Price changes in the Bund also have a noticeable effect on limit/cancel order flows in the Bobl, while price changes in the Bobl have little to no effect on the Bund, except for the aforementioned P+_M → P+_L interaction. At the same time, Ta, La, Ca events on the Bobl impact prices on the Bund, while the corresponding events on the Bund have little effect. Comparing this with the case of the DAX-EURO STOXX pair, we can liken the effect of the Bund on the Bobl to that of the DAX on the EURO STOXX, and vice-versa. We argue that the difference in trading frequency between the Bobl and Bund contracts has an effect similar to that of the different tick sizes observed in the previous case. As before, we have an asset, the Bund, which is more "reactive" than the Bobl (its limit/cancel order flows are higher), so a price change of the Bund, indicating a change of the latent price, impacts the limit/cancel flows of the Bobl. In the previous case, the higher "reactivity" of the DAX was due to its smaller tick size.

Conclusion and prospects

In the context of Hawkes processes, the estimation of the matrix of kernel norms is essential, as it gives a clear overview of the dependencies involved in the underlying dynamics. In the context of high-frequency financial time series, non-parametric estimation of the matrix of kernel norms has already proven very fruitful ([BM14, BJM16]), since it provides a very rich summary of the system's interactions and can thus be a valuable tool for understanding a system where many different types of events are present. However, its estimation is a computationally demanding process, since these estimates are computed from a non-parametric pre-estimation of the kernels themselves, i.e., their entire shape and not only their norm. The resulting complexity prevents the estimation from being performed when the dataset is too heavy or (more importantly) when the dimension of the Hawkes process (i.e., the number of different event types considered) is too large. In this work, we presented the newly developed NPHC algorithm ([ABG+17]), which allows one to directly estimate, non-parametrically, the kernel norm matrix of a multidimensional Hawkes process, that is, without going through the kernel-shape pre-estimation step. As of today, it is the only direct non-parametric estimation procedure available in the academic literature. This method can be seen as a Generalized Method of Moments (GMM) that relies on second-order and third-order integrated cumulants. This paper shows that the method successfully reveals the various dynamics between the different (first level) order flows involved in order books. In the context of a single-asset 8-dimensional Hawkes process, we have shown (as a "sanity check") that it is able to reproduce former results obtained using "indirect" methods.
Moreover, the so-obtained gain in complexity allowed us to run a much more detailed analysis (increasing the dimension to 12), separating the different types of events that lead to a mid-price move. This in turn allowed us to obtain a very precise picture of the high-frequency order book dynamics, revealing, for instance, the different interactions that lead to the high-frequency price mean reversion, or those between liquidity takers and liquidity makers, as well as the influence of the tick size on these dynamics. Not least, through the analysis of the matrix \Psi we also detected the signature of meta-orders. We have also successfully used the NPHC algorithm in a multi-asset 16-dimensional framework. It allowed us to unveil very precisely the high-frequency joint dynamics of two assets that share exposure to the same risk factors but have different characteristics (e.g., different tick sizes or different degrees of reactivity). It is noteworthy that our methodology can efficiently highlight these types of dynamics, especially since cross-asset effects are second-order effects compared to mono-asset ones. We conclude by noting that our study left out some relevant information, such as the volume of the orders and the size of the jumps in the mid price. This will be the objective of future work. Moreover, within the methodology presented in this paper, an analysis of baskets of assets (with more than two assets), as well as of multi-agent high-frequency interactions, is currently in progress.

APPENDIX A

In the GMM framework, the weighting matrix is \hat{W}_N(\theta) = \frac{1}{N} \sum_{i=1}^{N} \hat{g}_i(\theta) \hat{g}_i(\theta)^\top, and the objective to minimize is then

    \Big( \frac{1}{N} \sum_{i=1}^{N} \hat{g}_i(\theta) \Big)^\top \big( \hat{W}_N(\theta_1) \big)^{-1} \Big( \frac{1}{N} \sum_{i=1}^{N} \hat{g}_i(\theta) \Big) ,     (20)

where \theta_1 is a constant vector. Instead of multiplying by the inverse weighting matrix, we have decided to divide by the sum of its eigenvalues, which is easily computable:

    Tr(\hat{W}_N(\theta)) = \frac{1}{N} \sum_{i=1}^{N} Tr(\hat{g}_i(\theta) \hat{g}_i(\theta)^\top) = \frac{1}{N} \sum_{i=1}^{N} Tr(\hat{g}_i(\theta)^\top \hat{g}_i(\theta)) = \frac{1}{N} \sum_{i=1}^{N} ||\hat{g}_i(\theta)||_2^2 .

In our case, \hat{g}(R) = \big[ vec[\hat{K}^c - K^c(R)], \, vec[\hat{C} - C(R)] \big]^\top \in R^{2d^2}. Assuming the associated weighting matrix is block-wise, with one block for \hat{K}^c - K^c(R) and the other for \hat{C} - C(R), the sum of the eigenvalues of the first block becomes ||\hat{K}^c||_2^2, and ||\hat{C}||_2^2 for the second. We compute the previous terms with R_1 = 0. All together, the objective function to minimize is

    \frac{1}{||\hat{K}^c||_2^2} ||K^c(R) - \hat{K}^c||_2^2 + \frac{1}{||\hat{C}||_2^2} ||C(R) - \hat{C}||_2^2 .     (21)

Summary of contributions

Part I: Large-scale Cox model

[...]

    \min_{\theta \in R^d} F(\theta) = f(\theta) + h(\theta)  with  f(\theta) = E_\xi[\ell(\theta, \xi)],

where f is a data-fitting term depending implicitly on the observed data, h is a regularization term that imposes some structure on the solution, and \xi is a random variable. Typically, f is a differentiable function with a Lipschitz gradient, whereas h may be non-smooth; typical examples include sparsity-inducing penalties, such as the \ell_1 penalization. First-order optimization algorithms are all variations of Gradient Descent (GD), whose origin goes back to Cauchy [Cau47]. Starting from an initial point \theta_0, this algorithm minimizes a differentiable function f by iteratively applying the following update:

    \theta_{t+1} = \theta_t - \eta_t \nabla f(\theta_t) ,     (1)

where \nabla f(\theta) denotes the gradient of f evaluated at \theta and (\eta_t) is a sequence of step sizes. Stochastic gradient descent (SGD) algorithms focus on the case where \nabla f takes a long time to compute, or cannot be computed at all.
Noting that \nabla f(\theta) can be written as an average, one idea is to approximate the gradient in the update step (1) with a Markov chain Monte Carlo method [AFM17]. For instance, replacing the exact gradient \nabla f(\theta) by its MCMC estimate made it possible to take a big step forward in the training of undirected graphical models [Hin02] and restricted Boltzmann machines [HS06]. This first form of stochastic gradient descent is called Contrastive Divergence in the aforementioned contexts.

SGD algorithms for a uniform distribution

Most statistical learning problems of the form (1) involve, as data-fitting term f, an average over observed points, by virtue of the empirical risk minimization principle [Vap98]. More precisely, the objective function reads

    \min_{\theta \in R^d} F(\theta) = f(\theta) + h(\theta)  with  f(\theta) = \frac{1}{n} \sum_{i=1}^{n} f_i(\theta),

where n is the number of observations and f_i is the loss associated with the i-th observation. In this case, instead of running MCMC to approximate \nabla f, one samples a random integer i uniformly between 1 and n and replaces \nabla f(\theta) by \nabla f_i(\theta) in the update step (1). In the large-scale setting, the computation of \nabla f(\theta) at each update step is the bottleneck of the minimization algorithm, and SGD reduces the computation time. Assuming that computing each \nabla f_i(\theta) costs 1, computing the full gradient \nabla f(\theta) costs n, which means that the update step of SGD is n times faster than that of GD.

[...] with \alpha > 0 is called a linear convergence rate, since the decrease of the error after one iteration is at worst linear. Likewise, convergence rates can be formulated as the total complexity needed to reach a fixed accuracy, that is, the number of iterations after which the difference E f(\theta_t) - f(\theta^*) becomes smaller than \epsilon > 0, multiplied by the complexity per iteration. Gradient Descent reaches accuracy \epsilon after O(\kappa \log(1/\epsilon)) iterations, resulting in a O(n d \kappa \log(1/\epsilon)) complexity, while Stochastic Gradient Descent reaches such accuracy after O(\kappa/\epsilon) iterations and hence has a O(d \kappa / \epsilon) complexity. Recently, various works have improved stochastic gradient descent by using the variance reduction techniques of Monte Carlo methods. The idea is to add a control term to the descent direction in order to improve the bias-variance trade-off in the approximation of the true gradient \nabla f(\theta). These variants also enjoy linear convergence rates, and hence smaller complexities (to reach accuracy \epsilon) than Gradient Descent, since the complexity per iteration of these algorithms is O(d) versus O(nd) for Gradient Descent. They typically achieve a complexity of the form O((n + \kappa) d \log(1/\epsilon)) in the strongly convex case, see [SLRB17, JZ13, DBLJ14, SSZ13].

Point processes

Point processes are a useful mathematical tool to describe phenomena that occur at random locations and/or random times. A point process is a random element whose value is a list of points in a set S. Here we present the results that are useful when the set S is the interval [0, T) and the points are time-stamped events; this special case is sometimes called a temporal point process. The book [DVJ08] is considered the main reference on the theory of point processes. [...] \lambda_t = \lambda > 0. Note that temporal point processes can also be characterized by the distribution of their inter-event times, i.e. the durations between consecutive events. We recall that the inter-event times of a Poisson process with intensity \lambda follow an exponential distribution with parameter \lambda. See page 41 of [DVJ08] for four equivalent ways of defining a temporal point process. Two examples of temporal point processes are treated in this thesis. The first is the point process behind the Cox proportional hazards model: its conditional intensity function allows one to define the hazard ratio, a fundamental quantity in the survival analysis literature, see [ABGK93]. The Cox regression model relates the duration before an event, called a failure, to some covariates.
This model can be reformulated within the framework of point processes [ABGK93]. The second is the Hawkes process, which models how past events increase the probability of future events. Its multivariate version makes it possible to encode a notion of causality between the different nodes. We present the Cox proportional hazards model below, and Hawkes processes in Part II.

The Cox proportional hazards model

Survival analysis studies the duration preceding the occurrence of a particular event, such as death in biological organisms or failure in mechanical systems, and is now widespread in a variety of fields such as biometrics, econometrics and insurance. The variable we study is the waiting time until a well-defined event occurs, and the main goal of survival analysis is to relate the covariates, or features, of a patient to his survival time T. Following point process theory, we define the intensity as the conditional probability that a patient dies immediately after t, given that he was alive before t:

    \lambda(t) = \lim_{h \to 0} \frac{P(t \le T \le t + h \mid t \le T)}{h} .

The most popular approach, for reasons explained below, is the Cox proportional hazards model [Cox72]. The Cox model assumes a semi-parametric form for the hazard rate at time t of patient i, whose features are encoded in the vector x_i \in R^d:

    \lambda_i(t) = \lambda_0(t) \exp(x_i^\top \theta),

where \lambda_0(t) is a baseline hazard rate, which can be regarded as the hazard rate of a patient whose covariates are x = 0. One estimation approach treats \lambda_0 as a nuisance and estimates only \theta by maximizing a partial likelihood [Cox72]. This way of estimating is well suited to clinical studies in which physicians are only interested in the relative effects of the covariates encoded in x on the hazard rate. To do so, one can compute the ratio of the hazard rates of two different patients:

    \frac{\lambda_i(t)}{\lambda_j(t)} = \exp((x_i - x_j)^\top \theta) .

For this reason, the Cox model is said to be a proportional hazards model. However, maximizing this partial likelihood is a difficult problem when dealing with large-scale data (i.e. a large number of observations n) and high dimension (i.e. a large d). To tackle high dimensionality, penalized and sparse approaches have been considered in the literature [Tib96], [T+97], [Goe10]. The problem then becomes the minimization of the negative logarithm of the partial likelihood, f(\theta) = -\ell(\theta), with a penalization h(\theta) that makes the predictor \theta sparse and selects variables. We discuss this approach and the different models in Chapter II. However, no approach is yet available to address the large-scale problem.

SVRG beyond Empirical Risk Minimization

The data used in survival analysis, of the form (y_i, x_i, \delta_i)_{i=1}^{n_{pat}}, contain, for each individual i = 1, ..., n_{pat}, a feature vector x_i \in R^d and an observed time y_i \in R_+ which corresponds to the failure time if \delta_i = 1 or to a right-censored time if \delta_i = 0. If D = {i : \delta_i = 1} is the set of patients for whom a failure time is observed, if n = |D| is the total number of failure times, and if R_i = {j : y_j \ge y_i} is the index set of individuals [...]. Our minimization algorithm is doubly stochastic, in the sense that the gradient steps are performed using stochastic gradient descent (SGD) with variance reduction, and the inner expectations are approximated by a Markov chain Monte Carlo (MCMC) algorithm. We derive conditions on the number of MCMC iterations guaranteeing convergence, and obtain a linear convergence rate under strong convexity and a sublinear rate without this assumption.

Part II: Uncovering Hawkes causality without parametrization

In Chapters III and IV, we study two methods for recovering causality relationships from a multivariate point process. We develop one approach per chapter.
Hawkes processes

In order to model the joint dynamics of several point processes (for instance the timestamps of messages sent by different users of a social network), we consider the multidimensional Hawkes model, introduced in 1971 in [Haw71a] and [Haw71b], with cross-influences between the different processes. By definition, a family of d point processes is a multidimensional Hawkes process if the intensities of all its components can be written as linear regressions on the past of the d processes:

    \lambda^i_t = \mu^i + \sum_{j=1}^{d} \int_0^t \phi^{ij}(t - s) \, dN^j_s .

Denoting by N^{i \leftarrow j}_t the counting process which counts the number of events of type i whose direct ancestor is an event of type j, we know from [BMM15] that

    E[dN^{i \leftarrow j}_t] = g^{ij} E[dN^j_t] = g^{ij} \Lambda^j \, dt .

In the literature, there are two main classes of estimation procedures for Hawkes kernels: parametric and non-parametric. The first assumes a parametrization of the Hawkes kernels -- the most common one assumes that the kernels are exponentially decaying -- and estimates the parameter by maximizing the Hawkes log-likelihood, see for instance [BGM18] or [ZZS13]. The second is based either on the numerical resolution of the Wiener-Hopf equations that relate the Hawkes kernels to the correlation structure of the process (or, equivalently, on approximating the Hawkes process by an autoregressive model and solving the Yule-Walker equations [EDD17]), or on a method of moments via the minimization of the contrast function defined in [RBRGTM14]. In Chapters III and IV, we propose two non-parametric estimation methods that estimate the integrals of the Hawkes kernels from the integrals of the moments of the process. For all the estimation procedures mentioned above, including ours, we need the following stability condition so that the process admits a version with a stationary intensity:

Assumption 1. The spectral norm of G = [g^{ij}] satisfies ||G|| < 1.

A Generalized Method of Moments approach

A recent work [JHR15] proved that the integrated cumulants of Hawkes processes can be expressed as functions of G = [g^{ij}], and provided a constructive method to obtain these expressions. The first approach we develop in this part is a moment-matching method based on the second- and third-order integrated cumulants of the process. To this end, we designed consistent estimators of the first, second and third integrated cumulants of the Hawkes process. Their theoretical
counterparts are polynomials in R = (I - G)^{-1}, as shown in [JHR15]:

    \Lambda^i = \sum_{m=1}^{d} R^{im} \mu^m

    C^{ij} = \sum_{m=1}^{d} \Lambda^m R^{im} R^{jm}

    K^{ijk} = \sum_{m=1}^{d} \big( R^{im} R^{jm} C^{km} + R^{im} C^{jm} R^{km} + C^{im} R^{jm} R^{km} - 2 \Lambda^m R^{im} R^{jm} R^{km} \big) .

A 12-dimensional model of the order book of a single asset

As a first application of the procedure described in Chapter III, we consider the following 12-dimensional point process, a natural extension of the 8-dimensional point process introduced in [BJM16]:

    N_t = (T^+_t, T^-_t, L^+_t, L^-_t, C^+_t, C^-_t, T^a_t, T^b_t, L^a_t, L^b_t, C^a_t, C^b_t), where

• T+ (T−): upwards (downwards) mid-price movement due to a market order;
• L+ (L−): upwards (downwards) mid-price movement due to a limit order;
• C+ (C−): upwards (downwards) mid-price movement due to a cancel order;
• Ta (Tb): market order at the ask (bid) that does not move the mid price;
• La (Lb): limit order at the ask (bid) that does not move the mid price;
• Ca (Cb): cancel order at the ask (bid) that does not move the mid price.

We then use the causal interpretation of Hawkes processes to interpret our solution as a measure of causality between events. Applying the method to this new model revealed the different interactions that lead to high-frequency price mean reversion, and those between liquidity takers and liquidity makers. For instance, one observes the effects of T+ events on other events in the [...]

A 16-dimensional model of the order books of two assets

The non-parametric estimation method introduced in Chapter III is fast for a non-parametric methodology. We then scale the model up in order to take into account events on two assets simultaneously and to unveil a precise structure of the high-frequency cross-asset dynamics. We consider a 16-dimensional model, composed of two 8-dimensional models of the form [...]

Abstract: This thesis is divided into three parts. The first focuses on a new optimization algorithm we have developed. It allows one to estimate the parameter vector of the Cox regression when the number of observations is very large. Our algorithm is based on the SVRG algorithm and uses an MCMC method to approximate the descent direction. We have proved convergence rates for our algorithm and have shown its numerical performance on simulated and real-world datasets. The second part shows that Hawkes causality can be estimated in a non-parametric way from the integrated cumulants of the multivariate point process. We have developed two methods for estimating the integrals of the kernels of the Hawkes process, without making any hypothesis about the shape of these kernels. Our methods are faster and more robust, with respect to the shape of the kernels, compared to the state of the art. We have demonstrated the statistical consistency of the first method, and have shown that the second method can be reduced to a convex optimization problem. The last part highlights the dynamics of the order book thanks to the first non-parametric estimation method introduced in the previous part. We used EUREX futures data, defined new order book models and applied the estimation method to these point processes.
The results obtained are very satisfactory and consistent with an econometric analysis, and prove that the method we have developed makes it possible to extract a structure from data as complex as those arising from high-frequency finance.
However, ADM4 overestimates the integrals on two of the three blocks, while NPHC gives the same value on each block.
Figure III.3: Estimated Ĝ via NPHC on DAX order book data.
Considering a block-wise weighting matrix, one block for K̂^c − K^c(R) and the other for Ĉ − C(R), the sum of the eigenvalues of the first block becomes ‖K̂^c − K^c(R)‖²_2, and ‖Ĉ − C(R)‖²_2 …
Theorem 3. Suppose that (N_t) is observed on R_+, Ŵ_T →^P W, and 1. W is positive semi-definite and W g_0(θ) = 0 if and only if …
Remark 1. … and ||f||_1 = O(1). In practice, we use a constant sequence of weighting matrices: Ŵ_T = I_d.
Proof. Proceed by verifying the hypotheses of Theorem 2.1 from [NM94]. Condition 2.1(i) follows by (i) and by … →^P 0; we should now prove that sup_{θ∈Θ} ‖ĝ_T(θ) − g_0(θ)‖ →^P 0. By Θ compact, it is sufficient to prove that …
… (skewness density) and M^{ijkl}_{u,v,w} (fourth cumulant density) are positive, as polynomials in integrals of ψ^{ab} with positive coefficients. The integrals of the singular parts are positive as well.
Figure IV.1: From left to right: solution of Problem I, solution of Problem II, solution of Problem III, and the ground-truth matrix G. Only the solution to Problem I recovers the three blocks. The two other problems output symmetric matrices, while the ground-truth matrix is not symmetric.
Figure IV.2: Solutions of Problem I for the multivariate point process A_t (left) and B_t (right). We observe a strong self-excitation. These solutions are consistent with the estimated kernel norm matrix in [RBL17].
Figure IV.3: Solution of Problem I for the multivariate point process C_t. This solution is consistent with estimates in lower dimension.
Figure V.1: The two different kernels used to simulate the datasets.
Figure V.3: Kernel norm matrix G estimated with the NPHC method for the DAX future (left) and with the Wiener-Hopf method of [BM16] (right) when the 8-dimensional model described in Section 4.2 is considered.
Figure V.4: Kernel norm matrix G for the DAX future estimated (using NPHC) at three different times: between 08:00 and 10:00 (left), between 12:00 and 14:00 (middle) and between 16:00 and 18:00 (right).
Figure V.5: Estimation of the baseline intensities of each event type within a trading day for the DAX future using 15 min slots. Left panel: empirical intraday pattern measured using market, limit and cancel orders that do not move the price. Right panel: μ values estimated using the NPHC method. All quantities are expressed in s^{-1}.
These two submatrices correspond to the ones lying on the antidiagonal in Figure V.9.
Assuming the associated weighting matrix is block-wise, one block for K̂^c − K^c(R) and the other for Ĉ − C(R), the sum of the eigenvalues of the first block becomes ‖K̂^c − K^c(R)‖²_2, and ‖Ĉ − C(R)‖²_2 …
Figure V.11: Submatrices of the kernel norm matrix G corresponding to the effect of Bund (L) events on Bobl (M) events (left) and vice versa (right).
… becomes smaller than ε > 0, multiplied by the complexity per iteration. The Gradient Descent algorithm will reach accuracy ε after O(κ log(1/ε)) iterations, resulting in a complexity of O(ndκ log(1/ε)), while Stochastic Gradient Descent reaches such accuracy after O(κ/ε) iterations, hence a complexity of O(dκ/ε). Recently, several works have improved stochastic gradient descent by using the variance-reduction techniques of Monte Carlo methods. The idea is to add a control term to the descent direction in order to improve the bias-variance trade-off in the approximation of the true gradient ∇f(θ). These variants also enjoy linear convergence rates, hence smaller complexities (to reach accuracy ε) than gradient descent, since the complexity per iteration of these algorithms is O(d) versus O(nd) for Gradient Descent. They typically reach a complexity of the form O((n + κ)d log(1/ε)) in the strongly convex case, see [SLRB17, JZ13, DBLJ14, ?].
… λ_t = λ > 0. Note that temporal point processes can also be characterized by the distribution of inter-arrival times, i.e. the duration between two consecutive events. We recall that the inter-arrival time distribution of a Poisson process with intensity λ is an exponential distribution with parameter λ. See page 41 of [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF] for four equivalent ways of defining a temporal point process.
However, maximizing this partial likelihood is a difficult problem when dealing with large-scale data (i.e., a large number of observations n) and high dimension (i.e., a large d). To tackle the high dimensionality, penalized and sparse approaches have been considered in the literature [START_REF] Tibshirani | Regression shrinkage and selection via the lasso[END_REF] [T + 97][START_REF] Goeman | L1 penalized estimation in the cox proportional hazards model[END_REF]. The problem is now to minimize the negative log partial likelihood f(θ) = −ℓ(θ) with a penalization h(θ) that makes the predictor θ sparse and selects variables. We will discuss this approach and the different models in Chapter II. However, there is not yet any approach addressing the large-scale problem.
…, for each individual i = 1, …, n_pat, a vector of features x_i ∈ R^d, an observed time y_i ∈ R_+ which corresponds to the failure time if δ_i = 1 or to a right-censored time if δ_i = 0. If D = {i : δ_i = 1} is the set of patients for whom a failure time is observed, if n = |D| is the total number of failure times, and if R_i = {j : y_j ≥ y_i} is the index set of individuals
still at risk at time y_i, the negative log of Cox's partial likelihood writes: … for θ ∈ R^d. Each gradient of the negative log-likelihood then writes as two nested expectations: one over a uniform distribution on D, the other over a Gibbs distribution; see Chapter II for more details.
Λ^i corresponds to the mean intensity, satisfying E[dN^i_t] = Λ^i dt. However, in practice, the Hawkes kernels are not directly measurable from the data, and these causality measures between the different event types are therefore inaccessible.
… ≥ 0, where f(G) is a norm that gives a particular structure to the solution. Every matrix G satisfying C = (I − G)^{-1} L (I − G^⊤)^{-1} writes I − L^{1/2} M C^{-1/2}, with M an orthogonal matrix. Instead of studying the previous problem, we focus on its convex relaxation, split the variables G and M, and solve the problem with the Alternating Direction Method of Multipliers algorithm, see [GM75] and [GM76]:
min_{G,M} f(G) + ι_B(M) + ι_B̄(G) + ι_{R_+^{d×d}}(G)   s.t.   G = I − L^{1/2} M C^{-1/2},
where B (resp. B̄) is the open (resp. closed) unit ball w.r.t. the spectral norm. The closed unit ball w.r.t. the spectral norm is indeed the convex hull of the orthogonal group. Unlike the optimization problem of the previous chapter, the problem just stated is convex. We test this procedure on numerical simulations of various Hawkes kernels and on real order book data, and we show how the criterion f affects the matrices we recover.
3 Part III: Capturing order book dynamics with Hawkes processes
Chapter V focuses on the estimation of the integrals of Hawkes kernels on financial data, using the estimation method introduced in Chapter III. This allowed us to obtain a very precise picture of high-frequency order book dynamics. We used the order book events associated with 4 very liquid assets of the EUREX exchange, namely the DAX, EURO STOXX, Bund and Bobl futures contracts.
Figure A.1: Kernel integral matrix G estimated for the DAX with H = 1s.
Figure A.2: Submatrices of the kernel integral matrix G corresponding to the effect of DAX events on EURO STOXX events (left) and vice versa (right).
• NKI70 contains survival data for 144 breast cancer patients, 5 clinical covariates and the expressions from 70 gene signatures, see [VDVHVV + 02].
• Luminal contains survival data for 277 patients with breast cancer who received the adjuvant tamoxifen, with 44,928 expression measurements, see [LHKD + 07].
• Lymphoma contains 7399 gene expression values for 240 lymphoma patients. The data was originally published in [AED + 00].
… causality if forecasting future values of Y is more successful when taking past values of X into account. In [START_REF] Eichler | Graphical modeling for multivariate hawkes processes with nonparametric link functions[END_REF], it is shown that for N_t a multivariate Hawkes process, N^j_t does not Granger-cause N^i_t w.r.t. N_t if and only if φ^{ij} = 0.
Table III.1: Complexity of state-of-the-art methods. NPHC's complexity is very low, especially in the regime n ≫ d.
Method        Total complexity
ODE [ZZS13]   …

Method      ODE    GC    ADM4   WH     NPHC
RelErr      0.007  0.15  0.10   0.005  0.001
MRankCorr   0.33   0.02  0.21   0.34   0.34
Time (s)    846    768   709    933    20
Table III.3: Metrics on PLaw10: comparable rank correlation, strong improvement for relative error and computing time.

Method      ODE    GC    ADM4   WH     NPHC
RelErr      0.011  0.09  0.053  0.009  0.0048
MRankCorr   0.31   0.26  0.24   0.34   0.33
Time (s)    870    781   717    946    18
Table III.2: Metrics on Rect10: comparable rank correlation, strong improvement for relative error and computing time.

Table III.4: Metrics on Exp100: comparable rank correlation, strong improvement for relative error and computing time.
Method      ODE    GC     ADM4   NPHC
RelErr      0.092  0.112  0.079  0.008
MRankCorr   0.032  0.009  0.049  0.041
Time (s)    3215   2950   2411   47

Table III.5: Metrics on MemeTracker: strong improvement in relative error, rank correlation and computing time.
Method      ODE    GC     ADM4   NPHC
RelErr      0.162  0.19   0.092  0.071
MRankCorr   0.07   0.053  0.081  0.095
Time (s)    2944   2780   2217   38

… ||·||_1, the squared ℓ2-norm ||·||²_2 and the nuclear norm ||·||_*. In the rest of the section, we refer to as Problem I the minimization problem written in Equation (2) with …
Table IV.1: The solution of Problem I has the smallest relative error.
Problem   I      II     III
RelErr    0.093  0.130  0.131

This thesis seeks to show how some recent optimization methods make it possible to solve difficult estimation problems related to event models. While the classical supervised learning framework [START_REF] Hastie | Overview of supervised learning[END_REF] treats observations as a collection of independent pairs of covariates and labels, event models focus on the arrival times of events and seek to extract information from that data source. These time-stamped events are chronologically ordered and can therefore not be considered independent. This simple observation motivates the use of a particular mathematical tool called a point process [START_REF] Daley | An introduction to the theory of point processes: volume II: general theory and structure[END_REF] to learn a structure from these events. We will first present and motivate the problems we want to address in this thesis.
The fact that not all patients die during the study is still interesting from a statistical point of view, but such data cannot be used in a classical regression problem, which would require observing the failure event for every individual. The difficulty was circumvented by D.R. Cox [START_REF] David | Regression models and life tables (with discussion)[END_REF], in one of the most cited scientific articles of all time [START_REF] Van Noorden | The top 100 papers[END_REF], with the proportional hazards model, which makes it possible to extract information from censored data, i.e. from patients for whom the failure time is not observed. The estimation procedure for the regression parameter vector, without any assumption on the baseline hazard, considered as a nuisance parameter, was introduced in [START_REF] Cox | Partial likelihood[END_REF] and amounts to maximizing the partial likelihood of the model.
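For concreteness, here is a minimal sketch (ours, with toy data and our own variable names, not the thesis implementation) of the negative Cox partial log-likelihood that this estimation procedure minimizes, using the risk sets R_i introduced above.

```python
import numpy as np

def cox_neg_partial_loglik(theta, X, y, delta):
    """Negative Cox partial log-likelihood: sum over observed failures i of
       -x_i' theta + log( sum_{j in R_i} exp(x_j' theta) ), with R_i = {j : y_j >= y_i}."""
    scores = X @ theta
    loss = 0.0
    for i in np.where(delta == 1)[0]:
        at_risk = y >= y[i]                      # risk set R_i
        loss += -scores[i] + np.log(np.exp(scores[at_risk]).sum())
    return loss

rng = np.random.default_rng(0)
n_pat, d = 200, 5
X = rng.normal(size=(n_pat, d))                  # toy covariates
y = rng.exponential(size=n_pat)                  # toy observed times
delta = rng.integers(0, 2, size=n_pat)           # toy censoring indicators
print(cox_neg_partial_loglik(np.zeros(d), X, y, delta))
```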
This procedure makes it possible to handle high-dimensional covariates efficiently, which is common with biostatistics data, by adding a penalization term to the criterion to be minimized [START_REF] Goeman | L1 penalized estimation in the cox proportional hazards model[END_REF][START_REF] Tibshirani | Regression shrinkage and selection via the lasso[END_REF]. However, partial-likelihood maximization algorithms do not scale when the number of patients becomes very large, unlike most of the algorithms that enabled the data revolution. One can therefore ask the following question:
A few years before the twentieth century, the French sociologist Durkheim was already asserting that human societies are made of interconnected components, like biological systems [START_REF] Durkheim | Le suicide: étude de sociologie[END_REF]. Now that our technology even allows us to be connected remotely, the notion of network concerns a great many fields: social networks, information systems, marketing, epidemiology, national security and many others. A better understanding of these large networks and of the processes taking place on them would have major applications in the fields already mentioned [START_REF] Rodriguez | Structure and Dynamics of Diffusion Networks[END_REF]. Observing networks often reduces to recording the times at which the nodes of the network send a message, buy a product or are infected by a virus. We often observe where and when, but not how and why, messages are sent over a social network. Obtaining such data for several nodes of the network makes it possible to recover the joint dynamics and reveal the structure underlying the system. One approach to estimating the influence between these different sources is to use a point process called the Hawkes process [START_REF] Hawkes | Spectra of some self-exciting and mutually exciting point processes[END_REF][START_REF] Hawkes | Point spectra of some mutually exciting point processes[END_REF], whose event arrival rate depends on past events. Hawkes processes have been successfully applied to model the mutual influence between earthquakes of different magnitudes that occur close in time [START_REF] Ogata | Statistical models for earthquake occurrences and residual analysis for point processes[END_REF]. More precisely, this process quantifies the increase in the probability of observing new earthquakes, called aftershocks, after a first one has been observed, through the use of functions called kernels. Hawkes processes also make it possible to measure causality in the sense of Hawkes, which corresponds to the average number of events of type i generated by an event of type j. Besides the original example of earthquakes, the two other major fields where Hawkes processes are used are the study of social networks [BBH12, ZZS13, ISG13] and the study of financial transactions [START_REF] Bacry | Hawkes processes in finance[END_REF]. The usual estimation of Hawkes causality, however, requires making some assumptions on the shape of the kernels in order to simplify the inference algorithm [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF].
A common assumption is the monotone decay of the kernels (exponential or power-law), which means that the impact of an event is always maximal instantaneously; this is not realistic, since in practice there may be a delay before the maximal impact. These remarks lead us to raise the following question: … scale when the number of nodes is large. In this part, we focus only on the first one, for which we proved a consistency result. Since the pioneering work of Bowsher [START_REF] Bowsher | Modelling security market events in continuous time: Intensity based, multivariate point process models[END_REF], who recognized the flexibility and ease of use of Hawkes processes to model the joint dynamics of trades and mid-price changes on the NYSE, Hawkes processes have kept gaining popularity in high-frequency finance, see [START_REF] Bacry | Hawkes processes in finance[END_REF] for a review. Indeed, to account for transaction data irregularly spaced in time, it is natural to consider them as a point process. Moreover, in the financial domain, many features summarizing empirical findings are already known. For instance, the order flow is known to be autocorrelated and cross-correlated with price moves. These features, called stylized facts after the economist Nicholas Kaldor [START_REF] Kaldor | A model of economic growth[END_REF], referred to statistical tendencies that must be taken into account despite a possible lack of microscopic understanding. These stylized facts can easily be captured with the notion of Hawkes causality. Understanding order book dynamics is one of the central questions of financial statistics, and previous nonparametric representations of order books with multivariate Hawkes processes were low-dimensional because of the complexity of their estimation method. The nonparametric estimation of Hawkes causality introduced in the second part of this thesis is fast and robust to the shape of the kernel functions, and it is therefore natural to ask what kind of stylized fact can be uncovered from time-stamped order book data.
In Part I, we answer Question 4 by introducing a new stochastic gradient descent algorithm applied to the minimization of the Cox partial likelihood. Indeed, the Cox partial log-likelihood writes as a sum of sub-functions, each depending on a sequence of observations of variable length, unlike the classical empirical risk minimization setting where the sub-functions depend on a fixed number of observations, one in general. Classical stochastic gradient descent algorithms are less efficient in our case. We adapted the SVRG algorithm [START_REF] Johnson | Accelerating stochastic gradient descent using predictive variance reduction[END_REF] [XZ14] by adding a new sampling step: each sub-function is approximated by a Markov chain Monte Carlo (MCMC) method, its exact computation being costly. Our algorithm enjoys a linear convergence rate once the number of Markov chain iterations is larger than an explicit lower bound.
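The structure of the adapted update can be summarized by the following schematic sketch (ours, not the thesis code): a standard SVRG outer/inner loop where `approx_grad` stands in for the MCMC estimate of each per-sample gradient; the function names and arguments are assumptions made for illustration.

```python
import numpy as np

def svrg_with_approx_grads(theta0, full_grad, approx_grad, n, n_phases, phase_len, step):
    """Schematic SVRG: each phase anchors at theta_bar with its exact full gradient,
       then takes inner steps using (possibly approximate) per-sample gradients."""
    theta = theta0.copy()
    for _ in range(n_phases):
        theta_bar = theta.copy()
        g_bar = full_grad(theta_bar)                       # exact full gradient at the anchor
        for _ in range(phase_len):
            i = np.random.randint(n)                       # pick one sub-function at random
            direction = approx_grad(i, theta) - approx_grad(i, theta_bar) + g_bar
            theta -= step * direction                      # variance-reduced step
    return theta
```

In this thesis' setting, `approx_grad(i, theta)` would be the Markov chain Monte Carlo approximation of ∇f_i(θ) mentioned above, with the chain length controlling the bias of the inner steps.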
We illustrate the outperformance of our algorithm on datasets from survival analysis.
The answers to Question 5 are found in Part II, where we study two nonparametric estimation algorithms for Hawkes causality. Both methods rely on the computation of the integrated cumulants of the multivariate Hawkes process and take advantage of the polynomial relations between these integrated cumulants and the Hawkes causality matrix. The first approach relies on matching the theoretical expression and the empirical computation of the second- and third-order cumulants. This is done by minimizing the squared norm of the difference between the two terms, which can be seen as an instance of the Generalized Method of Moments [START_REF] Hall | Generalized method of moments[END_REF]. However, the optimization problem to be solved is non-convex, so the result is an approximate solution to the exact problem. The second approach is based on the completion of the Hawkes causality matrix using the first and second integrated cumulants. The relaxation of this problem writes as a convex optimization problem, which therefore allows us to obtain an exact solution to the approximate problem.
Finally, in Part III, we apply the first method developed in Part II to high-frequency transaction data from the order book of the Eurex futures market, in order to answer Question 6. The method is used to estimate the parameters of a 12-dimensional Hawkes process modeling one asset and to understand the influence that the different events can have on one another. This order book model is a natural extension of the 8-dimensional model studied in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF]. We then increase the dimension of the problem to take into account the events of two assets simultaneously, and discuss the joint dynamics of these two assets. The usual nonparametric methods [START_REF] Bacry | Second order statistics characterization of hawkes processes and non-parametric estimation[END_REF] [RBRGTM14] seek to estimate the kernels themselves, which restricts the dimension of the order book model for complexity reasons. Our nonparametric method only estimates the integrals of the kernels, requires cheaper computations and scales better to a larger number of nodes or a larger number of events. We also show that the Hawkes causality matrix provides a very rich summary of the interactions within the system, and can therefore become a powerful tool for understanding the underlying structure of a system exhibiting several types of events.
We now have enough elements to summarize the main results of this thesis.
Many supervised statistical learning problems write as the minimization of an average loss over a data distribution. Following the empirical risk minimization principle, the average loss is approximated by an average of the losses over the observed data, and a major success has been the ability to exploit this sum structure to design efficient stochastic algorithms [START_REF] Bottou | Large-scale machine learning with stochastic gradient descent[END_REF].
Such stochastic algorithms allow a very efficient extraction of value from massive data. Applying this approach to large-scale survival data, whether in biostatistics or in economics, is obviously of great importance.
1. Part I: Large-scale Cox model
In Chapter I, we review the recent progress of convex optimization with SGD (Stochastic Gradient Descent) algorithms, from the pioneering work of [RM51] to the recent variance-reduced variants [DBLJ14] [XZ14] [SSZ13] [RSB12]. We then introduce the notion of point process [DVJ07], which provides key tools for modeling event data, i.e. time-stamped and/or location data. We finally introduce the Cox proportional hazards model [Dav72], which relates the duration before the occurrence of an event to one or more covariates through the notion of hazard rate. In Chapter II, we present our new optimization algorithm to help fit the Cox model at large scale.
Plan
Question 4. How can the estimation algorithm of the Cox regression be adapted when the number of patients becomes very large?
… scale …
Question 5. Is it possible to measure causality in the sense of Hawkes without making any assumption on the kernel functions? In order to answer this second question positively, we developed two new nonparametric estimation methods for Hawkes causality, faster and which …
Motivations
The amount of data collected and stored electronically is very large, and keeps growing. The use of predictive analytics tools to extract value from these data, which is the heart of what is called the data revolution, has proved itself in astronomy [START_REF] Feigelson | Big data in astronomy[END_REF], in e-commerce [MB + 12], for search engines [START_REF] Chen | Business intelligence and analytics: From big data to big impact[END_REF] and many other fields. Health institutions now also rely on the use of data to create personalized treatment models thanks to the tools of survival analysis [START_REF] Murdoch | The inevitable application of big data to health care[END_REF]. A substantial part of medical research seeks to understand the relations between a patient's covariates and the duration before the occurrence of an event called failure (often death or the onset of a disease).
Question 6. Can the estimation method for Hawkes causality, introduced above, allow us to gain a more precise understanding of order book dynamics?
Plan
Each of the questions asked above corresponds to one part of this thesis.
1.1 Background on SGD algorithms, point processes and the Cox proportional hazards model
In this chapter, we review the classical results behind stochastic gradient descent algorithms and their variance-reduced adaptations. We then introduce the Cox proportional hazards model.
1.1.1 Stochastic Gradient Descent algorithms
SGD algorithms for an arbitrary distribution. Many estimation problems in the statistical learning framework write as … The comparison of convergence rates is however different.
Let f be twice differentiable on R^d, μ-strongly convex, which means that the eigenvalues of the Hessian matrix ∇²f(θ) are greater than μ > 0 for all θ ∈ R^d, and L-smooth, which means that the eigenvalues are smaller than L > 0. Convergence rates under other assumptions on f can be found in [B + 15]. We denote by θ* its minimizer and define the condition number as κ = L/μ. The convergence rate is defined, for iterative methods, as a tight upper bound on a predefined error and is regarded as the speed at which the algorithm converges. Denoting by θ_t the iterate after t steps of an iterative algorithm and considering the difference E f(θ_t) − f(θ*) as the error, the convergence rate of Gradient Descent is O(e^{−t/κ}), while that of Stochastic Gradient Descent is O(κ/t). A convergence rate of the form O(e^{−αt}) …
Every realization of a point process ξ can be written as ξ = Σ_{i=1}^n δ_{t_i}, where δ is the Dirac measure, n is an integer-valued random variable and the t_i's are random elements of [0, T]. It can equivalently be represented by a counting process N_t = ∫_0^t ξ(s) ds = Σ_{i=1}^n 1{t_i ≤ t}. The usual characterization of a temporal point process is through the conditional intensity function, defined as the infinitesimal rate at which events are expected to occur after t, given the history of N before t: λ(t|F_t) = lim_{h→0} P(N_{t+h} − N_t = 1 | F_t)/h, where F_t is the filtration of the process encoding the information available up to time t. The simplest temporal point process is the Poisson process, which assumes that events arrive at a constant rate, corresponding to a constant intensity function.
Another way to construct Hawkes processes is to consider the following branching representation, see [HO74]: individuals of type i, 1 ≤ i ≤ d, arrive as a Poisson process with intensity μ_i. Each individual can have children of all types, and the law of the children of type i of an individual of type j who was born or migrated at t is an inhomogeneous Poisson process with intensity φ_ij(· − t). This construction moreover makes it possible to define and measure causality between the nodes of a Hawkes model, where the integrals g_ij = ∫_0^{+∞} φ_ij(u) du ≥ 0, for 1 ≤ i, j ≤ d, weight the relations between individuals. More precisely, one introduces the counting process N^{i←j}_t …
… Once we observe the process N_t for t ∈ [0, T], we compute the empirical integrated cumulants over the windows [−H, H], and minimize the quadratic difference L_T between the theoretical cumulants and the empirical cumulants. We proved the consistency of our estimator in the limit T → ∞, once the sequence (H_T) satisfies certain conditions. Our problem can be regarded as a generalized method of moments [START_REF] Hall | Generalized method of moments[END_REF]. To prove the consistency of the empirical integrated cumulants, we need the following assumption. Assumption 2. The half-width of the support of the integration domain satisfies H_T → ∞ and H_T²/T → 0. The numerical part, on both simulated and real datasets, gives very satisfactory results.
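As an illustration of what "empirical integrated cumulants over the windows [−H, H]" means in practice, here is a naive sketch (ours; the thesis' exact estimators, Eqs. (11)-(13), may differ in normalization and bias corrections) of the first two such quantities computed from raw timestamps.

```python
import numpy as np

def integrated_cumulants(events, T, H):
    """Naive estimators of the mean intensities and of the integrated covariance.
       events: list of sorted timestamp arrays, one per node; T: horizon; H: half-window."""
    d = len(events)
    Lambda = np.array([len(e) / T for e in events])        # hat Lambda^i
    C = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            # number of j-events in [tau - H, tau + H] around each i-event, recentred
            counts = np.searchsorted(events[j], events[i] + H) \
                   - np.searchsorted(events[j], events[i] - H)
            C[i, j] = (counts - 2 * H * Lambda[j]).sum() / T
    return Lambda, C
```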
We first simulated event data, using the thinning algorithm of [START_REF] Ogata | On lewis' simulation method for point processes[END_REF], with very different kernel shapes (exponential, power-law and rectangular) and recovered the true value of G for each kernel type. Our method is, to our knowledge, the most robust with respect to the shape of the kernels. We then applied our method to the 100 most cited websites of the MemeTracker database and to financial order book data: we outperformed the state-of-the-art methods applied to MemeTracker and we extracted interesting, interpretable features from the financial data. Let us also mention that our method is significantly faster (roughly 50 times faster), since previous methods aimed at estimating functions while we only focus on their integrals. The simplicity of the method, which maps a list of timestamps to a causality map between nodes, and its statistical consistency, prompted us to design new point process models of the order book and to better understand its dynamics. The features extracted with our method have a very natural economic interpretation. This is the main purpose of Part III.
2.3 Constrained optimization approach
The previous approach based on the generalized method of moments needs the first three cumulants to obtain enough information from the data to recover the d² entries of G. Assuming the matrix G has a certain structure, we can get rid of the third-order cumulant and design another estimation method using only the first two integrated cumulants. Moreover, the resulting optimization problem is convex, unlike the minimization of L_T above, which ensures convergence to the global minimum. The matrix we want to estimate minimizes a simple convex criterion f, typically a norm, while being consistent with the first two empirical integrated cumulants. Our problem is formulated as a constrained optimization problem.
We prove in Chapter III the following consistency theorem. Result 2. Under Assumptions 1 and 2, the sequence of estimators defined by the minimization of L_T(R) converges in probability to the true value G: Ĝ_{T,H_T} = I − (argmin_{R∈Θ} L_T(R))^{-1} →^P_{T→∞} G, with H_T → ∞ and H_T²/T → 0.
… (…, L^+_t, L^−_t, C^+_t, C^−_t, T^a_t, T^b_t, L^a_t, L^b_t, C^a_t, C^b_t), where each dimension counts the number of events before time t, and where the dimensions P^+ and P^− count the upward (downward) mid-price moves due to any order.
We compared two pairs of assets that share the same risk factors. The main empirical result of this study concerns the pair (DAX, EURO STOXX), for which price changes and liquidity changes on the DAX (small tick) mainly influence liquidity on the EURO STOXX (large tick), while price changes and liquidity changes on the EURO STOXX tend to trigger price moves on the DAX.
We ran the estimation procedure on the 16-dimensional model; we focus our discussion on the two off-diagonal 8×8 submatrices in Figure A.2, which correspond to the interaction between the assets (the subscript D stands for DAX and X for EURO STOXX).
… (P^+_t, P^−_t, T^a_t, T^b_t, L^a_t, L^b_t, C^a_t, C^b_t) …
Part III: Capture order book dynamics with Hawkes processes
There are other types of censoring. For instance, left-censoring means the patient died or left the study before being observed: neglecting left-censoring will lead to an overestimation of the survival time. https://github.com/X-DataInitiative/tick ; https://www.memetracker.org/data.html ; i.e. buy orders that are executed and removed from the list; i.e. buy orders added to the list; i.e. the number of times a limit order at the ask is canceled: in our dataset, almost 95% of limit orders are canceled before execution. A proper convex function f is a convex function taking values on the extended real line such that f(x) > −∞ for all x and f(x) < +∞ for at least one x. A proper convex function is closed if and only if it is lower semi-continuous. https://github.com/X-DataInitiative/tick ; http://www.quanthouse.com . Note that we use the very same dataset as in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF]. As was done in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF], for the estimation of the covariance density we take a linearly spaced grid at short time lags (until a lag of 1 ms) and we switch to a log-spaced one for longer time lags. This allows estimating the covariance over several orders of magnitude in time.
Acknowledgements. This work benefited from the support of the chair "Changing markets", CMAP École Polytechnique and École Polytechnique fund raising - Data Science Initiative. The authors want to thank Marcello Rambaldi for fruitful discussions on the order book data experiments.
Acknowledgments. This research benefited from the support of the Chair "Changing Markets", under the aegis of the Louis Bachelier Finance and Sustainable Growth laboratory, a joint initiative of École Polytechnique, Université d'Evry Val d'Essonne and Fédération Bancaire Française, and from the chair of the Risk Foundation: Quantitative Management Initiative.
CHAPTER III. Generalized Method of Moments approach
Abstract. We design a new nonparametric method that allows one to estimate the matrix of integrated kernels of a multivariate Hawkes process. This matrix not only encodes the mutual influences of the nodes of the process, but also disentangles the causality relationships between them. Our approach is the first that leads to an estimation of this matrix without any parametric modeling and estimation of the kernels themselves. As a consequence, it can give an estimation of causality relationships between nodes (or users), based on their activity timestamps (on a social network for instance), without knowing or estimating the shape of the activities' lifetime. For that purpose, we introduce a moment matching method that fits the second-order and the third-order integrated cumulants of the process. A theoretical analysis allows us to prove that this new estimation technique is consistent.
Moreover, we show, on numerical experiments, that our approach is indeed very robust with respect to the shape of the kernels and gives appealing results on the MemeTracker database and on financial order book data.
Keywords. Hawkes Process, Causality Inference, Cumulants, Generalized Method of Moments
Part III: Capture order book dynamics with Hawkes processes
… same constant values on these blocks. Three different β_0, β_1 and β_2 are used in the different blocks, with β_2/β_1 = β_1/β_0 = 10 and β_0 = 0.1. The number of events is roughly equal to 10^5 on average over the nodes. We thus obtain two datasets, the first one referred to as Rect10 corresponding to the rectangular kernels and the second one referred to as PLaw10 corresponding to the power-law kernels. We run on these two datasets the NPHC algorithm and the ADM4 algorithm from [START_REF] Zhou | Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes[END_REF], which calibrates a single exponential kernel t → αβe^{−βt} with constant β, and for which we provided the intermediate true value β = β_1. The results are shown in Figure V.2. These figures clearly illustrate that parametric methods can lead to very poor results when the parametrization does not represent the data well, while the NPHC method gives better solutions without knowing the scaling parameters β.
Single-asset model
In this section we apply the NPHC method to high-frequency financial data. First we describe our dataset, then we compare the results of the NPHC method with those obtained with the Wiener-Hopf method of [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF] on the 8-dimensional model of single-asset level-I book order events proposed in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF]. We finally discuss the NPHC estimation of the norm matrix associated with a "complete version" (i.e. 12-dimensional) of this model.
Data
In this paper we use level-I order book data provided by QuantHouse EUROPE/ASIA for four future contracts traded on the Eurex exchange, namely the futures on the DAX and Euro Stoxx 50 equity indices, and the Bund and Bobl futures. The DAX and Euro Stoxx 50 indices track the largest stocks by market capitalization in Germany and the Euro area respectively, while the Bund and Bobl are German interest rate futures on the 8.5-10.5 years and the 4.5-5.5 years horizon respectively. The data span a period of 338 trading days from July 2013 to October 2014. For each asset, a line with the current status of the first levels of the order book is added to the database every time there is a change (price, volume or both). Moreover, an additional line is added in the case the change is caused by a market order.
Revising the 8-dimensional mono-asset model of [BJM16]: A sanity check
In [START_REF] Bacry | Hawkes model for price and trades high-frequency dynamics[END_REF][START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF], the authors outlined a method for non-parametric estimation of the Hawkes kernel functions based on the infinitesimal covariance density and the numerical solution of a Wiener-Hopf system of integral equations that links the covariance matrix and the kernel matrix.
Their method has been applied to high-frequency financial data in [START_REF] Bacry | Hawkes model for price and trades high-frequency dynamics[END_REF][START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF], and [START_REF] Rambaldi | The role of volume in order book dynamics: a multivariate hawkes process analysis[END_REF]. The aim of this section is to compare the newly proposed NPHC methodology with the Wiener-Hopf method mentioned above in order to assess the reliability of the new NPHC method. To this end, we reproduce the results obtained in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF]. As was done there, we consider the DAX and Bund futures data and for each asset we separate Level-I order book events into 8 categories as defined above: P^+, P^−, T^a, T^b, L^a, L^b, and C^a, C^b. Note that here a price move can be of any type. We then consider the timestamps associated with all events as a realization of an 8-dimensional Hawkes process and we use both the NPHC method outlined in Section 3 and the Wiener-Hopf method of [START_REF] Bacry | First-and second-order statistics characterization of hawkes processes and non-parametric estimation[END_REF] to estimate the integrated kernel interaction matrix G from the data. For the Wiener-Hopf method, we follow the same procedure as [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF] and in particular we estimate the covariance density up to a maximum lag of ≈ 1000 s using a log-linear spaced grid, while for the NPHC method we follow the steps outlined in Section 3 and we fix H = 500 s so as to be on a comparable scale with the Wiener-Hopf method. Let us note that this scale is several orders of magnitude larger than the typical inter-event time. Indeed, on the assets considered, median inter-event times are of the order of 300 µs (the mean being ≈ 50 ms), with minimum time distances in the tens of microseconds. In Figure V.3, we compare the kernel integral matrices G obtained with the NPHC method (left) with those obtained with the Wiener-Hopf approach (right) on the DAX future. Although the precise values of the matrix entries differ somewhat, as it is difficult to tune the estimation parameters of the two methods so as to produce the exact same numerical results, we note that the two methods produce very consistent results. Indeed, they recover the same interaction structure and thus lead to the same interpretation of the underlying system dynamics. In our view, this represents a good sanity check for the proposed NPHC methodology. Analogous results are obtained for the Bund future. Let us also point out that the small asymmetries between symmetric interactions (such as e.g. T^+ → T^− and T^− → T^+) can be used to get a rough measure of the estimation error. In the case presented here, the average absolute difference between symmetric interaction kernels is 0.03, which means a relative error of a few percent on the most relevant interactions. We do not comment here on the features emerging from the kernel norm matrices presented in this section, since they have already been discussed at length in [START_REF] Bacry | Estimation of slowly decreasing hawkes kernels: application to high-frequency order book dynamics[END_REF] and some of them will be further discussed in the next sections.
Instead, here we highlight that the results of this section provide a strong case for the use of the NPHC method over the Wiener-Hopf method.
1 Origin of the scaling coefficient κ
Following the theory of GMM, we denote by m(X, θ) a function of the data, where X is distributed with respect to a distribution P_{θ_0}, which satisfies the moment conditions g(θ) = E[m(X, θ)] = 0 if and only if θ = θ_0, the parameter θ_0 being the ground truth. For x_1, …, x_N observed copies of X, we denote ĝ_i(θ) = m(x_i, θ); the usual choice of weighting matrix is …
01775244
en
[ "info.info-ro" ]
2024/03/05 22:32:18
2018
https://inria.hal.science/hal-01775244/file/CommunityNetworksFinal.pdf
Patrick Maillé email: [email protected] Bruno Tuffin email: [email protected] Joshua Peignier Estelle Varloot
Pricing of Coexisting Cellular and Community Networks
Community networks have emerged as an alternative to licensed-band systems (WiMAX, 4G, etc.), providing access to the Internet with Wi-Fi technology while covering large areas. A community network is easy and cheap to deploy, as the network is using members' access points in order to cover the area. We study the competition between a community operator and a traditional operator (using a licensed-band system) through a game-theoretic model, while considering the mobility of each user in the area.
I. INTRODUCTION
Wireless technologies are becoming ubiquitous in Internet usage. Operators try to provide full wireless coverage of urban areas, in order to offer Internet access to everyone, with a guaranteed quality. However, this system requires huge investment costs in terms of infrastructure and spectrum licenses. This has repercussions on the subscription fees, which can be large enough for users to prefer other options. Because of this, community networks have been imagined as an alternative. The principle is simple: when a user subscribes to a community network, he sets up an access point where he lives (and is responsible for its maintenance), which can be used by all members of the community network. As a counterpart, he gains access to the Internet through every access point belonging to the community network. This approach presents the advantage that the infrastructure is cheaper and easier to maintain, from a provider perspective. However, the quality of service cannot be guaranteed, since it depends on the size of the community. Currently, the largest community operator is FON. From the user point of view, a community network has the particularity of having both positive and negative externalities, i.e., having more subscribers is both beneficial (larger coverage when roaming) and a nuisance (more traffic to serve from one's access point). An analysis of those effects and of the impact of prices, with users being heterogeneous in terms of their propensity to roam, is carried out in [START_REF] Afrasiabi | Exploring user-provided connectivity[END_REF]. In the present paper, we add another dimension, that is, how, i.e., where, users roam. Also, we consider that users can choose between two competing providers, a "classical" one and one operating a community network, that compete over prices. Community networks have already been studied under a game-theoretic framework, with operators as players. In [START_REF] Manshaei | On wireless social community networks[END_REF], the authors first study how a community network evolves, depending on its initial price and coverage, and then investigate using a game-theoretic framework [START_REF] Osborne | A course in game theory[END_REF] the repartition of users having the choice between a community network and an operator on a licensed band. The competition is first studied when each player decides its price once and the size of the community network changes over time. Then a discrete-time dynamic model is studied, where operators can change their price at each time step, taking into account the preferences of the users concerning price and coverage. The authors show the existence of one or several Nash equilibria under specific conditions.
An extension in [START_REF] Manshaei | Evolution and market share of wireless community networks[END_REF] investigates whether it is profitable for a licensed-band operator to complement the service it provides with a community network service. It is shown that this is generally not the case, as users will more likely choose the (less profitable) community network. In [START_REF] Mazloumian | Optimal pricing strategy for wireless social community networks[END_REF], the same authors study an optimal pricing strategy for a community network operator alone in both static and semi-dynamic models, while considering a mobility factor for each user (e.g., each user makes requests, but not all in the same spot). They also allow the operator to set different prices for each user. In the following, we will refer to the traditional operator as the classical ISP (Internet Service Provider). In this article, we study a model similar to both [START_REF] Manshaei | On wireless social community networks[END_REF] and [START_REF] Mazloumian | Optimal pricing strategy for wireless social community networks[END_REF]. In [START_REF] Manshaei | On wireless social community networks[END_REF] the users all present the same characteristics, while in [START_REF] Mazloumian | Optimal pricing strategy for wireless social community networks[END_REF] there is a mobility factor but the paper considers a community network alone. We consider here a more general and realistic framework: users are located in places that are heterogeneous in terms of attractiveness for connections (an urban area is more likely to see connections than the countryside). Moreover, their mobility behavior is also heterogeneous: they do not all plan to access the Internet from the same places. Instead of a mobility parameter, we rather consider a density function, which represents the probability that a user makes a request while being near the access point of another user. But in our paper, all users will have the same sensitivity toward quality; indeed, our goal is rather to focus on the impact of the geographical locations of spots and connections on users' subscriptions and on the competition between the operator and the community network. The model is analyzed using noncooperative game theory [START_REF] Osborne | A course in game theory[END_REF], [START_REF] Maillé | Telecommunication Network Economics: From Theory to Applications[END_REF]. The decisions are taken at different time scales: first the networks fix their price, and then users choose which network to subscribe to. We illustrate on different scenarios that, for fixed subscription prices to the ISP and the community network, several equilibria for the repartition of users can exist; the one we can expect depends on the initial mass of subscribers to the community network. The pricing competition between operators is played anticipating the choice of users. The paper is organized as follows. Section II presents the model; the basic notions are taken from the literature, but we extend them with the modeling of mobility via a continuous distribution. In Section III, we describe how, for fixed subscription prices, the repartition of users is determined. In Section IV, we introduce the pricing game between the operator and the community network, as well as our method to compute a Nash equilibrium. Section V presents two scenarios as examples of application of our method.
II. MODEL DEFINITION
We present here the basic elements of the model taken from the literature, mainly [START_REF] Manshaei | On wireless social community networks[END_REF], which we complement with more heterogeneity among users related to location and mobility, as well as the possible nuisance from providing service to other community members.
A. Actors and strategies
To study the competition between a community operator and a classical ISP, we need to define a model for the profit of each operator, but also for users, in order to explain what will make them choose a service rather than the other one. The decisions of these actors are taken at different time scales, defining a multilevel game:
1) First, the classical ISP and the community network play non-cooperatively on the subscription prices, in order to maximize their own revenue (expressed as the product of price and mass of users).
2) Given the prices and qualities of service, users choose their network based on price (we assume a flat-rate pricing is applied by each operator) and quality of service. We will describe how, depending on an initial repartition between operators, users can switch operators up to a situation when nobody has an interest to move. The results of the game for the operators are given once all users have settled on an operator (if any).
Even though the operators play first (subscription is impossible until the price of subscription is set), they are assumed to make their decision strategically, anticipating the subsequent decisions of users. Hence the game is analyzed by backward induction [START_REF] Osborne | A course in game theory[END_REF]: we determine the user choices for any fixed prices, and consider that operators are able to compute those choices when selecting their prices.
B. Modeling of users, quality, and mobility
We consider a continuum of users characterized by their type u; this type typically represents a home location. In the following, we will not distinguish between a location u and the user u living there. Let Ω be the space of users and f their density over space Ω (with ∫_Ω f(v) dv = 1). We also assume that Ω is the support of f, i.e., f(u) > 0 for all u ∈ Ω. Let D be the subset of Ω of users subscribing to the community network; we call it the domain, since it also represents the domain of coverage of the community network. Each user makes requests when using the Internet. Let m(v) be the average number of communication requests of user v per time unit. Depending on their location, users may also present different mobility patterns. To express this heterogeneity, define for each user u the density function g(v|u) that a request from u occurs at v. Note that users may move to uninhabited regions: we aggregate those regions into one item, denoted by ⊥, and we define the set of mobility locations as Ω̄ := Ω ∪ {⊥}. Then, over a location area A ⊂ Ω̄, the probability that a type-u user's request is in A (rather than Ω̄ \ A) is ∫_A g(v|u) dv. If we define n(u) as the density (number per space unit) of requests at u from users of the community network, it can be computed as
n(u) = ∫_D g(u|v) m(v) f(v) dv.
The quality of a given service is defined as the probability that a request is fulfilled. For the ISP, assumed to have full coverage, it is therefore 1, in line with the literature. For a user u, the quality of the community network will depend on whether the requests by u are generated when in the coverage domain, hence it can be computed as
q_u = ∫_D g(v|u) dv.   (1)
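On a discretized user space, the coverage quality (1) and the request density n(u) reduce to simple array operations. The sketch below is our own illustration, with made-up densities f, m and mobility kernel g (and it ignores requests landing on ⊥ for simplicity); it is not part of the paper's model specification.

```python
import numpy as np

# Discretize Omega into K locations; g[k, u] approximates g(v_k | u),
# the probability that a request of user u occurs at location v_k.
K = 50
rng = np.random.default_rng(1)
f = rng.random(K); f /= f.sum()                  # user density (illustrative)
m = np.full(K, 1.0)                              # average requests per user and time unit
g = rng.random((K, K)); g /= g.sum(axis=0)       # columns sum to 1: mobility kernel

def quality_and_load(D):
    """D: boolean mask of locations whose users subscribe to the community network."""
    q = g[D, :].sum(axis=0)                      # q_u = sum_{v in D} g(v|u), cf. Eq. (1)
    n = g @ (D * m * f)                          # n(u) = sum_{v in D} g(u|v) m(v) f(v)
    return q, n
```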
C. User preferences
How will User u decide whether to subscribe to the classical ISP, to the community network, or to none of them? Following [START_REF] Manshaei | On wireless social community networks[END_REF], [START_REF] Manshaei | Evolution and market share of wireless community networks[END_REF], define U_I(u), U_C(u) and U_∅(u) as the respective utility functions for choosing the classical ISP, the community network, or none. These functions depend on the price the user has to pay, the quality of the service he is provided, and his sensitivity toward quality. As in [START_REF] Manshaei | On wireless social community networks[END_REF], [START_REF] Manshaei | Evolution and market share of wireless community networks[END_REF], we consider a simple quasi-linear form for utilities: a user u, whose sensitivity toward quality is denoted by a_u and who benefits from service quality q (assumed in the interval [0, 1]) at price p, perceives a utility a_u q − p. Note that in [START_REF] Manshaei | On wireless social community networks[END_REF], [START_REF] Manshaei | Evolution and market share of wireless community networks[END_REF] the sensitivity parameter a_u depends on the user type u, but we limit ourselves to a constant value a for all users, since our goal is to focus on the geographical heterogeneity of users. In addition, as in [START_REF] Afrasiabi | Exploring user-provided connectivity[END_REF] we consider a disturbance factor for the community network: satisfying requests for other members can indeed become an annoyance, which we model through a negative term −c n(u) in the utility function, with c a unit cost per request at u. Here we assume that the nuisance is due to WiFi spectrum usage, hence it depends on the total density of requests at u and is independent of the density of users at u. Let p_I and p_C be the flat-rate subscription fees to the ISP and to the community network, respectively. We assume users are rational: a type-u user will choose the network providing the largest utility (or no network), where the utilities at the community network, the ISP, or for not subscribing to any are respectively (recall those functions depend on the set D of users in the community network)
U_C(u) = a q_u − p_C − c n(u),
U_I(u) = a − p_I,
U_∅(u) = 0,
where U_∅(u) is used to say that users with negative utilities at the operators do not subscribe to any of them; we also assume that users with null utilities at the operators do not subscribe. In the following, we will therefore always assume that p_I < a, because otherwise the classical operator would get no subscriber (as for all u, we would have U_I(u) ≤ 0 and all users prefer the no-subscription option over the classical operator). With the same argument, we assume that p_C < a. We now have, for all u, U_I(u) > 0, which implies that each user will necessarily subscribe to one operator, since they strictly prefer the classical operator over the no-subscription option. However, the repartition between the classical ISP and the community network is not trivial, since the utilities expressed above, which determine user choices, also depend on user choices through the set D. Hence the notion of equilibrium (or fixed point), which we define and analyze in Section III.
D. Operators' model
The utilities for the classical ISP and the community operator are simply defined as their profits. For each operator, the profit depends on the price it chose, and on the number of users subscribing to its service, which depends on both prices.
Let d I and d C be the number (or mass) of users subscribing respectively to the classical ISP and to the community operator. For a set D ⊂ Ω of users subscribing to the community network (which depends on prices as we see later on), those masses can be written as d I = Ω\D f (v)dv d C = D f (v)dv. The utilities are then expressed by V C = d C p C V I = d I p I -χ I , where χ I is the infrastructure cost for the ISP. Each operator chooses (plays with) its price to maximize its revenue, but that revenue also depends on the decision of the competing operator which can attract some customers, hence the use of non-cooperative game theory to solve the problem. III. USER EQUILIBRIUM With the characterization of user behavior above, we aim in this section at determining if, for fixed subscription prices, there is an equilibrium user repartition among operators, and also if it is unique. We first define what such an equilibrium is. We consider here that prices p I and p C have already been decided. A. Definition and characterization Definition 1. A user equilibrium domain is a domain D ⊂ Ω such that no user, in D or in Ω\D has an interest to change his choice of network. Mathematically, this means that ∀u ∈ D U C (u) ≥ U I (u) ∀u ∈ Ω\D U C (u) ≤ U I (u). Consider a User u. For a given domain D, he will prefer the community network if U C (u) ≥ U I (u), that is, if a(q u - 1) + (p I -p C ) -cn(u) ≥ 0. Let us define the domain-dependent function Φ D : Ω → R as the difference U C (u) -U I (u), that is, ΦD(u) := a D g(v|u)dv -1 +(pI -pC )-c D g(u|v)m(v)f (v) dv. ( 2 ) Then D is a user equilibrium domain if and only if Φ D (u) ≥ 0 ∀u ∈ D Φ D (u) ≤ 0 ∀u ∈ Ω \ D. Example 1. Consider the case of users with homogeneous mobility behavior, that is, where g(v|u) does not depend on u so we only denote it by g(v). From (1), we also get that the quality q u does not depend on u: all the community network users experience the same quality, which we denote by q and equals D g(v)dv). Moreover n(u) = g(u) D m(v)f (v)dv = M g(u) with M := D m(v)f (v) dv the total request mass from community network users. At a user equilibrium D, User u prefers the community network if and only if a(q -1) + (p I -p C ) -cM g(u) ≥ 0. The domain D is then made of all users u with attractiveness g(u) below a threshold. B. Existence and uniqueness Proposition 1. A user equilibrium is not unique in general. Proof. An example of non uniqueness is shown in Section V-A when users present a homogeneous mobility pattern g(v|u) = g(v) ∀u. Proposition 2. D = ∅, that is no user subscribes to the community network, is a user equilibrium (but not necessarily the only one) if and only if p I ≤ p C + a. In words, if the difference of price between the community network and the ISP is not large enough (it has to be larger than a), no user subscribing to the community network is a user equilibrium, even if not necessarily the unique possibility. Proof. D = ∅ is a user equilibrium if and only if, when there are no community network users, U C (u) ≤ U I (u) ∀u ∈ Ω; that is, -a + (p I -p C ) ≤ 0, i.e., p I ≤ p C + a. We can also consider the other case of "degenerate" equilibrium, that is, when all users in Ω subscribe to the community network. Proposition 3. D = Ω, is a user equilibrium if and only if Φ Ω (u) ≥ 0 for all u ∈ Ω. Corollary 1. Under our assumption p I < a, there always exists at least one user equilibrium. Proof. Since we have assumed p I < a, Proposition 2 holds. C. 
A dynamic view Given that several user equilibria might exist, which one would be observed in practice? This may depend on a dynamic evolution of subscriptions: We can study how users make their choice, and how the repartition evolves, depending on an initial situation. If a user u is associated to the ISP (resp. community network) but U C (u) > U I (u) (resp. U C (u) < U I (u)) then it will switch to the other operator. Without loss of generality, we can first partition users by assuming that those with the largest U C (u) -U I (u) subscribe to the community network and the others to the ISP (a natural move to that situation will occur otherwise). We can relate this to the function Φ D (x) defined in [START_REF] Manshaei | On wireless social community networks[END_REF]. For a given D, users u ∈ D (resp. u ∈ D) with the largest value Φ D (u) > 0 (resp. lowest value Φ D (u) < 0 will have an incentive to switch operator and join (resp. leave) D. Hence, D will change up to a moment when no user has an interest to move, that, up to reaching a user equilibrium as defined above. All this will be made more specific and clearer in Section V on the analysis of two scenarios. Depending on the initial situation (that is, the initial mass of users subscribing to the community network), we may end up in different user equilibria. We can assume that the community network will offer free subscriptions, or make offers to users so that an initial point will allow to lead to different equilibria. D. Stability Among user equilibrium domains, some are more likely to be observed. They are the so-called stable user equilibrium domains, which can basically be defined as domains that are stable to small perturbations in the following sense. Definition 2. A user equilibrium domain D is said to be stable if there exists ε > 0 such that ∀D with (D∪D )\(D∩D ) f (v) dv ≤ ε (that is, any D with "measure" close enough to D), then starting from D the user repartition will converge to D. The following straightforward result establishes that there always exists at least one stable equilibrium. Proposition 4. If for all u ∈ Ω, the ratio of the densities g(•|u)/f (•) is upper-bounded on Ω, then for any price profile (p I , p C ) with p I < a, the situation D = ∅ is a stable equilibrium. Similarly, the other degenerate equilibrium D = Ω is stable if Φ Ω (u) > 0 for all u ∈ Ω. Proof. From Corollary 1, D = ∅ is a user equilibrium domain. Since p I < a, (2) yields Φ D (u) = p I -a -p C < 0 ∀u ∈ Ω. We also have from (2) that for any domain D and any u, Φ D (u) ≤ p I -a -p C + a D g(v|u) dv. But when the ratio g(•|u)/f (•) is upper-bounded by some value L, then the integral a D g(v|u) dv is smaller than aL D f (v) dv. Therefore, with ε < p C +a-P I aL , for a domain D such that D f (v) dv ≤ ε all users in D would be better off switching back to the ISP, hence D = ∅ is a stable equilibrium domain. Further characterizations are provided in Section V for specific scenarios. IV. PRICING GAME For any price pair (p I , p C ), being able to characterize all stable user equilibria, we can reasonably assume that the community network will set up things (again, by initial offers/bargains) such that the largest (defined in terms of demand d C ) stable user equilibrium domain is reached in the end. Therefore, for given prices, we will be able to compute the corresponding values of the utility functions V C and V I of each operator. 
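For fixed prices, that computation of user equilibria can be sketched numerically. The script below discretizes the location line, evaluates Φ_D(u) from (2) under the homogeneous-mobility simplification of Example 1 (so that n(u) = M g(u)), and runs a simplified switching dynamics in the spirit of Section III-C in which all dissatisfied users move simultaneously; the densities and parameter values are illustrative, and the simultaneous-update rule is only a simplification of the dynamics described above (it need not converge in general, which the final check reports):

import numpy as np

a, c, p_I, p_C, lam = 1.0, 1.0, 0.95, 0.10, 1.0
xs = np.linspace(0.0, 15.0, 1501)           # discretized location line
dx = xs[1] - xs[0]
f = np.exp(-xs); f /= np.sum(f) * dx        # population density, normalized
m = np.ones_like(xs)                        # request rates m(v) = 1
g = lam * np.exp(-lam * xs)                 # common mobility density g(v)

def phi_D(in_D):
    """Phi_D(u) = a(q - 1) + (p_I - p_C) - c M g(u)  (Example 1 simplification)."""
    q = float(np.sum(g[in_D]) * dx)                 # community coverage quality
    M = float(np.sum(m[in_D] * f[in_D]) * dx)       # total request mass from D
    return a * (q - 1.0) + (p_I - p_C) - c * M * g

def is_equilibrium(in_D, tol=1e-9):
    phi = phi_D(in_D)
    return bool(np.all(phi[in_D] >= -tol) and np.all(phi[~in_D] <= tol))

def switching_dynamics(in_D, max_iter=500):
    for _ in range(max_iter):
        new_D = phi_D(in_D) >= 0.0          # every user picks its preferred operator
        if np.array_equal(new_D, in_D):
            break
        in_D = new_D
    return in_D

final_D = switching_dynamics(np.ones_like(xs, dtype=bool))   # start from D = Omega
print("reached an equilibrium:", is_equilibrium(final_D),
      "| community mass:", round(float(np.sum(f[final_D]) * dx), 3))
print("empty domain is an equilibrium (Proposition 2):",
      is_equilibrium(np.zeros_like(xs, dtype=bool)))

Starting from different initial domains generally leads to different equilibria, which is exactly why the selection of an equilibrium matters for the pricing stage.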
Hence providers can non-cooperatively play the pricing game where the community network chooses p C and the ISP chooses p I , each operator trying to maximize its utility function. The solution concept is the classical Nash equilibrium [START_REF] Osborne | A course in game theory[END_REF], a pair (p * I , p * C ) from which no provider can improve its revenue from a unilateral price change. V. ANALYSIS AND DISCUSSION OF TWO SCENARIOS A. Users with a homogeneous mobility pattern We first consider the simplest situation where the mobility pattern is the same for all users, which means that g(v|u) does not depend on u, that is g(v|u) = g(v) ∀u as treated in Example 1. From this assumption, q u = q does not depend on u (but still depends on D). We also get a much simpler expression for Φ D , which is now: ∀u ∈ Ω, Φ D (u) = a(q -1) + (p I -p C ) -cM g(u), (3) which depends on u only through the term g(u), with q = D g(v) dv and M = D m(v)f (v) dv. From such an expression, we can show that with an homogeneous mobility pattern, user equilibria have a specific form. 1) Characterization of user equilibria: Proposition 5. Assume that location attractiveness values are distributed regularly over Ω: i.e. mathematically, that for all y ∈ R + , the mass of users with the specific value g(u) = y is null. Then a non-degenerate user equilibrium domain has the form D x := {u ∈ Ω | g(u) ≤ x} for a given x ≥ 0, with x solution of a Dx g(v) dv -1 + p I -p C -cx Dx m(v)f (v) dv :=Ψ(x) = 0. In the above characterization, x is a threshold such that all users u with mobility attractiveness density g(u) below x subscribe to the community network .Remark that Ψ(x) corresponds to Φ Dx (u) for a user u such that g(u) = x; and since D x is continuous in x under our assumption, Ψ is also a continuous function of x. Proof. See Appendix A. We can characterize, among all domains D x , which ones will actually be user equilibrium domains, and the corresponding dynamics. Assume that the set of subscriber to the community network is of the form D x = {u ∈ Ω : g(u) ≤ x} for some x ∈ R + . • If Ψ(x) > 0 and D x = Ω, it means that users u with g(u) just above x are associated with the ISP and are those with the largest utility difference and incentive to switch to the community network (indeed, from (3) that utility difference Φ Dx (u) is continuous and strictly decreasing in g(u)); hence they switch such that x and D x increase; • If Ψ(x) < 0 and D x = ∅, it is the opposite situation: users u with value g(u) just below x are with the community network but have the largest incentive to switch to the ISP; hence x and D x decrease; • If Ψ(x) = 0, all users u ∈ Ω are such that Φ Dx (u) ≥ 0, hence have no interest to switch; we are then in an equilibrium situation. We thus end up with the following characterization of user equilibrium domains. Let y := sup u∈Ω {g(u)} (possibly ∞), such that D y = Ω. • If Ψ(y) ≥ 0 then D = Ω (all users subscribe to the community network) is an equilibrium; • Since Ψ(0) = -a + p I -p C ≤ 0 by assumption, ∅ is always a user equilibrium domain (no users associated with the community network); • If Ψ(x) = 0, D x is a user equilibrium domain. 2) Stable equilibria: Among all user equilibrium domains, we can characterize the stable ones. Proposition 6. As suggested by the dynamics described in Subsection III-C, we consider that the community network subscriber set D is always of the form D y for some y. Then if Ψ(x) = 0 and Ψ (x) < 0, D x is a stable equilibrium. Proof. 
Assume a small variation, from x to x = x ± ε, in D (hence, from D x to D x ). If Ψ (x) < 0, for ε small enough, Ψ(x ) > 0 (resp. < 0) if x < x (resp. x > x) ; hence users u with g(u) between x and x are incentivized to switch back to their initial choice, driving back to the (then stable) equilibrium domain D x . 3) Nash equilibria for the pricing game between operators: For any pair (p C , p I ), we consider that the largest equilibrium domain is selected. Operators then play a non-cooperative game to determine their optimal strategy [START_REF] Osborne | A course in game theory[END_REF], [START_REF] Maillé | Telecommunication Network Economics: From Theory to Applications[END_REF]. The output concept is that of a Nash equilibrium, a point (p * C , p * I ) such that no operator has an interest to unilaterally move from, because it would decrease its utility (revenue). Because of analytical intractability, we are going to study the existence, and characterize, Nash equilibria numerically, for specific parameters values; the procedure can be repeated for any other set of parameters. 4) Examples: We first show situations where there are several user equilibria, and even several stable user equilibria. We then discuss the solution of the pricing game between operators. Example 2. Consider Ω = Ω = R + , i.e., users are placed over the positive line (negative values could be described as the sea). We assume: • m(u) = 1, i.e., all users generate the same amount of requests; • f (u) = α/(1 + u) 1+α with α > 0. In other words, users are located according to a Pareto distribution, potentially with infinite expected value. The closer to the 0 value (which can be thought of as the town center), the more users you can find. • g(u) = λe -λu , meaning that connections are exponentially distributed with rate λ, with more connections close to 0; even far-away users are more likely to require connections there. With these functions, noting that g is strictly decreasing, the set D x is simply the interval [ln(λ/x)/λ, +∞) when x ∈ (0, λ], D x = Ω when x > λ, and D x = ∅ when x = 0. It gives Ψ(x) = -a(1 -x/λ) + p I -p C -cx 1 1 + ln(λ/x)/λ α for x ∈ [0, λ], Ψ(x) = p I -p C -cx for x > λ, and Ψ(0) = p I -a -p C . Note that the assumption made in Proposition 5 holds here, hence Ψ is continuous over R + . Three outcomes are illustrated in the next three cases for λ when α = 1.2, a = 1, c = 1, p I = 0.95 and p C = 0.1. In Figure 1, there are two solutions for Ψ(x) = 0, with only the second one leading to a stable equilibrium domain (in addition to D 0 = ∅, which is stable too from Proposition 4, for which the assumptions also hold). In Figure 2, Ψ(x) = 0 has only one (unstable) solution but D 0 = ∅ and D 0.5 = Ω (because Ψ(0.5) > 0) are stable equilibrium domains. In Figure 3 there is no solution to Ψ(x) = 0, and ∅ is the only equilibrium domain. Remark that Ω is a (stable) equilibrium domain if Ψ(λ) = p I -p C -cλ ≥ 0. 2 Assuming now that the community network plays such that the largest equilibrium domain is selected (thanks to discounts for example), we can can draw the best responses of the pricing game between operators. It is for example displayed in Figure 4 for specific parameter values, here when the infrastructure for the ISP is χ I = 0. With these parameters, the community network is able to get a positive demand only if p I ≥ 0.76. d C is then jumping from 0 to 0.574 and is readily and slightly increasing to 0.678 when p I = 0.99. 
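The threshold function of Example 2 lends itself to a direct numerical check. The sketch below implements Ψ(x) for the Pareto/exponential specification with the parameter values of the example, locates its sign changes on a grid, flags a root as stable when Ψ decreases through zero (Proposition 6), and tests whether Ω is an equilibrium through the sign of Ψ(λ); the grid resolution and the rounding are arbitrary choices:

import numpy as np

alpha, a, c, p_I, p_C = 1.2, 1.0, 1.0, 0.95, 0.10

def Psi(x, lam):
    """Psi(x) of Example 2, for x in (0, lam]."""
    coverage = (1.0 + np.log(lam / x) / lam) ** (-alpha)   # mass of D_x under the Pareto f
    return -a * (1.0 - x / lam) + (p_I - p_C) - c * x * coverage

def describe(lam, n_grid=20000):
    xs = np.linspace(1e-9, lam, n_grid)
    vals = Psi(xs, lam)
    roots = []
    for i in np.where(vals[:-1] * vals[1:] < 0.0)[0]:
        stable = vals[i] > 0.0 > vals[i + 1]               # Psi decreasing through zero
        roots.append((round(float(xs[i]), 3), "stable" if stable else "unstable"))
    omega_eq = (p_I - p_C - c * lam) >= 0.0                 # Psi(lam) >= 0
    print(f"lambda = {lam}: interior roots {roots}; D = Omega equilibrium: {omega_eq}; "
          "D = empty set always an equilibrium here (p_I < p_C + a)")

describe(1.0)   # the paper reports two roots, only the second one giving a stable domain
describe(0.5)   # the paper reports a single unstable root, with the empty set and Omega stable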
We actually have here a price war where each operator has an interest to give a price just below that of the opponent, and we end up with a Nash equilibrium (p I = 0.23985, p C = 0) where one operator stops with the zero price. Due to price war, only one operator survives. With an infrastructure cost χ I = 0, it is the ISP, but we note that the threat of the development of a community network significantly decreases the price set by the ISP (whose monopoly price was 1 as can be seen on Figure 4 when p C is prohibitively high). Also, there is a threshold on χ I over which it is the community network that survives. Indeed, remark that the value of χ I does not change the best-response values since it appears as a constant in the expression of V I , but the game stops as soon as one of the two providers gets a zero revenue. The revenue of the ISP will go to zero before that of the community network if χ I > 0.23. B. Several populations We slightly modify the model such that Ω = R + with a mass of users in 0, seen as a town (probability/mass π 0 ) while others with u > 0 are regularly distributed over the "countryside" with conditional density f . In terms of mobility, we assume that users at u = 0 do not move, while those at u > 0 have a probability π 1 to call from 0 and a conditional density (when connecting from another place) g(v) to make a connection from v > 0. With those assumptions, remark that q 0 = 1 (resp. q 0 = 0) if the community network is chosen (resp., not chosen) at 0, and for u > 0, q u = π 1 1l {0∈D} +(1-π 1 ) D 1 g(v)dv where D 1 = D∩(0, ∞) is D excluding 0 and 1l {•} is 1 if the condition is satisfied and 0 otherwise. The number of connections to a member of the community network at 0 (assuming 0 ∈ D) is then n(0) = π 0 m(0) + (1 -π 0 )π 1 D 1 m(v)f (v)dv and n(u) = g(u)(1 - π 0 ) D 1 m(v)f (v)dv for u > 0. The level of annoyance (interferences, etc.) is again assumed linear in n(•), leading to Φ D (0) = (p I -p C ) -c π 0 m(0) + (1 -π 0 )π 1 D 1 m(v)f (v)dv Φ D (u) = a π 1 1l {0∈D} + (1 -π 1 ) D 1 g(v)dv -1 +(p I -p C ) -c(g(u)(1 -π 0 ) D 1 m(v)f (v)dv). Assuming that 0 ∈ D or not, the above last equation tells us as in the previous subsection that D1 is of the form D 1 x = {u : g(u) ≤ x} for some value x. Exactly as in the previous homogeneous case, there might be several solutions, and we will assume that the selected one will be a stable one leading to the largest market share (revenue) for the community network. The more subscribers of the community network, the less likely users in 0 will subscribe because since they do not move, they experience only losses from an increased number of subscribers. But there might be a risk of oscillations on the user equilibrium. Indeed, if x is small enough, then users in 0 subscribe. This increases the interest for others to subscribe, leading to a larger value of x, which might deter users in 0, and so on. Example 3. We again consider m(u) = 1, f (u) = α/(1 + u) 1+α with α > 0, and g(u) = λe -λu . Then users in 0 join iff for D 1 = D 1 x = [ln(λ/x)/λ, +∞), they prefer the community network over the ISP, knowing that there is coverage in 0, i.e., iff 1 , there are oscillations in the user equilibrium domain, which does not exist with the chosen strategy from the community network. 
In that case, we consider that when computing best responses in the pricing game, each operator uses the scenario that is "worst" for it in terms of market share: D = D_1^∅ for the community network, and D = {0} ∪ D_1^0 for the ISP. Using that user equilibrium domain, we can draw the best responses of the pricing game between operators. They are displayed in Figure 5 for specific parameter values, when the infrastructure cost for the ISP is χ_I = 0. With these numerical values, the best response of the ISP to the community network price p_C is such that d_I = 1 for p_C ≤ 0.63, and d_I = 0.7 (just the users in 0) when p_C is above that value. We again have a price war (each provider setting its price just below its opponent's) in the high-price region. Predicting the outcome of the competition is not trivial, since following a best-response dynamics leads to a cycle (appearing in the top-right corner of the figure): prices slide downwards until reaching (p_I, p_C) ≈ (0.71, 0.63), at which point the ISP sets p_I = 1, reinitiating the cycle.
Fig. 1. Ψ(x) when λ = 1.0.
Fig. 2. Ψ(x) when λ = 0.5.
Fig. 4. Best responses in the pricing game when λ = 0.25, α = 1.2, a = 1, c = 1, and χ_I = 0.
Fig. 5. Best responses in the pricing game with heterogeneous users when λ = 0.25, α = 1.2, a = 1, c = 1, π_0 = 0.7, π_1 = 0.1, and χ_I = 0.
Footnotes: (1) https://fon.com/ (2) Note that with other functions, depending on the variations of f and g, an arbitrary number of solutions of Ψ(x) = 0 can be obtained.
When that condition holds, it means that users in 0 are satisfied with the gain in price from being in the community network, and D = {0} ∪ D_1^0 is the considered user equilibrium domain.
APPENDIX
A. All domains in equilibrium situations have the form D_x for homogeneous and regular mobility patterns
Proof. At an equilibrium situation, the community domain D must be such that Φ_D(u) ≥ 0 for all u ∈ D and Φ_D(u) ≤ 0 for all u ∈ Ω \ D. Let p̄ := p_I - p_C. Under the assumption that g(v|u) does not depend on u and with the expressions of q and n given before, we now have, for all u ∈ Ω,
Φ_D(u) = a (∫_D g(v) dv - 1) + p̄ - c g(u) ∫_D m(v)f(v) dv.
So an equilibrium domain D should be made of the users u such that
c g(u) ∫_D m(v)f(v) dv ≤ a (∫_D g(v) dv - 1) + p̄,
plus possibly some of the users for which there is equality above, but which are of measure 0 under our assumption. Hence the general form of the solution is D_x = {u ∈ Ω : g(u) ≤ x} for some x ≥ 0. Using that form D_x for candidate domains, one can write Φ_{D_x}(u) as a function of x and g(u), and an equilibrium is reached when the set of users wanting to subscribe to the community network (currently made of D_x) is exactly D_x, i.e., when x is a root of the function
Ψ(x) := a (∫_{D_x} g(v) dv - 1) + p̄ - c x ∫_{D_x} m(v)f(v) dv.
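The pricing stage of Section IV can likewise be sketched numerically for the homogeneous scenario. In the script below, for each price pair the community network is assumed to reach its largest stable user equilibrium (measured by d_C), demands and revenues follow, and best responses are tabulated on a coarse price grid; the grid, the tie-breaking rules and the values λ = 0.25, χ_I = 0 (as in Figure 4) are illustrative choices, and this coarse computation is not meant to reproduce the exact numbers reported above:

import numpy as np

alpha, a, c, lam, chi_I = 1.2, 1.0, 1.0, 0.25, 0.0

def Psi(x, p_I, p_C):
    coverage = (1.0 + np.log(lam / x) / lam) ** (-alpha)
    return -a * (1.0 - x / lam) + (p_I - p_C) - c * x * coverage

def community_demand(p_I, p_C, n_grid=3000):
    """d_C at the largest stable user equilibrium (empty domain as fallback)."""
    best = 1.0 if (p_I - p_C - c * lam) >= 0.0 else 0.0     # D = Omega (stable) case
    xs = np.linspace(1e-9, lam, n_grid)
    vals = Psi(xs, p_I, p_C)
    for i in np.where((vals[:-1] > 0.0) & (vals[1:] < 0.0))[0]:   # stable interior roots
        x_star = 0.5 * (xs[i] + xs[i + 1])
        best = max(best, (1.0 + np.log(lam / x_star) / lam) ** (-alpha))
    return best

prices = np.linspace(0.0, 0.99, 100)        # both operators must price below a = 1

def best_response_ISP(p_C):
    rev = [p_I * (1.0 - community_demand(p_I, p_C)) - chi_I for p_I in prices]
    return float(prices[int(np.argmax(rev))])

def best_response_community(p_I):
    rev = [p_C * community_demand(p_I, p_C) for p_C in prices]
    return float(prices[int(np.argmax(rev))])

for p_C in (0.0, 0.2, 0.6):
    print(f"ISP best response to p_C = {p_C:.2f}: p_I = {best_response_ISP(p_C):.2f}")
for p_I in (0.6, 0.8, 0.95):
    print(f"community best response to p_I = {p_I:.2f}: p_C = {best_response_community(p_I):.2f}")

Iterating these two best-response maps from a high-price starting point is one way to visualize the price war and the possible cycling discussed above.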
01730383
en
[ "sdv.mhep.geo", "sdv.mhep.rsoa", "sdv.spee" ]
2024/03/05 22:32:18
2018
https://univ-rennes.hal.science/hal-01730383/file/draft%20int%20j%20rheum.pdf
Nicolas ; Belhomme email: [email protected]. Marine Le Noir De Carlan Alain Lescoat Thomas Le Gallou Florence Rouget Philippe Loget Patrick Jego Tomas Le 3 Gallou Investigating in utero fetal death outcome of internal medicine consultation Keywords: Clinical aspects < Anti-phospholipid antibody syndrome, Drug treatment < Anti-phospholipid antibody syndrome, Epidemiology < Anti-phospholipid antibody syndrome in utero fetal death, stillbirth, internal medicine, placental pathology, 50 antiphospholipid antibodies syndrome, placental vascular disorders 51 Tweetable Results: From January 2007 to December 2014, 53 women who presented an IUFD at 14 32 weeks or more of gestational age were included. The main cause for each IUFD was 33 determined by expert agreement. Primary outcome was the prevalence of IUFD related to 34 placental disorders. Secondary outcomes included the frequency of antiphospholipid 35 antibodies syndrome (APS) among patients with IUFD of placental origin and the 36 pathological and clinical features associated to APS. IUFD resulted from placental disorders 37 in 36/53 (68%) patients, and remained unexplained in 11 cases (20.8%). Among the 36 38 patients with placental disorders, APS was diagnosed in 5 (13.9%) cases, and 4(11.1%) 39 patients were considered as having "non-criteria" APS. History of thrombosis (p=0.001) and 40 placental infarcts (p=0.047) were significantly associated to APS. 41 Conclusion: Placental disorders were the major cause for IUFD in patients who were 42 referred to internal medicine specialists. Importantly, APS was seldom found in patients with 43 placental disorders. Venous thromboembolism history and placental infarcts were both 44 significantly associated to APS. Further studies are needed in order to deepen our 45 understanding of the physiopathology of placental disorders and its underlying causes 46 among non-APS women, and to determine the best treatment regimen for future 47 There is currently no international consensus on the gestational age threshold which defines 56 In Utero Fetal Death (IUFD). The French National College of Gynecologists and 57 Obstetricians (CNGOF) defines IUFD as a spontaneous cessation of fetal cardiac activity 58 occurring before or during labor, after 14 weeks of amenorrhea 1 , whereas the threshold of 59 either 22 or 28 weeks or more is commonly used in many countries. 2 60 Bukowski et al [START_REF]Stillbirth Collaborative Research Network Writing Group ([END_REF] showed that obstetric complications, placental disease, genetic abnormality 61 and infections, were involved in respectively 29%, 26%, 13% and 13% of IUFD, then 62 followed by cord abnormalities (10.4%), hypertensive disorders (9.2%), and maternal 63 complications (7.8%). IUFD were considered idiopathic in 24% of cases. 64 The objectives of IUFD investigations are to identify the cause, to prepare for future 65 pregnancies, and to detect a maternal pathology requiring specific care, such as 66 antiphospholipid antibodies syndrome (APS). 1 67 APS diagnosis requires the presence of clinical and biological criteria meeting the Sydney 68 classification criteria. 
[START_REF] Miyakis | International consensus statement on an 260 update of the classification criteria for definite antiphospholipid syndrome (APS)[END_REF] However, many patients, while fulfilling the clinical criteria, cannot be 69 classified as having APS either because of low titers or non-persistent positivity of 70 conventional antibodies, or because of the detection of other antibodies that have currently 71 not been validated (eg anti-prothrombin ): in those cases, the appellation of "non-criteria" 72 APS is sometimes used. [START_REF] Rodríguez-García | 263 Examining the prevalence of non-criteria anti-phospholipid antibodies in patients with 264 anti-phospholipid syndrome: a systematic review[END_REF] Study objectives 75 The objectives of our study were to bring to light the internist's role in the diagnosis of IUFD 76 by 1) analyzing the final etiologies diagnosed, on which depend the treatments that will be 77 proposed for future pregnancies 2) evaluating the prevalence of IUFD consecutive to 78 placental vascular disorders 3) assessing the prevalence of APS among these patients and 79 comparing their clinical and pathological features to non-APS patients, in order to find out 80 relevant factors that support APS testing. 81 METHODS 83 All women who consulted in the internal medicine department of Rennes University hospital, 84 a French tertiary care center, after IUFD (defined as death at 14 weeks of gestation or later 85 according to the CNGOF) were retrospectively included between January, 1 st 2007 and 86 December, 31 st 2014. All cases of IUFD were reviewed by four specialists (a pediatrician, a 87 pathologist specialised in foetopathology, an obstetrician and an internist) and the main 88 cause of death was determined through consensual agreement. 89 According to the Amsterdam Placental Workshop Group Consensus Statement 8 , the 90 following histological features were recorded from the pathological records: placental infarcts, 91 retroplacental hematoma, hypotrophia, decidual arteriopathy, fetal vascular malperfusion, 92 thrombosis, villitis, intervillitis, and cord abnormalities. IUFD were classified as placenta-93 related if the lesions observed were deemed sufficient to lead to fetal death. 94 APS testing results were recorded for all patients, and obstetrical APS was diagnosed 95 according to Sydney criteria. Clinical and pathological characteristics of patients with APS, 96 "non-criteria" APS and no APS were compared. Patients with APS were classified in the 97 "maternal cause" category according to the CODAC classification[9], and they were also 98 considered as placenta-related IUFD if their placentas exhibited significant lesions. 99 53 patients consulted in internal medicine for IUFD, all of whom were referred by their 109 obstetrician. 22 patients (42%) were primigravida, 5 patients (9%) had a history of IUFD, 9 110 patients (17%) had a history of miscarriage, 2 patients had an antecedent of preeclampsia, 2 111 patients an antecedent of HELLP syndrome. A history of severe fetal growth restriction (<5 112 percentile) was mentioned in two cases. One patient suffered from chronic hypertension, no 113 patients had diabetes mellitus and none had renal disease. 9 patients (13%) were overweight 114 or obese. Three patients were known to have APS, revealed in all cases by venous thrombo-115 embolism (VTE). APS was considered as primary in all cases as none of our patient had 116 associated connective tissue disease. 
The patients' mean age at the time of IUFD was of 30 117 +/-4.5 years (extremes: 19-42). The mean term was 29.2 weeks of amenorrhea +/-7.9 118 (extremes 15-40). There were no multiple pregnancies. Distribution of IUFD according to 119 gestational age is shown in Figure 1. 120 Results of the consultation 150 The main causes of IUFD are reported in Table 1. Etiological categories were extracted from 151 the CODAC classification 9 : 6 cases issued from maternal causes, including the 5 definite 152 APS patients plus one patient whose IUFD was secondary to severe pre-eclampsia. IUFD of 153 "non-criteria" (n=4) and non-APS patients (n=27) were ruled as having a placental cause. 154 IUFD resulted from placental disorders in 36/53 (68%) patients: placental insufficiency in 33 155 (62%) cases (including one case of pre-eclampsia), and placental inflammatory disorders in 3 156 (6%) cases (Table 1). None of our patients had isolated gestational hypertension, renal 157 disease or connective tissue disease. 158 IUFD was caused by fetal anemia in one case which was classified as a fetal cause, by 159 Parvovirus B19 infection in one case, and by funicular pathology in 4 cases. 160 IUFD remained unexplained in 11/53 (21%) cases, including all the patients whose placenta 161 was not examined. 162 163 Comparison between APS, "non-criteria" APS and non-APS patients 164 The 36 patients who had significant placental disorders were allocated to one of the following 165 groups according to their APS status: definite APS, "non-criteria" APS and non-APS (in which 166 the patient with pre-eclampsia was included as her placenta exhibited hypotrophia along with 167 massive infarct and retroplacental hematoma). 168 Only the three patients with known APS had VTE history. Comparisons between the three 169 groups revealed that antecedent of VTE (p=0.001) and placental infarcts (p=0.047) were 170 significantly associated with the definite APS group, whereas fetal vascular malperfusion or 171 inflammatory disorders (villitis/ intervillitis) were not. Patients with "non-criteria" APS were not 172 different from non-APS patients regarding VTE history or pathological findings. 173 There were no differences in mothers' age, time of pregnancy termination, presence of 174 livedo, history of fetal loss, or placental hypotrophia, between the 3 groups. 175 176 DISCUSSION 177 53 cases of IUFD were referred to an internist over the study period. In the meantime, 440 178 cases were registered in our center, meaning that internists were consulted in 12% of cases. 179 Placental disorders were over represented, as they accounted for 31 (58%) of cases, 180 whereas Bukowski et al. showed that they are usually involved in only 23.6% of IUFD. [START_REF]Stillbirth Collaborative Research Network Writing Group ([END_REF] 181 Moreover, no genetic causes were observed while infectious and fetal causes were rarely 182 found (one case of each), although Bukowski et al demonstrated that such etiologies are 183 involved in respectively 29.3, 13.7 and 12.9% of IUFD. This reveals the selection bias which 184 applied while referring the patients to our department: a thorough etiological assessment had 185 previously been conducted by the obstetrician, thus ruling out the most common causes such 186 as genetics, infections or fetal pathologies. Overall, 36/53 (68%) patients were referred to 187 investigate IUFD due to inflammatory or vascular placental disorders, which demonstrates 188 the obstetricians' expectations concerning internists in this field. 
189 IUFD remains unexplained in 21% of cases which is comparable to the literature 4 10 11 . These 190 include the 4 patients whose placenta was not examined. This emphasizes the crucial 191 importance of performing placental pathological examinations. 192 In our study, as previously demonstrated, placental examination was highly contributive 12 13 , 193 since significant histological lesions were noted in 40/49 (81%) cases (including cord 194 abnormalities in 4 cases). 195 As for patients with placental abnormalities, only 5/36 (14%) were found to have APS 196 meeting Sydney criteria. [START_REF] Miyakis | International consensus statement on an 260 update of the classification criteria for definite antiphospholipid syndrome (APS)[END_REF] Unsurprisingly, history of thrombosis was significantly associated 197 with the diagnosis of definite APS (p=0.001). [START_REF] Reynaud | Risk of venous and arterial thrombosis 283 according to type of antiphospholipid antibodies in adults without systemic lupus 284 erythematosus: a systematic review and meta-analysis[END_REF] Among histological patterns, placental infarcts 198 were associated with APS (p=0.047), whereas other lesions such as fetal thrombosis or 199 villitis were not: this is consistent with previous studies which showed that infarcts is the most 200 common pathological feature encountered in patients with obstetrical APS. 15 16 These results 201 emphasize the need for APL testing in patients presenting a VTE history or placental infarcts. 202 Diagnosing obstetrical APS is essential since such patients may benefit from the association 203 of aspirin and heparin, which increases the live birth rate for future pregnancies up to 70%. [START_REF] Rai | Randomised controlled trial of aspirin and 291 aspirin plus heparin in pregnant women with recurrent miscarriage associated with 292 phospholipid antibodies (or antiphospholipid antibodies)[END_REF] others. In our study, "non-criteria" APS patients were not different from non-APS patients 208 regarding clinical or pathological features. Notably, they showed neither a larger history of 209 thrombosis, nor more placental infarcts. This finding does not support the relevance of an 210 entity such as "non-criteria" APS. The best treatment regimen whose would benefit to 211 patients with "non-criteria" APS for future pregnancies is still debated 19 20 , although Mekinian 212 et al recently showed that they may benefit from the same treatment as patients with definite 213 definite APS. [START_REF] Mekinian | Non-conventional antiphospholipid 302 antibodies in patients with clinical obstetrical APS: Prevalence and treatment efficacy in 303 pregnancies[END_REF] 214 Above all, 27/36 (75%) patients did not have APS although they exhibited significant 215 placental lesions. Placental inflammatory disorders, including villitis and intervillitis were 216 found in 3 of our patients (8%). Their physiopathology remains unclear in most of cases [START_REF] Derricott | 305 Characterizing Villitis of Unknown Etiology and Inflammation in Stillbirth[END_REF] ; 217 nevertheless previous studies have shown that such lesions might not be associated to 218 APS. 15 23-26 219 The final question is to find out how to manage the future pregnancies of the 24/36 (67%) 220 Abbreviations: IUFD= In Utero Fetal Death. APS= Antiphospholipid Antibodies Syndrome. 49 were analyzed by conducting a Chi square or Fisher exact 101 test. Quantitative data was analyzed by conducting student or Mann and Whitney U test. 
We 102 performed all tests with a significance level of P<0.05. Statistical analyzis was performed 103 using SPSS 20.0 software. 104This study was approved by the ethics committee of Rennes University Hospital. At the time of IUFD, 7/53 (13%) patients were receiving anticoagulant therapy or antiplatelet 121 agents: 4 patients were treated with low dose aspirin (LDA), because of previous fetal loss in 122 3 cases, and of severe fetal growth restriction in one case. The three patients with known 123 APS were receiving LDA combined with low-molecular-weight heparin (LMWH). 124 LA, aCL and β2-GPI) were tested in 49/53 patients (90%), and were 126 positive in 9 (17%) cases. 127 Among the 9 APL positive patients, 8 underwent repeat testing 12 weeks later. Among them, 128 5 still tested positive: they were classified as having definite APS. The four other patients 129 presented "non-criteria" APS: one of them was tested positive for LA without confirmation12 130 weeks apart and presented low titers aCL, the 3 others were negative for conventional APL, 131 but tested positive for anti-prothrombin antibodies. 132Pathological findings 133A placental histological analyzis was performed in 49/53 cases (92%). Anomalies were found 134 in 40/49 (81%) of the placentas. 135 Vascular disorders were noted in 33/49 (73%) placentas. Retroplacental hematoma was 136 found in 10 cases, infarcts in 19 cases. Thrombosis were observed in 2 cases, decidual 137 arteriopathy was found in 3 cases. Fetal vascular malperfusion was observed in 7 patients, 138 only one of whom had gestational diabetes. Inflammatory disorders were found in 3 cases: 2 139 cases of villitis and one case of intervillitis, in all of which TORCH (Toxoplasmosis, Rubella, 140 CMV and Herpes virus) screening was negative. IUFD was related to cord anomaly in 4 141 cases: cord thrombosis in 3 cases, and tight loops in one case. 142All the placentas of the 9 APL positive patients (definite and "non-criteria" APS) were 143 examined: all presented vascular disorders, which associated hypotrophy (constant) with 144 decidual arteriopathy in one case, infarcts in 8/9 cases, retroplacental hematoma in 2 cases, 145 and thrombi in two cases. In one patient was noted the co-existence of fetal vascular criteria" APS was discussed in 4/36 (11%) patients, because of intermittent LA 206 positivity and low-titers aCL in one of them, and isolated anti-prothrombin positivity in the 3 207 Figure 1 . 1 Figure 1. Distribution of cases according to gestationnal age. Table 1 . 1 Main cause of In Utero Fetal Death, determined through expert agreement. Abbreviations: IUFD: In Utero Fetal Doss; PVD: Placental Vascular Disorders; PID: Placental inflammatory Disorders; APS: Antiphospholipid Syndrome; PE: Pre-Eclampsia. Main cause of IUFD N % Total: 53 Placental 30 56.6 (27 PVD, 3 PID) Maternal disease 6 11.3 (5 proven APS, 1 PE) Cord conditions 4 7.5 Infection 1 1.9 Fetal 1 1.9 Intrapartum or 0 0 obstetric complication Congenital/ genetic 0 0 Unknown 11 20.8 Total 53 100 (APS) among women referred to the internal medicine department. Huchon C, Deffieux X, Beucher G et al (2016) Pregnancy loss: French clinical practice guidelines. Eur J Obstet Gynecol Reprod Biol 201:18-26 Alexander S, Zeitlin J (2016) Stillbirths and fetal deaths-Better definitions to monitor (Codac): a utilitarian approach to the classification of perinatal deaths. 
BMC Pregnancy Childbirth 9:22
Gardosi J, Kady SM, McGeown P, Francis A, Tonks A (2005) Classification of stillbirth by 136:9-15
Kaandorp SP, Goddijn M, van der Post JAM et al (2010) Aspirin plus heparin or aspirin alone in women with recurrent miscarriage. N Engl J Med 362:1586-1596
01775320
en
[ "spi.meca.solid" ]
2024/03/05 22:32:18
2018
https://hal.science/hal-01775320/file/Cocou_Applicable%20Analysis_HAL.pdf
Marius Cocou email: [email protected] A variational inequality and applications to quasistatic problems with Coulomb friction Keywords: Evolution inequalities, existence results, quasistatic problems, nonlinear materials, Coulomb friction MSC(2010): 35Q74, 49J40, 74D10, 74H20 The aim of this paper is to study an evolution variational inequality that generalizes some contact problems with Coulomb friction in small deformation elasticity. Using an incremental procedure, appropriate estimates and convergence properties of the discrete solutions, the existence of a continuous solution is proved. This abstract result is applied to quasistatic contact problems with a local Coulomb friction law for nonlinear Hencky and also for linearly elastic materials. Introduction This paper concerns the analysis of an evolution variational inequality that represents a generalization of some quasistatic elastic problems with pointwise Coulomb friction and relaxed unilateral contact. Existence and approximation of solutions to the quasistatic elastic problems have been studied for various contact conditions. Based on the variational formulation proposed in [START_REF] Telega | Quasistatic Signorini's problem with friction and duality[END_REF], the quasistatic unilateral contact problems with local Coulomb friction have been studied in [START_REF] Andersson | Existence results for quasistatic contact problems with Coulomb friction[END_REF][START_REF] Rocca | Existence and approximation of a solution to quasistatic Signorini problem with local friction[END_REF][START_REF] Rocca | Numerical analysis of quasistatic unilateral contact problems with local friction[END_REF] and the normal compliance models have been investigated by several authors, see, e.g. [START_REF] Kikuchi | Contact problems in elasticity : a study of variational inequalities and finite element methods[END_REF][START_REF] Andersson | A quasistatic frictional problem with a normal compliance penalization term[END_REF][START_REF] Han | Quasistatic contact problems in viscoelasticity and viscoplasticity[END_REF] and references therein. Dynamic frictional contact problems with normal compliance laws for some viscoelastic bodies have been studied in [START_REF] Martins | Existence and uniqueness results for dynamic contact problems with nonlinear normal and friction interface laws[END_REF][START_REF] Kikuchi | Contact problems in elasticity : a study of variational inequalities and finite element methods[END_REF][START_REF] Kuttler | Dynamic friction contact problems for general normal and friction laws[END_REF][START_REF] Chau | A dynamic frictional contact problem with normal damped response[END_REF][START_REF] Migórski | Nonlinear inclusions and hemivariational inequalities[END_REF]. 1 A comprehensive presentation of contact models for the quasistatic processes can be found in [START_REF] Shillor | Models and analysis of quasistatic contact[END_REF]. An unified approach, which can be applied to various quasistatic problems, including unilateral and bilateral contact with nonlocal friction, or normal compliance conditions, has been considered in [START_REF] Badea | Internal and subspace correction approximations of implicit variational inequalities[END_REF], and different (quasi)static contact problems with nonlocal friction are analyzed in [START_REF] Capatina | Variational inequalities and frictional contact problems[END_REF]. 
A static contact problem with relaxed unilateral conditions and pointwise Coulomb friction was studied in [START_REF] Rabier | Fixed points of multi-valued maps and static Coulomb friction problems[END_REF], based on abstract formulations and Ky Fan's fixed point theorem. Recently, an extension of these results to dynamic contact problems in viscoelasticity was treated in [START_REF] Cocou | A class of dynamic contact problems with Coulomb friction in viscoelasticity[END_REF][START_REF] Cocou | A variational analysis of a class of dynamic problems with slip dependent friction[END_REF]. This paper extends the results presented in [START_REF] Rabier | Fixed points of multi-valued maps and static Coulomb friction problems[END_REF] to a new evolution variational inequality involving a nonlinear operator and with applications to two-field formulations of some nonsmooth elastic quasistatic contact problems with friction. The paper is organized as follows. In Section 2, a general evolution variational inequality is analyzed by an incremental method. Using the Ky Fan's theorem, the existence of incremental solutions is proved. Then several estimates and compactness arguments enable to pass to limits in order to establish the existence of a continuous solution. In Section 3, applications to quasistatic contact problems with local Coulomb friction, for nonlinear Hencky and linearly elastic bodies, are presented. An implicit variational inequality For simplicity and also in view of applications to contact mechanics, we shall confine attention to the case when Ω is an open, bounded, connected set Ω ⊂ R d , d = 2, 3, with the boundary Γ ∈ C 1,1 and with Ξ an open part of Γ. We denote Ξ T := Ξ × (0, T ), where 0 < T < +∞, and define the closed convex cones L 2 -(Ξ), L 2 -(Ξ T ) in the Hilbert spaces L 2 (Ξ), L 2 (Ξ T ), respectively, as follows: L 2 -(Ξ) := {δ ∈ L 2 (Ξ); δ ≤ 0 a.e. in Ξ}, L 2 -(Ξ T ) := {δ ∈ L 2 (0, T ; L 2 (Ξ)); δ ≤ 0 a.e. in Ξ T }. Let κ, κ : R → R be two mappings with κ lower semicontinuous and κ upper semicontinuous, satisfying the following conditions: κ(s) ≤ κ(s) ≤ 0 ∀ s ∈ R, (1) ∃ r 0 ≥ 0 such that |κ(s)| ≤ r 0 ∀ s ∈ R. (2) For every ζ ∈ L 2 (Ξ), define the following subset of L 2 -(Ξ): Λ(ζ) = {η ∈ L 2 -(Ξ); κ • ζ ≤ η ≤ κ • ζ a.e. in Ξ }, (3) which is clearly nonempty, because the bounding functions belong to the respective set, closed and convex. Since meas(Ξ) < ∞ and κ, κ satisfy (2), it is also readily seen that for all ζ ∈ L 2 (Ξ) the set Λ(ζ) is bounded in norm in L 2 (Ξ) by R 0 = r 0 (meas(Ξ)) 1/2 and in L ∞ (Ξ) by r 0 . The following compactness theorem proved in [START_REF] Simon | Compact sets in the space L p (0, T ; B)[END_REF] will be used in this paper. Theorem 2.1. Let X, Û and Ŷ be three Banach spaces such that X ⊂ Û ⊂ Ŷ with compact embedding from X into Û . (i) Let G be bounded in L p (0, T ; X), where 1 ≤ p < ∞, and ∂G/∂t := { ḟ ; f ∈ G} be bounded in L 1 (0, T ; Ŷ ). Then G is relatively compact in L p (0, T ; Û ). (ii) Let G be bounded in L ∞ (0, T ; X) and ∂G/∂t be bounded in L r (0, T ; Ŷ ), where r > 1. Then G is relatively compact in C([0, T ]; Û ). If H is a Hilbert space, unless otherwise stated we shall denote by . , . H its inner product and by . H the corresponding norm. Let (V, . , . , . ) and (U, . U ) be two Hilbert spaces such that V ⊂ U with continuous and compact embedding. 
Consider a functional F : V → R differentiable on V and assume that its derivative F : V → V is strongly monotone and Lipschitz continuous, that is there exist two constants α, β > 0 for which α v -u 2 ≤ F (v) -F (u), v -u (4) and F (v) -F (u) ≤ β v -u (5) for all u, v ∈ V . Using the relations F (v) -F (u) = 1 0 F (u + r(v -u)), v -u dr = F (u), v -u + 1 0 F (u + r(v -u)) -F (u), v -u dr and (4), [START_REF] Kikuchi | Contact problems in elasticity : a study of variational inequalities and finite element methods[END_REF], it is easily seen that for all u, v ∈ V it results F (u), v -u + α 2 v -u 2 ≤ F (v) -F (u) ≤ F (u), v -u + β 2 v -u 2 . ( 6 ) We remark that since F satisfies [START_REF] Andersson | A quasistatic frictional problem with a normal compliance penalization term[END_REF], it follows that F is strictly convex and sequentially weakly lower semicontinuous on V . Let (X, . X ) be a Hilbert space such that X ⊂ L 2 (Γ) with continuous and compact embedding, and l 0 : V → X, l : V → L 2 (Ξ), φ : L 2 -(Ξ) × V → R be three mappings satisfying the following conditions: l 0 is linear and continuous, ∃ k 1 > 0 such that ∀ v 1 , v 2 ∈ V, l(v 1 ) -l(v 2 ) L 2 (Ξ) ≤ k 1 v 1 -v 2 U , (7) ∀ γ, δ ∈ L 2 -(Ξ), ∀ v, w ∈ V verifying γ ∈ Λ(l(v)) and δ ∈ Λ(l(w)), γ -δ, l 0 (v -w) L 2 (Ξ) ≤ 0. (8) ∀ γ ∈ L 2 -(Ξ), ∀ θ ≥ 0, ∀ v 1 , v 2 , v ∈ V, φ(γ, v 1 + v 2 ) ≤ φ(γ, v 1 ) + φ(γ, v 2 ), (9) φ(γ, θv) = θ φ(γ, v), (10) ∀ v ∈ V, φ(0, v) = 0, (11) ∃ k 2 , k 3 > 0 such that ∀ γ, δ ∈ L 2 -(Ξ), ∀ v ∈ V, |φ(γ, v) -φ(δ, v)| ≤ k 2 γ -δ L 2 (Ξ) v U , (12) |φ(γ, v) -φ(δ, v)| ≤ k 3 γ -δ X v , (13) ∃ k 4 > 0 such that γ 1 -γ 2 X ≤ k 4 ( u 1 -u 2 + f 1 -f 2 ), ( (14) ) for all γ 1,2 ∈ L 2 -(Ξ), u 1,2 , f 1,2 , d 1,2 ∈ V verifying (Q 1 ) F (u 1 ), v -u 1 -γ 1 , l 0 (v -u 1 ) L 2 (Ξ) +φ(γ 1 , v -d 1 ) -φ(γ 1 , u 1 -d 1 ) ≥ f 1 , v -u 1 ∀ v ∈ V, (Q 2 ) F (u 2 ), v -u 2 -γ 2 , l 0 (v -u 2 ) L 2 (Ξ) +φ(γ 2 , v -d 2 ) -φ(γ 2 , u 2 -d 2 ) ≥ f 2 , v -u 2 ∀ v ∈ V, 15 and we assume that k 3 k 4 < α. ( ) 16 Let f ∈ W 1,2 (0, T ; V ), u 0 ∈ V , λ 0 ∈ Λ(l(u 0 )) be given and satisfy the following compatibility condition: F (u 0 ), v -u 0 -λ 0 , l 0 (v -u 0 ) L 2 (Ξ) (17) +φ(λ 0 , v) -φ(λ 0 , u 0 ) ≥ f (0), v -u 0 ∀ v ∈ V. Consider the following problem. Problem Q : Find u ∈ W 1,2 (0, T ; V ), λ ∈ W 1,2 (0, T ; X ) such that u(0) = u 0 , λ(0) = λ 0 , λ(t) ∈ Λ(l(u(t))) for almost all t ∈ (0, T ), and F (u), v -u -λ, l 0 (v -u) L 2 (Ξ) + φ(λ, v) (18) -φ(λ, u) ≥ f, v -u ∀ v ∈ V a.e. on (0, T ). Incremental formulations For n ∈ N * , we set ∆t := T /n, t i := i ∆t, i = 0, 1, ..., n. If θ is a continuous function of t ∈ [0, T ] valued in some vector space, we use the notations θ i := θ(t i ) unless θ = u, and if i , ∀ i ∈ {0, 1, ..., n}, are elements of some vector space, then we set ∂ i := i+1 -i ∆t , ∆ i := i+1 -i ∀ i ∈ {0, 1, ..., n -1}. We approximate the problem Q using the following sequence of incremental problems (Q i,n ) i=0,1,...,n-1 . Problem Q i,n : Find u i+1 ∈ V , λ i+1 ∈ Λ(l(u i+1 )) such that F (u i+1 ), v -∂u i -λ i+1 , l 0 (v -∂u i ) L 2 (Ξ) + φ(λ i+1 , v) (19) -φ(λ i+1 , ∂u i ) ≥ f i+1 , v -∂u i ∀ v ∈ V. It is easily seen that for all i ∈ {0, 1, ..., n -1} the problem Q i,n is equivalent to the following implicit variational inequality. Problem Qi,n : Find u i+1 ∈ V , λ i+1 ∈ Λ(l(u i+1 )) such that F (u i+1 ), v -u i+1 -λ i+1 , l 0 (v -u i+1 ) L 2 (Ξ) + φ(λ i+1 , v -u i ) ( 20 ) -φ(λ i+1 , u i+1 -u i ) ≥ f i+1 , v -u i+1 ∀ v ∈ V. 
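A finite-dimensional analogue may help make the incremental construction concrete. In the sketch below, V is replaced by R^n, F by the quadratic energy ½ vᵀAv (so that (4)-(5) hold), l_0 and l by the identity, φ(γ, w) by the weighted friction-type term Σ_k μ|γ_k||w_k|, and Λ(ζ) by the componentwise box built from the bounding functions of Example 1 in Section 3; the inner problem (23) is solved by a proximal-gradient loop and the multiplier is updated by selecting a point of Λ(l(u_γ)). This is only an illustrative heuristic for one incremental step, with assumed data, and not the fixed-point argument used in the existence proof:

import numpy as np

rng = np.random.default_rng(0)
n, mu, M_bound = 8, 0.3, 1.0
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)             # symmetric positive definite, so (4)-(5) hold
f_next = 5.0 * rng.standard_normal(n)   # discrete load f_{i+1}
u_prev = np.zeros(n)                    # previous incremental solution u_i

def kappa_low(s):                       # kappa(s)    = -M for s >= 0, 0 otherwise (Example 1)
    return np.where(s >= 0.0, -M_bound, 0.0)

def kappa_up(s):                        # kappa_bar(s) = -M for s > 0, 0 otherwise
    return np.where(s > 0.0, -M_bound, 0.0)

def solve_inner(gamma, n_iter=500):
    """Proximal-gradient solve of (23) with F(v) = 1/2 v.Av, l_0 = id and
    phi(gamma, w) = sum_k mu*|gamma_k|*|w_k|:
    min_v 1/2 v.Av - (gamma + f).v + phi(gamma, v - u_i)."""
    step = 1.0 / np.linalg.norm(A, 2)
    thr = step * mu * np.abs(gamma)
    w = np.zeros(n)                                   # w = v - u_i
    for _ in range(n_iter):
        grad = A @ (w + u_prev) - gamma - f_next
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)   # soft-thresholding step
    return w + u_prev

gamma = np.zeros(n)
converged = False
for it in range(100):                   # heuristic iteration on the multiplier (map Phi_i)
    u = solve_inner(gamma)
    gamma_new = np.clip(gamma, kappa_low(u), kappa_up(u))   # pick a point of Lambda(l(u_gamma))
    if np.linalg.norm(gamma_new - gamma) < 1e-10:
        converged = True
        break
    gamma = gamma_new

print("fixed point reached:", converged, "after", it + 1, "iterations")
print("incremental solution u_{i+1}:", np.round(u, 3))
print("multiplier lambda_{i+1}    :", np.round(gamma, 3))

Repeating this step over i = 0, ..., n-1 and interpolating in time mimics the sequences (u_n, lambda_n) whose limits are studied below.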
Let us define the following functions: u n (0) = ûn (0) = u 0 , λ n (0) = λ 0 , f n (0) = f 0 and ∀ i ∈ {0, 1, ..., n -1}, ∀ t ∈ (t i , t i+1 ], u n (t) = u i+1 , λ n (t) = λ i+1 , ûn (t) = u i + (t -t i )∂u i , λn (t) = λ i + (t -t i )∂λ i , f n (t) = f i+1 . Then for all n ∈ N * each of the sequences of inequalities (Q i,n ) i=0,1,...,n-1 , ( Qi,n ) i=0,1,...,n-1 is equivalent to the following incremental formulation. Problem Q n : Find u n ∈ L 2 (0, T ; V ), λ n ∈ L 2 (Ξ T ) such that λ n (t) ∈ Λ(l(u n (t))) ∀ t ∈ (0, T ) and F (u n (t)), v - d dt ûn (t) -λ n (t), l 0 (v - d dt ûn (t)) L 2 (Ξ) +φ(λ n (t), v) -φ(λ n (t), d dt ûn (t)) (21) ≥ f n (t), v - d dt ûn (t) ∀ v ∈ V, a.e. on (0, T ). First, we prove the existence of a solution to the incremental problem Qi,n by a fixed point method. Let Φ i : L 2 -(Ξ) → 2 L 2 -(Ξ) \ {∅} be the set-valued mapping defined by for all γ ∈ L 2 -(Ξ) Φ i (γ) = Λ(l(u γ )), (22) where u γ is the solution of the following variational inequality of the second kind: find u γ ∈ V such that F (u γ ), v -u γ -γ, l 0 (v -u γ ) L 2 (Ξ) + φ(γ, v -u i ) (23) -φ(γ, u γ -u i ) ≥ f i+1 , v -u γ ∀ v ∈ V. It is easily seen that λ is a fixed point of Φ i , i.e. λ ∈ Φ i (λ), iff (u i+1 , λ i+1 ) = (u λ , λ) is a solution of the problem Qi,n . We shall prove the existence of a fixed point of the multifunction Φ i by using a corollary of the Ky Fan's fixed point theorem [START_REF] Fan | Fixed points and minimax theorems in locally convex topological linear spaces[END_REF], proved in [START_REF] Rabier | Fixed points of multi-valued maps and static Coulomb friction problems[END_REF] in the particular case of a reflexive Banach space. Note that since Y is a reflexive Banach space and D is convex, closed and bounded, there is no assumption that Y is separable, see [START_REF] Rabier | Fixed points of multi-valued maps and static Coulomb friction problems[END_REF][START_REF] Browder | Nonlinear operators and nonlinear equations of eyolution in Banach spaces[END_REF]. Theorem 2.2. Assume that (1 -5), (7 -14) hold. Then there exists λ ∈ L 2 -(Ξ) such that λ ∈ Φ i (λ) and (u i+1 , λ i+1 )=(u λ , λ) is a solution of the prob- lem Qi,n . Proof. We apply Proposition 2.1 to Φ = Φ i , Y = L 2 (Ξ) and D = L 2 -(Ξ) ∩ ζ ∈ L 2 (Ξ); ζ L 2 (Ξ) ≤ R 0 . The set D ⊂ L 2 (Ξ) is clearly convex, closed and bounded. By ( 4), ( 5), ( 7), ( 8), (10 -14), for every γ ∈ D the classical variational inequality [START_REF] Brezis | Problèmes unilatéraux[END_REF] has a unique solution u γ . Since for each ζ ∈ L 2 (Ξ) the set Λ(ζ) is nonempty, convex, closed, and bounded by R 0 , it follows that Φ i (γ) = Λ(l(u γ )) is a nonempty, convex and closed subset of D for every γ ∈ D. In order to prove that the multifunction Φ i is sequentially weakly upper semicontinuous, let γ p γ in L 2 (Ξ), γ p ∈ D, η p ∈ Φ i (γ p ) ∀ p ∈ N, η p η in L 2 (Ξ) and let us verify that η ∈ Φ i (γ). Let u γp ∈ V be the solution of the variational inequality F (u γp ), v -u γp -γ p , l 0 (v -u γp ) L 2 (Ξ) + φ(γ p , v -u i ) (24) -φ(γ p , u γp -u i ) ≥ f i+1 , v -u γp ∀ v ∈ V. 
Taking v = 0 in (24) and using [START_REF] Chau | A dynamic frictional contact problem with normal damped response[END_REF], we obtain F (u γp ) -F (0), u γp ≤ γ p , l 0 (u γp ) L 2 (Ξ) + φ(γ p , -u γp ) + f i+1 -F (0), u γp , so that, by ( 4), ( 7), ( 12), [START_REF] Capatina | Variational inequalities and frictional contact problems[END_REF], α u γp 2 ≤ ( γ p X l 0 + k 3 γ p X + f i+1 + F (0) ) u γp , which implies u γp ≤ α -1 ( l 0 + k 3 ) γ p X + f i+1 + F (0) ). Thus u γp ≤ C 1 ( γ p X + f i+1 + F (0) ) ∀ p ∈ N, (25) where C 1 = α -1 max( l 0 + k 3 , 1). As the sequence (γ p ) p is bounded in L 2 (Ξ), it follows that (u γp ) p is bounded in V which implies that there exists a subsequence, still denoted by (u γp ) p , and an element u ∈ V such that u γp u in V. (26) By ( 6) and ( 10), the inequality [START_REF] Nečas | Mathematical theory of elastic and elasto-plastic bodies. An introduction[END_REF] implies F (v) -F (u γp ) -γ p , l 0 (v -u γp ) L 2 (Ξ) + φ(γ p , v -u γp ) (27) ≥ f i+1 , v -u γp + α 2 v -u γp 2 ∀ v ∈ V, and taking v = u, we obtain F (u) -F (u γp ) -γ p , l 0 (u -u γp ) L 2 (Ξ) + φ(γ p , u -u γp ) ≥ f i+1 , u -u γp + α 2 u -u γp 2 . As F is sequentially weakly lower semicontinuous, using the previous relation, [START_REF] Han | Quasistatic contact problems in viscoelasticity and viscoplasticity[END_REF], [START_REF] Badea | Internal and subspace correction approximations of implicit variational inequalities[END_REF] and the compact embeddings X ⊂ L 2 (Ξ), V ⊂ U , we have lim sup p→∞ α 2 u -u γp 2 ≤ F (u) + lim sup p→∞ (-F (u γp )) + lim p→∞ | γ p , l 0 (u -u γp ) L 2 (Ξ) | + lim p→∞ φ(γ p , u -u γp ) -lim p→∞ f i+1 , u -u γp ≤ F (u) -lim inf p→∞ F (u γp ) + lim p→∞ γ p L 2 (Ξ) l 0 (u -u γp ) L 2 (Ξ) + lim p→∞ k 2 γ p L 2 (Ξ) u -u γp U -lim p→∞ f i+1 , u -u γp = F (u) -lim inf p→∞ F (u γp ) ≤ 0, which proves that u γp → u in V. ( 28 ) Passing to the limit in [START_REF] Nečas | Mathematical theory of elastic and elasto-plastic bodies. An introduction[END_REF], it follows that u is a solution of ( 23) and since its solution is unique we obtain that u γ = u = lim p→∞ u γp . Now, the relation η p ∈ Φ i (γ p ) implies η p ∈ Λ(l(u γp )) that is κ • l p ≤ η p ≤ κ • l p a.e. in Ξ, (29) for all p ∈ N, where l p := l(u γp ). The relations (29) are equivalent to ω κ • l p ≤ ω η p ≤ ω κ • l p , for every measurable subset ω ⊂ Ξ and for all p ∈ N. Using (28), ( 8), the semi-continuity of κ and κ, the relations (1), ( 2), the convergence property ω η p → ω η, and passing to limits according to Fatou's lemma, we obtain ω κ • l(u γ ) ≤ ω η ≤ ω κ • l(u γ ), (30) for every measurable subset ω ⊂ Ξ, which implies η ∈ Φ i (γ). By Proposition 2.1 there exists a fixed point λ of Φ i and (u λ , λ) is clearly a solution to the problem Qi,n . Remark 2.1. This existence result insures also that there exists (u 0 , λ 0 ) satisfying the compatibility condition (17). Existence of a solution to the continuous problem We now establish some useful estimates independent of n for the solutions of the incremental formulations Qi,n and Q n . Lemma 2.1. Under the above hypotheses, for all n ∈ N * and all i ∈ {0, 1, ..., n -1} the following estimates hold: u i+1 ≤ C 1 ( λ i+1 X + f i+1 + F (0) ), (31) ∆u i ≤ k 3 α ∆λ i X + 1 α ∆f i ), ( 32 ) ∆λ i X ≤ k 4 ( ∆u i + ∆f i ), ( 33 ) ∆u i ≤ C 2 ∆f i , ( 34 ) ∆λ i X ≤ C 3 ∆f i , ( 35 ) where C 2 = k 3 k 4 + 1 α -k 3 k 4 , C 3 = (α + 1)k 4 α -k 3 k 4 . Proof. 
By similar arguments to those that enabled to prove [START_REF] Zeidler | Nonlinear functional analysis and its applications[END_REF], using [START_REF] Browder | Nonlinear operators and nonlinear equations of eyolution in Banach spaces[END_REF] the estimate (31) follows. If we take v = u i in [START_REF] Browder | Nonlinear operators and nonlinear equations of eyolution in Banach spaces[END_REF] then F (u i+1 ), u i -u i+1 -λ i+1 , l 0 (u i -u i+1 ) L 2 (Ξ) (36) -φ(λ i+1 , u i+1 -u i ) ≥ f i+1 , u i -u i+1 , and taking v = u i+1 in (20), corresponding to i - 1 if i ≥ 1, or in (17) if i = 0, we have F (u i ), u i+1 -u i -λ i , l 0 (u i+1 -u i ) L 2 (Ξ) + φ(λ i , u i+1 -u i-1 ) (37) -φ(λ i , u i -u i-1 ) ≥ f i , u i+1 -u i . By [START_REF] Chau | A dynamic frictional contact problem with normal damped response[END_REF], the inequalities (36) and (37) imply F (u i+1 ) -F (u i ), u i+1 -u i ≤ λ i+1 -λ i , l 0 (u i+1 -u i ) L 2 (Ξ) (38) +φ(λ i , u i+1 -u i ) -φ(λ i+1 , u i+1 -u i ) + f i+1 -f i , u i+1 -u i . As (u i+1 , λ i+1 ) and (u i , λ i ) are solutions of Qi+1,n and Qi,n , respectively, we have λ i+1 ∈ Λ(l(u i+1 )), λ i ∈ Λ(l(u i )), so that by ( 9) λ i+1 -λ i , l 0 (u i+1 -u i ) L 2 (Ξ) ≤ 0. Using this relation, ( 4) and ( 14) in (38), we have α u i+1 -u i 2 ≤ k 3 λ i+1 -λ i X u i+1 -u i + f i+1 -f i u i+1 -u i , from which (32) follows. From [START_REF] Rabier | Fixed points of multi-valued maps and static Coulomb friction problems[END_REF], we obtain (33) and by (32), (33) the estimates (34), (35) can be easily verified. Based on the previous lemma and the fact that f ∈ W 1,2 (0, T ; V ) is absolutely continuous, after possibly being redefined on a set of measure zero, the following estimates can be established by a straightforward computation, see, e.g. [START_REF] Cocou | Formulation and approximation of quasistatic frictional contact[END_REF], [START_REF] Capatina | A class of implicit variational inequalities and applications to frictional contact[END_REF]. Lemma 2.2. For all n ∈ N * u n (t) ≤ C 1 ( λ n (t) X + f n (t) + F (0) ) ∀ t ∈ [0, T ], (39) u n (t) -ûn (t) ≤ T n d dt ûn (t) ≤ C 2 f n (t) -f n (t - T n ) (40) ≤ C 2 min{t+ T n ,T } t-T n ḟ (τ ) dτ ∀ t ∈ [0, T ], λ n (t) -λn (t) X ≤ T n d dt λn (t) X (41) ≤ C 3 f n (t) -f n (t - T n ) ∀ t ∈ [0, T ], u n -ûn L 2 (0,T ;V ) = T n √ 3 d dt ûn L 2 (0,T ;V ) (42) ≤ C 2 T n √ 3 ḟ L 2 (0,T ;V ) , λ n -λn L 2 (0,T ;X ) = T n √ 3 d dt λn L 2 (0,T ;X ) (43) ≤ C 3 T n √ 3 ḟ L 2 (0,T ;V ) . Using Lemma 2.2, it follows that (û n ) n is bounded in W 1,2 (0, T ; V ), ( λn ) n is bounded in W 1,2 (0, T ; X ) ∩ L ∞ (Ξ T ), and since all these functions are absolutely continuous, after possibly being redefined on a set of measure zero, we have the following convergence results. Lemma 2.3. There exist subsequences of (u n , ûn ) n and (λ n , λn ) n , denoted by (u np , ûnp ) p and (λ np , λnp ) p , and two elements u ∈ W 1,2 (0, T ; V ), λ ∈ W 1,2 (0, T ; X ) ∩ L 2 (Ξ T ) such that u np (t) u(t) in V ∀ t ∈ [0, T ], ( 44 ) ûnp u in W 1,2 (0, T ; V ), ( 45 ) λ np (t) λ(t) in X ∀ t ∈ [0, T ], ( 46 ) λ np , λnp λ in L 2 (0, T ; L 2 (Ξ)), ( 47 ) λnp λ in W 1,2 (0, T ; X ). ( 48 ) Lemma 2.4. For the subsequences (û np ) p , (λ np ) p , the following relation holds: lim inf p→∞ T 0 φ(λ np (t), d dt ûnp (t)) dt ≥ T 0 φ(λ(t), d dt û(t)) dt. ( 49 ) Proof. According to Theorem 2.1 with G = ( λnp ) p , X = L 2 (Ξ), Û = H ι-1/2 (Ξ), Ŷ = X , p = 2, 0 < ι < 1 2 , and to (47), (48), we obtain that λ np , λnp → λ in L 2 (0, T ; X ). 
( 50 ) By ( 14) it follows that T 0 (φ(λ np (t), d dt ûnp (t)) -φ(λ(t), d dt ûnp (t))) dt ≤ k 3 T 0 λ np (t) -λ(t) X d dt ûnp (t) dt ≤ k 3 λ np -λ L 2 (0,T ;X ) d dt ûnp L 2 (0,T ;V ) , which implies lim p→∞ T 0 (φ(λ np (t), d dt ûnp (t)) -φ(λ(t), d dt ûnp (t))) dt = 0. ( 51 ) Since by ( 10), ( 11), [START_REF] Capatina | Variational inequalities and frictional contact problems[END_REF], φ(λ(t), •) is convex lower semicontinuous on V for a.e. t ∈ [0, T ], the mapping T 0 φ(λ(t), •) dt is convex lower semicontinuous on L 2 (0, T ; V ) (see, e.g. [START_REF] Brezis | Problèmes unilatéraux[END_REF]), so that lim inf p→∞ T 0 φ(λ(t), d dt ûnp (t)) dt ≥ T 0 φ(λ(t), d dt û(t)) dt. (52) From ( 51) and ( 52), (49) follows. Now, we prove the main strong convergence and existence result. Theorem 2.3. Under the assumptions (1 -5), (7 -16), every convergent subsequence of Lemma 2.3, (u np , ûnp ) p , (λ np , λnp ) p , and their limits u ∈ W 1,2 (0, T ; V ), λ ∈ W 1,2 (0, T ; X ) ∩ L 2 (Ξ T ) have the following strong convergence properties u np (t) → u(t) in V ∀ t ∈ [0, T ], (53) λ np (t) → λ(t) in X ∀ t ∈ [0, T ], (54) and (u, λ) is a solution to the problem Q. Proof. In order to prove (53), we use the same method as the one that enabled to obtain (28). By [START_REF] Chau | A dynamic frictional contact problem with normal damped response[END_REF] the sequence ( Qi,n ) i=0,1,...,n-1 implies the following inequality: for every t ∈ [0, T ] F (u n (t)), v -u n (t) -λ n (t), l 0 (v -u n (t)) L 2 (Ξ) (55) +φ(λ n (t), v -u n (t)) ≥ f n (t), v -u n (t) ∀ v ∈ V and taking v = u, by ( 6) we derive F (u(t)) -F (u np (t)) -λ np (t), l 0 (u(t) -u np (t)) L 2 (Ξ) (56) +φ(λ np (t), u(t) -u np (t)) ≥ f np (t), u(t) -u np (t) + α 2 u(t) -u np (t) 2 ∀ p ∈ N. Using that F is sequentially weakly lower semicontinuous, ( 7), ( 13), the compact embeddings X ⊂ L 2 (Ξ), V ⊂ U and that for all t ∈ [0, T ] (λ np (t)) p is bounded in L 2 (Ξ) by R 0 , the previous relation implies lim sup p→∞ α 2 u(t) -u np (t) 2 ≤ F (u(t)) + lim sup p→∞ (-F (u np (t))) + lim p→∞ | λ np (t), l 0 (u(t) -u np (t)) L 2 (Ξ) | + lim p→∞ φ(λ np (t), u(t) -u np (t)) -lim p→∞ f np (t), u(t) -u np (t) ≤ F (u(t)) -lim inf p→∞ F (u np (t)) + lim p→∞ λ np (t) L 2 (Ξ) l 0 (u(t) -u np (t)) L 2 (Ξ) + lim p→∞ k 2 λ np (t) L 2 (Ξ) u(t) -u np (t) U -lim p→∞ f np (t), u(t) -u np (t) = F (u(t)) -lim inf p→∞ F (u np (t)) ≤ 0, which proves (53). By Theorem 2.1 with G = ( λnp ) p , X = L 2 (Ξ), Û = H ι-1/2 (Ξ), Ŷ = X , r = 2, 0 < ι < 1 2 , it follows that λnp → λ in C([0, T ]; X ), (57) so that by (41) we obtain (54). It remains to prove that (u, λ) is a solution of the problem Q. First, since λ np (t) ∈ Λ(l(u np (t))) for all t ∈ (0, T ), we have ω κ • l(u np ) ≤ ω λ np ≤ ω κ • l(u np ), (58) for every measurable subset ω ⊂ Ξ T and for all p ∈ N. Using (53), ( 8), the semi-continuity of κ and κ, the relations (1), ( 2), (47), which implies the convergence property ω λ np → ω λ, and passing to limits according to Fatou's lemma, we obtain ω κ • l(u) ≤ ω λ ≤ ω κ • l(u), (59) for every measurable subset ω ⊂ Ξ T , which implies λ(t) ∈ Λ(l(u(t))) for almost all t ∈ (0, T ). Second, integrating both sides in ( 21) over [0, T ] and passing to the limit, by the relations (53), ( 54), ( 45), (49), it follows that for all v ∈ L 2 (0, T ; V ) T 0 F (u(t)), v(t) -u(t) dt - T 0 λ(t), l 0 (v(t) -u(t)) L 2 (Ξ) dt + T 0 φ(λ(t), v(t))dt - T 0 φ(λ(t), u(t))dt ≥ T 0 f (t), v(t) -u(t) dt. 
By Lebesgue's theorem, it follows that (u, λ) is a solution of the variational inequality [START_REF] Simon | Compact sets in the space L p (0, T ; B)[END_REF]. Assume the small deformation hypothesis and that the inertial effects are negligible. We denote by u = u(x, t) the displacement field, by ε the infinitesimal strain tensor and by σ the stress tensor, with the components u = (u i ), ε = (ε ij ) and σ = (σ ij ), respectively. We use the classical decompositions u = u N n + u T , u N = u • n, σn = σ N n + σ T , σ N = (σn) • n, where n is the outward normal unit vector to Γ with the components n = (n i ). The usual summation convention will be used for i, j, k, l = 1, . . . , d. Consider the Hilbert space V and the closed convex sets L 2 -(Γ 3 ), Λ 1 (ζ) as follows: V = {v ∈ H 1 (Ω; R d ); v = 0 a.e. on Γ 1 }, L 2 -(Γ 3 ) := {δ ∈ L 2 (Γ 3 ); δ ≤ 0 a.e. in Γ 3 }, Λ 1 (ζ) = {η ∈ L 2 -(Γ 3 ); κ • ζ ≤ η ≤ κ • ζ a.e. in Γ 3 } ∀ζ ∈ L 2 (Γ 3 ). Assume that in Ω a body force ϕ 1 ∈ W 1,2 (0, T ; L 2 (Ω; R d )) is prescribed, on Γ 1 the displacement vector equals zero and on Γ 2 a traction ϕ 2 ∈ W 1,2 (0, T ; L 2 (Γ 2 ; R d )) is applied. On Γ 3 , the contact between the body and a support is possible with the initial gap denoted by g 0 and the gap corresponding to the solution u denoted by [u N ] := u N -g 0 . We assume that there exists g ∈ V such that g N = g 0 on Γ 3 . Since the displacements, their derivatives and the gap are assumed small, we obtain the following unilateral contact condition at time t : [u N ] ≤ 0 on Γ 3 . On the potential contact surface Γ 3 , the displacements and the stress vector will satisfy some contact conditions having the following form: κ([u N ]) ≤ σ N ≤ κ([u N ]). Assume that, for all γ, δ ∈ L 2 -(Γ 3 ) and all v, w ∈ V such that γ ∈ Λ 1 ([v N ]), δ ∈ Λ 1 ([w N ]), γ -δ, v N -w N L 2 (Γ 3 ) ≤ 0. (60) Different choices for κ, κ will give various contact and friction conditions as can be seen in the following examples. Example 1. (Friction conditions with controlled normal stress) Let M ≥ 0 be a constant and define κ(s) = κ M (s) = 0 if s < 0, -M if s ≥ 0, κ(s) = κ M (s) = 0 if s ≤ 0, -M if s > 0. The classical Signorini's conditions correspond formally to M = +∞. Example 2. (Normal compliance conditions) Various normal compliance conditions and friction laws can be obtained if one considers κ = κ = κ, where κ : R → R is some negative, decreasing, and bounded Lipschitz continuous function, so that σ N is given by the relation σ N = κ([u N ]). It is easily seen that these two examples verify the condition (60). Let F ≥ 0 be the coefficient of friction, assumed to be a Lipschitz continuous function on Γ, which ensures to belong to the set of the multipliers on H 1/2 (Γ) denoted by M, see, e.g. [START_REF] Andersson | Existence results for quasistatic contact problems with Coulomb friction[END_REF], [START_REF] Rocca | Numerical analysis of quasistatic unilateral contact problems with local friction[END_REF]. Therefore the mapping H 1/2 (Γ) v → Fv ∈ H 1/2 (Γ) is bounded with norm F M . In order to describe the frictional contact conditions on Γ 3 , we define ∀ l ∈ V , S l := {v ∈ V ; Ω σ(v) • ε(ψ)dx = l, ψ V ∀ ψ ∈ V such that ψ = 0 a.e. on Γ 3 }, L ∈ V , L, w V = ϕ 1 , w L 2 (Ω; R d ) + ϕ 2 , w L 2 (Γ 2 ; R d ) ∀ w ∈ V , ∀ v ∈ S L , σ N (v), w Γ = Ω σ(v) • ε( w)dx -L, w V ∀ w ∈ H 1/2 (Γ), where • , • Γ denotes the duality pairing on H -1/2 (Γ) × H 1/2 (Γ), w ∈ V satisfies wT = 0 a.e. on Γ 3 , wN = w a.e. on Γ 3 . 
It is easy to verify that for all v ∈ S L σ N (v) depends only on the values of w on Γ 3 and not on the choices of w having the above properties. A contact problem with Coulomb friction for a nonlinear Hencky material Assume that the elastic body satisfies the following nonlinear Hencky-Mises constitutive equation (see [START_REF] Nečas | Mathematical theory of elastic and elasto-plastic bodies. An introduction[END_REF], [START_REF] Zeidler | Nonlinear functional analysis and its applications[END_REF]): σ(u) = σ(u) = (k - 2 3 µ(γ(u)))(tr ε(u)) I + 2 µ(γ(u)) ε(u), where k is the constant bulk modulus, µ is a continuously differentiable function in [ 0, +∞) satisfying Let F 1 : V → R be defined by 0 < µ 0 ≤ µ(r) ≤ 3 2 k, 0 < µ 1 ≤ µ(r) + 2 ∂µ(r) ∂r r ≤ µ 2 , ∀ r ≥ 0, (61) F 1 (v) = 1 2 k Ω ϑ 2 (v)dx + 1 2 Ω γ(v) 0 µ(r)dr dx ∀ v ∈ V , (67) and J : L 2 -(Γ 3 ) × V → R be defined by J(γ, v) = - Γ 3 F γ |v T |ds ∀ γ ∈ L 2 -(Γ 3 ), ∀ v ∈ V . ( 68 ) One can verify, see, e.g. [START_REF] Nečas | Mathematical theory of elastic and elasto-plastic bodies. An introduction[END_REF], Ch. 8, that F 1 is differentiable on V and for all u, v ∈ V F 1 (u), v V = Ω [(k - 2 3 µ(γ(u))) ϑ(u) ϑ(v) + 2 µ(γ(u)) ε(u) • ε(v)]dx. ( 69 ) Let u 0 ∈ V , λ 0 ∈ Λ 1 ([u 0N ]) satisfy the following compatibility condition: F 1 (u 0 ), v -u 0 V -λ 0 , v N -u 0N L 2 (Γ 3 ) (70) +J(λ 0 , v) -J(λ 0 , u 0 ) ≥ L(0), vu 0 V ∀ v ∈ V . We have the following variational formulation of problem P c 1 . Problem P v 1 : Find u ∈ W 1,2 (0, T ; V ), λ ∈ W 1,2 (0, T ; H -1/2 (Γ)) such that u(0) = u 0 , λ(0) = λ 0 , λ(t) ∈ Λ 1 ([u N (t)]) for almost all t ∈ (0, T ), and F 1 (u), v -u V -λ, v N -uN L 2 (Γ 3 ) + J(λ, v) (71) -J(λ, u) ≥ L, vu V ∀ v ∈ V a.e. on (0, T ). The formal equivalence between the variational problem P v 1 and the classical problem (62)-(66) can be easily proved by using Green's formula. The Lagrange multiplier λ ∈ L 2 (Γ 3 ) satisfies the relation σ N = λ in H -1/2 (Γ) that is σ N (u), w Γ = λ, w L 2 (Γ 3 ) ∀ w ∈ H 1/2 (Γ). Taking Ξ = Γ 3 , Λ = Λ 1 , V = V , U = H ι (Ω; R d ), 1 > ι > 1 2 , X = H 1/2 (Γ), F = F 1 , φ = J, f = L, and l 0 (v) = v N , l(v) = [v N ] = v N -g 0 ∀v ∈ V , it results that the problem P v 1 is a particular case of problem Q. As it is straightforward to verify the assumptions (1 -5), (7 -15), and also [START_REF] Cocou | A class of dynamic contact problems with Coulomb friction in viscoelasticity[END_REF] if F M is sufficiently small, by Theorem 2.3 we obtain the following existence result. Proposition 3.1. Under the previous assumptions and if F M is sufficiently small there exists a solution to problem P v 1 . Definition 2 . 1 . 21 Let Y be a reflexive Banach space, D a weakly closed set in Y , and Φ : D → 2 Y \ {∅} be a multivalued function. Φ is called sequentially weakly upper semicontinuous if z p z, y p ∈ Φ(z p ) and y p y imply y ∈ Φ(z). Proposition 2 . 1 . 21 ([15]) Let Y be a reflexive Banach space, D a convex, closed and bounded set in Y , and Φ : D → 2 D \ {∅} a sequentially weakly upper semicontinuous multivalued function such that Φ(z) is convex for every z ∈ D. Then Φ has a fixed point. 3 Applications to two quasistatic contact problemsConsider an elastic body occupying the set Ω ⊂ R d , d = 2, 3, with Γ = Γ 1 ∪ Γ 2 ∪ Γ 3 , where Γ 1 , Γ 2 , Γ 3 are open, disjoint parts of Γ and meas(Γ 1 ) > 0. c 1 : 1 and, for all u, v ∈ V , γ(u) := γ(u, u), γ(u, v) = -2 3 ϑ(u) ϑ(v)+2 ε(u)•ε(v), ϑ(u) := tr ε(u) = div u.Consider the following quasistatic contact problem with Coulomb friction. 
Problem P Find u such that u(0) = u 0 and, for all t ∈ (0, T ), divσ(u) = -ϕ 1 in Ω, (62) σ(u) = σ(u) in Ω, (63) u = 0 on Γ 1 , σn = ϕ 2 on Γ 2 , (64) κ([u N ]) ≤ σ N ≤ κ([u N ]) on Γ 3 ,(65) |σ T | ≤ F |σ N | and (66) uT = 0 ⇒ σ T = -F|σ N | uT | uT | on Γ 3 . A contact problem with local friction for a linearly elastic body Let A denote the elasticity tensor, with the components A = (A ijkl ) satisfying the following classical symmetry and ellipticity conditions: Consider the following elastic contact problem with Coulomb friction. Problem P c 2 : Find u such that u(0) = u 0 , satisfying and ( 62), (64 -66) for all t ∈ (0, T ). Let us define the bilinear and symmetric mapping a : The form a is continuous on V × V and, since meas(Γ 1 ) > 0, by Korn's inequality is also ) satisfy the following compatibility condition: We have the following variational formulation of problem P c 2 . ) for almost all t ∈ (0, T ), and The Lagrange multiplier λ ∈ L 2 (Γ 3 ) satisfies again the relation Taking Ξ, Λ, V , U , X, φ, f , l 0 , l as in 3.1 and F (v) = 1 2 a(v, v) ∀v ∈ V , we see that the problem P v 2 is a particular case of problem Q so that by using again Theorem 2.3 one obtains the following existence result. Proposition 3.2. Under the previous assumptions and if F M is sufficiently small there exists a solution to problem P v 2 . Finally, we remark that viscoelastic or viscoplastic bulk behaviors can also be studied by using similar methods to those presented here.
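To make the incremental fixed-point structure used above more concrete, here is a minimal Python sketch of a single incremental step for a scalar toy problem with the normal compliance law of Example 2 (a single-valued instance of Λ) and no friction (φ = 0). All numerical values and function names are illustrative, not taken from the paper, and the fixed point on the multiplier is computed by naive successive substitution, which converges only when the compliance is mild compared with the stiffness; the existence argument above instead relies on the topological fixed-point result of Proposition 2.1.

```python
# Toy scalar analogue of one incremental step (phi = 0, i.e. no friction):
# V = R, F(v) = 0.5*k*v**2 so F'(u) = k*u, l0(v) = v, l(v) = v - g0, and the
# normal compliance law of Example 2, lambda = kappa(l(u)), with kappa negative,
# decreasing, bounded and Lipschitz.  All parameter values are illustrative.

k_stiff = 2.0    # illustrative stiffness
g0 = 0.1         # illustrative initial gap
f_load = 1.0     # illustrative load at the current time step

def kappa(s):
    # bounded, negative, decreasing normal compliance; the slope is kept small
    # so that the naive iteration below is a contraction
    return -min(1.0 * max(s, 0.0), 3.0)

def solve_u(lam):
    # With phi = 0 the scalar incremental inequality reduces to the equality
    # F'(u) - lam = f, i.e. k_stiff * u = f_load + lam.
    return (f_load + lam) / k_stiff

lam = 0.0
for it in range(200):
    u = solve_u(lam)                  # u_gamma for the current multiplier
    lam_new = kappa(u - g0)           # Phi_i(gamma): multiplier consistent with l(u)
    if abs(lam_new - lam) < 1e-12:
        break
    lam = lam_new

print(f"fixed point after {it + 1} iterations: u = {u:.6f}, lambda = {lam:.6f}")
```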
00177563
en
[ "phys.astr.co", "sdu.astr" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-00177563/file/dobrijevic2007-39.pdf
N Carrasco E Hébrard M Banaszkiewicz M Dobrijevic Pascal Pernot email: [email protected] Influence of neutral transport on ion chemistry uncertainties in Titan ionosphere Keywords: Atmosphere chemistry, Atmosphere composition, Atmosphere dynamics, Ionospheres, Titan Models of Titan ionospheric chemistry have shown that ion densities depend strongly on the neutral composition. The turbulent diffusion transport conditions, as modeled by eddy coefficients, can spectacularly affect the uncertainty on predicted neutral densities. In order to evaluate the error budget on ion densities predicted by photochemical models, we perform uncertainty propagation of neutral densities by Monte Carlo sampling and assess their sensitivity to two turbulent diffusion profiles, corresponding to the extreme profiles at high altitudes described in the literature. A strong sensitivity of the ion density uncertainties to transport is observed, generally more important than to ion-molecule reaction parameters themselves. This highlights the necessity to constrain eddy diffusion profiles for Titan ionosphere, which should progressively be done thanks to the present and future measurements of the orbiter Cassini. Introduction Discrepancies between the outputs of different models and available data are difficult to assess in the absence of quantified uncertainties. In particular, modelling the chemistry of planetary ionospheres involves numerous physical and chemical parameters, which values are known from laboratory measurements with experimental uncertainty factors. These uncertainty sources should be accounted for in the modelling, in order to quantify the uncertainties on the model outputs and more generally to evaluate the model predictivity [START_REF] Wakelam | Estimation and reduction of the uncertainties in chemical models: application to hot core chemistry[END_REF]Zádor et al. 2006;[START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF]). In a previous work [START_REF] Carrasco | Uncertainty analysis of bimolecular reactions in Titan ionosphere chemistry model[END_REF], we evaluated the uncertainties on a Titan ionospheric chemistry model (based on the work of [START_REF] Banaszkiewicz | A coupled model of Titan's atmosphere and ionosphere[END_REF], due to the rate constants and branching ratios of ion-molecule reactions. Neutral densities were considered as fixed inputs, with the neutral density profiles calculated by [START_REF] Lara | Vertical distribution of titan's atmospheric neutral constituents[END_REF]. In parallel, [START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF] studied the chemical kinetics uncertainties in a photochemical model for neutral species in Titan atmosphere. Considering that ion densities closely depend on the neutral atmosphere composition [START_REF] Keller | Model of Titan's ionosphere with detailed hydrocarbon ion chemistry[END_REF], one can expect a direct impact of neutral uncertainties on ion uncertainties. In order to evaluate this influence, we built a semi-coupled Titan ionospheric model for neutral and ion species, using the chemistry model described in [START_REF] Carrasco | Uncertainty analysis of bimolecular reactions in Titan ionosphere chemistry model[END_REF] with neutral density profiles and their uncertainties as calculated by [START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF]. 
Dynamics plays undoubtedly some role in the distribution of Titan's both neutral and ionic constituents but yet, the significance of eddy diffusion processes is not entirely known and still requires some attention. The eddy diffusion coefficient K(z) usually acts in photochemical modeling of planetary atmospheres as a free parameter that must be estimated to fit observations (see Fig. 1). In particular, the eventuality that the globally averaged distribution of Titan's constituents may be accurately and simultaneously described with a single eddy diffusion profile is still discussed A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT (Wilson and Atreya 2004). [START_REF] Hidayat | Millimeter and submillimeter heterodyne observations of Titan: Retrieval of the vertical profile of HCN and the 12C/13C solar ratio[END_REF] inferred a low homopause profile (840 km) from their millimeter observations of HCN vertical profile in much of the lower regions of the atmosphere; [START_REF] Strobel | Titan's upper atmosphere: structure and ultraviolet emissions[END_REF] inferred a higher homopause profile (1040 km) from their analysis of Voyager UVS solar occultation and airglow data; [START_REF] Toublanc | Photochemical modeling of Titan's atmosphere[END_REF] developed a very low homopause profile (680 km) from [START_REF] Toon | A physical model of Titan's aerosols[END_REF] profile and adapted it to fit [START_REF] Tanguy | Stratospheric profile of HCN on Titan from millimeter observations[END_REF] HCN distribution and Voyager UVS data for methane CH 4 . This diversity of profiles is in part due to the differences in the chemical scheme adopted by the authors. In fact, [START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF] showed that the eddy diffusion profile may not currently be constrained as tightly as expected. Uncertainties attached to the computed abundances can be indeed so important that modifying the eddy diffusion coefficient K(z) does not change significantly their agreement with the different abundances inferred from the available observations. It appears moreover that the uncertainty factors of computed abundances are very sensitive to the choice of the eddy diffusion profile adopted, especially to the choice of a high-or low-homopause profile. [Figure 1 about here.] In order to assess their effect on ion densities, we considered two turbulent diffusion profiles for the neutral species, corresponding to the extreme profiles at high altitudes described in the literature [START_REF] Strobel | Titan's upper atmosphere: structure and ultraviolet emissions[END_REF][START_REF] Toublanc | Photochemical modeling of Titan's atmosphere[END_REF]. We first evaluated the uncertainties on the ion densities for both neutral turbulent transport cases, with a fixed ion-molecule chemistry. Then, we calculated the contributions of both uncertainty sources (neutral densities and ion-molecule reaction parameters), and identified the main sources of uncertainty for all the major ions predicted by our model of Titan ionosphere. 
A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT 2 Methods 2.1 Ion and neutral chemistry semi-coupled models for Titan ionosphere Titan ionospheric chemistry model [START_REF] Banaszkiewicz | A coupled model of Titan's atmosphere and ionosphere[END_REF][START_REF] Carrasco | Uncertainty analysis of bimolecular reactions in Titan ionosphere chemistry model[END_REF] is semi-coupled with a photochemistry model of neutral species [START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF]. Neutral density profiles are calculated with their uncertainties by the neutral photochemistry model. These profiles, with their uncertainties, are taken as inputs of the ionospheric chemistry model. Furthermore the correlations between neutral densities are taken into account through their covariance matrix. Density profiles are built with a 5 km scale in the 800-1300 km altitude range. Uncertainty propagation Because of non-linearities in the model and large uncertainties on numerous parameters, chances to be outside the validity range of linear uncertainty propagation are important. To avoid this bias, we used a Monte Carlo sampling method, which requires the definition of a probability density function for input parameters. As their are not correlated, we design separately the probability density functions for the kinetic parameters and for the neutral density profiles. Uncertainties of kinetic parameters of ion-molecule reactions. The distributions are parametrized from the preferred values and the uncertainties reported in the review of [START_REF] Anicich | Ion-molecule chemistry in Titan's ionosphere[END_REF]. If an uncertainty value is not given for rate constants, the preferred value is considered as being inaccurate, with a relative uncertainty of 60% (highest uncertainty value reported in the review). The global rate k is depicted by a log-uniform distribution [START_REF] Carrasco | Uncertainty analysis of bimolecular reactions in Titan ionosphere chemistry model[END_REF]). As uncertainty is not quantified for branching ratios in the reference review, uncertainty intervals have been defined according to the statistical deviations reported A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT in [START_REF] Carrasco | Uncertainty analysis of bimolecular reactions in Titan ionosphere chemistry model[END_REF]: 10% for branching ratios larger than 0.5, 30% for branching ratios between 0.1 and 0.5 and 100% for the smaller values. For a few reactions, branching ratios are not reported. In such cases the pathways are considered as equiprobable with an uncertainty of 90%. Branching ratios were previously (Carrasco and Pernot 2007) modeled by Dirichlet distributions, which respect the sum rule for these parameters. In this work, we refine our elicitation of the branching ratios with Dirichlet uniform distributions (DIUD), which have the additional property not to favor any value within the given intervals (see the Appendix). The sample for probability density function of ion-molecule reaction parameters can be produced with the following procedure. In order to preserve the intrinsic correlation due to the sum rule, partial reaction rates for a reaction with n pathways are produced following three steps: 1. a global rate k is sampled from a log-uniform distribution; 2. the branching ratios, b i , are sampled from the DIUD method (see Appendix), 3. the partial rate constants, k i , are products of two random numbers (k i = kb i ) n 1 . 
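As an illustration of the three-step recipe above, the following Python sketch draws Monte Carlo samples of the partial rate constants for a hypothetical three-pathway reaction with no reported branching ratios, so that the pathways are treated as equiprobable and a flat Dirichlet distribution applies directly. The nominal rate, the uncertainty factor and the sample size are placeholders, not values from the review of Anicich; the mapping from the review's relative uncertainties to a multiplicative factor is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_global_rate(k_nominal, F, size):
    """Log-uniform sample of a global rate constant k with uncertainty factor F,
    i.e. uniform in log10(k) over [k_nominal/F, k_nominal*F]."""
    lo, hi = np.log10(k_nominal / F), np.log10(k_nominal * F)
    return 10.0 ** rng.uniform(lo, hi, size)

# Illustrative reaction with three pathways and no reported branching ratios.
k_nominal = 1.0e-9       # cm^3 s^-1, illustrative value only
F = 1.6                  # illustrative uncertainty factor (stand-in value)
n_runs, n_paths = 10_000, 3

k = sample_global_rate(k_nominal, F, n_runs)           # step 1: global rate
b = rng.dirichlet(np.ones(n_paths), size=n_runs)       # step 2: flat Dirichlet (sum rule holds)
k_partial = k[:, None] * b                             # step 3: k_i = k * b_i

print("partial-rate medians:", np.median(k_partial, axis=0))
print("sum-rule residual (should be ~0):", np.max(np.abs(k_partial.sum(axis=1) - k)))
```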
Uncertainties on neutral densities. The different chemical sources of uncertainties in photochemical models of Titan's atmosphere and their associated probability density functions were recently reviewed and evaluated at representative temperatures through a comprehensive cross-examination of extensive reaction rates database [START_REF] Hébrard | Photochemical kinetics uncertainties in modeling Titan's atmosphere: a review[END_REF] and implemented through a Monte-Carlo procedure into a 1D photochemical model of Titan's neutral atmosphere [START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF]. A sample of N run = 500 density profiles for each neutral species is generated at all altitudes up to 1300 km and stored for analysis. The data are in a three dimensional table {c ijk ; i = 1, N sp ; j = 1, N alt ; k = 1, N run } (1) where N sp is the number of species, N alt is the size of the altitude grid. A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT These profiles could be used directly in the 0D ionospheric model. However we preferred to build an intermediate probability density function (PDF) because it provides insight into the structure of neutral densities uncertainties. In addition this procedure enables a much larger number of runs to be used for uncertainty propagation and analysis in ionospheric chemistry. Typically, we found that cumulative density functions for ion densities were satisfyingly converged for 10 4 samples, a number presently out of reach of the 1D model. To design the PDF for neutral densities, we first analyze their correlation structure. We consider two sources of correlation : • ρ alt ij,il = c ijk , c ilk k : spatial correlations for a given species, resulting mainly from continuity laws of chemistry-transport processes, where ... k denotes the correlation coefficient calculated over the sample; • ρ sp il,jl = c ilk , c jlk k : inter-species correlations, resulting from the chemical processes and mass conservation law. To account for non-linearities, Rank Correlation Coefficients (RCC) have been used. They convert nonlinear but monotonic relationship into a linear relationship by replacing the values of the sampled inputs/outputs by their respective ranks [START_REF] Hamby | A review of techniques for parameter sensitivity analysis of environmental models[END_REF][START_REF] Helton | Survey of sampling-based methods for uncertainty and sensitivity analysis[END_REF]. The spatial correlation of densities for a species is linked to the deformations caused to the density profile by chemistry fluctuations. If the density variations are similar at all altitudes (all curves in a sample remain parallel), the RCC should be equal to 1. A negative correlation would indicate opposite variations between two altitudes. We calculated the RCC for altitudes between 800 and 1300 km in the case of [START_REF] Strobel | Titan's upper atmosphere: structure and ultraviolet emissions[END_REF] eddy diffusion profile. As expected, spatial correlation is high: for all species, ρ alt ij,il is above 0.4, and for most species, the RCC distribution over altitudes is strongly peaked at the maximal value. Globally, when a species undergoes a density increase/decrease at the base of the ionosphere, there is a similar A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT increase/decrease in all the upper column. 
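The rank correlation coefficients defined above can be computed directly from the Monte Carlo table {c_ijk} with standard tools. The short Python sketch below does so with scipy's Spearman coefficient on synthetic data of the same shape; the array sizes and values are placeholders, not model output.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Illustrative stand-in for the Monte Carlo table c[i, j, k]:
# N_sp species x N_alt altitude levels x N_run runs (synthetic values).
N_sp, N_alt, N_run = 5, 101, 500
c = np.exp(rng.normal(size=(N_sp, N_alt, N_run)))

def spatial_rcc(c, i, j, l):
    """Rank correlation of species i between altitude levels j and l, over the runs."""
    rho, _ = spearmanr(c[i, j, :], c[i, l, :])
    return rho

def interspecies_rcc(c, i, j, l):
    """Rank correlation between species i and j at altitude level l, over the runs."""
    rho, _ = spearmanr(c[i, l, :], c[j, l, :])
    return rho

print("spatial RCC (species 0, levels 0 vs 80):", spatial_rcc(c, 0, 0, 80))
print("inter-species RCC (species 0 vs 1, level 80):", interspecies_rcc(c, 0, 1, 80))
```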
As we are performing 0D calculations for ionospheric chemistry, we can assume that it is safe to consider that all altitudes are maximally correlated, ρ alt ij,il =1 for all altitude-pairs and species. To evaluate the correlation between neutral densities, we chose an altitude representative for ionospheric chemistry, i.e. 1200 km, and RCC's for all pairs of species have been evaluated at that altitude. We checked that the RCC's matrix was globally constant in the ionospheric altitude range. It appears that some neutral densities are significantly correlated. A good approximation of the correlation matrix was obtained by using the linear correlations between the log-densities log 10 c il . This representative correlation matrix was used at every altitudes. The probability density function is thus finally built as a multivariate normal density, parametrized by the average values log 10 c il and uncertainty factors F il , such as log 10 c il = log 10 c il ± log 10 F il , and the correlation matrix with elements ρ sp il,jl . The sample for the full neutral densities probability density function, assuming unity correlation between altitudes and altitude-independent inter-species correlations, can be produced with the following procedure: • Initializations: estimate average values and uncertainty factors of log-densities for all species, at all altitudes; evaluate inter-species linear correlation matrix of log-densities C at a representative altitude and calculate its Cholesky decomposition [START_REF] Gelman | Bayesian Data Analysis[END_REF]. • Monte Carlo loop: generates N sp standard normal deviates {u i ∼ N(0, 1); i = 1, N sp }; combines these into N sp correlated numbers ε i by the Cholesky procedure; loop over altitudes (j); c ij = log 10 c ij + ε i × log 10 F ij ; i = 1, N sp . 3 Results and Discussion Correlation of neutral densities [Table 1 mass conservation. Full exploration of the other correlation is not relevant to the present study, but will be detailed in a future article. A consequence to be kept in mind for data fitting is that, within the framework of a consistent photochemical model, densities of species should not be adjusted independently of each other. For this study, we conclude that those strong correlations cannot be a priori neglected in the uncertainty propagation to ionospheric chemistry. In order to check this point, we generated two samples in the Strobel case, one correlated and one uncorrelated (setting the correlation matrix to identity: C = I). We compared the densities and their uncertainties obtained with the two samples. The effect is illustrated on the 40 most abundant ions (Fig. 2). Except for very few ions, the impact is negligible. Predicted ion densities and uncertainties are practically insensitive to the correlation of neutral densities. [Figure 2 about here.] A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT A significant effect can however be observed on the correlation between ion densities, as can be seen from the compared cumulative density functions of correlation coefficients of all ions pairs on Fig. 3. For uncorrelated neutral densities, the RCC's are globally weak, massively located, with 90% probability, between -0.1 and 0.6, whereas for the correlated case the corresponding probability interval spreads between -0.3 and 0.8. The proportion of significant correlations increases thus notably when neutrals correlation is taken into account. We observe therefore a correlation transfer from neutrals to ions. 
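A minimal Python sketch of the correlated-sampling procedure described above (unit correlation across altitudes, a single inter-species correlation matrix, Cholesky factor applied to standard normal deviates) is given below. The matrix entries, uncertainty factors and array sizes are placeholders rather than values from the photochemical model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder inputs (illustrative sizes and values, not the model's actual tables):
N_sp, N_alt = 3, 101
log10_c_mean = rng.uniform(2.0, 10.0, size=(N_sp, N_alt))   # mean log10 densities
log10_F = rng.uniform(0.05, 0.7, size=(N_sp, N_alt))        # log10 of uncertainty factors
C = np.array([[1.0, 0.8, -0.5],
              [0.8, 1.0, -0.3],
              [-0.5, -0.3, 1.0]])                           # inter-species correlation (illustrative)

L = np.linalg.cholesky(C)

def sample_log_densities():
    u = rng.standard_normal(N_sp)        # independent standard normal deviates
    eps = L @ u                          # correlated deviates, Corr(eps) = C
    # the same eps at every altitude enforces unit correlation between altitudes
    return log10_c_mean + eps[:, None] * log10_F

sample = sample_log_densities()
print("one correlated sample of log10 densities, shape:", sample.shape)
```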
This might be relevant for the computation of observables combining ion densities. As observed before, this correlation restricts considerably the degrees of freedom when, for instance, adjusting the ion densities to match an observable (as an ion mass spectrum). [Figure 3 about here.] In conclusion to this section, we note that correlation between neutral densities has certainly to be taken into account for data fitting or sensitivity analysis, but that it can safely be neglected for the sole purpose of uncertainty propagation. Effect of the neutral transport processes on the ion concentrations As shown in [START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF], the turbulent macroscopic transport of the neutral species in Titan ionosphere is not yet tightly constrained by the existing observations and might vary between two extreme cases described in [START_REF] Strobel | Titan's upper atmosphere: structure and ultraviolet emissions[END_REF] and [START_REF] Toublanc | Photochemical modeling of Titan's atmosphere[END_REF]. For simplicity, these two cases will be called the "Strobel were used as inputs of our simulations, allowing us to evaluate the impact of the poorly known neutral transport processes on the ionic species. The density profile of the major ion , HCNH + in Titan ionosphere, is represented on Fig. 4 for both neutral transport cases. The nominal profiles are significantly different for altitudes lower than 950 km: HCNH + density is up to 10 times larger at 800 km in the Strobel case. This confirms the sensitivity of the ionospheric chemistry model to the neutral density profiles previously noticed by [START_REF] Keller | Model of Titan's ionosphere with detailed hydrocarbon ion chemistry[END_REF]. Moreover, the uncertainties on HCNH + density differ significantly from one case to the other, with a larger uncertainty in the Toublanc case, at all altitudes. [Figure 4 about here.] The altitude 1200 km is of specific interest, being the average altitude of the first Cassini's flyby T5 providing data on ion densities. We compared the effect of both neutral diffusion cases on the ten major ions calculated by our model at this altitude: HCNH + , C 2 H + 5 , CH + 5 , N + 2 , C 3 H + 5 , CH + 3 , N 2 H + , N( 3 P) + , C 2 H + 4 and C 2 H + 3 . The densities with their uncertainties are reported on Fig. 5. The ions are ranked by decreasing density. The results are highly dependent on the eddy diffusion profile: quite precise ion densities in the Strobel case, and relative uncertainties of one or two orders of magnitude for all ions in the Toublanc case. [Figure 5 about here.] One can conclude that the eddy coefficient chosen to describe the transport of the neutral species influences significantly the nominal density profiles of the ion species but also their uncertainty. The chemistry of the ions in Titan ionosphere cannot be decorrelated from transport considerations. There is still to understand why the uncertainties in the Toublanc case are much larger than in the Strobel case. A reason lies probably in the uncertainty on the major neutral reactant, methane (CH 4 ). Indeed the uncertainty on the methane A C C E P T E D M A N U S C R I P T ACCEPTED MANUSCRIPT profiles at the ionospheric altitudes is substantially larger in the Toublanc case (see Fig. 6). The uncertainty factor F CH 4 at 1200 km is equal to about 1.1 in the Strobel case and about 5.7 in the Toublanc case. 
High-homopause profiles seem to restrain to some extent the propagation of chemical uncertainties in the upper atmosphere contrary to low-homopause profiles. [Figure 6 about here.] The reason for such a discrepancy on methane uncertainty is not yet fully established. However, the vertical transport is more important in the case of a high homopause. This means that the vertical renewal of the species through transport, compensates more efficiently for the chemical losses. The transport can thus be considered as an attenuation factor of the chemical uncertainties on the neutral densities. Comparison of two uncertainty sources: neutral densities and ion-molecule reactions parameters We compared the respective contributions of the neutral densities and of the parameters of ion-molecule reactions to the ion density uncertainty by first performing uncertainty propagation on both uncertainty sources separately. As earlier, both Strobel and Toublanc cases were considered to encompass extreme transport processes in Titan ionosphere. The ten major ion densities are reported in Fig. 7, with their uncertainties for both Strobel and Toublanc cases. Each case is compared with the uncertainty contribution of the ion-molecule reaction parameters. In the Toublanc case, the uncertainty due to the ion-molecule reaction parameters is negligible in comparison to the uncertainty due to the neutral densities. This implies that in this case, the uncertainty on the ion densities is mainly controlled by the uncertainty on the neutral atmosphere rather than by the ion reactivity itself. In the Strobel case, [Figure 7 about here.] A C C E P T E D M A N U S C The Strobel case corresponds to a high homopause configuration. One can notice that the hypothesis of a high homopause seems to be consolidated by the recent INMS data (Yelle et al., 2006). Conclusion We presented the first evidence of the influence of the modeling of turbulent transport of neutral species on the uncertainty of ion chemistry in Titan ionosphere. We found a strong sensitivity of ion chemistry to the description of turbulent neutral transport. The uncertainties on the ion densities were much higher with a low homopause hypothesis than with a high homopause level. This effect can be explained by a competition between vertical transport and chemistry: an efficient vertical transport attenuates the chemical uncertainties. In the case of a low homopause, the uncertainty of the ion densities due to the ion chemistry was even We showed that neutral species are strongly correlated by the photochemical model and transfer an important correlation between ion densities. This provides chemistry-based constraints that might be useful when trying to fit models to measured Mass Spectra, for instance by tuning neutral densities. The present simulations provide a basis for a sensitivity analysis, from which to identify the key reactions and species strongly responsible for outputs uncertainty. In a forthcoming paper, we will present results along this line, pertaining to both ion-molecule and neutral chemistry. Appendix: Dirichlet Uniform Distribution (DIUD) for branching ratios of ion-molecule reactions An elicitation scheme is based on considerations about the nature of the uncertainties to be represented. 
Branching ratios are often specified by intervals [START_REF] Jenkinson | The elicitation of probabilities -A review of the statistical literature[END_REF][START_REF] Bates | Bayesian uncertainty assessment in multicompartment deterministic simulation models for environmental risk assessment[END_REF], and it is assumed that there is no reason to favor any value within a given interval. This case seems indeed to be the one favored by experts in the measurement of branching ratios. They consider that they report intervals accounting for systematic errors with enough latitude to encompass all acceptable values [START_REF] Carrasco | Uncertainty analysis of bimolecular reactions in Titan ionosphere chemistry model[END_REF]). The distribution is defined uniformly over the (n -1)-simplex (b 1 , . . . , b n ) ∼ Dirichlet(1, . . . , 1) (2) A C C E P T E D M A N U S C R I P T A C C E P T E D M A N U S C R I P T A C C E P T E D M A N U S C * generates samples of correlated log-densities with altitude dependent uncertainties log 10 case" and the "Toublanc case". The eddy coefficient profiles are given on Fig.1. The Strobel case corresponds to a high homopause level (1040 km) whereas the Toublanc case corresponds to a low homopause level (680 km). The two corresponding neutral density profiles obtained by[START_REF] Hébrard | Consequences of chemical kinetics uncertainties in modeling Titan's atmosphere[END_REF], for diurnally averaged chemistry, found negligible in comparison to the uncertainty due to the neutral densities. This highlights the necessity to constrain eddy diffusion profiles for Titan ionosphere, which should progressively be done thanks to the present and future measurements of the Cassini orbiter(Yelle et al. 2006).The present study focused on the sensitivity of a Titan ionospheric model to two particular sources of uncertainty: the description of the neutral density profiles, and the ion-molecule chemistry parameters. Additional sources of uncertainty are still to be studied, such as uncertainties on the photo-dissociation or recombination would allow a complete overview of the needs and limits of the actual model, in order to improve it. Figure 1 :Fig. 2 -Figure 3 :Figure 5 : 1235 Figure 1: Eddy diffusion profiles -Hidayat et al. (1997) profile (solid line), Strobel et al. (1992) profile (dashed line) and[START_REF] Toublanc | Photochemical modeling of Titan's atmosphere[END_REF] profile (dot dashed line). The methane molecular diffusion coefficient profile is also included. Figure 7 : 7 Figure 7: Densities of the ten major ions at altitude 1200 km for the two eddy coefficients : (a) Strobel case, (b) Toublanc case. Boxplots depict 50% and 90% confidence intervals. For each ion density, the upper boxplot corresponds to uncertainty propagation of the neutral density, whereas the lower boxplot depicts uncertainty propagation of ion chemistry parameters. about here.] We evaluated the correlation between the neutral density profiles in the Strobel case. Representative values for some important species are reported in Table 1. Strong correlations are observed between some neutrals, e.g. H 2 /CH 4 , N 2 /CH 4 or C 2 H 2 /C 2 H 4 , and result from the chemical network. A salient feature is the negative correlation of CH 4 with all other species in the table (increasing the density of CH 4 causes a decrease in the density of those species). The strong negative correlation of CH 4 with N 2 and H 2 has probably not a chemical origin, but a physical one, i.e. 
R I P T ACCEPTED MANUSCRIPT similar amplitudes for both uncertainty sources are observed. The conclusions are therefore depending on the ions: a prevalence of the neutral density uncertainty is observed for C 3 H + 5 and C 2 H + 3 , whereas a prevalence of the ion-molecule reaction parameters is observed for C 2 H + 4 . Similar contribution of both uncertainty sources affect HCNH + , C 2 H + 5 , CH + 5 , N + 2 , CH + 3 , N 2 H + , and N( 3 P) + . Table 1 : 1 Rank correlation coefficients between selected neutral densities at 1200 km. Coefficients with an absolute value above 0.5 are in bold face. R I P T Acknowledgments This work was partly supported by EuroPlaNet through travel grants to M.D. and M.B. We also acknowledge the support received from Centre National de Recherche Scientifique (CNRS) and from Centre National d'Etude Spatiale (CNES) through postdoctoral positions for N.C. We gratefully thank O. Dutuit, R. Thissen and C. Alcaraz for fruitful discussions. (3) The definition of the limits of the intervals depends on the available information. If one gets a set as (b i , b i ) n i=1 , one has simply b i,min = b i -Δb i and b i,max = b i + Δb i . For the sake of convenience, we name this truncated uniform Dirichlet distribution DIUD in the following. It has 2n parameters Samples corresponding to the same branching ratios (0.6, 0.3, 0.1) for 10% and 90% relative uncertainty are displayed in Fig. 8. Note that, although the distribution is uniform over the restricted (n -1)-simplex, this is not the case for the marginal distributions. The most thorough method to produce samples from this distribution is to draw samples from the uniform Dirichlet distribution (generated by the Gamma algorithm, [START_REF] Gelman | Bayesian Data Analysis[END_REF] and to reject those lying outside the prescribed intervals. However for fairly accurate branching ratios, this method can spill a lot of random draws. [Figure 8 about here.] List of Figures
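The rejection approach mentioned just above can be sketched in a few lines of Python: draws from the flat Dirichlet are kept only when all components fall inside the prescribed intervals. The branching ratios (0.6, 0.3, 0.1) and the 10% and 90% relative uncertainties follow the case of Fig. 8; the batch size and draw budget are arbitrary choices. The printed acceptance rates illustrate why the method can spill many draws when the intervals are tight.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_diud(b_nominal, rel_unc, n_samples, batch=10_000, max_draws=2_000_000):
    """Interval-truncated uniform Dirichlet ('DIUD') by rejection: flat Dirichlet
    draws are kept only if every component stays in [b_i - db_i, b_i + db_i]."""
    b = np.asarray(b_nominal, dtype=float)
    lo, hi = b - rel_unc * b, b + rel_unc * b
    kept, drawn = [], 0
    while len(kept) < n_samples and drawn < max_draws:
        draws = rng.dirichlet(np.ones(b.size), size=batch)
        drawn += batch
        ok = np.all((draws >= lo) & (draws <= hi), axis=1)
        kept.extend(draws[ok])
    return np.array(kept[:n_samples]), len(kept) / drawn

for rel_unc in (0.9, 0.1):
    sample, acceptance = sample_diud([0.6, 0.3, 0.1], rel_unc, 1000)
    print(f"relative uncertainty {rel_unc}: acceptance rate ~ {acceptance:.4f}, "
          f"marginal means {np.round(sample.mean(axis=0), 3)}")
```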
00177566
en
[ "spi.auto" ]
2024/03/05 22:32:18
2007
https://hal.science/hal-00177566/file/ICEEDT07.pdf
Samir Ladaci email: [email protected] Emmanuel Moulay email: [email protected] L p -stability analysis of a class of nonlinear fractional differential equations This paper investigates the L p -stability properties of fractional nonlinear differential equations. Systems defined on a finite time interval are considered. The principal contributions are summarized in a theorem which give sufficient conditions for bounded stability of fractional order systems. We show that the proposed results can not be extended to the case of systems defined on an infinite time interval. I. INTRODUCTION The fractional calculus and fractional order differential equations attracted a great attention these last decades (see [START_REF] Miller | An Introduction to the Fractional Calculus and Fractional Differential Equations[END_REF], [START_REF] Podlubny | Fractional Differential Equations[END_REF]). One of the most important reasons for this interest is their ability to model many natural systems and their seducing properties like robustness and dynamical behavior. Fractional order systems have found many applications in various domains such as heat transfer, viscoelasticity, electrical circuit, electro-chemistry, dynamics, economics, polymer physics and control. The study of stability for this kind of systems focuses a great interest in the research community. We can cite in this domain the works of Matignon [START_REF] Matignon | Stability result on fractional differential equations with applications to control processing[END_REF] and Bonnet and Partington [START_REF] Bonnet | Coprime factorizations and stability of fractional differential systems[END_REF] for the stability of linear fractional systems, those of Khusainov [START_REF] Khusainov | Stability Analysis of a Linear-Fractional Delay System[END_REF], Bonnet and Partington [START_REF] Bonnet | Analysis of fractional delay systems of retarded and neutral type[END_REF], Chen and Moore [START_REF] Chen | Analytical Stability Bound for a Class of Delayed Fractional-Order Dynamic Systems[END_REF] and Deng et al. [START_REF] Deng | Stability analysis of linear fractional differential system with multiple time delays[END_REF] for fractional systems with time delay and Ladaci et al. [START_REF] Ladaci | Fractional Order Adaptive High-Gain Controllers for a Class of Linear Systems[END_REF] for fractional adaptive control systems. Ahn et al. have proposed robust stability test methods for fractional order systems [START_REF] Ahn | Robust stability test of a class of linear time-invariant interval fractional-order system using Lyapunov inequality[END_REF], [START_REF] Chen | Robust stability check of fractional order linear time invariant systems with interval uncertainties[END_REF]. Recently Lazarević [START_REF] Lazarević | Finite time stability analysis of PD α fractional control of robotic time-delay systems[END_REF] has studied the finite time stability of a fractional order controller for robotic time delay systems. In this paper we are concerned by the stability analysis of fractional order systems represented by the following nonlinear differential equation: D α x(t) = h (t, x(t)) , x ∈ R n , t ∈ R + (1) where 0 < α < 1, h ∈ C (R + × R n , R n+ ) is a continuous positive function, with the initial condition D α-1 x(t 0 ) = x 0 . (2) In the following we will use the notation h x (t) = h(t, x(t)). This paper is organized as follows. In Section II, we present some useful theoretical background. 
In section III, the main result on L p -stability of nonlinear fractional order systems defined on a finite interval is given with an illustrative example. Section IV concludes the paper. II. THEORETICAL BACKGROUND The mathematical definition of fractional derivatives and integrals has been the subject of several different approaches [START_REF] Oldham | The Fractional Calculus[END_REF]. In this paper we consider the following Riemann-Liouville definition [START_REF] Miller | An Introduction to the Fractional Calculus and Fractional Differential Equations[END_REF], Definition 1 (Fractional integral): Let ν ∈ C such that Re(ν) > 0 and let g be piecewise continuous on (0, ∞) and integrable on any finite subinterval of [0, ∞). Then for 0 ≤ t 0 < t we call: t0 D -ν t g(t) = 1 Γ(α) t t0 (t -τ ) ν-1 g(τ )d(τ ) the Riemann-Liouville fractional integral of g of order ν where Γ(x) = ∞ 0 y x-1 e -y dy is the Gamma function. For simplicity we will note D µ g(t) for t0 D µ t g(t). The Riemann-Liouville definition of fractional order derivative of g is now recalled. Definition 2 (Fractional derivative): Let g be a continuous function and let µ > 0. Let m be the smallest integer that exceeds µ. The fractional derivative of g of order µ is defined as, D µ g(t) = D m D -ν g(t) , µ > 0, (if it exists) where ν = m -µ > 0. Lemma 3: The solution of System (1)-( 2) is given by the vector equality x(t) = x 0 Γ(α) (t -t 0 ) α-1 + 1 Γ(α) t t0 (t -τ ) α-1 h x (τ )d(τ ). For the proof see for instance [START_REF] Diethelm | Analysis of Fractional Differential Equations[END_REF], [START_REF] Hadid | Lyapunov stability of differential equations of non-integer order[END_REF]. With no loss of generality we can take x 0 = 0 and t 0 = 0. This reduces our solution to, x(t) = 1 Γ(α) t 0 (t -τ ) α-1 h x (τ )d(τ ) (3) Let us recall the definition of L p -stability. Definition 4: Let 1 ≤ p ≤ ∞ and Ω ⊂ R + , the system (1) is L p (Ω)-stable if the solution x(t) T = (x 1 , . . . , x n ) defined by Equation (3) belongs to L p (Ω). We now introduce the convolution product on C([t 0 , +∞), R) with t 0 ≥ 0. Definition 5: For all functions f, g ∈ C(R + , R) we define the operator f g as follows: f g(t) = t 0 f (t -τ )g(τ )dτ, t ≥ 0. ( 4 ) Lemma 6: The product is a commutative internal composition rule on C(R + , R). Proof: By using the change of variables τ = tu, we obtain f g(t) = t 1 0 f (t(1 -u))g(tu)du. Due to the theorem about continuity of an integral depending on a parameter, we deduce that t → f g(t) belongs to C(R + , R). The commutativity property of the operator can be easily proven by using the change of variables u = tτ in (4). In the next we show interesting properties of this product operator. Lemma 7: Let f, g ∈ C(R + , R + ) ∩ L 1 ([0, T ]) with T > 0, the following properties hold: (i) T 0 f g(t)dt = T 0 g(τ ) T -τ t0 f (t)dt dτ, (ii) T 2 0 f (t)dt T 2 0 g(τ )dτ ≤ T 0 f g(t)dt ≤ T 0 f (t)dt T 0 g(τ )dτ , (iii) For any p ≥ 1 we have: T 2 0 g(τ ) p dτ T 2 0 f (t)dt p ≤ T 0 (f g(t)) p dt ≤ T 0 g(τ ) p dτ T 0 f (t)dt p , (iv) Moreover, if f, g ∈ L 1 (R + ), the integral of f g converges and we have: +∞ 0 f g(t)dt = +∞ 0 f (t)dt +∞ 0 g(τ )dτ , (v) Moreover, if f ∈ L 1 (R + ) and g ∈ L p (R + ), we have for any p ≥ 1: +∞ 0 (f g(t)) p dt ≤ +∞ 0 f (t)dt p +∞ 0 g(τ ) p dτ . 
Proof: (i)-By using the theorem of Fubini for positive functions, we have: T 0 f g(t)dt = T 0 t 0 f (t -τ )g(τ )dτ dt = T 0 T τ f (t -τ )g(τ )dt dτ Then by using the change of variables u = tτ , we obtain: T 0 f g(t)dt = T 0 g(τ ) T τ f (t -τ )dt dτ = T 0 g(τ ) T -τ 0 f (u)du dτ. (5) (ii)-Since f is positive, we have T -τ 0 f (t)dt ≤ T 0 f (t)dt and the right inequality is immediate from [START_REF] Chen | Analytical Stability Bound for a Class of Delayed Fractional-Order Dynamic Systems[END_REF]. For the left inequality we remark that T 0 g(τ ) T -τ 0 f (t)dt dτ ≥ T 2 0 g(τ ) T -τ t0 f (t)dt dτ Now if 0 ≤ τ ≤ T 2 , then T -τ ≥ T 2 and, T 0 g(τ ) T -τ 0 f (t)dt dτ ≥ T 2 0 g(τ ) T 2 0 f (t)dt dτ ≥ T 2 0 f (t)dt T 2 0 g(τ )dτ. (iii)-From (i) we have, T 0 f g(t)dt = T 0 g(τ ) T -τ 0 f (t)dt dτ, then for the right inequality, T 0 (f g(t)) p dt = T 0 g(τ ) T -τ 0 f (t)dt p dτ ≤ T 0 g(τ ) p T 0 f (t)dt p dτ ≤ T 0 g(τ ) p dτ T 0 f (t)dt p For the left inequality we use the same proof as in (ii). Since 0 ≤ τ ≤ T 2 , we have Tτ ≥ T 2 and: T 0 g(τ ) T -τ 0 f (t)dt p dτ ≥ T 2 0 g(τ ) p dτ T 2 0 f (t)dt p . (iv)-If T tends to infinity, then both of the right and the left sides of the inequality (5) converge to the same limit, that is: +∞ 0 f (t)dt +∞ 0 g(τ )dτ . (v)-By using the preceding reasoning for the double inequality (5) we get, +∞ 0 (f g(t)) p dt ≤ +∞ 0 f (t)dt p +∞ 0 (g(τ )) p dτ . III. MAIN RESULTS The solution (3) of ( 1) can be rewritten using the product operator defined in (4) as follows: x(t) = (K α h x ) (t) (6) where K α with 0 < α < 1 is the so called convolution kernel defined by K α (u) = u α-1 Γ(α) (7) Lemma 8: Consider the convolution kernel function K α defined in [START_REF] Deng | Stability analysis of linear fractional differential system with multiple time delays[END_REF], let > 0 then K α ∈ L p ([0, ]) if and only if p-1 p < α < 1 and p ≥ 1. Proof: We have 0 K α (u) p du = 0 u α-1 Γ(α) p du (8) = 0 u (α-1)p Γ(α) p du = u (α-1)p+1 ((α -1)p + 1)Γ(α) p 0 The generalized Riemann integral (8) is convergent if and only p ≥ 1 and (α -1)p + 1 > 0 that is p-1 p < α < 1. Then we have the main result on the L p -stability of the solution of the system (1)-(2) defined on a finite time interval. Theorem 9: Let p ≥ 1 and consider the system defined by the fractional order differential equation ( 1)-( 2) where the time t ∈ [0, t f ] then the system (1)-( 2) is L p ([0, t f ])-stable if and only if p-1 p < α < 1 and h x ∈ L 1 ([0, t f ]). Proof: As the operator is commutative, we have: t f 0 x(t) p = t f 0 (K α h x ) p 1/p dt = t f 0 (h x K α ) p 1/p dt and again from Lemma 7-(iii) and Equality (6) we get: t f 0 x(t) p ≤ t f 0 h x (τ )dτ t f 0 K α (t) p dt 1/p . (9) From Lemma 8, we have that t f 0 K α (t) p dt < +∞ if and only if p-1 p < α < 1. If in addition h x ∈ L 1 ([0, t f ]), Equation (9) implies that t f 0 x(t) p ≤ t (α-1)p+1 p f ((α -1)p + 1) 1 p Γ(α) t f 0 h x (τ )dτ < +∞. Example 10: Let us consider the system D 3 4 x(t) = B(x) (t + 1) 3 , 0 ≤ t ≤ 1 where B(x) is a positive real function bounded by a constant c > 0 as for instance the function x → |sin(x)|. Then, by using Theorem 9 with p = 2 and α = 3 4 , we deduce that 1 0 x(t) 2 ≤ 3 √ 2 c 8 Γ 3 4 . Remark 11: The generalization to the infinite case where t ∈ R + is not possible because the kernel function u → K α (u) does not belong to L 1 (R + ). Thus we can not use points (iv) and (v) of lemma 7. Even if the system is defined since t 0 > 0, the generalization to the infinite case where t ∈ [t 0 , +∞) is not possible. 
Indeed the solution of System (1)-(2) given by x(t) = t t0 K α (t -τ ) h x (τ )d(τ ), t ≥ t 0 > 0 (10) can not be defined by using a convolution product which is commutative. So, Theorem 9 can not be extended to the case where t ∈ [t 0 , +∞) with t 0 > 0. A commutative convolution product with initial condition t 0 > 0 which can be used would be f g(t) = t t0 f (t + t 0τ )g(τ )dτ, t ≥ 0. However, our kernel function K α (tτ ) as it appears in the definition of the solution of System (1)-( 2) given by Equation [START_REF] Khusainov | Stability Analysis of a Linear-Fractional Delay System[END_REF] does not depend on the initial time t 0 . IV. CONCLUSION In this paper the L p -stability properties of fractional nonlinear differential equations are considered. Sufficient conditions for L p -stability of the fractional order system are presented in the case of finite time window, with analytical proofs based on the properties of a special convolution product. This work opens a new method for the L p -stability analysis of a class of nonlinear fractional order systems.
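As a rough numerical companion to Example 10 above, the following Python sketch integrates the equivalent Volterra form (3) with an explicit product-rectangle rule and compares the resulting L²-norm with the bound quoted in Example 10. Since B(x) = |sin x| together with the zero initial condition gives the identically zero solution, the sketch uses B(x) = |cos x| instead, which is also positive and bounded by c = 1 so that the same bound applies; the grid size and the quadrature rule are our own choices, not part of the paper.

```python
import numpy as np
from math import gamma

alpha, c = 0.75, 1.0
h = lambda t, x: np.abs(np.cos(x)) / (t + 1.0) ** 3   # B(x) = |cos x| <= c = 1 (positive)

N = 2000
t = np.linspace(0.0, 1.0, N + 1)
x = np.zeros(N + 1)                                    # zero initial condition, t0 = 0

# Explicit product-rectangle rule for
# x(t_n) = (1/Gamma(alpha)) * int_0^{t_n} (t_n - s)**(alpha - 1) * h(s, x(s)) ds
for n in range(1, N + 1):
    j = np.arange(n)
    w = ((t[n] - t[j]) ** alpha - (t[n] - t[j + 1]) ** alpha) / (alpha * gamma(alpha))
    x[n] = np.sum(w * h(t[j], x[j]))

int_x2 = np.sum(0.5 * (x[1:] ** 2 + x[:-1] ** 2) * np.diff(t))   # trapezoidal estimate of int_0^1 x^2 dt
bound = 3.0 * np.sqrt(2.0) * c / (8.0 * gamma(alpha))            # bound quoted in Example 10
print(f"int_0^1 x^2 dt ~ {int_x2:.4f},  L2 norm ~ {np.sqrt(int_x2):.4f},  Example 10 bound = {bound:.4f}")
```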
00177572
en
[ "spi.auto" ]
2024/03/05 22:32:18
2008
https://hal.science/hal-00177572/file/IJCFiniteTime.pdf
Emmanuel Moulay email: [email protected] W Perruquetti Finite time stability conditions for non autonomous continuous systems published or not. The documents may come Finite time stability conditions for non autonomous continuous systems Introduction Since the end of the 19 th and the beginning of the 20 th century, various concepts dedicated to the qualitative behavior of the solutions for dynamical systems have been introduced (for example the seminal definitions and results from A.M. [START_REF] Lyapunov | Stability of Motion: General Problem[END_REF]). But rapidly, one had to face some more precise time specifications of the behavior of the state variables (or outputs) for real process. For example, some finite time stabilizing control design relies upon sliding mode theory (see [START_REF] Utkin | Sliding Modes in Control Optimization[END_REF]) and another one upon optimal control theory (see [START_REF] Ryan | Singular Optimal Controls for Second-Order Saturating Systems[END_REF]). In both cases the controller leads to some state discontinuous feedback control laws. Over the years much work has been dedicated to this concept, getting some sufficient conditions and the application to the construction of finite-time stabilization control (see for example [START_REF] Bhat | Continuous Finite-Time Stabilization of the Translational and Rotational Double Integrator[END_REF], [START_REF] Hong | Finite-Time Stabilization and Stabilizability of a Class of Controllable Systems[END_REF], [START_REF] Hong | On an Output Feedback Finite-Time Stabilization Problem[END_REF][START_REF] Hong | Finite-Time Control for Robot Manipulators[END_REF], Moulay and Perruquetti (2006a)). The main idea lies in assigning infinite eigenvalue to the closed loop system at the origin. Let us mention the following illustrative example ẋ = -|x| a sgn(x), x ∈ 4, a ∈ ]0, 1[ , (1) for which the solutions starting at τ = 0 from x are: φ x (τ ) =    sgn(x) |x| 1-a -τ (1 -a) 1 1-a ) if 0 ≤ τ ≤ |x| 1-a 1-a 0 if τ > |x| 1-a 1-a , (2) and they reach the origin in finite time. In fact, there exists a function called settling time that increases the time for a solution to reach the equilibrium. Usually, this function depends on the initial condition of a solution. But, for non autonomous systems, it may also depend on the initial time. Lastly, notice from this example, that in order to obtain finite time stability, the right hand side of the ordinary differential equation cannot be locally Lipschitz at the origin. The paper is organized as follows. After defining the notion of finite time stability for continuous non autonomous systems in section 2 , we recall the necessary and sufficient conditions for finite time stability of autonomous scalar systems (section 3) (result which appears in [START_REF] Haimo | Finite Time Controllers[END_REF], Moulay and Perruquetti (2006b) and whose proof is given here. Then in section 4, we give a generalization for continuous non autonomous systems of any dimension. For this we introduce a Lyapunov function whose derivative satisfies an increase to obtain sufficient conditions for finite time stability. In the subsection 4.1, one uses a smooth Lyapunov function and in the subsection 4.2, one uses a nonsmooth one. Nevertheless, as we will see the use of a smooth Lyapunov function is not a weaker result, only a different one. Then, a Lyapunov function whose derivative verifies a decrease is used in order to obtain necessary conditions for finite time stability (see subsection 4.3). 
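As a quick numerical check of the closed-form solutions (2), the following Python sketch integrates (1) with an explicit Euler scheme and compares the time at which |x| first drops below a small threshold with the settling time |x|^{1-a}/(1-a). The step size and threshold are arbitrary choices, and the non-Lipschitz right-hand side makes the final steps chatter at the level of the step size, so the comparison is only indicative.

```python
import numpy as np

def settling_time_euler(x0, a, dt=1e-4, tol=1e-6, t_max=50.0):
    """First time |x| drops below tol under explicit Euler for xdot = -|x|**a * sgn(x)."""
    x, t = float(x0), 0.0
    while abs(x) >= tol and t < t_max:
        x -= dt * np.sign(x) * abs(x) ** a
        t += dt
    return t

a = 0.5
for x0 in (0.1, 1.0, 4.0):
    t_num = settling_time_euler(x0, a)
    t_exact = abs(x0) ** (1 - a) / (1 - a)   # settling time from (2)
    print(f"x0 = {x0:>4}: Euler estimate {t_num:.4f}  vs  |x0|^(1-a)/(1-a) = {t_exact:.4f}")
```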
Through the paper, the following notations will be used: • for a > 0, the following function will be used ϕ a (x) = |x| a sgn(x), • V denotes a neighborhood of the origin in 4 n , • B ǫ is the open ball centered at the origin of radius ǫ > 0. The upper Dini derivative of a function f : 4 → 4 is the function D + f : 4 → 4 defined by: D + f (x) = lim sup h→0 + f (x + h) -f (x) h . 2 What is finite time stability? Consider the system ẋ = f (t, x), t ∈ 4 ≥0 , x ∈ 4 n , (3) where f : 4 ≥0 × 4 n → 4 n is a continuous function. Then φ x t (τ ) denotes a solution of the system ( 3) starting from (t, x) ∈ 4 ≥0 × 4 n and S (t, x) represents the set of all solutions φ x t . The existence of solutions is given by the well known Cauchy-Peano Theorem given for example in [START_REF] Hale | Ordinary Differential Equations, 2nd Edition, Pure and applied mathematics XXI[END_REF] . A continuous function α : [0, a] → [0, +∞[ belongs to class K if it is strictly increasing and α(0) = 0. It is said to belong to class K ∞ if a = +∞ and α(r) → +∞ as r → +∞. Moreover, a continuous function V : 4 ≥0 × V → 4 ≥0 such that L1) V is positive definite, L2) V (t, x) = D + V • φx t (0) is negative definite with φx t (τ ) = (τ, φ x t (τ )), is a Lyapunov function for (3). If V is smooth then V (t, x) = ∂V ∂t (t, x) + n i=1 ∂ i V ∂x i (t, x)f i (t, x). A continuous function v : 4 ≥0 × V → 4 is decrescent if there exists a K-function ψ such that |v(t, x)| ≤ ψ (||x||) ∀(t, x) ∈ 4 ≥0 × V. A continuous function v : 4 ≥0 ×4 n → 4 is radially unbounded if there exists a K ∞ -function ϕ such that v(t, y) ≥ ϕ(||y||), ∀t ∈ 4 ≥0 ∀y ∈ 4 n . The definition of asymptotic stability is well known (see [START_REF] Hahn | Theory and Application of Liapunov's Direct Method[END_REF]). In this case, the solutions of the system (3) tend to the origin (but without information about the time transient). This information comes from the notion of "settling time" which, when finite and combined with the stability concept, leads to the following definition for finite time stability Definition 2.1 The origin is weakly finite time stable for the system (3) if: A1) the origin is Lyapunov stable for the system (3), A2) for all t ∈ I, there exists δ(t) > 0, such that if x ∈ B δ(t) then for all Φ x t ∈ S (t, x): a) φ x t (τ ) is defined for τ ≥ t, b) there exists 0 ≤ T (φ x t ) < +∞ such that φ x t (τ ) = 0 for all τ ≥ t + T (φ x t ). T 0 (φ x t ) = inf{T (φ x t ) ≥ 0 : φ x t (τ ) = 0 ∀τ ≥ t + T (φ x t )} is called the settling time of the solution φ x t . A3) Moreover, if T 0 (t, x) = sup φ x t ∈S(t,x) T 0 (φ x t ) < +∞, then the origin is finite time stable for the system (3). T 0 (t, x) is called the settling time with respect to the initial conditions of the system (3). Remark 1 When the system is asymptotically stable, the settling time of a solution may be infinite. If the system (3) is continuous on I × V and locally Lipschitz on I × V \ {0}, because of solution uniqueness, the settling time of a solution and the settling time with respect to the initial conditions of the system are the same: T 0 (t, x) = T 0 (φ x t ). Definition 2.2 Let the origin be an equilibrium point of the system (3). The origin is uniformly finite time stable for the system (3) if the origin is B1) uniformly asymptotically stable for the system (3), B2) finite time stable for the system (3), B3) there exists a positive definite continuous function α : 4 ≥0 → 4 ≥0 such that the settling time with respect initial condition of the system (3) satisfies: T 0 (t, x) ≤ α ( x ) . 
3 Autonomous scalar systems Let us recall the result for finite time stability of autonomous scalar systems of the form ẋ = f (x), x ∈ 4, (4) where f : 4 → 4 given in [START_REF] Haimo | Finite Time Controllers[END_REF]. In this particular case there exists a necessary and sufficient condition for finite time stability: Lemma 3.1 Let the origin be an equilibrium point of the system (4) where f is continuous. The origin is finite time stable for the system (4) if and only if there exists a neighborhood of the origin V such that for all x ∈ V \ {0} xf (x) < 0, ( 5 ) 0 x dz f (z) < +∞. (6) Proof (⇐) As xf (x) < 0, V (x) = x 2 is a Lyapunov function for the system. So, the origin of the system (4) is asymptotically stable. Let φ x (τ ) be a solution of the system which tend to the origin with time T (φ x ). We have to show that T (φ x ) < +∞. With the asymptotic stability, if x is small enough, then τ → φ x (τ ) is strictly monotone for τ ≥ 0. E. Moulay and W. Perruquetti Moreover, T (φ x ) = T (φ x ) 0 dτ. As xf (x) < 0 for all x ∈ V\{0}, 1 f is defined on V\{0}. The following change of variables, [0, T (φ x )[ → ]0, x], τ → φ x (τ ) leads to 0 x dz f (z) = T (φ x ) 0 φx (τ ) f (φ x (τ )) dτ = T (φ x ) < +∞. T (x) = sup φ x ∈S(x) T (φ x ) does not depend on φ x . So, we conclude that the origin of the system (4) is finite time stable with the following settling time T 0 (x) = T (φ x ) satisfying T 0 (x) = 0 x dz f (z) . (⇒) Suppose that the origin of the system (4) is finite time stable. Let δ > 0 given by definition 2.1. Suppose that there exists x ∈ ]-δ, δ[ \ {0} such that xf (x) ≥ 0. • If xf (x) = 0, then f (x) = 0 et φ x (τ ) ≡ x is a solution of the system (4) which does not tend to the origin. • If xf (x) > 0, with no loss of generality, we can suppose that x > 0 et f (x) > 0. The continuity of f and the fact that f (x) > 0 lead to f (z) > 0 for z in a neighborhood of x. Then, the function τ → φ x (τ ) increases in a neighborhood of the origin. With its continuity, this solution can not tend to the origin. Let x ∈ ]-δ, δ[ \ {0} and consider the solution φ x (τ ). By assumption, there exists 0 ≤ T 0 (φ x ) < +∞ such that φ x (τ ) = 0 for all τ ≥ t + T 0 (φ x ). With the asymptotic stability, x can be chosen small enough such that τ → φ x (τ ) decreases for τ ≥ t. The following change of variables [0, T 0 (φ x )[ → ]0, x], τ → φ x (τ ) leads to 0 x dz f (z) = T0(φ x ) 0 dτ = T 0 (φ x ) < +∞. For x in a neighborhood of the origin, the settling time of a solution of (4) and the settling time with respect to the initial conditions of the system (4) are equal and T 0 (x) = 0 x dz f (z) . If the system (4) is globally defined and if conditions ( 5) and ( 6) hold globally, then the origin is globally finite time stable. Moreover, it is obvious that for autonomous systems, uniform finite time stability is finite time stability. Example 3.2 Let a ∈ ]0, 1[ and consider system (1). Obviously -xϕ a (x) < 0 for x = 0, and let x ∈ 4 then 0 x dz -|z| a sgn(z) = |x| 1-a 1 -a < +∞. The assumptions of Lemma 3.1 are satisfied. Thus the origin is uniformly finite time stable and the solutions φ x (τ ) tend to the origin with the settling time T 0 (x) = |x| 1-a 1-a . These conclusions were directly obtained in the introduction by explicit computation of the solutions (2). General case For the more general systems described by Equation (3), a natural extension will invoke the use of Lyapunov functions to give sufficient or necessary conditions for finite time stability. 
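The settling-time formula of Lemma 3.1 and Example 3.2 lends itself to a direct numerical check: the quadrature of dz/f(z) from x to 0 can be compared with a simulation and with the closed form |x|^(1-a)/(1-a). A minimal sketch, with illustrative values a = 0.6 and x = 0.8 (the endpoint singularity of the integrand at z = 0 is integrable since a < 1):

import numpy as np
from scipy.integrate import quad

a = 0.6
f = lambda x: -np.abs(x) ** a * np.sign(x)      # scalar field of Example 3.2

def settling_time_quadrature(x):
    """Settling time from Lemma 3.1: T0(x) = integral from x to 0 of dz / f(z)."""
    val, _ = quad(lambda z: 1.0 / f(z), x, 0.0)
    return val

def settling_time_simulation(x, dt=1e-5, tol=1e-8):
    """Crude forward-Euler estimate of the time needed to reach the origin."""
    t = 0.0
    while abs(x) > tol and t < 100.0:
        x += dt * f(x)
        t += dt
    return t

x0 = 0.8                                        # illustrative initial condition
print("quadrature :", settling_time_quadrature(x0))
print("simulation :", settling_time_simulation(x0))
print("closed form:", abs(x0) ** (1 - a) / (1 - a))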
Sufficient condition using smooth Lyapunov function In this section, we extend a result coming from [START_REF] Haimo | Finite Time Controllers[END_REF] to non autonomous systems. In the following, one needs the existence of a Lyapunov function V : 4 ≥ × V → 4 ≥ and a continuous function r : 4 ≥ → 4 ≥ such that r (0) = 0 satisfying the following differential inequality V (t, x) ≤ -r (V (t, x)) (7) for all (t, x) ∈ I × V. Since the use of a Lyapunov function will lead to some scalar differential inequality, the following proposition will give sufficient condition for finite time stability: the existence of a Lyapunov function satisfying the condition (7). Proposition 4.1 Let the origin be an equilibrium point for the system (3) where f is continuous. i) If there exists a continuously differentiable Lyapunov function satisfying condition (7) with a positive definite continuous function r : 4 ≥0 → 4 ≥0 such that for some ǫ > 0 ǫ 0 dz r(z) < +∞ (8) then the origin (3) is finite time stable for the system (3). ii) If in addition to i), V is decrescent, then the origin of the system (3) is uniformly finite time stable for the system (3). iii) If in addition to i), the system (3) is globally defined and V is radially unbounded, then the origin of the system (3) is globally finite time stable for the system (3). Proof i) Since V : I × V → 4 ≥0 is a Lyapunov function (thus satisfies L1 and L2), then the Lyapunov Theorem tells us that the origin is asymptotically stable. Let φ x t (τ ) be a solution of (3) which tends to the origin with the settling time T 0 (φ x t (τ )) (0 ≤ T 0 (φ x t (τ )) ≤ +∞ : from the attractivity of the origin). We have to prove that T 0 (t, x) < +∞. By using the asymptotic stability definition, x can be chosen small enough to ensure that φ x t (τ ) ∈ V for τ ≥ t and τ → V (τ, φ x t (τ )) strictly decreases for τ ≥ t. By using the change of variables: [t, t + T 0 (φ x t )] → [0, V (t, x)] given by z = V (τ, φ x t (τ )), one obtains 0 V (t,x) dz -r(z) = t+T0(φ x t ) t V (τ, φ x t (τ )) -r(V (τ, φ x t (τ ))) dτ. By assumption, V (τ, φ x t (τ )) ≤ -r(V (τ, φ x t (τ ))) ≤ 0 for all τ ≥ t. This shows that T 0 (φ x t ) = t+T0(φ x t ) t dτ ≤ V (t,x) 0 dz r(z) . This implies that T 0 (φ x t ) < +∞. As V (t,x) 0 dz r(z) is independent of φ x t , T 0 (t, x) < +∞. Thus, the origin of the system (3) is finite time stable. ii) If V is decrescent, then the system is uniformly asymptotically stable. Moreover, there exists a E. Moulay and W. Perruquetti K-function β such that V (t, x) ≤ β ( x ). So T 0 (t, x) ≤ β( x ) 0 dz r(z) = α ( x ) with α positive definite. iii) If V is radially unbounded, the system is globally asymptotically stable. Then, for all x in 4 n all the functions τ → V (τ, φ x t (τ )) decrease. Thus the system is globally finite time stable. Remark 1 The settling time with respect to initial conditions of the system (3) satisfies the following inequality T 0 (t, x) ≤ V (t,x) 0 dz r(z) . so it is continuous at the origin. Example 4.2 Scalar non autonomous systemConsider the following system: ẋ = -(1 + t)ϕ a (x), t ≥ 0, x ∈ 4. (9) The function V (x) = x 2 is a decrescent Lyapunov function for (9) and V (t, x) = -2(1 + t)xϕ a (x) ≤ -2ϕ a+1 2 (x 2 ) = -r(V (x)). Since ǫ 0 dz r(z) < +∞, the origin is uniformly finite time stable with the settling time with respect to initial conditions of the system T 0 (t, x) ≤ 4 |x| 1-a 1 -a . Example 4.3 two dimensional systemConsider a ∈]0, 1[ and the system: ẋ1 = -ϕ a (x 1 ) -x 3 1 + x 2 ẋ2 = -ϕ a (x 2 ) -x 3 2 -x 1 . 
Taking V (x) = x 2 2 , we obtain V (x 1 , x 2 ) = - 2 i=1 (x 4 i + |x i | a+1 ) ≤ 0. V is a Lyapunov function for the system. Moreover, if r (z) = ϕ a+1 2 (z), then V (x 1 , x 2 ) ≤ -r (V (x 1 , x 2 )) . In fact 2 i=1 (x 4 i + |x i | a+1 ) ≥ x 2 1 + x 2 2 a+1 2 = x a+1 . Thus the origin is uniformly finite time stable with T 0 (t, x) ≤ 2 1+a 2 x 1-a 1a . In order to test condition (8) and to conclude to the finite time stability, first one must have the existence of a pair (V, r) satisfying condition (7). For this, it is enough that there exists a decrescent Lyapunov function for the system (3). Indeed, if V is decrescent, then there exists a class K-function α such that V (t, x) ≤ α ( x ) for all (t, x) ∈ 4 ≥0 × V. As V is negative definite, there exists a class K-function β such that -V (t, x) ≥ β( x ) for all (t, x) ∈ 4 ≥0 × V. Combining both results, one obtains that V (t, x) ≤ -β α -1 (V (t, x)) for all (t, x) ∈ 4 ≥0 × V. Note that this condition does not imply that the constructed pair (V, r) will be a good candidate for condition (8). Sufficient condition using nonsmooth Lyapunov function The question is to know if it is possible to use a continuous only Lyapunov function to show finite time stability. This is possible by adding the condition that r is locally Lipschitz. We shall use the comparison lemma which can be found in (Khalil 1996, Lemma 3.4): D + g(t) ≤ f (g(t)) then g(t) ≤ Φ(t, g(a)) for all t ∈ [a, b). Now, one may give a proposition which generalizes a result given in [START_REF] Bhat | Finite Time Stability of Continuous Autonomous Systems[END_REF] to continuous non autonomous systems. Proposition 4.5 Let the origin be an equilibrium point for the system (3) where f is continuous. If there exists a continuous Lyapunov function for the system (3) satisfying condition (7) with a positive definite continuous function r : 4 ≥0 → 4 ≥0 locally Lipschitz outside the origin such that ǫ 0 dz r(z) < +∞ with ǫ > 0, then the origin is finite time stable. Moreover, the settling time with respect to initial conditions of the system (3) satisfies T 0 (t, x) ≤ V (t,x) 0 dz r(z) . Proof Since V : 4 ≥0 × V → 4 ≥0 is Lyapunov function of the system (3) satisfying condition (7), then from the Lyapunov's theorem, the origin is asymptotically stable. Let x 0 ∈ V and φ x t (τ ) be a solution E. Moulay and W. Perruquetti of (3) which tends to the origin with the settling time T 0 (φ x t ). It remains to prove that T 0 (t, x) < +∞. Because of asymptotic stability, one may suppose with no loss of generality that φ x t (τ ) ∈ V for t ≥ 0 and V (t, x) ∈ [0, ǫ]. Let us consider the system ż = -r(z), z ≥ 0, with the global semi flow Φ(τ, z) for z ≥ 0. Now, applying the comparison lemma (4.4), one deduces that V (τ, φ x t (τ )) ≤ Φ(τ, V (t, x)), τ ≥ 0, x ∈ V \ {0} . From Lemma 3.1, one knows that Φ(τ, z) = 0 for τ ≥ V (t,x) 0 dz r(z) . With the positive definiteness of V , one concludes that φ x t (τ ) = 0 for τ ≥ V (t,x) 0 dz r(z) . Moreover, V (t,x) 0 dz r(z) is independent of φ x t , this shows that T 0 (t, x) < +∞. Thus, the origin of the system (3) is finite time stable. One wants to give an example using a continuous only Lyapunov function. Example 4.6 Let 0 < a < 1, if we consider the simplest system ẋ = -x a sgn(x), x ∈ 4, and the continuous Lyapunov function V (x) = |x|, we have V (x) = -|x| a = -V (x) a . So the system is finite time stable. 
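The settling-time bound obtained above for Example 4.3 can be checked directly by simulation. A minimal sketch, with forward Euler integration and illustrative choices of a, initial condition and step:

import numpy as np

def phi(x, a):
    return np.abs(x) ** a * np.sign(x)

def rhs(x, a):
    """Right-hand side of the planar system of Example 4.3."""
    x1, x2 = x
    return np.array([-phi(x1, a) - x1 ** 3 + x2,
                     -phi(x2, a) - x2 ** 3 - x1])

def time_to_origin(x0, a, dt=1e-4, t_max=20.0, tol=1e-6):
    """First time the Euler-integrated trajectory enters the ball of radius tol."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while np.linalg.norm(x) > tol and t < t_max:
        x = x + dt * rhs(x, a)
        t += dt
    return t

if __name__ == "__main__":
    a = 0.5                          # illustrative value in ]0, 1[
    x0 = np.array([1.0, -0.5])       # illustrative initial condition
    bound = 2 ** ((1 + a) / 2) * np.linalg.norm(x0) ** (1 - a) / (1 - a)
    print("numerical time to reach the origin :", time_to_origin(x0, a))
    print("settling-time bound of Example 4.3 :", bound)

The numerically observed time should lie below the bound 2^((1+a)/2) ||x||^(1-a)/(1-a).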
Let us sum up the two previous results: in order to obtain the finite time stability, we have the choice between a pair (V 1 , r 1 ) with V 1 which is continuously differentiable and r 1 only continuous, or a pair (V 2 , r 2 ) with V 2 which is continuous and r 2 locally Lipschitz. Moreover, the two conditions are not equivalent because there is no converse theorem for general non autonomous continuous systems. Necessary conditions For the moment, there is no necessary and sufficient condition for finite time stability of general continuous (even autonomous) systems. The only converse theorem appears in [START_REF] Bhat | Finite Time Stability of Continuous Autonomous Systems[END_REF] for autonomous systems with uniqueness of solutions in forward time and when the settling time is continuous at the origin. So, one proposes a sufficient condition for non autonomous continuous systems. In the following, one needs the existence of a Lyapunov function V : 4 ≥ × V → 4 ≥ and a continuous function s : 4 ≥ → 4 ≥ such that s (0) = 0 satisfying the following differential inequality V (t, x) ≥ -s(V (t, x)) (10) for all (t, x) ∈ 4 ≥0 × V. Then, the following proposition gives necessary conditions for finite time stability: the existence of such a pair (V, s) such that the differential inequality (10) holds. Proposition 4.7 Let the origin be an equilibrium point for the system (3) where f is continuous. If the origin is weakly finite time stable for the system (3) then for all Lyapunov functions for the system (3) satisfying condition (10) with a continuous positive definite function s : 4 ≥0 → 4 ≥0 , there exists ǫ > 0 such that ǫ 0 dz s(z) < +∞. ( 11 ) Proof Suppose that V : I × V → 4 ≥0 is a Lyapunov function satisfying condition 10. Let φ x t (τ ) be a solution of (3) with the settling time 0 ≤ T 0 (φ x t ) < +∞. Because of the asymptotic stability, one may choose x small enough to ensure that φ x t (τ ) ∈ V for all τ ≥ t and τ → V (τ, φ x t (τ )) strictly decreases for τ ≥ t. By using the change of variables, [t, t + T 0 (φ x t )] → [0, V (t, x)] given by z = V (τ, φ x t (τ )), one obtains 0 V (t,x) dz -s(z) = t+T0(φ x t ) t V (τ, φ x t (τ )) -s(V (τ, φ x t (τ ))) dτ. Since V (τ, φ x t (τ )) ≥ -s(V (τ, φ x t (τ ))) and -s(V (τ, φ x t (τ ))) < 0 for all τ ≥ t one obtains V (t,x) 0 dz s(z) ≤ t+T0(φ x t ) t dτ = T 0 (φ x t ) < +∞. Remark 2 This condition may be used to conclude to the non weakly finite time stability and in particular to the non finite time stability for some systems. Example 4.8 Consider the following system: ẋ = -|x| 1 + g(t) where t ∈ 4 and g is a positive function bounded below by c > 0. The function V (x) = x 2 2 is a Lyapunov function for the system and -V (t, x) ≤ x 2 1 + c = s x 2 2 with s(z) = 2z 1+c . Since x 0 dz s(z) = +∞ for all x > 0, the origin is not finite time stable. To test condition (11) and to conclude to the non finite time stability, one must have the existence of a Lyapunov function satisfying condition (10). A sufficient condition to obtain condition (10) is that there exists a Lyapunov function for the system (3) such that -V is decrescent. There exists a gap between the sufficient and the necessary conditions. If r : 4 ≥0 → 4 ≥0 is a positive definite function such that V = -r (V ) for the system (3) then there exists a necessary and sufficient condition for finite time stability: • if ǫ 0 dz r(z) < +∞ then the origin is finite time stable; • if In general, one uses for r (z) the following function r (z) = ϕ a (z) with 0 < a < 1. 
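As a quick numerical counterpart of Remark 2 and Example 4.8, one may integrate ẋ = -|x|/(1 + g(t)) for the illustrative constant choice g(t) = c and observe that the state only decays exponentially and never reaches the origin, in agreement with the divergence of the integral of dz/s(z):

import numpy as np

c, x0, dt, t_end = 1.0, 1.0, 1e-3, 50.0         # illustrative values
x, t = x0, 0.0
while t < t_end:
    x += dt * (-abs(x) / (1.0 + c))             # Euler step for xdot = -|x|/(1+g(t)), g == c
    t += dt
print("numerical x(t_end)   :", x)
print("closed form x(t_end) :", x0 * np.exp(-t_end / (1 + c)))
# the decay is exponential: the state approaches the origin but never reaches it in finite time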
In the following example, one has forced to change the form for r (z) in order to show the finite time stability. Let the function u be defined on ]0, 1] by its graph given in figure 1 and such that u 1 n = n 2 for n ≥ 1 and 0 < a < 1. Consider the continuous function f (x) = 0 if x = 0 r(x 2 ) 2x if x = 0 As f 1 n ≤ -1 2 n u 1 n 2 ≤ 1 2n 3 , f is continuous at the origin and thus on 4. Consider the system ẋ = f (x) , x ∈ 4. V (x) = x 2 is a Lyapunov function for the system and V (x) ≤ -r (V (x)) . So, the system is finite time stable. Nevertheless, there is no function ϕ b with 0 < b < 1 such that r is bounded below by ϕ b . Conclusion A necessary and a sufficient condition for finite time stability of non autonomous continuous systems are given. As mentioned, there is still a gap to obtain necessary and sufficient conditions. The main difficulty comes from the non existence of the flow for only continuous systems and the non continuity of the settling time. Thus it is not an easy task to prove the existence of a Lyapunov function satisfying condition 7 under the hypothesis of finite time stability of the origin. However, with the given sufficient conditions, it is possible to investigate the problem of finite time stabilization for general continuous and non autonomous systems. Lemma 4. 4 4 Comparison lemmaIf the scalar differential equation ẋ = f (x), x ∈ 4, has a global semi-flow Φ : 4 ≥0 × 4 → 4, where f is continuous, and if g : [a, b) → 4 (b could be infinity) is a continuous function such that for all t ∈ [a, b), = +∞ then the origin is not finite time stable. E. Moulay and W. Perruquetti 5 Application Figure Figure 1. graph of u(z) Acknowledgment The authors thank the referees for careful reading and helpful suggestions on the improvement of the manuscript.
01776292
en
[ "phys.cond.cm-sm" ]
2024/03/05 22:32:18
2019
https://hal.science/hal-01776292/file/1706.07950.pdf
Giulio Pettini email: [email protected] Matteo Gori email: [email protected] Roberto Franzosi email: [email protected] Cecilia Clementi email: [email protected] Marco Pettini email: [email protected] On the origin of Phase Transitions in the absence of Symmetry-Breaking Keywords: numbers: 05.45.+b, 02.40.-k, 05.20.-y In this paper we investigate the Hamiltonian dynamics of a lattice gauge model in three spatial dimensions. Our model Hamiltonian is defined on the basis of a continuum version of a duality transformation of a three dimensional Ising model. The system so obtained undergoes a thermodynamic phase transition in the absence of a global symmetry-breaking and thus in the absence of an order parameter. It is found that the first order phase transition undergone by this model fits into a microcanonical version of an Ehrenfest-like classification of phase transitions applied to the configurational entropy. It is discussed why the seemingly divergent behaviour of the third derivative of configurational entropy can be considered as the "shadow" of some suitable geometrical and topological transition of the equipotential submanifolds of configuration space. I. INTRODUCTION One of the main topics in Statistical Mechanics concerns phase transitions phenomena. From the theoretical viewpoint, understanding their origin, and the way of classifying them, is of central interest. Usually, phase transitions are associated with a spontaneous symmetry-breaking phenomenon: at low temperatures the accessible states of a system can lack some of the global symmetries of the Hamiltonian, so that the corresponding phase is the less symmetric one, whereas at higher temperatures the thermal fluctuations allow the access to a wider range of energy states having all the symmetries of the Hamiltonian. In the symmetry-breaking phenomena, the extra variable which characterizes the physical states of a system is the order parameter. The order parameter vanishes in the symmetric phase and is different from zero in the broken-symmetry phase. This is the essence of Landau's theory. If G 0 is the global symmetry group of the Hamiltonian, the order of a phase transition is determined by the index of the subgroup G ⊂ G 0 of the broken symmetry phase. The corresponding mechanism in quantum field theory is described by the Nambu-Goldstone's Theorem. However, this is not an all-encompassing theory. In fact, many systems do not fit in this scheme and undergo a phase transition in the absence of a symmetry-breaking. This is the case of liquid-gas transitions, Kosterlitz-Thouless transitions, coulombian/confined regime transition for gauge theories on lattice, transitions in glasses and supercooled liquids, in general, transitions in amorphous and disordered systems, folding transitions in homopolymers and proteins, to quote remarkable examples. All these physical systems lack an order parameter. Moreover, classical theories, as those of Yang-Lee [START_REF] Yang | Statistical theory of equations of state and phase transitions I. Theory of condensation[END_REF] and of Dobrushin-Lanford-Ruelle [START_REF]A comprehensive account of the Dobrushin-Lanford-Ruelle theory and of its developments can be found[END_REF], require the N → ∞ limit (thermodynamic limit) to mathematically describe a phase transition, but the study of transitional phenomena in finite N systems is particularly relevant in many other contemporary problems [START_REF] Gross | Microcanonical Thermodynamics. 
Phase Transitions in "Small" Systems[END_REF], for instance related with polymers thermodynamics and biophysics [START_REF] Bachmann | Thermodynamics and Statistical Mechanics of Macromolecular Systems[END_REF], with Bose-Einstein condensation, Dicke's superradiance in microlasers, nuclear physics [START_REF] Ph | Caloric Curves and Energy Fluctuations in the Microcanonical Liquid-Gas Phase Transition[END_REF], superconductive transitions in small metallic objects. The topological theory of phase transitions provides a natural framework to get rid of the thermodynamic limit dogma because clear topological signatures of phase transitions are found already at finite and small N [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Casetti | Phase transitions and topology changes in configuration space[END_REF]. Therefore, looking for generalisations of the existing theories is a well motivated and timely purpose. The present paper aims at giving a contribution in this direction along a line of thought initiated several years ago with the investigation of the Hamiltonian dynamical counterpart of phase transitions [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Casetti | Dynamical and Statistical properties of Hamiltonian systems with many degrees of freedom[END_REF][START_REF] Casetti | Geometric approach to Hamiltonian dynamics and statistical mechanics[END_REF] which eventually led to formulate a topological hypothesis. In fact, Hamiltonian flows (H-flows) can be seen as geodesic flows on suitable Riemannian manifolds [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Pettini | Geometrical hints for a nonperturbative approach to Hamiltonian dynamics[END_REF], and then the question naturally arises of whether and how these manifolds "encode" the fact that their geodesic flows/H-flows are associated or not with a thermodynamic phase transition (TDPT). It is by following this conceptual pathway that one is eventually led to hypothesize that suitable topological changes of certain submanifolds of phase space are the deep origin of TDPT. This hypothesis was corroborated by several studies on specific exactly solvable models [START_REF] Casetti | Topological origin of the phase transition in a mean-field model[END_REF][START_REF] Casetti | Exact result on topology and phase transitions at any finite N[END_REF][START_REF] Casetti | Phase transitions and topology changes in configuration space[END_REF][START_REF] Angelani | Topology and Phase Transitions: from an exactly solvable model to a relation between topology and thermodynamics[END_REF][START_REF] Santos | Topological approach to microcanonical thermodynamics and phase transition of interacting classical spins[END_REF] and by two theorems. These theorems state that the unbounded growth with N of relevant thermodynamic quantities, eventually leading to singularities in the N → ∞ limit -the hallmark of an equilibrium phase transition -is necessarily due to appropriate topological transitions in configuration space [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Franzosi | Theorem on the origin of Phase Transitions[END_REF][START_REF] Franzosi | Topology and Phase Transitions I. Preliminary results[END_REF][START_REF] Franzosi | Topology and Phase Transitions II. 
Theorem on a necessary relation[END_REF][START_REF] Kastner | Phase Transitions Detached from Stationary Points of the Energy Landscape[END_REF]. Hence, and more precisely, the present paper aims at investigating whether also TDPT occurring in the absence of symmetry-breaking, and thus in the absence of an order parameter, can be ascribed to some major geometrical/topological change of the previously mentioned manifolds. To this purpose, inspired by the dual Ising model, we define a continuous variables Hamiltonian in three spatial dimensions (3D) having the same local (gauge) symmetry of the dual Ising model (reported in Section II) and then proceed to its numerical investigation. The results are reported and discussed in Section III. Through a standard analysis of thermodynamic observables, it is found that this model undergoes a first order phase transition. It is also found that the larger the number of degrees of freedom the sharper the jump of the second derivative of configurational entropy, what naturally fits into a proposed microcanonical version of an Ehrenfest-like classification of phase transitions. A crucial finding of the present work consists of the observation that this jump of the second derivative of configurational entropy coincides with a jump of the second derivative of a geometric quantity measuring the total dispersion of the principal curvatures of certain submanifolds (the potential level sets) of configuration space. This is a highly non trivial fact because the peculiar energy variation of the geometry of these submanifolds, entailing the jump of the second derivative of the total dispersion of their principal curvatures, is a primitive, a fundamental phenomenon: it is the cause and not the effect of the energy dependence of the entropy and its derivatives, that is, the phase transition is a consequence of a deeper phenomenon. In its turn, the peculiar energy-pattern of this geometric quantity appears to be rooted in the variations of topology of the potential level sets, thus the present results provide a further argument in favour of the topological theory of phase transitions, also in the absence of symmetry-breaking. II. THE MODEL Starting from the Ising Hamiltonian -J ij ∈Λ σ i σ j (1) with nearest-neighbor interactions ( ij ) on a 3D-lattice Λ, where the σ i are discrete dichotomic variables (σ i = ±1) defined on the lattice sites and J is real positive (ferromagnetic coupling), one defines the dual model [START_REF] Kogut | An introduction to lattice gauge theory and spin systems[END_REF] - J U ij U jk U kl U li (2) where the discrete variables U mm are defined on the links joining the sites m and m , and U mm = ±1. The summation is carried over all the minimal plaquettes (denoted by ) into which the lattice can be decomposed. The dual model in (2) has the local (gauge) symmetry U ij → ε i ε j U ij (3) with ε i , ε j = ±1, and i, j ∈ Λ. Such a gauge transformation leaves the model (2) unaltered, and after the Elitzur theorem [START_REF] Elitzur | Impossibility of Spontaneously Breaking Local Symmetries[END_REF] U ij does not qualify as a good order parameter to detect the occurrence of a phase transition because U ij = 0 always. In other words, no bifurcation of U ij can be observed at any phase transition point inherited by the model ( 2) from the Ising model [START_REF] Pettini | was on leave of absence from Osservatorio Astrofisico di Arcetri[END_REF]. 
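To make the role of the transformation (3) concrete, the following minimal numerical sketch (lattice size and the random numbers are illustrative; real-valued link variables are used, anticipating the continuous model introduced below, and only the plaquette sum of Eq. (2) is considered) checks that a random local assignment ε_i = ±1 leaves the sum over plaquettes unchanged, while the individual links, and hence any site-averaged link value, are scrambled:

import numpy as np

rng = np.random.default_rng(0)
n = 4                                             # illustrative lattice size, periodic b.c.
U = rng.normal(size=(n, n, n, 3))                 # one real link variable per site and direction
eps = rng.choice([-1.0, 1.0], size=(n, n, n))     # local gauge factors eps_i = +/- 1

def plaquette_sum(U):
    """Sum of U_ij U_jk U_kl U_li over all elementary plaquettes of the periodic lattice."""
    s = 0.0
    for mu in range(3):
        for nu in range(mu + 1, 3):
            U_mu, U_nu = U[..., mu], U[..., nu]
            s += np.sum(U_mu
                        * np.roll(U_nu, -1, axis=mu)    # link U_nu at site + e_mu
                        * np.roll(U_mu, -1, axis=nu)    # link U_mu at site + e_nu
                        * U_nu)
    return s

def gauge_transform(U, eps):
    """Transformation (3): the link leaving site x in direction mu picks up eps(x) * eps(x + e_mu)."""
    V = U.copy()
    for mu in range(3):
        V[..., mu] = V[..., mu] * eps * np.roll(eps, -1, axis=mu)
    return V

print("plaquette sum before :", plaquette_sum(U))
print("plaquette sum after  :", plaquette_sum(gauge_transform(U, eps)))
print("mean link before/after:", U.mean(), gauge_transform(U, eps).mean())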
In order to define a Hamiltonian flow with the same property of local symmetry -hindering the existence of a standard order parameter -we borrow the analytic form of (2) and replace the discrete dichotomic variables U ij with continuous ones U ij ∈ R. We remark that we donot want to investigate the dual-Ising model, rather we just heuristically refer to it in order to define a gauge model with the desired properties. Moreover, we add to the continuous version of (2) a stabilizing term which is invariant under the same local gauge transformation (3); this reads α ij U 2 ij -1 4 , (4) where ij stands for nearest-neighbor interactions for link variables and α is a real positive coupling constant. On a 3D-lattice Λ, and with I = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, we thus define the following model Hamiltonian H(π, U ) = i∈Λ µ∈I 1 2 π 2 iµ -J ∈Λ U ij U jk U kl U li + α i∈Λ µ∈I U 2 iµ -1 4 ( 5 ) whose flow is investigated through the numerical integration of the corresponding Hamilton equations of motion. A more explicit form of ( 5) is given by H(π, U ) = n i,j,k=1 3 ν=1 1 2 π 2 ijkν -J n i,j,k=1 [U ijk1 U i+1jk2 U ij+1k1 U ijk2 (6) + U ijk2 U ij+1k3 U ijk+12 U ijk3 + U ijk3 U ijk+11 U i+1jk3 U ijk1 ] + α n i,j,k=1 3 ν=1 U 2 ijkν -1 4 , where the summation is carried over trihedrals made of three orthogonal plaquettes. Here U ijk1 is the link variable joining the sites (i, j, k) and (i + 1, j, k), U ijk2 is the link variable joining the sites (i, j, k) and (i, j + 1, k), U ijk3 is the link variable joining the sites (i, j, k) and (i, j, k + 1). Similarly, for example, U i+1jk2 joins the sites (i + 1, j, k) and (i + 1, j + 1, k), U ij+1k1 joins (i + 1, j + 1, k) and (i, j + 1, k), and so on. That is to say that the fourth index labels the direction, i.e which index is varied by one unit. The Hamilton equations of motion are given by Uijkν = π ijkν , πijkν = - ∂H ∂U ijkν , i, j, k = 1, . . . , n; ν = 1, 2, 3, (7) periodic boundary conditions are always assumed. The numerical integration of these equations is correctly performed only by means of symplectic integration schemes. These algorithms satisfy energy conservation (with zero mean fluctuations around a reference value of the energy) for arbitrarily long times as well as the conservation of all the Poincaré invariants, which include the phase space volume, so that also Liouville's theorem is satisfied by a symplectic integration. We adopted a thirdorder bilateral symplectic algorithm as described in [START_REF] Casetti | Efficient symplectic algorithms for numerical simulations of Hamiltonian flows[END_REF]. We used J = 1 and α = 1, the integration time step ∆t varied from 0.005 at low energy to 0.001 at high energy so as to keep the relative energy fluctuations ∆E/E close to 10 -6 . III. DEFINITION OF THE OBSERVABLES AND NUMERICAL INVESTIGA-TION Given any observable A = A(π, U ), one computes its time average as A t = 1 t t 0 dτ A[π(τ ), U (τ )] (8) along the numerically computed phase space trajectories. For sufficiently long integration times, and for generic nonlinear (chaotic) systems, these time averages are used as estimates of microcanonical ensemble averages in all the expressions given below. A. Thermodynamic observables The basic macroscopic thermodynamic observable is temperature. 
The microcanonical definition of temperature depends on entropy -the basic thermodynamic potential in the microcanonical ensemble -according to the relation 1 T = ∂S ∂E V , ( 9 ) where V is the volume, E is the energy and the entropy S is given by S(N, E, V) = k B log dπ 1 • • • dπ N dU 1 • • • dU N δ[E -H(π, U )] ( 10 ) where N is the total number of degrees of freedom, N = 3n 3 in the present context, and U k stands for any suitable labelling of them. By means of a Laplace transform technique [START_REF] Pearson | Laplace-transform technique for deriving thermodynamic equations from the classical microcanonical ensemble[END_REF], from Eqs. ( 9) and [START_REF] Pettini | Geometrical hints for a nonperturbative approach to Hamiltonian dynamics[END_REF] one gets (setting k B = 1) T = 2 (N -2) K -1 -1 . ( 11 ) where K -1 is the microcanonical ensemble average of the inverse of the kinetic energy K = E -V (U ) , where V (U ) is the potential part of the Hamiltonian [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF]. In numerical simulations K -1 = 1 t t 0 dτ n i,j,k=1 3 ν=1 1 2 π 2 ijkν (τ ) -1 . ( 12 ) For t sufficiently large K -1 attains a stable value (in general this is a rapidly converging quantity entailing a rapid convergence of T ). Since the invariant measure for nonintegrable Hamiltonian dynamics is the microcanonical measure in phase space, the occurrence of equilibrium phase transitions can be investigated in the microcanonical statistical ensemble through Hamiltonian dynamics [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF]For generic quasi-integrable systems[END_REF]. Standard numerical signatures of phase transitions (also found with canonical Monte Carlo random walks in phase space) are: the bifurcation of the order parameter at the transition point (somewhat smoothed at finite number of degrees of freedom), and sharp peaks -of increasing height with an increasing number of degrees of freedom -of the specific heat at the transition point. As already remarked above, our model [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF], because of the local (gauge) symmetry, lacks a standard definition of an order parameter as is usually given in the case of symmetrybreaking phase transitions. And in fact, in every numerical simulation of the dynamics we have computed the time average U ij t always finding U ij t 0 independently of the lattice size and of the energy value (the double average means averaging over the entire lattice first, and then averaging over time). Thus, the presence of a phase transition is detected through the shape of the so-called caloric curve, that is, T = T (E). For the model in [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF] this has been computed by means of Eq. [START_REF] Casetti | Topological origin of the phase transition in a mean-field model[END_REF]. Then the microcanonical constant-volume specific heat follows according to the relation 1/C V = ∂T (E)/∂E. The numerical computation of specific heat can be independently performed, with respect to the caloric curve, as follows. 
Starting with the definition of the entropy, given in [START_REF] Pettini | Geometrical hints for a nonperturbative approach to Hamiltonian dynamics[END_REF], an analytic formula can be worked out [START_REF] Pearson | Laplace-transform technique for deriving thermodynamic equations from the classical microcanonical ensemble[END_REF], which is exact at any value of N . This formula reads c V (E) = C V N = N (N -2) 4 (N -2) -(N -4) K -2 K -1 2 -1 , ( 13 ) and this is the natural expression to work out the microcanonical specific heat by means of Hamiltonian dynamical simulations. In order to get the above defined specific heat, time averages of the kind K α t = 1 t t 0 dτ n i,j,k=1 3 ν=1 1 2 π 2 ijkν (τ ) α are computed with α = -1, -2. Then, for sufficiently large t, the microcanonical averages K α can be replaced by K α t . In Figure 1 the caloric curve is reported for different sizes of the lattice. A kink, typical of first order phase transitions, can be seen. This entails the presence of negative values of the specific heat, and, consequently, ensemble nonequivalence for the model under consideration. And, in fact, in Figure 2, where we report the outcomes of the computations of the specific heat according to Eq.( 13), we can observe an energy interval where the specific heat C V is negative, and very high peaks are also found. Nevertheless, these peaks are not related with an analyticity loss of the entropy (see Section III C) but rather depend on the existence of two points of flat tangency to the caloric curve. In Figure 3 The results so far reported provide us with a standard numerical evidence of the existence of a first order phase transition undergone by the model investigated. Besides standard thermodynamic observables, the study of phase transitions through Hamiltonian dynamics makes available a new observable, the largest Lyapunov exponent λ, which is of purely dy-namical kind, and which has usually displayed peculiar patterns in presence of a symmetrybreaking phase transition [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Casetti | Geometric approach to Hamiltonian dynamics and statistical mechanics[END_REF][START_REF] Caiani | Geometry of dynamics, Lyapunov exponents and phase transitions[END_REF][START_REF] Caiani | Geometry of dynamics and phase transitions in classical lattice ϕ 4 theories[END_REF][START_REF] Caiani | Hamiltonian dynamics of the two-dimensional lattice ϕ 4 model[END_REF][START_REF] Firpo | Analytic estimation of the Lyapunov exponent in a mean-field model undergoing a phase transition[END_REF][START_REF] Barré | Lyapunov exponents as a dynamical indicator of a phase transition[END_REF]. Therefore in the following Section an attempt is made to characterise the phase transition undergone by our model also through the energy dependence of λ. B. A Dynamic observable: the largest Lyapunov exponent The largest Lyapounov exponent λ is the standard and most relevant indicator of the dynamical stability/instability (chaos) of phase space trajectories. Let us quickly recall that the numerical computation of λ proceeds by integrating the tangent dynamics equations, which, for Hamiltonian flows, read ξi = ζ i , ζi = - N j=1 ∂ 2 V ∂q 1 ∂q j q(t) ξ j , i = 1, . . . , N (14) together with the equations of motion of the Hamiltonian system under investigation. 
Then the largest Lyapunov exponent λ is defined by λ = lim t→∞ 1 t log [ξ 2 1 (t) + • • • + ξ 2 N (t) + ζ 2 1 (t) + • • • + ζ 2 N (t)] 1/2 [ξ 2 1 (0) + • • • + ξ 2 N (0) + ζ 2 1 (0) + • • • + ζ 2 N (0)] 1/2 , (15) In a numerical computation the discretized version of ( 15) is used, with ξ = (ξ 1 , . . . , ξ 2N ) and ξ i+N = ξi λ = lim m→∞ 1 m m i=1 1 ∆t log ξ[(i + 1)∆t] ξ(i∆t) , (16) where, after a given number of time steps ∆t, for practical numerical reasons it is convenient to renormalize the value of ξ to a fixed one. The numerical estimate of λ is obtained by retaining the time asymptotic value of λ(m∆t). This is obtained by checking the relaxation pattern of log λ(m∆t) versus log(m∆t). Note that λ can be expressed as the time average of a suitable observable defined as follows. From the compact notation ξi = k J ik [q(t)]ξ k for the system [START_REF] Angelani | Topology and Phase Transitions: from an exactly solvable model to a relation between topology and thermodynamics[END_REF] and observing that 1 2 setting d dt log(ξ T ξ) = ξ T ξ + ξT ξ 2ξ T ξ = ξ T J[q(t)]ξ + ξ T J T [q(t)]ξ 2ξ T ξ , J [q(t), ξ(t)] = {ξ T J[q(t)]ξ + ξ T J T [q(t)]ξ}/(2ξ T ξ), one gets λ = lim t→∞ 1 t log ξ(t) ξ(0) = lim t→∞ 1 t t 0 dτ J [q(τ ), ξ(τ )] , (17) which formally gives λ as a time average as per Eq.( 8). The numerical results summarised in Figure 4 qualitatively indicate a transition between two dynamical regimes of chaoticity: from a weakly chaotic dynamics at low energy density values, to a more chaotic dynamics at large energy density values. However the transition between these dynamical states is a mild one. At variance with those models where a phase transition stems from a symmetry-breaking, here there is no peculiar property of the shapes of λ = λ(E/N ) in correspondence of the phase transition. Therefore, in the following Section we directly tackle the numerical study of the differentiability class of the entropy. C. Microcanonical definition of phase transitions As is well known, according to the Ehrenfest classification, the order of a phase transition is given by the order of the discontinuous derivative with respect to temperature T of the Helmholtz free energy F (T ). However, a difficulty arises in presence of divergent specific heat C V associated with a second order phase transition because this implies a divergence of (∂ 2 F/∂T 2 ), and, in turn, a discontinuity of (∂F/∂T ) so that the distinction between first and second order transitions is lost. By resorting to the concept of symmetry-breaking, Landau theory circumvents this difficulty by classifying the order of a phase transition according to the index of the symmetry group of the broken-symmetry phase which is a subgroup of the group of the more-symmetric phase. As in the present work we are tackling a system undergoing a phase transition in the absence of symmetry-breaking, we have to get back to the origins as follows. According to the Ehrenfest theory, a phase transition is associated with a loss of analyticity of a thermodynamic potential (Helmholtz free energy, or, equivalently Gibbs free energy), and the order of the transition depends on the differentiability class of this thermodynamic potential. Later, on a mathematically rigorous ground, the identification of a phase transition with an analyticity loss of a thermodynamic potential (in the grancanonical ensemble) was confirmed by the Yang-Lee theorem. 
Now, let us consider the statistical ensemble which is the natural counterpart of microscopic Hamiltonian dynamics, that is, microcanonical ensemble. As already recalled in Section III.A, here the relevant thermodynamic potential is entropy, and considering the specific heat C -1 V = ∂T (E) ∂E which, after Eq.( 9), reads C V = - ∂S ∂E 2 ∂ 2 S ∂E 2 -1 , (18) from the last expression we see that C V can diverge only as a consequence of the vanishing of (∂ 2 S/∂E 2 ) which has nothing to do with a loss of analyticity of S(E). This is why in Section III.A we have affirmed that the peaks of C V reported in Figure 2 stem from a rather trivial effect. For standard Hamiltonian systems (i.e. quadratic in the momenta) the relevant information is carried by the configurational microcanonical ensemble, where the configurational canonical free energy is f N (β) ≡ f N (β; V N ) = 1 N log Z c (β, N ) with Z c (β, N ) = (Λ d ) ×n dq 1 . . . dq N exp[-βV N (q 1 , . . . , q N )] and the configurational microcanonical entropy (in units s.t. k B = 1) is S N (v) ≡ S N (v; V N ) = 1 N log Ω(N v, N ) , where v = v/N is the potential energy per degree of freedom, and Ω(v, N ) = (Λ d ) ×n dq 1 • • • dq N δ[V N (q 1 , . . . , q N ) -v] , (19) is the volume of the equipotential hypersurface Σ v (codimension-1 subset of configuration space), and δ[•] is the Dirac functional. Then S N (v) is related to the configurational canonical free energy, f N , for any N ∈ N, v ∈ R, and β ∈ R through the Legendre transform -f N (β) = β • vN -S N (v N ) ( 20 ) where the inverse of the configurational temperature T (v) is given by β N (v) = ∂S N ∂v (v) . (21) Then consider the function φ(v) = f N [β(v)], from φ (v) = -v [dβ N (v)/dv] we see that if β N (v) ∈ C k (R) then also φ(v) ∈ C k (R) which in turn means S N (v) ∈ C k+1 (R) while f N (β) ∈ C k (R). Hence, if the functions {S N (v)} N ∈N are convex, thus ensuring the existence of the above Legendre transform, and if in the N → ∞ limit it is f ∞ (β) ∈ C 0 (R) then S ∞ (v) ∈ C 1 (R), and if f ∞ (β) ∈ C 1 (R) then S ∞ (v) ∈ C 2 (R). We are now ready for a classification of phase transitions à la Ehrenfest in the present microcanonical configurational context. The original Ehrenfest definition associates a first or second order phase transition with a discontinuity in the first or second derivatives of f ∞ (β), that is with f ∞ (β) ∈ C 0 (R) or f ∞ (β) ∈ C 1 (R) , respectively. This premise heuristically suggests to associate a first order phase transition with a discontinuity of the second derivative of the entropy S ∞ (v), and to associate a second order phase transition with a discontinuity of the third derivative of the entropy S ∞ (v). Let us stress that this definition is proposed regardless the existence of the Legendre transform, which typically fails in presence of first order phase transitions which bring about a kink-shaped energy dependence of the entropy [START_REF] Gross | Microcanonical Thermodynamics. Phase Transitions in "Small" Systems[END_REF]. Thus, strictly speaking, the definition that we are putting forward does not mathematically and logically stem from the original Ehrenfest classification. The introduction of our entropy-based classification of phase transitions à la Ehrenfest is heuristically motivated, but to some extent arbitrary. 
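In practice, the classification above is applied to numerical data for the entropy, such as the derivatives computed below from T(E) and u(E) in Eqs. (22)-(23). A minimal finite-difference sketch of such an estimator is the following; the entropy samples used here are synthetic and purely illustrative, built so that the second derivative jumps at a prescribed value (they are not data from the gauge model):

import numpy as np

def entropy_derivatives(v, S):
    """Central finite-difference estimates of dS/dv and d2S/dv2 on a grid of
    potential energy per degree of freedom."""
    dS = np.gradient(S, v)
    d2S = np.gradient(dS, v)
    return dS, d2S

if __name__ == "__main__":
    # synthetic illustrative entropy: C^1 at vc, with a jump of d2S/dv2 of size -1
    vc = -1.32
    v = np.linspace(-1.6, -0.65, 500)
    S = np.where(v < vc, -0.4 * (v - vc) ** 2, -0.9 * (v - vc) ** 2) + 0.8 * v
    dS, d2S = entropy_derivatives(v, S)
    jump = d2S[v > vc + 0.05].mean() - d2S[v < vc - 0.05].mean()
    print("estimated jump of d2S/dv2 across vc:", round(jump, 3))   # expected about -1.0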
This entropy-based classification no longer suffers the previously mentioned difficulty arising in the framework of canonical ensemble, including here both divergent specific heat in presence of a second order phase transition and ensemble non-equivalence. In the end the validity of the proposed classification has to be checked against practical examples. The gauge model, here under investigation, provides a first benchmarking in this direction. It is worth mentioning that a thorough investigation of microcanonical thermodynamics, with special emphasis on phase transitions, can be found in Ref. [START_REF] Gross | Microcanonical Thermodynamics. Phase Transitions in "Small" Systems[END_REF]. However, our above proposed approach is rather different from (and complementary to) that discussed in [START_REF] Gross | Microcanonical Thermodynamics. Phase Transitions in "Small" Systems[END_REF]. In fact, in [START_REF] Gross | Microcanonical Thermodynamics. Phase Transitions in "Small" Systems[END_REF] the determinant and the eigenvalues of the curvature matrix associated with the two dimensional entropy surface S(E, N ) are the basic quantities to signal the presence, and define the order, of a phase transition; while no reference is made to the differentiability class of the entropy to characterise a phase transition. From the numerical results concerning the functions T (E) and u(E), reported in Figure 1 and Figure 3, respectively, we computed the first and second derivatives of the configurational entropy as The derivative (dE/du) entering Eq.( 22) is obtained after inversion of the function u = u(E) ∂S ∂u = ∂S ∂E dE du = 1 T (E) dE du , (22) ∂ 2 S ∂u 2 = ∂ ∂u 1 T (E) dE du . (23) reported in Figure 3 and by means of a spline interpolation of its points. Whereas ∂ 2 u S(u) in Eq.( 23) is computed from the raw numerical data, and the derivatives with respect to u have been obtained by means of a standard central difference formula. The four patterns of ∂ u S(u), computed for different sizes of the lattice and reported in Figure 5 and Figure 6, show that each ∂ u S(u) appears splitted into two monotonic branches, one decreasing and the other increasing as functions of u, respectively. Approximately out of the interval u ∈ (-1.6, -0.65) the four patterns are perfectly superposed, whereas within this interval -which contains the transition value u c -1.32 -we can observe that the transition from ∂ u S < 0 to ∂ u S > 0 gets sharper at increasing lattice dimension. This means that the second derivative of the entropy, ∂ 2 u S(u), tends to make a sharper jump at increasing N . And in fact, this is what is suggested by the four patterns of ∂ 2 u S(u) -computed for the same sizes of the lattice -reported in Figure 7. These are strongly suggestive to belong to a sequence of patterns converging to a step-like limit pattern. In this case the third order derivative (∂ 3 S/∂u 3 ) would asymptotically diverge entailing a loss of analyticity of the entropy which, in fact, would drop to S ∞ (u) ∈ C 1 . And this is in agreement with the above proposed classification à la Ehrenfest. It is worth noting that we have found evidence of only one transition point, despite the presence of two peaks of the specific heat, thus confirming that in order to correctly characterize a phase transition in the microcanonical ensemble one has to look for the signals of analyticity loss of the entropy. D. 
A geometric observable for the level sets Σ v in configuration space We have seen in the preceding Section that -within the confidence limits of a numerical investigation -the first order phase transition of the gauge model under investigation seems to correspond to an asymptotic divergence of the third derivative (∂ 3 S/∂u 3 ) of the microcanonical configurational entropy. Under the main theorem in Ref. [START_REF] Franzosi | Topology and Phase Transitions II. Theorem on a necessary relation[END_REF] this should stem from a topological change of the potential level sets Σ u = V -1 (u). In order to get some information of topological kind about these level sets one has to resort to concepts and methods of differential topology. In fact, differential topology allows to catch some topological infor-mation on differentiable manifolds through suitable curvature integrals (the Gauss-Bonnet theorem being the first classical example of this type). Relevant geometric quantities can be computed through the extrinsic geometry of hypersurfaces of a Euclidean space. To do this one has to study the way in which an N -surface Σ curves around in R N +1 by measuring the way the normal direction changes as we move from point to point on the surface. The rate of change of the normal direction N at a point x ∈ Σ in direction v is described by the shape operator (sometimes also called Weingarten's map) L x (v) = -∇ v N = -(v • ∇)N, where v is a tangent vector at x and ∇ v is the directional derivative; gradients and vectors being represented in R N +1 . For the level sets of a regular function, as is the case of the constant-energy hypersurfaces in the phase space of Hamiltonian systems or of the equipotential hypersurfaces in configuration space, thus generically defined through a regular real-valued function f as Σ a := f -1 (a), the normal vector is N = ∇f / ∇f . The eigenvalues κ 1 (x), . . . , κ N (x) of the shape operator are the principal curvatures at x ∈ Σ. For the potential level sets Σ v = V -1 (v) the trace of the shape operator at any given point is the mean curvature at that point and can be written as [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Thorpe | Elementary Topics in Differential Geometry[END_REF] M = - 1 N ∇ • ∇V ∇V = 1 N N i=1 κ i . (24) We have numerically computed the second moment of M averaged along the Hamiltonian flow σ M = N V ar(M ) t = N [ M 2 t -M 2 t ] 1 N N i=1 κ 2 i t - 1 N N i=1 κ i 2 t , (25) where we have assumed that the correlation term N -2 i,j [ k i k j t -k i t k j t ] vanishes. In fact, on the one side there is no conserved ordering of the eigenvalues of the shape operator along a dynamical trajectory, and on the other side the averages are performed along chaotic trajectories (the largest Lyapounov exponent is always positive) so that k i and k j vary almost randomly from point to point and independently one from the other. The numerical results are reported in Figure 8 and Figure 9, where an intriguing feature of the patterns of σ M (E) and σ M (u) is evident: below the transition point (marked with a vertical dashed line) the concavity of both σ M (E) and σ ( u) is oriented downward so that d 2 σ M /dE 2 and d 2 σ M /du 2 are negative, whereas just above the transition point both σ M (E) and σ M (u) are segments of a straight line, so that d 2 σ M /dE 2 and d 2 σ M /du 2 vanish. Thus both derivatives make a jump at the transition point. 
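Operationally, Eqs. (24)-(25) only require the gradient and the Hessian of the potential along the numerically integrated flow, since the mean curvature expands as M = -(1/N)[ΔV/|∇V| - ∇V·H∇V/|∇V|^3]. The following sketch shows the mechanics of the estimate on a toy, uncoupled quartic-oscillator potential (this is not the gauge model (5); all parameter values are illustrative) with a simple leapfrog integrator:

import numpy as np

def grad_V(q):        # toy potential V = sum(q^4/4 + q^2/2); its gradient
    return q ** 3 + q

def hess_V_diag(q):   # diagonal of its Hessian
    return 3.0 * q ** 2 + 1.0

def mean_curvature(q):
    """M = -(1/N) div( grad V / |grad V| ), Eq. (24), via gradient and Hessian."""
    g = grad_V(q)
    h = hess_V_diag(q)
    norm = np.linalg.norm(g)
    lap = h.sum()                    # Laplacian of V
    gHg = np.sum(h * g * g)          # grad V . Hessian . grad V (diagonal Hessian)
    return -(lap / norm - gHg / norm ** 3) / q.size

def sigma_M_along_flow(q0, p0, dt=0.01, n_steps=20000):
    """Leapfrog integration of H = p^2/2 + V(q); returns sigma_M = N * Var(M)
    estimated as a time average, as in Eq. (25)."""
    q, p = q0.copy(), p0.copy()
    Ms = np.empty(n_steps)
    for k in range(n_steps):
        p -= 0.5 * dt * grad_V(q)    # half kick (force = -grad V)
        q += dt * p                  # drift
        p -= 0.5 * dt * grad_V(q)    # half kick
        Ms[k] = mean_curvature(q)
    return q.size * Ms.var()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N = 50                                       # illustrative number of degrees of freedom
    q0 = rng.normal(scale=0.5, size=N) + 1.0     # illustrative initial condition
    p0 = rng.normal(scale=0.3, size=N)
    print("sigma_M estimate:", sigma_M_along_flow(q0, p0))

Applied to model (5) along its Hamiltonian flow, estimates of this kind yield the patterns of Figures 8 and 9, whose second derivatives jump at the transition point.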
Again within the validity limits of numerical investigations, this means that the third order derivatives, and in particular d 3 σ M /du 3 , diverge. It is then natural to think of a connection with the asymptotic divergence of d 3 S/du 3 suggested by the results reported in the preceding Section. Remark. There is a point of utmost importance to comment on. In presence of a phase transition (and of a finite size of a phase transition as is the case of numerical simulations) the typical variations of many observables at the transition point are the effects of the singular properties of the statistical measures and hence of the corresponding thermodynamic potentials (entropy, free energy, pressure). But this is not true for the geometric quantity σ M (u) which is independent of the properties of any statistical measure. Peculiar changes of the geometry -and possibly of the topology -of the potential level sets of configuration space (detected by σ M ) constitute the deep origin, the cause of phase transitions, not an effect. The singular pattern of σ M at the transition point is a primitive phenomenon. In other words, geometrical/topological variations of the spaces where the statistical measures are defined (phase space and configuration space) entail their singular properties [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF]. The vice versa is meaningless. Let us now see how the jump of the second derivative of σ M (u) and the jump of the second derivative of the configurational entropy S(u) can be both attributed to a deeper phenomenon: a suitable change of the topology of the {Σ u } u∈R . In what follows we resort to the best, non-trivial approximations at present available. Consider the pointwise dispersion of the principal curvatures s κ = 1 N N i=1 (κ i -κ) 2 (26) where κ = 1 N N i=1 κ i (27) equation ( 26) is equivalently rewritten as [START_REF] Zhang | Some New Deformation Formulas about Variance and Covariance[END_REF] s κ = 1 N 2 N i,j=1 (κ i -κ j ) 2 (28) and the time average along the Hamiltonian flow of s κ is then equivalently written as s κ t = 1 N N i=1 κ 2 i t -κ 2 t = 1 N 2 N i,j=1 (κ i -κ j ) 2 t . (29) Now, from Eqs.( 25) and ( 29) we get σ M -s κ t - 1 N N i=1 κ i 2 t + κ 2 t ( 30 ) so that, if we make a "mean field"-like approximation in the first term of the r.h.s. of [START_REF] Barré | Lyapunov exponents as a dynamical indicator of a phase transition[END_REF] by replacing the κ i with their average κ it follows σ M -s κ t -κ 2 t + κ 2 t , and as κ in Eq.( 27) is the same of M in Eq.( 24) one trivially gets (1 - 1 N )σ M -s κ t 0 (31) so that, under this "mean field"-like approximation, σ M in Eq.( 25) can be used to estimate s κ t . Then, as the ergodic invariant measure of chaotic Hamiltonian dynamics is the microcanonical one, the time averages • t provide the values of the surface averages • Σ E . Hence, and within the validity limits of the undertaken approximations, an interesting connection can be used between extrinsic curvature properties of an hypersurface of a Euclidean space R N +1 and its Betti numbers (the diffeomorphism-invariant dimensions of the cohomology groups of the hypersurface, thus topological invariants) [START_REF] Nakahara | Geometry, Topology and Physics[END_REF]. This connection is made by Pinkall's inequality given in the following. 
Denoting by σ(L x ) 2 = 1 N 2 i<j (κ i -κ j ) 2 the dispersion of the principal curvatures of the hypersurface, then after Pinkall's theorem [34] 1 vol(S N ) Σ N v [σ(L x )] N dµ(x) ≥ N -1 i=1 i N -i N/2-i b i (Σ N v ) , (32) where b i (Σ N v ) are the Betti numbers of the manifold Σ N v immersed in the Euclidean space R N +1 , S N is an N -dimensional sphere of unit radius, and µ(x) is the measure on Σ N v . With the help of the Hölder inequality for integrals we have Σ N v [σ(L x )] 2 dµ(x) ≤ Σ N v {[σ(L x )] 2 } N/2 dµ(x) 2/N Σ N v dµ(x) 1/(1-2/N ) (33) whence, at large N , [START_REF] Pinkall | Inequalities of Willmore Type for Submanifolds[END_REF] this inequality becomes an equality when |f | p / f p p = |g| q / g q q almost everywhere [START_REF] Reed | Functional Analysis, revised and enlarged edition[END_REF], where f p is the standard L p norm f p = S |f | p dµ(x) Σ N v dµ(x) -1 Σ N v [σ(L x )] 2 dµ(x) ≤ Σ N v {[σ(L x )] 2 } N/2 dµ(x) 2/N 1/p , where S is a measurable space. In the inequalities above g(x) = 1, thus the Hölder inequality becomes an equality when |f | p = f p p / S dµ(x), that is, when |σ(L x )| N equals its average value almost everywhere on Σ N v . Introducing a positive remainder function r(v), Eq.( 34) is rewritten as Σ N v dµ(x) -1 Σ N v [σ(L x )] 2 dµ(x) = Σ N v {[σ(L x )] 2 } N/2 dµ(x) 2/N -r(v) . (35) For the model under investigation, the pointwise dispersion of the principal curvatures of the potential level sets actually displays a limited variability from point to point. This follows from the observation that the numerically computed variance of the mean curvature in Eq.( 25) is very fastly convergent to its asymptotic value, independently of the initial condition [START_REF]The spread of the values of σ M = N V ar(M ) ∆t = N [ M 2 ∆t -M 2 ∆t ], numerically computed along short segments of time duration ∆t = 100[END_REF]. Hence, in the present case, the remainder r(v) appears to be a small correction and, consequently, the Hölder inequality is tight. Then, using σ M = N [ M 2 Σv -M 2 Σv ] ∼ Σ N v dµ(x) -1 Σv [σ(L x )] 2 dµ(x) (36) together with Eq.( 35) and the Pinkall inequality, one finally gets σ M ∼ Σ N v {[σ(L x )] 2 } N/2 dµ(x) 2/N -r(v) ∼ vol(S N ) N -1 i=1 i N -i N/2-i b i (Σ N v ) 2/N -r(v) , (37) that is, the observable σ M (v) is explicitly related with the topology of the level sets Σ N v . This relation, even being an approximate one, is definitely non-trivial because there are very few possibilities of relating total curvature properties of a manifold with its topological invariants. On the other hand, both Pinkall's inequality and the Hölder inequality are sufficiently tight to make Eq.( 37) meaningful. In fact, in addition to the already given arguments concerning the Hölder inequality in Eq.( 34), Pinkall's inequality stems from the Morse inequalities Therefore, the integral in the l.h.s. of Eq.( 37) necessarily follows the topological variations of the Σ N v described by the weighted sum of its Betti numbers. The consequence is that a suitable variation with v of the weighted sum of the Betti numbers of a Σ N v can be sufficient to entail a sudden change of the convexity of the function σ M (v), as reported in Figure 9, and thus entail a discontinuity of its second derivative [START_REF]The Betti numbers -as well as Morse indexes -are integers so that their sum, weighted or not, forms only staircase-like patterns which do not qualify as continuous and possibly differentiable functions. 
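As a simple consistency check of Pinkall's inequality (32), and of the Morse inequalities µ_k(M) ≥ b_k(M) just quoted, consider the round sphere:

\[
\Sigma^N_v = S^N(r)\subset \mathbb{R}^{N+1}:\qquad \kappa_1=\dots=\kappa_N=\tfrac{1}{r}\ \Rightarrow\ \sigma(L_x)\equiv 0\ \Rightarrow\ \frac{1}{\mathrm{vol}(S^N)}\int_{S^N(r)}[\sigma(L_x)]^N\,d\mu(x)=0 ,
\]

while $b_i(S^N)=0$ for $0<i<N$, so the weighted sum on the right-hand side of (32) vanishes as well and the inequality is trivially saturated. Conversely, any non-vanishing intermediate Betti number forces a strictly positive total dispersion of the principal curvatures somewhere on the level set.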
On the other hand, the existence of a relationship between thermodynamics and configuration space topology is provided by the following exact formula [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF][START_REF] Franzosi | Topology and Phase Transitions II. Theorem on a necessary relation[END_REF]
\[
S_N^{(-)}(v) \;=\; \frac{k_B}{N}\log\int_{M_v^N} d^N q \;=\; \frac{k_B}{N}\log\Biggl[\mathrm{vol}\Bigl(M_v^N\setminus\bigcup_{i=1}^{\mathcal{N}(v)}\Gamma\bigl(x_c^{(i)}\bigr)\Bigr) + \sum_{i=0}^{N} w_i\,\mu_i(M_v^N) + \mathcal{R}\Biggr]\ , \qquad (38)
\]
where $S_N^{(-)}$ is the configurational entropy and the $\mu_i(M_v^N)$ are the Morse indexes (in one-to-one correspondence with topology changes) of the submanifolds $\{M_v^N = V_N^{-1}((-\infty, v])\}_{v\in\mathbb{R}}$ of configuration space. In the square brackets, the first term is the result of the excision of suitable neighbourhoods $\Gamma(x_c^{(i)})$ of the critical points of the interaction potential from $M_v^N$ (here $\mathcal{N}(v)$ is the number of critical points $x_c^{(i)}$ of the potential contained in $M_v^N$); the second term is a weighted sum of the Morse indexes; and the third term is a smooth function of $N$ and $v$. Again, sharp changes in the potential energy pattern of at least some of the $\mu_i(M_v^N)$ (thus of the way topology changes with $v$) affect $S_N^{(-)}(v)$ and its derivatives. In other words, both the jump of the second derivative of the entropy and the jump of the second derivative of $\sigma_M$ are possibly rooted in the same topological ground, where some adequate variation of the topology of the $\Sigma_v^N$ - which foliate the configuration space - takes place. Notice that even though in Eq. (38) $S_N^{(-)}(v)$ depends on the topology of the $M_v^N$ through the Morse indexes $\mu_i(M_v^N)$, in the framework of Morse theory a topology change of a level set $\Sigma_v^N$ is always associated with a topology change of the manifold $M_v^N$ of which $\Sigma_v^N$ is the boundary [START_REF] Milnor | Morse Theory[END_REF].
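The Morse indexes entering Eq. (38) are conceptually simple objects: $\mu_k(M_v^N)$ counts the critical points of $V$ with critical value not exceeding $v$ whose Hessian has exactly $k$ negative eigenvalues. The toy script below makes this explicit for a two-degree-of-freedom double-well potential, used purely as a stand-in (the actual gauge-model potential is high-dimensional and is not reproduced here); it shows how the $\mu_k(M_v)$ change stepwise as $v$ crosses the critical values of $V$.

```python
import itertools
import numpy as np

# Toy potential with analytically known critical points (illustrative stand-in only).
def V(q):
    x, y = q
    return (x**2 - 1.0)**2 + (y**2 - 1.0)**2

def hessian(q):
    x, y = q
    return np.diag([12.0 * x**2 - 4.0, 12.0 * y**2 - 4.0])

# grad V vanishes where each coordinate is in {-1, 0, +1}
critical_points = list(itertools.product([-1.0, 0.0, 1.0], repeat=2))

def morse_indexes(v):
    """mu_k(M_v): critical points with V <= v, grouped by number of negative Hessian eigenvalues."""
    mu = [0, 0, 0]
    for q in critical_points:
        if V(q) <= v:
            k = int(np.sum(np.linalg.eigvalsh(hessian(q)) < 0.0))
            mu[k] += 1
    return mu

for v in (0.0, 0.5, 1.5, 2.5):
    print(f"v = {v:3.1f}   (mu_0, mu_1, mu_2) = {tuple(morse_indexes(v))}")
```

Each step of the $\mu_k(M_v)$ marks a topology change of $M_v$ (the attachment of a $k$-handle), and through the weighted sum in Eq. (38) these steps feed directly into the $v$-dependence of $S_N^{(-)}(v)$.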
Summarizing, the topology changes indirectly detected by the function $\sigma_M(u)$ can affect the configurational entropy $S_N(v)$ and its tendency to develop an asymptotic discontinuity of $\partial_v^2 S_\infty(v)$ (we use $u$ and $v$ interchangeably). Finally, in the Appendix we show that the non-trivial contribution to the homology groups of the energy level sets $\Sigma_E^N$ comes from the homology groups of the configuration space submanifolds $M_v^N\subset M_E^N$ and $\Sigma_v^N\subset\Sigma_E^N$. Therefore, the topology variations of the $\Sigma_v^N$ imply topology variations of the $\Sigma_E^N$, and these necessarily affect also the functional dependence on $E$ of the total entropy $S_N(E)$. In fact, the variation with $v$ of the topology of the $\Sigma_v^N$ is in one-to-one correspondence with some variation with $v$ of the Betti numbers $b_i(\Sigma_v^N)$ entering Eq. (37) [START_REF]The Betti numbers - as well as the Morse indexes - are integers, so that their sum, weighted or not, forms only staircase-like patterns which do not qualify as continuous and possibly differentiable functions. The technical details of the reason why the corners of these staircase-like patterns are rounded can be found in Section 9[END_REF], and this entails a variation with $E$ of the Betti numbers $b_i(\Sigma_E^N)$, so that, according to the following formula for the total entropy [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF]
\[
S_N(E) \;\approx\; \frac{k_B}{N}\log\Biggl[\mathrm{vol}(\mathbb{S}_1^{N-1})\sum_{i=0}^{N} b_i(\Sigma_E^N) + \mathcal{R}_1(E) + \mathcal{R}_2(E)\Biggr]\ , \qquad (39)
\]
where $\mathcal{R}_1(E)$ and $\mathcal{R}_2(E)$ are smooth functions, we see that the variation with $v$ of the topology of the $\Sigma_v^N$ also implies a variation with $E$ of the total entropy.

IV. CONCLUDING REMARKS

We have tackled the problem of characterizing a phase transition in the absence of global symmetry breaking from the point of view of Hamiltonian dynamics and of the related geometrical and topological aspects. In this condition the Landau classification of phase transitions does not apply, because no order parameter - commonly associated with a global symmetry - exists. The system chosen is inspired by the dual of the Ising model, with the discrete variables replaced by continuous ones. We stress that our work has nothing to do with the true dual Ising model, which has only suggested how to define a classical Hamiltonian system with a local (gauge) symmetry. Since the ergodic invariant measure for generically non-integrable Hamiltonian systems is the microcanonical measure in phase space, studying phase transitions through Hamiltonian dynamics is the same as studying them in the microcanonical ensemble.

A standard analysis has been performed to locate the phase transition and to determine its order through the shape of the caloric curve, $T = T(E)$, which appeared typical of a first-order phase transition. The presence of energy intervals of negative specific heat is indicative of ensemble nonequivalence. At variance with what has been systematically observed for systems undergoing symmetry-breaking phase transitions, the energy pattern of the largest Lyapunov exponent does not allow one to locate the transition point.

After the Yang-Lee theory, phase transitions are commonly associated with a loss of analyticity of a thermodynamic potential, entailing non-analytic patterns of thermodynamic observables (the pertinent potential depends on the statistical ensemble chosen). However, the caloric curve found for our gauge model is very regular, no bifurcating order parameter exists, and the peaks of the specific heat are just due to horizontal tangencies of the caloric curve. In other words, there is apparently no evidence of a genuinely non-analytic energy pattern of any observable. However, by looking directly at the energy pattern of the entropy, we have identified a point of discontinuity of its second derivative (or at least a finite-size version of such a discontinuity), hence an asymptotic divergence of the third derivative of the entropy. We have then discussed how this fits into a classification scheme à la Ehrenfest, adapted to the framework of the microcanonical ensemble, which allows one to determine the order of a phase transition on the basis of the differentiability class of the entropy in the thermodynamic limit. Remarkably, we have found a quantity, $\sigma_M(v)$, that - by measuring the total degree of inhomogeneity of the extrinsic curvature of the potential level sets $\Sigma_v = V^{-1}(v)$ in configuration space - identifies the phase transition point. This quantity is not a thermodynamic observable, it has a purely geometric meaning, and it displays a discontinuity of its second derivative in coincidence with the same kind of discontinuity displayed by the entropy.
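Operationally, both discontinuities are located by finite-differencing the curves obtained from the simulations (as in Figures 5-9). The sketch below illustrates the procedure on a synthetic entropy curve with a small second-derivative jump placed at $u_c = -1.32$ for definiteness; the functional form of $S(u)$ and the grid are assumptions made only to keep the example self-contained, the real input being the tabulated $S(u)$ from the molecular dynamics runs.

```python
import numpy as np

# Synthetic S(u) with a jump of d^2S/du^2 at u_c (stand-in for the measured entropy curve).
u_c = -1.32
u = np.linspace(-2.0, -0.8, 241)
S = 0.8 * u - 0.15 * (u - u_c) ** 2 * (u < u_c) - 0.05 * (u - u_c) ** 2 * (u >= u_c)

dS = np.gradient(S, u)      # first derivative, as in Fig. 5
d2S = np.gradient(dS, u)    # second derivative, as in Fig. 7

# The largest variation of d2S between neighbouring grid points is the finite-size
# signature of a discontinuity of the second derivative of the entropy.
k = int(np.argmax(np.abs(np.diff(d2S))))
print(f"estimated transition point: u = {u[k]:.3f}")
print(f"d2S/du2 below / above     : {d2S[k - 2]:.3f} / {d2S[k + 3]:.3f}")
```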
Rather than being a trivial consequence of the presence of the phase transition, the peculiar change of the geometry of the $\{\Sigma_v^N\}_{v\in\mathbb{R}}$ so detected is the deep cause of the singularity of the entropy. In fact, the potential level sets are simply subsets of $\mathbb{R}^N$ defined as $\Sigma_v^N = \{(q_1,\dots,q_N)\in\mathbb{R}^N\,|\,V(q_1,\dots,q_N) = v\}$, whose ensemble $\{\Sigma_v^N\}_{v\in\mathbb{R}}$ foliates the configuration space; the volume $\Omega(v,N)$ of each leaf $\Sigma_v^N$, and the way it varies as a function of $v$, is just a matter of the geometrical/topological properties of the leaves of the foliation. These properties entail the $v$-dependence of the entropy $S_N(v) = (1/N)\log\Omega(v,N)$ and, of course, its differentiability class. This is why the $v$-pattern of the quantity $\sigma_M(v)$ is not the consequence of the presence of a phase transition but, rather, the reason for its appearance. This is already a highly non-trivial fact, indicating that whether a physical system can undergo a phase transition is somehow already encoded in the interactions among its degrees of freedom described by the potential function $V(q_1,\dots,q_N)$, independently of the statistical ensemble chosen to describe its macroscopic observables. However, one can wonder whether it is possible to go deeper by looking for the origin of the peculiar changes with $v$ of the geometry of the $\Sigma_v^N$. Indeed, by resorting to a theorem in differential topology, and with some approximations, these geometrical changes appear to be due to changes of the topology of both the potential level sets in configuration space and the energy level sets in phase space. Therefore, the results of the present work lend further support to the topological theory of phase transitions.

Moreover, since the practical computation of $\sigma_M(v)$, or of $\sigma_M(E)$, is rather straightforward, it can be used to complement the study of transitional phenomena in the absence of symmetry breaking, as is the case for the liquid-gas change of state, Kosterlitz-Thouless transitions, glasses and supercooled liquids, amorphous and disordered systems, folding transitions in homopolymers and proteins, and both classical and quantum transitions in small-$N$ systems. With respect to the latter case, a remark about the topological theory is in order. In nature, phase transitions (that is, major qualitative physical changes) occur also in very small systems, with $N$ much smaller than the Avogadro number, but their mathematical description through the loss of analyticity of thermodynamic observables requires the asymptotic limit $N\to\infty$. On the contrary, within the topological framework a sharp difference between the presence and the absence of a phase transition can be made also at any finite, and even very small, $N$. At finite $N$, the microscopic states that significantly contribute to the statistical averages of thermodynamic observables are spread in regions of configuration space which get narrower as $N$ increases, so that the statistical measures concentrate better and better on a specific potential level set, thus better detecting its sudden and major topology changes, if any. Eventually, in the $N\to\infty$ limit, the extreme sharpening of the effective support of the measure leads to a topology-induced nonanalyticity of thermodynamic observables [START_REF] Pettini | Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics[END_REF]. Furthermore, even if somewhat abstract, the model studied in the present work has the basic properties of a lattice gauge model, that is, its potential depends on the circulations of the gauge field on the plaquettes, so that the geometrical/topological approach developed here could also be of some interest for the numerical investigation of phase transitions of Euclidean lattice gauge theories. In fact, computing $\sigma_M(v)$, or $\sigma_M(E)$, is definitely easier than computing the Wilson loop, commonly adopted in place of an order parameter for gauge theories.
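As a concrete illustration of this last point, the sketch below accumulates $\sigma_M$ on the fly along a numerically integrated trajectory. It is only a schematic template: a low-dimensional quartic potential replaces the gauge-model potential, and $M$ is taken as the divergence of the unit normal $\nabla V/\|\nabla V\|$ divided by $N$ (one common convention, up to sign; the normalization actually used in the paper is the one fixed by Eq. (24)).

```python
import numpy as np

N = 8                                    # toy number of degrees of freedom

def V(q):
    # toy quartic potential, standing in for the lattice gauge potential
    return np.sum((q ** 2 - 1.0) ** 2) / 4.0

def grad_V(q):
    return q ** 3 - q

def mean_curvature(q, h=1e-4):
    # finite-difference estimate of (1/N) * div( grad V / |grad V| ) at the point q
    div = 0.0
    for i in range(N):
        for s in (+1.0, -1.0):
            qs = q.copy()
            qs[i] += s * h
            g = grad_V(qs)
            div += s * (g[i] / np.linalg.norm(g)) / (2.0 * h)
    return div / N

rng = np.random.default_rng(1)
q, p = rng.standard_normal(N), rng.standard_normal(N)
energy_per_dof = (0.5 * np.dot(p, p) + V(q)) / N

dt, n_steps, samples = 0.01, 20000, []
for step in range(n_steps):              # velocity-Verlet integration of the toy Hamiltonian
    p -= 0.5 * dt * grad_V(q)
    q += dt * p
    p -= 0.5 * dt * grad_V(q)
    if step % 20 == 0:                   # subsample to reduce time correlations
        samples.append(mean_curvature(q))

M = np.array(samples)
sigma_M = N * M.var()                    # sigma_M = N [<M^2>_t - <M>_t^2]
print(f"E/N = {energy_per_dof:.3f}   sigma_M = {sigma_M:.4f}")
```

Repeating the run at different energies yields the curve $\sigma_M(E)$ with essentially no overhead beyond the force evaluations already needed for the dynamics.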
Actually, a few decades ago several papers on the microcanonical formulation of quantum field theories appeared [START_REF] Callaway | Lattice gauge theory in the microcanonical ensemble[END_REF][START_REF] Fukugita | Testing microcanonical simulation with SU(2) lattice gauge theory[END_REF], motivated by the fact that in statistical mechanics and in field theory there are systems for which the canonical description is pathological while the microcanonical one is not, and also arguing, for instance and among other things, that a microcanonical formulation of quantum gravity may be less pathological than the usual canonical formulation [START_REF] Strominger | Microcanonical Quantum Field Theory[END_REF][START_REF] Iwazaki | Microcanonical formulation of Quantum field theories[END_REF][START_REF] Morikawa | Supercooled states and order of phase transitions in microcanonical simulations[END_REF][START_REF] Duane | Stochastic quantization versus the microcanonical ensemble: getting the best of both worlds[END_REF]. More recent works on these topics can also be found [START_REF] Cirilo-Lombardo | Quantum field propagator for extended-objects in the microcanonical ensemble and the S-matrix formulation[END_REF][START_REF] Casadio | Microcanonical Description of (Micro) Black Holes[END_REF][START_REF] Sinatra | Genuine phase diffusion of a Bose-Einstein condensate in the microcanonical ensemble: A classical field study[END_REF][START_REF] Strauss | Quantum field theory of classically unstable Hamiltonian dynamics[END_REF]. Finally, as a side issue, an example has been provided here of statistical ensemble nonequivalence in a system with short-range interactions. Ensemble nonequivalence is another topic which is being given much attention in the recent literature [START_REF] Campa | Statistical mechanics and dynamics of solvable models with long-range interactions[END_REF].

APPENDIX

A. Relation between topological changes of the $\Sigma_v$ and of the $\Sigma_E$

Now, let us see why a topological change of the configuration space submanifolds $\Sigma_v = V^{-1}(v)$ (potential level sets) implies the same phenomenon for the $\Sigma_E$. The potential level sets are the basic objects, foliating configuration space, that represent the non-trivial topological part of phase space. The link of these geometric objects with the microcanonical entropy is given by
\[
S^{(-)}(E) \;=\; \frac{k_B}{2N}\log\int_0^E d\eta\int d^N p\ \delta\Bigl(\sum_i p_i^2/2 - \eta\Bigr)\int_{\Sigma_{E-\eta}}\frac{d\sigma}{\|\nabla V\|}\ . \qquad (40)
\]
As $N$ increases, the microscopic configurations giving a relevant contribution to the entropy, and to any microcanonical average, concentrate closer and closer on the level set $\Sigma_{E-\eta}$. A link between the topology of the energy level sets and the topology of configuration space can be established for systems described by a Hamiltonian of the form $H_N(p,q) = \sum_{i=1}^N p_i^2/2 + V_N(q_1,\dots,q_N)$. In fact (using a somewhat cumbersome notation for the sake of clarity), the level sets $\Sigma_E^{H_N}$ of the energy function $H_N$ can be given by the disjoint union of a trivial unit-sphere bundle (representing the phase space region where the kinetic energy does not vanish) and the hypersurface in configuration space where the potential energy takes the total energy value (details are given in [? ]):
\[
\Sigma_E^{H_N} \;\cong\; \Bigl(M_E^{V_N}\times\mathbb{S}^{N-1}\Bigr)\ \bigsqcup\ \Sigma_E^{V_N} \qquad (41)
\]
where "$\cong$" stands for "homeomorphic to", $\mathbb{S}^n$ is the $n$-dimensional unit sphere, and
\[
M_c^f = \{x\in\mathrm{Dom}(f)\ |\ f(x) < c\}\ , \qquad \Sigma_c^f = \{x\in\mathrm{Dom}(f)\ |\ f(x) = c\}\ . \qquad (42)
\]
The idea that the finite-$N$ topology, as well as the "asymptotic topology", of $\Sigma_E^{H_N}$ is affected by the topology of the accessible region of configuration space is suggested by the Künneth formula: if $H_k(X)$ is the $k$-th homology group of the topological space $X$ over the field $\mathbb{F}$, then
\[
H_k(X\times Y;\mathbb{F}) \;\cong\; \bigoplus_{i+j=k} H_i(X;\mathbb{F})\otimes H_j(Y;\mathbb{F})\ . \qquad (43)
\]
Moreover, as $H_k\bigl(\bigsqcup_{i=1}^N X_i;\mathbb{F}\bigr) = \bigoplus_{i=1}^N H_k(X_i;\mathbb{F})$, it follows that
\[
\begin{aligned}
H_k\bigl(\Sigma_E^{H_N};\mathbb{R}\bigr) \;&\cong\; \Bigl[\bigoplus_{i+j=k} H_i\bigl(M_E^{V_N};\mathbb{R}\bigr)\otimes H_j\bigl(\mathbb{S}^{N-1};\mathbb{R}\bigr)\Bigr]\oplus H_k\bigl(\Sigma_E^{V_N};\mathbb{R}\bigr)\\
&\cong\; \Bigl[H_{k-(N-1)}\bigl(M_E^{V_N};\mathbb{R}\bigr)\otimes\mathbb{R}\Bigr]\oplus\Bigl[H_k\bigl(M_E^{V_N};\mathbb{R}\bigr)\otimes\mathbb{R}\Bigr]\oplus H_k\bigl(\Sigma_E^{V_N};\mathbb{R}\bigr)\ ,
\end{aligned} \qquad (44)
\]
and the r.h.s. of Eq. (44) shows that the topological changes of $\Sigma_E^{H_N}$ only stem from topological changes in configuration space.
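The Betti-number bookkeeping implied by Eq. (44) is elementary and can be scripted directly: over $\mathbb{R}$, ranks add up, and $b_j(\mathbb{S}^{N-1}) = 1$ for $j = 0, N-1$ and $0$ otherwise. The following sketch computes $b_k(\Sigma_E^{H_N})$ from assumed Betti profiles of $M_E^{V_N}$ and $\Sigma_E^{V_N}$; the input profiles are purely hypothetical and serve only to show that any change in the configuration-space homology propagates, term by term, to the energy level sets.

```python
def betti_energy_level_set(betti_M, betti_Sigma_v, N):
    """b_k(Sigma_E^{H_N}) from Eq. (44):
    b_k = b_k(M_E^{V_N}) + b_{k-(N-1)}(M_E^{V_N}) + b_k(Sigma_E^{V_N})."""
    def b(profile, k):
        return profile[k] if 0 <= k < len(profile) else 0
    top = max((N - 1) + len(betti_M) - 1, len(betti_Sigma_v) - 1)
    return [b(betti_M, k) + b(betti_M, k - (N - 1)) + b(betti_Sigma_v, k)
            for k in range(top + 1)]

# Hypothetical example with N = 4: M_E^{V_N} carrying the homology of a 2-torus
# (b = 1, 2, 1) and Sigma_E^{V_N} carrying the homology of a 3-sphere.
print(betti_energy_level_set(betti_M=[1, 2, 1], betti_Sigma_v=[1, 0, 0, 1], N=4))
```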
Figure 1. (Color online) Caloric curve. The temperature is computed according to Eq. (11). Lattice dimensions: n³ = 6 × 6 × 6 (rhombs), n³ = 8 × 8 × 8 (squares), n³ = 10 × 10 × 10 (circles). The dashed lines identify the point of flat tangency at lower energy.

Figure 2. (Color online) Constant-volume specific heat computed by means of Eq. (13). Lattice dimensions: n³ = 6 × 6 × 6 (rhombs), n³ = 8 × 8 × 8 (squares), n³ = 10 × 10 × 10 (circles).

The average potential energy per lattice site u = V/N is displayed as a function of the total energy density. Also in this case we observe a regular function which is stable with the number of degrees of freedom. The dashed lines identify the phase transition point, which corresponds to E_c/N ≃ -0.40 and u_c/N ≃ -1.32.

Figure 3. (Color online) Internal potential energy density computed through Eq. (8), where the observable A is the potential function per degree of freedom of the system. Lattice dimensions: n³ = 6 × 6 × 6 (rhombs), n³ = 8 × 8 × 8 (squares), n³ = 10 × 10 × 10 (circles).

Figure 4. Largest Lyapunov exponent versus the energy per degree of freedom. Lattice dimensions: n³ = 6 × 6 × 6 (rhombs), n³ = 8 × 8 × 8 (squares), n³ = 10 × 10 × 10 (circles). The dashed vertical line indicates the phase transition point.

Figure 5. (Color online) First derivative ∂S/∂u of the configurational entropy versus the average potential energy per degree of freedom u. Lattice dimensions: n³ = 6 × 6 × 6 (red full circles), n³ = 8 × 8 × 8 (green full circles), n³ = 10 × 10 × 10 (blue full circles), n³ = 14 × 14 × 14 (black full circles). The vertical dashed line locates the phase transition point.

Figure 6. (Color online) Zoom on the first derivative ∂S/∂u of the configurational entropy versus the average potential energy per degree of freedom u. Lattice dimensions: n³ = 6 × 6 × 6 (red full circles), n³ = 8 × 8 × 8 (green full circles), n³ = 10 × 10 × 10 (blue full circles), n³ = 14 × 14 × 14 (black full circles). The vertical dashed line locates the phase transition point.

Figure 7. (Color online) Second derivative ∂²S/∂u² of the configurational entropy versus the average potential energy per degree of freedom u. Lattice dimensions: n³ = 6 × 6 × 6 (thin solid line with small triangles), n³ = 8 × 8 × 8 (dot-dashed line), n³ = 10 × 10 × 10 (dashed line), n³ = 14 × 14 × 14 (thick solid line).

Figure 8. (Color online) Second moment of the total mean curvature of the potential level sets Σ_u versus the energy density E/N. Lattice dimensions: n³ = 6 × 6 × 6 (rhombs), n³ = 8 × 8 × 8 (squares), n³ = 10 × 10 × 10 (circles). The oblique dashed line is a guide to the eye. The vertical dashed line corresponds to the point where the second derivative d²σ_M/dE² jumps from a negative value to zero.

Figure 9. (Color online) Second moment of the total mean curvature of the potential level sets Σ_u versus the average potential energy per degree of freedom u. Lattice dimensions: n³ = 6 × 6 × 6 (rhombs), n³ = 8 × 8 × 8 (squares), n³ = 10 × 10 × 10 (circles). The oblique dashed line is a guide to the eye. The vertical dashed line corresponds to the point where the second derivative d²σ_M/du² jumps from a negative value to zero.